SlowGuess committed on
Commit 1f54b02 · verified · 1 Parent(s): e426aa5

Add Batch 87b75760-5fd9-447c-bf6e-cfad2ed2c1a1

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/1295a4f1-c7ce-439d-bda2-7381ed440c80_content_list.json +3 -0
  2. afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/1295a4f1-c7ce-439d-bda2-7381ed440c80_model.json +3 -0
  3. afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/1295a4f1-c7ce-439d-bda2-7381ed440c80_origin.pdf +3 -0
  4. afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/full.md +467 -0
  5. afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/images.zip +3 -0
  6. afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/layout.json +3 -0
  7. agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/f2937cd3-06b1-4c49-8aaf-ae705332900f_content_list.json +3 -0
  8. agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/f2937cd3-06b1-4c49-8aaf-ae705332900f_model.json +3 -0
  9. agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/f2937cd3-06b1-4c49-8aaf-ae705332900f_origin.pdf +3 -0
  10. agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/full.md +690 -0
  11. agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/images.zip +3 -0
  12. agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/layout.json +3 -0
  13. agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/5c881628-64b0-4316-a515-f7b2cc56875d_content_list.json +3 -0
  14. agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/5c881628-64b0-4316-a515-f7b2cc56875d_model.json +3 -0
  15. agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/5c881628-64b0-4316-a515-f7b2cc56875d_origin.pdf +3 -0
  16. agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/full.md +895 -0
  17. agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/images.zip +3 -0
  18. agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/layout.json +3 -0
  19. aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/3f03da78-6d2f-40af-bfc2-3d46976aa364_content_list.json +3 -0
  20. aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/3f03da78-6d2f-40af-bfc2-3d46976aa364_model.json +3 -0
  21. aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/3f03da78-6d2f-40af-bfc2-3d46976aa364_origin.pdf +3 -0
  22. aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/full.md +575 -0
  23. aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/images.zip +3 -0
  24. aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/layout.json +3 -0
  25. algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/e99dadff-e646-41a3-bd3b-f07620a5f205_content_list.json +3 -0
  26. algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/e99dadff-e646-41a3-bd3b-f07620a5f205_model.json +3 -0
  27. algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/e99dadff-e646-41a3-bd3b-f07620a5f205_origin.pdf +3 -0
  28. algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/full.md +441 -0
  29. algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/images.zip +3 -0
  30. algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/layout.json +3 -0
  31. aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/953b6389-7a3c-46d4-8089-0acfdc5cb164_content_list.json +3 -0
  32. aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/953b6389-7a3c-46d4-8089-0acfdc5cb164_model.json +3 -0
  33. aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/953b6389-7a3c-46d4-8089-0acfdc5cb164_origin.pdf +3 -0
  34. aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/full.md +450 -0
  35. aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/images.zip +3 -0
  36. aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/layout.json +3 -0
  37. amphistabidirectionalmultiheaddecodingforacceleratingllminference/c43b5d34-1d0e-41f4-b5d7-782ab331bb8c_content_list.json +3 -0
  38. amphistabidirectionalmultiheaddecodingforacceleratingllminference/c43b5d34-1d0e-41f4-b5d7-782ab331bb8c_model.json +3 -0
  39. amphistabidirectionalmultiheaddecodingforacceleratingllminference/c43b5d34-1d0e-41f4-b5d7-782ab331bb8c_origin.pdf +3 -0
  40. amphistabidirectionalmultiheaddecodingforacceleratingllminference/full.md +358 -0
  41. amphistabidirectionalmultiheaddecodingforacceleratingllminference/images.zip +3 -0
  42. amphistabidirectionalmultiheaddecodingforacceleratingllminference/layout.json +3 -0
  43. analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/19b26c58-d911-416d-bc63-d19c951c0f10_content_list.json +3 -0
  44. analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/19b26c58-d911-416d-bc63-d19c951c0f10_model.json +3 -0
  45. analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/19b26c58-d911-416d-bc63-d19c951c0f10_origin.pdf +3 -0
  46. analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/full.md +574 -0
  47. analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/images.zip +3 -0
  48. analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/layout.json +3 -0
  49. analyzingandimprovingcoherenceoflargelanguagemodelsinquestionanswering/67398457-5763-4b62-8745-979ad4f8746f_content_list.json +3 -0
  50. analyzingandimprovingcoherenceoflargelanguagemodelsinquestionanswering/67398457-5763-4b62-8745-979ad4f8746f_model.json +3 -0
afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/1295a4f1-c7ce-439d-bda2-7381ed440c80_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:968726bf0defcb431a7cfd32cda8bea03ee8ddf17b883b638836e77c39d2829d
3
+ size 111666
afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/1295a4f1-c7ce-439d-bda2-7381ed440c80_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e3e3c2120ba3035d2ec855927796d5bc87f0af018afe734d6e9c99dd9961c4a
3
+ size 139396
afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/1295a4f1-c7ce-439d-bda2-7381ed440c80_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cf7610dd6bd3ad4745a2326306446554b7b4d4cb9346e773cc6e667aab35b9bc
3
+ size 558480
afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/full.md ADDED
@@ -0,0 +1,467 @@
1
+ # Afrispeech-Dialog: A Benchmark Dataset for Spontaneous English Conversations in Healthcare and Beyond
2
+
3
+ Mardhiyah Sanni $^{1,2}$ , Tassallah Abdullahi $^{2,3}$ , Devendra D. Kayande $^{1,2,4}$ , Emmanuel Ayodele $^{1}$ , Naome A. Etori $^{2,5}$ , Michael S. Mollel $^{2,6}$ , Moshood Yekini $^{2}$ , Chibuzor Okocha $^{2,7}$ , Lukman E. Ismaila $^{2,8}$ , Fofafunmi Omofoye $^{2,9}$ , Boluwatife A. Adewale $^{2}$ , Tobi Olatunji $^{1,2,10}$
4
+
5
+ $^{1}$ Intron, $^{2}$ BioRAMP, $^{3}$ Brown University
6
+
7
+ $^{4}$ Indian Institute of Information Technology Allahabad, $^{5}$ University of Minnesota-Twin Cities, $^{6}$ University of Glasgow, $^{7}$ University of Florida, $^{8}$ Johns Hopkins University, $^{9}$ University of North Carolina at Chapel Hill, $^{10}$ Georgia Institute of Technology, tobi@intron.io
8
+
9
+ # Abstract
10
+
11
+ Speech technologies are transforming interactions across various sectors, from healthcare to call centers and robots, yet their performance on African-accented conversations remains underexplored. We introduce Afrispeech-Dialog, a benchmark dataset of 50 simulated medical and non-medical African-accented English conversations, designed to evaluate automatic speech recognition (ASR) and related technologies. We assess state-of-the-art (SOTA) speaker diarization and ASR systems on longform, accented speech, comparing their performance with native accents and discover a $10\%+$ performance degradation. Additionally, we explore medical conversation summarization capabilities of large language models (LLMs) to demonstrate the impact of ASR errors on downstream medical summaries, providing insights into the challenges and opportunities for speech technologies in the Global South. Our work highlights the need for more inclusive datasets to advance conversational AI in low-resource settings.
12
+
13
+ # 1 Introduction
14
+
15
+ Deep learning approaches have revolutionized natural language processing, yielding significant performance gains across several tasks (Bharadiya, 2023), especially for high-resource languages (Radford et al., 2023). While conversational speech recognition has continued to make significant strides in task automation in different domains such as medical (Biswas and Talukdar, 2024; Abacha et al., 2023; Yim et al., 2023), voice assistants (Pasandi and Pasandi, 2022; Mani et al., 2020), call centers (Plaza et al., 2021), and robotics (Skantze, 2021), much of the research in Automatic Speech Recognition (ASR) has focused on monolingual speech with native accents (Aksenova et al., 2022), with considerable performance gaps in diverse linguistic, low-resource, and accented contexts (Radford et al., 2023; Olatunji et al., 2023a).
18
+
19
+ In Anglophone African contexts, regional accents further complicate the development of ASR systems, resulting in poor generalization from state-of-the-art (SOTA) models (Hinsvark et al., 2021) in both general (Adelani et al., 2022; Olatunji et al., 2023a; Ogun et al., 2024; Tonja et al., 2024) and medical (Afonja et al., 2024; Olatunji et al., 2023b) contexts. In more developed countries, performant speech recognition models are particularly useful in overburdened healthcare systems, where they can help to reduce the documentation workload for overwhelmed clinicians (Afonja et al., 2024; Olatunji et al., 2023b).
20
+
21
+ Recent advancements in medical conversation transcription and summarization using LLMs have led to wide adoption of these technologies in hospitals in developed countries (Michalopoulos et al., 2022; Yim et al., 2023) helping to improve clinical documentation quality and productivity (Galloway et al., 2024). Given the heavy patient loads (Baye et al., 2020; Etori et al., 2023) as well as the recent integration of AI/NLP/ASR systems into clinical workflows in African clinics $^{12}$ , performant ambient transcription and summarization systems are highly desired. However, the generalizability of such systems to African contexts remains underexplored (Afonja et al., 2024).
22
+
23
+ To tackle this problem, our contributions are as follows:
26
+
27
+ 1. AfriSpeech-Dialog, a novel 7-hour dataset of 50 long-form simulated African-accented English medical and non-medical conversations
28
+ 2. A benchmark of SOTA speaker diarization, speech recognition, and LLM-based summarization on African-accented conversations
29
+
30
+ We lay the groundwork for more inclusive ASR and NLP technologies in healthcare and beyond.
31
+
32
+ # 2 Related Work
33
+
34
+ # 2.1 ASR in Medical Conversations and Summarization
35
+
36
+ The role of ASR in medical documentation has grown significantly, particularly in telehealth and in-person patient-physician consultations (Korfiatis et al., 2022; Galloway et al., 2024; Michalopoulos et al., 2022; Yim et al., 2023). Accurate ASR in medical dialogue is critical, as transcription errors can lead to incorrect medical records. Several datasets have been developed to facilitate the study of ASR in medical contexts. PriMock57 (Korfiatis et al., 2022) provides primary care mock consultations in European English accents; Fareez et al. (2022) evaluates medical ASR on simulated patient-physician medical interviews with a focus on respiratory cases; Enarvi et al. (2020) experiment with automatically generating medical reports from diarized doctor-patient surgery conversations. Le-Duc et al. (2024) proposed a real-time speech summarization system for medical conversations that can generate summaries for every utterance (local) and at the end of the conversation (global), eliminating the need for continuous updating and revision of the summary state.
37
+
38
+ These datasets primarily focus on non-African accents and therefore do not account for the challenges specific to African-accented medical speech.
39
+
40
+ # 2.2 Non-medical conversational ASR
41
+
42
+ Pezik et al. (2022) released DiaBiz, an annotated corpus of over 400 hours of Polish call center dialogs. Other conversational, parliamentary, or oratory datasets like AMI (Carletta et al., 2005), Earnings22 (Del Rio et al., 2022), and Voxpopuli (Wang et al., 2021) have gained popularity on public ASR benchmarks. Conversational ASR has also been explored in other domains such as call centers (Plaza et al., 2021) and robotics (Skantze, 2021). However, these datasets lack representation of African-accented speech.
45
+
46
+ # 2.3 African Accented ASR
47
+
48
+ There has been growing interest in developing ASR systems that cater to African languages; for example, Yilmaz et al. (2018) developed a multilingual ASR system for code-switched South African speech. Multilingual ASR resources, such as the EVI dataset (Spithourakis et al., 2022), offer a strong foundation for developing similar models in African contexts where data scarcity hinders progress. Olatunji et al. (2023b) released a pan-African accented English dataset for medical and general ASR. While these datasets focus on single-speaker speech recognition, AfriSpeech-Dialog is the first African-accented English conversational dataset spanning medical and non-medical domains, enabling additional tasks like diarization and summarization.
49
+
50
+ # 2.4 Speaker Diarization in Multi-Speaker Conversations
51
+
52
+ To increase the efficiency of NLP/ASR systems, substantial research effort has gone into integrating speaker diarization (SD) into their pipelines. Serikov et al. (2024) provide a comparative analysis of SD models - Pyannote (Bredin et al., 2020), CLEVER $^3$ , and NVIDIA NeMo (Harper et al., 2019) - on 20 different German dialects for the diarization and identification (DI) task. NVIDIA NeMo performs slightly better, with competitive performance attributed to its multiscale segmentation for identifying and removing shorter segments. On a similar DI task, Chua et al. (2023) also benchmarked the performance of multilingual ASR models in open and closed tracks on the challenging MERLIon CCS English-Mandarin dataset of spontaneous, code-switched parent-child conversational speech. However, SD for African-accented conversations remains underexplored. Benchmarking SOTA SD models on AfriSpeech-Dialog reveals their limitations in this setting.
53
+
54
+ # 3 Methodology
55
+
56
+ Figure 1 shows an overview of the dataset creation and benchmarking process, illustrating how AfriSpeech-Dialog supports tasks like speaker diarization, ASR, and summarization. Below, we
57
+
58
+ ![](images/8c62943f70ac2139b1d3dbb3b3352862dfac096f86cfdf59b3740dbbb5e1afa8.jpg)
59
+ Figure 1: AfriSpeech Dialog: Dataset and Benchmarking Pipeline
60
+
61
+ describe the dataset creation process and the evaluation of state-of-the-art (SOTA) models on these tasks to demonstrate their applicability and highlight challenges in African-accented conversational speech.
62
+
63
+ # 3.1 Dataset Statistics
64
+
65
+ Here, we outline our dataset creation process.
66
+
67
+ <table><tr><td></td><td>Medical</td><td>General</td></tr><tr><td>Counts</td><td>20</td><td>29</td></tr><tr><td>Timestamped Counts</td><td>9</td><td>21</td></tr><tr><td>Avg. Num. of Turns</td><td>78.6</td><td>30.55</td></tr><tr><td>Total Duration (hours)</td><td>2.07</td><td>4.93</td></tr><tr><td>Avg. Word Count</td><td>725.3</td><td>1356.83</td></tr><tr><td>Num. of Countries</td><td>1</td><td>3</td></tr><tr><td>Num. of Accents</td><td>6</td><td>8</td></tr><tr><td>Gender (M,F)</td><td>(14, 26)</td><td>(25, 33)</td></tr></table>
68
+
69
+ Table 1: Statistics of the medical and non-medical datasets.
70
+
71
+ # 3.1.1 Collecting Conversations
72
+
73
+ We recorded simulated virtual and in-person medical and non-medical conversations from African medical and non-medical crowdworkers on the Intron Platform $^{4}$ , similar to the process described in Olatunji et al. (2023b). Each conversation began with speakers providing consent, and any identifiable information in the consent segment was removed.
74
+
75
+ For medical conversations, following the process described in Fareez et al. (2022) and Korfiatis et al. (2022), clinical experts prepared "patient cards" with African-context disease conditions and demographics. Doctor and Patient actors included medical professionals (e.g. doctors, nurses) familiar with Objective Structured Clinical Examinations (OSCE), a widely used assessment in medical education that simulates doctor-patient interactions (Fareez et al., 2022). Each patient actor was provided with a detailed "patient card" that included information on their condition, demographics, and medical history, as shown in Table 2. Consistent with OSCE format, patient cards were hidden from doctor actors to facilitate a more natural consultation.
78
+
79
+ For general domain conversations, participants engaged in open discussions based on "topic cards" prepared by a team of reviewers. Each card contained a conversation topic, a brief description, and two discussion prompts to guide the conversation. The pair of participants (actors) had prior access to the cards and were advised to read through and understand them before starting the conversation. Table 3 shows a sample topic card.
80
+
81
+ The conversation recordings were stored as mono-channel, 16-bit wav files, with a $48\mathrm{kHz}$ sampling rate. A team of clinician reviewers reviewed the conversations and selected a high quality subset for this release. The dataset was collected across three African countries—Nigeria, Kenya, and South Africa. The speakers represent a diverse range of accents (11 in total): Hausa, Isoko, Idoma, Urhobo, Ijaw, Yoruba, Swahili, Sesotho, Igbo, Igala, and Ebira.
82
+
83
+ # 3.1.2 Recording Characteristics
84
+
85
+ Dataset statistics are summarized in Table 1. The dataset features two speakers in each conversation
86
+
87
+ <table><tr><td>Condition</td><td>Malaria</td></tr><tr><td>Demographic (Age, Gender)</td><td>32-year-old Female</td></tr><tr><td>Presenting Complaint</td><td>Fever and chills (2 days)</td></tr><tr><td rowspan="7">History of Presenting Complaint</td><td></td></tr><tr><td>·Fever for 2 days (High grade, not relieved by medication)</td></tr><tr><td>·Chills for 2 days (Intermittent, severe)</td></tr><tr><td>·Headache for 2 days (Generalized, throbbing, 7/10 in severity)</td></tr><tr><td>·Fatigue and general body weakness</td></tr><tr><td>·No cough, diarrhea, vomiting, or urinary symptoms</td></tr><tr><td>·Patient lives in malaria-endemic area; no recent travel history</td></tr><tr><td>Past Medical History (PMH)</td><td>No chronic disease or surgery</td></tr><tr><td>Family History</td><td>No family history of similar illness</td></tr><tr><td>Social History</td><td>Does not drink alcohol or smoke</td></tr><tr><td>Allergy History</td><td>No known allergies</td></tr></table>
88
+
89
+ Table 2: Example Patient Card for Medical Conversations.
90
+
91
+ <table><tr><td>Topic: Cyberbullying</td></tr><tr><td>Overview: Cyberbullying is a form of bullying that takes place online or through electronic communication. It involves using technology (social media, text messages, online forums) to intimidate or humiliate someone. Examples include:• Spreading rumors• Sending hurtful messages• Sharing embarrassing information without consent</td></tr><tr><td>Discussion Prompts:• What steps do you take to protect yourself from cyberbullying?• Do you think social media has effective policies in place, or could they improve?</td></tr></table>
92
+
93
+ Table 3: Sample Conversation Card for General Conversations.
94
+
95
+ in both the medical and general domains. Medical conversations were more structured, with doctors asking direct questions and patients responding, resulting in a more formal exchange. General conversations were more relaxed, with spontaneous discussions on various topics. Overlapping speech occurs occasionally but is usually brief, involving short interjections like "yes" or "okay." Disfluencies and some code-switching reflect the natural flow of African English speakers. General conversations have fewer speaker turns but a higher average word count compared to medical ones, as speakers tend to talk longer to express their thoughts. These characteristics make the dataset valuable for testing speaker diarization and transcription models on African-accented speech.
96
+
97
+ # 3.1.3 Transcription Process
98
+
99
+ All conversations were manually transcribed by five professional annotators selected from top-performing contributors on the Intron Platform. Annotators were instructed to annotate speaker turns and insert timestamps for each turn, with all annotators required to be familiar with medical terminology. To ensure quality, clinician reviewers evaluated a random $20\%$ of each annotator's work for accuracy, with at least a $90\%$ correctness threshold required for inclusion. Contributors and annotators were paid \$10-\$20 per hour depending on task complexity and clinical experience. The dataset is released under a CC-BY-NC-SA 4.0 License.
100
+
101
+ # 3.2 Speaker Diarization
102
+
103
+ To benchmark diarization performance on this dataset, we selected three recent high-performing neural diarization models:
104
+
105
+ - Pyannote (Bredin, 2023): This model leverages a pre-trained neural network that computes x-vector embeddings for speaker diarization. It uses an ECAPA-TDNN (Desplanques et al., 2020) architecture for speaker embeddings, shown to improve speaker separation in diarization tasks.
106
+ - Reverb diarization v2 (Bhandari et al., 2024): This model is an extension of Pyannote 3.0, fine-tuned on 26,000 hours of annotated speech. However, this model uses WavLM (Chen et al., 2022) instead of the SincNet (Ravanelli and Bengio, 2018) features in the Pyannote 3.0 basic model.
107
+ - Titanet: This Nvidia diarization pipeline uses MarbleNet (Nguyen-Vo et al., 2022) for voice activity detection and Titanet-L (Koluguri et al., 2022) for embedding extraction. Titanet uses 1D depth-wise separable convolutions with Squeeze-and-Excitation (SE) layers with global context, followed by a channel attention-based statistics pooling layer to map variable-length utterances to fixed-length embeddings (t-vectors) (Koluguri et al., 2022).
108
+
109
+ # 3.2.1 Evaluation Metrics
110
+
111
+ We evaluated diarization performance using the Diarization Error Rate (DER) (Doddington et al., 2000). DER quantifies the percentage of time that speakers are misattributed or missed in the diarization output.
112
+
113
+ We used the pyannote.metrics library to calculate DER for each recording and computed the absolute DER for the entire dataset. The optimal mapping between the reference and hypothesis labels was obtained using the Hungarian algorithm (Kuhn, 1955), ensuring an accurate alignment. We report DER on medical and general domain conversations.
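For illustration, a minimal sketch of how DER can be computed with pyannote.metrics under the settings described in Section 4.1 (collar of 0.0 and overlapping speech included); the toy reference and hypothesis segments below are our own example, not the authors' evaluation code.

```python
from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

# Toy ground-truth speaker turns (labels are arbitrary strings).
reference = Annotation()
reference[Segment(0.0, 12.5)] = "doctor"
reference[Segment(12.5, 20.0)] = "patient"

# Toy diarization output to score against the reference.
hypothesis = Annotation()
hypothesis[Segment(0.0, 11.0)] = "spk_0"
hypothesis[Segment(11.0, 20.0)] = "spk_1"

# collar=0.0 -> no tolerance around speaker transitions;
# skip_overlap=False -> overlapping speech is also evaluated.
metric = DiarizationErrorRate(collar=0.0, skip_overlap=False)
der = metric(reference, hypothesis)  # optimal label mapping is handled internally
print(f"DER: {der:.2%}")

# Accumulating the metric over every recording and taking abs(metric)
# yields the dataset-level (absolute) DER described above.
```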
114
+
115
+ # 3.3 Automatic Speech Recognition (ASR)
116
+
117
+ To benchmark ASR performance, we compare SOTA open-source pretrained models: Whisper (Radford et al., 2023), Distil-Whisper (Gandhi et al., 2023), Nvidia Parakeet (Harper et al., 2019), Canary (Puvvada et al., 2024), MMS (Pratap et al., 2024) and Wav2vec2 (Baevski et al., 2020).
120
+
121
+ Open-source models offer transparency and reproducibility. They are often trained on diverse, real-world speech data and provide competitive baselines for dialog-specific challenges such as handling speaker variation, spontaneous speech, and overlapping speakers.
122
+
123
+ # 3.3.1 Preprocessing
124
+
125
+ The original transcripts contained timestamps and speaker tags. We removed these items from the text as they are unnecessary for the ASR task. Long-form audio recordings exceed the context length of most ASR models; they were therefore chunked into 30-second segments for inference and the returned transcript segments were concatenated.
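As a rough illustration (not the authors' code), this kind of chunked long-form inference can be approximated with the Hugging Face transformers ASR pipeline, which splits the audio into fixed-length chunks and stitches the transcripts back together; the model name and chunk length follow the setup described above, while the audio path is hypothetical.

```python
from transformers import pipeline

# whisper-large-v3 was the strongest ASR model in this benchmark (Table 5).
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,  # 30-second segments, as in the preprocessing step
    device=0,           # e.g., a single NVIDIA T4 GPU; use -1 for CPU
)

# The pipeline transcribes each chunk and concatenates the text segments.
result = asr("conversations/medical_0001.wav")  # hypothetical path to a long-form recording
print(result["text"])
```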
126
+
127
+ # 3.3.2 Evaluation Metrics
128
+
129
+ ASR performance was evaluated using Word Error Rate (WER). WER measures the total number of insertions, deletions, and substitutions in the predicted text with respect to the total number of words in the reference text.
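In symbols, with $S$, $D$, and $I$ the substitution, deletion, and insertion counts and $N$ the number of words in the reference, this is the standard definition:

$$\mathrm{WER} = \frac{S + D + I}{N}$$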
130
+
131
+ # 3.4 Medical Conversation Summarization
132
+
133
+ Nine LLMs (large, small, open, closed, general, and biomedical) were benchmarked for summarizing doctor-patient dialogues. Each LLM was presented with a human conversation transcript and prompted (Appendix section D) to generate a detailed summary.
134
+
135
+ Closed-source general LLMs: OpenAI (GPT-4o, GPT-3.5-turbo) (Achiam et al., 2023) and Anthropic Claude-3-Sonnet (Anthropic, 2023) represent leading general-purpose commercial LLMs.
136
+
137
+ Open-source small general LLMs: Meta-Llama-3.1-8B-Instruct, Meta-Llama-3.2-3B-Instruct (Dubey et al., 2024), Microsoft-Phi-3-mini-4k-instruct (Abdin et al., 2024), and Google-Gemma-2-9b-it (Team et al., 2024) selected for their instruction-following abilities or multilingual support, which is essential for code-switching.
138
+
139
+ Biomedical open-source LLMs: m42-health-Llama3-Med42-8B (Christophe et al., 2024) and johnsnowlabs-JSL-MedLlama-3-8B-v2.0, selected for their biomedical adaptation. Examples of generated summaries are provided in Appendix section E.
140
+
141
+ # 3.4.1 Quantitative Evaluation
142
+
143
+ We used BERTScore (Zhang et al., 2019) to evaluate the quality of the LLM-generated summaries against the expert-generated reference summaries. Although BERTScore is widely used, studies like Hanna and Bojar (2021) have shown its limitations, particularly in capturing fine-grained semantic nuances and penalizing stylistic differences.
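For reference, a minimal sketch of how BERTScore can be computed with the Hugging Face evaluate library; this is our own illustrative call with made-up example strings, not necessarily the exact tooling used by the authors.

```python
import evaluate

bertscore = evaluate.load("bertscore")

# Hypothetical candidate (LLM) and reference (expert) summaries.
candidate_summaries = ["The patient reports a two-day history of fever, chills and headache."]
reference_summaries = ["32-year-old female with fever, chills and generalized headache for two days."]

# Returns per-example precision, recall and F1; the paper reports the F1 variant.
scores = bertscore.compute(
    predictions=candidate_summaries,
    references=reference_summaries,
    lang="en",
)
print(sum(scores["f1"]) / len(scores["f1"]))
```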
144
+
145
+ # 3.4.2 Qualitative Evaluation
146
+
147
+ To address the limitations of BERTScore, we complement it with two qualitative evaluation approaches (Human and LLM-as-Judge), where summaries were evaluated on a 5-point scale adapted from (Zheng et al., 2023; Liu et al., 2023), from 1 (worst) to 5 (best), on the following six criteria: recall of the diagnosis, accuracy of the treatment plan, avoidance of false or fabricated information, clarity and structure, and inclusion of important positive and negative clinical details. If any criterion (e.g., treatment plan) was absent from the conversation (transcript), a score of 0 was to be assigned for that criterion. Detailed evaluation criteria are in the Appendix Section A.
148
+
149
+ LLM-as-Judge Consistent with the growing trend in recent studies (Zheng et al., 2023; Liu et al., 2023), we used generative models for automated summary evaluation. We used OpenAI's "o1" model (Temsah et al., 2024), prompted with the criteria mentioned above. Detailed verbatim prompts and the method for computing accuracy scores are provided in Appendix Sections B and C.
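As a loose sketch of such an LLM-as-Judge call (assuming the official openai Python client; the judge instructions below only paraphrase the criteria listed above, since the verbatim prompt lives in the appendix):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder transcript and summary; in practice these come from the dataset.
transcript = "Doctor: What brings you in today? Patient: I've had fever and chills for two days..."
summary = "32-year-old female presenting with a two-day history of fever, chills and headache..."

judge_instructions = (
    "You are a clinical expert. Rate the summary against the transcript on each "
    "criterion (positive findings, negative findings, diagnosis recall, treatment "
    "plan, absence of hallucination, conciseness) from 1 (worst) to 5 (best); "
    "assign 0 if a criterion is absent from the transcript."
)

response = client.chat.completions.create(
    model="o1",
    messages=[{
        "role": "user",
        "content": f"{judge_instructions}\n\nTRANSCRIPT:\n{transcript}\n\nSUMMARY:\n{summary}",
    }],
)
print(response.choices[0].message.content)
```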
150
+
151
+ Human Evaluation In a blind study, we randomly present pairs of human transcripts and LLM or human-generated summaries to a team of 4 clinical experts. The experts compared the information available from summaries to conversation transcripts using the 6 criteria listed above (Kanithi et al., 2024; Singhal et al., 2023; Wang et al., 2023). Each summary was independently rated by 2 experts.
152
+
153
+ # 3.4.3 Error Propagation on Cascading Models
154
+
155
+ Since real-world conversation summarization systems rely on imperfect ASR transcripts and accented medical ASR transcription is challenging for several ASR systems (Afonja et al., 2024), we further evaluated summaries generated based on predicted (machine) transcripts to determine if there was a drop in quality when compared with summaries generated based on human transcripts (Giuseppe et al., 2021). We measured summary quality using LLM-as-Judge.
158
+
159
+ # 4 Experiments
160
+
161
+ # 4.1 Diarization
162
+
163
+ We download and run inference using publicly released checkpoints from Hugging Face (Wolf et al., 2020), with default hyperparameters. We set the collar to 0.0, meaning no margin was allowed around speaker transitions, ensuring that even short overlaps (e.g., "yes" or "okay") were evaluated directly without any tolerance. Overlapping speech was also not excluded from the evaluation.
164
+
165
+ We ran inference on a single Nvidia T4 GPU. Inference for Pyannote and Reverb took approximately 2 hours while the Titanet took about 30 minutes. Results represent single runs.
166
+
167
+ # 4.2 ASR
168
+
169
+ Models were downloaded from publicly available huggingface (Wolf et al., 2020) checkpoints with default hyperparameters and the default generation configuration was used. We ran inference on Nvidia T4 GPUs. The inference required an average of around 30 minutes for the whole dataset for the open-source models. Results represent single runs.
170
+
171
+ # 4.3 Summarization
172
+
173
+ For open-source LLMs, we used publicly available checkpoints from HuggingFace (Wolf et al., 2020) without altering their default hyperparameters, except for setting max_new_tokens to 1024. Closed-source models were accessed via their respective APIs, also using default hyperparameters. The prompt template was adapted from prior work (Zheng et al., 2023; Liu et al., 2023), and to ensure consistency, the same prompt was used across all models (details can be found in the Appendix).
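A minimal sketch of the open-source summarization setup described above, assuming the transformers generation API; the model shown is one of the benchmarked checkpoints, while the prompt wording and transcript are placeholders (the actual template is in the appendix).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # one of the benchmarked open models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

transcript = "Doctor: Good morning, what brings you in today? Patient: ..."  # placeholder
messages = [{
    "role": "user",
    "content": f"Summarize the following doctor-patient conversation in detail:\n\n{transcript}",
}]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Default generation settings, except max_new_tokens=1024 as in the paper.
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```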
174
+
175
+ We conducted the summarization experiments under two scenarios: (1) generating summaries from human-produced transcripts and (2) generating summaries from transcripts created by our best-performing ASR model (Whisper-large-v3).
176
+
177
+ # 5 Results and Discussion
178
+
179
+ # 5.1 Diarization
180
+
181
+ We compute DER separately for a subset of conversations with accurate timestamps: medical (9
182
+
183
+ <table><tr><td>Model</td><td>DER (%)</td><td>Med DER (%)</td><td>Gen DER (%)</td><td>AMI DER (%)</td><td>DIHARD DER (%)</td></tr><tr><td>Titanet-L</td><td>16.27</td><td>34.64</td><td>12.28</td><td>(1.89)</td><td>-</td></tr><tr><td>Pyannote</td><td>21.30</td><td>31.46</td><td>19.09</td><td>24.8 (4.6)</td><td>34.4</td></tr><tr><td>Reverb</td><td>26.87</td><td>58.04</td><td>20.10</td><td>-</td><td>-</td></tr></table>
184
+
185
+ Table 4: Diarization Error Rate (DER) for all 30 audios, with detailed results for the Medical (Med. DER) and General (Gen. DER) subsets. The AMI DER and DIHARD DER columns show performance on the AMI MixHeadset (Carletta et al., 2005) and DIHARD II (Ryant et al., 2019) datasets, respectively. Lower DER is better, and (*) indicates results where overlapping speech regions were ignored.
186
+
187
+ samples, Med DER) and general (21 samples, Gen DER) domain conversations. The results are shown in Table 4 and Figure 2. We also show the performance of these models on conversational datasets on other accents, using the values reported in (Bredin, 2023; Koluguri et al., 2022; Landini et al., 2022).
188
+
189
+ The models consistently performed better on general domain conversations compared to medical conversations, likely due to their relaxed structure and fewer interruptions. Diarization results on Afrispeech-Dialog are better than on AMI and DIHARD, likely because of the simulated and structured nature of the conversations.
190
+
191
+ ![](images/3a962e57f403b018a0eb64f8d72703d574d245773b08a3eec722b353e7fdeefd.jpg)
192
+ Figure 2: Comparison of Medical and General DER for Different Models
193
+
194
+ # 5.2 ASR
195
+
196
+ We report WER for all conversations, as well as separately for the medical and non-medical portions of the data (Table 5). The openai/whisper-large models achieved the best performance, followed by the nvidia/canary and nvidia/parakeet models, and lastly the wav2vec2-based models.
197
+
198
+ The results, as presented in Table 5 and Appendix Figure 3, demonstrate a clear trend: models exhibit superior performance on non-medical (general domain) audio compared to medical domain audio, roughly $5\%$ better WER, suggesting medical conversations are more challenging, likely due to accented medical jargon such as medication names, diagnoses, and the density of numerical entities (Afonja et al., 2024).
201
+
202
+ Following trends in Olatunji et al. (2023b), Wav2vec2-large-960h, finetuned exclusively on read LibriSpeech audiobooks (Panayotov et al., 2015), performs worst, confirming that model size (number of parameters), training set size, training data domain (in-the-wild, conversational) and multilingual pretraining play a role in generalizability to accented speech.
203
+
204
+ Additionally, we compare performance on other conversational datasets reported in Table 5 and show a 5 to 20 point (absolute) performance drop on average suggesting poor generalizability of SOTA models to African-accented conversation English speech. This degradation underscores the challenges ASR models face with African accents, highlighting the pressing need for high-quality, domain-specific datasets to improve ASR performance for low-resource accents and languages.
205
+
206
+ # 5.3 Summarization
207
+
208
+ Table 6 shows our summarization results.
209
+
210
+ In BERTScore-H, summaries are generated based on human transcripts and evaluated against human reference summaries. In BERTScore-M, summaries are generated based on machine-predicted transcripts (Whisper-Large-v3) and evaluated against human reference summaries.
211
+
212
+ LLM-Eval-H represents the LLM-as-Judge results when key details in the human transcripts are sought from summaries generated from human transcripts. LLM-Eval-M represents the LLM-as-Judge results when key details in the human transcripts are sought from summaries generated from machine-predicted transcripts.
213
+
214
+ Table 6 shows that general domain models like GPT-4o and Claude consistently achieve high scores across all evaluation metrics, followed by biomedical LLMs, whose generated summaries are semantically more similar to reference summaries.
215
+
216
+ <table><tr><td>Model</td><td>WER</td><td>Med WER</td><td>Gen WER</td><td>AMI</td><td>Earnings22</td><td>VoxPop</td></tr><tr><td>whisper-medium</td><td>21.27</td><td>26.49</td><td>19.47</td><td>16.6</td><td>-</td><td>7.4</td></tr><tr><td>whisper-large-v2</td><td>20.82</td><td>23.74</td><td>19.81</td><td>16.4</td><td>12.05</td><td>7.3</td></tr><tr><td>whisper-large-v3</td><td>20.38</td><td>23.81</td><td>19.19</td><td>16.01</td><td>11.3</td><td>-</td></tr><tr><td>whisper-large-v3-turbo</td><td>21.93</td><td>25.58</td><td>20.67</td><td>-</td><td>-</td><td>-</td></tr><tr><td>distil-whisper-large-v2</td><td>25.38</td><td>30.43</td><td>23.63</td><td>14.67</td><td>12.19</td><td>8.24</td></tr><tr><td>distil-whisper-large-v3</td><td>21.20</td><td>25.67</td><td>19.58</td><td>15.95</td><td>11.29</td><td>8.25</td></tr><tr><td>parakeet-rnnt-1.1b</td><td>28.16</td><td>34.03</td><td>26.13</td><td>17.1</td><td>15.15</td><td>5.44</td></tr><tr><td>parakeet-ctc-1.1b</td><td>28.97</td><td>34.16</td><td>27.19</td><td>15.67</td><td>13.75</td><td>6.56</td></tr><tr><td>parakeet-tdt-1.1b</td><td>28.69</td><td>33.57</td><td>27.01</td><td>15.9</td><td>14.65</td><td>5.49</td></tr><tr><td>canary-1b</td><td>22.82</td><td>27.40</td><td>21.25</td><td>13.53</td><td>12.05</td><td>5.79</td></tr><tr><td>wav2vec2-large-960h</td><td>86.34</td><td>88.35</td><td>81.17</td><td>37.0</td><td>-</td><td>17.9</td></tr><tr><td>mms-1b-all</td><td>61.75</td><td>69.04</td><td>59.22</td><td>42.02</td><td>31.17</td><td>17.63</td></tr></table>
217
+
218
+ Table 5: WER for various pre-trained models on the entire dataset, including the Medical (Med WER) and General (Gen WER) portions, as well as AMI (Carletta et al., 2005), Earnings22 (Del Rio et al., 2022), and Voxpopuli (Wang et al., 2021) datasets. Lower is better. Top 2 models in bold.
219
+
220
+ <table><tr><td>Model</td><td>BertScore-H (%)</td><td>LLM-Eval-H (%)</td><td>BertScore-M (%)</td><td>LLM-Eval-M (%)</td><td>Human-Eval (%)</td></tr><tr><td>Claude-3-Sonnet</td><td>88.39</td><td>73.53</td><td>88.50</td><td>69.42</td><td>69.74</td></tr><tr><td>GPT-4o</td><td>91.34</td><td>72.94</td><td>90.94</td><td>70.98</td><td>67.48</td></tr><tr><td>GPT-3.5-turbo</td><td>89.87</td><td>69.41</td><td>90.81</td><td>64.51</td><td>-</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>86.44</td><td>69.41</td><td>86.44</td><td>62.35</td><td>65.87</td></tr><tr><td>Llama-3.2-3B-Instruct</td><td>85.60</td><td>65.10</td><td>87.09</td><td>63.73</td><td>62.75</td></tr><tr><td>Gemma-2-9b</td><td>86.00</td><td>71.37</td><td>85.48</td><td>71.57</td><td>60.83</td></tr><tr><td>Human Expert</td><td>-</td><td>-</td><td>-</td><td>-</td><td>54.59</td></tr><tr><td>Llama3-Med42-8B</td><td>90.56</td><td>67.45</td><td>91.34</td><td>65.29</td><td>51.11</td></tr><tr><td>Phi3-mini-instruct</td><td>88.05</td><td>57.45</td><td>89.66</td><td>60.39</td><td>50.83</td></tr><tr><td>Meditron3-8B</td><td>89.83</td><td>60.20</td><td>90.40</td><td>59.61</td><td>42.98</td></tr></table>
221
+
222
+ Table 6: Summarization performance of various models, evaluated using BERTScore (F1), LLM-Eval, and Human Evaluation. BERTScore-H and LLM-Eval-H represent the metrics when summaries are generated and evaluated using human-generated transcripts, while BERTScore-M and LLM-Eval-M represent the metrics for machine-generated transcripts. Higher is better. Top 2 models in bold.
223
+
224
+ <table><tr><td>Model</td><td>positives</td><td>negatives</td><td>diagnoses</td><td>treatment</td><td>no hallucination</td><td>concise</td><td>Total</td></tr><tr><td>Human Expert</td><td>3.35</td><td>3.06</td><td>1.97</td><td>1.59</td><td>3.23</td><td>3.18</td><td>16.37</td></tr><tr><td>Claude-3-sonnet-20240229</td><td>4.31</td><td>4.31</td><td>1.54</td><td>1.46</td><td>4.77</td><td>4.54</td><td>20.92</td></tr><tr><td>GPT-4o</td><td>4.22</td><td>4.13</td><td>1.71</td><td>1.76</td><td>4.31</td><td>4.11</td><td>20.24</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>3.90</td><td>3.71</td><td>2.67</td><td>2.14</td><td>3.71</td><td>3.62</td><td>19.76</td></tr><tr><td>Llama-3.2-3B-Instruct</td><td>4.04</td><td>3.91</td><td>1.35</td><td>1.43</td><td>3.96</td><td>4.13</td><td>18.83</td></tr><tr><td>Gemma</td><td>3.81</td><td>3.88</td><td>1.31</td><td>1.69</td><td>3.69</td><td>3.88</td><td>18.25</td></tr><tr><td>Llama3-Med42-8B</td><td>3.13</td><td>2.67</td><td>2.00</td><td>1.87</td><td>2.83</td><td>2.83</td><td>15.33</td></tr><tr><td>Phi-3-mini-4k-instruct</td><td>3.22</td><td>2.84</td><td>1.59</td><td>1.67</td><td>2.96</td><td>2.96</td><td>15.25</td></tr><tr><td>Meditron3-8B</td><td>2.86</td><td>2.48</td><td>1.46</td><td>1.23</td><td>2.45</td><td>2.41</td><td>12.89</td></tr></table>
225
+
226
+ Table 7: Human Evaluation Results showing means of 5-point ratings for each criterion. Higher is better. Positives: Does the summary include all important positive clinical signs, symptoms, or details in the transcript?; Negatives: Does the summary include all important negative clinical signs, symptoms, or details in the transcript?; Diagnoses: Does the summary accurately recall the diagnoses from the transcript?; Treatment: Does the summary accurately reflect the treatment plan in the transcript?; No Hallucination: Does the summary avoid any false, incorrect, or fabricated information?; Concise: Is the summary concise, clear, and well-structured?
227
+
228
+ Claude-3 was ranked highest for summarizing human transcripts, while Gemma-2 ranked highest for summarizing ASR transcripts.
229
+
230
+ Open general smaller LLMs like Gemma and the Llama models demonstrate lower semantic similarity with reference summaries (BERTScore), and their performance on the LLM-Eval metrics demonstrates an inferior ability to retrieve critical information from the conversation. The weakest performer overall was Phi3-mini-4k-instruct, particularly in the LLM-Eval scores, signaling significant challenges in medical summarization.
235
+
236
+ # 5.4 Human Evaluation
237
+
238
+ Results in Table 6 show that LLM-as-Judge ratings (LLM-Eval-H) were strongly correlated (Pearson's r = 0.816) with Human Expert ratings, in line with Zheng et al. (2023).
239
+
240
+ Table 7 showed that, of the 6 criteria, expert summaries were ranked in the top 3 only at accurately recalling the diagnoses from the transcript, suggesting this was a key focus for experts. LLM summaries were more concise and retained greater amounts of key details when compared to the expert summaries.
241
+
242
+ The blind expert ratings showed that overall, experts ranked leading LLM summaries higher than reference summaries, scoring consistently higher on the completeness of key facts carried over from the transcript to the summary. This suggests that LLMs may perform better at certain tasks, such as compressing large amounts of information (10+ mins conversations) when compared with time-constrained physicians who may prefer brevity.
243
+
244
+ # 5.5 Error Propagation from cascading models
245
+
246
+ While summarizing ASR transcripts reveals no significant difference in semantic similarity with reference summaries (BERTScore-H vs BERTScore-M), a more nuanced pattern emerges with the LLM-Eval-H and LLM-Eval-M metrics, where many models show a notable drop in performance. LLM-Eval-M scores are on average 2 to 5 points lower than LLM-Eval-H. This indicates that ASR errors (noisy transcription outputs) may limit the summarization ability of LLMs, leading to loss of critical healthcare information.
247
+
248
+ # 6 Conclusion
249
+
250
+ This study highlights several challenges with accented conversational ASR, especially its impact on downstream medical summarization. We benchmark speaker diarization, ASR, and medical summarization, drawing attention to gaps and opportunities to improve accented conversational ASR in the African context.
251
+
252
+ # 7 Limitations
253
+
254
+ While simulated conversations provide the opportunity to benchmark ASR systems, they may not fully reflect the complexity of real-world natural dialogues where several interruptions, distractions and background noise are more typical. Additionally, the age range of contributors does not accurately reflect the diversity of real-world speaker populations, potentially impacting the generalizability of the results. The use of general large language models (LLMs) as evaluators may also introduce biases. At 5hrs, the dataset does not reflect the full breadth of medical or general conversations as several domains, subdomains, topics, and medical specialties were not covered in this release. Nonetheless, this work significantly contributes to addressing the gap in ASR systems' adaptability to African accents. Future research should aim to incorporate real-world data, broader age representation, and further optimize models to enhance the robustness and applicability of ASR technologies in diverse African linguistic environments.
257
+
258
+ # 8 Ethical Considerations
259
+
260
+ In a world where voice cloning and identity theft are major cybersecurity threats, releasing a voice-based dataset is not without risk. Each recording began with contributors expressing consent to the recording. However, since real names were sometimes used during consent, these segments were removed to protect participants' identities. During doctor-patient conversations, pseudonyms were used to maintain anonymity. Furthermore, releasing real-world doctor-patient conversations risks breaching privacy and confidentiality requirements, hence the preference for simulated conversations.
261
+
262
+ # 9 Acknowledgment
263
+
264
+ We thank the Intron Health team for providing the data and compute used in this work and all contributors, patient- and doctor-actors whose voices are included in the dataset.
265
+
266
+ # References
267
+
268
+ Asma Ben Abacha, Wen-wai Yim, Yadan Fan, and Thomas Lin. 2023. An empirical study of clinical note generation from doctor-patient encounters. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2291-2302.
269
+ Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.
270
+
271
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
272
+ David Ifeoluwa Adelani, Graham Neubig, Sebastian Ruder, Shruti Rijhwani, Michael Beukman, Chester Palen-Michel, Constantine Lignos, Jesujoba O Alabi, Shamsuddeen H Muhammad, Peter Nabende, et al. 2022. Masakhaner 2.0: Africa-centric transfer learning for named entity recognition. arXiv preprint arXiv:2210.12391.
273
+ Tejumade Afonja, Tobi Olatunji, Sewade Ogun, Naome A Etori, Abraham Owodunni, and Moshood Yekini. 2024. Performant asr models for medical entities in accented speech. arXiv preprint arXiv:2406.12387.
274
+ Alëna Aksénova, Zhehuai Chen, Chung-Cheng Chiu, Daan van Esch, Pavel Golik, Wei Han, Levi King, Bhuvana Ramabhadran, Andrew Rosenberg, Suzan Schwartz, et al. 2022. Accented speech recognition: Benchmarking, pre-training, and diverse data. arXiv preprint arXiv:2205.08014.
275
+ Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853.
276
+ Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In Proceedings of the 24th International Conference on Machine Learning, pages 33-40.
277
+ Anthropic. 2023. The claude 3 model family: Opus, sonnet, haiku. Accessed: 2024-10-15.
278
+ Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449-12460.
279
+ Yohannes Baye, Tesfaye Demeke, Nigusie Birhan, Agumasie Semahegn, and Simon Birhanu. 2020. Nurses' work-related stress and associated factors in governmental hospitals in harar, eastern ethiopia: A cross-sectional study. *PloS one*, 15(8):e0236782.
280
+ Nishchal Bhandari, Danny Chen, Miguel Angel del Río Fernández, Natalie Delworth, Jennifer Drexler Fox, Miguel Jette, Quinten McNamara, Corey Miller, Ondrej Novotny, Jan Profant, Nan Qin, Martin Ratajczak, and Jean-Philippe Robichaud. 2024. Reverb: Open-source asr and diarization from rev. Preprint, arXiv:2410.03930.
281
+ Jasmin Bharadiya. 2023. A comprehensive survey of deep learning techniques natural language processing. European Journal of Technology, 7(1):58-66.
282
+
283
+ Anjanava Biswas and Wrick Talukdar. 2024. Intelligent clinical documentation: Harnessing generative ai for patient-centric clinical note generation. arXiv preprint arXiv:2405.18346.
284
+ Herve Bredin, Ruiqing Yin, Juan Manuel Coria, Gregory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, and Marie-Philippe Gill. 2020. Pyannote. audio: neural building blocks for speaker diarization. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7124-7128. IEEE.
285
+ Hervé Bredin. 2023. pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe. In Proc. INTERSPEECH 2023.
286
+ Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2005. The ami meeting corpus: A pre-announcement. In International workshop on machine learning for multimodal interaction, pages 28-39. Springer.
287
+ Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, and Furu Wei. 2022. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505-1518.
288
+ Clément Christophe, Praveen K Kanithi, Tathagata Raha, Shadab Khan, and Marco AF Pimentel. 2024. Med42-v2: A suite of clinical llms.
289
+ Victoria YH Chua, Hexin Liu, Leibny Paola Garcia Perera, Fei Ting Woon, Jinyi Wong, Xiangyu Zhang, Sanjeev Khudanpur, Andy WH Khong, Justin Dauwels, and Suzy J Styles. 2023. Merlion ccs challenge: A english-mandarin code-switching child-directed speech corpus for language identification and diarization. arXiv preprint arXiv:2305.18881.
290
+ Miguel Del Rio, Peter Ha, Quinten McNamara, Corey Miller, and Shipra Chandra. 2022. Earnings-22: A practical benchmark for accents in the wild. arXiv preprint arXiv:2203.15591.
291
+ Brecht Desplanques, Jenthe Thienpondt, and Kris Demuynck. 2020. Ecapa-tdnn: Emphasized channel attention, propagation and aggregation in tdnn based speaker verification. arXiv preprint arXiv:2005.07143.
292
+ George R Doddington, Mark A Przybocki, Alvin F Martin, and Douglas A Reynolds. 2000. The nist speaker recognition evaluation—overview, methodology, systems, results, perspective. Speech communication, 31(2-3):225–254.
293
+
294
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
295
+ Seppo Enarvi, Marilisa Amoia, Miguel Del-Agua Teba, Brian Delaney, Frank Diehl, Stefan Hahn, Kristina Harris, Liam McGrath, Yue Pan, Joel Pinto, et al. 2020. Generating medical reports from patient-doctor conversations using sequence-to-sequence models. In Proceedings of the first workshop on natural language processing for medical conversations, pages 22-30.
296
+ Naome Etori, Ebasa Temesgen, and Maria Gini. 2023. What we know so far: Artificial intelligence in african healthcare. arXiv preprint arXiv:2305.18302.
297
+ Faiha Fareez, Tishya Parikh, Christopher Wavell, Saba Shahab, Meghan Chevalier, Scott Good, Isabella De Blasi, Rafik Rhouma, Christopher McMahon, Jean-Paul Lam, et al. 2022. A dataset of simulated patient-physician medical interviews with a focus on respiratory cases. Scientific Data, 9(1):313.
298
+ J Luke Galloway, Dominique Munroe, Pamela D Vohra-Khullar, Christopher Holland, Mary A Solis, Miranda A Moore, and Reema H Dbouk. 2024. Impact of an artificial intelligence-based solution on clinicians' clinical documentation experience: Initial findings using ambient listening technology. Journal of General Internal Medicine, pages 1-3.
299
+ Sanchit Gandhi, Patrick von Platen, and Alexander M Rush. 2023. Distil-whisper: Robust knowledge distillation via large-scale pseudo labelling. arXiv preprint arXiv:2311.00430.
300
+ Martucci Giuseppe, Mauro Cettolo, Matteo Negri, and Marco Turchi. 2021. Lexical modeling of asr errors for robust speech translation. In Proceedings of Interspeech 2021, pages 2282-2286.
301
+ Michael Hanna and Ondrej Bojar. 2021. A fine-grained analysis of bertscore. In Proceedings of the Sixth Conference on Machine Translation, pages 507-517.
302
+ Eric Harper, Somshubra Majumdar, Oleksii Kuchaiev, Li Jason, Yang Zhang, E Bakhturina, V Noroozi, S Subramanian, K Nithin, H Jocelyn, et al. 2019. Nemo: A toolkit for conversational ai and large language models. [Computer software]. URL: https://github.com/NVIDIA/NeMo.
303
+ Arthur Hinsvark, Natalie Delworth, Miguel Del Rio, Quinten McNamara, Joshua Dong, Ryan Westerman, Michelle Huang, Joseph Palakapilly, Jennifer Drexler, Ilya Pirkin, et al. 2021. Accented speech recognition: A survey. arXiv preprint arXiv:2104.10747.
304
+ Praveen K Kanithi, Clément Christophe, Marco AF Pimentel, Tathagata Raha, Nada Saadi, Hamza Javed, Svetlana Maslenkova, Nasir Hayat, Ronnie Rajan,
305
+
306
+ and Shadab Khan. 2024. Medic: Towards a comprehensive framework for evaluating llms in clinical applications. arXiv preprint arXiv:2409.07314.
307
+ Nithin Rao Koluguri, Taejin Park, and Boris Ginsburg. 2022. Titanet: Neural model for speaker representation with 1d depth-wise separable convolutions and global context. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8102-8106.
308
+ Alex Papadopoulos Korfiatis, Francesco Moramarco, Radmila Sarac, and Aleksandar Savkov. 2022. Primock57: A dataset of primary care mock consultations. arXiv preprint arXiv:2204.00333.
309
+ Harold W Kuhn. 1955. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83-97.
310
+ Federico Landini, Jan Profant, Mireia Diez, and Lukáš Burget. 2022. Bayesian hmm clustering of x-vector sequences (vbx) in speaker diarization: Theory, implementation and analysis on standard tasks. Computer Speech Language, 71:101254.
311
+ Khai Le-Duc, Khai-Nguyen Nguyen, Long Vo-Dang, and Truong-Son Hy. 2024. Real-time speech summarization for medical conversations. arXiv preprint arXiv:2406.15888.
312
+ Yixin Liu, Alexander R Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, and Arman Cohan. 2023. Benchmarking generation and evaluation capabilities of large language models for instruction controllable summarization. arXiv preprint arXiv:2311.09184.
313
+ Anirudh Mani, Shruti Palaskar, and Sandeep Konam. 2020. Towards understanding asr error correction for medical conversations. In Proceedings of the first workshop on natural language processing for medical conversations, pages 7-11.
314
+ George Michalopoulos, Kyle Williams, Gagandeep Singh, and Thomas Lin. 2022. Medicalsum: A guided clinical abstractive summarization model for generating medical reports from patient-doctor conversations. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4741-4749.
315
+ Hoang-Thi Nguyen-Vo, Huy Nguyen-Gia, Hoan-Duy Nguyen-Tran, Hoang Pham-Minh, Hung Vo-Thanh, and Hao Do-Due. 2022. Marblenet: A deep neural network solution for Vietnamese voice activity detection. In 2022 9th NAFOSTED Conference on Information and Computer Science (NICS), pages 154-158.
316
+ Sewade Ogun, Abraham T Owodunni, Tobi Olatunji, Eniola Alose, Babatunde Oladimeji, Tejumade Afonja, Kayode Olaleye, Naome A Etori, and Tosin Adewumi. 2024. 1000 african voices: Advancing inclusive multi-speaker multi-accent speech synthesis. arXiv preprint arXiv:2406.11727.
317
+
318
+ Tobi Olatunji, Tejumade Afonja, Bonaventure FP Dossou, Atnafu Lambebo Tonja, Chris Chinenye Emezue, Amina Mardiyyah Rufai, and Sahib Singh. 2023a. Afrinames: Most ASR models "butcher" African names. arXiv preprint arXiv:2306.00253.
319
+ Tobi Olatunji, Tejumade Afonja, Aditya Yadavalli, Chris Chinenye Emezue, Sahib Singh, Bonaventure FP Dossou, Joanne Osuchukwu, Salomey Osei, Atnafu Lambebo Tonja, Naome Etori, et al. 2023b. Afrispeech-200: Pan-african accented speech dataset for clinical and general domain asr. Transactions of the Association for Computational Linguistics, 11:1669-1685.
320
+ Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206-5210. IEEE.
321
+ Hannaneh B Pasandi and Haniyeh B Pasandi. 2022. Evaluation of asr systems for conversational speech: A linguistic perspective. In Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems, pages 962-965.
322
+ Piotr Pezik, Gosia Krawentek, Sylwia Karasińska, Paweł Wilk, Paulina Rybinska, Anna Cichosz, Angelika Peljak-Lapińska, Mikołaj Deckert, and Michal Adamczyk. 2022. Diabiz—an annotated corpus of polish call center dialogs. In Proceedings of the thirteenth language resources and evaluation conference, pages 723-726.
323
+ Mirosław Płaza, Łukasz Pawlik, and Stanisław Deniziak. 2021. Call transcription methodology for contact center systems. IEEE Access, 9:110975-110988.
324
+ Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, et al. 2024. Scaling speech technology to 1,000+ languages. Journal of Machine Learning Research, 25(97):1-52.
325
+ Krishna C Puvvada, Piotr Želasko, He Huang, Oleksii Hrinchuk, Nithin Rao Koluguri, Kunal Dhawan, Somshubra Majumdar, Elena Rastorgueva, Zhehuai Chen, Vitaly Lavrukhin, et al. 2024. Less is more: Accurate speech recognition & translation without web-scale data. arXiv preprint arXiv:2406.19674.
326
+ Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International conference on machine learning, pages 28492-28518. PMLR.
327
+ Mohammad Sadegh Rasooli and Joel R. Tetreault. 2015. Yara parser: A fast and accurate dependency parser. Computing Research Repository, arXiv:1503.06733. Version 2.
328
+
329
+ Mirco Ravanelli and Yoshua Bengio. 2018. Speaker recognition from raw waveform with sincnet. 2018 IEEE Spoken Language Technology Workshop (SLT), pages 1021-1028.
330
+ Neville Ryant, Kenneth Church, Christopher Cieri, Alejandrina Cristia, Jun Du, Sriram Ganapathy, and Mark Liberman. 2019. The second dihard diarization challenge: Dataset, task, and baselines. pages 978-982.
331
+ Oleg Serikov, Ekaterina Voloshina, Anna Postnikova, Saliha Muradoğlu, Éric Le Ferrand, Elena Klyachko, Ekaterina Vylomova, Tatiana Shavrina, and Francis Tyers. 2024. Proceedings of the 3rd workshop on nlp applications to field linguistics (field matters 2024). In Proceedings of the 3rd Workshop on NLP Applications to Field Linguistics (Field Matters 2024).
332
+ Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature, 620(7972):172-180.
333
+ Gabriel Skantze. 2021. Turn-taking in conversational systems and human-robot interaction: a review. Computer Speech & Language, 67:101178.
334
+ Georgios P Spithourakis, Ivan Vulic, Michal Lis, Inigo Casanueva, and Paweł Budzianowski. 2022. Evi: Multilingual spoken dialogue tasks and dataset for knowledge-based enrolment, verification, and identification. arXiv preprint arXiv:2204.13496.
335
+ Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118.
336
+ Mohamad-Hani Temsah, Amr Jamal, Khalid Alhasan, Abdulkarim A Temsah, and Khalid H Malki. 2024. Openai o1-preview vs. chatgpt in healthcare: A new frontier in medical ai reasoning. Cureus, 16(10).
337
+ Atnafu Lambebo Tonja, Israel Abebe Azime, Tadesse Destaw Belay, Mesay Gameda Yigezu, Moges Ahmed Mehamed, Abinew Ali Ayele, Ebrahim Chekol Jibril, Michael Melese Woldeyohannis, Olga Kolesnikova, Philipp Slusallek, et al. 2024. Ethiollm: Multilingual large language models for ethiopian languages with task evaluation. arXiv preprint arXiv:2403.13737.
338
+ Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 993-1003, Online. Association for Computational Linguistics.
341
+
342
+ Lucy Lu Wang, Yulia Otmakhova, Jay DeYoung, Thinh Hung Truong, Bailey E Kuehl, Erin Bransom, and Byron C Wallace. 2023. Automated metrics for medical multi-document summarization disagree with human evaluations. arXiv preprint arXiv:2305.13693.
343
+
344
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38-45.
345
+
346
+ Emre Yilmaz, Astik Biswas, Ewald van der Westhuizen, Febe de Wet, and Thomas Niesler. 2018. Building a unified code-switching asr system for south african languages. arXiv preprint arXiv:1807.10949.
347
+
348
+ Wen-wai Yim, Yujuan Fu, Asma Ben Abacha, Neal Snider, Thomas Lin, and Meliha Yetisgen. 2023. Acibench: a novel ambient clinical intelligence dataset for benchmarking automatic visit note generation. Scientific Data, 10(1):586.
349
+
350
+ Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.
351
+
352
+ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595-46623.
353
+
354
+ # A Human Evaluation Criteria
355
+
356
+ This section provides details on the criteria used to evaluate the AI-generated summaries of doctor-patient conversations. Each summary was evaluated based on the following aspects:
357
+
358
+ 1. How well does the summary include all positive clinical signs, symptoms, or details in the transcript?
359
+ 2. How well does the summary include all negative clinical signs, symptoms, or details in the transcript?
360
+ 3. How well does the summary recall the diagnosis from the transcript?
361
+ 4. How accurately does the summary reflect the treatment plan in the transcript?
362
+ 5. To what degree does the summary avoid false, incorrect, or fabricated information?
363
+ 6. How concise, clear, and well-structured is the summary?
364
+
365
+ # B Prompt Template for the LLM Evaluator
366
+
367
+ Table 8 shows the prompt template used by OpenAI's o1 model, which serves as the LLM evaluator.
368
+
369
+ # C Evaluation Metric for the LLM Evaluator
370
+
371
+ In the LLM evaluation prompt (see Section B), each criterion has a maximum score of 5 points. Since there are six criteria per row, the perfect score for a single row is $5 \times 6 = 30$ points. For a dataset with ten rows, the total possible score (perfect accuracy) is:
372
+
373
+ Perfect Accuracy $= 5 \times 6 \times 10 = 300$ points.
374
+
375
+ To compute a model's percentage score for a dataset with $N$ rows:
376
+
377
+ 1. Sum the scores for all six criteria per row.
378
+ 2. Sum the total scores across all rows.
379
+ 3. Calculate the percentage score for the model as:
380
+
381
+ $$
382
+ \text{Percentage} = \left(\frac{\text{Model's Total Score}}{5 \times 6 \times \text{Number of Rows}}\right) \times 100.
383
+ $$
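As a minimal illustration of this computation (assuming the per-row criterion scores have already been parsed from the evaluator's JSON output into dictionaries keyed "Criterion 1" through "Criterion 6"; the function name is ours, not part of the released tooling):

```python
# Minimal sketch of the percentage-score computation described above.
# Assumes each row's scores were parsed from the LLM evaluator's JSON output
# into a dict like {"Criterion 1": 4, ..., "Criterion 6": 5}.

MAX_SCORE_PER_CRITERION = 5
NUM_CRITERIA = 6

def percentage_score(rows: list[dict]) -> float:
    """Aggregate per-criterion scores across rows into a percentage."""
    total = sum(sum(row.values()) for row in rows)
    perfect = MAX_SCORE_PER_CRITERION * NUM_CRITERIA * len(rows)
    return 100.0 * total / perfect

# Example: two rows scoring 30 and 24 points -> (54 / 60) * 100 = 90.0
rows = [
    {f"Criterion {i}": 5 for i in range(1, 7)},
    {f"Criterion {i}": 4 for i in range(1, 7)},
]
print(percentage_score(rows))  # 90.0
```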
384
+
385
+ # LLM as a Judge Prompt Template
386
+
387
+ You are a medical expert. Your task is to evaluate the accuracy and quality of an AI-generated summary of a doctor-patient conversation. Provide an impartial assessment based on the criteria below, rating each from 1 to 5 ( $1 =$ lowest, $5 =$ highest). If a specific criterion is not addressed, assign a score of 0.
388
+
389
+ Transcript: {transcript}
390
+
391
+ Summary: {summary}
392
+
393
+ # Evaluation Criteria:
394
+
395
+ 1. Completeness of positive clinical signs, symptoms, or relevant details: Does the summary capture all important positive findings?
396
+ 2. Completeness of negative clinical signs, symptoms, or relevant details: Does the summary capture all important negative findings?
397
+ 3. Accuracy of diagnosis: Does the summary correctly reflect the diagnosis?
398
+ 4. Accuracy of treatment plan: Does the summary convey the treatment plan correctly?
399
+ 5. Factual accuracy: Does the summary avoid false, incorrect, or fabricated information?
400
+ 6. Clarity and structure: Is the summary concise, clear, and well-structured?
401
+
402
+ # Please Return the evaluation scores in JSON format:
403
+
404
+ ```txt
405
+ "Criterion 1": score, "Criterion 2": score, "Criterion 3": score, "Criterion 4": score, "Criterion 5": score, "Criterion 6": score
406
+ ```
407
+
408
+ Table 8: Evaluation task description for assessing AI-generated summaries.
409
+
410
+ # D Prompt Template for Summarization
411
+
412
+ The following prompt template was used for the summarization task.
413
+
414
+ You are a medical expert; you will be provided a transcript that contains a conversation between a doctor and a patient. The task is for you to summarize the key points in the conversation. Transcript: {transcript}
415
+
416
+ # E Transcript and Summary Example
417
+
418
+ This section presents a sample of a human-written transcript in Table 9 and the corresponding LLM-generated summaries for the transcript. While GPT4o captures key details such as pain severity, lifestyle factors, and medical history, its summary omits a vital detail: the patient's experience of rectal bleeding "during urination", which is crucial to the case. The summaries are:
419
+
420
+ GPT4o's Summary: Dr. Philip conducted a consultation with Mrs. Ayike, a 45-year-old woman. She reported experiencing rectal bleeding and abdominal pain for the past two weeks. The pain, rated as a 6 out of 10 on the pain scale, is localized in the lower abdominal region and worsens with pressure. Mrs. Ayike also noted a loss of appetite and fatigue since the onset of her symptoms. She has not sought medical attention prior to this visit and has no history of surgery. She neither smokes nor drinks alcohol. Although Mrs. Ayike does not travel, she revealed a family history of colorectal cancer through her mother. Her usual diet includes "swallow," and she has no food allergies. Mrs. Ayike is married with four children, but currently lives separately from her husband. Dr. Philip concluded the session by acknowledging Mrs. Ayike's details.
421
+
422
+ Claude's Summary: Based on the conversation, the key points are: 1. The patient, Mrs. Ayike, is a 45-year-old woman presenting with rectal bleeding and abdominal pain for the past two weeks. 2. The rectal bleeding occurs when she urinates, and it is not mixed with stool. 3. The abdominal pain is rated as 6 out of 10 in severity, and it is localized in the lower abdominal region. 4. She has lost her appetite and feels tired since the symptoms started. 5. She has not traveled recently and is not a frequent traveler. 6. She is married but currently separated from her husband, living in a different state. 7. She has four children. 8. She has not undergone any surgery before. 9. She does not smoke or consume alcohol. 10. Her mother had colorectal cancer in the past, indicating a family history of the condition. 11. Her typical diet includes swallows (a type of food), and she is not allergic to any food. The doctor has gathered essential information about the patient's symptoms, medical history, and lifestyle to aid in the diagnosis and treatment process.
425
+
426
+ Gemma's Summary: Mrs. Ayike, a 45-year-old woman, presented to Dr. Philip complaining of two weeks of abdominal pain and rectal bleeding. Key points: Symptoms: Lower abdominal pain rated 6/10, worsening with pressure. Rectal bleeding during urination, not mixed with stool. Loss of appetite and fatigue. Medical History: No previous surgeries. No smoking or alcohol consumption. Family history of colorectal cancer (mother). Lifestyle: Prefers a diet high in swallow. No known food allergies. Not a traveler. Separated from her husband, living with four children. Next steps: Based on the symptoms and family history, Dr. Philip will likely order further investigations to determine the cause of Mrs. Ayike's abdominal pain and rectal bleeding. This may include blood tests, stool tests, imaging studies (such as a colonoscopy or CT scan), and possibly a biopsy.
427
+
428
+ Med42-8B's summary: Mrs. Ayike, a 45-year-old female, presents with rectal bleeding and lower abdominal pain, rated 6/10 in severity, which started two weeks ago. She reports seeing red blood in her urine but not mixed with stool. She has not sought medical attention previously for these symptoms. She describes her appetite as poor and feels tired. She denies any recent travel, smoking, or alcohol use. She is married with four children but is currently separated from her husband. Her mother had colorectal cancer. She consumes a diet primarily consisting of fufu (swallow). There are no known food allergies.
429
+
430
+ GPT3.5's summary: Mrs. Aike, a 45-year-old patient, visited Dr. Philip due to rectal bleeding and abdominal pain that started two weeks ago. The abdominal pain is severe, rated at a 6 on a scale of 1 to 3, and is localized in the lower abdominal region. She has also experienced a loss of appetite, fatigue, and difficulty eating since the pain started. Mrs. Aike is married with four children, not living with her husband, and has no history of surgeries, smoking, or alcohol consumption. Her mother had colorectal cancer before, indicating a family history of similar issues. She prefers eating swallow and is not allergic to any food. This is her first visit to seek medical help for these symptoms.
431
+
432
+ <table><tr><td>Transcript</td></tr><tr><td>Doctor: Good afternoon. My name is Dr. Philip, and what&#x27;s your name?</td></tr><tr><td>Patient: My name is Mrs. Ayike.</td></tr><tr><td>Doctor: Okay, how old are you?</td></tr><tr><td>Patient: I am 45 years old.</td></tr><tr><td>Doctor: Okay. You&#x27;re welcome, my dear. What brought you here?</td></tr><tr><td>Patient: I have rectal bleeding and abdominal pain.</td></tr><tr><td>Doctor: Can you tell me more about it?</td></tr><tr><td>Patient: The pain started two weeks ago, and the rectal bleeding happens when I urinate. The blood is not mixed with stool.</td></tr><tr><td>Doctor: Have you gone for any check-ups?</td></tr><tr><td>Patient: No, this is my first time coming here.</td></tr><tr><td>Doctor: How severe is the abdominal pain, from a scale of 1 to 10?</td></tr><tr><td>Patient: 6.</td></tr><tr><td>Doctor: Which side of the abdominal region do you feel the pain?</td></tr><tr><td>Patient: I feel pain in my lower abdominal region, and it worsens with pressure. And since the last two weeks, as I&#x27;ve started the abdominal pain, I have not been able to eat well. I lost appetite. I feel tired.</td></tr><tr><td>Doctor: Have you traveled recently?</td></tr><tr><td>Patient: No, I don&#x27;t travel.</td></tr><tr><td>Doctor: Are you married?</td></tr><tr><td>Patient: Yes, I am married, with four children, but my husband lives in another state.</td></tr><tr><td>Doctor: Have you undergone any surgery before?</td></tr><tr><td>Patient: No.</td></tr><tr><td>Doctor: Do you smoke or drink alcohol?</td></tr><tr><td>Patient: No.</td></tr><tr><td>Doctor: Does anyone in your family have similar issues?</td></tr><tr><td>Patient: Yes, my mother had colorectal cancer.</td></tr><tr><td>Doctor: What foods do you normally eat?</td></tr><tr><td>Patient: I like taking swallow. I like swallow a lot.</td></tr><tr><td>Doctor: Are you allergic to any food?</td></tr><tr><td>Patient: No.</td></tr></table>
433
+
434
+ Table 9: Snippet of a doctor-patient conversation (transcript) written by a clinician
435
+
436
437
+
438
+ Meditron3-8B's summary: The patient, Mrs. Ayike, is a 45-year-old woman who has been experiencing rectal bleeding and abdominal pain for two weeks. She has not undergone any surgery before and does not smoke or take alcohol. Her mother had colorectal cancer before. She has lost her appetite and feels tired. The doctor asks her about her family history, food allergies, and the severity of her symptoms. The doctor also asks about her marital status and children. The patient's symptoms are similar to those of colorectal cancer, which her mother had.
441
+
442
+ Llama-3.1-8B-Instruct's summary: Based on the provided transcript, here are the key points from the conversation between Dr. Philip and Mrs. Ayike: 1. Patient's Introduction: Mrs. Ayike, a 45-year-old woman, presents to the doctor with rectal bleeding and abdominal pain. 2. Symptoms: The abdominal pain started two weeks ago, and she experiences rectal bleeding when urinating, but not mixed with stool. The pain is located in the lower abdominal region and worsens with pressure. 3. Medical History: Mrs. Ayike has not undergone any surgery before and does not have a history of smoking or alcohol consumption. 4. Family History: Her mother had colorectal cancer before. 5. Diet and Allergies: Mrs. Ayike likes eating swallow (a type of food) and is not allergic to any food. 6. Lifestyle: She is married with four children but is currently separated from her husband. She does not travel and has not been able to eat well or feel energetic due to her symptoms. 7. Current Condition: Mrs. Ayike rates her abdominal pain as a 6 out of 10 and has lost appetite and feels tired since the onset of symptoms. Based on the information provided, Dr. Philip will likely perform a thorough examination and possibly order diagnostic tests to determine the cause of Mrs. Ayike's symptoms. Given her family history of colorectal cancer, a colonoscopy may be recommended to rule out any potential malignancies.
445
+
446
+ Llama-3.2-3B-Instruct's summary: Here's a summary of the key points in the conversation between Dr. Philip and Mrs. Ayike: Patient's Symptoms: Rectal bleeding (not mixed with stool), Abdominal pain (rated 6 out of 10) in the lower abdominal region, Loss of appetite and tiredness for the past two weeks. Medical History: No previous surgeries, No smoking or alcohol consumption, No known allergies to food Family History: Mother had colorectal cancer before. Lifestyle and Habits: Married with four children, but not living with her husband. No history of traveling. Prefers swallowing food. Other Information: Mrs. Ayike is 45 years old. She has not been able to eat well due to the abdominal pain. She is seeking medical attention for the first time, as the symptoms started two weeks ago.
447
+
448
+ Phi3-mini-instruct's summary: Dr. Philip, a medical expert, is speaking with Mrs. Ayike, a 45-year-old woman who has been experiencing rectal bleeding and abdominal pain for two weeks. Mrs. Ayike has not had any check-ups for her symptoms and has lost her appetite, feeling tired. She has not traveled recently and is not a smoker. She does not consume alcohol and has no known food allergies. Mrs. Ayike's mother had colorectal cancer. Dr. Philip asks about her diet, which mainly consists of swallow, and inquires about any family history of similar issues.
449
+
450
+ # F Overlapping Speech Example
451
+
452
+ In this example, overlapping speech occurs between Speaker 1 and Speaker 2. Speaker 2's "No" overlaps with Speaker 1's utterance starting with "Ok, thank you very much." Both utterances are fully captured with precise timestamps to ensure accurate representation of the conversation. This example illustrates how overlaps are handled in the transcription process for the dataset.
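For illustration only, a minimal sketch of how such cross-speaker overlaps can be detected from timestamped segments; the tuple format and function name are ours, and the values mirror Table 10 (times converted to seconds):

```python
# Minimal sketch: flag overlapping utterances from (start, end, speaker) tuples.
# Times are in seconds; the data below mirrors the example in Table 10.

def find_overlaps(segments):
    """Return pairs of segments whose time intervals intersect and whose speakers differ."""
    overlaps = []
    ordered = sorted(segments, key=lambda s: s[0])
    for i, (s1, e1, spk1) in enumerate(ordered):
        for s2, e2, spk2 in ordered[i + 1:]:
            if s2 >= e1:          # later segments start after this one ends
                break
            if spk1 != spk2:      # genuine cross-speaker overlap
                overlaps.append(((s1, e1, spk1), (s2, e2, spk2)))
    return overlaps

segments = [
    (160.98, 161.99, "Speaker 1"),  # "Past hospital admission?"
    (162.97, 163.02, "Speaker 2"),  # "No."
    (163.00, 165.99, "Speaker 1"),  # "Ok, thank you very much. ..."
]
print(find_overlaps(segments))
# [((162.97, 163.02, 'Speaker 2'), (163.0, 165.99, 'Speaker 1'))]
```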
453
+
454
+ # G ASR and Summarization Result Graphs
455
+
456
+ ![](images/54bca89a7c6f7e1b9ff551095f908d7f4a411fd092de726a5112487b1700f9e4.jpg)
457
+ Figure 3: Comparison of Medical and General WER for Different Models
458
+
459
+ ![](images/d64b20721aa7b82ef0467b66faef74e36c25a99b5c86bc09574a5acde25c90e6.jpg)
460
+ Figure 4: Summarization results for several LLMs
461
+
462
+ <table><tr><td>Timestamp</td><td>Speaker</td><td>Utterance</td></tr><tr><td>02:40.98 – 02:41.99</td><td>Speaker 1</td><td>Past hospital admission?</td></tr><tr><td>02:42.97 – 02:43.02</td><td>Speaker 2</td><td>No.</td></tr><tr><td>02:43.00 – 02:45.99</td><td>Speaker 1</td><td>Ok, thank you very much. Do you live alone?</td></tr></table>
463
+
464
+ Table 10: Example of overlapping speech with accurate timestamps and speaker annotations.
465
+
466
+ ![](images/03fb366b6804eaf9a6c52687989a49d886e5c4adcafe62543f79f95f5b485c72.jpg)
467
+ Figure 5: WER, Med WER, and Non-Med WER for Various Models
afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:692c00e9f54a56e37f52c6f45d3b24e9abd570ecb494533eadd62897ce5d122c
3
+ size 837692
afrispeechdialogabenchmarkdatasetforspontaneousenglishconversationsinhealthcareandbeyond/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e9f8112311f9ef5fb8dcf35302477e66bf1f043b6e938bf054f028f84ec54ec4
3
+ size 477775
agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/f2937cd3-06b1-4c49-8aaf-ae705332900f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58305b84b2957e1c12096d450fd1f232a67a21a0a04fb185f8f9107375e5018c
3
+ size 122663
agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/f2937cd3-06b1-4c49-8aaf-ae705332900f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3a2d4f23d9a596ef9110581bfb687fb13eae98b347f4dec5fe1b34c816b21b6
3
+ size 150467
agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/f2937cd3-06b1-4c49-8aaf-ae705332900f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f377a4b4aed601e96e1e8d97b3a3b40f7acaa0a7f9ec745d73ece9b4637eaa78
3
+ size 3381967
agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/full.md ADDED
@@ -0,0 +1,690 @@
 
 
 
 
 
1
+ # AgentMove: A Large Language Model based Agentic Framework for Zero-shot Next Location Prediction
2
+
3
+ Jie Feng, Yuwei Du, Jie Zhao, Yong Li
4
+
5
+ Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
6
+
7
+ {fengjie, liyong07}@tsinghua.edu.cn
8
+
9
+ # Abstract
10
+
11
+ Next location prediction plays a crucial role in various real-world applications. Recently, due to the limitation of existing deep learning methods, attempts have been made to apply large language models (LLMs) to zero-shot next location prediction task. However, they directly generate the final output using LLMs without systematic design, which limits the potential of LLMs to uncover complex mobility patterns and underestimates their extensive reserve of global geospatial knowledge. In this paper, we introduce AgentMove, a systematic agentic prediction framework to achieve generalized next location prediction. In AgentMove, we first decompose the mobility prediction task and design specific modules to complete them, including spatial-temporal memory for individual mobility pattern mining, world knowledge generator for modeling the effects of urban structure and collective knowledge extractor for capturing the shared patterns among population. Finally, we combine the results of three modules and conduct a reasoning step to generate the final predictions. Extensive experiments utilizing mobility data from two distinct sources reveal that AgentMove surpasses the leading baseline by $3.33\%$ to $8.57\%$ across 8 out of 12 metrics and it shows robust predictions with various LLMs as base and also less geographical bias across cities. Our codes are available via https://github.com/tsinghua-fib-lab/AgentMove.
12
+
13
+ # 1 Introduction
14
+
15
+ Mobility prediction is of great importance in many real-world scenarios, e.g., recommending travel services, pre-activating mobile applications for potential usage, seamless switching of cellular network signals and efficient traffic management. Next location prediction is one of the most important tasks in human mobility prediction. In recent years, deep learning based models (Liu et al., 2016; Wu et al., 2017; Feng et al., 2018; Yang et al., 2020, 2022) have been widely applied and have achieved promising results due to their ability to capture high-order transition dynamics and mine shared mobility patterns among users. However, existing approaches have several key drawbacks. First, the success of deep learning models relies on the collection of large amounts of private mobility data. Second, the trained models are challenging to apply in zero-shot mobility prediction settings. Finally, the prediction accuracy remains limited due to the constrained sequential modelling capability of smaller deep learning models and a lack of deep understanding of commonsense in human daily life and urban structures.
18
+
19
+ Recently, large language models (LLMs) have made significant progress, achieving advanced results that far surpass previous methods in areas such as dialogue-based role-playing, code generation and testing, and mathematics problem solving. In the field of spatial-temporal data mining, researchers are exploring the potential of applying LLMs to various real-world tasks, including time series forecasting (Gruver et al., 2024; Li et al., 2024b), travel planning (Xie et al., 2024; Li et al., 2024a), and trajectory analysis (Luo et al., 2024; Zhang et al., 2023; Du et al., 2024). Furthermore, several recent works (Wang et al., 2023; Beneduce et al., 2024) investigate the feasibility of using LLMs as the base model for mobility prediction, addressing the limitations of deep learning based models and achieving promising results. These works typically convert trajectories into language-based sentences and leverage the powerful sequential modelling capacities of LLMs to directly generate the mobility predictions. However, due to the lack of a systematic design throughout the entire process, they overlook crucial components of human mobility modeling, resulting in limited performance. In summary, these methods fail to effectively capture complex individual mobility patterns, neglect to model the effects of urban structure, and do not discover the shared mobility patterns among populations.
20
+
21
+ ![](images/77ae91f853900778d6e42410c98ed2e0272e6f2b904257f9183a0cecb44fec4b.jpg)
22
+ Figure 1: The framework of AgentMove, including three key components: spatial temporal memory unit for capturing individual mobility pattern, world knowledge generator for multi-level urban structure, and collective knowledge extractor for extracting shared mobility patterns among users.
23
+
24
25
+
26
+ In this paper, we propose AgentMove, a systematic agentic framework for generalized mobility prediction. By integrating domain knowledge of human mobility, we implement the core components of the general agentic framework (Wang et al., 2024; Xi et al., 2023), including the planning module, memory module, world knowledge module, external tool module and reasoning module. For the planning module in AgentMove, we introduce a manually designed mobility prediction task decomposition module that considers the most important factors influencing mobility prediction. This decomposition generates three sub-tasks: individual mobility pattern mining, shared mobility pattern discovery and urban structure modelling. First, we implement a spatial-temporal memory module for individual mobility pattern mining. This module contains three submodules—short-term memory, long-term memory and user profiles—to capture the multi-level mobility patterns of individuals. Compared to pure LLM methods, the memory module enables AgentMove to retain past mobility history and efficiently learn from experience. Second, we design a world knowledge generator to explicitly extract inherent geospatial knowledge from LLMs, aiding in modelling the effects of multi-scale urban structures on human mobility, particularly in relation to the exploration behavior of human mobility. Third, we equip AgentMove with the capability to discover shared mobility patterns from various user trajectories through a collective knowledge extractor. This extractor utilizes NetworkX as an external tool to organize trajectories into a global location transition graph and then extract important neighboring locations for prediction. Finally, we combine the results from all the modules and perform a final reasoning step to generate the predictions. In summary, our contributions are as follows,
29
+
30
+ - To the best of our knowledge, this is the first attempt to apply LLM-based agentic framework to the field of mobility prediction. We build an effective mobility prediction framework by incorporating the crucial characteristics of human mobility into the design of core components.
31
+ - In AgentMove, we design a spatial-temporal memory module for individual mobility pattern mining, a world knowledge generator for modeling effects of urban structures, and a graph based collective knowledge extractor for discovering the shared mobility patterns among populations.
32
+ - Extensive experiments on mobility trajectories from two sources in 12 cities demonstrate the effectiveness of the proposed AgentMove, which outperforms the best baseline with performance improvements ranging from $3.33\%$ to $8.57\%$ in most cases. Additionally, AgentMove shows superior adaptability to different LLMs, as well as greater stability and reduced bias in prediction results across various cities worldwide.
33
+
34
+ # 2 Preliminaries
35
+
36
+ We define the mobility prediction task and related concepts for use in the following section.
37
+
38
+ ![](images/1ae9c7642d94ad93f27daacc6f95f3a615a6cdaeca579682dabf817777cfe15f.jpg)
39
+ Figure 2: Deep learning and LLM-based mobility predictors work in different ways. Deep learning models need to learn from training data for specific regions, while LLMs predict directly using zero-shot reasoning with its world knowledge.
40
+
41
+ Definition 1 (Location) A location point $p \in P$ is represented as a tuple $\langle id, \text{cate}, \text{lon}, \text{lat}, \text{addr} \rangle$ , where $id$ is the unique identifier, $\text{cate}$ is the category (e.g., restaurant), $\text{lon}$ and $\text{lat}$ are the coordinates of the location, $\text{addr}$ is the text address of location.
42
+
43
+ Definition 2 (User Trajectory) A trajectory of user $u \in U$ is represented as $T_{u} = \{(p_{1}, t_{1}), (p_{2}, t_{2}), \ldots, (p_{n}, t_{n})\}$ , where $p_{i} \in P$ is the $i$ -th location visited by the user and $t_{i}$ is the timestamp of the visit.
44
+
45
+ Definition 3 (Contextual Stays) Contextual stays of user $u$ is defined as the most recent sub-sequence in trajectory: $\mathcal{C}_u = \{(p_{n - k}, t_{n - k}), \ldots, (p_{n - 1}, t_{n - 1}), (p_n, t_n)\}$ ,
46
+
47
+ which captures the user's short-term mobility patterns. $k$ is the window size of contextual stays.
48
+
49
+ Definition 4 (Historical Stays) Historical stays of user $u$ is defined as the subsequence before contextual stays: $\mathcal{H}_u = \{(p_1, t_1), (p_2, t_2), \ldots, (p_{n-k-1}, t_{n-k-1})\}$ , which captures the user's long-term mobility patterns.
50
+
51
+ Given the historical movement data $\mathcal{C}_u$ , $\mathcal{H}_u$ as well as available external knowledge $\mathcal{K}$ (e.g., worldwide geospatial information), the objective is to predict the next location $p_{n+1}$ that user $u$ will visit. Formally, this paper aims to learn a mapping function $f$ :
52
+
53
+ $$
54
+ f: \left(\mathcal{C}_{u}, \mathcal{H}_{u}, \mathcal{K}\right) \rightarrow p_{n+1}. \tag{1}
55
+ $$
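As a concrete reading of Definitions 3 and 4 and the mapping in Equation (1), a minimal sketch (with illustrative field names, not the authors' code) of splitting a trajectory into historical and contextual stays:

```python
# Minimal sketch of Definitions 3-4: split a trajectory into historical and
# contextual stays given a context window size k. Field names are illustrative.

def split_trajectory(trajectory, k):
    """trajectory: list of (location_id, timestamp) tuples, ordered by time."""
    contextual = trajectory[-k:]   # most recent k stays (short-term patterns)
    historical = trajectory[:-k]   # everything before them (long-term patterns)
    return historical, contextual

traj = [("p1", "2013-04-01 09:00"), ("p2", "2013-04-01 12:30"),
        ("p3", "2013-04-02 08:50"), ("p4", "2013-04-02 19:10")]
hist, ctx = split_trajectory(traj, k=2)
# hist -> first two stays, ctx -> last two stays; the target is the (unseen) next stay.
```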
56
+
57
+ Figure 2 illustrates the differences between the deep learning based paradigm and the LLM based paradigm in the mobility prediction task. A deep learning model needs to collect training data before conducting the prediction task, which means it cannot be used directly in the zero-shot scenario. LLM based methods can be applied directly to any scenario after a careful 'format conversion' (known as prompt engineering). While LLM based methods can be adapted easily to new scenarios, their effectiveness may not improve as a scenario accumulates more data. In that case, deep learning models trained on more data can achieve better performance where LLM based methods fail to improve. In this paper, we propose the LLM based agent solution AgentMove for the mobility prediction task, which enables the LLM based mobility predictor to keep learning and improving.
60
+
61
+ # 3 Methods
62
+
63
+ # 3.1 Overview
64
+
65
+ As shown in Figure 1, AgentMove comprises five core components: task decomposition module, spatial-temporal memory module, world knowledge generator, collective knowledge extractor and the final reasoning module. Serving as the high-level planning module, the task decomposition module is designed to break down the overall mobility prediction task into subtasks—personalized mobility pattern mining, collective mobility pattern discovery and modelling the effects of urban structures—by considering the crucial factors influencing mobility. The detailed design of the other components is introduced as follows.
66
+
67
+ # 3.2 Spatial-temporal Memory
68
+
69
+ The spatial-temporal memory module is designed to effectively capture, store and leverage mobility patterns, providing crucial insights for the personalized and multi-scale periodicity behavior modelling in mobility prediction. Inspired by the memory design principles in general LLM-based agents (Zhang et al., 2024), our spatial-temporal memory functions through three essential processes: memory organization, memory writing, and memory reading. The whole framework of spatial-temporal memory module is presented in Figure 3.
70
+
71
+ # 3.2.1 Memory Organization
72
+
73
+ The spatial-temporal memory is structured into three components to capture the multifaceted nature of user mobility patterns: User Profile Unit. This unit provides a summary description of the user's mobility behavior as the user's mobility profile, which offers deeper insights into when and why the user visits certain locations. The user profile is dynamically generated based on the current long-term memory introduced in the following part, allowing AgentMove to adapt to evolving user preferences;
74
+
75
+ ![](images/d8c1e8513572661594930980d18fb21954eeabf58e53a8ca8c03eabf5febb36e.jpg)
76
+ Figure 3: Illustration of spatial-temporal memory.
77
+
78
+ Long-term Memory Unit. This unit retains users' long-term mobility patterns, capturing overarching trends and recurring sequences in their movement history. It functions similarly to how LLMs store long-term dependencies in textual data; Short-term Memory Unit. This unit focuses on users' recent mobility patterns, providing dynamic updates that reflect the latest movements and short-term variations.
79
+
80
+ All users' memories are stored in a central memory pool, organized as key-value pairs. Each key corresponds to a unique user identifier, and the value consists of the long-term memory, short-term memory, and user profile info. This organization ensures a comprehensive extraction and storage of mobility data, enabling efficient retrieval and utilization for mobility prediction.
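A minimal sketch of such a memory pool, assuming a plain in-memory dictionary keyed by user ID; the field names are illustrative rather than the paper's exact schema:

```python
# Minimal sketch of the central memory pool: one entry per user, holding the
# three memory units described above. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class UserMemory:
    long_term: dict = field(default_factory=dict)   # e.g. visit counts, transition matrix
    short_term: dict = field(default_factory=dict)  # e.g. recent visit sequence
    profile: str = ""                               # LLM-generated mobility summary

memory_pool: dict[str, UserMemory] = {}

def get_memory(user_id: str) -> UserMemory:
    """Fetch (or lazily create) the memory entry for a user."""
    return memory_pool.setdefault(user_id, UserMemory())
```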
81
+
82
+ # 3.2.2 Memory Writing
83
+
84
+ Writing to the memory involves the extraction and structured storage of spatial-temporal patterns hidden in user's trajectories. This process consists of two main steps:
85
+
86
+ Long-term Memory Writing. Given the historical stays $\mathcal{H}_u$ , this module extracts long-term spatial-temporal information of user $u \in U$ , including: 1) location to category mapping. Associating visited locations with their respective categories. 2) top- $k$ active times and locations. Identifying the most active time periods and the most frequently visited locations. 3) location visit frequency. Recording how often various locations are visited. 4) transition matrix. A matrix that represents the transition probabilities between locations.
87
+
88
+ Short-term Memory Writing. Given the contextual stay $\mathcal{C}_u$ , this module extracts fine-grained short-term spatial-temporal information of user $u \in U$ , including: 1) time sequence of recent visits. Documenting the sequence of recent visit times. 2) visit frequency of different locations. Tracking how frequently different locations are visited in the short term. 3) details of the last visit. Recording specific details about the latest location visit.
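A minimal sketch of the long-term statistics listed above, computed from stays of the form (location_id, category, ISO timestamp); the helper name and data layout are assumptions for illustration:

```python
# Minimal sketch of long-term memory writing: derive the statistics listed
# above from historical stays of the form (location_id, category, timestamp).
from collections import Counter, defaultdict
from datetime import datetime

def write_long_term_memory(stays, top_k=3):
    loc2cate = {loc: cate for loc, cate, _ in stays}                  # location -> category
    visit_freq = Counter(loc for loc, _, _ in stays)                  # visit frequency
    hour_freq = Counter(datetime.fromisoformat(t).hour for _, _, t in stays)
    transitions = defaultdict(int)                                    # 1-hop transition counts
    for (a, _, _), (b, _, _) in zip(stays, stays[1:]):
        transitions[(a, b)] += 1
    return {
        "loc2cate": loc2cate,
        "top_locations": visit_freq.most_common(top_k),
        "top_hours": hour_freq.most_common(top_k),
        "visit_freq": dict(visit_freq),
        "transitions": dict(transitions),
    }
```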
91
+
92
+ By systematically organizing and storing this information extracted from the trajectories, AgentMove can easily access both long-term and short-term mobility patterns. This structured approach is crucial for enhancing the accuracy of next location predictions.
93
+
94
+ # 3.2.3 Memory Reading
95
+
96
+ The memory reading process involves generating spatial-temporal context relevant prompts from the structured memory to enhance AgentMove's predictive capabilities. This process consists of three key steps:
97
+
98
+ User Profile Prompt Generation. Utilizing the long-term memory, AgentMove constructs user profile prompts that encapsulate the intrinsic movement patterns and habitual behaviors of the user. These prompts include summaries of peak activity times, preferred locations, and temporal-spatial associations, providing a comprehensive mobility profile of user.
99
+
100
+ Long-term Memory Prompt Generation. Also based on the long-term memory, AgentMove generates prompts by summarizing the user's general mobility trends from the long-term view. These prompts include details on the most active times, frequently visited locations, and the relationships between these factors. This helps the LLM understand the user's regular movement patterns.
101
+
102
+ Short-term Memory Prompt Generation. AgentMove creates prompts from the short-term memory to reflect the user's recent mobility patterns and contextual information. These prompts cover recent visit sequences, current visit frequencies, and specifics of the latest visits, which ensures the LLM efficiently adapts to recent changes in the user's behavior.
103
+
104
+ Finally, these memory-based prompts are consolidated into a cohesive spatial-temporal summary of the original trajectory, which is then integrated as the first part of AgentMove's prompts. This spatial-temporal summary enhances the LLM's ability to engage in more logical and efficient reasoning, leading to more precise mobility predictions.
105
+
106
+ # 3.3 World Knowledge Generator
107
+
108
+ Numerous studies (Jiang et al., 2016) indicate that individual movement typically encompasses two types of behaviors: returning and exploring. As introduced before, the return behavior has been well-captured by the spatial-temporal memory module. In this section, we introduce the world knowledge generator, which extracts geospatial knowledge from LLMs and constructs a multi-scale urban structure to enable the modelling of exploration behavior in mobility. To extract geospatial knowledge effectively, we propose aligning the knowledge of LLMs with the urban space of the trajectory via text addresses. Once the spaces are aligned, we explicitly prompt the LLMs to generate potential candidate places for exploration from the perspective of the multi-scale urban structure.
109
+
110
+ # 3.3.1 Alignment via Address
111
+
112
+ Many existing works (Feng et al., 2018; Luo et al., 2021; Lin et al., 2021; Cui et al., 2021; Qin et al., 2022; Hong et al., 2023) on mobility prediction usually represent locations directly using latitude and longitude coordinates or discrete spatial area IDs. While this approach facilitates the easy construction of deep learning-based spatial encodings, it is not suitable for LLMs. Since LLMs are trained on large-scale human-generated text, they, like humans, are not inherently adept at understanding precise coordinates (Gurnee and Tegmark, 2023) or discrete area IDs. Thus, we propose to utilize text addresses, which humans are familiar with, to describe the coarse location of each trajectory point. While a text address is not as precise as coordinates, it is more natural and easier to align with the existing spatial knowledge in LLMs.
113
+
114
+ Thus, we adopt the address search service from OpenStreetMap to build an address for each point in the trajectory. Due to cultural and institutional differences, address formats vary greatly across countries. To address this, we leverage the common-sense knowledge of LLMs to extract unified structured address information from the original address information. LLMs can easily pinpoint a user's current and past locations, laying a solid foundation for subsequent modeling.
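A minimal sketch of this address lookup using OpenStreetMap's public Nominatim reverse-geocoding endpoint (shown as an illustration; the paper's actual service configuration may differ):

```python
# Minimal sketch of reverse-geocoding a trajectory point into a text address
# via OpenStreetMap's Nominatim service (public endpoint; illustration only,
# not the paper's exact pipeline).
import requests

def reverse_geocode(lat: float, lon: float) -> str:
    resp = requests.get(
        "https://nominatim.openstreetmap.org/reverse",
        params={"lat": lat, "lon": lon, "format": "jsonv2"},
        headers={"User-Agent": "agentmove-demo"},  # a User-Agent is required by the usage policy
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("display_name", "")

# The returned free-form address can then be normalized by the LLM into a
# structured form (district / block / street / POI name) as described above.
```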
115
+
116
+ # 3.3.2 Multi-scale Urban Structure
117
+
118
+ Based on the real structured address information, we design prompts to motivate LLMs to generate multi-scale potential places which may be explored by the user in the future. We introduce a multi-scale generation mechanism to help LLMs reduce hallucination and improve the accuracy and usability of the generated places. The multi-scale location information covers four levels: district, block, street and POI name. We first ask LLMs to generate the potential districts for the future. Then, based on these districts and the past blocks in the trajectory, we generate the potential blocks, and so on. Finally, we obtain potential location information at different levels as exploration candidates for the user.
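A minimal sketch of this coarse-to-fine generation loop; query_llm is a hypothetical helper wrapping the base LLM, and the prompt wording is only indicative:

```python
# Minimal sketch of the coarse-to-fine generation loop: candidates at each
# spatial level condition the next, narrower level. `query_llm` is a
# hypothetical helper wrapping the base LLM call.

LEVELS = ["district", "block", "street", "POI name"]

def generate_exploration_candidates(structured_addresses, query_llm, n=5):
    """structured_addresses: past visits as dicts with keys matching LEVELS."""
    candidates = {}
    for level in LEVELS:
        prompt = (
            f"Given the user's past visits {structured_addresses} and the already "
            f"predicted candidates {candidates}, list {n} plausible {level}s the "
            f"user may explore next. Answer with a comma-separated list."
        )
        candidates[level] = query_llm(prompt).split(",")
    return candidates
```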
121
+
122
+ # 3.4 Collective Knowledge Extractor
123
+
124
+ In the previous two sections, we introduced the spatial-temporal memory module and the world knowledge generator for individual-level mobility modelling. Here, we introduce the collective knowledge extractor, which captures shared mobility patterns among users to further enhance the mobility predictions. First, we construct a global location transition graph using NetworkX ${}^{2}$ by aggregating the location transitions from various users. We then employ an LLM to perform simple reasoning on the graph, utilizing functions in NetworkX as tools to generate potential locations visited by other users with similar mobility patterns.
125
+
126
+ # 3.4.1 Building Location Transition Graph
127
+
128
+ In the location graph, each node is a location ID with various attributes, e.g., address information and the function of the location. An edge between nodes is constructed by considering the 1-hop transitions between adjacent trajectory points in each trajectory. Edges are weighted and undirected. Based on this definition, we use NetworkX to build the graph from scratch and update it while inferring trajectories for various users. If any historical trajectory data are available, e.g., the training data used by the deep learning based models, the location graph can be initialized from them.
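A minimal sketch of building such a graph with NetworkX, under the stated definition (weighted, undirected, 1-hop transitions); variable names are illustrative:

```python
# Minimal sketch of the global location transition graph: undirected, with
# edge weights counting 1-hop transitions aggregated over all trajectories.
import networkx as nx

def build_transition_graph(trajectories, attrs=None):
    """trajectories: dict user_id -> list of location_ids (time-ordered)."""
    g = nx.Graph()
    for locs in trajectories.values():
        for a, b in zip(locs, locs[1:]):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    if attrs:  # optional node attributes, e.g. address or venue function
        nx.set_node_attributes(g, attrs)
    return g
```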
129
+
130
+ # 3.4.2 Reasoning on Graph
131
+
132
+ After obtaining the location graph, we can utilize the LLM to perform reasoning on the whole graph via NetworkX functions used as tools. The most naive strategy is to query the k-hop neighbors of the current location. When there are too many neighbors, the LLM needs to filter the most promising ones by considering the attributes of each node. Furthermore, we can extend the query nodes to the last n locations and generate the most promising candidates from all of their neighbors. In this way, we obtain the most relevant locations that have been visited by users with similar mobility patterns.
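A minimal sketch of the neighbor query that precedes the LLM filtering step, again using NetworkX; the ranking by edge weight is our simplification of the attribute-based filtering described above:

```python
# Minimal sketch of the neighbor query: collect the k-hop neighbors of the
# last n visited locations, ranked by transition edge weight.
import networkx as nx

def candidate_neighbors(g: nx.Graph, recent_locations, k_hop=1, top_m=10):
    scored = {}
    for loc in recent_locations:
        if loc not in g:
            continue
        ego = nx.ego_graph(g, loc, radius=k_hop)
        for nbr in ego.nodes:
            if nbr in recent_locations:
                continue
            w = g[loc][nbr]["weight"] if g.has_edge(loc, nbr) else 1
            scored[nbr] = scored.get(nbr, 0) + w
    return sorted(scored, key=scored.get, reverse=True)[:top_m]
```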
135
+
136
+ # 3.5 Summarization and Prediction
137
+
138
+ Finally, we design prompts that employ the LLM to analyze and summarize the information from the different views and perform a final reasoning step to generate the prediction with reasons. The prompts for output format requirements are also placed here to ensure that the output format meets the requirements as closely as possible. Detailed prompts can be found in the appendix.
139
+
140
+ # 4 Evaluation
141
+
142
+ # 4.1 Settings
143
+
144
+ # 4.1.1 Datasets
145
+
146
+ We use the global Foursquare check-in data (Yang et al., 2016) and the recently released public ISP GPS trajectory data (Feng et al., 2019) to conduct the experiments. The Foursquare data contains check-ins from 415 cities, covering about 18 months from April 2012 to September 2013. The ISP GPS trajectory data comes from mobile network logs in Shanghai with 325,215 records, covering April 19 to April 26, 2016. Compared with the Foursquare data, the ISP data is much denser and was open-sourced in June $2024^{3}$, which is beyond the training period of all the LLMs used in the experiments. This ensures that the evaluation results are not affected by potential data leakage.
147
+
148
+ To evaluate the general mobility prediction ability of AgentMove, we select 12 cities from the Foursquare dataset and the entire ISP trajectory dataset to conduct the experiments. We follow the preprocessing procedure of (Hong et al., 2023; Feng et al., 2019) to process the trajectory data. For the Foursquare check-in data, we divide each trajectory dataset into training, validation, and test sets in a ratio of 7:1:2. Since the ISP data spans only 7 days, we split the whole dataset into training, validation, and test sets in a ratio of 4:1:5 to preserve enough test data. A detailed description of the preprocessing can be found in the appendix. We follow the data license in the original papers and use these trajectory data only for academic purposes.
149
+
150
+ We select Tokyo, Moscow and SaoPaulo, which have the largest amounts of Foursquare check-in data, together with the ISP data from Shanghai, for the main analysis in the experiments; results for all 12 cities are discussed in the final section of the experiments. We divide each trajectory dataset into training, validation, and test sets. The training and validation sets are only used to train the deep learning models, and the resulting models are compared with the LLM-based methods on the test set. Due to the cost of calling the various APIs, e.g., Llama3.1-405B, we randomly sample 200 instances from the test set of each city to calculate the performance in the experiments.
151
+
152
+ # 4.1.2 Baselines
153
+
154
+ We compare the proposed model with the following baselines: FPMC (Rendle et al., 2010), five deep learning models (RNN (Feng et al., 2018), DeepMove (Feng et al., 2018), LSTPM (Sun et al., 2020), GETNext (Yang et al., 2022), STHGCN (Yan et al., 2023)) and three LLM-based methods (LLM-Mob (Wang et al., 2023), LLM-ZS (Beneduce et al., 2024), LLM-Move (Feng et al., 2024c)). We use the widely used Accuracy@1, Accuracy@5, and NDCG@5 as the main evaluation metrics (Sun et al., 2020; Luca et al., 2021) in the experiments.
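For reference, a minimal sketch of how these metrics can be computed for a single test sample (standard definitions; with one relevant item, NDCG@k reduces to a reciprocal-log-rank form):

```python
# Minimal sketch of the evaluation metrics: Accuracy@k and NDCG@k for a single
# ranked prediction list against the ground-truth next location.
import math

def acc_at_k(ranked_preds, target, k):
    return 1.0 if target in ranked_preds[:k] else 0.0

def ndcg_at_k(ranked_preds, target, k):
    # With a single relevant item, the ideal DCG is 1, so NDCG reduces to
    # 1 / log2(rank + 1) when the target appears in the top-k, else 0.
    for rank, pred in enumerate(ranked_preds[:k], start=1):
        if pred == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

# Dataset-level scores are the mean of these values over all test samples.
```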
155
+
156
+ # 4.1.3 Implementation
157
+
158
+ We use LibCity (Jiang et al., 2023) to implement FPMC, RNN, DeepMove and LSTPM. We use the official codes from the authors to implement GETNext $^{4}$ and STHGCN $^{5}$ . We follow the default parameter settings of these models in the library and the official codes for training and inference. For LLMs, we use the OpenAI API $^{6}$ to access GPT4omini, and DeepInfra $^{7}$ and SiliconFlow $^{8}$ to access other open-source LLMs. Detailed parameter settings for these baselines can be found in the appendix.
159
+
160
+ # 4.2 Main Results
161
+
162
+ In this section, we compare AgentMove with 9 baselines in 4 cities in Table 1. In the experiments, we use GPT4omini as the default base LLM for all LLM-based methods.
163
+
164
+ <sup>4</sup>https://github.com/songyangme/GETNext
165
+ <sup>5</sup>https://github.com/ant-research/Spatio-Temporal-Hypergraph-Model
166
+ <sup>6</sup>https://platform.openai.com/
167
+ <sup>7</sup>https://deepinfra.com/
168
+ <sup>8</sup>https://siliconflow.cn/models
169
+
170
+ <table><tr><td rowspan="2">Model</td><td colspan="3">FSQ@Tokyo</td><td colspan="3">FSQ@saoPaulo</td><td colspan="3">FSQ@Moscow</td><td colspan="3">ISP@Shanghai</td></tr><tr><td>Acc@1</td><td>Acc@5</td><td>NDCG@5</td><td>Acc@1</td><td>Acc@5</td><td>NDCG@5</td><td>Acc@1</td><td>Acc@5</td><td>NDCG@5</td><td>Acc@1</td><td>Acc@5</td><td>NDCG@5</td></tr><tr><td>FPMC</td><td>0.060</td><td>0.165</td><td>0.121</td><td>0.045</td><td>0.085</td><td>0.066</td><td>0.020</td><td>0.065</td><td>0.043</td><td>0.13</td><td>0.355</td><td>0.249</td></tr><tr><td>RNN</td><td>0.105</td><td>0.240</td><td>0.176</td><td>0.095</td><td>0.230</td><td>0.169</td><td>0.090</td><td>0.185</td><td>0.140</td><td>0.065</td><td>0.175</td><td>0.123</td></tr><tr><td>DeepMove</td><td>0.175</td><td>0.320</td><td>0.251</td><td>0.150</td><td>0.310</td><td>0.236</td><td>0.165</td><td>0.335</td><td>0.258</td><td>0.175</td><td>0.320</td><td>0.251</td></tr><tr><td>LSTPM</td><td>0.145</td><td>0.280</td><td>0.218</td><td>0.190</td><td>0.365</td><td>0.281</td><td>0.140</td><td>0.255</td><td>0.196</td><td>0.095</td><td>0.17</td><td>0.135</td></tr><tr><td>GETNext</td><td>0.205</td><td>0.450</td><td>0.317</td><td>0.165</td><td>0.375</td><td>0.258</td><td>0.175</td><td>0.380</td><td>0.269</td><td>0.115</td><td>0.260</td><td>0.178</td></tr><tr><td>STHGCN</td><td>0.198</td><td>0.430</td><td>0.300</td><td>0.175</td><td>0.398</td><td>0.299</td><td>0.180</td><td>0.372</td><td>0.265</td><td>0.125</td><td>0.277</td><td>0.195</td></tr><tr><td>LLM-Mob</td><td>0.175</td><td>0.370</td><td>0.277</td><td>0.140</td><td>0.275</td><td>0.210</td><td>0.080</td><td>0.175</td><td>0.129</td><td>0.100</td><td>0.345</td><td>0.221</td></tr><tr><td>LLM-ZS</td><td>0.175</td><td>0.410</td><td>0.299</td><td>0.165</td><td>0.385</td><td>0.277</td><td>0.120</td><td>0.340</td><td>0.233</td><td>0.170</td><td>0.425</td><td>0.298</td></tr><tr><td>LLM-Move</td><td>0.145</td><td>0.285</td><td>0.243</td><td>0.220</td><td>0.355</td><td>0.325</td><td>0.155</td><td>0.270</td><td>0.226</td><td>0.140</td><td>0.410</td><td>0.308</td></tr><tr><td>AgentMove</td><td>0.185</td><td>0.465</td><td>0.331</td><td>0.230</td><td>0.415</td><td>0.326</td><td>0.160</td><td>0.365</td><td>0.265</td><td>0.190</td><td>0.450</td><td>0.329</td></tr><tr><td>vs Deep Learning</td><td>-9.76%</td><td>3.33%</td><td>4.42%</td><td>25.71%</td><td>4.27%</td><td>9.03%</td><td>-11.11%</td><td>-3.95%</td><td>-1.49%</td><td>8.57%</td><td>40.63%</td><td>31.08%</td></tr><tr><td>vs Best Baselines</td><td>-9.76%</td><td>3.33%</td><td>4.42%</td><td>4.55%</td><td>4.27%</td><td>0.31%</td><td>-11.11%</td><td>-3.95%</td><td>-1.49%</td><td>8.57%</td><td>5.88%</td><td>6.82%</td></tr></table>
171
+
172
+ As representative deep learning models, GETNext and STHGCN achieve the best or second-best results in 4 out of 12 metrics. Compared with the deep learning baselines, the best LLM-based baseline, LLM-Move, achieves better results than GETNext and STHGCN in 3 out of 12 metrics, which demonstrates the powerful sequential pattern discovery and reasoning ability of LLMs in modeling mobility. Note that the results of the LLM-based methods are zero-shot predictions, while the deep learning based methods rely on sufficient training with enough mobility data. Compared with these baselines, our proposed method AgentMove is the best method, achieving the best results in 8 out of 12 metrics across the 4 datasets. Although AgentMove falls slightly behind the best baseline, GETNext, in three metrics, two of them are very close. The results in Table 1 demonstrate the effectiveness of the proposed framework in stimulating the comprehensive ability of an LLM-based agentic framework for mobility prediction.
173
+
174
+ # 4.3 Ablation Study on Model Designs
175
+
176
+ In this section, we provide a more detailed analysis of the proposed method under varying model designs to further demonstrate its effectiveness.
177
+
178
+ We first conduct an ablation study to demonstrate the contribution of each component in AgentMove to its excellent performance; the results are presented in Table 2. We first discuss the impact of the three core components individually, as detailed in the top four lines of Table 2. Overall, all components contribute to performance improvement in most cases. However, the performance gains vary across different metrics. For example, while the memory design leads to better performance in Acc@1 in SaoPaulo, the performance in the other three metrics drops.
179
+
180
+ Table 1: The main results of baselines and AgentMove. GPT4omini is used as the base LLM for all the LLM-based methods in the table. Deep learning methods are first trained on the training set of each city and LLM-based models are directly evaluated on the test set with the zero-shot prediction settings.
181
+
182
+ <table><tr><td rowspan="2">Models</td><td colspan="3">FSQ@saoPaulo</td><td colspan="3">ISP@Shanghai</td></tr><tr><td>Acc@1</td><td>Acc@5</td><td>NDCG@5</td><td>Acc@1</td><td>Acc@5</td><td>NDCG@5</td></tr><tr><td>base</td><td>0.165</td><td>0.385</td><td>0.277</td><td>0.170</td><td>0.425</td><td>0.298</td></tr><tr><td>+STM</td><td>0.190</td><td>0.315</td><td>0.255</td><td>0.170</td><td>0.445</td><td>0.312</td></tr><tr><td>+WKG</td><td>0.175</td><td>0.365</td><td>0.269</td><td>0.155</td><td>0.390</td><td>0.276</td></tr><tr><td>+CKE</td><td>0.175</td><td>0.380</td><td>0.275</td><td>0.175</td><td>0.465</td><td>0.317</td></tr><tr><td>+STM+WKG</td><td>0.240</td><td>0.390</td><td>0.317</td><td>0.215</td><td>0.455</td><td>0.342</td></tr><tr><td>AgentMove</td><td>0.230</td><td>0.415</td><td>0.326</td><td>0.190</td><td>0.450</td><td>0.329</td></tr><tr><td>vs base</td><td>45.45%</td><td>7.79%</td><td>17.99%</td><td>11.76%</td><td>5.88%</td><td>10.30%</td></tr></table>
183
+
184
+ Table 2: Ablation studies of AgentMove. 'base' denotes the basic prompt, which is similar to the baseline LLM-ZS; '+STM' denotes adding the spatial-temporal memory; '+WKG' denotes adding the world knowledge generator; '+CKE' denotes adding the collective knowledge extractor.
185
+
186
+ The effects of combining the core components are shown in the last three lines of Table 2. In summary, compared with the base prompt design, the combination of the proposed designs introduces a $7\%$ to $45\%$ performance gain on all the datasets.
187
+
188
+ Besides, to demonstrate the effects of the World Knowledge Generator (WKG) in exploring new locations, we analyze whether our model explores more potential locations that are not present in the user's recent contextual stays after incorporating the WKG module. The results are presented in the Table 3. A higher percentage indicates that the model tends to revisit locations from the recent contextual stays, while a lower percentage suggests that the model explores more new locations. The results demonstrate that the WKG successfully encourages the model to explore new locations, which is particularly beneficial for improving performance.
189
+
190
+ <table><tr><td colspan="2">Location return rate</td><td>FSQ@saoPaulo↓</td><td>ISP@Shanghai↓</td></tr><tr><td rowspan="2">LLama3-8b</td><td>w/ WKG</td><td>94%</td><td>75.6%</td></tr><tr><td>w/o WKG</td><td>93%</td><td>87.8%</td></tr><tr><td rowspan="2">LLama3-70b</td><td>w/ WKG</td><td>87.5%</td><td>73.2%</td></tr><tr><td>w/o WKG</td><td>90%</td><td>85.4%</td></tr></table>
191
+
192
+ Table 3: Effectiveness of the world knowledge generator (WKG) for encouraging mobility exploration, measured by location return rates. The location return rate measures the tendency to revisit previously visited locations based on recent contextual stays.
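+ For clarity, a minimal sketch of how a location return rate of this kind could be computed, assuming it is the share of test cases whose top-1 predicted location already appears in the recent contextual stays; the exact definition used in the paper may differ.
+ 
+ ```python
+ def location_return_rate(top1_preds: list[str], context_stays: list[list[str]]) -> float:
+     """Share of test cases whose top-1 predicted location already occurs in the user's
+     recent contextual stays; lower values indicate more exploration of new locations."""
+     returned = sum(1 for pred, ctx in zip(top1_preds, context_stays) if pred in ctx)
+     return returned / len(top1_preds)
+ ```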
193
+
194
+ # 4.4 Geographical Bias and LLM Effects
195
+
196
+ LLMs are trained on online web text, which can be geographically biased (Manvi et al., 2024) around the world. We investigate the potential geographical bias in LLM-based mobility prediction methods and attempt to answer whether AgentMove can alleviate the geographical bias inherent in LLMs to some extent. Experiment results on 12 cities are presented in Figure 4.
197
+
198
+ In Figure 4(a), we find significant differences in the accuracy of the three LLM-based methods across cities. For instance, cities like Tokyo, Paris, and Sydney generally achieve higher accuracy, while cities like Cape Town and Nairobi see notably lower performance. This suggests the presence of geographical bias in trained LLMs. We also find that the proposed AgentMove performs best in most of the cities. Figure 4(b) provides a box-plot comparison of the Acc@5 of the three LLM-based methods across the 12 cities. The results demonstrate that AgentMove not only outperforms the other methods in terms of overall accuracy but also exhibits a smaller range of error. The performance of AgentMove is more consistent across different cities, suggesting a reduced impact of geographical bias thanks to the careful designs in it.
199
+
200
+ As the core foundation of AgentMove, the capabilities of the base LLM play a critical role in its performance. Thus, we evaluate the impact of different LLMs with varying sizes and structures in Figure 5 using FSQ@Tokyo. Figure 5(a) presents the impact of various 7B-scale LLMs with different training data and model structures. The results show that the proposed AgentMove exhibits the best adaptability across different LLMs. While LLM-Mob performs stably on all the 7B LLMs, its performance on Gemma2-9B is far worse than that of the other two methods. We then discuss the detailed impact of LLM size on AgentMove's performance across different data. Figure 5(b) reveals that larger models,
201
+
202
+ ![](images/fe926e9a494ae471c02fa9f2e5592b28e2441298b69bdcc17fa539e472460b09.jpg)
203
+ (a) Acc@1 of three LLM-based methods on 12 cities.
204
+
205
+ ![](images/74dbd45b37a2958df3f454dc6d72230486c32cf3b39623f9ffdb74535f4d8d60.jpg)
206
+ (b) Distribution of Acc@5 across 3 methods on 12 cities.
207
+
208
+ ![](images/014a46d43c6d802da8ab05ddf9f2f4d6c33a15fb4356712c900ab4f93a9f8f26.jpg)
209
+ Figure 4: Geospatial bias analysis of various methods in mobility prediction across 12 cities, where AgentMove outperforms most methods and exhibits lower geospatial bias.
210
+ Figure 5: The effects of LLM with varying sizes and sources on the prediction performance of three LLM based methods.
211
+
212
+ ![](images/3f865499978e5857e1c3976ee48389d7f4725feac1ac7d88c8eb6e9b8048bc2f.jpg)
213
+ (a) Effects of LLM types on three LLM-based methods.
214
+ (b) Effects of LLM size on three cities.
215
+
216
+ particularly Llama3.1-405B, generally deliver significant performance gains for AgentMove compared to smaller models like Llama3-8B across different cities. It is also observed that in Tokyo, Llama3.1-405B performs slightly worse than Llama3-70B. This suggests that while larger models often excel, their effectiveness may vary depending on the unique mobility patterns and characteristics of each city.
217
+
218
+ # 5 Related Work
219
+
220
+ # 5.1 Mobility Prediction with Deep Learning
221
+
222
+ Significant efforts have been made in mobility prediction using deep learning models, encompassing research from both sequential-based methods and graph-based methods. Traditional approaches typically employ Markov models (Rendle et al., 2010; Cheng et al., 2013) to predict the next visit by learning the transition probabilities between consecutive POIs. In contrast, sequential-based deep learning methods have been proposed to model the high-order movement patterns in trajectory data. These methods can be categorized into two types:
223
+
224
+ recurrent neural network (RNN) based works (Kong and Wu, 2018; Huang et al., 2019; Yang et al., 2020; Zhao et al., 2020; Feng et al., 2020) and attention-mechanism based works (Feng et al., 2018; Luo et al., 2021; Lin et al., 2021; Cui et al., 2021; Qin et al., 2022; Hong et al., 2023).
225
+
226
+ Despite their success, these methods primarily focus on extracting mobility patterns from an individual perspective, while overlooking the collaborative information available from other users' trajectories. To address this limitation, recent works (Rao et al., 2022; Yang et al., 2022) have explored the use of graph neural networks (GNNs) for their ability to model complex relationships. However, all these methods rely on collecting large volumes of private trajectory data. In contrast, our AgentMove leverages the world knowledge and sequential modeling abilities of LLMs to enable generalized, zero-shot mobility prediction.
227
+
228
+ # 5.2 Large Language Models and Agents
229
+
230
+ Due to the powerful language-based generalization and reasoning capabilities (Wei et al., 2022a), large language models (OpenAI, 2022; Touvron et al., 2023) have developed rapidly and have been widely applied in various tasks, including programming (Qian et al., 2024) and mathematics (Wei et al., 2022a). Recent studies (Gurnee and Tegmark, 2023; Manvi et al., 2023) have found that LLMs possess a significant amount of geographical knowledge about the world. Additionally, researchers also explore the potential of applying LLMs in spatial-temporal data modelling by directly converting domain-specific tasks into a language-based format, such as time series forecasting (Gruver et al., 2024), traffic prediction (Li et al., 2024b), trajectory mining (Wang et al., 2023; Beneduce et al., 2024), trip recommendation (Xie et al., 2024; Li et al., 2024a), traffic signal control (Lai et al., 2023; Feng et al., 2024b), comprehensive urban tasks (Feng et al., 2024a,b).
231
+
232
+ These early works highlight the potential of LLMs in spatial-temporal modelling. To effectively utilize the vast knowledge embedded in LLMs and stimulate their reasoning and planning abilities, various prompt techniques (Wei et al., 2022b; Kojima et al., 2022; Wang et al., 2022; Yao et al., 2024) have been proposed for solving naive text games and mathematical problems. However, for more complex real-life and domain-specific tasks, these simple prompt techniques alone are insufficient. Recently, LLM-based agents (Wang et al.,
233
+
234
+ 2024; Xi et al., 2023; Du et al., 2024) have been proposed to address this limitation by equipping LLMs with explicit memory, structured workflows, and external tools. In this work, we are the first to design an LLM-based agent specifically for the mobility prediction task. By incorporating explicit spatial-temporal memory and a workflow for geospatial and social structure mining, we successfully leverage the world knowledge of LLMs and their structured reasoning capabilities for mobility trajectory modelling.
235
+
236
+ # 6 Conclusion
237
+
238
+ In this paper, we propose AgentMove, a systematic agentic framework for generalized human mobility prediction applicable worldwide. We design a spatial-temporal memory module and a collective knowledge extractor to learn both individual mobility patterns and shared mobility patterns among users. Furthermore, we develop a world knowledge generator that utilizes text-based addresses to understand urban structures in a manner similar to humans. Extensive experiments on trajectories from 12 cities demonstrate the superiority and robustness of AgentMove for mobility prediction.
239
+
240
+ In the future, we plan to explore more effective ways to extract and leverage the vast world knowledge and common sense of LLMs for mobility modeling and trajectory mining. Additionally, we aim to extend the framework to other trajectory data mining tasks, such as trajectory classification and generation. We believe that LLM-based agents, like AgentMove, hold great potential and adaptability, paving the way for a new paradigm in spatial-temporal modeling alongside deep learning.
241
+
242
+ # 7 Acknowledgement
243
+
244
+ This work was supported in part by the National Natural Science Foundation of China under grant 62476152 and 62171260, in part by the China Postdoctoral Science Foundation under grant 2024M761670 and GZB20240384, in part by the Tsinghua University Shimu Scholar Program under grant 2023SM235.
245
+
246
+ # 8 Limitations
247
+
248
+ Here, we discuss the potential limitations of the current work and outline directions for future exploration.
249
+
250
+ Robustness and Hallucination Based on an LLM, the output of AgentMove is not fully controllable.
251
+
252
+ In this work, we define a simple output parser to extract the expected context from the LLM output, but it may occasionally fail. Due to the potential for hallucination in LLMs, the output of AgentMove may include false addresses that do not exist in the real world. While we can define a clear list of valid locations during experiments to verify this, doing so in real-world applications presents significant challenges.
253
+
254
+ High Cost The high cost of invoking the LLM API limited our experiments to 12 cities with a small test set. This cost will also pose a challenge for large-scale deployment in real-world scenarios. The reliance on LLMs does pose a significant limitation in terms of scalability of our method. With ongoing advancements (Liu et al., 2024; Qu et al., 2025) in the development of more efficient and scalable LLM alternatives—such as smaller LLMs, model pruning, and knowledge distillation—we are optimistic about the potential for rapidly decreasing computational costs while improving scalability.
255
+
256
+ Geospatial Bias Geographical bias has long been a challenging issue in LLMs (Manvi et al., 2024). While our proposed AgentMove incorporates specific design elements to mitigate some of these biases, it cannot completely eliminate them due to inherent limitations in LLMs. However, we believe that our current work represents a significant step forward in addressing geospatial bias in mobility prediction challenges. One promising direction for further reducing geographical bias could be the integration of more external knowledge during inference, and we are actively exploring this avenue in our future work.
257
+
258
+ # 9 Ethics Statement
259
+
260
+ All trajectory data used in the experiments come from publicly available open-source datasets (Yang et al., 2016; Feng et al., 2019). We do not attempt to extract any personal information from these datasets.
261
+
262
+ # References
263
+
264
+ Ciro Beneduce, Bruno Lepri, and Massimiliano Luca. 2024. Large language models are zero-shot next location predictors. arXiv preprint arXiv:2405.20962.
265
+ Chen Cheng, Haiqin Yang, Michael R Lyu, and Irwin King. 2013. Where you like to go next: Successive point-of-interest recommendation. In Twenty-Third international joint conference on Artificial Intelligence.
266
+
267
+ Qiang Cui, Chenrui Zhang, Yafeng Zhang, Jinping Wang, and Mingchen Cai. 2021. ST-PIL: Spatial-temporal periodic interest learning for next point-of-interest recommendation. In Proceedings of the 30th ACM International conference on information & knowledge management, pages 2960-2964.
268
+ Yuwei Du, Jie Feng, Jie Zhao, and Yong Li. 2024. Trajagent: An agent framework for unified trajectory modelling. arXiv preprint arXiv:2410.20445.
269
+ Jie Feng, Tianhui Liu, Yuwei Du, Siqi Guo, Yuming Lin, and Yong Li. 2024a. Citygpt: Empowering urban spatial cognition of large language models. arXiv preprint arXiv:2406.13948.
270
+ Jie Feng, Yong Li, Chao Zhang, Funing Sun, Fanchao Meng, Ang Guo, and Depeng Jin. 2018. Deepmove: Predicting human mobility with attentional recurrent networks. In Proceedings of the 2018 World Wide Web Conference.
271
+ Jie Feng, Can Rong, Funing Sun, Diansheng Guo, and Yong Li. 2020. Pmf: A privacy-preserving human mobility prediction framework via federated learning. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(1):1-21.
272
+ Jie Feng, Jun Zhang, Tianhui Liu, Xin Zhang, Tianjian Ouyang, Junbo Yan, Yuwei Du, Siqi Guo, and Yong Li. 2024b. Citybench: Evaluating the capabilities of large language model as world model. arXiv preprint arXiv:2406.13945.
273
+ Jie Feng, Mingyang Zhang, Huandong Wang, Zeyu Yang, Chao Zhang, Yong Li, and Depeng Jin. 2019. Dplink: User identity linkage via deep neural network from heterogeneous mobility data. In The World Wide Web Conference, pages 459-469. ACM.
274
+ Shanshan Feng, Haoming Lyu, Fan Li, Zhu Sun, and Caishun Chen. 2024c. Where to move next: Zero-shot generalization of llms for next poi recommendation. In 2024 IEEE Conference on Artificial Intelligence (CAI), pages 1530-1535. IEEE.
275
+ Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew G Wilson. 2024. Large language models are zero-shot time series forecasters. Advances in Neural Information Processing Systems, 36.
276
+ Wes Gurnee and Max Tegmark. 2023. Language models represent space and time. arXiv preprint arXiv:2310.02207.
277
+ Ye Hong, Yatao Zhang, Konrad Schindler, and Martin Raubal. 2023. Context-aware multi-head self-attentional neural network model for next location prediction. Transportation Research Part C: Emerging Technologies, 156:104315.
278
+ Liwei Huang, Yutao Ma, Shibo Wang, and Yanbo Liu. 2019. An attention-based spatiotemporal LSTM network for next poi recommendation. IEEE Transactions on Services Computing, 14(6):1585-1597.
279
+
280
+ Jiawei Jiang, Chengkai Han, Wenjun Jiang, Wayne Xin Zhao, and Jingyuan Wang. 2023. Libcity: A unified library towards efficient and comprehensive urban spatial-temporal prediction. arXiv preprint arXiv:2304.14343.
281
+ Shan Jiang, Yingxiang Yang, Siddharth Gupta, Daniele Veneziano, Shounak Athavale, and Marta C González. 2016. The timegeo modeling framework for urban mobility without travel surveys. Proceedings of the National Academy of Sciences, 113(37):E5370-E5378.
282
+ Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199-22213.
283
+ Dejiang Kong and Fei Wu. 2018. HST-LSTM: A hierarchical spatial-temporal long-short term memory network for location prediction. In *Ijcai*, volume 18, pages 2341-2347.
284
+ Siqi Lai, Zhao Xu, Weijia Zhang, Hao Liu, and Hui Xiong. 2023. Large language models as traffic signal control agents: Capacity and opportunity. arXiv preprint arXiv:2312.16044.
285
+ Peibo Li, Maarten de Rijke, Hao Xue, Shuang Ao, Yang Song, and Flora D Salim. 2024a. Large language models for next point-of-interest recommendation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1463-1472.
286
+ Zhonghang Li, Lianghao Xia, Jiabin Tang, Yong Xu, Lei Shi, Long Xia, Dawei Yin, and Chao Huang. 2024b. Urbangpt: Spatio-temporal large language models. arXiv preprint arXiv:2403.00813.
287
+ Yan Lin, Huaiyu Wan, Shengnan Guo, and Youfang Lin. 2021. Pre-training context and time aware location embeddings from spatial-temporal trajectories for user next location prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4241-4248.
288
+ Qiang Liu, Shu Wu, Liang Wang, and Tieniu Tan. 2016. Predicting the next location: A recurrent model with spatial and temporal contexts. In AAAI.
289
+ Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, et al. 2024. Mobilellm: Optimizing subbillion parameter language models for on-device use cases. arXiv preprint arXiv:2402.14905.
290
+ Massimiliano Luca, Gianni Barlacchi, Bruno Lepri, and Luca Pappalardo. 2021. A survey on deep learning for human mobility. ACM Computing Surveys (CSUR), 55(1):1-44.
291
+
292
+ Yingtao Luo, Qiang Liu, and Zhaocheng Liu. 2021. Stan: Spatio-temporal attention network for next location recommendation. In Proceedings of the web conference 2021, pages 2177-2185.
293
+ Yuxiao Luo, Zhongcai Cao, Xin Jin, Kang Liu, and Ling Yin. 2024. Deciphering human mobility: Inferring semantics of trajectories with large language models. arXiv preprint arXiv:2405.19850.
294
+ Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, and Stefano Ermon. 2024. Large language models are geographically biased. arXiv preprint arXiv:2402.02680.
295
+ Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, and Stefano Ermon. 2023. Geollm: Extracting geospatial knowledge from large language models. arXiv preprint arXiv:2310.06213.
296
+ OpenAI. 2022. Introducing chatgpt. https://openai.com/blog/chatgpt/.
297
+ Chen Qian, Zihao Xie, Yifei Wang, Wei Liu, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. 2024. Scaling large-language-model-based multi-agent collaboration. arXiv preprint arXiv:2406.07155.
298
+ Yanjun Qin, Yuchen Fang, Haiyong Luo, Fang Zhao, and Chenxing Wang. 2022. Next point-of-interest recommendation with auto-correlation enhanced multi-modal transformer network. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2612-2616.
299
+ Guanqiao Qu, Qiyuan Chen, Wei Wei, Zheng Lin, Xianhao Chen, and Kaibin Huang. 2025. Mobile edge intelligence for large language models: A contemporary survey. IEEE Communications Surveys & Tutorials.
300
+ Xuan Rao, Lisi Chen, Yong Liu, Shuo Shang, Bin Yao, and Peng Han. 2022. Graph-flashback network for next location recommendation. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1463-1471.
301
+ Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing personalized markov chains for next-basket recommendation. In Proceedings of the 19th international conference on World wide web, pages 811-820.
302
+ Ke Sun, Tieyun Qian, Tong Chen, Yile Liang, Quoc Viet Hung Nguyen, and Hongzhi Yin. 2020. Where to go next: Modeling long-and short-term user preferences for point-of-interest recommendation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 214-221.
303
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
304
+
305
+ Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
306
+ Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345.
307
+ Xinglei Wang, Meng Fang, Zichao Zeng, and Tao Cheng. 2023. Where would i go next? large language models as human mobility predictors. arXiv preprint arXiv:2308.15197.
308
+ Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
309
+ Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.
310
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.
311
+ Hao Wu, Ziyang Chen, Weiwei Sun, Baihua Zheng, and Wei Wang. 2017. Modeling trajectories with recurrent neural networks. IJCAI.
312
+ Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.
313
+ Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, and Yu Su. 2024. Travelplanner: A benchmark for real-world planning with language agents. arXiv preprint arXiv:2402.01622.
314
+ Xiaodong Yan, Tengwei Song, Yifeng Jiao, Jianshan He, Jiaotuan Wang, Ruopeng Li, and Wei Chu. 2023. Spatio-temporal hypergraph learning for next poi recommendation. In Proceedings of the 46th international ACM SIGIR conference on research and development in information retrieval, pages 403-412.
315
+ Dingqi Yang, Benjamin Fankhauser, Paolo Rosso, and Philippe Cudre-Mauroux. 2020. Location prediction over sparse user mobility traces using rnns. In Proceedings of the twenty-ninth international joint conference on artificial intelligence, pages 2184-2190.
316
+ Dingqi Yang, Daqing Zhang, and Bingqing Qu. 2016. Participatory cultural mapping based on collective behavior data in location-based social networks. ACM Transactions on Intelligent Systems and Technology (TIST), 7(3):1-23.
317
+
318
+ Song Yang, Jiamou Liu, and Kaiqi Zhao. 2022. Getnext: trajectory flow map enhanced transformer for next poi recommendation. In Proceedings of the 45th International ACM SIGIR Conference on research and development in information retrieval, pages 1144-1153.
319
+ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2024. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36.
320
+ Zeyu Zhang, Xiaohe Bo, Chen Ma, Rui Li, Xu Chen, Quanyu Dai, Jieming Zhu, Zhenhua Dong, and Ji-Rong Wen. 2024. A survey on the memory mechanism of large language model based agents. arXiv preprint arXiv:2404.13501.
321
+ Zheng Zhang, Hossein Amiri, Zhenke Liu, Andreas Züfle, and Liang Zhao. 2023. Large language models for spatial trajectory patterns mining. arXiv preprint arXiv:2310.04942.
322
+ Kangzhi Zhao, Yong Zhang, Hongzhi Yin, Jin Wang, Kai Zheng, Xiaofang Zhou, and Chunxiao Xing. 2020. Discovering subsequence patterns for next poi recommendation. In IJCAI, volume 2020, pages 3216-3222.
323
+
324
+ # 10 Appendix
325
+
326
+ # 10.1 Baselines
327
+
328
+ - FPMC (Rendle et al., 2010) It combines matrix factorization and Markov chain methods for sequential modeling.
329
+ - RNN (Feng et al., 2018) It is a simple RNN-based mobility prediction model that treats the mobility sequence as a general sequence.
330
+ - DeepMove (Feng et al., 2018) It contains an LSTM for capturing short-term sequential transitions and an attention unit for extracting long-term periodical patterns.
331
+ - LSTPM (Sun et al., 2020) It consists of a non-local network for long-term modeling and a geo-dilated RNN for short-term learning.
332
+ - GETNext (Yang et al., 2022) It uses a trajectory flow map to assist a transformer-based model in predicting the next location under cold-start settings.
333
+ - STHGCN (Yan et al., 2023) It designs a novel hypergraph transformer to capture higher-order relations between trajectories for prediction.
334
+ - LLM-Mob (Wang et al., 2023) It is the first work to apply LLM (GPT-3.5) to predict the next location.
335
+ - LLM-ZS (Beneduce et al., 2024) It defines simple prompts and tests more LLMs on the zero-shot mobility prediction task.
336
+ - LLM-Move (Feng et al., 2024c) It uses RAG to provide nearby POIs for the LLM to predict the next location more precisely.
337
+
338
+ # 10.2 Discussion about the usage of text-based locations
339
+
340
+ In most studies on mobility prediction, numerical representations, such as coordinates, are widely used. In this work, we incorporate text-based location information as the main part of the input. While geographic coordinates can precisely describe location information, they lack the semantic context necessary to activate the geospatial knowledge embedded in LLMs. As demonstrated in GeoLLM (Manvi et al., 2023), querying LLMs with raw coordinates alone is often ineffective for tasks like predicting population density. In contrast, text-based representations align naturally with LLMs' strengths in understanding and reasoning over natural language, allowing them to better leverage their pre-trained spatial knowledge. By converting coordinates into structured text addresses using OpenStreetMap and LLMs, our approach enriches trajectory points with meaningful geospatial context, such as landmarks and cultural relevance, which raw coordinates cannot provide.
341
+
342
+ <table><tr><td>City</td><td>Users</td><td>Traj.</td><td>Loc.</td><td>Avg. Traj.</td><td>Records</td></tr><tr><td>Tokyo</td><td>12464</td><td>112942</td><td>83190</td><td>9.06</td><td>1030105</td></tr><tr><td>SaoPaulo</td><td>11856</td><td>77120</td><td>78904</td><td>6.50</td><td>809198</td></tr><tr><td>Moscow</td><td>10501</td><td>100854</td><td>93599</td><td>9.60</td><td>950898</td></tr><tr><td>NewYork</td><td>15785</td><td>28502</td><td>41386</td><td>1.81</td><td>380247</td></tr><tr><td>Sydney</td><td>1720</td><td>4557</td><td>10523</td><td>2.65</td><td>54250</td></tr><tr><td>Paris</td><td>6903</td><td>7559</td><td>19837</td><td>1.09</td><td>111325</td></tr><tr><td>London</td><td>9724</td><td>14596</td><td>28687</td><td>1.50</td><td>188530</td></tr><tr><td>Beijing</td><td>1076</td><td>1847</td><td>5753</td><td>1.72</td><td>21030</td></tr><tr><td>Shanghai-FSQ</td><td>1272</td><td>3238</td><td>8014</td><td>2.54</td><td>33129</td></tr><tr><td>Shanghai-ISP</td><td>1762</td><td>2844</td><td>12576</td><td>1.61</td><td>325215</td></tr><tr><td>Cape Town</td><td>403</td><td>1234</td><td>2988</td><td>3.06</td><td>13303</td></tr><tr><td>Mumbai</td><td>1070</td><td>3070</td><td>7942</td><td>2.87</td><td>40592</td></tr><tr><td>Nairobi</td><td>356</td><td>2690</td><td>5807</td><td>7.55</td><td>28453</td></tr></table>
343
+
344
+ Table 4: Trajectory statistics of 12 cities around the world.
345
+
346
+ This approach strikes a practical balance between precision and contextual richness, optimizing LLMs for human mobility modeling. However, in the future, integrating precise numerical representations with text-based representations presents a promising research direction.
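+ As a rough illustration of the coordinate-to-address conversion described above, the sketch below queries the public Nominatim reverse-geocoding API and keeps a few address fields; the selected fields and the user-agent string are illustrative choices, not the paper's exact pipeline.
+ 
+ ```python
+ import requests
+ 
+ def reverse_geocode(lat: float, lon: float) -> dict:
+     """Query the Nominatim reverse-geocoding API and keep a few address fields."""
+     resp = requests.get(
+         "https://nominatim.openstreetmap.org/reverse",
+         params={"lat": lat, "lon": lon, "format": "jsonv2"},
+         headers={"User-Agent": "mobility-prediction-demo"},  # Nominatim requires a user agent
+         timeout=10,
+     )
+     address = resp.json().get("address", {})
+     # Illustrative subset of the address hierarchy used to describe a stay in text form.
+     keys = ["suburb", "neighbourhood", "road", "city", "state", "country"]
+     return {k: address[k] for k in keys if k in address}
+ ```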
347
+
348
+ # 10.3 Details of Data
349
+
350
+ Detailed information of processed trajectory data from 12 cities is presented in Table 4.
351
+
352
+ # 10.4 Examples of Extracted Mobility Behaviors by AgentMove
353
+
354
+ ```txt
+ >>> The personal profile and long memory:
+ <historical_info>:
+ - place id to name mapping:
+ {'4f58467xxx': 'Middle Eastern Restaurant', '4b058793fxxx': 'Miscellaneous Shop', '4ebaaccfxxx': 'Residential Building (Apartment / Condo)', ...}
+ - In historical stays, the user frequently engages in activities at 7 AM (2 times), 12 PM (2 times), 4 PM (2 times), ...
+ - The most frequently visited venues are Home (private) (2 times), Middle Eastern Restaurant (1 times), Miscellaneous Shop (1 times), ...
+ - Hourly venue activities include 12 PM: Indian Restaurant (1 times), 2 PM: Home (private) (1 times), 3 PM: Thai Restaurant (1 times), ...
+ - The user activity transitions often include sequences: Middle Eastern Restaurant -> Miscellaneous Shop (1 times), Miscellaneous Shop -> Residential Building (Apartment / Condo) (1 times), ...
+ ```
370
+
371
+ <user_profile>:
+ 
+ ```txt
+ The user is most active at 7 AM with 2 visits. They frequently visit Home (private) with 2 visits. Based on the data, the user enjoys trying different types of food and drinks.
+ ```
376
+
377
+ The potential places from the global spatial view:
+ 
+ ```txt
+ 2. mobility behaviors from world knowledge generator
+ Names of subdistricts that are relatively likely to be visited:
+ 1. Taiyanggong
+ 2. Sanlitun
+ 3. Jiaodaokou Subdistrict
+ 4. Xiaoguan
+ 5. Qianmen
+ Names of POIs that are relatively likely to be visited:
+ 1. Yuan Yang Future Plaza Shopping Mall
+ 2. Peking Hostel
+ 3. University of International Business and Economics
+ 4. Beijing Public Library
+ 5. Peking University
+ ```
+ 
+ ```txt
+ 3. mobility behaviors from collective knowledge extractor
+ The nearby places visited by other users with similar mobility pattern:
+ 1-hop neighbor places in the social world: Xibahu Road, Mars Garden
+ ```
392
+
393
+ # 10.5 Prompt Examples
394
+
395
+ Here, we present the detailed prompts for each LLM-based method.
396
+
397
+ # Prompt of AgentMove
398
+
399
+ ```txt
+ Task
+ Your task is to predict <next_place_id> in <target_stay>, a location with an unknown ID, while temporal data is available.
+ Predict <next_place_id> by considering:
+ 1. The user's activity trends gleaned from <historical_stays> and the current activities from <context_stays>.
+ 2. Temporal details (start_time and day_of_week) of the target stay, crucial for understanding activity variations.
+ 3. The potential places that users may visit based on an overall analysis of multi-level urban spaces.
+ 4. The personal profile and memory info extracted from the long trajectory history of each user.
+ 
+ The history data:
+ <historical_stays>: {historical_stays}
+ <context_stays>: {context_stays}
+ <target_stay>: {target_time, <next_place_id>}
+ The personal profile and long memory: {spatial_temporal_memory_info}
+ The potential places from the global spatial view: {spatial_world_info}
+ The nearby places visited by other users with similar mobility pattern: {social_world_info}
+ 
+ Output
+ Present your answer in a JSON object with:
+ "prediction" (list of IDs of the five most probable places, ranked by probability) and "reason" (a concise justification for your prediction).
+ ```
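+ A minimal sketch of how the placeholder slots in the prompt above could be filled at run time; the function and the abridged template are illustrative, not the released implementation.
+ 
+ ```python
+ def build_agentmove_prompt(historical_stays, context_stays, target_time,
+                            memory_info, spatial_info, social_info) -> str:
+     """Fill the placeholder slots of the (abridged) AgentMove prompt shown above."""
+     return (
+         "Your task is to predict <next_place_id> in <target_stay>.\n"
+         "The history data:\n"
+         f"<historical_stays>: {historical_stays}\n"
+         f"<context_stays>: {context_stays}\n"
+         f"<target_stay>: {target_time}, <next_place_id>\n"
+         f"The personal profile and long memory: {memory_info}\n"
+         f"The potential places from the global spatial view: {spatial_info}\n"
+         f"The nearby places visited by other users with similar mobility pattern: {social_info}\n"
+         'Present your answer in a JSON object with "prediction" and "reason".'
+     )
+ ```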
441
+
442
+ # Prompt for spatial-temporal memory unit.
443
+
444
449
+ ```txt
+ ## long term memory info
+ Place id to name mapping:
+ {venue_id_to_name}.
+ In historical stays, the user frequently engages in activities at {frequent_hours}.
+ The most frequently visited venues are {frequent_venues}.
+ Hourly venue activities include {hourlyActivity_desc}.
+ The user's activity transitions often include sequences such as {transitions}.
+ ```
458
+
459
+ ```txt
+ ## short term memory info
+ In recent context stays, user's last visit was on {}
+ Frequently visited locations include: {}
+ Visit times: {}
+ ```
+ 
+ ```txt
+ The user is most active at {most_frequent_hour} with {most_frequent_count} visits.
+ They frequently visit {most_frequent_venue_category} with {most_frequent_venue_count} visits.
+ Based on the data, the user {', '.join(insights)}.
+ ```
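+ A minimal sketch of how the long-term memory fields referenced above ({frequent_hours}, {frequent_venues}, {transitions}) could be derived from a user's stay history with collections.Counter; the stay record format is an assumption for illustration.
+ 
+ ```python
+ from collections import Counter
+ 
+ def summarize_long_term_memory(stays: list[dict]) -> dict:
+     """Derive memory fields from chronologically ordered stays, where each stay is assumed
+     to carry an 'hour' (0-23) and a 'venue' (category name)."""
+     hour_counts = Counter(s["hour"] for s in stays)
+     venue_counts = Counter(s["venue"] for s in stays)
+     transition_counts = Counter((a["venue"], b["venue"]) for a, b in zip(stays, stays[1:]))
+     return {
+         "frequent_hours": ", ".join(f"{h}:00 ({c} times)" for h, c in hour_counts.most_common(3)),
+         "frequent_venues": ", ".join(f"{v} ({c} times)" for v, c in venue_counts.most_common(3)),
+         "transitions": ", ".join(f"{a} -> {b} ({c} times)" for (a, b), c in transition_counts.most_common(3)),
+     }
+ ```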
478
+
479
+ # Prompts for world knowledge generator.
480
+
481
484
+
485
+ ```txt
486
+ ## prompts for extracting structured address info
487
+ {original address info from https://nominatim.org/ by querying via reverse API}
488
+ Please get the administrative area name, subdistrict name/neighborhood name, access road or feeder road name, building name/POI name.
489
+ Present your answer in a JSON object with:'administrative'(the administrative area name), 'subdistrict'(subdistrict name/neighborhood name), 'poi'(building name/POI name), 'street'(access road or feeder road name which POI/building is on).
490
+ Do not include the key if information is not given. Do not output other content.
491
+ ```
492
+
493
+ ## block info
495
+
496
+ ```txt
+ This trajectory moves within following administrative areas:
+ {administrative_area}
+ This trajectory sequentially visited following subdistricts, with the last subdistrict being the most recently visited: {}
+ Consider about following two aspects:
+ 1. The frequency each subdistrict is visited.
+ 2. Transition probability between two administrative areas.
+ Please predict the next subdistrict in the trajectory. Give {explore_num} subdistricts that are relatively likely to be visited. Do not output other content.
+ ```
515
+
516
+ ## poi and street info
518
+
519
+ ```txt
+ This trajectory sequentially visited following POIs (Each POI is represented by 'POI name, the feeder road or access road it is on'), with the last POI being the most recently visited: {pois}
+ Consider about following four aspects:
+ 1. The frequency each subdistrict is visited.
+ 2. The frequency each poi is visited.
+ 3. Transition probability between two subdistricts.
+ 4. Transition probability between two pois.
+ Please predict the next poi in the trajectory. Give {explore_num} POIs that are relatively likely to be visited. Do not output other content.
+ ```
544
+
545
+ ```txt
+ ## spatial world model info used in AgentMove
+ Names of subdistricts that are relatively likely to be visited: {block_info}
+ Names of POIs that are relatively likely to be visited: {poi_info}
+ ```
557
+
558
+ # Prompt for collective knowledge extractor.
559
+
560
+ ```python
+ ## Finding neighbors
+ neighbors = list(graph.neighbors(venue_id))
+ sorted_neighbors_freq = [(n, 1) for n in neighbors if n not in context_trajs]
+ ```
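+ A slightly fuller sketch around the snippet above: build a directed location-transition graph from other users' trajectories with networkx and read off the 1-hop neighbors that are not already in the current user's context; the function names are illustrative, not the released implementation.
+ 
+ ```python
+ import networkx as nx
+ 
+ def build_transition_graph(trajectories: list[list[str]]) -> nx.DiGraph:
+     """Directed graph whose edges are consecutive location transitions observed
+     in other users' trajectories."""
+     graph = nx.DiGraph()
+     for traj in trajectories:
+         for src, dst in zip(traj, traj[1:]):
+             graph.add_edge(src, dst)
+     return graph
+ 
+ def one_hop_neighbors(graph: nx.DiGraph, venue_id: str, context_trajs: set[str]) -> list[str]:
+     """1-hop neighbor places of the current venue, excluding places already present
+     in the user's recent context (mirrors the 'Finding neighbors' snippet above)."""
+     if venue_id not in graph:
+         return []
+     return [n for n in graph.neighbors(venue_id) if n not in context_trajs]
+ ```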
566
+
567
+ ```txt
+ ## Prompts in final reasoning step
+ 1-hop neighbor places in the social world: {neighbors}
+ ```
570
+
571
574
+
575
+ # Prompt of LLM-Mob
576
+
577
+ ```txt
+ Your task is to predict a user's next location based on his/her activity pattern.
+ You will be provided with <history> which is a list containing this user's historical stays, then <context> which provides contextual information about where and when this user has been to recently. Stays in both <history> and <context> are in chronological order.
+ Each stay takes on such form as (start_time, day_of_week, duration, place_id). The detailed explanation of each element is as follows:
+ start_time: the start time of the stay in 12h clock format.
+ day_of_week: indicating the day of the week.
+ duration: an integer indicating the duration (in minutes) of each stay. Note that this will be None in the <target_stay> introduced later.
+ place_id: an integer representing the unique place ID, which indicates where the stay is.
+ Then you need to do next location prediction on <target_stay> which is the prediction target with unknown place ID denoted as <next_place_id> and unknown duration denoted as None, while temporal information is provided.
+ Please infer what the <next_place_id> might be (please output the 10 most likely places which are ranked in descending order in terms of probability), considering the following aspects:
+ 1. the activity pattern of this user that you learned from <history>, e.g., repeated visits to certain places during certain times;
+ 2. the context stays in <context>, which provide more recent activities of this user;
+ 3. the temporal information (i.e., start_time and day_of_week) of target stay, which is important because people's activity varies during different time (e.g., nighttime versus daytime) and on different days (e.g., weekday versus weekend).
+ Please organize your answer in a JSON object containing following keys: "prediction" (the ID of the five most probable places in descending order of probability) and "reason" (a concise explanation that supports your prediction). Do not include line breaks in your output.
+ The data are as follows:
+ <historical>: {historical_stays}
+ <context>: {context_stays}
+ <target_stay>: {target_time, <next_place_id>}
+ ```
641
+
642
+ # Prompt of LLM-ZS
643
+
644
+ ```txt
+ Your task is to predict <next_place_id> in <target_stay>, a location with an unknown ID, while temporal data is available.
+ Predict <next_place_id> by considering:
+ 1. The user's activity trends gleaned from <historical_stays> and the current activities from <context_stays>.
+ 2. Temporal details (start_time and day_of_week) of the target stay, crucial for understanding activity variations.
+ Present your answer in a JSON object with:
+ "prediction" (IDs of the five most probable places, ranked by probability) and "reason" (a concise justification for your prediction).
+ The data:
+ <historical_stays>: {historical_stays}
+ <context_stays>: {context_stays}
+ <target_stay>: {target_time, <next_place_id>}
+ ```
671
+
672
+ # 10.6 Parameter settings
673
+
674
+ Detailed parameter settings for the Markov and deep learning based baselines are presented in Table 5. For each baseline, we adopt early stopping based on the validation-set accuracy and a learning-rate schedule threshold. All the experiments with deep learning baselines are run on a machine with 64 cores, 512GB of
675
+
676
+ Table 5: Detailed parameter settings for Markov and deep learning based baselines.
677
+
678
+ <table><tr><td>Parameters</td><td>FPMC</td><td>RNN</td><td>DeepMove</td><td>LSTPM</td></tr><tr><td>batch size</td><td>1024</td><td>1024</td><td>128</td><td>128</td></tr><tr><td>learning rate (lr)</td><td>-</td><td>1e-3</td><td>1e-3</td><td>1e-3</td></tr><tr><td>lr schedule step</td><td>-</td><td>2</td><td>3</td><td>2</td></tr><tr><td>lr schedule decay</td><td>-</td><td>0.1</td><td>0.1</td><td>0.1</td></tr><tr><td>schedule threshold</td><td>-</td><td>1e-3</td><td>1e-3</td><td>1e-3</td></tr><tr><td>early stop lr</td><td>-</td><td>9e-6</td><td>9e-6</td><td>9e-6</td></tr><tr><td>L2</td><td>-</td><td>1e-5</td><td>1e-5</td><td>1e-6</td></tr><tr><td>max epoch</td><td>100</td><td>30</td><td>30</td><td>30</td></tr><tr><td>loc embed size</td><td>64</td><td>500</td><td>500</td><td>500</td></tr><tr><td>hidden embed size</td><td>-</td><td>500</td><td>500</td><td>500</td></tr><tr><td>dropout</td><td>-</td><td>0.3</td><td>0.5</td><td>0.8</td></tr></table>
679
+
680
+ memory, and 2 NVIDIA RTX 4090 GPUs, running Ubuntu 22.04.3 LTS.
681
+
682
+ All the generation parameter settings for the LLM-based methods are the same. The temperature is set to 0 for deterministic results, the maximum number of output tokens is 1,000, and the maximum number of input tokens is 2,000; the other parameters are not set and follow the default settings of the API provider.
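+ A minimal sketch of such a call with an OpenAI-compatible client, using the stated temperature and output-token limit; the client, the model name, and where the 2,000-token input limit is enforced (prompt truncation before the call) are illustrative assumptions.
+ 
+ ```python
+ from openai import OpenAI
+ 
+ client = OpenAI()  # API key read from the environment
+ 
+ def query_llm(prompt: str, model: str = "gpt-4o-mini") -> str:
+     """Chat-completion call with the generation settings described above."""
+     response = client.chat.completions.create(
+         model=model,
+         messages=[{"role": "user", "content": prompt}],
+         temperature=0,    # deterministic results
+         max_tokens=1000,  # maximum output tokens
+     )
+     return response.choices[0].message.content
+ ```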
683
+
684
+ # 10.7 Preprocessing for Foursquare Data
685
+
686
+ As introduced in Section 4.1.1, we select 12 cities around the world to evaluate the performance of the proposed framework. We match each trajectory with the target cities by calculating the minimum distance to the city center. For the ordered trajectories in each city, we use 72 hours as the time window to split the trajectory into sessions. We filter out users with fewer than 5 sessions and sessions with fewer than 4 stays. Then, we divide each trajectory dataset into training, validation, and test sets in a ratio of 7:1:2. During testing, we filter out users with fewer than 3 sessions or more than 50 sessions, which is designed to ensure the quality of testing users and to balance the effects from different users. Different from previous works, we do not specifically filter locations. All users and their trajectories are sorted by ID. We select one session per user and aggregate the first $n$ sessions from all users to calculate the average accuracy. Here, $n$ is used to control the evaluation cost for LLMs and is kept fixed in the experiments, set to 200 in most of them. Note that only the size of the test set is controlled for cost; the entire training set is provided to the deep learning based methods for training.
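+ A minimal sketch of one plausible reading of the 72-hour session split and the user/session filters described above; the stay record format and the exact windowing rule are assumptions for illustration.
+ 
+ ```python
+ def split_sessions(stays: list[dict], window_hours: int = 72) -> list[list[dict]]:
+     """Split chronologically ordered stays into sessions: start a new session when the
+     gap to the first stay of the current session exceeds the time window."""
+     sessions, current = [], []
+     for stay in stays:  # each stay is assumed to carry a 'timestamp' in hours
+         if current and stay["timestamp"] - current[0]["timestamp"] > window_hours:
+             sessions.append(current)
+             current = []
+         current.append(stay)
+     if current:
+         sessions.append(current)
+     return sessions
+ 
+ def filter_user(sessions: list[list[dict]]) -> list[list[dict]]:
+     """Keep sessions with at least 4 stays; drop the user if fewer than 5 sessions remain."""
+     kept = [s for s in sessions if len(s) >= 4]
+     return kept if len(kept) >= 5 else []
+ ```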
687
+
688
+ # 10.8 Preprocessing for ISP Data
689
+
690
+ Following the preprocessing in the original paper (Feng et al., 2019), we split the data into different sessions by merging trajectory points within the same day. Due to the regularity of human mobility, there are many repeated trajectory points in the original sessions. To make the prediction challenging, we compress the trajectory sessions by merging identical locations within a time window (2 hours) and ignoring visits that occur during the night (from 8 p.m. to 8 a.m.). Since the ISP data spans only 7 days, we split the whole dataset into training, validation, and test sets in a ratio of 4:1:5 to preserve enough testing data. The minimum session filter parameter is changed from 3 to 1.
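+ A minimal sketch of the compression step described above, assuming timestamps in hours and a simple night filter; the field names and the exact merging rule are illustrative.
+ 
+ ```python
+ def compress_session(points: list[dict]) -> list[dict]:
+     """Drop nighttime points (8 p.m. to 8 a.m.) and merge consecutive points at the same
+     location whose timestamps fall within a 2-hour window."""
+     kept = []
+     for p in points:  # each point is assumed to carry a 'loc' and a 'ts' in hours
+         hour_of_day = p["ts"] % 24
+         if hour_of_day >= 20 or hour_of_day < 8:
+             continue  # night filter
+         if kept and p["loc"] == kept[-1]["loc"] and p["ts"] - kept[-1]["ts"] <= 2:
+             continue  # merged into the previous point at the same location
+         kept.append(p)
+     return kept
+ ```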
agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4e5af045fdf7eedf02bd81a2ceb00a49dba4cd92f21ed88eda7f4e4743d9068b
3
+ size 476155
agentmovealargelanguagemodelbasedagenticframeworkforzeroshotnextlocationprediction/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:953618bbca33f58312f9247b21e04f9b11254f9bf5f2b14244e92904a09ffbbb
3
+ size 574257
agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/5c881628-64b0-4316-a515-f7b2cc56875d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:152891bf81d311233930a9584a069fe7b30011d56154eb1323426291576fc374
3
+ size 160280
agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/5c881628-64b0-4316-a515-f7b2cc56875d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76486b0fc5c83ade7632af10256ad85943ac1d727df56f2cf50ce53809a0fad2
3
+ size 192643
agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/5c881628-64b0-4316-a515-f7b2cc56875d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0161ff60382273379fb5f209d78bd513b76c476142786d8542a9bdaf1e6448b9
3
+ size 1854495
agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/full.md ADDED
@@ -0,0 +1,895 @@
1
+ # AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios
2
+
3
+ Xinyi Mou $^{1\dagger}$ , Jingcong Liang $^{1\dagger}$ , Jiayu Lin $^{1\dagger}$ , Xinnong Zhang $^{1\dagger}$ , Xiawei Liu $^{1}$ , Shiyue Yang $^{1}$ , Rong Ye $^{1,2}$ , Lei Chen $^{1}$ , Haoyu Kuang $^{1}$ , Xuanjing Huang $^{1}$ , Zhongyu Wei $^{1‡}$
4
+
5
+ $^{1}$ Fudan University, $^{2}$ ByteDance
6
+
7
+ {xymou20, chenl18, xjhuang, zywei}@fudan.edu.cn, {jcliang22, jiayulin24, xnzhang23, liuxw24}@m.fudan.edu.cn, {shiyueyang24, yer23, hykuang23}@m.fudan.edu.cn
8
+
9
+ # Abstract
10
+
11
+ Large language models (LLMs) are increasingly leveraged to empower autonomous agents to simulate human beings in various fields of behavioral research. However, evaluating their capacity to navigate complex social interactions remains a challenge. Previous studies face limitations due to insufficient scenario diversity, complexity, and a single-perspective focus. To this end, we introduce AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios. Drawing on Dramaturgical Theory, AgentSense employs a bottom-up approach to create 1,225 diverse social scenarios constructed from extensive scripts. We evaluate LLM-driven agents through multi-turn interactions, emphasizing both goal completion and implicit reasoning. We analyze goals using ERG theory and conduct comprehensive experiments. Our findings highlight that LLMs struggle with goals in complex social scenarios, especially high-level growth needs, and even GPT-4o requires improvement in private information reasoning. Code and data are available at https://github.com/ljcleo/agent_sense.
12
+
13
+ # 1 Introduction
14
+
15
+ Benefiting from comprehensive training data and large-scale model parameters, large language models (LLMs) are increasingly employed to develop autonomous agents capable of simulating human behavior (Qin et al., 2023; Shinn et al., 2024; Schick et al., 2024). These language agents have been explored as human proxies in various fields of behavioral research, such as psychological and sociological surveys (Argyle et al., 2023; Chuang et al., 2024; Xie et al., 2024), and opinion dynamics modeling (Mou et al., 2024; Liu et al., 2024). These social science studies often assume that LLMs exhibit
16
+
17
+ ![](images/6d2eaba259a60bea457f0fc87231884958ec713b3ebc30ff4dedab69ea5ca956.jpg)
18
+ Figure 1: An illustration of challenging yet realistic social scenarios — a family gathering and an office conversation, where the characters are driven by ChatGPT. While the dialogue could flow smoothly, Emily is unable to achieve her goals during the family gathering and fails to deduce Jordan's thoughts in an office setting.
19
+
20
+ social intelligence, the ability to navigate complex and multifaceted social goals during interactions with others (Zhou et al., 2024b), given their ability to produce human-like responses and behaviors in certain scenarios. Social intelligence is essential for fostering effective and harmonious interactions among agents and between agents and humans (Xu et al., 2023). However, the question remains: can LLMs truly navigate the intricacies of such interactions and exhibit social intelligence? As illustrated in Figure 1, LLM-driven agents still struggle with complex social situations. For example, when facing multiple goals in an office conversation, agent Emily fails to complete all her goals and cannot guess what Jordan has in mind.
21
+
22
+ Early research on the social intelligence of LLMs relies on static and non-interactive tests that assess commonsense reasoning about social situations (Sap et al., 2019; Zadeh et al., 2019;
23
+
24
+ <table><tr><td>Benchmark</td><td>Scenario Construction</td><td>Scenario (Template) Coverage</td><td>Interaction Patterns</td><td>Social Goal</td><td>Private Info Reasoning</td><td>Evaluation</td><td>Observation Perspective</td></tr><tr><td>Sotopia (Zhou et al., 2024b)</td><td>top-down</td><td>90</td><td>between 2 agents</td><td>✓</td><td>✗</td><td>subjective</td><td>judge</td></tr><tr><td>STSS (Wang et al., 2024)</td><td>top-down</td><td>30</td><td>≥ 2 agents</td><td>✓</td><td>✗</td><td>objective</td><td>N/A</td></tr><tr><td>AgentSense (ours)</td><td>bottom-up</td><td>245</td><td>≥ 2 agents</td><td>✓</td><td>✓</td><td>mixed</td><td>self, other, judge</td></tr></table>
25
+
26
+ Table 1: Comparison between our benchmark and related interactive social intelligence benchmarks.
27
+
28
+ Shapira et al., 2023; Wilf et al., 2023), failing to capture the dynamic nature of social interactions. Recently, dynamic and goal-driven benchmarks (Zhou et al., 2024b; Wang et al., 2024) have emerged to study social intelligence in interactive environments. As shown in Table 1, although they have made significant progress, they still exhibit three main limitations: (1) lack of scenario diversity: existing studies (Wang et al., 2024; Sabour et al., 2024) build social scenarios manually in a top-down manner, resulting in a narrow set of common scenarios and goals, such as persuasion and collaboration (Li et al., 2023a). However, real-world interactions feature a broader spectrum of social goals and situational dynamics. (2) insufficient scenario complexity: current work (Xie et al., 2024; Zhou et al., 2024b) often limits interactions to two participants, each pursuing a single goal. This oversimplifies real-life social interactions, where multiple actors engage simultaneously, each with multiple goals, as shown in Figure 1. As a result, the performance of LLMs in group dynamics remains insufficiently studied. (3) single-perspective observation: while existing benchmarks primarily evaluate the extent to which agents achieve explicit social goals (Li et al., 2023b; Wang et al., 2024), they neglect the concealment of private information and the inference of others' information, which are also important aspects of social interaction.
29
+
30
+ To this end, we introduce AgentSense: Benchmarking Social Intelligence of Language Agents through Interactive Scenarios. We get inspiration from Dramaturgical Theory (Goffman, 1959), which conceptualizes social interaction as a theatrical performance in which individuals assume specific roles within various settings. We adopt a bottom-up approach, extracting scenarios from massive scripts to ensure that scenarios and social goals are diverse and grounded in real life. In AgentSense, we construct 245 scenario templates from scripts, mitigating data leakage and expanding them into 1,225 scenarios. We apply ERG theory (Alderfer, 1969), which categorizes human
31
+
32
+ needs into existence, relatedness and growth, to show that the constructed scenarios reflect a comprehensive range of human motivations.
33
+
34
+ We then situate LLM-driven agents in the constructed scenarios, where each participant has social goals and may also safeguard some private information. In an interactive environment, we simulate multi-turn interactions and observe agents' social intelligence from two aspects: (1) goal completion: have the agents successfully achieved their social goals? (2) implicit reasoning: can the agents accurately deduce others' private information? These abilities are measured through interviews and multiple-choice questions, with our proposed PSI metric assessing profile sensitivity. We find that LLMs struggle with complex social scenarios, particularly with high-level growth goals.
35
+
36
+ Our contributions are as follows:
37
+
38
+ - We introduce AgentSense, a benchmark built on social scenarios derived from scripts using a bottom-up approach, distinguishing it from previous work. It encompasses diverse and challenging social scenarios, enabling a thorough evaluation of LLMs' social intelligence.
39
+ - We evaluate social intelligence from multiple aspects, considering both goal completion and information reasoning as well as profile sensitivity of social intelligence, through interviews with the agents and third-party judges.
40
+ - Our experiments reveal that LLMs struggle with complex scenarios and high-level goals, and their social intelligence is affected by profiles, interaction partners, and the balance between goals and privacy protection.
41
+
42
+ # 2 Related Work
43
+
44
+ # 2.1 Social Intelligence Benchmarks
45
+
46
+ Social intelligence is the ability to understand others and act wisely in social situations (Walker and Foley, 1973). While LLMs show potential in simulating human behavior (Xie et al., 2024), their social intelligence remains underexplored (Zhou
47
+
48
+ ![](images/3107e7ad5e2092dcbcb45324ed94ce96636ac43732e9bdb11d72cda5572db998.jpg)
49
+ Figure 2: Overall framework of AgentSense. We construct scenario templates from scripts and synthesize characters to diversify the scenarios. Then, language models role-play the characters to interact with each other. After that, the participants and third-party judges are interviewed for evaluation.
50
+
51
+ ![](images/bc5921243ca7ee5520425a4f8a21a17f802df0d3cf40054e4c1a6538614e71b8.jpg)
52
+
53
+ ![](images/b90e7269bbb8c12ca459e964b85c3ec919099eb65622a5f730d8ac938de5abb0.jpg)
54
+
55
+ et al., 2024a). Current research on evaluating social intelligence in LLMs falls into two main categories. The first involves static, non-interactive assessments that evaluate models through Q&A tasks based on commonsense reasoning about social situations (Sap et al., 2019; Zadeh et al., 2019). The second involves interactive benchmarks, where LLMs are assessed through goal-oriented interactions in role-playing scenarios (Zhou et al., 2024b; Chen et al., 2024a; Wang et al., 2024; Liu et al., 2023). AgentSense uses a bottom-up approach to create scenarios from scripts, allowing agents' capabilities to be evaluated in more diverse settings.
56
+
57
+ # 2.2 Role-playing Agents
58
+
59
+ LLMs are increasingly being used to construct role-playing agents (RPAs) (Chen et al., 2024b), which enable efficient simulation of typical representatives, from individuals (Shao et al., 2023; Argyle et al., 2023) to demographic groups (Li et al., 2023a; Jiang et al., 2024). These agents can embody various personas to coordinate, collaborate, exchange information, and compete with one another (Chen et al., 2024b; Zhou et al., 2024b). Recently, RPAs have been applied in various domains, e.g., psychotherapy (Stade et al., 2024), economics (Fu et al., 2023), and social research (Grossmann et al., 2023). AgentSense leverages this by simulating social interaction scenarios through agents with diverse personalities and social goals.
60
+
61
+ # 3 AgentSense Benchmark
62
+
63
+ # 3.1 Framework Overview
64
+
65
+ AgentSense aims to provide a realistic social intelligence benchmark with enhanced diversity and
66
+
67
+ complexity. Following the Dramaturgical Theory, we propose an overall framework as in Figure 2.
68
+
69
+ Scenarios The core component of AgentSense is the social scenario set, extracted from real-world scripts to guide and evaluate social interactions between agents. A social scenario serves as a hypothetical context for simulating and analyzing social interactions, where two key components are measured: (1) Social Goal is what the agent aims to achieve, such as resolving an issue or building a relationship. Guided by this goal, the agent proactively participates in the social dynamics. (2) Private Information is information that is known solely to the agent and not to others. The agent is tasked with inferring others' private information without explicitly inquiring about it, a process referred to as passive reasoning during interactions. In summary, an agent's social intelligence is reflected in its ability to pursue social goals while safeguarding private information, balancing active engagement with passive respect for individual privacy.
70
+
71
+ Scenario Templates Social scenarios in scripts always involve a fixed group of characters, which limits diversity. To address this issue, we strip out irrelevant character details to obtain scenario templates, which contain only background information and predefined character slots. We can instantiate multiple scenarios from a scenario template by filling in the slots with different sets of synthesized characters that satisfy the template's constraints.
72
+
73
+ Benchmarking After building scenarios from the extracted templates, benchmarking LLMs with AgentSense comes as follows: (1) Simulation: We
74
+
75
+ ![](images/c15f78ba2face85f0e2f829febc01920b4477fd7d4819befdfc89ba201098d8c.jpg)
76
+ Figure 3: Scenario template construction pipeline (automated with Python and GPT-4o): (A) Scenario Extraction: We split the script into scenes then scenarios (1), and summarize their background and description (2), which are merged into a descriptive background for independent role-play (3). (B) Social Goal Extraction: We extract each character's social goals (4) and amend them by regenerating the whole scenario (5) and rewriting/deleting invalid goals (6). (C) Private Information Extraction: We determine if the scene involves private information inference (7); if yes, we extract private information as QA pairs (8) and generate private info records (9) and evaluation questions (10). (D) Leakage Mitigation and Template Generation: We remove elements associated with specific episodes and replace characters with slots for synthesized agents with similar characteristics to fill in (11).
77
+
78
+ prompt the models to role-play the characters and interact with each other, trying to achieve their social goals. (2) Evaluation: We evaluate the goal completion status of each model by interviewing the participants and third-party judges. We also assess the model's implicit reasoning performance with multiple-choice questions.
79
+
80
+ # 3.2 Scenario Construction
81
+
82
+ Following the definitions, building AgentSense requires constructing templates and instantiating scenarios with synthesized agents. We propose pipelines for the two parts respectively as follows:
83
+
84
+ Template Construction Figure 3 demonstrates the pipeline to construct scenario templates from real-world scripts, consisting of four stages:
85
+
86
+ (1) Scenario Extraction: Real-world scripts consist of multiple chronological scenes, within which several scenarios involve groups of characters. We first split scenes and scenarios from the script. Then, we generate each scenario's background from previous scenes and its own description. Finally, we generate a new descriptive background that allows the scenario to be role-played independently.
87
+ (2) Social Goal Extraction: After obtaining individual scenarios, we extract the social goals of each character, one sentence per goal. We polish the goals further, including rewriting the whole scenario to reduce goal dependencies
88
+
89
+ and rewriting the goals to meet certain criteria (or deleting them when rewriting is not possible).
90
+
91
+ (3) Private Information Extraction: We first identify if any private information exists in the original scene. If yes, we extract questions and answers that only one character can respond to. The rephrased answers are the character's private information, and the questions serve as implicit reasoning questions for others. We also enhance negative options to be more homogeneous with the correct ones.
92
+ (4) Leakage Mitigation and Template Generation: LLMs can identify plots and infer information by recognizing entities like locations and characters. To prevent this, scenario leakage mitigation is implemented using GPT-4o to extract and replace elements linked to specific episodes. The original characters are also replaced by slots. This maintains context while reducing the risk of identifying the plot.
93
+
94
+ More details of the scenario template construction can be found in Appendix A.1. Specifically, we used GPT-4o to automate the construction procedure; Appendix A.2 lists the prompts we used.
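+
+ For concreteness, the sketch below shows how one step of this automated pipeline can be driven with the OpenAI Python SDK, using an abridged version of the entity-extraction instruction (Prompt 11); the prompt wording and helper name are illustrative rather than the exact artifacts used in our pipeline.
+
+ ```python
+ # Minimal sketch of one automated construction step (entity extraction for leakage
+ # mitigation, cf. Prompt 11). The prompt string below is abridged for illustration.
+ import json
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+ ENTITY_PROMPT = (
+     "Your task is to extract key elements from the scene background and description, "
+     "including location and characters.\n"
+     "## background:\n{background}\n## description:\n{description}\n"
+     'Output in the following JSON format: {{"characters": [], "location": []}}'
+ )
+
+ def extract_entities(background: str, description: str) -> dict:
+     """Ask GPT-4o for the entities that are later replaced to avoid plot leakage."""
+     response = client.chat.completions.create(
+         model="gpt-4o",
+         temperature=0,
+         response_format={"type": "json_object"},
+         messages=[{"role": "user", "content": ENTITY_PROMPT.format(
+             background=background, description=description)}],
+     )
+     return json.loads(response.choices[0].message.content)
+ ```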
95
+
96
+ Scenario Instantiating We replace the original characters with multiple synthesized agents to prevent character leakage and enrich the social scenarios. A naive method is to replace the original characters randomly, which may lead to unrealistic situations, such as two fifty-year-old students in a
97
+
98
+ middle school. Thus, we dynamically generate agents according to the constraints of the scenario. First, we extract the attributes and relationships of the original characters. Then, we transform these relationships into replacement rules that help define the demographic features of the agents (see Appendix A.2.5). Finally, we replace the original characters with agents that adhere to these constraints. After data leakage mitigation, we conduct a pre-test (Section 4.3) to confirm that the scenarios remain anonymous.
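+
+ The sketch below illustrates the idea of filling a character slot under such replacement rules; the rule table, attribute names, and thresholds are simplified assumptions made for illustration, since the actual constraints are derived from the script by GPT-4o (Prompts 13-15).
+
+ ```python
+ # Simplified sketch of constraint-aware character replacement.
+ import random
+
+ RELATION_RULES = {
+     "friend":   lambda ref: {"age_range": (ref["age"] - 10, ref["age"] + 10)},
+     "romantic": lambda ref: {"age_range": (ref["age"] - 15, ref["age"] + 15),
+                              "gender": ref["gender"]},  # gender fixed by the script
+     "stranger": lambda ref: {},                         # no demographic constraint
+ }
+
+ def fill_slot(original: dict, relationship: str, candidates: list[dict]) -> dict:
+     """Pick a synthesized profile that satisfies the slot's demographic constraints."""
+     constraint = RELATION_RULES.get(relationship, lambda ref: {})(original)
+     lo, hi = constraint.get("age_range", (0, 120))
+     gender = constraint.get("gender")
+     valid = [c for c in candidates
+              if lo <= c["age"] <= hi and (gender is None or c["gender"] == gender)]
+     return random.choice(valid or candidates)
+ ```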
99
+
100
+ # 3.3 Social Interaction Simulation
101
+
102
+ For each scenario, the agents are given social backgrounds, profiles, and corresponding social goals, along with any private information they may possess, as shown in Figure 2. This setup motivates them to engage in social interactions with other agents. The agents primarily interact through multi-turn conversations, where they can also use language to convey facial expressions or actions. The conversation begins with one of the agents greeting, after which the agents take turns in a random order, rather than following a fixed pre-defined sequence, to reproduce the dynamics and uncertainty inherent in social interactions. We set the turn limit to 15, the average number of turns in the scenes of the source scripts. After the simulation, the generated conversation history is analyzed to evaluate the goal completion of each agent. Details are in Appendix C.
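+
+ A minimal sketch of this interaction loop is given below; `Agent.respond` stands for a hypothetical wrapper around the role-playing LLM call and is not part of the released code.
+
+ ```python
+ # One agent opens with a greeting, then speakers are drawn in a random order
+ # (not a fixed sequence) until the turn limit is reached.
+ import random
+
+ MAX_TURNS = 15  # average number of turns per scene in the source scripts
+
+ def simulate(agents: list, opening_line: str) -> list[tuple[str, str]]:
+     history = [(agents[0].name, opening_line)]   # the conversation starts with a greeting
+     for _ in range(MAX_TURNS - 1):
+         speaker = random.choice(agents)          # random turn order
+         utterance = speaker.respond(history)     # role-playing LLM generates the reply
+         history.append((speaker.name, utterance))
+     return history
+ ```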
103
+
104
+ # 3.4 Evaluation
105
+
106
+ Goal Completion Since social goals can be subjective, we judge their completion from three different perspectives, as shown in Figure 2: (1) Self: We ask the agent whether it has achieved its goals after interacting with others. (2) Other: We ask other agents in the scenario whether the target agent has achieved his or her specific goals. (3) External: We prompt third-party models with the chat history and ask if an agent has achieved his/her own social goals. We ask the interviewees to respond with yes (goal completed) or no. We take the average across all goals of a character to measure the agent's overall goal completion level.
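+
+ Concretely, the per-character score reduces to the fraction of goals judged as completed; the sketch below assumes the interview answers have already been parsed into yes/no values.
+
+ ```python
+ # Goal-completion score for one character from one perspective (Self, Other, or External).
+ def goal_completion(answers: dict[str, bool]) -> float:
+     """answers maps each social goal to a yes (True) / no (False) interview judgment."""
+     return 100.0 * sum(answers.values()) / len(answers) if answers else 0.0
+ ```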
107
+
108
+ Implicit Reasoning As mentioned in Section 3.2, each character's private information corresponds to a multiple-choice evaluation question. To evaluate an agent's information reasoning ability, we present it with questions related to the private
109
+
110
+ information of other agents within the scenario. We then calculate the average accuracy (Acc) of the current agent on these questions to determine the agent's score in information reasoning.
111
+
112
+ Profile Sensitivity After character enrichment, each template generates multiple scenarios. By incorporating diverse characters, we not only enrich the scenarios but also gain insights into the stability of social intelligence when simulating different roles. Thus, we propose the profile sensitivity index (PSI). We compute the standard deviation (std) of the goal/information metrics across the scenarios sharing the same template, and average the std over all templates to obtain the PSI. A lower PSI indicates that social intelligence is more stable.
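+
+ The sketch below spells out this computation; the data layout (one list of scenario-level scores per template) is assumed for illustration.
+
+ ```python
+ # PSI: per-template standard deviation of a metric (goal completion or reasoning
+ # accuracy) across the scenarios instantiated from that template, averaged over templates.
+ from statistics import mean, pstdev
+
+ def psi(scores_by_template: dict[str, list[float]]) -> float:
+     """scores_by_template maps a template id to the metric values of its scenarios."""
+     return mean(pstdev(scores) for scores in scores_by_template.values())
+ ```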
113
+
114
+ # 3.5 Data Validation and Analysis
115
+
116
+ Data Source We collect scripts from the Internet Movie Script Database (IMSDb*), an online repository of open-source screenplays for movies and television shows. We use GPT-4o to divide each script into episodes according to discernible shifts in temporal settings, spatial locations, character dynamics, and narrative progressions. Each episode is further divided into scenes based on the variations in the dialogue content, with the prompts detailed in Appendix A.2.1. We filter out scenes with fewer than 10 dialogue lines, as well as scenes in which a single character speaks for multiple consecutive turns, to ensure active interaction between at least two characters. After processing, we have 1,300 scenes, 12,401 rounds of conversations, and 114,834 tokens. The detailed statistics of scripts are provided in Appendix B.1.
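+
+ The scene filter amounts to a simple rule; a sketch is shown below, assuming each scene has already been parsed into (speaker, utterance) pairs.
+
+ ```python
+ # Keep a scene only if it has at least 10 dialogue lines and at least two distinct speakers.
+ def keep_scene(dialogues: list[tuple[str, str]]) -> bool:
+     if len(dialogues) < 10:
+         return False
+     speakers = {speaker for speaker, _ in dialogues}
+     return len(speakers) >= 2  # require active interaction between at least two characters
+ ```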
117
+
118
+ Data Validation We conduct human validation on the templates generated by the automatic pipeline to ensure the quality of our benchmark. For social goals, we mainly consider: (1) whether the goal is achievable by the character, and (2) whether the goal is clear enough to evaluate. For private information, we focus on: (1) whether the information is unknown to other characters and not leaked in the background, and (2) whether the character has sufficient information to answer the corresponding questions.
119
+
120
+ The validation involves 6 graduate students in two groups, with each record annotated by 3 annotators. We take the majority vote as the final result when at least two annotators agree. For scenarios where all annotations diverged or were marked as invalid, we assign them to the other group of three annotators for
121
+
122
+ ![](images/91697c1767b5513f13fdc6d1292995bacf9085083c9ed02b50e0f85e0ba34d5e.jpg)
123
+ Figure 4: (a) Number of scenarios aligned with the eight categories under ERG theory. Each scenario may encompass multiple goals. (b) Moral values distribution of the agents. An individual may have multiple moral values, with those appearing fewer than 30 times categorized as Others. (c) Distribution of the agents' Big Five personality traits.
124
+
125
+ ![](images/e3f45fb122ba0fadcb647c20675ee45dc03f4e672849e4f494161cde86bce7d9.jpg)
126
+
127
+ review. Details can be found in Appendix A.3.
128
+
129
+ Data Analysis Based on the scripts, we construct 245 effective scenario templates. Each template is diversified with five scenarios, featuring characters tailored to fit specific scenario constraints. This results in a total of 1,225 scenarios that cover a wide range of social situations and objectives, with 363 of these scenarios containing roles that involve private information. Recognizing the inseparable connection between social interaction and space (Lefebvre, 1991), we categorize these scenarios into three types: personal domain, small society, and large society. A detailed introduction to the classification is shown in Appendix B.2. In our dataset, $54\%$ of scenarios are in the personal domain, $37\%$ in small society, and $9\%$ in large society.
130
+
131
+ For social goals, we apply the ERG theory (Alderfer, 1969) to categorize these goals into three hierarchical needs: Existence, Relatedness, and Growth. Based on these needs, we identify eight key social goals (e.g., Information Acquisition, Relationship Building, Competition), detailed in Appendix B.3. Our dataset includes all these social goals, with the number of scenarios for each shown in Figure 4 (a). Generally, higher-level social goals are less frequent, reflecting real-life patterns.
132
+
133
+ The final dataset contains a diverse collection of 859 individual profiles with 366 types of occupations. The characters exhibit a wide range of attributes, demonstrating great diversity in terms of gender, age, occupation, Big Five personality traits, moral values, and decision-making styles, as shown in Figure 4 (b) and Figure 4 (c). For instance, in terms of gender, there are individuals identified as male, female, non-binary, genderqueer, and so on. Ages vary from childhood to old age, providing a spread across different life stages.
134
+
135
+ # 4 Experiments
136
+
137
+ # 4.1 Experimental Settings
138
+
139
+ Agent Models We evaluate various LLM families including Llama-2-7b/13b/70b-Chat (Touvron et al., 2023), Llama-3-8b/70b-Instruct (Dubey et al., 2024), Mistral-7b-Instruct-v0.3 (Jiang et al., 2023), Qwen2.5-7b/14b/72b-Chat (Team, 2024), GPT-3.5-Turbo (Ouyang et al., 2022) and GPT-4o (Achiam et al., 2023). For interactions between different models and further analysis, we involve Llama-3-8b, Qwen2.5-14b, GPT-3.5-turbo, and GPT-4o.
140
+
141
+ Judge Models To effectively leverage current LLMs for automating the evaluation of social interactions as third-party judges, we conduct a human evaluation and compare it with model-based evaluations. We randomly sample 100 simulated scenarios, posing a total of 584 social goal evaluation questions, and manually verify whether the communication history shows that the goals were achieved. Using human annotations as ground-truth labels, we calculate the accuracy of various models acting as judges, as shown in Table 8. Given the results, we select GPT-4o, Qwen2.5-72b, and Llama-3-70b from different model families as our judge models, referred to as Judge-GPT-4o, Qwen2.5, and Llama-3 hereafter. Additionally, we incorporate a majority voting mechanism across these models to create a mixture-of-experts model as another judge.
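+
+ The mixture-of-experts judge is a simple majority vote over the three judges' yes/no verdicts per goal, as sketched below (the judge keys are examples).
+
+ ```python
+ # Majority vote across the three judge models for a single goal.
+ def majority_vote(verdicts: dict[str, bool]) -> bool:
+     """verdicts, e.g. {"gpt-4o": True, "qwen2.5-72b": True, "llama-3-70b": False}."""
+     return sum(verdicts.values()) > len(verdicts) / 2
+ ```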
142
+
143
+ Implementation Details We use vLLM (Kwon et al., 2023) to deploy all open-source models: Qwen2.5-72b and Llama-3-70b on NVIDIA A100, and other models on NVIDIA RTX 4090. We apply AutoGen (Wu et al., 2024) to manage interacting and judging threads. We set max new tokens to 128 for all models. Temperature is set to 1 for agent models to encourage diversity, and 0 for judge models to ensure the stability of evaluation.
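+
+ A minimal sketch of this decoding configuration with vLLM is shown below; the model identifier is an example, and the offline `LLM` interface is used here for brevity rather than the served deployment.
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(model="Qwen/Qwen2.5-14B-Instruct")  # example open-source model
+
+ agent_params = SamplingParams(temperature=1.0, max_tokens=128)  # diverse role-play
+ judge_params = SamplingParams(temperature=0.0, max_tokens=128)  # stable evaluation
+
+ outputs = llm.generate(["<role-play prompt here>"], agent_params)
+ print(outputs[0].outputs[0].text)
+ ```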
144
+
145
+ <table><tr><td rowspan="3">Model</td><td colspan="8">Goal</td><td colspan="2">Info</td></tr><tr><td rowspan="2">Self</td><td rowspan="2">Other</td><td colspan="6">Judge</td><td rowspan="2">Acc.</td><td rowspan="2">PSI ↓</td></tr><tr><td>GPT-4o</td><td>Qwen2.5</td><td>Llama-3</td><td>Average</td><td>Majority</td><td>PSI ↓</td></tr><tr><td>Llama-2-7b</td><td>83.38</td><td>62.70</td><td>52.73</td><td>57.68</td><td>55.37</td><td>55.26</td><td>55.84</td><td>21.94</td><td>33.06</td><td>20.53</td></tr><tr><td>Llama-2-13b</td><td>48.01</td><td>10.26</td><td>17.38</td><td>30.11</td><td>72.19</td><td>39.90</td><td>30.91</td><td>21.84</td><td>28.56</td><td>18.39</td></tr><tr><td>Llama-2-70b</td><td>85.72</td><td>65.65</td><td>33.78</td><td>42.37</td><td>73.80</td><td>49.98</td><td>45.53</td><td>22.31</td><td>36.78</td><td>18.60</td></tr><tr><td>Llama-3-8B</td><td>87.63</td><td>67.28</td><td>79.90</td><td>82.55</td><td>75.10</td><td>79.18</td><td>80.71</td><td>12.85</td><td>69.68</td><td>15.14</td></tr><tr><td>Llama-3-70b</td><td>80.38</td><td>77.27</td><td>86.22</td><td>87.61</td><td>79.88</td><td>84.57</td><td>86.27</td><td>8.92</td><td>73.08</td><td>16.58</td></tr><tr><td>Qwen2.5-7b</td><td>86.17</td><td>61.92</td><td>77.07</td><td>79.30</td><td>71.99</td><td>76.12</td><td>77.37</td><td>13.10</td><td>74.82</td><td>15.84</td></tr><tr><td>Qwen2.5-14b</td><td>86.62</td><td>84.17</td><td>88.43</td><td>89.83</td><td>80.47</td><td>86.24</td><td>88.14</td><td>8.09</td><td>75.02</td><td>14.81</td></tr><tr><td>Qwen2.5-72b</td><td>90.67</td><td>85.89</td><td>88.29</td><td>89.03</td><td>78.57</td><td>85.30</td><td>87.74</td><td>8.19</td><td>76.05</td><td>13.57</td></tr><tr><td>Mistral-7b</td><td>95.22</td><td>87.25</td><td>79.29</td><td>84.13</td><td>77.82</td><td>80.41</td><td>82.37</td><td>12.39</td><td>66.59</td><td>18.55</td></tr><tr><td>GPT-3.5-turbo</td><td>90.16</td><td>76.62</td><td>82.12</td><td>84.37</td><td>77.30</td><td>81.26</td><td>82.64</td><td>10.01</td><td>68.41</td><td>18.37</td></tr><tr><td>GPT-4o</td><td>88.46</td><td>86.29</td><td>88.47</td><td>89.00</td><td>81.57</td><td>86.34</td><td>88.36</td><td>6.99</td><td>76.86</td><td>15.48</td></tr></table>
146
+
147
+ Table 2: Overall performance of the interactions of agents driven by the same models. We report the best performance in bold format and the second best in underlined format.
148
+
149
+ # 4.2 Overall Performance
150
+
151
+ Single Model-based Table 2 shows the overall performance of the interaction of agents driven by the same models. Considering that LLMs may overestimate their own performance, we use the judge majority score as the primary metric for cross-model comparisons, as it is more objective and stable than other metrics.
152
+
153
+ Overall Performance: GPT-4o leads as expected, while Qwen-series models also show strong social intelligence, especially Qwen2.5-14b, in both goal completion and information reasoning. Llama-2 series models perform poorly, with some improvement in the Llama-3 series, though still falling short of expectations. The interaction history of Llama-2-13b in Appendix D.9 reveals frequent struggles in maintaining roles, progressing conversations, and responding effectively to others. In terms of the stability of social intelligence, excluding the uncertainty introduced by the temperature parameter (Appendix D.2), the PSI results show that models with higher social intelligence, such as GPT-4o and Qwen, are also less sensitive to profile changes. Overall, different models' social abilities are well distinguished by AgentSense. Meanwhile, we observe that there is still room for improvement even for the SOTA models, indicating that LLMs still face challenges in diverse and complex social scenarios. We also analyze the differences in social interactions between single model-driven agents and humans (namely the HCI test, as shown in Appendix D.8), which indicates that the scenarios are challenging even for humans, and that interacting with humans generally improves the agents' performance.
154
+
155
+ ![](images/b7e1c497f1c66f979d4d5f084e1c36bb4f43f002f99eba98ffeec0d547bdc1ab.jpg)
156
+ Figure 5: (a) Judge majority score of interactions among different model-driven agents, highlighting that being a sender is more challenging. (b) Model performance as both attacker and defender, with notably weaker and less consistent results when acting as a defender.
157
+
158
+ ![](images/53c7e16afc006f165228b12d3d4e4f0ea003839dc3d95ec18265241b7776c101.jpg)
159
+
160
+ Evaluation Bias in Goal Completion: Llama-2-7b and Mistral-7b tend to overestimate themselves during the simulation, as can be seen from the Self and Other scores, since these interviews are answered by the same models that drive the social agents. Judges also exhibit specific preferences, with Qwen2.5-72b tending to favor Qwen-series models and GPT-4o tending to favor GPT-4o. Llama-3-70b tends to be conservative in judging both itself and others.
161
+
162
+ Pairwise Model-based We also evaluate how agents perform when interacting with other agents supported by different models. Given that our social scenarios can have more than two participants, we label each agent as either a sender or a receiver based on their social goals with the assistance of GPT-4o, inspired by the theory of communication (Blau, 1964; Barnlund, 2017). Senders share and transmit information, while receivers focus on understanding and responding.
163
+
164
+ Figure 5(a) presents the overall results of such interactions. GPT-4o and Qwen2.5-14b still perform
165
+
166
167
+
168
+ best. However, engaging with weaker models adversely affects all models' performance, particularly when the sender is the weaker agent. Our analysis (Appendix D.4) shows that weaker models struggle more as senders than as receivers. This is because senders take a more active role in social interactions, making the associated tasks inherently more challenging.
169
+
170
+ # 4.3 Analysis of Data Leakage
171
+
172
+ Since our data originates from scripts, it is essential to prevent data leakage and evaluate the effectiveness of leakage mitigation. Data leakage may arise from 1) the model's prior knowledge of the scenario's script, and 2) the information provided by the scenario that could help infer others' private information.
173
+
174
+ We first compute the similarity between the final test scenarios and the original scripts. The overlap between them is low, as indicated by the multiple metrics listed in Table 3. This shows that, after our intermediate data processing steps, the final test cases differ substantially from the original scripts at the text level.
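+
+ For reference, the three overlap measures can be computed as sketched below with standard-library implementations; the exact tokenization and normalization used for Table 3 may differ.
+
+ ```python
+ from difflib import SequenceMatcher
+
+ def jaccard(a: str, b: str) -> float:
+     ta, tb = set(a.lower().split()), set(b.lower().split())
+     return 100.0 * len(ta & tb) / len(ta | tb) if ta | tb else 0.0
+
+ def gestalt(a: str, b: str) -> float:
+     return 100.0 * SequenceMatcher(None, a, b).ratio()  # Gestalt pattern matching
+
+ def levenshtein_ratio(a: str, b: str) -> float:
+     # plain dynamic-programming edit distance, normalized to a similarity ratio
+     prev = list(range(len(b) + 1))
+     for i, ca in enumerate(a, 1):
+         cur = [i]
+         for j, cb in enumerate(b, 1):
+             cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
+         prev = cur
+     return 100.0 * (1 - prev[-1] / max(len(a), len(b), 1))
+ ```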
175
+
176
+ To further probe possible knowledge leakage in the test cases, we design two experiments: (1) Script Prediction: whether models can guess the original scripts from the background information, with 245 test samples (one per template). (2) Blind Test: whether models can answer the private-information reasoning questions from the initial scenario information alone, before any interaction, with 100 test questions each asked three times.
177
+
178
+ The script prediction results in Table 4 indicate that models are nearly unable to infer the original script from the background information. The blind test results also establish a baseline for each model's private information reasoning ability.
179
+
180
+ We also compute the similarity between simulated dialogues from different models and the original ones in the source scripts; details can be found
181
+
182
+ <table><tr><td>Jaccard</td><td>Gestalt</td><td>Levenshtein</td></tr><tr><td>10.58±3.24</td><td>3.35±2.24</td><td>11.62±4.29</td></tr></table>
+
+ Table 3: Similarity between our scenarios and their source scripts. Jaccard: Jaccard similarity; Gestalt: Gestalt matching ratio; Levenshtein: Levenshtein ratio. Values are reported as mean ± standard deviation. We suggest interpreting them with caution since the comparison is between texts of different forms.
183
+
184
+ <table><tr><td>Model</td><td>Script Acc.</td><td>Blind Acc.</td></tr><tr><td>GPT-4o</td><td>0.04</td><td>0.62</td></tr><tr><td>GPT-3.5-turbo</td><td>0.07</td><td>0.51</td></tr><tr><td>Mistral-7b</td><td>0.05</td><td>0.56</td></tr><tr><td>Llama-2-7b</td><td>0.06</td><td>0.35</td></tr><tr><td>Qwen2.5-7b</td><td>0.03</td><td>0.55</td></tr><tr><td>Llama-3-8b</td><td>0.04</td><td>0.54</td></tr><tr><td>Llama-2-13b</td><td>0.04</td><td>0.35</td></tr><tr><td>Qwen2.5-14b</td><td>0.06</td><td>0.61</td></tr><tr><td>Llama-2-70b</td><td>0.06</td><td>0.40</td></tr><tr><td>Llama-3-70b</td><td>0.04</td><td>0.59</td></tr><tr><td>Qwen2.5-72b</td><td>0.04</td><td>0.60</td></tr></table>
185
+
186
+ Table 4: Model performance on script prediction and blind test. The low Script Acc. indicates the model barely discerns the scripts, and the Blind Acc. establishes a baseline for model's reasoning ability.
187
+
188
+ in Appendix D.5.
189
+
190
+ # 5 Further Analysis
191
+
192
+ # 5.1 What goals are LLMs good/bad at?
193
+
194
+ In Section 3.5, we categorize all social goals into 8 types under ERG theory. Figure 6 illustrates the average goal completion scores of each goal type across different models. In general, all LLMs are good at goals about relationship management and cooperation. Compared with smaller models like Llama-3-8b, larger models like Qwen2.5-14b and GPT-4o gain significant improvement on goals about information exchange and identity recognition. However, there is still room for improvement on other goals like competition and conflict resolution. These are also the goals where LLMs tend to overestimate their progress.
195
+
196
+ We also compare goal completion scores across different scenario types, numbers of interaction rounds, and numbers of participants, and observe little or no significant difference with respect to these factors. More details can be found in Appendix D.6.
197
+
198
+ # 5.2 Which is harder: guessing thoughts or keeping secrets?
199
+
200
+ We further inspect the disparities in passive reasoning among different models. In interactive scenarios, agents with private information engage in a game where the defender aims to keep their information confidential, while the attacker seeks to uncover it, thus playing two distinct roles: (1) Defenders need to prevent the disclosure of their
201
+
202
+ ![](images/79617ca8928b1038f37f94a1b612a2b208dbc87e9a47e71d8dca9623bc8a6a39.jpg)
203
+ Figure 6: Goal completion scores of different goal types across models. LLMs perform best on relationship goals and cooperation, and worst on competition and conflict resolution (where they are also over-confident).
204
+
205
+ ![](images/a32afbca40545f94ff6755c2d0d38112a9813b2602de5e54a4f4a08e5b7860e7.jpg)
206
+
207
+ ![](images/adfa9dc7b95506d63b3614d1a29d7be4e6a99dd29f13f6d6835d0a2899351950.jpg)
208
+
209
+ ![](images/f35a811fecbc7ef08d4c783bca4e219923f0250b2a56502f45c77e2664f282ba.jpg)
210
+
211
+ private information. Their effectiveness is measured as 1 minus the attackers' success rate in inferring this information. (2) Attackers need to acquire others' private information. Their strength is measured by their own inference success rate, as formalized below.
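+
+ One way to make the two scores explicit (the notation is ours, introduced only for clarity):
+
+ ```latex
+ % A_i: agents questioning i's private information; T_i: agents whose secrets i questions.
+ % Acc(j -> i): accuracy of agent j on the multiple-choice questions about i's private information.
+ \mathrm{Defense}(i) = 1 - \frac{1}{|\mathcal{A}_i|} \sum_{j \in \mathcal{A}_i} \mathrm{Acc}(j \to i),
+ \qquad
+ \mathrm{Attack}(i) = \frac{1}{|\mathcal{T}_i|} \sum_{k \in \mathcal{T}_i} \mathrm{Acc}(i \to k)
+ ```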
212
+
213
+ The benchmark reveals that these roles represent distinct capabilities. Models may act as both attackers and defenders during interactions. To assess the models' abilities in these roles, we analyze the outcomes of their interactions.
214
+
215
+ Figure 5(b) outlines model performance. Most models are weak at defense, often revealing secrets and showing unstable performance across scenarios. GPT-4o excels at both keeping secrets and passive deduction. Qwen2.5-14b is aggressive and adept at inferring information, while Llama-3-8b is more conservative and better at keeping secrets.
216
+
217
+ # 5.3 What scenarios are more sensitive to profiles?
218
+
219
+ As mentioned in Section 3.2, we enrich the scenarios by replacing the original characters in the scripts with synthesized agents. Here we investigate the impact of profiles on social intelligence. We use the Chi-square test to identify abnormal templates (significance level $0.05$, see Appendix D.7). The results show that, under the above replacement and character-enrichment workflow, the null hypothesis $H_0$ (no significant difference across profiles) is not rejected in over $92.6\%$ of scenarios.
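+
+ A hedged sketch of this per-template test is given below; the contingency-table layout (goals achieved vs. missed per scenario) is an assumption made for illustration.
+
+ ```python
+ # Flag a template whose instantiated scenarios show significantly different goal outcomes.
+ from scipy.stats import chi2_contingency
+
+ def template_is_profile_sensitive(completed: list[int], failed: list[int],
+                                   alpha: float = 0.05) -> bool:
+     """completed/failed hold per-scenario counts of achieved vs. missed goals."""
+     _, p_value, _, _ = chi2_contingency([completed, failed])
+     return p_value < alpha  # True -> significant difference across profiles
+ ```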
220
+
221
+ We conduct a human evaluation on the remaining $7.4\%$ of abnormal scenarios that show a significant difference, in order to locate the key factors disturbing robustness. The results reveal that: (1) some social goals involve opinions or behaviors that run against widely shared values (e.g., smoking prohibition and emotional outbursts), which can be influenced by both the agent's personality and the LLM's alignment; (2) some scenarios require detailed personal-level information, making the general
222
+
223
+ replacement workflow fail, especially for emotional issues involving multiple characters.
224
+
225
+ # 6 Conclusion
226
+
227
+ In this paper, we introduce AgentSense, a benchmark evaluating LLMs' social intelligence via diverse and challenging social environments. Extensive experiments reveal that current LLMs struggle with complex social scenarios and high-level goals. Further analyses examine potential influencing factors in the evaluation and demonstrate the robustness and discriminative power of AgentSense.
228
+
229
+ # Acknowledgement
230
+
231
+ The work is supported by the National Key R&D Program of China (Grant No. 2023YFF1204800) and the National Natural Science Foundation of China (Grant No. 62176058). The project's computational resources are supported by the CFFF platform of Fudan University.
232
+
233
+ # Limitations
234
+
235
+ AgentSense builds on previous work by constructing social scenarios in a bottom-up manner, thereby increasing the diversity of social scenarios, enhancing the complexity of social interactions, and evaluating social intelligence from multiple perspectives, including goal completion and private information reasoning. Although we have taken proactive steps to extract social scenarios from scripts, with careful consideration to avoid data leakage, several concerns remain worth noting:
236
+
237
+ Potential Bias in Data Construction We remind readers that using a proprietary model for dataset construction may introduce potential bias, which might make the tasks easier for the data generator model, e.g., GPT-4o, during the evaluation. This is a general systemic bias for benchmarks built on model-synthesized data. We will explore additional data synthesis methods to minimize this risk.
238
+
239
+ Social Interaction Simplification AgentSense incorporates diverse scenarios and random turn-taking interactions to replicate challenges and uncertainty in social dynamics. Nevertheless, introducing more complex mechanisms for determining speaking order and incorporating dynamics such as interruptions could further enhance the realism and depth of the interactions.
240
+
241
+ Dramatic Features of Scripts Although most of the scripts we selected, such as Friends, are grounded in everyday life, and we manually validated the templates to ensure they closely reflect real-world scenarios, it's notable that some scenarios may exhibit dramatized characteristics. This may be reflected in their level of challenge, such as dealing with uncommon and highly complex interpersonal dynamics. While we believe these scenarios provide valuable insights into social intelligence in extreme situations, we recommend interpreting them with caution.
242
+
243
+ Manual Validation in Data Construction Although we have automated the scenario extraction process as much as possible, challenges in obtaining valid social goals and private information still require manual validation at certain stages. As a result, we have not expanded the scenarios to a larger scale, leaving it for future work.
244
+
245
+ # Ethics Statement
246
+
247
+ AgentSense is introduced to assess the social intelligence of LLM-driven agents. We do not encourage any agents that might disrupt social norms. We aim to offer insights that enhance LLMs' performance in complex social scenarios and promote effective, harmonious interactions among agents and between agents and humans. Besides, constructing role-playing agents can lead to anthropomorphism, resulting in unrealistic expectations, potential manipulation, and negative consequences. However, in AgentSense, we avoid having LLMs role-play specific individuals and instead portray various synthesized characters across different scenarios. For annotations, we paid the annotators according to the graduate wage standards of their respective countries.
248
+
249
+ # References
250
+
251
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman,
252
+
253
+ Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
254
+ Clayton P. Alderfer. 1969. An empirical test of a new theory of human needs. Organizational Behavior and Human Performance, 4(2):142-175.
255
+ Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua R Gubler, Christopher Ryting, and David Wingate. 2023. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337-351.
256
+ Dean C Barnlund. 2017. A transactional model of communication. In Communication theory, pages 47-57. Routledge.
257
+ Peter M Blau. 1964. Exchange and power in social life. New York: John Wiley.
258
+ Hongzhan Chen, Hehong Chen, Ming Yan, Wenshen Xu, Xing Gao, Weizhou Shen, Xiaojun Quan, Chenliang Li, Ji Zhang, Fei Huang, and Jingren Zhou. 2024a. Socialbench: Sociality evaluation of role-playing conversational agents. Preprint, arXiv:2403.13679.
259
+ Jiangjie Chen, Xintao Wang, Rui Xu, Siyu Yuan, Yikai Zhang, Wei Shi, Jian Xie, Shuang Li, Ruihan Yang, Tinghui Zhu, Aili Chen, Nianqi Li, Lida Chen, Caiyu Hu, Siye Wu, Scott Ren, Ziquan Fu, and Yanghua Xiao. 2024b. From persona to personalization: A survey on role-playing language agents. Preprint, arXiv:2404.18231.
260
+ Yun-Shiuan Chuang, Nikunj Harlalka, Siddharth Suresh, Agam Goyal, Robert Hawkins, Sijia Yang, Dhavan Shah, Junjie Hu, and Timothy T Rogers. 2024. The wisdom of partisan crowds: Comparing collective intelligence in humans and llm-based agents. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 46.
261
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
262
+ Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023. Improving language model negotiation with self-play and in-context learning from ai feedback. Preprint, arXiv:2305.10142.
263
+ Erving Goffman. 1959. The presentation of self in everyday life.
264
+ Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, and William A. Cunningham. 2023. Ai and the transformation of social science research. Science, 380(6650):1108-1109.
265
+ Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
266
+
267
+ Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, and Yixin Zhu. 2024. Evaluating and inducing personality in pre-trained language models. Advances in Neural Information Processing Systems, 36.
268
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.
269
+ Henri Lefebvre. 1991. The Production of Space.
270
+ Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023a. Camel: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991-52008.
271
+ Yuan Li, Yixuan Zhang, and Lichao Sun. 2023b. Metaagents: Simulating interactions of human behaviors for llm-based task-oriented coordination via collaborative generative agents. arXiv preprint arXiv:2310.06500.
272
+ Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2023. Agentbench: Evaluating llms as agents. Preprint, arXiv:2308.03688.
273
+ Yuhan Liu, Xiuying Chen, Xiaqing Zhang, Xing Gao, Ji Zhang, and Rui Yan. 2024. From skepticism to acceptance: Simulating the attitude dynamics toward fake news. arXiv preprint arXiv:2403.09498.
274
+ Abraham Harold Maslow. 1943. A theory of human motivation. Psychological Review, 50:370.
275
+ Xinyi Mou, Zhongyu Wei, and Xuanjing Huang. 2024. Unveiling the truth and facilitating change: Towards agent-based large-scale social movement simulation. In Findings of the Association for Computational Linguistics ACL 2024, pages 4789-4809.
276
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
277
+ Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. 2023. Toolllm: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789.
278
+
279
+ Sahand Sabour, Siyang Liu, Zheyuan Zhang, June M Liu, Jinfeng Zhou, Alvionna S Sunaryo, Juanzi Li, Tatia Lee, Rada Mihalcea, and Minlie Huang. 2024. Emobench: Evaluating the emotional intelligence of large language models. arXiv preprint arXiv:2402.12071.
280
+ Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019. Socialiqa: Common-sense reasoning about social interactions. Preprint, arXiv:1904.09728.
281
+ Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2024. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36.
282
+ Yunfan Shao, Linyang Li, Junqi Dai, and Xipeng Qiu. 2023. Character-llm: A trainable agent for roleplaying. Preprint, arXiv:2310.10158.
283
+ Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. 2023. Clever hans or neural theory of mind? stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763.
284
+ Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2024. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36.
285
+ Elizabeth Stade, Shannon Stirman, Lyle Ungar, Cody Boland, H. Schwartz, David Yaden, João Sedoc, Robert DeRubeis, Robb Willer, and Johannes Eichstaedt. 2024. Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation. npj Mental Health Research, 3.
286
+ Qwen Team. 2024. Qwen2.5: A party of foundation models.
287
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
288
+ Ronald E. Walker and Jeanne M. Foley. 1973. Social intelligence: Its history and measurement. Psychological Reports, 33(3):839-864.
289
+ Chenxu Wang, Bin Dai, Huaping Liu, and Baoyuan Wang. 2024. Towards objectively benchmarking social intelligence for language agents at action level. Preprint, arXiv:2404.05337.
290
+ Alex Wilf, Leena Mathur, Sheryl Mathew, Claire Ko, Youssouf Kebe, Paul Pu Liang, and Louis-Philippe Morency. 2023. Social-iq 2.0 challenge: Benchmarking multimodal social understanding.
291
+
292
+ Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, et al. 2024. Autogen: Enabling next-gen llm applications via multi-agent conversation. In ICLR 2024 Workshop on Large Language Model (LLM) Agents.
293
+ Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Bernard Ghanem, and Guohao Li. 2024. Can large language model agents simulate human trust behaviors? Preprint, arXiv:2402.04559.
294
+ Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, and Yang Liu. 2023. Exploring large language models for communication games: An empirical study on werewolf. arXiv preprint arXiv:2309.04658.
295
+ Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, and Louis-Philippe Morency. 2019. Social-iq: A question answering benchmark for artificial social intelligence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
296
+ Xuhui Zhou, Zhe Su, Tiwalayo Eisape, Hyunwoo Kim, and Maarten Sap. 2024a. Is this the real life? is this just fantasy? the misleading success of simulating social interactions with llms. arXiv preprint arXiv:2403.05020.
297
+ Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, and Maarten Sap. 2024b. Sotopia: Interactive evaluation for social intelligence in language agents. Preprint, arXiv:2310.11667.
298
+
299
+ # A Scenario Construction
300
+
301
+ # A.1 Scenario Template Construction Details
302
+
303
+ # A.1.1 Social Goal Extraction
304
+
305
+ The first step of social goal extraction is to use GPT-4o to extract the social goals of the current character. However, this direct approach has two issues that require further amendment:
306
+
307
+ - The goals may be too detailed (e.g. direct instruction), or depend on other characters' goals or some random events during the scenario. To address this problem, we prompt GPT-4o to rewrite the whole scenario, including all extracted social goals, into a similar but new scenario that avoids these issues to the best extent.
308
+ - Even after the conversion, some goals remain unattainable for characters, such as those requiring external information, physical actions, or evaluations beyond the scenario, e.g., goals
309
+
310
+ that are too abstract or long-term. Thus, we ask GPT-4o to determine whether each goal matches the above cases. If it does, we instruct it to either rewrite the goal or delete it if rewriting is not feasible.
311
+
312
+ # A.1.2 Private Information Extraction
313
+
314
+ The pipeline first determines whether the current scene involves the inference of private information. If it does, the pipeline proceeds to extract private information for each character from the script, otherwise the private information will be set to null. We use GPT-4o to perform reading comprehension on the scenes, extracting questions and answers that only one agent can respond to. The rephrased answers serve as the agent's private information, while the questions serve as test items for other agents. Finally, the pipeline generates three incorrect answer options, which, along with the original Q&A pair, form the complete evaluation question. To better assess passive reasoning abilities, we have rephrased the negative options to make them more homogeneous with the correct options.
315
+
316
+ # A.1.3 Leakage Mitigation and Template Generation
317
+
318
+ To prevent data leakage, we first perform scenario leakage mitigation. We prompt GPT-4o to extract elements associated with specific episodes, such as notable location entities like "The Facebook headquarters." Next, we ask GPT-4o to suggest replacement candidates to preserve the script's context and minimize the risk of identifying the specific episode.
319
+
320
+ # A.2 Prompts in Construction Pipeline
321
+
322
+ # A.2.1 Scenario Extraction
323
+
324
+ # Prompt 1: Episodes Division
325
+ ```txt
326
+ You are very good at reading scripts and extracting key information. According to discernible shifts in temporal settings, spatial locations, character dynamics and narrative progressions, divide the following script into multiple episodes. Do not delete or modify script content.
327
+ ```
328
+
329
+ ```txt
330
+ ##Script: {script}
331
+ ```
332
+
333
+ ```txt
334
+ Please return the results according to the following JSON structure:
335
+ ```
336
+
337
345
+
346
+ ```jsonl
347
+ [["episode1": "xxx", "episode2": "xxx", "episode3": "xxx", ...}]
348
+ ```
349
+
350
+ # Prompt 2: Scenes Division
351
+ ```txt
352
+ You are very good at reading scripts and extracting key information. According to the variations in the dialogue content, divide the following episode into multiple scenes. Do not delete or modify episode content.
353
+ ```
354
+
355
+ ```txt
356
+ ##Episode: {episode}
357
+ ```
358
+
359
+ ```txt
360
+ Please return the results according to the following JSON structure:
361
+ ```
362
+
363
371
+
372
+ ```jsonl
373
+ [{"scene1": "xxx", "scene2": "xxx", "scene3": "xxx", ...}]
374
+ ```
375
+
376
+ # Prompt 3: Descriptive Background Generation
377
+
378
+ You are an excellent writer good at analyzing story backgrounds.
379
+
380
+ You are given some information of a specific scenario in a story. More specifically:
381
+
382
+ - The story is split into scenes, and you are given the background of each scene until the current one;
383
+ - The current scene is also split into scenarios, and you are given the background of each scenario until the current one;
384
+ - Finally, you are given the current scenario's description and dialog.
385
+
386
+ ```txt
387
+ Write ONE paragraph to provide a DESCRIPTIVE background of the given scenario. A good background should cover the information that sets up the scenario, but does NOT reveal too many details from the scenario, or include irrelevant details.
388
+ ```
389
+
390
+ ```txt
391
+ Output a JSON document like {"background": "...}".
392
+ ```
393
+
394
+ ```txt
395
+ {scenario_json_string}
396
+ ```
397
+
398
+ # A.2.2 Social Goal Extraction
+
+ # Prompt 4: Original Social Goal Extraction
399
+
400
+ You are an excellent psychologist good at understanding social goals and needs.
401
+
402
+ You are given a social scenario with its background, description, and dialog. For the specific character of *{character}*, identify their social goals. Social goals typically fall into one of these categories:
403
+
404
+ - Exchange information with others;
405
+ - Build relationship with others;
406
+ - Maintain relationship or provide emotional support;
407
+ - Identify themselves with a group;
408
+ - Co-operate with others;
409
+ - Compete with others;
410
+ - Resolve conflicts.
411
+
412
+ Social goals should be objective, specific and clear; whether the character has achieved them should be observable.
413
+
414
+ The character can have one single goal or multiple independent goals in the scenario; find and list all of them. For each goal, write a sentence to describe the goal. Use infinitive verbs and third person pronouns.
415
+
416
+ Output a JSON document like {"name": "...", "goals": ["...", ..., ...]}.
417
+
418
+ {scenario_json_string}
419
+
420
+ # Prompt 5: Scenario Rewriting
421
+
422
+ You are an excellent psychologist good at designing social scenarios.
423
+
424
+ You are given a social scenario with background, description, and dialog. You are also given the social goals of several major characters.
425
+
426
+ Set up a new social scenario involving only these **major characters**. Each character's new social goals should appear **before** the scenario starts.
427
+
428
+ First, filter out contents from the background and description that describes the detail of the scenario; however, details of the beginning of the scenario can be kept. Second, rewrite each character's social goals so that it:
429
+
430
+ - DOES NOT rely on other character's goals;
431
+
432
+ - DOES NOT include potential action the character will take;
433
+
434
+ - Uses infinitive verbs and third person pronouns.
435
+
436
+ Filter out social goals that cannot obey these criteria. Modify the background/description to include more information if necessary.
437
+
438
+ Describe the background and description of the new scenario, and list the new social goals of each major character.
439
+
440
+ Output a JSON document like {"background": "...", "description": "...", "characters": ["name": "...", "goals": ["...", ..., ..."], ...]}}.
441
+
442
+ {scenario_json_string}
443
+
444
+ # Prompt 6: Social Goal Filtering
445
+
446
+ You are an excellent psychologist good at analyzing social goals.
447
+
448
+ You are given the social goals of a character in a designed social scenario. You are provided the background, description and character lists.
449
+
450
+ Now, for the specified goal, check if it needs to be rewritten or removed due to any of these reasons:
451
+
452
+ 1. The goal directly involves characters not participating in the scenario, e.g. 'deal with the client' (if 'client' is not in the list of characters);
453
+ 2. The goal requires information not provided in the background or description, e.g. 'describe the plan' (if the plan already exists but not provided);
454
+ 3. The goal is a physical action, e.g. 'fix the television';
455
+ 4. The goal is too abstract to evaluate, e.g. 'navigate professional challenges';
456
+ 5. The goal is too subjective to evaluate, e.g. 'maintain dignity';
457
+ 6. The goal is meaningless to evaluate, e.g. 'join the conversation'.
458
+
459
+ Write a detailed paragraph to examine the social goal. Compare it with each of the criteria above. If the goal matches one or more criteria above, check if you can rewrite the goal to avoid them. You should still remove the goal if this is not possible.
460
+
461
+ Based on your examination, write an updated version of the goal:
462
+
463
+ - If the goal is valid, return the original goal.
464
+ - If the goal can be rewritten, return the rewritten goal.
465
+ - If the goal needs to be removed, return an empty string.
466
+
467
+ Finally, any returned goal (if any) should be formatted into 'To xxx.', e.g. 'To share his/her discovery.' (including the final period).
468
+
469
+ Output a JSON document like {"examination": "...", "update": "..."}.
470
+
471
+ {scenario_with_current_character_goal_json_string}
472
+
473
+ # A.2.3 Private Information Extraction
474
+
475
+ # Prompt 7: Case Validation
476
+
477
+ You are an excellent psychologist who is good at analyzing the private information of each character in a social scenario. Private information refers to information that only the character knows and no one else knows.
478
+
479
+ To determine whether there is private information, we need to check whether a specific character has information known to him/her, and whether the information exists in the background and description. Because the information in the background and description will be obtained by all characters, only when a specific character can obtain this information through its own goal and this information does not exist in the background and description, it indicates that the scene is a scene involving private information reasoning.
480
+
481
+ You only need to return *Yes* or *No* to confirm whether there is any private information. The following is the background information, description, main characters and corresponding social goals:
482
+
483
+ Background: {background} Description: {description} Characters: {characters}
484
+
485
+ # Prompt 8: Private Info Generation
486
+
487
+ You are good at writing questions for specific roles based on a social scenario. Below you will be provided with background information, a description of the current scene, and the goals of each of the main characters.
488
+
489
+ Background: {background}
490
+
491
+ ##Description:{description}
492
+
493
+ Characters: {characters}
494
+
495
+ Please try to give some questions that the target character (in the following JSON format content, 'role' is used to refer to) can answer, but other characters will have difficulty answering before the interaction. These questions should strictly contain information that the target character knows, but is beyond the knowledge of other characters, so other characters cannot answer them at first. Specifically, the information required for these questions cannot appear in the background and description, because other characters will obtain this part as information. Questions cannot be expressed in the second person because the questions will eventually be used to ask other characters. For example, when the target character of a question is Rose, "Rose, why did you ..." is not a good question, but should be written as "Why did Rose ..."
496
+
497
+ Please provide a statement (in the following JSON format content, 'explanation' is used to refer to) that explains why the target character can answer the question, but other characters cannot. The statement should be objective factual information presented in the script, and should not mention the question, so it cannot appear in a sentence structure like "This question is ...".
498
+
499
+ Please provide the correct answer to the question, and the answer can be found in the information given.
500
+
501
+ Please use casual language as much as possible, and try to ask questions in the third person, such as "What is Jason's true identity?" Please answer in English. Please return the results according to the following JSON structure:
502
+
503
+ ```json
+ [ {"role": str, "question": str, "explanation": str, "answer": str}, {"role": str, "question": str, "explanation": str, "answer": str} ]
+ ```
512
+
513
+ # Prompt 9: Negative Option Generation
514
+
515
+ You are a multiple-choice generator. Given a description of social scenario, a question and an answer, you need to generate 3 additional incorrect options. Incorrect options should be expressed in a similar way to the answer, but need to have completely different actual meanings so that they are sufficiently distinguishable from the answer.
516
+
517
+ ##Description:{description}
518
+
519
+ Question: {question}
520
+
521
+ Answer: {answer}
+
+ Please return the results according to the following JSON structure:
+
+ ```json
+ [{"option1": "xxx", "option2": "xxx", "option3": "xxx"}]
+ ```
537
+
538
+ # Prompt 10: Negative Options Rephrasing
539
+
540
+ The following is information and a corresponding quiz for a social simulation scenario.
541
+
542
612
+
613
+ When I put myself in the role of {} to do the question, I thought the options were too easy. The problem was that the negative options were not closely related to the given scenario or the character's motivation.
614
+
615
+ The criterion for a good negative option is that it is impossible to determine which option is correct based on the above information only. Now I want to rewrite these negative options to make them more similar to the correct answers and make the questions as difficult as possible.
616
+
617
+ The returned negative options should be in the same format as provided, both in list format. Make sure the new negative options also have 3 options. The return should be given in json format, for example:
618
+
619
+ ```json
+ ["xxx", "xxx", "xxx"]
+ ```
633
+
634
+ # A.2.4 Leakage Mitigation and Template Generation
635
+
636
+ # Prompt 11: Entity Word Extraction
637
+
638
+ Your task is to extract key elements from the scene background and description, including location and characters.
639
+
640
+ ```txt
641
+ ## background:
642
+ {background}
643
+ ## description:
644
+ {description}
645
+ Output in the following JSON format:
646
+ {{"characters": [str, str...], "location": [str, str...], }}
647
+ ```
648
+
649
+ # Prompt 12: Entity Word Replacement
650
+
651
+ Please replace the provided scene background and description with a new location, and record the location before and after the modification.
652
+
653
+ ```txt
654
+ ## background:
655
+ {background}
656
+ ## description:
657
+ {description}
658
+ ## location involved:
659
+ {location_involved}
660
+ Output in the following JSON format:
661
+ {{"background_replace_location": str, "description_replace_location": str, "replace_location_list": {{{"original_name": str, "revised_name": str}}]}}
662
+ ```
663
+
664
+ # A.2.5 Scenario Instantiating
+
+ # Prompt 13: Attribute Extraction of Original Characters
665
+ ```txt
666
+ Template Example: !<INPUT 0>!
+ Description Information: !<INPUT 1>!
+ Characters: !<INPUT 2>!
+ Instruction: Generate a user profile for each character in the Characters according to the Template Example profile attributes and the Description Information. Try your best to fill in each attribute and NEVER respond with 'Unknown'. The secret attribute should be consistent with the private info given in the Information. You should respond with a JSON list with each character as a dict; within each character, use the attribute as a key and the corresponding content as the value.
+ Answer format:
+ --- json
+ [ { # character_1 profile } , { # character_2 profile } ]
667
+ ```
668
+
669
+ Prompt 14: Relationship Extraction of Original Characters
670
+ ```txt
671
+ Description Information: !<INPUT 0>!
672
+ Characters: !<INPUT 1>!
673
+ Relationship choice: [family, friend, romantic, acquaintance, stranger]
674
+ Instruction: Choose the relationship among the Characters according to the Description Information. The relationship can only be chosen from [family, friend, romantic, acquaintance, stranger]. Do not respond with Unknown or any other labels beyond the choices.
675
+ When all the characters have the same relationship, just reply with one key "relationship": Answer format 1:
676
+ --- json
677
+ { "relationship": # your_choice}
678
+ ...
679
+ When there exist multiple relationships among characters, reply with the following format: Answer format 2:
680
+ --- json
681
+ { "relationship": {"A_and_B": "#your_choice_1", "A_and_C": "#your_choice_2"}
682
+ ...
683
+ ```
684
+
685
+ Prompt 15: Characters Attribute Replace-ability Assessment
686
+ ```txt
+ [Description Info]: !<INPUT 0>!
+ [Relationship]: !<INPUT 1>!
+ [Characters]: !<INPUT 2>!
+ [Instructions]: According to the [Description Info] of a script and the [Relationship] among characters, determine whether each attribute of the [Characters] is replaceable with different settings without influencing the overall script.
+ Choose from [almost, maybe, no]. For example, if the Age attribute is almost replaceable, then the character's age has no impact on the background description; if the gender is not replaceable, then the character has to be a certain gender in the script.
+ Rules that help you choose: Family members usually have fixed ages and genders (if daughter or son appeared in the script); romantic relationships require exactly the same gender as the script; friends are usually similar ages, etc.
+ Answer with the following JSON format, where # is your output:
+ --- json
+ {
+ "name": #character_1, "age": "#your_choice", "occupation": "#your_choice", "gender": "#your_choice"
+ }
+ {
+ "name": #character_2, ...
+ }
+ ```
703
+
704
+ Prompt 16: Agent Synthesizing
705
+ ```txt
706
+ Please generate {num} diverse user profiles that meet the following requirements:
707
+ Gender: {cand_gender}
708
+ Age: {cand_age}
709
+ Occupation: {cand_occupation}
710
+ Please return your response in the following format of JSON:
711
+ [ {{"name":agent1, "gender":gender, "age":age, "occupation":occupation}}\}, {{"name":agent2, ...}}]
712
+ ```
713
+
714
+ <table><tr><td>Scripts</td><td>Episodes</td><td>Characters</td><td>Dialogs</td><td>Scenes</td><td>Tokens</td></tr><tr><td>American Psycho</td><td>10</td><td>81</td><td>182</td><td>35</td><td>2,596</td></tr><tr><td>Devil&#x27;s Advocate</td><td>10</td><td>83</td><td>371</td><td>41</td><td>4,415</td></tr><tr><td>10 Things I Hate About You</td><td>15</td><td>110</td><td>525</td><td>45</td><td>6,132</td></tr><tr><td>The Silence of the Lambs</td><td>15</td><td>77</td><td>242</td><td>26</td><td>5,441</td></tr><tr><td>Side Ways</td><td>13</td><td>68</td><td>175</td><td>35</td><td>3,244</td></tr><tr><td>The Social Network</td><td>160</td><td>581</td><td>1,694</td><td>237</td><td>24,515</td></tr><tr><td>Harry Potter</td><td>70</td><td>198</td><td>1,713</td><td>189</td><td>8,121</td></tr><tr><td>Derry Girl</td><td>191</td><td>24</td><td>4,454</td><td>428</td><td>31,942</td></tr><tr><td>Friends</td><td>185</td><td>32</td><td>3,045</td><td>264</td><td>28,428</td></tr><tr><td>Total</td><td>669</td><td>1,254</td><td>12,401</td><td>1,300</td><td>114,834</td></tr></table>
715
+
716
+ Table 5: Statistical Information of Original Scripts
717
+
718
+ # A.3 Data Validation Annotation
719
+
720
+ Social Goal As mentioned in Section 3.5, valid social goals should be both achievable and clear enough to evaluate. In the data validation annotation task, annotators need to check every social goal that appears in each scenario and decide whether it is valid, with or without rewriting. Available tags include:
721
+
722
+ - 0 (invalid): At least one character has no valid social goals.
723
+ - 1 (valid): All social goals are valid.
724
+ - 2 (need fix): Some social goals are invalid, but every character has at least one valid goal.
725
+
726
+ Private Information We define a private information set as valid if it is sufficient for the agent who owns it to answer the corresponding question. Available tags in the data validation annotation task include:
727
+
728
+ - 0: Invalid.
729
+ - 1: Valid, but private info needs to be combined with the information from the goal to be complete.
730
+ - 2: Valid, private info is complete on its own.
731
+ - 3: Valid, but private info needs to be combined with the correct answer from the info reason question column to be complete.
732
+
733
+ Annotation Statistics The kappa agreement for the first and second groups of annotators is 0.79 and 0.61, respectively. After majority voting, 37 templates still needed review; the corresponding data was handed over to another group of annotators. We pay the annotators according to the graduate wage standards of their respective countries.
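+
+ As a rough illustration of how such agreement figures can be obtained (the exact kappa variant used is not specified here), the sketch below computes pairwise Cohen's kappa with scikit-learn and a simple majority vote over three hypothetical annotators; the tag arrays are invented.
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import cohen_kappa_score
+
+ # Toy validity tags (0 / 1 / 2) from three annotators over ten templates.
+ a1 = np.array([1, 1, 0, 2, 1, 1, 0, 1, 2, 1])
+ a2 = np.array([1, 1, 0, 1, 1, 1, 0, 1, 2, 1])
+ a3 = np.array([1, 2, 0, 2, 1, 0, 0, 1, 2, 1])
+
+ # Pairwise Cohen's kappa as an agreement measure.
+ print(cohen_kappa_score(a1, a2), cohen_kappa_score(a1, a3))
+
+ # Majority vote per template; unresolved cases would go to an extra reviewer group.
+ votes = np.stack([a1, a2, a3])
+ majority = np.array([np.bincount(col).argmax() for col in votes.T])
+ print(majority)
+ ```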
734
+
735
+ # B Data Statistics and Analysis
736
+
737
+ # B.1 Detailed Statistical Overview of Scripts
738
+
739
+ Table 5 lists statistical information of the original scripts that AgentSense is based on.
740
+
741
+ # B.2 Social Scenario Categories
742
+
743
+ Table 6 lists the three social scenario categories and their corresponding definitions.
744
+
745
+ # B.3 Social Goal Classification
746
+
747
+ Maslow categorized the motivations of individual behavior into five levels (Maslow, 1943). Clayton Alderfer refined and simplified this into the ERG theory, categorizing needs into three types: Existence, Relatedness, and Growth (Alderfer, 1969). We apply the ERG theory to categorize individual goals in social interactions, as shown in Table 7.
748
+
749
+ # C Simulation Details
750
+
751
+ # C.1 Prompts for Agents
752
+
753
+ Prompt 17: Social Agent System Prompt
754
+ You are {name}. {profile}
755
+ Now you are in a social scene. {background}
756
+ In this scene, as {name}, you have some social goals and secrets that you don't want anyone else to know.
757
+ SOCIAL GOALS: {social_goal}
758
+ SECRETS: {private_info}.
759
+ Please advance your social goals while hiding your secrets as much as possible.
760
+ Based on the interactions below, what would you like to say next? Please answer in one paragraph.
761
+
762
+ Prompt 18: Judge Agent System Prompt
+ You are skilled at evaluating whether a person has achieved his or her social goal during social interactions.
763
+
764
+ # C.2 Simulation Details
765
+
766
+ Our simulation is built on the multi-agent conversation framework provided by AutoGen (Wu et al., 2024). Each episode of the simulation starts with a random agent greeting "Hi there!". The agents will then generate responses based on the system prompt and conversation history. When the conversation reaches the predefined maximum number of turns, the conversation history will be provided to each participant and the third-party judges to assess the agents' goal completion. Participants will also be asked to answer multiple-choice questions related to others' private information.
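+
+ As a rough, framework-agnostic sketch of this loop (not the actual AutoGen configuration), the code below runs a round-robin conversation given per-agent system prompts built as in Prompt 17; `call_llm` is a stand-in for the model backend, and the judging step is only indicated in comments.
+
+ ```python
+ import random
+
+ def run_episode(agents, call_llm, max_turns=20):
+     """Round-robin multi-agent chat sketch mirroring the setup described above.
+
+     `agents` maps an agent name to its system prompt (cf. Prompt 17);
+     `call_llm(system_prompt, history)` is a stand-in for the LLM backend.
+     """
+     names = list(agents)
+     # Each episode opens with a random agent greeting "Hi there!".
+     history = [(random.choice(names), "Hi there!")]
+     for turn in range(max_turns):
+         speaker = names[turn % len(names)]
+         reply = call_llm(agents[speaker], history)
+         history.append((speaker, reply))
+     # After the turn limit, the full history would be handed to the participants
+     # and to the third-party judges for goal-completion scoring, and participants
+     # would answer multiple-choice questions about others' private information.
+     return history
+ ```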
767
+
768
+ <table><tr><td>Personal Domain</td><td>Interactions in the personal domain often happen at home, during private gatherings, or in intimate settings, involving family, close friends, and partners.</td></tr><tr><td>Small Society</td><td>Small society interactions occur in smaller social environments like schools, workplaces, and communities, occurring in classrooms, offices, conference rooms, community activity centers, and other places.</td></tr><tr><td>Large Society</td><td>Large society interactions take place in broader contexts such as public spaces, online platforms, and international conferences.</td></tr></table>
772
+
773
+ Table 6: Social scenario categories with definitions.
774
+
775
+ <table><tr><td colspan="2">Existence Needs</td></tr><tr><td>Brief Goals</td><td>Similar to Maslow&#x27;s physiological and safety needsInformation Acquisition, Information Provision</td></tr><tr><td colspan="2">Relatedness Needs</td></tr><tr><td>Brief Goals</td><td>Similar to Maslow&#x27;s social needs, it involves relationships and interactions with others.Relationship Building, Relationship Maintenance, Identity Recognition</td></tr><tr><td colspan="2">Growth Needs</td></tr><tr><td>Brief Goals</td><td>Similar to Maslow&#x27;s needs for respect and self actualization, it involves personal development and self-improvement.Cooperation, Competition, Conflict Resolution</td></tr></table>
776
+
777
+ Table 7: Social Goal Classification Based on ERG Theory
778
+
779
+ <table><tr><td>Models</td><td>Accuracy</td></tr><tr><td>GPT-4o</td><td>0.82</td></tr><tr><td>GPT-4-turbo</td><td>0.80</td></tr><tr><td>Qwen2.5-72b</td><td>0.79</td></tr><tr><td>Qwen2.5-14b</td><td>0.78</td></tr><tr><td>Llama-3-70b</td><td>0.74</td></tr><tr><td>Llama-3-8b</td><td>0.72</td></tr></table>
780
+
781
+ Table 8: Accuracy of models judging the goal completion when taking human evaluations as reference.
782
+
783
+ # D Additional Experiment Results
784
+
785
+ # D.1 Human Evaluation of Goal Completion
786
+
787
+ Table 8 illustrates the accuracy of different models in judging goal completion given a specific conversation history. GPT-4o, Qwen2.5-72b and Llama-3-8b stand out as the most reliable judges in their model families. Thus, we select these models as the third-party judges.
788
+
789
+ # D.2 Single Model-based Subset Experiment
790
+
791
+ Table 9 shows additional experiment results on a subset of the test scenarios during the interaction of homogeneous agents. We sample one scenario from each template, resulting in a subset with 245 scenarios. We test each model on the subset 3 times with the same settings as the main experiment.
794
+
795
+ # D.3 Pairwise Model-based Experiment Prompt
796
+
797
+ Prompt 19: Prompt for Sender and Receiver Recognition
798
+
799
+ In the context of social interactions, please categorize each individual into one of the following roles based on their primary social goals:
800
+
801
+ Sender: This role is characterized by the goal of sharing, transmitting, or providing information or opinions to others.
802
+
803
+ Receiver: This role is characterized by the goal of receiving, understanding, or reacting to information shared by others.
804
+
805
+ Please review the following individuals involved in the interaction and assign each one the appropriate role: {data}
806
+
807
+ Please return your response in the following format of JSON: {{"user1":"sender", "user2":"receiver"}}
808
+
809
+ <table><tr><td rowspan="2">Models</td><td colspan="2">Self</td><td colspan="2">Other</td><td colspan="5">Judge Mean</td><td>Judge SD</td><td colspan="2">Info</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>GPT</td><td>Qwen</td><td>Llama</td><td>Average</td><td>Majority</td><td>Majority</td><td>Mean</td><td>SD</td></tr><tr><td>Llama-2-7b</td><td>81.71-2.00%</td><td>1.42</td><td>61.30-2.24%</td><td>2.30</td><td>56.77</td><td>56.69</td><td>53.19</td><td>55.55</td><td>56.05+0.38%</td><td>2.60</td><td>32.68-1.16%</td><td>1.61</td></tr><tr><td>Llama-2-13b</td><td>47.98-0.06%</td><td>1.48</td><td>10.25-0.13%</td><td>0.61</td><td>28.78</td><td>30.84</td><td>72.05</td><td>43.89</td><td>35.20+13.87%</td><td>2.36</td><td>26.86-5.96%</td><td>1.97</td></tr><tr><td>Llama-2-70b</td><td>85.71-0.01%</td><td>0.58</td><td>66.15+0.76%</td><td>0.76</td><td>39.39</td><td>40.85</td><td>74.45</td><td>51.56</td><td>45.98+1.00%</td><td>0.62</td><td>36.24-1.48%</td><td>1.12</td></tr><tr><td>Llama-3-8B</td><td>86.71-1.05%</td><td>0.63</td><td>65.36-2.85%</td><td>1.12</td><td>80.84</td><td>81.02</td><td>73.33</td><td>78.40</td><td>79.47-1.53%</td><td>1.42</td><td>64.90-6.86%</td><td>2.79</td></tr><tr><td>Llama-3-70b</td><td>79.58-0.99%</td><td>0.43</td><td>77.29+0.03%</td><td>0.18</td><td>86.95</td><td>87.13</td><td>80.80</td><td>84.96</td><td>86.30+0.03%</td><td>0.24</td><td>70.71-3.24%</td><td>1.08</td></tr><tr><td>Qwen2.5-7b</td><td>84.69-1.72%</td><td>0.98</td><td>61.37-0.88%</td><td>0.84</td><td>79.79</td><td>78.59</td><td>70.74</td><td>76.37</td><td>77.82+0.58%</td><td>0.77</td><td>70.59-5.65%</td><td>3.26</td></tr><tr><td>Qwen2.5-14b</td><td>86.68+0.07%</td><td>0.65</td><td>83.88-0.34%</td><td>0.34</td><td>90.22</td><td>89.41</td><td>79.97</td><td>86.20</td><td>88.61+0.54%</td><td>0.63</td><td>75.75+0.97%</td><td>2.41</td></tr><tr><td>Qwen2.5-72b</td><td>90.63-0.04%</td><td>0.95</td><td>86.17+0.33%</td><td>0.91</td><td>89.04</td><td>88.67</td><td>78.30</td><td>85.33</td><td>87.37-0.43%</td><td>1.02</td><td>72.41-4.78%</td><td>1.84</td></tr><tr><td>Mistral-7b</td><td>95.20-0.02%</td><td>0.81</td><td>86.12-1.30%</td><td>0.85</td><td>83.42</td><td>83.91</td><td>76.81</td><td>81.38</td><td>82.77+0.49%</td><td>0.56</td><td>64.22-3.56%</td><td>1.43</td></tr><tr><td>GPT-3.5-turbo</td><td>96.04+6.53%</td><td>0.34</td><td>74.74-2.45%</td><td>0.44</td><td>78.68</td><td>80.82</td><td>70.46</td><td>76.65</td><td>78.18-5.40%</td><td>0.90</td><td>69.88+2.15%</td><td>1.48</td></tr><tr><td>GPT-4o</td><td>90.10+1.86%</td><td>0.32</td><td>88.18+2.19%</td><td>0.30</td><td>90.92</td><td>90.12</td><td>81.73</td><td>87.59</td><td>89.61+1.41%</td><td>0.30</td><td>77.31+0.59%</td><td>1.44</td></tr></table>
810
+
811
+ Table 9: Subset experiment results for interactions of agents driven by the same models. The percentage shows the gap with the main results in Table 2. The SD is the standard deviation over the 3 runs.
812
+
813
+ <table><tr><td>Model</td><td>Jaccard</td><td>Gestalt</td><td>Levenshtein</td></tr><tr><td>Llama-2-7b</td><td>12.15±3.1</td><td>2.53±1.22</td><td>9.59±2.36</td></tr><tr><td>Llama-2-13b</td><td>13.06±3.32</td><td>2.88±1.35</td><td>10.43±2.45</td></tr><tr><td>Llama-2-70b</td><td>12.71±3.29</td><td>2.69±1.26</td><td>10.02±2.46</td></tr><tr><td>Llama-3-8b</td><td>11.30±3.05</td><td>2.31±1.05</td><td>8.84±2.36</td></tr><tr><td>Llama-3-70b</td><td>11.02±3.05</td><td>2.18±0.95</td><td>8.48±2.31</td></tr><tr><td>Qwen2.5-7b</td><td>10.45±2.85</td><td>2.19±1.13</td><td>8.85±2.42</td></tr><tr><td>Qwen2.5-14b</td><td>10.34±2.94</td><td>2.46±1.18</td><td>9.54±2.59</td></tr><tr><td>Qwen-72b</td><td>10.8±3.13</td><td>2.46±1.22</td><td>9.41±2.61</td></tr><tr><td>Mistral-7b</td><td>9.03±2.49</td><td>2.25±1.06</td><td>8.93±2.27</td></tr><tr><td>GPT-3.5-turbo</td><td>12.09±3.02</td><td>2.99±1.53</td><td>11.11±2.59</td></tr><tr><td>GPT-4o</td><td>11.63±3.18</td><td>2.97±1.53</td><td>10.85±2.59</td></tr></table>
814
+
815
+ Table 10: Similarity between simulated dialogues by different models and the original dialogue in corresponding source scripts. Jaccard: Jaccard similarity; Gestalt: Gestalt matching ratio; Levenshtein: Levenshtein ratio.
816
+
817
+ # D.4 Pairwise Model-based Additional Results
818
+
819
+ Figure 7 and Figure 8 illustrate the judge majority score of goal completion for senders and receivers, respectively. Llama-3-8b and GPT-3.5-turbo perform better when acting as receivers than as senders, while Qwen2.5-14b and GPT-4o handle both roles well.
820
+
821
+ # D.5 Generated Dialogue-Script Similarity
822
+
823
+ Table 10 lists the similarity metrics between dialogues simulated by different models and the original dialogues in the corresponding source scripts. Generated dialogues do not replicate or closely follow the source scripts, including those created by GPT-4o, which itself is used to build the benchmark. This is expected, since the revised scenarios already differ substantially from the original scripts, and our experimental settings further make it difficult for models to reproduce dialogues from the scripts.
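+
+ For reference, the three measures can be approximated as in the sketch below: `difflib.SequenceMatcher` yields the Gestalt matching ratio, the Jaccard similarity is taken over word sets here, and the Levenshtein ratio relies on the optional `python-Levenshtein` package; the example sentences and the exact preprocessing are assumptions.
+
+ ```python
+ from difflib import SequenceMatcher
+
+ import Levenshtein  # pip install python-Levenshtein
+
+ def jaccard(a: str, b: str) -> float:
+     """Jaccard similarity between the word sets of two dialogues."""
+     wa, wb = set(a.lower().split()), set(b.lower().split())
+     return len(wa & wb) / len(wa | wb) if wa | wb else 0.0
+
+ def gestalt(a: str, b: str) -> float:
+     """Gestalt (Ratcliff/Obershelp) matching ratio."""
+     return SequenceMatcher(None, a, b).ratio()
+
+ def levenshtein_ratio(a: str, b: str) -> float:
+     """Normalized Levenshtein similarity."""
+     return Levenshtein.ratio(a, b)
+
+ generated = "Hi there! I was hoping we could talk about the stolen paintings."
+ original = "Hello. I wanted to ask you about the missing art pieces."
+ print(jaccard(generated, original), gestalt(generated, original),
+       levenshtein_ratio(generated, original))
+ ```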
824
+
825
+ # D.6 Goal Completion Score Additional Results
826
+
827
+ Figure 9 compares goal completion scores under different scenario types across models. In general, models are more likely to achieve social goals in smaller environments, yet the difference is relatively small, especially for larger models. Qwen2.5-14b shows the most balanced performance among the tested models, even surpassing GPT-4o in large society scenarios.
828
+
829
+ Figure 10 demonstrates how goal completion scores change as we increase the number of interaction rounds. There is no single best number of rounds across all three evaluation aspects (self, other, and external), and the trends also vary between models. Again, larger models are more robust to this factor, indicating that they can achieve their goals within a few interactions while staying focused throughout the dialogue.
830
+
831
+ Figure 11 illustrates the relation between goal completion scores and the number of participants in the scenario. As expected, social goals become harder to achieve when more agents are involved. Note that in our benchmark, most 5-agent scenarios have relatively easy goals (e.g., a group of friends having a casual chat about a subject), leading to a higher average score than for 4-agent scenarios. Therefore, we argue that the type of goal matters more than the number of agents when measuring the difficulty of a social scenario.
832
+
833
+ # D.7 Experiment on Profiles' Effect on Scenarios
834
+
835
+ We discuss this experiment in Section 5.3. A Chi-square test is employed to identify abnormal templates at a significance level of 0.05.
836
+
837
+ ![](images/6ba763e37c025a469fe73c2a3b762e652de26753bce4f1b919c0e63d5bd2bfee.jpg)
838
+ Figure 7: Judge majority score of senders in the interactions among different agents.
839
+
840
+ ![](images/aa4f1b19fb3b54de72c92ba290a539db4c1cd2552a5daf9d239f35047faa07a6.jpg)
841
+ Figure 8: Judge majority score of receivers in the interactions among different agents.
842
+
843
+ ![](images/9e3ec05e610ddf57f6a392141b45a64883ca52325250594b4d7b53a3ea3dd580.jpg)
844
+ Figure 9: Goal completion scores under different scenario types across models.
845
+
846
+ ![](images/62d0cad068216064dd018e71126b2bb6eb29536362c5dafb7c6e76136cdf1cd3.jpg)
847
+ Figure 10: Goal completion scores under different number of rounds across models.
848
+
849
+ ![](images/9a5d40f384dc93bb8e385bfc74fcef1ecee2880ec8b0d67a8313e83ca87b2d42.jpg)
850
+ Figure 11: Goal completion scores under different number of participants across models.
851
+
852
+ <table><tr><td rowspan="2">Agent Model</td><td colspan="2">Goal (Judge Majority)</td><td colspan="2">Info</td></tr><tr><td>Human</td><td>Agent</td><td>Human</td><td>Agent</td></tr><tr><td>GPT-4o</td><td>87.50</td><td>80.83</td><td>68.75</td><td>77.08</td></tr><tr><td>GPT-3.5-turbo</td><td>83.33</td><td>72.92</td><td>77.92</td><td>68.75</td></tr></table>
853
+
854
+ Table 11: Human and agent performance when roleplaying the same person.
855
+
856
+ <table><tr><td rowspan="2">Agent Model</td><td colspan="2">Goal (Judge Majority)</td><td colspan="2">Info</td></tr><tr><td>w/ HCI</td><td>w/o HCI</td><td>w/ HCI</td><td>w/o HCI</td></tr><tr><td>GPT-4o</td><td>78.75</td><td>75.27</td><td>75.38</td><td>71.98</td></tr><tr><td>GPT-3.5-turbo</td><td>74.58</td><td>75.56</td><td>74.24</td><td>61.87</td></tr></table>
857
+
858
+ Table 12: Agent performance when interacting with human (w/ HCI) or with other agents (w/o HCI).
859
+
860
+ Typically, the Chi-square test statistic is formulated as follows:
861
+
862
+ $$
863
+ \chi^2 = \frac{(n - 1) s^2}{\sigma_0^2} \tag{1}
864
+ $$
865
+
866
+ where $n$ denotes the number of samples, $s^2$ denotes the sample variance, and $\sigma_0^2$ denotes the population variance.
867
+
868
+ The Chi-square test has two hypotheses:
869
+
870
+ - Null hypothesis $H_0$: the sample variance does not differ significantly from the population variance.
+ - Alternative hypothesis $H_1$: the sample variance differs significantly from the population variance.
872
+
873
+ If the p-value of the test statistic exceeds the significance level, we retain the null hypothesis $H_0$; otherwise, we reject $H_0$ and accept the alternative hypothesis $H_1$.
874
+
875
+ In our experiment, we first calculate the variance over all scenarios as the population variance. Then, the variance of the scenarios derived from the same template is calculated as the sample variance. As we expand each template into five scenarios, the sample size is fixed at 5. We apply the Chi-square test to all 11 models tested in our main experiments. For each template, if the test rejects $H_0$ for more than 5 of the models, the template's performance differs significantly from the overall performance for the majority of models. In this case, the template is marked as abnormal during profile enrichment, eventually resulting in 18 abnormal templates.
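+
+ Concretely, the per-template test can be sketched with SciPy as follows; the toy variances, the two-sided p-value, and the helper name are illustrative assumptions.
+
+ ```python
+ from scipy.stats import chi2
+
+ def rejects_h0(sample_var, population_var, n=5, alpha=0.05):
+     """Chi-square test of a template's score variance against the population.
+
+     Returns True when H0 (no significant difference in variance) is rejected;
+     a template is marked abnormal when more than 5 of the 11 models reject H0.
+     """
+     stat = (n - 1) * sample_var / population_var  # Equation (1)
+     df = n - 1
+     # Two-sided p-value: the template variance may be abnormally small or large.
+     p_value = 2 * min(chi2.cdf(stat, df), chi2.sf(stat, df))
+     return p_value < alpha
+
+ # Toy example for a single (template, model) pair.
+ print(rejects_h0(sample_var=0.42, population_var=1.87))
+ ```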
876
+
877
+ # D.8 Human-Computer Interaction Performance
878
+
879
+ We sample 20 tasks covering different types of scenarios and numbers of participants. We randomly assign one participant role in each task to a human and instruct 5 people to interact with the models. The results are shown in Table 11 and Table 12. On the one hand, humans outperform models in goal completion but may struggle to infer private information against stronger models like GPT-4o; against weaker models like GPT-3.5, inference is easier for humans, since these models are prone to exposing information. On the other hand, interacting with humans enhances agents' goal completion and reasoning performance, as humans prioritize their own goals, sometimes at the expense of revealing information.
880
+
881
+ # D.9 Under-performing Cases of Llama-2-7b
882
+
883
+ Tables 13, 14 and 15 list example cases where Llama-2-7b under-performed in social scenarios due to different failure modes: maintaining its role, moving the conversation forward, and responding effectively.
884
+
885
+ <table><tr><td rowspan="3">Profile</td><td>Agent4</td></tr><tr><td>Social Goal: To respond calmly to AliceGreene&#x27;s flirtatious remarks.</td></tr><tr><td>Private Info: Agent4 is aware of AliceGreene&#x27;s flirtatious nature and is trying to handle it delicately. To respond calmly to AliceGreene&#x27;s flirtatious remarks, agent4 is trying to stay calm with AliceGreene to maintain the peace and avoid embarrassment in front of his friends.</td></tr><tr><td></td><td>...</td></tr><tr><td></td><td>Agent4: smirks</td></tr><tr><td></td><td>Oh ho ho! Miss Greene, you are a tricky one, aren&#x27;t you?</td></tr><tr><td></td><td>I can see that you&#x27;re trying to get me to reveal one of my secrets. laughs</td></tr><tr><td></td><td>But I&#x27;m not so easily swayed. I think I can handle whatever secrets you have up your sleeve.</td></tr><tr><td></td><td>winks</td></tr><tr><td></td><td>So, do tell, what&#x27;s one of your favorite secret ingredients? leaning in</td></tr><tr><td>Chat History</td><td>AliceGreene: giggles</td></tr><tr><td></td><td>Oh my, you&#x27;re so close! leans in closer</td></tr><tr><td></td><td>I&#x27;ll give you a hint: it involves a dash of wit, a pinch of Sass, and a healthy dose of whimsy.</td></tr><tr><td></td><td>bats eyelashes</td></tr><tr><td></td><td>But shh, it&#x27;s a secret! pecks you on the cheek</td></tr><tr><td></td><td>Now, let&#x27;s get this party started! takes your hand and leads you to the dance floor</td></tr><tr><td></td><td>...</td></tr><tr><td>Judge</td><td>The behavior of the character is contrary to the character setting. Despite AliceGreene&#x27;s flirtatious remarks, agent4 has not responded calmly, but has instead engaged in playful banter and flirtation.</td></tr></table>
886
+
887
+ Table 13: Llama-2-7b under-performing case 1: difficulty in maintaining role.
888
+
889
+ <table><tr><td rowspan="2">Profile</td><td>John Doe</td></tr><tr><td>Social Goal: To seek agent4&#x27;s insight about a current investigation.</td></tr><tr><td rowspan="17">Chat History</td><td>...</td></tr><tr><td>John Doe: approaching agent4 with a confident smile</td></tr><tr><td>Hello! I hope you&#x27;re enjoying the exhibit. My name is John Doe, and I&#x27;m an FBI agent. I couldn&#x27;t help but notice that you have a keen eye for art. I was wonder if I could pick your brain for a moment about a current investigation I&#x27;m working on. extending her hand for a handshake</td></tr><tr><td>Agent4: Oh, hello there John Doe!</td></tr><tr><td>shaking hands firmly</td></tr><tr><td>Ah, an FBI agent, you say?</td></tr><tr><td>smiling broadly</td></tr><tr><td>I must admit, I do have a bit of a fondness for art and the intrigue that surrounds it. leaning in slightly, lowering my voice</td></tr><tr><td>But please, do tell, what is this investigation you speak of? I would be more than happy to offer my expertise, as long as it&#x27;s not too confidential, of course.</td></tr><tr><td>winking subtly</td></tr><tr><td>...</td></tr><tr><td>John Doe:</td></tr><tr><td>smiling</td></tr><tr><td>I&#x27;m so glad you&#x27;re here. I&#x27;ve been hoping to run into you tonight. I&#x27;m working on a case that involves some stolen art pieces, and I was hoping you might be able to offer some insight.</td></tr><tr><td>leaning in slightly</td></tr><tr><td>Do you recognize any of the pieces on display tonight?</td></tr><tr><td>...</td></tr><tr><td rowspan="2">Judge</td><td>The dialogue lacks substance and fails to move forward within the limited number of turns.</td></tr><tr><td>John Doe&#x27;s goal was to seek agent4&#x27;s insight about a current investigation, but the conversationdidn&#x27;t lead to any meaningful discussion or exchange of information about the investigation. The conversation was more focused on introductions and small talk, and agent4 didn&#x27;t provide any substantial input or insight about the investigation.</td></tr></table>
890
+
891
+ Table 14: Llama-2-7b under-performing case 2: difficulty in moving conversation forward
892
+
893
+ <table><tr><td rowspan="2">Profile</td><td>MichaelRodrigues</td></tr><tr><td>Social Goal: To gain insight into his romantic future.</td></tr><tr><td rowspan="15">Chat History</td><td>...</td></tr><tr><td>MichaelRodrigues:</td></tr><tr><td>So, uh, Raven, I was wondering if maybe, ya know, you could give me a little fortune -telling reading? Maybe shed some light on my romantic future? smirk</td></tr><tr><td>Get it? Light? Like, you&#x27;re a fortune teller and all? Ah, never mind. It&#x27;s nothing. Just a little curiosity I have. Giggle</td></tr><tr><td>RavenMoonshadow:</td></tr><tr><td>Ah, my dear friend Michael!smiling warmly</td></tr><tr><td>It&#x27;s so good to see you here, amidst all this chaos. I sense that you&#x27;re seeking some guidance, perhaps a glimpse into the future?perking up with interest</td></tr><tr><td>Tell me, what&#x27;s on your mind? What do you hope to discover?</td></tr><tr><td>MichaelRodrigues:smiling mischievously</td></tr><tr><td>So, do you see any potential love interests in our little group here? Maybe someone with a ashing smile and a penchant for danger?winks</td></tr><tr><td>RavenMoonshadow:</td></tr><tr><td>raises an eyebrow playfullyOh ho, Mr. Rodrigues!smirks</td></tr><tr><td>I think I might have just the person in mind.pauses for dramatic effect</td></tr><tr><td>But before I reveal my romantic expertise, tell me: what&#x27;s your zodiac sign?tilts her head to the side and studies his palm</td></tr><tr><td>...</td></tr><tr><td>Judge</td><td>Failure to respond effectively to others led to their goals not being achieved.Michael Rodrigues focused more on flirting and trying to create a romantic atmosphere with Raven Moonshadow, but he hasn&#x27;t received any direct insights about his own romantic future from her.</td></tr></table>
894
+
895
+ Table 15: Llama-2-7b under-performing case 3: difficulty in responding effectively
agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0843c3bd2aab19fb96c955c642da35ac69a034a292cdd52e80337141646a35d0
3
+ size 1665884
agentsensebenchmarkingsocialintelligenceoflanguageagentsthroughinteractivescenarios/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38d75aecdd5c59506ae6230ea693b471e5ea67109ff62c0d75924b0b2d50cfbe
3
+ size 684450
aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/3f03da78-6d2f-40af-bfc2-3d46976aa364_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3fe8668647f22e1aeadcb1fc257956aca3813a4ddc73424b34608529ceafefb6
3
+ size 121645
aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/3f03da78-6d2f-40af-bfc2-3d46976aa364_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:90b0f728bee558e66bfcb7cee1f4f7a83befb43841859a794d86e04243619d6a
3
+ size 134556
aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/3f03da78-6d2f-40af-bfc2-3d46976aa364_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8f67c8ee809bea204845c55bdc991a3a30a909d9d53a1cb9182acd3811feafb5
3
+ size 758705
aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/full.md ADDED
@@ -0,0 +1,575 @@
 
 
 
 
1
+ # Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors
2
+
3
+ Georgios Chochlakis<sup>1</sup>, Alexandros Potamianos<sup>1,2</sup>, Kristina Lerman<sup>1</sup>, Shrikanth Narayanan<sup>1</sup>
4
+
5
+ <sup>1</sup>University of Southern California, <sup>2</sup>National Technical University of Athens
6
+
7
+ Correspondence: chochlak@usc.edu
8
+
9
+ # Abstract
10
+
11
+ In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs). The knowledge acquired during pre-training is crucial for this few-shot capability, providing the model with task priors. However, recent studies have shown that ICL predominantly relies on retrieving task priors rather than "learning" to perform tasks. This limitation is particularly evident in complex subjective domains such as emotion and morality, where priors significantly influence posterior predictions. In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt. Moreover, we evaluate the posterior bias towards certain annotators by grounding our study in appropriate, quantitative measures of LLM priors. Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead. However, aggregation does not explain the entire gap between ICL and the state of the art, meaning other factors in such tasks also account for the observed phenomena. Finally, by rigorously studying annotator-level labels, we find that it is possible for minority annotators to both better align with LLMs and have their perspectives further amplified.
12
+
13
+ # 1 Introduction
14
+
15
+ Large Language Models (LLMs) (Radford et al., 2019; Ouyang et al., 2022a; Touvron et al., 2023; Dubey et al., 2024; Brown et al., 2020; Achiam et al., 2023) have come to dominate language processing as generalists that can perform many tasks. This dominance comes from the emergence of methods such as In-Context Learning (ICL; Brown et al. 2020) and Chain-of-Thought prompting (CoT; Wei et al. 2022), wherein LLMs perform tasks by leveraging input-output demonstrations and task instructions in the prompt only, without any parameter updates.
18
+
19
+ While ICL is often contrasted with traditional in-weights learning (i.e., gradient-based updates of the models' parameters) (Kossen et al., 2023; Chan et al., 2022), the ICL abilities of LLMs depend on their general, in-weights prior knowledge, allowing them to perform many tasks in a zero-shot or few-shot manner. Therefore, studying how LLMs leverage the context in relation with their existing knowledge is a prerequisite to understanding ICL.
20
+
21
+ Prior work found evidence that LLMs may be overly reliant on their prior knowledge, disregarding the demonstrations in the prompt. Specifically, Min et al. (2022) demonstrated that, under certain circumstances, LLMs ignore the provided signal in their prompt in the form of the mapping between inputs and outputs, and instead act as a database of tasks; they focus on the examples and the labels independently to fetch the underlying task (Xie et al., 2021): Min et al. (2022) sampled examples and labels independently, and showed very little change in performance. Since no annotations are provided, the setting is given the status of "zero-shot" inference, namely task-recognition zero-shot. While follow-up work (Kossen et al., 2023; Wei et al., 2023; Pan et al., 2023; Yoo et al., 2022) has further studied this phenomenon and questioned its generality, more recent work (Chochlakis et al., 2024, 2025) has provided further, quantitative evidence for the relative importance of prior and evidence in the posteriors of the models. Specifically, in complex subjective tasks like multilabel emotion or morality recognition, LLMs seem to virtually disregard evidence from the dataset's mapping in their posterior predictions, even in the form of Chain-of-Thought prompting (CoT; Wei et al. 2022), performing significantly worse than traditional algorithms (Alhuzali and Ananiadou, 2021; Chochlakis et al., 2023). This may imply an inability to perform annotator modeling, as the same document can receive different but valid annotations by different people.
24
+
25
+ Here, we use complex subjective to denote survey settings (Resnick et al., 2021), in which people can reasonably disagree about their semantic interpretations, where the notion of ground truth is replaced with crowd truth (Aroyo and Welty, 2015).
26
+
27
+ In this work, we question whether we can efficiently use this crowd truth when modeling these tasks with LLMs. We hypothesize that the aggregation process creates artifacts in the labels provided to the model, and in turn in the prompt, causing the model to ignore the entire input-label mapping as noise. Specifically, aggregation can create inconsistent annotations, since, for instance, different annotators can prevail in different examples, which can cause inconsistencies within or across splits (for a toy example, consult Section C). We create a carefully crafted experimental setting to test our hypothesis with annotator-level labels using ICL and CoT, and try to gauge at the magnitude of this effect. We find that LLMs do indeed tend to consistently favor individual perspectives compared to the aggregate, and in fact favor minority annotators more than majority annotators, who better resemble the aggregate. Our contributions are:
28
+
29
+ - We show strong correlational evidence that aggregation creates artifacts that hinder the modeling of subjective tasks with LLMs
30
+ - We show that minority annotators can both align with LLMs' priors better and demonstrate larger positive effects in the posterior.
31
+ - We nonetheless conclude that there are more major factors hurting LLM performance in machine learning benchmarks.
32
+
33
+ We advocate for more transparency in data collection and data sharing, and urge releasing and modeling individual annotations and not simply the aggregates in benchmark datasets.
34
+
35
+ # 2 Related Work
36
+
37
+ # 2.1 In-Context Learning and Priors
38
+
39
+ ICL (Brown et al., 2020) has been used extensively to evaluate LLMs on standard benchmarks (Srivastava et al., 2022). It requires no gradient-based interventions, which are otherwise costly to perform
40
+
41
+ for large models, and usually achieves competitive or state-of-the-art performance. The existence of commercial APIs (Achiam et al., 2023), coupled with open-weights alternatives (Touvron et al., 2023; Dubey et al., 2024) have made ICL an accessible generalist for language tasks and more (Liu et al., 2024). Previous work has further studied controlling the reliance on context and in-weights knowledge through the distribution of the training data (Chan et al., 2022), examining how to optimally select examples for the prompt (Rubin et al., 2022; Gupta et al., 2023), integrating instructions explicitly during training (Touvron et al., 2023; Ouyang et al., 2022b), etc. Relevant to our work, prior work has elicited ICL priors by providing random labels for the examples of the prompt (Min et al., 2022). The resulting minimal variations in performance indicates that LLMs recognize and retrieve their prior knowledge of the task in the prompt rather than doing any "learning". Subsequent results challenged the view that LLMs mostly perform task recognition, showcasing a significant degradation in performance when scaling the prompt (Kossen et al., 2023) or when focusing on specific tasks instead of aggregates (Yoo et al., 2022), and analyzed behavior with certain label manipulations (Kossen et al., 2023; Pan et al., 2023; Wei et al., 2023). More recent work, however, suggests that in complex subjective tasks like emotion and morality classification, the prior understanding of the task dominates posterior predictions (Chochlakis et al., 2024).
42
+
43
+ One potential way to augment ICL and overcome the prior bias is with CoT (Wei et al., 2022). CoT incorporates the derivation process from input to output explicitly in the prompt, presenting a more human-like reasoning process. This has several advantages, such as making some patterns in the data explicit in the prompt, making model responses more explainable, and potentially directing more computing resources towards more complex problems. However, detailed analysis has cast some doubt on the reliability and the faithfulness of this reasoning technique (Lanham et al., 2023; Turpin et al., 2024). Nonetheless, CoT seems to improve the robustness and performance of LLMs across a plethora of tasks, yet follow-up work on subjective tasks showed no improvements and the same bias towards a reasoning prior, especially for larger models (Chochlakis et al., 2025). Methods such as Tree of Thoughts (Yao et al., 2024) or self-consistency (Wang et al., 2022) have experimented
44
+
45
+ with ways to further augment CoT. In this work, we study whether part of the prior bias in both ICL and CoT can be explained by the aggregation process used to derive the labels, and in turn the reasoning process, of the examined benchmarks.
46
+
47
+ # 2.2 Annotator Disagreement and Modeling
48
+
49
+ Many works have attempted to model individual annotator perspectives instead of the aggregate, like we advocate in this work. For example, researchers used the EM algorithm (Dawid and Skene, 1979) to assign confidence to each annotator's evaluations (Hovy et al., 2013). Recently, Gordon et al. (2022) concatenated features derived from Transformers (Vaswani et al., 2017) with annotator embeddings that incorporate demographics to model individual perspectives. Demographic information has also been incorporated in word embeddings by Garten et al. (2019). Demographic information and psychological profiles have been statistically examined in text annotations to derive insights into systematic biases (Sap et al., 2022). Recent work has tried to filter annotators based on deep learning methods (Mokhberian et al., 2022), to model annotators on top of common representations (Davani et al., 2022; Mokhberian et al., 2023), and to decrease annotation costs based on agreement (Golazizian et al., 2024). Modeling annotators with LLMs has shown limited success, and LLM biases have also been explored (Dutta et al., 2023; Abdurahman et al., 2024; Hartmann et al., 2023).
50
+
51
+ # 3 Methodology
52
+
53
+ We closely follow the methodology and notation of Chochlakis et al. (2025). For a set of examples $\mathcal{X}$ , and a set of labels $\mathcal{Y}$ , a dataset $\mathcal{D}^a$ defines a mapping $f^a: \mathcal{X} \to \mathcal{Y}$ , where $a$ denotes a specific annotator or the aggregate, as well as reasoning chains $R^a(x) = r, x \in \mathcal{X}$ that explicitly describe $f^a$ , and therefore $\mathcal{D}^a = \{(x, y, r): x \in \mathcal{X}, y = f^a(x), r = R^a(x)\}$ , from which we can sample demonstrations with $p(x, y, r)$ . We do not differentiate between splits for brevity. Given CoT prompt $S = \{(x_i, y_i, r_i): (x_i, y_i, r_i) \sim p_S, i \in [k]\}$ with $k$ demonstrations sampled with distribution $p_S$ from $\mathcal{D}^a$ without replacement, an LLM produces its own mapping and predictions for the task, denoted as $\hat{f}_k(:, p_S): \mathcal{X} \to \mathcal{Y}$ . When using regular ICL, we simply drop the reasoning text. For all our experiments, we set the temperature
54
+
55
+ ![](images/b58fe4a1426d703caeb112f65460a39f531bc80a994d8b995eaf7f994d8dc54d.jpg)
56
+ Figure 1: Performance comparison between LLMs w/ and w/o Chain-of-Thought prompting compared to Demux (BERT-based) using aggregated labels.
57
+
58
+ $\tau = 0$ to derive deterministic predictions, and vary the examples and/or labels between runs.
59
+
60
+ # 3.1 Similarity and Performance Metrics
61
+
62
+ To keep evaluations consistent when using API-based LLMs, we rely on similarity measures calculated directly on the final predictions rather than probabilistic measures like logits. Therefore, we use the Jaccard Score (JS), Micro and Macro F1 metrics (Mohammad et al., 2018) to evaluate the performance of the models. For consistency, we also use them to quantify the similarity between the predictions from different model runs or annotators, since they are symmetric functions which we apply to interchangeable sets.
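+
+ As an illustration (with invented binary indicator matrices), these measures can be computed with scikit-learn as below, both as performance metrics against gold labels and as symmetric similarities between two sets of predictions.
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import f1_score, jaccard_score
+
+ # Toy multilabel predictions over 4 examples and 3 labels (e.g., emotions).
+ y_a = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])  # one annotator / run
+ y_b = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])  # another run, or a model
+
+ js = jaccard_score(y_a, y_b, average="samples")  # Jaccard Score (JS)
+ micro = f1_score(y_a, y_b, average="micro")      # Micro F1
+ macro = f1_score(y_a, y_b, average="macro")      # Macro F1
+ print(js, micro, macro)
+ ```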
63
+
64
+ # 3.2 Task Prior Knowledge Proxies via Zero-shot Inference
65
+
66
+ Here, we precisely define the priors that we use for ICL and CoT. We use the term prior to contrast it with the posterior predictions of the model after evidence (i.e., a specific annotator's labels) from the dataset have been presented to it. First, we have the true reasoning task-recognition zero-shot $^2$ prior, where the prompt contains $k$ demonstrations sampled with $p^I(x,y,r) = p(x)p(y)p(r)$ , so text, labels, and reasoning are sampled independently from each other from $\mathcal{D}^a$ , hence labels and reasoning are irrelevant to the text and each other. This effectively maintains the relationships between labels, which are strong in such multilabel tasks (Cowen and Keltner, 2017). For ICL, we have the corresponding task-recognition zero-shot prior sampled with $p^I(x,y) = p(x)p(y)$ , so text and labels are also sampled independently. The similarity of the priors to annotators and aggregate are, therefore, measured by comparing
67
+
68
+ ![](images/acef65138df4392c46b2cb1ee1f0a1823c2f77e99251bed97331d9fb925a8983.jpg)
69
+
70
+ ![](images/ad46a320bec5561064a11e67474bec83196c7d999099d3e79252f2767e810890.jpg)
71
+
72
+ ![](images/4f6255051198c1e70be24fc5f5ffb155191f1f5a578b2852099f90bf326a2ad5.jpg)
73
+
74
+ ![](images/8796a5ec33c9afd8d14c22b8e1763ee172def8daae5ab4bde1d89c06b7e877fb.jpg)
75
+
76
+ ![](images/06de7cef11eefed600851c6df8bfe8d8b745244566325ed82888e0f214d6c9a8.jpg)
77
+
78
+ ![](images/8d417602c5ca07cd3f9df7b2e3f0a141cb9ee7a9139c60af33fb3f0ff0123391.jpg)
79
+
80
+ ![](images/188ad683db9a1a5511a9da0307ba41f23e51150f51f9406b40a56947a3f4c6b6.jpg)
81
+ Figure 2: Scatter plot of annotator similarity with aggregate and with prior. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
82
+
83
+ ![](images/96d700c67020c13fd401b47f18de77ff0fc0b5d5fa6ca9ad00b2d71c33b0db2e.jpg)
84
+
85
+ ![](images/19cfe4077386d62c2c886a44e38e301e0f53ac6196a5b745680e1c24c52a8b39.jpg)
86
+
87
+ ![](images/b0b0b7e62a6fb11be28d47c1902292f8467d76e46d9595f6bf43553679b838db.jpg)
88
+
89
+ ![](images/25805bc191e69644b30db4440e60eb37d3f88d2e4be6fdc601908652693accea.jpg)
90
+
91
+ ![](images/5dcf6805e2045995871fb8071b7f069e82ebfdafc0e2d308f436645a8f893949.jpg)
92
+
93
+ (as described in Section 3.1) the prior predictions $\hat{f}_k(:,p_I)$ with the annotator's labels, which is equivalent to the prior performance for the annotator, and the posteriors are simply defined as the $\hat{f}_k(:,p_S)$ with the joint sampling distribution $p_S$ (meaning we present the gold labels to the model).
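+
+ A minimal sketch of the two sampling schemes is given below; `demos` stands in for $\mathcal{D}^a$ as a list of (text, labels, reasoning) triples, and the prompt formatting is deliberately simplified.
+
+ ```python
+ import random
+
+ def sample_prompt(demos, k, independent=False, use_reasoning=True, seed=0):
+     """Build a k-shot prompt from (text, labels, reasoning) triples.
+
+     independent=False -> joint sampling p_S: demonstrations keep their own
+     gold labels and reasoning. independent=True -> p^I(x, y, r) = p(x)p(y)p(r):
+     texts, labels and reasonings are drawn independently of each other
+     (the task-recognition zero-shot prior).
+     """
+     rng = random.Random(seed)
+     if independent:
+         texts = [d[0] for d in rng.sample(demos, k)]
+         labels = [d[1] for d in rng.sample(demos, k)]
+         reasons = [d[2] for d in rng.sample(demos, k)]
+     else:
+         texts, labels, reasons = zip(*rng.sample(demos, k))
+     parts = []
+     for t, y, r in zip(texts, labels, reasons):
+         demo = f"Input: {t}\n"
+         if use_reasoning:
+             demo += f"Reasoning: {r}\n"
+         demo += f"Labels: {', '.join(y)}"
+         parts.append(demo)
+     return "\n\n".join(parts)
+ ```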
94
+
95
+ # 3.3 Prompt Design
96
+
97
+ Because the specific examples and their order in the prompt can affect the output of the model, we use exactly the same examples and in the same order across corresponding experiments. To achieve that, we find groups of annotators with significant overlap in the train and in the evaluation sets and use the same samples, including for aggregate and prior. Since the only degree of freedom is the labels, we eliminate the aforementioned confounding factors, among others (more details in Section D).
98
+
99
+ # 4 Experiments
100
+
101
+ # 4.1 Datasets
102
+
103
+ MFRC (Trager et al., 2022): Multilabel moral foundation corpus with annotations for six moral foundations: care, equality, proportionality, loyalty, authority, and purity. We use annotators 00 through 04 (common examples between groups 00-01-03 and 02-04).
104
+
105
+ GoEmotions (Demszky et al., 2020): Multilabel emotion recognition benchmark with 27 emotions. For efficiency and conciseness, we pool the emotions to the following seven "clusters" by using hierarchical clustering: admiration, anger, fear, joy, optimism, sadness, and surprise. We use annotator triplets 4-37-61 and 7-36-60.
106
+
107
+ # 4.2 Implementation Details
108
+
109
+ We use the 4-bit quantized versions of the open-source LLMs through the HuggingFace (Wolf et al., 2020) interface for PyTorch. We use LLaMA-2 7B and 70B, LLaMA-3 8B and 70B, GPT-3.5 Turbo, and GPT-4o mini. We chose only models with RLHF (Ouyang et al., 2022b) for uniformity. We perform 3 runs for each LLM experiment, varying the examples used. Statistical significance is calculated with permutation tests and measured by considering all 3 runs as separate data points. We use random retrieval of examples. We use fewer shots for CoT given the increase in prompt length. We generated reasonings for each example per annotator and for the aggregate. For details on our CoT annotations, see Section B in the Appendix. We use one NVIDIA A100 and one V100.
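+
+ A permutation test over per-run scores can be sketched as follows; the score values and the number of resamples are placeholders.
+
+ ```python
+ import numpy as np
+
+ def permutation_test(scores_a, scores_b, n_resamples=10_000, seed=0):
+     """Two-sided permutation test on the difference of mean scores."""
+     rng = np.random.default_rng(seed)
+     pooled = np.concatenate([scores_a, scores_b])
+     observed = abs(np.mean(scores_a) - np.mean(scores_b))
+     count = 0
+     for _ in range(n_resamples):
+         rng.shuffle(pooled)
+         diff = abs(pooled[: len(scores_a)].mean() - pooled[len(scores_a):].mean())
+         count += diff >= observed
+     return (count + 1) / (n_resamples + 1)
+
+ # Per-run scores of two settings (3 runs each, as in our experiments; toy values).
+ p = permutation_test(np.array([0.41, 0.44, 0.40]), np.array([0.36, 0.35, 0.38]))
+ ```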
112
+
113
+ # 4.3 Baselines
114
+
115
+ To establish baseline performance of LLMs compared to smaller, gradient-based methods, we present performance with and without CoT prompting compared to BERT-based (Devlin et al., 2018) Demux (Chochlakis et al., 2023). In Figure 1, we demonstrate the significant difference in performance across all the LLMs (45 shots for ICL, 15 shot for CoT) and Demux. In fact, given the very high JS and very low F1 scores, results for MFRC indicate close to random performance for the model, so we choose to use Micro F1 for MFRC, as opposed to JS for GoEmotions.
116
+
117
+ Nevertheless, we argue that this is an artifact of the inconsistent mapping used in the prompt, caused by the aggregation of labels for different annotators. Next, we evaluate whether aggregation does create annotation artifacts, and the extent to which they influence model behavior.
118
+
119
+ # 4.4 Main Results
120
+
121
+ In this section, we present our experiments, aimed at disentangling the role of aggregation in subjective tasks. First, we focus on 45-shot ICL (Section 4.4.1), and analyze how the similarity of each annotator and of the aggregate with the models' prior affects the relative improvement of the posterior $p_S$ over the prior $p_I$, as well as absolute posterior performance. Then, we analyze how the majority and minority (or idiosyncratic) annotators fare on these tasks by looking at the similarity of each annotator to the aggregate. Following this analysis, we perform equivalent experiments for 15-shot CoT, and present them in Section 4.4.2. Finally, we zoom out of individual experiments and summarize our complete body of evidence in Section 5.
122
+
123
+ ![](images/855115e19e4a14d21a19637f8c088779c495fe500e861cc068f0ad5f5221560e.jpg)
124
+ Figure 3: In-Context Learning performance for annotators and aggregate and their relative improvement compared to the model's prior as a function of the similarity of each with the model's prior. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
125
+
126
+ ![](images/e04b04253e49710326774a3cc59434cc247325420bdd6d2791eaaf22c3da8f50.jpg)
127
+ Figure 4: In-Context Learning performance for annotators and their relative improvement compared to the model's prior as a function of the similarity of each annotator with the aggregate. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
128
+
129
132
+
133
+ In all experiments, we present averages. The similarity of the annotators to the prior, as well as the similarity to the aggregate, is measured on the test set, which may differ from the train set.
134
+
135
+ ![](images/b41353a2d9b1955fb7ccb53588a3d6f633fc692e00083a548463092a3bb21338.jpg)
136
+ Figure 5: Chain-of-Thought performance for annotators and aggregate and their relative improvement compared to the model's prior as a function of the similarity of each with the model's prior. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
137
+
138
139
+
140
+ First of all, we present the relationship between the similarity of each annotator to the aggregate and the similarity to the prior of the model. Our results in Figure 2 indicate that the correlation tends to be negative, with 7 of 12 negative correlations, 2 of which are statistically significant, and only 4 positive correlations with one statistically significant result. This indicates that the more an annotator resembles the majority, the less aligned they tend to be with the models' priors. This is the first piece of evidence indicating that aggregation causes artifacts, and seems to suggest that the models perform more appropriate aggregation of evidence during their training compared to the simple (or even simplistic) majority-based aggregation used in such benchmarks, causing misalignment.
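+
+ The per-model trends shown in Figures 2 through 6 can be reproduced in spirit with a Pearson correlation and a first-order line fit; the similarity values below are placeholders.
+
+ ```python
+ import numpy as np
+ from scipy.stats import pearsonr
+
+ # Toy per-annotator values for one model: x = similarity to the aggregate,
+ # y = similarity to the model's prior (both computed as in Section 3.1).
+ x = np.array([0.72, 0.65, 0.58, 0.81, 0.60])
+ y = np.array([0.41, 0.47, 0.52, 0.38, 0.50])
+
+ r, p = pearsonr(x, y)                   # correlation; starred in the plots if p < 0.05
+ slope, intercept = np.polyfit(x, y, 1)  # the line fit drawn in each scatter plot
+ print(r, p, slope, intercept)
+ ```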
141
+
142
+ # 4.4.1 Annotator Modeling with ICL
143
+
144
+ Similarity to Prior In Figure 3, we first see the performance of the models for each annotator and the aggregate w.r.t. the similarity of each to the prior of each model. For the performance of the model, we see that, as expected, similarity with the prior correlates positively with final performance, with all results but one being statistically significant. It is interesting to see that the aggregate ranks low both for MFRC and GoEmotions in terms of posterior (from left to right and top to bottom: $5/6$ ,
145
+
146
+ 4/6, 6/6, 6/6, 5/6, 6/6, 5/7, 5/7, 6/7, 5/7, 5/7, 2/7; average is 22nd percentile) and prior (5/6, 2/6, 6/6, 6/6, 4/6, 5/6, 6/7, 4/7, 5/7, 2/7, 5/7, 2/7; average is 33rd percentile). Looking at the relative improvement, it is interesting to see that the only significant trends are negative trends, meaning that the LLMs tend to boost opinions they disagree with more. Despite the aggregate being among the worst performing mappings, with the expectation being that it receives significant gains in performance, it ranked below average (4/6, 4/6, 2/6, 6/6, 3/6, 5/6, 4/7, 5/7, 3/7, 6/7, 1/7, 4/7; average is 39th percentile).
147
+
148
+ Agreement with Aggregate Switching to the similarity of each annotator to the aggregate, and how it correlates with absolute and relative performance, in Figure 4 we see strongly negative trends. In fact, 17 out of the 24 cases are negative, 6 of which are statistically significant, and only one positive trend is statistically significant. Therefore, we see that idiosyncratic annotators both achieve better performance and are more amplified.
149
+
150
+ Overall, we see very strong correlational evidence that not only are aggregates misaligned with the models' priors, they also benefit less from ICL with their labels in the prompt. This is happening in spite of annotators with worse alignment frequently receiving significant gains in performance. Consequently, by combining our findings from the priors and the aggregates, we conclude that the aggregate mapping inherently has inconsistencies that inject sufficient noise in the prompt.
151
+
152
+ ![](images/eca515c4df84e04414714f74f0e8d7f81a6291b5bb9e48308e587a8c76acf841.jpg)
153
+ Figure 6: Chain-of-Thought performance for annotators and their relative improvement compared to the model's prior as a function of the similarity of each annotator with the aggregate. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
154
+
155
156
+
157
+ # 4.4.2 Annotator Modeling with CoT
158
+
159
+ We now switch to CoT, and evaluate consistency across prompting techniques. We note that because of the decreased number of runs in this setting, the confidence in these findings is similarly decreased.
160
+
161
+ Similarity to Prior In Figure 5, we observe similar trends to the equivalent setting for ICL (Figure 3), namely that final (posterior) performance positively correlates with the prior similarity (10 of 12 cases are positive, 6 of which are statistically significant), and that relative improvement of the posterior compared to the prior is negatively correlated with similarity with the prior (10 of 12 cases are negative, 4 of which are statistically significant). That being said, the differences in performance in Figure 5 tend to be smaller than in Figure 3. The aggregate is among the worst performers in MFRC, but the results in GoEmotions are equivocal.
162
+
163
+ Agreement with Aggregate In Figure 6, we present CoT results and correlate them with the similarity to the aggregate. Our experiments here seem to be split between negative and positive trends.
164
+
165
+ Here, due to the decreased number of experiments, it is difficult to extract concrete findings, yet we can ascertain that the findings here do not seem to contradict our previous findings, and do not show improved performance compared to ICL modeling.
168
+
169
+ # 4.5 Detailed Analysis
170
+
171
+ Based on observations during our manual annotation efforts, we identified clear patterns that annotator 01 in MFRC provides the Authority label more frequently even when one authority figure is mentioned in the input, in contrast to the rest of the annotators. Therefore, if any learning is achieved from ICL and CoT, we expect the accuracy for Authority to be visibly improved compared to the prior baseline due to the consistency and the clear pattern. To achieve and test that, we made this implicit bias clear in the generated reasoning chains. We present the change in performance in Table 1 in comparison to another label, Equality, chosen at random, since we did not observe any other clear patterns in other labels. We do indeed observe that the gains in Authority F1 score are consistent and tend to be significant across models and settings (with only GPT-4o mini CoT presenting an insignificant drop), in opposition to Equality, where gains tend to be small and insignificant, and large drops in performance are observed. This does indicate an ability for the models to somewhat learn and revise priors from the prompt when the mapping and/or the rationale presented are consistent and clear.
172
+
173
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Authority F1</td><td colspan="2">Equality F1</td></tr><tr><td>Prior</td><td>Post</td><td>Prior</td><td>Post</td></tr><tr><td>Llama-2-7b-chat-hf ICL</td><td>0.135±0.121</td><td>↑ 0.217±0.153</td><td>0.204±0.059</td><td>↓ 0.133±0.189</td></tr><tr><td>Llama-2-7b-chat-hf CoT</td><td>0.157±0.124</td><td>↑ 0.330±0.030</td><td>0.524±0.140</td><td>↓ 0.197±0.004</td></tr><tr><td>Meta-Llama-3-8B-Instruct ICL</td><td>0.115±0.083</td><td>↑ 0.276±0.033</td><td>0.039±0.055</td><td>↑ 0.074±0.105</td></tr><tr><td>Meta-Llama-3-8B-Instruct CoT</td><td>0.082±0.116</td><td>↑ 0.277±0.044</td><td>0.300±0.216</td><td>↑ 0.481±0.086</td></tr><tr><td>Llama-2-70b-chat-hf ICL</td><td>0.035±0.050</td><td>↑ 0.053±0.075</td><td>0.080±0.063</td><td>↑ 0.133±0.189</td></tr><tr><td>Llama-2-70b-chat-hf CoT</td><td>0.107±0.097</td><td>↑ 0.257±0.053</td><td>0.044±0.063</td><td>↑ 0.250±0.041</td></tr><tr><td>Meta-Llama-3-70B-Instruct ICL</td><td>0.263±0.118</td><td>↑ 0.347±0.063</td><td>0.129±0.118</td><td>↑ 0.269±0.234</td></tr><tr><td>Meta-Llama-3-70B-Instruct CoT</td><td>0.242±0.121</td><td>↑ 0.383±0.047</td><td>0.243±0.096</td><td>↑ 0.381±0.135</td></tr><tr><td>gpt-3.5-turbo ICL</td><td>0.319±0.057</td><td>↑ 0.357±0.066</td><td>0.167±0.236</td><td>↑ 0.468±0.100</td></tr><tr><td>gpt-3.5-turbo CoT</td><td>0.253±0.068</td><td>↑ 0.361±0.057</td><td>0.260±0.131</td><td>↑ 0.350±0.048</td></tr><tr><td>gpt-4o-mini ICL</td><td>0.228±0.188</td><td>↑ 0.399±0.014</td><td>0.394±0.193</td><td>↓ 0.245±0.204</td></tr><tr><td>gpt-4o-mini CoT</td><td>0.296±0.039</td><td>↓ 0.287±0.053</td><td>0.166±0.060</td><td>↑ 0.222±0.057</td></tr></table>
174
+
175
+ Table 1: Comparison of prior and posterior F1 score of authority and equality for annotator 01 of MFRC using ICL or CoT. $\uparrow / \downarrow$: increase/decrease with overlapping ranges, $\Uparrow / \Downarrow$: increase/decrease with non-overlapping ranges.
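As a reading aid for the arrow notation, the sketch below shows one way to derive the arrows, assuming the reported ± values are treated as intervals whose overlap is tested; the paper may define the ranges differently (e.g., min/max across runs), so this is an illustrative interpretation only.

```python
def arrow(prior_mean, prior_std, post_mean, post_std):
    """Direction of change plus whether the assumed ± ranges overlap."""
    direction = "increase" if post_mean >= prior_mean else "decrease"
    lo_a, hi_a = prior_mean - prior_std, prior_mean + prior_std
    lo_b, hi_b = post_mean - post_std, post_mean + post_std
    overlap = lo_a <= hi_b and lo_b <= hi_a  # standard interval-intersection test
    return f"{direction} ({'overlapping' if overlap else 'non-overlapping'} ranges)"

# gpt-4o-mini CoT, Authority: 0.296±0.039 -> 0.287±0.053
print(arrow(0.296, 0.039, 0.287, 0.053))  # decrease (overlapping ranges)
```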
176
+
177
+ ![](images/15db7c37cfa0011055e601230ebf4792136ee34de6f83ca7fd398dc482f74420.jpg)
178
+ Figure 7: Performance comparison across LLMs using ICL when using aggregated labels and using best-case scenario individual annotator labels.
179
+
180
+ # 4.6 Best Performing Annotators
181
+
182
+ Finally, and as a best case scenario, we present the performance of the best annotator with ICL (based on performance across models as can be seen in Figures 3 and 4) and compare that to the aggregate. Results are shown in Figure 7. The benefits from modeling a specific annotator are evident, as we observe large gains in performance in two out of the three metrics, a finding that is consistent across all models except for LLaMA-2 70b. This further emphasizes that modeling individual perspectives instead of aggregates is beneficial.
183
+
184
+ # 5 Conclusion
185
+
186
+ In summary, we see that aggregated labels tend to align less with the LLMs' prior for each task. Furthermore, and in spite of worse aligned annotators receiving larger posterior performance increases,
187
+
188
+ the aggregate posterior appears to collapse to the prior, resulting in significantly worse performance compared to several annotators. This result indicates that the majority is not necessarily well-aligned with models. It is also evident that interpretable and consistent mappings can be modeled by LLMs and improve upon the prior, even when the model does not align with the specific annotator a priori. Finally, we see that modeling individuals does indeed result in better performance on the task.
189
+
190
+ Given the commonsense reasoning capabilities of LLMs, the emotional and moral capabilities that LLMs have demonstrated in settings different from traditional machine learning settings (Tak and Gratch, 2024), as well as the best-case results, we conclude that the aggregation process introduces artifacts in the labels that cause LLMs to ignore the mapping as noise. It is interesting to note that in our prompts, the aggregate is rarely a hodgepodge of disparate opinions in which it matches no individual; rather, factors like different annotator mixtures, especially between train and test splits, as well as different annotator groups prevailing in different examples, introduce noise into the annotations that cannot be modeled with ICL and CoT. That being said, the performance gap with gradient-based methods remains large, suggesting that other factors, like task complexity, also account substantially for the observed biasing effects.
191
+
192
+ Finally, we question what it means to model these aggregate opinions. Namely, since they might not reflect the opinion of any individual, even if only in a minority of cases, what rationale should be
193
+
194
+ provided and how should it be generated in a sound manner? We advocate, therefore, as previous work has done (Prabhakaran et al., 2021; Dutta et al., 2023), for releasing and modeling annotator-level labels instead of aggregates, and suggest that the field of subjective modeling should move away from aggregate modeling in the age of LLMs and of more elaborate modeling methods such as CoT.
195
+
196
+ # 6 Limitations
197
+
198
+ Given our constraints to standardize the prompt and remove other degrees of freedom that could constitute confounding factors in our evaluations, but also for computational efficiency, the evaluation sets contain a small number of examples (namely, 100 and 71 for the triplets in GoEmotions, and 100 for both groups in MFRC), increasing the noise in our findings. Nonetheless, this practice has become standard in the evaluation of LLMs.
199
+
200
+ A potential confounding factor that we do not control for is quantization, as previous work has reported significant decreases in performance from it (Marchisio et al., 2024). We note, first, that there is no reason for quantization to affect our results in a nonuniform way; second, that we quantize because of computational constraints; and third, that it is possible that even some API-based models are served quantized (e.g., Turbo versions). For these reasons, we believe that quantized performance is representative of LLM performance in realistic scenarios. Moreover, this work does not aim to establish the performance of LLMs in these subjective tasks relative to other models, but rather to compare the LLMs among themselves. Nonetheless, we perform experiments with various levels of quantization of LLaMA-3 8B and show the results in Table 3 in the Appendix.
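As an illustration of the kind of quantized loading discussed above, a typical setup with Hugging Face transformers and bitsandbytes looks like the following; the model id and settings are examples and not necessarily the exact configuration used for the experiments.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # example model id
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # use load_in_8bit=True for the 8-bit rows of Table 3
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype used for the dequantized matmuls
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers across available devices automatically
)
```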
201
+
202
+ In our study of specific labels and the effect of what we perceive as consistency on the performance of LLMs (Section 4.5), a potential confounding factor in the analysis is the increased frequency of the label, since the studied annotator was more sensitive to specific stimuli in the input, as described. We specifically chose to perform a more detailed analysis of this positive pattern, however, because we expect the models, as we did, to find it easier to distinguish positive patterns in the data.
203
+
204
+ It is important to note that we have performed experiments on only two problems and benchmarks, as we opted to present results for more LLMs instead. Therefore, our findings may not generalize to other,
205
+
206
+ highly subjective tasks. Furthermore, other datasets with stricter annotation manuals that aim to resolve all ambiguities may not present similar behavior, as annotator agreement is artificially raised by removing some of the subjectivity of the semantic interpretations of the annotators in favor of following stricter, highly specific instructions.
207
+
208
+ We also want to note that we do not perform Bonferroni correction across models and datasets given the small number of datapoints we have to compute our correlations, yet we believe it is important to highlight the settings with smaller p-values.
209
+
210
+ Moreover, datasets with data derived from social media have been criticized as lacking the context for a model—or even humans—to make appropriate judgments about the emotion or morality expressed in them (Yang et al., 2023), and techniques to evaluate the correctness of the labels in a dataset have been designed to discard noisy samples (Swayamdipta et al., 2020; Mokhberian et al., 2022). Given the subjective nature of the task and the lack of context, such tools could be used to perhaps improve performance. That being said, the interpretation by humans is sufficiently consistent for Demux to achieve better performance than every LLM. We also note that by removing ambiguous examples, we may also remove that which makes these tasks challenging (Aroyo and Welty, 2015).
211
+
212
+ Finally, while we carefully design the experimental setting to control for confounding factors, we do not actually perform causal interventions, and consequently we present correlational evidence.
213
+
214
+ # 7 Ethical Considerations
215
+
216
+ Our focus on traditional machine learning benchmarks, as well as our takeaways, should complement and not compete with quantifying bias in LLMs using other tools and techniques (Caliskan et al., 2017; Gonen and Goldberg, 2019; Ferrara, 2023; Abdurahman et al., 2024). It is also important to emphasize that improving the affective and moral capabilities of LLMs entails perils, since better tailoring emotional and moral responses to more contexts, and personalizing them to specific individuals, can enable more effective manipulation of users by LLMs.
217
+
218
+ # Acknowledgements
219
+
220
+ This project was funded in part by DARPA under contract HR001121C0168, and in part by NSF CIVIC.
221
+
222
+ # References
223
+
224
+ Suhaib Abdurahman, Mohammad Atari, Farzan Karimi-Malekabadi, Mona J Xue, Jackson Trager, Peter S Park, Preni Golazizian, Ali Omrani, and Morteza Dehghani. 2024. Perils and opportunities in using large language models in psychological research. 3(7):pgae245.
225
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
226
+ Hassan Alhuzali and Sophia Ananiadou. 2021. Spanemo: Casting multi-label emotion classification as span-prediction.
227
+ Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annotation. 36(1):15-24.
228
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell. 2020. Language models are few-shot learners. 33:1877-1901.
229
+ Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.
230
+ Stephanie CY Chan, Adam Santoro, Andrew Kyle Lampinen, Jane X Wang, Aaditya K Singh, Pierre Harvey Richemond, James McClelland, and Felix Hill. 2022. Data distributional properties drive emergent in-context learning in transformers. In Advances in Neural Information Processing Systems.
231
+ Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. 2023. Leveraging label correlations in a multi-label setting: A case study in emotion. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
232
+ Georgios Chochlakis, Niyantha Maruthu Pandiyan, Kristina Lerman, and Shrikanth Narayanan. 2025. Larger language models don't care how you think: Why chain-of-thought prompting fails in subjective tasks. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
233
+ Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, and Shrikanth Narayanan. 2024. The strong pull of prior knowledge in large language models and its impact on emotion recognition. In 2024 12th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE.
234
+ Alan S Cowen and Dacher Keltner. 2017. Self-report captures 27 distinct categories of emotion bridged by continuous gradients. 114(38):E7900-E7909.
235
+
236
+ Aida Mostafazadeh Davani, Mark Diaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. 10:92-110.
237
+ Alexander Philip Dawid and Allan M Skene. 1979. Maximum likelihood estimation of observer error rates using the EM algorithm. 28(1):20-28.
238
+ Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4040-4054.
239
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding.
240
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
241
+ Senjuti Dutta, Sid Mittal, Sherol Chen, Deepak Ramachandran, Ravi Rajakumar, Ian Kivlichan, Sunny Mak, Alena Butryna, and Praveen Paritosh. 2023. Modeling subjectivity (by Mimicking Annotator Annotation) in toxic comment identification across diverse communities. Preprint, arXiv:2311.00203.
242
+ Emilio Ferrara. 2023. Should chatgpt be biased? challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738.
243
+ Justin Garten, Brendan Kennedy, Joe Hoover, Kenji Sagae, and Morteza Dehghani. 2019. Incorporating demographic embeddings into language understanding. 43(1):e12701.
244
+ Preni Golazizian, Ali Omrani, Alireza S Ziabari, and Morteza Dehghani. 2024. Cost-efficient subjective task annotation and modeling through few-shot annotator adaptation. arXiv preprint arXiv:2402.14101.
245
+ Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862.
246
+ Mitchell L Gordon, Michelle S Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S Bernstein. 2022. Jury learning: Integrating dissenting voices into machine learning models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1-19.
247
+ Shivanshu Gupta, Clemens Rosenbaum, and Ethan R Elenberg. 2023. GistScore: Learning better representations for in-context example selection with gist bottlenecks.
248
+
249
+ Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte. 2023. The political ideology of conversational ai: Converging evidence on chatgpt's pro-environmental, left-libertarian orientation. *Left-Libertarian Orientation* (January 1, 2023).
250
+ Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130.
251
+ Jannik Kossen, Tom Rainforth, and Yarin Gal. 2023. In-context learning in large language models learns label relationships but is not conventional learning.
252
+ Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamile Lukosiute, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Tileen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, and Ethan Perez. 2023. Measuring faithfulness in chain-of-thought reasoning. Preprint, arXiv:2307.13702.
253
+ Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. Advances in neural information processing systems, 36.
254
+ Kelly Marchisio, Saurabh Dash, Hongyu Chen, Dennis Aumiller, Ahmet Üstün, Sara Hooker, and Sebastian Ruder. 2024. How does quantization affect multilingual llms? arXiv preprint arXiv:2407.03211.
255
+ Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048-11064.
256
+ Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 task 1: Affect in tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1-17.
257
+ Negar Mokhberian, Frederic R Hopp, Bahareh Harandizadeh, Fred Morstatter, and Kristina Lerman. 2022. Noise audits improve moral foundation classification. In 2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 147-154. IEEE.
258
+ Negar Mokhberian, Myrl G Marmarelis, Frederic R Hopp, Valerio Basile, Fred Morstatter, and Kristina Lerman. 2023. Capturing perspectives of crowdsourced annotators in subjective learning tasks. arXiv preprint arXiv:2311.09743.
259
+
260
+ Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022a. Training language models to follow instructions with human feedback.
261
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022b. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
262
+ Jane Pan, Tianyu Gao, Howard Chen, and Danqi Chen. 2023. What in-context learning "learns" in-context: Disentangling task recognition and task learning.
263
+ Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Diaz. 2021. On releasing annotator-level labels and information in datasets.
264
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. 1(8):9.
265
+ Paul Resnick, Yuqing Kong, Grant Schoenebeck, and Tim Weninger. 2021. Survey equivalence: A procedure for measuring classifier accuracy against human labels. arXiv preprint arXiv:2106.01254.
266
+ Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655-2671.
267
+ Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. 2022. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884-5906. Association for Computational Linguistics.
268
+ Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.
269
+ Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics.
270
+ Ala N Tak and Jonathan Gratch. 2024. Gpt-4 emulates average-human emotional cognition from a third-person perspective. arXiv preprint arXiv:2408.13718.
271
+
272
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, and Shruti Bhosale. 2023. Llama 2: Open foundation and fine-tuned chat models.
273
+
274
+ Jackson Trager, Alireza S Ziabari, Aida Mostafazadeh Davani, Preni Golazizian, Farzan Karimi-Malekabadi, Ali Omrani, Zhihe Li, Brendan Kennedy, Nils Karl Reimer, Melissa Reyes, et al. 2022. The moral foundations reddit corpus.
275
+
276
+ Miles Turpin, Julian Michael, Ethan Perez, and Samuel Bowman. 2024. Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting. Advances in Neural Information Processing Systems, 36.
277
+
278
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 30.
279
+
280
+ Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
281
+
282
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models.
283
+
284
+ Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, et al. 2023. Larger language models do in-context learning differently.
285
+
286
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
287
+
288
+ Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2021. An explanation of in-context learning as implicit bayesian inference. In International Conference on Learning Representations.
289
+
290
+ Daniel Yang, Aditya Kommineni, Mohammad Alshehri, Nilamadhab Mohanty, Vedant Modi, Jonathan Gratch, and Shrikanth Narayanan. 2023. Context unlocks emotions: Text-based emotion classification dataset auditing with large language models. In 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 1-8. IEEE.
291
+
292
+ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2024. Tree of thoughts: Deliberate problem solving with large language models. 36.
293
+
294
+ Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, and Taeuk Kim. 2022. Ground-truth labels matter: A deeper look into input-label demonstrations. arXiv preprint arXiv:2205.12685.
295
+
296
+ # A Annotators
297
+
298
+ We reiterate that we group annotators together based on their overlap in the train and evaluation sets so as to standardize the prompt across them. We use 5 annotators in MFRC, one triplet and one pair, both with enough common examples in the evaluation set to present somewhat robust results. For both annotator groups, we use 100 evaluation examples. For GoEmotions, we use 6 annotators, as these were the only groups with enough evaluation examples, 100 and 71 respectively. We did not find any possible quadruplets or larger groups.
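A minimal sketch of this grouping step, under the assumption that annotations are available as a mapping from annotator to the set of evaluation examples they labeled, could look as follows; the data structure and threshold are illustrative.

```python
from itertools import combinations

# Hypothetical mapping: annotator id -> set of evaluation example ids they labeled.
eval_examples = {
    "ann01": {1, 2, 3, 4, 5},
    "ann02": {2, 3, 4, 5, 6},
    "ann03": {3, 4, 5, 6, 7},
    "ann04": {10, 11},
}

MIN_COMMON = 3  # the paper requires on the order of 100 (or 71) common evaluation examples

for size in (3, 2):  # prefer triplets, then pairs
    for group in combinations(eval_examples, size):
        common = set.intersection(*(eval_examples[a] for a in group))
        if len(common) >= MIN_COMMON:
            print(group, "share", len(common), "evaluation examples")
```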
299
+
300
+ # B Annotation Details
301
+
302
+ Two research assistants, trained on emotions and moral foundations using the annotation manuals provided by the dataset authors and given common instructions, independently generated the reasonings for the annotations in the datasets. We then checked the consistency of the reasoning chains between the two annotators both qualitatively (manual checks) and quantitatively (by checking the consistency of the predictions when using either annotator's reasonings). Note that we do not use model-generated CoT because we noticed in our experiments that LLM explanations are shallow and can easily miss more complex emotional and moral expressions.
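The quantitative part of this consistency check can be sketched as follows, assuming predictions are available as per-example label sets for prompts built from either research assistant's reasonings; the agreement metric (per-example Jaccard) is our illustrative choice, not necessarily the one used.

```python
def jaccard(a: set, b: set) -> float:
    """Per-example agreement between two predicted label sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical predictions obtained with either research assistant's reasonings.
preds_with_ra1 = [{"care"}, {"authority", "loyalty"}, set()]
preds_with_ra2 = [{"care"}, {"authority"}, set()]

agreement = sum(jaccard(a, b) for a, b in zip(preds_with_ra1, preds_with_ra2)) / len(preds_with_ra1)
print(f"mean per-example agreement: {agreement:.2f}")
```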
303
+
304
+ # C Toy Example for Aggregation Artifact
305
+
306
+ In this section, we present a toy example to elucidate the notion of inconsistencies that can be created from aggregation, presented in the main text. Consider an example dataset where US political affiliation is the major factor dictating the labels provided by annotators. Due to random assignment of examples to annotators, some examples may be assigned more annotators leaning Democrat, and others more annotators leaning Republican. Therefore, the aggregated dataset will contain some annotations reflecting Republican views and some reflecting Democrat views, creating inconsistencies in the training data, or worse, between training and test data. Critically, trying to model political affiliation, the major factor of this setting, becomes impossible from this aggregated data, hence the "inconsistency" of the annotations.
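The following toy simulation, with entirely made-up numbers, illustrates the mechanism: identical kinds of examples receive different aggregate labels depending on which annotator group happens to dominate the (random) assignment.

```python
import random

random.seed(0)

def label(group: str) -> str:
    # The two groups systematically disagree on this kind of example.
    return "equality" if group == "A" else "none"

aggregated = []
for _ in range(6):
    # Random assignment: each (identical) example is labeled by 3 annotators drawn from the two groups.
    annotators = random.choices(["A", "B"], k=3)
    votes = [label(g) for g in annotators]
    aggregated.append(max(set(votes), key=votes.count))

print(aggregated)  # a mixture of "equality" and "none" for the same kind of example
```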
307
+
308
+ # Prompt examples
309
+
310
+ # ICL
311
+
312
+ Classify the following inputs into none, one, or multiple the following moral foundations per input: authority, care, equality, loyalty, proportionality and purity.
313
+
314
+ Input: Did anyone watch Nigel Farage rebuke the EU? It was quite interesting! Moral Foundation(s): none
315
+
316
+ Input: Because Le Pen is alt right and is very dangerous for the peace of europe and the world. Moral Foundation(s):
317
+
318
+ # CoT
319
+
320
+ Classify the following inputs into none, one, or multiple the following moral foundations per input: authority, care, equality, loyalty, proportionality and purity.
321
+
322
+ Input: Did anyone watch Nigel Farage rebuke the EU? It was quite interesting!
323
+
324
+ Reasoning: The author expresses interest in Farage's actions but not any moral sentiment towards it.
325
+
326
+ Moral Foundation(s): none
327
+
328
+ Input: Because Le Pen is alt right and is very dangerous for the peace of europe and the world.
329
+
330
+ Reasoning:
331
+
332
333
+
334
+ # D Prompt
335
+
336
+ For completeness, we showcase example prompts used in MFRC for ICL and CoT that illustrate the prompt format we utilize across all experiments. One example for each setting is shown in Table 2. We note that the prompts do not use the conversational format, even though it is available for all our models, because we found it to work worse both in terms of performance (measured by the performance of the aggregate in GoEmotions) and in terms of ability to decode (e.g., including explanations without being prompted to, diverging from the output format, or even predicting emojis rather than emotion words) compared to the format we use.
337
+
338
+ Table 2: Examples showcasing the prompt format in In-context Learning and Chain-of-Thought prompting.
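For reference, a prompt in the ICL format shown in Table 2 can be assembled along the following lines; the helper function and variable names are hypothetical and only illustrate the flat (non-conversational) format described above.

```python
INSTRUCTION = (
    "Classify the following inputs into none, one, or multiple the following "
    "moral foundations per input: authority, care, equality, loyalty, "
    "proportionality and purity."
)

def build_icl_prompt(demonstrations, query):
    """Flat ICL prompt: instruction, labeled demonstrations, then the query input."""
    parts = [INSTRUCTION, ""]
    for text, labels in demonstrations:
        parts.append(f"Input: {text} Moral Foundation(s): {', '.join(labels) or 'none'}")
        parts.append("")
    parts.append(f"Input: {query} Moral Foundation(s):")
    return "\n".join(parts)

demos = [("Did anyone watch Nigel Farage rebuke the EU? It was quite interesting!", [])]
query = "Because Le Pen is alt right and is very dangerous for the peace of europe and the world."
print(build_icl_prompt(demos, query))
```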
339
+
340
+ <table><tr><td>Q.</td><td>Micro F1</td><td>Macro F1</td><td>JS</td></tr><tr><td colspan="4">Aggregate</td></tr><tr><td>4 bit</td><td>0.078±0.026</td><td>0.094±0.034</td><td>0.108±0.013</td></tr><tr><td>8 bit</td><td>0.124±0.036</td><td>0.137±0.017</td><td>0.241±0.071</td></tr><tr><td>16 bit</td><td>0.131±0.047</td><td>0.150±0.014</td><td>0.219±0.061</td></tr><tr><td colspan="4">Annotator #1</td></tr><tr><td>4 bit</td><td>0.090±0.030</td><td>0.111±0.033</td><td>0.114±0.046</td></tr><tr><td>8 bit</td><td>0.123±0.061</td><td>0.155±0.049</td><td>0.240±0.108</td></tr><tr><td colspan="4">Annotator #2</td></tr><tr><td>4 bit</td><td>0.243±0.038</td><td>0.170±0.024</td><td>0.245±0.038</td></tr><tr><td>8 bit</td><td>0.208±0.059</td><td>0.228±0.022</td><td>0.312±0.042</td></tr><tr><td colspan="4">Annotator #3</td></tr><tr><td>4 bit</td><td>0.122±0.043</td><td>0.122±0.036</td><td>0.154±0.090</td></tr><tr><td>8 bit</td><td>0.126±0.018</td><td>0.135±0.020</td><td>0.247±0.072</td></tr></table>
341
+
342
+ Table 3: Performance of LLaMA-3 8B with various levels of quantization (Q.) on MFRC.
343
+
344
+ # E Quantization
345
+
346
+ We perform experiments with LLaMA-3 8B at lower levels of quantization to gauge the effects quantization has on our results. We present our findings in Table 3. As expected, performance slightly improves with less quantization for our metrics of interest, yet the comparative phenomena we study still seem to hold in this small-scale analysis.
347
+
348
+ # F Full Results
349
+
350
+ In Figures 8, 9, 10, 11, and 12, we see the equivalent results to Figures 2, 3, 4, 5, and 6 respectively, but each annotator is assigned their own marker. Therefore, these more detailed figures allow us to track the performance of different annotators across models and settings.
351
+
352
+ ![](images/d21edbed529e5fbcb915ccbcbff6afb35e040c678e864b84deb6efad987696f7.jpg)
353
+
354
+ ![](images/05237f6605ee3784157973ca5298bb5833f7144eb3f5e6ef1cc129b5584884c5.jpg)
355
+ Annotator Similarity with Prior wrt Annotator Similarity with Aggregate
356
+
357
+ ![](images/2c04febb168f1f0980668117663b56bf252bd467b08cbf96670b8a561bf36ecd.jpg)
358
+
359
+ ![](images/7f21699b3154f6d44add152fdf9ef3713eb4afa5c84fce0d9de9ae8ce222a831.jpg)
360
+
361
+ ![](images/db2502fd9071361e5ed621082736168b569903aece5ecd1bffb37732ee10ee6f.jpg)
362
+
363
+ ![](images/a4d84395c18ced7071f175a776ac89d47ccbc841253be80c65f9dc7795aeaf1f.jpg)
364
+
365
+ ![](images/75ebcab051e36b38a963e319f31da53d9994e256c25d634dc85b15fe94d1ab42.jpg)
366
+ Figure 8: Scatter plot of annotator similarity with aggregate and with prior, with correlation and line fit of data shown. * and bold lines: $p < 0.05$ .
367
+
368
+ ![](images/15daac4524fbd3d954b90ca5d4aa4de7ec41e71779bdfee0f1c4cb2e286154aa.jpg)
369
+
370
+ ![](images/2dc57f0a09e14ddf56f6af019df76372e94eab9268bfc038ccf05e9f235a3b06.jpg)
371
+
372
+ ![](images/d0d90fe9aecd7352171d8fb789dfad4656e9392e3137c72ce60758d30499617b.jpg)
373
+
374
+ ![](images/e1be68c3c591779953e6a3c27344d68fad161f25a9b73810b286394c9b0c3d8c.jpg)
375
+
376
+ ![](images/ddf06326836d8b7beafe32e0bbf66994f4f198522a7488eda4406bbd16c9af22.jpg)
377
+
378
+ ![](images/044c48958d82e82fdd24e1274b07c9498de30164746733b32a795c5a17609d90.jpg)
379
+
380
+ ![](images/70855a96fd7c15b85e9fd94172efc43ee0924bf3ee29f59d1bdad99c7d77728a.jpg)
381
+
382
+ ![](images/a07f16330c1874a9f1ee447896b8c88db37aed3019c04cc2203c99d35db922cd.jpg)
383
+ Annotator Performance with ICL
384
+
385
+ ![](images/b06dd989a6145809eaab8c126e415a3018c032accadfb5384f1120814f85c99f.jpg)
386
+
387
+ ![](images/133c4ceeef84662733352d894730403d6a686ea3c99a17f9b2014a28da92fb5b.jpg)
388
+
389
+ ![](images/1920db2e167b91d97a6f83007f4f603338cc3e0bf909e0182043ad462ccfeab2.jpg)
390
+
391
+ ![](images/c7ad2ad11ad370d846fab69648997b5ed62c398b103001e1806d5003cc348aa7.jpg)
392
+
393
+ ![](images/79b0a3cb02dbd92478b726ef9b51cd60146ad24d0e76c43c40ffcd1dccdf9994.jpg)
394
+
395
+ ![](images/87960e0a3d91b363ae378fe2cff7d0a47f31a2c8cbd00e625e2b297ff2b90934.jpg)
396
+ Annotator Similarity with Prior
397
+
398
+ ![](images/c00f27b007ce29d3de4b2024150d8830645e834c50493e5c7fa30edb2e1a30d6.jpg)
399
+
400
+ ![](images/181264516994df076739fb1efdc2a891d44bf51b5604083953cd6f2d0a2f1c18.jpg)
401
+
402
+ ![](images/c6bc369fca03ec454a634d080074e204b7936527be28d196afe5b0ea4b66beef.jpg)
403
+
404
+ ![](images/6d0e107da81f444f215b912ca9cdb82ca3fa1d3dc7cc384f6bbde662989fa96e.jpg)
405
+
406
+ ![](images/8df05de0f791f964f29e3dcf28e58941b8af585751e432b1528d607e27af67db.jpg)
407
+ Relative Posterior Improvement of Annotators with ICL
408
+
409
+ ![](images/ed10299201cc226a836343da817e0f501bc522ceace117bf1a04a9f5bef956ea.jpg)
410
+
411
+ ![](images/f80c505ab7e6dfdbedcad7a0a68f62234e292f648caa5d5a4cfb929e95fe3e75.jpg)
412
+
413
+ ![](images/6577161a60301d18e8c4062e004bed2d87618766974b8d8a56aca88efdba86dc.jpg)
414
+
415
+ ![](images/f2daa81492ad23d6208a6b8f7be064c02994709737e805a1026751afe2966548.jpg)
416
+
417
+ ![](images/9a501c2fdaa489ec764ce2b07965f31f6997789c3371b4c02aadfee79d782413.jpg)
418
+ Figure 9: In-Context Learning performance for annotators and aggregate and their relative improvement compared to the model's prior as a function of the similarity of each with the model's prior. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
419
+
420
+ ![](images/5ffe330f4d0cc73c2402c93db98e2e4c0deb642358a9db96a8b6cfe3bab551ad.jpg)
421
+
422
+ ![](images/6408d905a65a4510b780f9027a3740c654ffa4dbd0b5ccf53355b4c41e9b33ec.jpg)
423
+ Annotator Similarity with Prior
424
+
425
+ ![](images/be890381347c77a8e3541096cb7c5c6072482a17689ba90a523c6fe45bb4cdf2.jpg)
426
+
427
+ ![](images/64a91b10907d3211f19d1f2a00f4379611e356a2c5bb2e136a30f8cc06102e3b.jpg)
428
+
429
+ ![](images/d3f8fce0b4e37a7122f56ca86e843b9651357d89fbc61b8f72442ca8231aa9a2.jpg)
430
+
431
+ ![](images/64a1d3ef8d0cc414be7feaf53465b7b4a2d41809eae298990c2fe3487a4a6642.jpg)
432
+
433
+ ![](images/7086d354e124cbda156514babd9a531284396918e5047d8a0659141d096c7e09.jpg)
434
+
435
+ ![](images/1b49735a109cd0fe76ef9b334d0d735f33d54b8f4184fc657a5725e4cf809347.jpg)
436
+ Annotator Performance with ICL
437
+
438
+ ![](images/d8a91f6751e19079b7a4acf640db104a2be30ad18fa374c36a1d667c709ee8e1.jpg)
439
+
440
+ ![](images/a83b616168b71123fb3ca2ecb67cb9b460e856827b84ce440c8298afcb3a4858.jpg)
441
+
442
+ ![](images/0536895a607dd8634403d532a1532d0d40919c2d4b851be1c52739be1ba57705.jpg)
443
+
444
+ ![](images/fbdbbdffdb369271eca8f376d5783bf4cfb8c441c5ff38bc2c389326b4d336b2.jpg)
445
+
446
+ ![](images/6a874fd33fac5749d654b77d324359ff4452c9f8c36f9e564335fcaddb0fa94f.jpg)
447
+
448
+ ![](images/e9ba146bab597358653eefb2dbf3e63a0a7b45a3b254507565f36e92b778ea95.jpg)
449
+ Annotator Similarity with Aggregate
450
+
451
+ ![](images/06df9b7bea86d8c88ddacdd75c021a3646c4a8bd82ab076328d4c0fb05b825ab.jpg)
452
+
453
+ ![](images/eccbde9195077ec92448d27ad35dcce4d1ad048bab8096491e9c3bfe19d9ff0b.jpg)
454
+
455
+ ![](images/0e118297aad90f231445983ec34133c33e472bbf63fcf06d8ef75c68b884eb53.jpg)
456
+
457
+ ![](images/ef33ee75a878a03bedebb28cf9e69984ab4a33bbb5f62cc2dc1c90e4f2369e10.jpg)
458
+
459
+ ![](images/51488e03259bd25bba89c4419116fdcb20fd7579efa73ccc9f0876ac02b4a046.jpg)
460
+ Relative Posterior Improvement of Annotators with ICL
461
+
462
+ ![](images/7a7ad893f765e498d71f2d8e552f332a529f148874975673aef904bcc3bbf627.jpg)
463
+
464
+ ![](images/1186e67b985321a0805b774331117e631ee95601cded4860c2357ef1a8f2bda0.jpg)
465
+
466
+ ![](images/e799e444ec076920f29c719c80e9f66a2bb22bb2fee0b3d47df6944699f741a9.jpg)
467
+
468
+ ![](images/948571fc881530dfc9bf8b40d9f92395ff8699a2085441075b147e9cd0260048.jpg)
469
+
470
+ ![](images/ddb3ce77e19822d174486442c32fe616efbc3a097a2f93499e17cb416fee6238.jpg)
471
+ Figure 10: In-Context Learning performance for annotators and their relative improvement compared to the model's prior as a function of the similarity of each annotator with the aggregate. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
472
+
473
+ ![](images/d8aff5062097c38fee9e03849ac9ba3fe103e57a02635a114f6faf0bdc927cb1.jpg)
474
+
475
+ ![](images/1243f9eb73eab010b54c9f563e53dc201fb70d767f008f8f5eb8bac691f88bb0.jpg)
476
+ Annotator Similarity with Aggregate
477
+
478
+ ![](images/7c20691905197f7c3a558cac91ff037695c3132633fc8a102b4339d2280db353.jpg)
479
+
480
+ ![](images/724a0a595fc1c0f6b44a8751c161fc32446f6fb67880ee1edcaca9d497177817.jpg)
481
+
482
+ ![](images/51bb3185ec10874d40baa606f470918486c131c0bd7346e563e5b54a68543b81.jpg)
483
+
484
+ ![](images/53e7f1cd906f54faf6873b7ee5d2973d981ae0f176666b0411b391124a7342e5.jpg)
485
+
486
+ ![](images/f270beb765c4e316bda1fdeb7bc74dcb18e614621b14744749bd76205633119b.jpg)
487
+ Annotator Performance with CoT
488
+
489
+ ![](images/cdf955ec6f90f3e52e4dc4f7128d92454bce956677a9fe9fe8b85c5982408d09.jpg)
490
+
491
+ ![](images/94a901e0124a6ca18acbfef211e7ea42f23e4892343a1137d62aeafab5c9b9d4.jpg)
492
+
493
+ ![](images/8f1e69ba73f04162098347ddfa66301adeced7f23f3ab223a342880d50893292.jpg)
494
+
495
+ ![](images/8446e36e613584bf0053f16a10bda7b4ab983dd4ba46af9a1cf07626293e4dab.jpg)
496
+
497
+ ![](images/35683dd927edbd1a076e46564423e7acd97d22e3860bc5798cd997a8104e0757.jpg)
498
+
499
+ ![](images/1cf03bd048e8ff14ea40a4a62def40b61a8f9dc9b1709bcfaa153c9c77377895.jpg)
500
+
501
+ ![](images/59e538aa0d50fb0a12c2d4a0594c4b0397f1f9e040d9a43b7654ff13a5bbcda5.jpg)
502
+ Annotator Similarity with Prior
503
+
504
+ ![](images/233ac57d783e8b9cb21c6ece7f95cda436a42043ffbf02cf8a292c060bb4fd9e.jpg)
505
+
506
+ ![](images/2391ca4b78562330d53c52ce60b853c99a1b730bb8b996d0e772462fbadb94ec.jpg)
507
+
508
+ ![](images/f759ff777e6b5c24f9fb13b47da8b35ee2d406f95460a6e2ff82cbd2e9b15fe1.jpg)
509
+
510
+ ![](images/56d985ed90e8633cf0ade692692f3932524d4b07487805a431da9b5a633d15aa.jpg)
511
+
512
+ ![](images/8c77b0a82d8f863de7fd1e9e2bbc826073a6b0baa61e130492fc5836b594ab3d.jpg)
513
+ Relative Posterior Improvement of Annotators with CoT
514
+
515
+ ![](images/2b23b9967aa19ab60ebdae5b25f6d010b0461fff04e8691525a7cf848e167486.jpg)
516
+
517
+ ![](images/61131eb39c51db505d2dd33fd6869a9018f0689a7868fa5bbca1d46a84c60bd2.jpg)
518
+
519
+ ![](images/50c83dbb0f2567b9f70a4fcd3e2459e3b704b34a8e28687ebbf632fb0e113e0d.jpg)
520
+
521
+ ![](images/0af99d3cc31b012d528fdb2b822c26ce0141791d2d0171ef2241a0053d051c16.jpg)
522
+
523
+ ![](images/43a6b6cda6670091dfe5fa8dfffefbc9eee6f742b06b49445c548357021bec1d.jpg)
524
+ Figure 11: Chain-of-Thought performance for annotators and aggregate and their relative improvement compared to the model's prior as a function of the similarity of each with the model's prior. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
525
+
526
+ ![](images/380db945a51b50044a24568f9f3260c5ecca91f6149c5c4be6f2a84895ebb4fd.jpg)
527
+
528
+ ![](images/9319686c0feca253ee11526c775b3c93ab614caa40562db2e6b0a45733f16b30.jpg)
529
+ Annotator Similarity with Prior
530
+
531
+ ![](images/be510d67782c9fb4b4836725e189aa578292e4144d92a9317888555cc5bb8545.jpg)
532
+
533
+ ![](images/1c920557f84881175f72f12e28a5b8b186d372673792c0691d708b61db483698.jpg)
534
+
535
+ ![](images/d51270b8eb9f70c7e4da236e41aa363c8def07f1d1454caeaee61807d3075f24.jpg)
536
+
537
+ ![](images/9e6b13000dca75a44d0eba0abc038fbd5df95225c42a8409db4945b67f7a0097.jpg)
538
+
539
+ ![](images/0386640d52d85e9d7000567246e6afeb0eae6f811d0a3e95666037d3b32d6c72.jpg)
540
+
541
+ ![](images/b1075cedd9c37b0777363b88dd322e8ee3d5ece807971ca26c6dbe4059ed07bd.jpg)
542
+ Annotator Similarity with Aggregate
543
+
544
+ ![](images/59d71b610c7324b4aa7cb847e58279890a14c029d32c620bb6da37104675c9c3.jpg)
545
+
546
+ ![](images/fed5b2e5e684b8c94aa193f5fa2f66098b5ddaf1136a282d3932742bc23509b8.jpg)
547
+
548
+ ![](images/160c742ada2990ce605f4766ccd8bb04a1fdd55b6a201ee60ff3559c3ad89def.jpg)
549
+
550
+ ![](images/6f13b3e7653ec84221730a1742b2f56266803eb5180c66972675604ebbd67d4c.jpg)
551
+
552
+ ![](images/70bfd1a76959f83705a74560b13ac4cdca28137af74c1b24c8151fee061ec162.jpg)
553
+ Relative Posterior Improvement of Annotators with CoT
554
+
555
+ ![](images/8d03385d1a4b15442434c96bc11079914e34a411e83f399924e77724ce10edcf.jpg)
556
+
557
+ ![](images/2a066816727fad9ae3dedd8c75256498d0529dcb3c21573fb750174ab2c1c24d.jpg)
558
+
559
+ ![](images/508b0600bb57864929a5c050ec24f1ed3f49ac184586588a04204a5cb3f6de05.jpg)
560
+
561
+ ![](images/9448d59aa435d111b9d8932c3a26a3bc0ed9276f21dd25f2dc387c698d0a0bc5.jpg)
562
+
563
+ ![](images/48df85fadb8cf8c46787d4d99ca180e53a50262b6251b54a5f778229c403482b.jpg)
564
+ Figure 12: Chain-of-Thought performance for annotators and their relative improvement compared to the model's prior as a function of the similarity of each annotator with the aggregate. Correlation and line fit of data also shown. * and bold lines: $p < 0.05$ .
565
+
566
+ ![](images/aa640d87c754c2b9fed4d303a43e49d99733e92bd47160f66232a5474f264a42.jpg)
567
+
568
+ ![](images/23dfa680d517ccf0a642e84266cc1877ed946d27b3ac837457e11c26b4baa03a.jpg)
569
+ Annotator Similarity with Aggregate
570
+
571
+ ![](images/1cf806aa3a65bb9d798dea3d699cb73245fc35f4910fa7238e2739caa2939365.jpg)
572
+
573
+ ![](images/6445cece5f2ce19ed32152e71deccbddde0b7705c9614fc718e3a794d1ad78e0.jpg)
574
+
575
+ ![](images/74b4950a65bb5bb5bcc1e9191cd6acdaf880987ad705dcebd22398b566e01a4f.jpg)
aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e42a615829b3680d3099319187ba8741baabf851034bb3a54bb0e129c2e22eca
3
+ size 1331602
aggregationartifactsinsubjectivetaskscollapselargelanguagemodelsposteriors/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24431ee4f4ee45ba2c01d5e7d34ba6f79feeaf6c831ada1724c00715dce6c809
3
+ size 590871
algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/e99dadff-e646-41a3-bd3b-f07620a5f205_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3a5baa10036c25034c13ff4624c6587cef55aaba6ceebe118a7a8848bd049e4
3
+ size 124799
algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/e99dadff-e646-41a3-bd3b-f07620a5f205_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c5b09daa9ea315e1c2adc16c56b10ce7631b39db6097d303e0953449bb9aeb5
3
+ size 145897
algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/e99dadff-e646-41a3-bd3b-f07620a5f205_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:054d2b8ad30c57df81894a68c8351c8313a1b44fecac45840b5abf8023bf56ad
3
+ size 3148257
algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/full.md ADDED
@@ -0,0 +1,441 @@
1
+ # ALGOPUZZLEVQA: Diagnosing Multimodal Reasoning Challenges of Language Models with Algorithmic Multimodal Puzzles
2
+
3
+ Deepanway Ghosal $^{1*}$ , Vernon Y.H. Toh $^{1*}$ , Yew Ken Chia $^{1}$ , Soujanya Poria $^{1}$
4
+
5
+ $^{1}$ Singapore University of Technology and Design
6
+
7
+ : https://github.com/declare-lab/LLM-PuzzleTest
8
+
9
+ : https://algopuzzlevqa.github.io/
10
+
11
+ # Abstract
12
+
13
+ This paper introduces the novel task of multimodal puzzle solving, framed within the context of visual question-answering. We present a new dataset, ALGOPUZZLEVQA, designed to challenge and evaluate the capabilities of multimodal language models in solving algorithmic puzzles that necessitate visual understanding, language understanding, and complex algorithmic reasoning. We create the puzzles to encompass a diverse array of mathematical and algorithmic topics such as boolean logic, combinatorics, graph theory, optimization, search, etc., aiming to evaluate the gap between visual data interpretation and algorithmic problem-solving skills. The dataset is generated automatically from code authored by humans. All our puzzles have exact solutions that can be found from the algorithm without tedious human calculations. This ensures that our dataset can be scaled up arbitrarily in terms of reasoning complexity and dataset size. Our investigation reveals that multimodal language models such as GPT4V and Gemini exhibit limited performance in puzzle-solving tasks. We find that their performance is near random in a multi-choice question-answering setup for a significant number of puzzles. The findings emphasize the challenges of integrating visual, language, and algorithmic knowledge for solving complex reasoning problems.
14
+
15
+ # 1 Introduction
16
+
17
+ Puzzles have long been a source of fascination and intellectual challenge, serving both as entertaining pastimes and as means to advance mathematical and logical understanding. Historically, puzzles have often emerged from complex problems, with their solutions contributing valuable insights to mathematical research (Euler, 1741; Wiles, 1995;
18
+
19
+ Knuth, 2000). In this paper, we explore the novel task of algorithmic visual puzzle solving. Traditional visual question-answering (VQA) (Antol et al., 2015; Goyal et al., 2017) and visual reasoning (Hudson and Manning, 2019) datasets have mainly focused on combining language and vision through the lens of object detection, scene recognition, and spatial relationship, where the answer can be directly found in the image. Another line of work has explored the application of external factual knowledge (Sanket Shah and Talukdar, 2019) or commonsense world knowledge (Schwenk et al., 2022) for VQA. Recently, multimodal reasoning benchmarks such as ScienceQA (Lu et al., 2022), and MMMU (Yue et al., 2023) have been proposed that aim to evaluate the expert knowledge of LLMs through subject-specific questions spanning science, medicine, business, arts, etc.
20
+
21
+ Unlike previous research, our work integrates a new critical component - algorithmic reasoning, which demands the ability to apply logical rules to novel situations, rather than relying on pre-stored information or direct recall of facts. This distinction is vital as it shifts the focus from merely memorizing vast databases of knowledge to applying concrete reasoning steps for novel problem-solving. We thus propose to benchmark the problem-solving ability of AI models through visual puzzles. Our puzzles are created to be self-contained where minimal additional knowledge (beyond basic mathematics) is required to solve them. This helps us to disentangle the knowing part from the solving part, as the knowledge required to solve the problem is already provided as context.
22
+
23
+ Consider Figure 1 as an example of some of our puzzles. We show the setting or the configuration of the puzzle/problem as an image, which we consider as the visual context. Depending on the problem, the visual context may show different kinds of information such as the initial and final
24
+
25
+ ![](images/2176b1ac15ccd554e6804ba6b827c67faa6ad5bafe90ba7a434661cd8094e589.jpg)
26
+ Figure 1: Examples of puzzles in ALGOPUZZLEVQA based on visual features. We show one instance of a puzzle from each of the four visual categories. Note that the header categories are not exhaustive as some puzzles may belong to more visual categories than the ones shown in the header. The complete ontological categorization can be found in Table 1. The questions shown above have been shortened to fit in the figure. The full questions can be found later in §3 and in Appendix A.
27
+
28
+ configurations of the puzzle, the position of entities on a grid, the colours of a map, etc. Visual features such as colours, positions, shapes, and sizes of the puzzle components are generally present in the visual context. The language context, which is shown as Question in the figure then describes the specific features of the puzzle and the associated rules. It also presents the query problem that we are trying to solve. Correctly answering this question requires applying a fundamental mathematical or algorithmic concept. We design the puzzles such that solving them requires a wide array of mathematical and algorithmic topics, ranging from basic arithmetic, set theory, and boolean logic to more sophisticated areas like graph theory, and optimized search. Our proposed dataset thus assesses critical tasks such as interpreting visual inputs, understanding spatial relationships, and applying algorithms for object manipulation and pathfinding. These skills are essential for real-world applications, including environmental navigation and action optimization which are key components for autonomous systems in dynamic settings.
29
+
30
+ A key advantage of our approach is the automated generation of puzzles from human-written code, which allows for the scalable creation of challenges with adjustable reasoning complexity. For a particular puzzle, we first write the general algorithmic solution based on well known proofs and results from the literature. By varying the inputs and configurations of the problem, we can create instances of desired difficulties. As multimodal language models become more capable (OpenAI, 2023; Gemini Team, 2023), it is crucial that we design clean and more challenging benchmarks to make fair assessments of their capabilities. Our automatic generation framework
31
+
32
+ offers a simple way of creating and then continuously updating the benchmark with increasingly complex problems to evaluate progressively stronger multimodal models in the future.
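To make the generation recipe concrete, the sketch below renders a small grid puzzle with matplotlib and computes its exact answer with a standard algorithm (breadth-first search); the specific puzzle, file name, and code are illustrative and are not taken from the released generation code.

```python
# Illustrative sketch: render a puzzle instance as an image and compute its exact answer.
from collections import deque
import matplotlib.pyplot as plt

grid = [  # 0 = free cell, 1 = wall (hypothetical maze-style instance)
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
]
start, goal = (0, 0), (2, 3)

# 1) Render the visual context and save it to disk.
plt.imshow(grid, cmap="gray_r")
plt.xticks([])
plt.yticks([])
plt.savefig("maze_instance.png", bbox_inches="tight")

# 2) Compute the ground-truth answer (minimum number of moves) with BFS.
def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return -1

print(shortest_path(grid, start, goal))  # exact answer used to build the question-answer pair
```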
33
+
34
+ We also highlight another feature of our dataset - the algorithmic foundation of our puzzle generation process ensures that each instance in our dataset has a definitive solution, eliminating the possibility of annotation error (Northcutt et al., 2021), subjectivity, ambiguities and biases that may arise from human annotated content (Geva et al., 2019).
35
+
36
+ We summarize our contributions as follows:
37
+
38
+ - We introduce an ontology of multimodal reasoning features tailored for visual algorithmic puzzle solving, aimed at delineating the capabilities and limitations of LLMs. Since each puzzle is annotated with distinct visual (e.g., colour, position) and algorithmic (e.g., arithmetic, search) characteristics derived from our ontology, we gain valuable insights into the specific skills that LLMs lack in addressing individual problems.
39
+ - Relying on this ontology, we created a novel puzzle dataset designed to test multimodal reasoning across vision, language, and algorithms. All puzzles in our dataset possess unambiguous algorithmic solutions. The distinguishing feature of our proposed dataset, ALGOPUZZLEVQA, resides in the prerequisites of the solutions to the problems. They demand proficiency in algorithmic and mathematical reasoning by applying logic adeptly in novel contexts without necessitating specialized domain knowledge.
40
+ - Our dataset is automatically generated using a scalable puzzle generation framework, allowing for dynamic adjustment of problem complexity.
41
+ - Experimental results conducted on this dataset
42
+
43
+ ![](images/4b48460b825bf2da14ffb25c1c77f876ffd91e4b0a30aaf275df4142fd82963d.jpg)
44
+ Figure 2: Examples of puzzles from ALGOPUZZLEVQA based on algorithmic features. We show at least one instance of a puzzle from each of the seven algorithmic categories. Note that the header categories are not exhaustive as some puzzles may belong to more algorithmic categories than the ones shown in the header.
45
+
46
+ with GPT-4V, Gemini Pro, and other models such as InstructBlip demonstrate challenges in achieving satisfactory scores. Our analysis reveals a fundamental deficiency in these models' visual perception and algorithmic reasoning capabilities, hindering effective complex reasoning.
47
+
48
+ # 2 The Proposed Ontology
49
+
50
+ Our puzzles encompass a diverse range of visual, mathematical and algorithmic topics. We categorize the visual and algorithmic features present in each puzzle in Table 1.
51
+
52
+ # 2.1 Visual Features
53
+
54
+ The configuration of the puzzle/problem is shown as an image, which constitutes its visual context. We identify some fundamental aspects of the visual context that influence the nature of the puzzles. We show examples of the categories in Figure 1. The categorization is as follows:
55
+
56
+ Colour: The category includes puzzles where understanding the colour of the puzzle components is crucial for solving the question. For these puzzles, a different colour combination of the components in the visual context would generally lead to a different solution even for the same question. Checker Move, Colour Hue, Rubik's Cube, etc. come in this category.
57
+
58
+ Position: The puzzles where the position of the puzzle components is necessary for solving the
59
+
60
+ question. All puzzles of our dataset fall under this category. Positional understanding is thus necessary for solving all the puzzles in our dataset.
61
+
62
+ Shape/Size: Shape and size of the visual components are two aspects that are essential in several puzzles. This category includes the understanding of both absolute and relative shapes and sizes of the puzzle components. Chain Link, Tower of Hanoi, and Wood Slide fall under this category.
63
+
64
+ Text: This category includes puzzles for which some features are shown as optical characters or embedded texts in the image. In some of our puzzles, the embedded text contains important information that must be used to correctly solve the question. The Calendar, Clock, and Water Jugs puzzles fall under this category. An interesting case is the Map Colour puzzle, which has some embedded text in the image, but it is given only as an additional reference and is not necessary for solving the core question.
65
+
66
+ # 2.2 Algorithmic Features
67
+
68
+ We now detail the algorithmic features present in our puzzles. The ontological categorization presented below is strictly limited to obtaining the solutions for the puzzles i.e. for answering the questions for the puzzle instances. The algorithmic categories are not mutually exclusive, as we need to use two or more categories to derive the answer for most puzzles. We show examples of all the algorithmic categories in Figure 2. The categorization
69
+
70
+ <table><tr><td rowspan="2">Puzzle</td><td colspan="4">Visual Features</td><td colspan="7">Algorithmic Features</td></tr><tr><td>Colour</td><td>Position</td><td>Shape/Size</td><td>Text</td><td>Arithmetic</td><td>Boolean Logic</td><td>Combinatorics</td><td>Graphs</td><td>Optimization</td><td>Search</td><td>Sets</td></tr><tr><td>Board Tiling</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>Calendar</td><td>X</td><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>Chain Link</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>✓</td></tr><tr><td>Checker Move</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>✓</td><td>X</td></tr><tr><td>Clock</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>Colour Hue</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td></tr><tr><td>Map Colour</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>X</td></tr><tr><td>Maze Solve</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>Move Box</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>N-Queens</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>✓</td><td>X</td></tr><tr><td>Number Slide</td><td>X</td><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>Rotten Fruits</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>Rubik&#x27;s Cube</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>X</td></tr><tr><td>Think A Dot</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>✓</td></tr><tr><td>Tower of Hanoi</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>Water Jugs</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td></tr><tr><td>Wheel of Fortune</td><td>X</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>Wood Slide</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td></tr></table>
71
+
72
+ Table 1: Ontological categorization of the puzzles. We divide the table into groups for ease of reading.
73
+
74
+ The categorization is as follows:
75
+
76
+ Arithmetic: This category includes puzzles where the application of basic mathematical operations, such as addition, multiplication, counting, and modular arithmetic, is required. These are fundamental mathematical operations that are useful in all the puzzles. For example, addition and subtraction are required in the Water Jugs puzzle, and a modular arithmetic strategy is required in the Clock and Calendar puzzles.
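For example, the answer to a Clock-style instance can be obtained with modular arithmetic over the minutes of a 12-hour dial. The following is a minimal illustrative sketch; the helper name and signature are ours and not part of the released code.

```python
def shift_clock(hour, minute, delta_hours, delta_minutes):
    """Time shown on a 12-hour analog clock `delta_hours` hours and
    `delta_minutes` minutes after the given time (negative deltas move
    backwards in time)."""
    total = (hour % 12) * 60 + minute + delta_hours * 60 + delta_minutes
    total %= 12 * 60                        # wrap around the 12-hour dial
    return (total // 60) or 12, total % 60  # report 12 o'clock instead of 0

# If the clock in Figure 10 shows 9:22 and the event was 3 minutes earlier,
# the answer is 9:19, the gold answer of that instance.
print(shift_clock(9, 22, 0, -3))  # -> (9, 19)
```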
77
+
78
+ Boolean Logic: This category includes puzzles for which some form of Boolean logic must be applied to answer the question. The logical statement may take various forms, for instance: checking whether the numbers of occurrences of two colours are equal in Board Tiling, or flipping a disc colour whenever a ball passes through it in Think A Dot.
79
+
80
+ Combinatorics: This category includes puzzles that deal with counting combinations and permutations of their states. The general form of the question asks how many unique configurations can be reached after performing a sequence of operations. These kinds of problems are generally easier to solve with computer algorithms when there are many constraints on how the configurations can be created. Our Map Colour and Number Slide puzzles fall under this category.
81
+
82
+ Graphs: This category includes puzzles that can be represented as a graph data structure on which specific graph algorithms can be applied to solve the problem. Such algorithms include graph colouring, graph traversal, etc. This category includes puzzles such as Map Colour, Maze Solve, and Rotten Fruits.
83
+
84
+ Optimization: Optimization is a crucial area of
85
+
86
+ mathematics and computer science that mainly deals with finding optimal solutions to a problem. Our puzzles involve problems such as solving a task in the minimum possible time, reaching a goal from an initial state in minimum steps, optimal sorting, summation maximization, etc. This category includes puzzles such as Chain Link, Colour Hue, Tower of Hanoi, and Water Jugs.
87
+
88
+ Search: Search algorithms are generally used to retrieve information stored within a particular data structure, or to explore the search space of a particular problem domain. Although there are many different kinds of search algorithms, our puzzles are constrained to breadth-first search and exhaustive search. Examples of such puzzles include Checker Move, Move Box, and Wood Slide.
89
+
90
+ Sets: Many problems in computer science deal with sets of objects, where one has to consider the identical nature of some objects and the equivalence of some positions or configurations. We include some puzzles in our dataset where such properties can, or must, be used to solve the problem. This category includes the Checker Move, Number Slide, and Think A Dot puzzles.
91
+
92
+ # 3 From the Ontology to Creating ALGOPUZZLEVQA
93
+
94
+ We create a total of 18 different puzzles for our dataset following various feature combinations from our ontology (Table 1). Many of these puzzles are popular in various recreational or academic settings.
95
+
96
+ Generating puzzle images and curating solutions. Instances for each puzzle are created automatically
97
+
98
+ from the human-written code of the particular puzzle, which includes:
99
+
100
+ 1. Code to generate the image file that we use as visual context. We use the matplotlib library in Python to generate the images (an illustrative sketch is given after this list).
101
+ 2. Code for applying the correct algorithm for finding the solution. The algorithmic solutions to these puzzles are well-known and are easy to verify against literature and other popular coding resources such as LeetCode<sup>1</sup>.
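The generation code itself is not reproduced here; the sketch below only illustrates, under our own assumptions about the palette, helper name, and file name, how a Board Tiling image could be rendered with matplotlib.

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def draw_checkerboard(rows, cols, removed, path):
    """Draw a checkered board with some cells removed (left white), in the
    spirit of the Board Tiling images; the colours here are illustrative."""
    fig, ax = plt.subplots(figsize=(cols, rows))
    for r in range(rows):
        for c in range(cols):
            if (r, c) in removed:
                colour = "white"
            else:
                colour = "#f5deb3" if (r + c) % 2 == 0 else "#b8860b"
            ax.add_patch(patches.Rectangle((c, rows - 1 - r), 1, 1,
                                           facecolor=colour, edgecolor="black"))
    ax.set_xlim(0, cols)
    ax.set_ylim(0, rows)
    ax.set_aspect("equal")
    ax.axis("off")
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)

# A 6x9 board with two removed squares, written to a hypothetical file name.
draw_checkerboard(6, 9, {(0, 3), (4, 7)}, "board_tiling_example.png")
```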
102
+
103
+ Creating language context. Finally, we also carefully curate the language context (in English) for each puzzle using puzzle-specific templates. We make sure that the template describes all the necessary information and rules required for solving the puzzle. The templates were written by two authors of the paper and were verified by the other authors.
104
+
105
+ Extensibility. As mentioned earlier, the number of instances and the difficulty level of the puzzles can be scaled arbitrarily to create a large and challenging dataset. On the other hand, most existing benchmarks such as SMART-101 (Cherian et al., 2023) rely on human annotation which is costly and difficult to scale. For now, we have created 100 instances for each puzzle. These instances are analogous to different test cases of the puzzle, i.e. they have different input combinations, initial and goal states, etc. Reliably solving all the instances would require finding the exact algorithm to use and then applying it accurately. This is akin to how we verify the accuracy of a computer program aiming to solve a particular task through a broad range of test cases.
106
+
107
+ Difficulty. Our current version of ALGOPUZZLEVQA contains problems of easy to moderate difficulty for humans, which can be solved by hand through multiple steps or by applying the appropriate algorithms. However, compared to existing benchmarks such as PuzzleVQA (Chia et al., 2024) and CLEVR (Johnson et al., 2017), ALGOPUZZLEVQA requires not only visual perception and commonsense reasoning but also algorithmic understanding to solve the problems reliably.
108
+
109
+ Dataset Size. In total, we have 1800 instances from the 18 different puzzles. We consider the full
110
+
111
+ ![](images/5bd6bd296777a47ec74511fa897f302b396d2a2f7bd8cf169c991d30e0b54847.jpg)
112
+ Figure 3: Question: The checkerboard shown in the image was originally of $6 \times 9$ in dimension having a total of 54 squares. It uses two colours of squares, one light yellow and one dark yellow, in a chequered pattern. Two of the squares have been removed from the board in the position of the white coloured cells, as shown in the image. You have 26 dominoes of size $2 \times 1$ . You can use them as is or you can rotate them to use as a $1 \times 2$ domino. Is it possible to place all the 26 dominoes in the checkerboard to exactly cover all the remaining 52 squares? Answer Yes or No. Gold Answer: Yes
113
+
114
+ dataset as an evaluation-only benchmark. We describe the details and the creation process of some of the puzzles next. A detailed description of all the puzzles and their creation process can be found in the Appendix A.
115
+
116
+ # 3.1 Some Example Puzzles in Our Dataset
117
+
118
+ Board Tiling. The puzzle is inspired by the Mutilated chessboard problem, originally posed by Max Black (Black, 1948). The puzzle belongs to the class of domino tiling problems. A general result from Mendelsohn (2004) states that: a checkerboard originally had an even (odd) number of squares. Two (one) randomly chosen squares are now removed from the board. If the mutilated board has $2 * m$ squares then it is tileable with $m$ dominoes of size $2 * 1$ if and only if it has an equal number of dark and light squares. We choose the number of rows and columns in our checkerboard randomly between 4 and 9. We randomly remove 1 or 2 squares if the board has an odd or even number of squares, respectively. We determine the yes or no tileability answer based on the general result. We show an example of the puzzle in Figure 3.
119
+
120
+ Colour Hue. A rectangular board consists of $m \times n$ coloured tiles arranged in a rectangular grid. The board has an ideal state. A non-ideal state of the board is created by shuffling some tiles. We ask for the minimum number of tile swaps required to reach the ideal state from the non-ideal state. The answer can be determined using the selection sort algorithm, which minimizes the number of swaps required to sort an unsorted array. We considered
121
+
122
+ ![](images/ce55a19be7a631cc776d626749b5ca56e47be3f9bc29a3c042de9d96e7a2523e.jpg)
123
+ Figure 4: Question: A $5 \times 4$ board consists of 20 different coloured tiles. A random state of the board is shown in (A). The ideal state of the board is shown in (B). A swap consists of selecting any two tiles in the board and switching their positions. What is the minimum number of swaps required to restore the ideal state of the board from (A)? Gold Answer: 4
124
+
125
+ ![](images/1d2f15a5e3f42afe2e3486c10214d0feda0e7cd985f8986bf71579bd69898a8e.jpg)
126
+
127
+ the non-ideal state as the unsorted state and the ideal state as the sorted state to find the answer. We show an example in Figure 4.
128
+
129
+ ![](images/7e0b3912a82b7418d4c9aacc1923b6371327759d5f1b3c9fe567dfe4a01aed9c.jpg)
130
+ Figure 5: Question: You are given an incomplete map of a country having 15 different regions. The objective is to colour the regions of the map using only the four available colours: red, green, blue and yellow, such that no two adjacent regions have the same colour. Adjacent regions are defined as two regions that share a common boundary of non-zero length. The regions indicated by numbers 1 to 10 have already been coloured, as shown in the image. The regions indicated by numbers 11 to 15 are shown in white as they are yet to be coloured. You need to assign colours to these regions in a way such that it doesn't violate the objective. Each unique colour combination of the regions would result in a unique complete map. How many unique complete maps can be created by colouring all the white regions starting from the given incomplete map? Gold Answer: 8
131
+
132
+ Map Colouring. The four colour theorem is a famous result in mathematics that states that four colours are sufficient to colour the regions of any planar map such that no two adjacent regions have the same colour. We create a Voronoi diagram (Voronoi, 1908) followed by polygon clipping (Sutherland and Hodgman, 1974) to create the regions of the map. We use a graph colouring strategy based on Algorithm X (Knuth, 2000)
133
+
134
+ with four colours to find the exhaustive solutions for colouring the map. We fix the colours of some of the regions of the map and find out the number of ways the remaining regions can be coloured from the exhaustive list of solutions. We show an example of this puzzle in Figure 5.
135
+
136
+ # 4 Experiments on ALGOPUZZLEVQA
137
+
138
+ # 4.1 Setup and Baselines
139
+
140
+ We perform all our experiments in a multiple-choice question-answering setup. We create three negative answer choices for each instance by using heuristics such as randomly sampling numbers within the same magnitude as the gold answer. More details can be found in Appendix B. In total, we thus have four answer choices (one gold positive and three random negatives) for all puzzles except one. The exception is the Board Tiling puzzle, where we have Yes and No as the possible choices. We evaluate various closed and open-source multimodal language models on our dataset. We consider the following models: GPT-4V (OpenAI, 2023), GPT-4o, Gemini Pro (Gemini Team, 2023), and Claude 3 Opus $^2$ as the closed models; InstructBLIP Vicuna 7B and 13B (Dai et al., 2023), LLaVA-1.5 13B (Liu et al., 2023), and Qwen2-VL-7B-Instruct (Wang et al., 2024) as the open-source models. We use the accuracy of predicting the final answer as the evaluation metric.
141
+
142
+ # 4.2 Prompting Strategy
143
+
144
+ GPT-4V, GPT-4o, Gemini, Claude, LLaVA, Qwen2: We use the zero-shot chain-of-thought (CoT) technique for these models. The objective is to generate the reasoning steps and then the final answer from the image, question and multiple answer choices. We perform experiments with two types of CoT settings using the following prompting instructions: (i) Let's think step by step (Kojima et al., 2022), and (ii) Let's describe the image first and think step by step. We use the notation CoT to describe the first setting and Elaborate CoT or eCoT to describe the second setting. For LLaVA and Qwen2, we only use the CoT setting. We concatenate the question, answer choices and the prompting instruction to create the final prompt. An example prompt from the CoT setup is as follows: Question: You are playing
145
+
146
+ <table><tr><td rowspan="2">Puzzle</td><td colspan="4">CoT</td><td colspan="4">eCoT</td><td colspan="2">I-BLIP</td><td>LLaVA</td><td>Qwen2</td><td rowspan="2">Average</td></tr><tr><td>GPT-4V</td><td>GPT-4o</td><td>Gemini Pro</td><td>Claude 3</td><td>GPT-4V</td><td>GPT-4o</td><td>Gemini Pro</td><td>Claude 3</td><td>7B</td><td>13B</td><td>13B</td><td>7B</td></tr><tr><td>Board Tiling</td><td>48</td><td>53</td><td>43</td><td>48</td><td>45</td><td>51</td><td>50</td><td>47</td><td>52</td><td>52</td><td>54</td><td>45</td><td>49.0</td></tr><tr><td>Calendar</td><td>57</td><td>30</td><td>33</td><td>39</td><td>51</td><td>27</td><td>30</td><td>42</td><td>18</td><td>21</td><td>31</td><td>28</td><td>33.9</td></tr><tr><td>Chain Link</td><td>21</td><td>28</td><td>30</td><td>23</td><td>31</td><td>36</td><td>28</td><td>29</td><td>29</td><td>24</td><td>31</td><td>31</td><td>28.4</td></tr><tr><td>Checker Move</td><td>33</td><td>29</td><td>27</td><td>35</td><td>34</td><td>27</td><td>25</td><td>33</td><td>34</td><td>15</td><td>27</td><td>33</td><td>29.3</td></tr><tr><td>Clock</td><td>25</td><td>22</td><td>32</td><td>37</td><td>29</td><td>22</td><td>30</td><td>34</td><td>28</td><td>26</td><td>10</td><td>26</td><td>26.8</td></tr><tr><td>Colour Hue</td><td>27</td><td>25</td><td>29</td><td>16</td><td>23</td><td>31</td><td>21</td><td>31</td><td>21</td><td>18</td><td>22</td><td>20</td><td>23.7</td></tr><tr><td>Map Colour</td><td>34</td><td>37</td><td>28</td><td>33</td><td>34</td><td>27</td><td>38</td><td>29</td><td>17</td><td>25</td><td>29</td><td>32</td><td>30.3</td></tr><tr><td>Maze Solve</td><td>30</td><td>28</td><td>38</td><td>23</td><td>31</td><td>24</td><td>32</td><td>25</td><td>27</td><td>21</td><td>27</td><td>25</td><td>27.6</td></tr><tr><td>Move Box</td><td>29</td><td>23</td><td>32</td><td>19</td><td>25</td><td>23</td><td>26</td><td>14</td><td>24</td><td>28</td><td>20</td><td>27</td><td>24.2</td></tr><tr><td>N-Queens</td><td>14</td><td>26</td><td>35</td><td>15</td><td>24</td><td>24</td><td>23</td><td>13</td><td>15</td><td>10</td><td>27</td><td>36</td><td>21.8</td></tr><tr><td>Number Slide</td><td>41</td><td>22</td><td>38</td><td>43</td><td>32</td><td>27</td><td>31</td><td>46</td><td>27</td><td>35</td><td>32</td><td>34</td><td>34.0</td></tr><tr><td>Rotten Fruits</td><td>27</td><td>31</td><td>29</td><td>40</td><td>42</td><td>35</td><td>25</td><td>39</td><td>27</td><td>33</td><td>29</td><td>30</td><td>32.3</td></tr><tr><td>Rubik&#x27;s Cube</td><td>48</td><td>35</td><td>34</td><td>39</td><td>43</td><td>33</td><td>26</td><td>36</td><td>41</td><td>41</td><td>37</td><td>37</td><td>38.3</td></tr><tr><td>Think A Dot</td><td>31</td><td>32</td><td>31</td><td>53</td><td>40</td><td>26</td><td>33</td><td>47</td><td>34</td><td>34</td><td>41</td><td>27</td><td>37.5</td></tr><tr><td>Tower of Hanoi</td><td>17</td><td>20</td><td>29</td><td>22</td><td>21</td><td>18</td><td>29</td><td>23</td><td>29</td><td>22</td><td>26</td><td>23</td><td>23.25</td></tr><tr><td>Water Jugs</td><td>17</td><td>24</td><td>13</td><td>37</td><td>17</td><td>16</td><td>15</td><td>28</td><td>41</td><td>42</td><td>21</td><td>18</td><td>24.1</td></tr><tr><td>Wheel of Fortune</td><td>25</td><td>27</td><td>27</td><td>19</td><td>28</td><td>22</td><td>24</td><td>20</td><td>23</td><td>19</td><td>27</td><td>31</td><td>24.3</td></tr><tr><td>Wood 
Slide</td><td>21</td><td>24</td><td>15</td><td>16</td><td>21</td><td>28</td><td>20</td><td>26</td><td>37</td><td>30</td><td>22</td><td>23</td><td>23.6</td></tr><tr><td>Average</td><td>30.3</td><td>28.7</td><td>30.2</td><td>30.9</td><td>31.7</td><td>27.6</td><td>28.1</td><td>31.2</td><td>29.1</td><td>27.6</td><td>28.5</td><td>28.2</td><td>-</td></tr></table>
147
+
148
+ Table 2: Accuracy scores across all the puzzles for the various multimodal language models. We divide the table into groups for ease of reading. Claude 3 indicates the Claude 3 Opus model, I-BLIP indicates the Instruct-BLIP model, and Qwen2 indicates Qwen2-VL-Instruct.
149
+
150
+ a Tower of Hanoi game ... Options: (A) 1 (B) 2 (C) 6 (D) 3. Answer: Let's think step by step. We create the prompt for eCoT in a similar fashion with its respective prompting instruction. We generate the output using a temperature of 0 which is equivalent to greedy decoding (Kojima et al., 2022; Wei et al., 2022).
151
+
152
+ Instruct-BLIP: We follow the multi-choice question-answering setup recommended in the original Instruct-BLIP work (Dai et al., 2023) for evaluation. The prompt is: Question: You are playing a Tower of Hanoi ... Options: (a) 1 (b) 2 (c) 6 (d) 3. Short Answer: The output generated from the model is constrained to be within the answer choices using a vocabulary restriction method. The answer choice with the highest log-likelihood is chosen as the prediction.
153
+
154
+ # 4.3 Main Results
155
+
156
+ We report the main results for all the models across all the puzzles in Table 2. The Board Tiling puzzle has a random baseline performance of $50\%$. All other puzzles have a random baseline performance of $25\%$. The overall random baseline stands at $26.4\%$. We notice that the performance on a significant number of these puzzles, across all the models, is close to the respective random baselines of $50\%$ and $25\%$. In the CoT setup, Claude 3 Opus achieves the best average score of $30.9\%$. The GPT-4V model in the eCoT setup achieves the best overall score with an average accuracy of $31.7\%$, which is only around $5\%$ better than random. The other models perform slightly worse, with GPT-4o obtaining
157
+
158
+ a best of $28.7\%$, Gemini Pro obtaining a best of $30.2\%$, Claude 3 obtaining a best of $31.2\%$, Instruct-BLIP obtaining a best of $29.1\%$ with the 7B model, LLaVA obtaining $28.5\%$, and Qwen2 obtaining $28.2\%$.
159
+
160
+ We report the average score of the models for each puzzle in the right-most column of Table 2. This helps us identify which puzzles are easier and which are harder for the models. We found that Rubik's Cube and Think A Dot have the highest average scores over random, implying that these two puzzles are comparatively the easiest for the models. Conversely, the average performance on $N$-Queens and Colour Hue is the lowest, signifying that the models found them to be the hardest.
161
+
162
+ In terms of absolute score, we find the highest performing experiment to be GPT-4V CoT in the Calendar puzzle, where it achieves an accuracy of $57\%$ . We do not find any puzzle where the accuracy is higher than $60\%$ . We conclude that multimodal language models find the task of visual algorithmic problem-solving to be very challenging. Even though they have achieved remarkable performance in many tasks, they still have some way to go in performing complex reasoning tasks defined over vision, language, mathematics, and algorithms.
163
+
164
+ # 4.4 Ontological Analysis
165
+
166
+ We present the results across ontological categories in Table 3. The obtained results suggest some interesting patterns across the ontological structure. Closed models mostly perform better across the visual features of colour, position, and
167
+
168
+ <table><tr><td rowspan="2"></td><td rowspan="2">GPT-4V</td><td colspan="3">CoT</td><td colspan="4">eCoT</td><td rowspan="2">I-BLIP 7B</td><td rowspan="2">I-BLIP 13B</td><td rowspan="2">LLaVA 13B</td><td rowspan="2">Qwen2 7B</td></tr><tr><td>GPT-4o</td><td>Gemini Pro</td><td>Claude 3</td><td>GPT-4V</td><td>GPT-4o</td><td>Gemini Pro</td><td>Claude 3</td></tr><tr><td colspan="13">Visual Features</td></tr><tr><td>Colour</td><td>35.0</td><td>32.8</td><td>32.8</td><td>33.2</td><td>34.4</td><td>30.2</td><td>31.4</td><td>32.8</td><td>31.2</td><td>29.2</td><td>32.1</td><td>30.8</td></tr><tr><td>Position</td><td>30.3</td><td>28.7</td><td>30.2</td><td>30.9</td><td>31.7</td><td>27.6</td><td>28.1</td><td>31.2</td><td>29.1</td><td>27.6</td><td>28.5</td><td>29.2</td></tr><tr><td>Shape/Size</td><td>23.9</td><td>26.8</td><td>25.6</td><td>30.0</td><td>27.6</td><td>24.4</td><td>27.1</td><td>29.5</td><td>29.8</td><td>27.8</td><td>25.9</td><td>26.4</td></tr><tr><td>Text</td><td>33.2</td><td>27.0</td><td>28.5</td><td>34.7</td><td>31.8</td><td>23.5</td><td>28.0</td><td>33.2</td><td>25.7</td><td>28.0</td><td>25.0</td><td>28.2</td></tr><tr><td colspan="13">Algorithmic Features</td></tr><tr><td>Arithmetic</td><td>30.3</td><td>28.7</td><td>30.2</td><td>30.9</td><td>31.7</td><td>27.6</td><td>28.1</td><td>31.2</td><td>29.1</td><td>27.6</td><td>28.5</td><td>29.2</td></tr><tr><td>Boolean Logic</td><td>27.6</td><td>31.5</td><td>29.4</td><td>35.4</td><td>32.1</td><td>28.0</td><td>29.8</td><td>32.4</td><td>31.1</td><td>29.1</td><td>31.8</td><td>30.5</td></tr><tr><td>Combinatorics</td><td>37.5</td><td>29.5</td><td>33.0</td><td>38.0</td><td>33.0</td><td>27.0</td><td>34.5</td><td>37.5</td><td>22.0</td><td>30.0</td><td>30.5</td><td>33.0</td></tr><tr><td>Graphs</td><td>33.6</td><td>30.8</td><td>32.2</td><td>30.8</td><td>35.0</td><td>28.4</td><td>29.4</td><td>28.6</td><td>27.2</td><td>29.6</td><td>28.4</td><td>30.2</td></tr><tr><td>Optimization</td><td>25.6</td><td>25.0</td><td>28.1</td><td>26.6</td><td>27.0</td><td>26.4</td><td>25.2</td><td>29.0</td><td>29.1</td><td>28.1</td><td>25.6</td><td>25.7</td></tr><tr><td>Search</td><td>26.3</td><td>26.4</td><td>28.4</td><td>28.3</td><td>28.1</td><td>24.9</td><td>26.4</td><td>27.6</td><td>27.8</td><td>26.1</td><td>26.0</td><td>28.1</td></tr><tr><td>Sets</td><td>31.0</td><td>27.3</td><td>33.0</td><td>39.7</td><td>34.3</td><td>29.7</td><td>30.7</td><td>40.7</td><td>30.0</td><td>31.0</td><td>34.7</td><td>30.7</td></tr></table>
169
+
170
+ Table 3: Accuracy scores across all the ontological categories for the various multimodal language models.
171
+
172
+ <table><tr><td></td><td colspan="2">GPT-4V</td><td colspan="2">Gemini Pro</td></tr><tr><td></td><td>w/o</td><td>w/</td><td>w/o</td><td>w/</td></tr><tr><td>Calendar</td><td>51</td><td>43</td><td>30</td><td>31</td></tr><tr><td>Water Jugs</td><td>17</td><td>23</td><td>15</td><td>12</td></tr><tr><td>Checker Move</td><td>34</td><td>45</td><td>25</td><td>32</td></tr><tr><td>Clock</td><td>29</td><td>74</td><td>21</td><td>43</td></tr><tr><td>Move Box</td><td>25</td><td>61</td><td>26</td><td>21</td></tr><tr><td>Number Slide</td><td>32</td><td>53</td><td>31</td><td>39</td></tr><tr><td>Tower of Hanoi</td><td>19</td><td>32</td><td>18</td><td>19</td></tr></table>
173
+
174
+ text, but perform worse on shape/size.
175
+
176
+ The best-performing model, GPT-4V eCoT, performs much better on algorithmic topics such as combinatorics, graphs, and sets compared to topics such as optimization and search. The results suggest that optimization and search are in general the most difficult topics across all the models.
177
+
178
+ # 4.5 Reasoning with Guided Vision
179
+
180
+ Our current experimental setup doesn't disentangle the visual perception stage from the algorithmic reasoning stage. In the original setup, models must identify the various aspects and characteristics of the visual context before the appropriate algorithm can be applied. To minimize the effect of the bottleneck in the visual perception stage, we conduct a guided vision experiment, where we additionally provide detailed descriptions of the image as part of the language context. In this setup, errors from the visual perception stage are minimized, so models can focus solely on the algorithmic stage for solving the question.
181
+
182
+ We report the results in Table 4. The upper part of the table contains the puzzles where the algorithmic reasoning itself is difficult, as even with the language-guided visual context the model cannot improve its scores. For GPT-4V, the
183
+
184
+ Table 4: Reasoning with Guided Vision. w/o and w/ indicate without and with the guided vision context, respectively.
185
+
186
+ <table><tr><td rowspan="2">Puzzle</td><td colspan="3">PoT</td></tr><tr><td>GPT-4V</td><td>GPT-4o</td><td>Qwen2 7B</td></tr><tr><td>Board Tile</td><td>47.0</td><td>53.0</td><td>28.7</td></tr><tr><td>Calendar</td><td>50.0</td><td>26.0</td><td>51.0</td></tr><tr><td>Chain Link</td><td>26.0</td><td>26.0</td><td>29.0</td></tr><tr><td>Checker Move</td><td>28.0</td><td>30.0</td><td>29.0</td></tr><tr><td>Clock</td><td>23.0</td><td>35.0</td><td>24.0</td></tr><tr><td>Colour Hue</td><td>25.0</td><td>26.0</td><td>24.0</td></tr><tr><td>Map</td><td>34.0</td><td>31.0</td><td>23.0</td></tr><tr><td>Maze</td><td>30.0</td><td>26.0</td><td>33.0</td></tr><tr><td>Move Box</td><td>32.0</td><td>28.0</td><td>25.0</td></tr><tr><td>N Queens</td><td>23.0</td><td>25.0</td><td>24.0</td></tr><tr><td>Number Slide</td><td>38.0</td><td>28.0</td><td>30.0</td></tr><tr><td>Rotting Kiwi</td><td>33.0</td><td>28.0</td><td>31.0</td></tr><tr><td>Rubik&#x27;s Cube</td><td>50.0</td><td>19.0</td><td>22.0</td></tr><tr><td>Think Dot</td><td>34.0</td><td>24.0</td><td>31.0</td></tr><tr><td>Tower of Hanoi</td><td>24.0</td><td>27.0</td><td>32.0</td></tr><tr><td>Water Jugs</td><td>18.0</td><td>26.0</td><td>30.0</td></tr><tr><td>Wheel of Fortune</td><td>25.0</td><td>31.0</td><td>20.0</td></tr><tr><td>Wood Slide</td><td>23.0</td><td>29.0</td><td>30.0</td></tr><tr><td>Average</td><td>31.3</td><td>28.8</td><td>28.7</td></tr></table>
187
+
188
+ Table 5: Accuracy scores across all the puzzles for the various multimodal language models using PoT (Chen et al., 2023).
189
+
190
+ lower part of the table indicates puzzles where the guided vision setup leads to a large improvement in performance, suggesting a significant bottleneck in the visual perception stage. However, even then the scores do not come anywhere close to 100, suggesting that the algorithmic reasoning stage presents substantial challenges even in the presence of gold visual context.
191
+
192
+ # 4.6 Reasoning Using Programs
193
+
194
+ We further experimented with the use of programs (Chen et al., 2023) for selected models, namely GPT-4V, GPT-4o, and Qwen2, to solve puzzles in ALGOPUZZLEVQA. We present the results in Table 5. The results are comparable to those of the other prompting methods shown in Table 2. Specifically, GPT-4V obtained $31.3\%$ using PoT as compared to $31.7\%$ using eCoT, GPT-4o obtained
195
+
196
+ $28.8\%$ using PoT as compared to $28.7\%$ using CoT, and Qwen2 obtained $28.7\%$ using PoT as compared to $28.2\%$ using CoT.
197
+
198
+ # 5 Conclusion
199
+
200
+ In this study, we introduce ALGOPUZZLEVQA, a novel dataset comprising puzzles that demand multimodal reasoning over vision, language and algorithms. Unlike existing multimodal reasoning datasets such as ScienceQA or MMMU, the challenges in ALGOPUZZLEVQA do not hinge on possessing domain-specific knowledge for reasoning. Instead, solving the problems in this dataset necessitates applying logical algorithmic steps to novel problem scenarios. To construct this dataset, we devised an ontology encompassing i) visual reasoning features such as colours, positions, shapes and sizes of puzzle components; and ii) algorithmic reasoning features such as arithmetic, combinatorics, optimization, etc. Each puzzle in our dataset comprises multiple categories from the visual and algorithmic ontology. Through rigorous experiments with LLMs such as GPT-4V, Gemini-Pro, Claude 3, InstructBLIP, and LLaVA, we observed a consistent struggle in achieving satisfactory performance on ALGOPUZZLEVQA, emphasizing the formidable hurdles in multimodal reasoning for algorithmic puzzle-solving. Given that each puzzle is annotated with specific visual and algorithmic features based on our ontology, we draw insights into the precise skills lacking in LLMs to address particular problems. We also analyzed whether the models falter on this dataset due to deficiencies in the visual recognition setup or the algorithmic reasoning setup. While furnishing the models with accurate visual guidance of the puzzles enhances performance in some cases, achieving high-level performance in our proposed complex reasoning tasks remains an elusive goal.
201
+
202
+ # 6 Limitations
203
+
204
+ We have currently considered many puzzles that are popular in various recreational or academic settings. There are also other interesting puzzles beyond our compiled list which can be used to assess the complex reasoning abilities of LLMs. We aim to explore them and enlarge our suite of puzzles as future work. Additionally, the construction of the algorithmic ontology could be expanded to consider more fine-grained categories.
205
+
206
+ # References
207
+
208
+ Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433.
209
+ Kenneth Appel and Wolfgang Haken. 1977. The solution of the four-color-map problem. Scientific American, 237(4):108-121.
210
+ Max Black. 1948. Critical thinking. an introduction to logic and scientific method. Philosophy, 23(86).
211
+ Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2023. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research.
212
+ Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A. Smith, and Joshua B. Tenenbaum. 2023. Are deep neural networks smarter than second graders? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10834-10844.
213
+ Yew Ken Chia, Vernon Toh, Deepanway Ghosal, Li-dong Bing, and Soujanya Poria. 2024. PuzzleVQA: Diagnosing multimodal reasoning challenges of language models with abstract visual patterns. In Findings of the Association for Computational Linguistics ACL 2024, pages 16259-16273, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
214
+ Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Albert Li, Pascale Fung, and Steven C. H. Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. ArXiv, abs/2305.06500.
215
+ Leonhard Euler. 1741. Solutio problematis ad geometriam situs pertinentis. Commentarii academiae scientiarum Petropolitanae, pages 128-140.
216
+ Google Gemini Team. 2023. Gemini: A family of highly capable multimodal models.
217
+ Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics.
218
+ Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913.
219
+
220
+ Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. Conference on Computer Vision and Pattern Recognition (CVPR).
221
+ Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988-1997.
222
+ Donald E Knuth. 2000. Dancing links. arXiv preprint cs/0011047.
223
+ Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems, volume 35, pages 22199-22213. Curran Associates, Inc.
224
+ Boris A Kordemsky. 1992. The Moscow Puzzles: 359 Mathematical Recreations. Courier Corporation.
225
+ Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023. Improved baselines with visual instruction tuning. In NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following.
226
+ Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS).
227
+ Nathan S Mendelsohn. 2004. Tiling with dominoes. The College Mathematics Journal, 35(2):115-120.
228
+ Curtis G Northcutt, Anish Athalye, and Jonas Mueller. 2021. Pervasive label errors in test sets destabilize machine learning benchmarks. arXiv preprint arXiv:2103.14749.
229
+ OpenAI. 2023. Gpt-4v(ision) system card.
230
+ Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. 2019. Kvqa: Knowledge-aware visual question answering. In AAAI.
231
+ Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. 2022. A-okvqa: A benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, pages 146-162. Springer.
232
+ Ivan E Sutherland and Gary W Hodgman. 1974. Reentrant polygon clipping. Communications of the ACM, 17(1):32-42.
233
+
234
+ Georges Voronoi. 1908. Nouvelles applications des paramètres continus à la théorie des formes quadratiques. deuxième mémoire. recherches sur les paralléloèdres primitifs. Journal für die reine und angewandte Mathematik, 134:198-287.
235
+ Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191.
236
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.
237
+ Andrew Wiles. 1995. Modular elliptic curves and fermat's last theorem. Annals of mathematics, 141(3):443-551.
238
+ Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. 2023. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. arXiv e-prints, pages arXiv-2311.
239
+
240
+ # A Puzzles
241
+
242
+ We consider the following 18 puzzles; the corresponding generated data will be released under the MIT licence:
243
+
244
+ # A.1 Board Tiling
245
+
246
+ The puzzle is inspired by the Mutilated chessboard problem, originally posed by Max Black (Black, 1948): Suppose a standard $8*8$ chessboard has two diagonally opposite corners removed, leaving 62 squares. Is it possible to place 31 dominoes of size $2*1$ to cover all of these squares? The diagonally opposite corners in a standard $8*8$ chessboard are always of the same colour. Hence, the mutilated chessboard with 62 squares has 30 squares of one colour and 32 squares of the other colour. Now, the checkered pattern of the chessboard ensures that each $2*1$ domino must cover 1 dark-coloured square and 1 light-coloured square. It is thus impossible to place the dominoes to cover all the squares since the mutilated chessboard has an unequal number of dark and light-coloured squares.
247
+
248
+ A general result from Mendelsohn (2004) states that: a checkerboard originally had an even (odd)
249
+
250
+ ![](images/262920b96b2fed27779f17aa1703f14ae0d665068777a7b4e42cc7fed7a6cfe8.jpg)
251
+ Figure 6: Question: The checkerboard shown in the image was originally of $6 \times 9$ in dimension having a total of 54 squares. It uses two colours of squares, one light yellow and one dark yellow, in a chequered pattern. Two of the squares have been removed from the board in the position of the white coloured cells, as shown in the image. You have 26 dominoes of size $2 \times 1$ . You can use them as is or you can rotate them to use as a $1 \times 2$ domino. Is it possible to place all the 26 dominoes in the checkerboard to exactly cover all the remaining 52 squares? Answer Yes or No. Gold Answer: Yes
252
+ Figure 7: Question: The image shows the calendar of a month of a particular non-leap year. Which day of the week was on March 1 of that year? Gold Answer: Friday
253
+
254
+ number of squares. Two (one) randomly chosen squares are now removed from the board. If the mutilated board has $2 * m$ squares then it is tileable with $m$ dominoes of size $2 * 1$ if and only if it has an equal number of dark and light squares. We use this result to construct our puzzles. We choose the number of rows and columns in our checkerboard randomly between 4 and 9. We randomly remove 1 or 2 squares if the board has an odd or even number of squares, respectively. We question whether the resulting board is tileable with $m$ dominoes of size $2 * 1$. We determine the yes or no tileability answer based on the general result. We show an example of the puzzle in Figure 6.
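For illustration, a minimal sketch of how the gold answer follows from this result; the colour-parity convention and the removed squares in the example are illustrative assumptions, not the released generation code.

```python
def is_tileable(rows, cols, removed):
    """Mendelsohn (2004): a mutilated board with 2*m remaining squares is
    tileable by m dominoes of size 2*1 iff it has equally many dark and
    light squares. `removed` is a set of (row, col) cells taken off the board."""
    light = dark = 0
    for r in range(rows):
        for c in range(cols):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:   # colour parity of the checkered pattern
                light += 1
            else:
                dark += 1
    return light == dark

# A 6x9 board with two removed squares of opposite parity is tileable,
# matching the gold answer "Yes" of the instance in Figure 6.
print(is_tileable(6, 9, {(0, 0), (0, 1)}))  # -> True
```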
255
+
256
+ # A.2 Calendar
257
+
258
+ We design this puzzle to evaluate the visual-temporal reasoning abilities of foundation models. We provide the calendar snapshot of a particular month as the visual context. We then ask what day of the week a particular date fell on in either the previous, the same, or the next year. We also provide information about whether the years under consideration were leap years or not to make sure that the answer is exact. We use the Python calendar module to construct the instances of the puzzle. We show an example of the puzzle in Figure 7.
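The paper only states that the Python calendar module is used to construct the instances; the snippet below is an illustrative sketch of how such a question can be answered programmatically (2019 is just an example non-leap year whose November starts on a Friday, consistent with Figure 7).

```python
import calendar

def weekday_name(year, month, day):
    """Name of the weekday on which a given date falls."""
    return calendar.day_name[calendar.weekday(year, month, day)]

# "Which day of the week was March 1 of that year?" for a non-leap year
# such as 2019 gives Friday, the gold answer of the instance in Figure 7.
print(weekday_name(2019, 3, 1))  # -> Friday
```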
259
+
260
+ <table><tr><td colspan="7">November</td></tr><tr><td>Mon</td><td>Tue</td><td>Wed</td><td>Thu</td><td>Fri</td><td>Sat</td><td>Sun</td></tr><tr><td></td><td></td><td></td><td></td><td>1</td><td>2</td><td>3</td></tr><tr><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td></tr><tr><td>11</td><td>12</td><td>13</td><td>14</td><td>15</td><td>16</td><td>17</td></tr><tr><td>18</td><td>19</td><td>20</td><td>21</td><td>22</td><td>23</td><td>24</td></tr><tr><td>25</td><td>26</td><td>27</td><td>28</td><td>29</td><td>30</td><td></td></tr></table>
261
+
262
+ # A.3 Checker Move
263
+
264
+ The puzzle generally known as Toads and Frogs<sup>3</sup> was invented by the mathematician Richard Guy. We start with a grid of length $n$ , with $n - 1$ checkers of either red or green colour occupying $n - 1$ positions. The goal is to rearrange the checkers into a given desired position. The rearrangement is constrained by the following rules: i) green checkers only move rightward; ii) red checkers only move leftward; iii) every move is either 1) a slide to the adjacent empty square, or 2) a jump over one position to an empty square, provided the checker being jumped over is of a different colour; iv) each grid position can accommodate a maximum of one checker at any time.
265
+
266
+ We only consider final arrangements that can be reached from the starting arrangements. We use breadth-first search constrained upon the movement rules to derive the solution.
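A compact illustrative sketch of such a breadth-first search over board states encoded as strings; the encoding, the helper name, and the example start and goal configurations are our own assumptions.

```python
from collections import deque

def min_checker_moves(start, goal):
    """BFS over board states such as 'GG_RR', where 'G' moves right, 'R' moves
    left and '_' is the empty square; returns the minimum number of moves, or
    -1 if the goal configuration is unreachable."""
    def neighbours(state):
        s = list(state)
        e = state.index("_")
        # slide and jump for green (rightward-moving) checkers
        if e >= 1 and s[e - 1] == "G":
            yield state[:e - 1] + "_G" + state[e + 1:]
        if e >= 2 and s[e - 2] == "G" and s[e - 1] == "R":
            t = s[:]; t[e - 2], t[e] = "_", "G"; yield "".join(t)
        # slide and jump for red (leftward-moving) checkers
        if e + 1 < len(s) and s[e + 1] == "R":
            yield state[:e] + "R_" + state[e + 2:]
        if e + 2 < len(s) and s[e + 2] == "R" and s[e + 1] == "G":
            t = s[:]; t[e + 2], t[e] = "_", "R"; yield "".join(t)

    queue, seen = deque([(start, 0)]), {start}
    while queue:
        state, dist = queue.popleft()
        if state == goal:
            return dist
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1

# Assuming the classic full swap on a 5-square board with 2 green and 2 red
# checkers, the answer is 8 moves, matching the gold answer in Figure 8.
print(min_checker_moves("GG_RR", "RR_GG"))  # -> 8
```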
267
+
268
+ # A.4 Chain Link
269
+
270
+ This puzzle is a modified version of a problem that appeared in Kordemsky (1992). It states that you are given chain segments of different lengths. Closed pieces can be cut open and open pieces can be welded together in a certain amount of time. You need to find out the least time required to create the longest possible circular necklace. We show an example of the puzzle in Figure 9. The puzzle can be solved optimally as follows (a short code sketch of the procedure is given after the steps):
271
+
272
+ 1. Check if the number of open links is equal to or greater than the number of closed segments.
273
+
274
+ ![](images/e61e1625fc8b00f0aaebd31d1e83989593f75c05281f4389262e0f7f740fd522.jpg)
275
+ Starting Configuration
276
+
277
+ ![](images/e138af8fa3e52ec9ef4a2ff8ee2397c78f922899a72e4339cf3ad9d3b4023e38.jpg)
278
+ Ending Configuration
279
+
280
+ Initially, in our example, the number of open links is 3 and the number of closed segments is 9. The condition is not satisfied and hence we move to the next step.
281
+
282
+ 2. Cut units from the segments of the least length until the first condition is satisfied. In our example, if you cut open the two segments of length 1 and both the units in the segment with length 2 then you have a total of 7 open links and 6 closed segments, satisfying the first condition.
283
+ 3. Calculate the time considering the total number of cuts and welds. We performed 4 cut operations and then we will need 7 welding operations to close all the open links. The time required is $4 * 5 + 7 * 2 = 34$ minutes. This is the minimum possible time to create the necklace.
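A small sketch of the greedy procedure described above; the function name is ours, and the closed-segment lengths beyond the two segments of length 1 and the segment of length 2 are illustrative assumptions.

```python
def necklace_min_time(closed_segments, open_links, cut_time=5, weld_time=2):
    """Greedy procedure described above: sacrifice the smallest fully-closed
    segments (cutting every piece in them open) until the number of open links
    is at least the number of remaining closed segments, then weld every open
    link shut. Returns the total time."""
    remaining = sorted(closed_segments)
    cuts = 0
    while open_links < len(remaining):
        smallest = remaining.pop(0)   # cut open every piece of the smallest segment
        cuts += smallest
        open_links += smallest
    return cuts * cut_time + open_links * weld_time

# Figure 9 style instance: nine fully-closed segments (the smallest being two
# of length 1 and one of length 2) plus 3 already-open links gives 34 minutes.
print(necklace_min_time([1, 1, 2, 3, 3, 4, 4, 5, 6], open_links=3))  # -> 34
```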
284
+
285
+ # A.5 Clock
286
+
287
+ We design an instance of a visual-temporal reasoning puzzle using clock times. We consider an analog clock with the hours and minutes hands and show a randomly chosen time as the current time. We describe an event which happened (will happen)
288
+
289
+ ![](images/1304770da649587046eb0c3a8df7fb71ddf3e385fb2191d778c88fae35c90c2b.jpg)
290
+ Figure 9: Question: Alice has 12 segments of chains of different lengths as shown in the image. The total length of all the segments combined is 32 pieces. She has a saw machine with which a closed piece can be cut opened. She also has a welding machine with which an open piece can be closed. Each cut takes 5 minutes and each welding takes 2 minutes. Initially, she has 3 segments each with 1 open piece as shown in the image. All the other pieces are closed. She now wants to make the longest possible necklace using all the available 32 pieces. Each piece in the necklace would be connected to exactly two other pieces. This would require cutting open some pieces and then joining all the resulting segments together. What is the minimum time in which she can create the necklace? Gold Answer: 34
291
+
292
+ ![](images/d591fc8b6517d741274f3a4fd9c9591a0b9ec994319f6b06fda31f2eff738849.jpg)
293
+ Figure 8: Question: A checker game is being played on a grid of 5 squares with 2 green and 2 red checkers. Initially, the checkers are arranged as shown in the starting configuration with the 4 checkers occupying 4 squares and one unoccupied square. Green checkers only move rightward and red checkers only move leftward. Every move is either i) a slide to the adjacent empty square, or ii) a jump over one position to an empty square, provided the checker being jumped over is of a different colour. Each square can accommodate a maximum of one checker at any time. How many moves are required to reach the ending configuration from the starting configuration following the specified rules? Gold Answer: 8
294
+ Figure 10: Question: Alexis came to an event 3 minutes ago. The current time is shown on the clock. The clock is a standard analog clock without the seconds hand. What was the time when Alexis came to the event? Gold Answer: 9:19
295
+
296
+ $h$ hours and $m$ minutes ago (later). We then ask when the event happened or when it is going to happen. We determine the answer using modular arithmetic. We show an example in Figure 10.
297
+
298
+ # A.6 Colour Hue
299
+
300
+ We show an example in Figure 11. The puzzle is inspired by the game I Love Hue<sup>4</sup>. A rectangular board consists of $m \times n$ coloured tiles arranged in a rectangular grid. The board has an ideal state where the colours are arranged in a way such that each row and column shows a monotonic change in the colour shade. To create this colour arrangement, we fix the RGB colour codes of the four corner tiles
301
+
302
+ ![](images/3052e9cb4de26fbd4318f0d9531171de7e518f70fe9fc631576accebff4163e8.jpg)
303
+ (A)
304
+
305
+ ![](images/96de079814a6a682119278d99b26a1327f96183bceb837530e23e35216a8beb5.jpg)
306
+ (B)
307
+
308
+ and perform linear interpolation between them to determine the colour codes of the intermediate tiles. We then randomly shuffle some of the tiles to create a non-ideal arrangement of the board. We ask for the minimum number of tile swaps required to reach the ideal state from the non-ideal state. The answer can be determined using the selection sort algorithm, which minimizes the number of swaps required to sort an unsorted array. We consider the flattened version of the ideal state of the board to be the sorted state. The flattened version of the non-ideal state of the board is considered as the unsorted state. We then determine the answer using selection sort.
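An illustrative sketch of this computation on the flattened boards; the encoding of tiles as distinct labels and the small example are our own.

```python
def min_swaps_to_ideal(current, ideal):
    """Minimum number of tile swaps needed to turn the flattened `current`
    board into the flattened `ideal` board (all tiles are distinct)."""
    target = {tile: i for i, tile in enumerate(ideal)}
    board = list(current)
    swaps = 0
    for i in range(len(board)):            # selection-sort style pass
        while target[board[i]] != i:       # put the correct tile at slot i
            j = target[board[i]]
            board[i], board[j] = board[j], board[i]
            swaps += 1
    return swaps

# A 2x3 toy board with tiles labelled 0..5 (Figure 4 uses a 5x4 board).
print(min_swaps_to_ideal([2, 0, 1, 3, 5, 4], [0, 1, 2, 3, 4, 5]))  # -> 3
```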
309
+
310
+ # A.7 Map Colouring
311
+
312
+ The four colour theorem is a famous result in mathematics which states that four colours are sufficient to colour the regions of any planar map such that no two adjacent regions have the same colour. The conjecture was first proposed in the 1850s but a formal proof (Appel and Haken, 1977) was first developed almost 120 years later.
313
+
314
+ We create a Voronoi diagram from a finite set of points (Voronoi, 1908) followed by polygon clipping (Sutherland and Hodgman, 1974) to clip the Voronoi diagram between the finite regions of $(x,y)$ , where $0 \leq x \leq 1$ and $0 \leq y \leq 1$ . This region constitutes our input map. We represent the map as a graph with regions as nodes and their adjacent regions as the adjacency list. We use a graph colouring strategy based on Algorithm X (Knuth, 2000) with four colours to find the exhaustive solutions for colouring the map. We fix the colours of some of the regions of the map and find out the number of ways the remaining regions can be coloured from the exhaustive list of solutions. We
315
+
316
+ ![](images/6a78b855a2aad984f6c9751c1151cf76f338533ee3ce198113e4f24f043e493f.jpg)
317
+ Figure 11: Question: A $5 \times 4$ board consists of 20 different coloured tiles. A random state of the board is shown in (A). The ideal state of the board is shown in (B). A swap consists of selecting any two tiles in the board and switching their positions. What is the minimum number of swaps required to restore the ideal state of the board from (A)? Gold Answer: 4
318
+ Figure 12: Question: You are given an incomplete map of a country having 15 different regions. The objective is to colour the regions of the map using only the four available colours: red, green, blue and yellow, such that no two adjacent regions have the same colour. Adjacent regions are defined as two regions that share a common boundary of non-zero length. The regions indicated by numbers 1 to 10 have already been coloured, as shown in the image. The regions indicated by numbers 11 to 15 are shown in white as they are yet to be coloured. You need to assign colours to these regions in a way such that it doesn't violate the objective. Each unique colour combination of the regions would result in a unique complete map. How many unique complete maps can be created by colouring all the white regions starting from the given incomplete map? Gold Answer: 8
319
+
320
+ show an example of this puzzle in Figure 12. We construct the puzzles in our dataset such that the number of masked regions is between 2 and 6 and the number of ways of colouring them is between 1 and 8.
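The released solver is based on Algorithm X; the simpler backtracking sketch below, with an illustrative toy adjacency structure of our own, counts the valid completions of a partially coloured map in the same spirit.

```python
def count_completions(adjacency, fixed, colours=("red", "green", "blue", "yellow")):
    """Count the ways to colour the unfixed regions so that no two adjacent
    regions share a colour. `adjacency` maps region -> set of neighbours,
    `fixed` maps already-coloured regions to their colour."""
    free = [r for r in adjacency if r not in fixed]
    assignment = dict(fixed)

    def backtrack(i):
        if i == len(free):
            return 1
        region, total = free[i], 0
        for colour in colours:
            if all(assignment.get(nb) != colour for nb in adjacency[region]):
                assignment[region] = colour
                total += backtrack(i + 1)
                del assignment[region]
        return total

    return backtrack(0)

# A toy 5-region map: region 1 is fixed to red, regions 2-5 are still white.
adjacency = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5}, 4: {2, 5}, 5: {3, 4}}
print(count_completions(adjacency, fixed={1: "red"}))  # -> 42
```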
321
+
322
+ # A.8 Maze Solving
323
+
324
+ This puzzle is a typical maze path-finding problem. We start from a square/rectangular grid consisting of all black cells (walls). We define the first cell of the second row as the entrance to the maze. We then perform a directionally randomized depth-first search (DFS) with backtracking from the entrance cell to create the white cells (paths) through the maze. We also make sure that at any point of the maze, either the maximum length or the maximum width of the path is 1 cell. This method ensures that there are no grids of white cells in the maze with both length and width greater than 2. We finally assign the last column of the second last row or the last row of the second last column as the exit cell.
325
+
326
+ After constructing the maze, we use breadth-first search (BFS) between the entrance cell and the exit cell to find the shortest / optimal path. We
327
+
328
+ ![](images/d8ffe9b537f593508a1376e3d9641527e52a4ec3af24976495fc2b8d7db801f7.jpg)
329
+ Figure 13: Question: This is maze having $13 \times 13$ cells. The empty cells are coloured white and the obstacle cells are coloured black. From an empty cell, you can only move up, down, left, or right to another adjacent empty cell. You cannot move diagonally between two empty cells and cannot step into a cell with an obstacle. The entry cell of the maze is shown with the green arrow. The exit cell of the maze is shown with the blue arrow. Suppose you have found the most optimal path in the maze between the entrance and exit, where you need to go through the least number of empty cells and you need to make the least number of left and right turns. What is the combined number of left and right turns do you need to make in this optimal path? Gold Answer: 12
330
+
331
+ then randomly select one question for this instance among the two choices: i) What is the number of left/right/total turns do you need to make in this optimal path? or ii) How many cells do you need to visit in this optimal path including the entrance and exit cells? We find out the answer to the question from the optimal path obtained from BFS. We show an example of the puzzle in Figure 13.
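An illustrative BFS sketch for the second question type (the number of cells on the optimal path); counting left and right turns additionally requires tracking the incoming direction along the path. The grid encoding and the toy maze are our own.

```python
from collections import deque

def shortest_path_cells(grid, start, exit_cell):
    """BFS over the white cells of the maze; returns the number of cells on a
    shortest path from `start` to `exit_cell`, both included. `grid` holds 0
    for a white (free) cell and 1 for a black (wall) cell."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 1)]), {start}
    while queue:
        (r, c), cells = queue.popleft()
        if (r, c) == exit_cell:
            return cells
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), cells + 1))
    return -1  # unreachable

# A tiny 5x5 maze (0 = path, 1 = wall); entrance at (1, 0), exit at (3, 4).
maze = [[1, 1, 1, 1, 1],
        [0, 0, 0, 1, 1],
        [1, 1, 0, 1, 1],
        [1, 1, 0, 0, 0],
        [1, 1, 1, 1, 1]]
print(shortest_path_cells(maze, (1, 0), (3, 4)))  # -> 7
```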
332
+
333
+ # A.9 Move Box
334
+
335
+ The puzzle is inspired by a LeetCode problem $^{5}$ . The problem setting is a game in which a person pushes boxes around in a warehouse trying to get them to target locations. The objective is to move the box to the target position in the minimum number of moves. The solution can be found using breadth-first search.
336
+
337
+ # A.10 N-Queens
338
+
339
+ The N-Queens problem is a famous chess problem often used as an example in various computer programming techniques. The objective is to place $N$ chess queens on an $N*N$ chessboard so that no two queens threaten each other. In other words, no two queens should share the same row, column, or diagonal. We consider $N = 8,9$ , and 10 for
340
+
341
+ ![](images/64e2b2ee25e4b72f565427091f2b030d8cd26791fd53be6636a5457b303fed7d.jpg)
342
+ Figure 14: Question: A storekeeper is a puzzle in which the player pushes boxes around in a warehouse trying to get them to target locations. The game is represented by a $6 \times 6$ grid of characters grid where each element is a wall, floor, or box. Your task is to move the box to the end flag under the following rules: 1. The box can be moved to an adjacent free cell by standing next to the box and then moving in the direction of the box by 1 grid. This is a push. 2. The player cannot walk through the box. What is the minimum number of pushes to move the box to the end flag? Gold Answer: 5
343
+
344
+ ![](images/4a78d4674450d3f6fc4ad5555e74ede7dd60c09f211512f925494f3c9b7bf5aa.jpg)
345
+ Figure 15: Question: You are given an $8 \times 8$ chessboard. The Manhattan distance between two squares in a chessboard is equal to the minimal number of orthogonal King moves between these squares on the otherwise empty board. The objective is to place 8 chess queens on this board so that no two queens threaten each other; i.e. no two queens share the same row, column, or diagonal. 6 queens have already been placed in some of the squares of the board, as shown in the image. Suppose you pick two squares to place the two remaining queen pieces in a way that fulfills the objective. What is the Manhattan distance between these two squares? Gold Answer: 5
346
+
347
+ which there are 92, 352, and 724 solutions, respectively. We use the well-known backtracking algorithm to create the solutions to the problem. For a solution, we show the exact position of randomly chosen $N - 2$ queens in the image. We ask what should be the Manhattan distance (in terms of the unit squares of the board) between the remaining
348
+
349
+ <table><tr><td>7</td><td>6</td><td>9</td><td>10</td></tr><tr><td>1</td><td>12</td><td>15</td><td></td></tr><tr><td>13</td><td>3</td><td>2</td><td>4</td></tr><tr><td>8</td><td>14</td><td>5</td><td>11</td></tr></table>
350
+
351
+ 2 queens when they are placed correctly to satisfy the objective.
352
+
353
+ In most cases, the other 2 queens can be arranged in only a single way, for which we can easily compute the Manhattan distance. In a small number of cases, the other 2 queens can be placed in two different ways to satisfy the non-threatening condition in all rows, columns, and diagonals. However, in both of these ways, the Manhattan distance between the last 2 queens is equal. So, we can have an exact answer to the question even though the arrangement could be distinct. We show an example of the puzzle in Figure 15.
354
+
355
+ # A.11 Number Slide
356
+
357
+ This puzzle is inspired by the mathematical toy known as the 15 Puzzle<sup>6</sup>. It is a sliding puzzle board of grid size $4 \times 4$ , having 15 tiles numbered 1 to 15, with one unoccupied position. Tiles can be moved by sliding them horizontally or vertically through the open position. A typical goal in the puzzle is to arrange the tiles in numerical order from left to right and top to bottom. We use grid sizes of $3 \times 3$ , $4 \times 4$ , or $5 \times 5$ and create a random arrangement of the tiles on the board and provide it as the visual context. We then create the question in one of the following styles:
358
+
359
+ - How many unique board positions can be reached after performing exactly $n$ moves?
360
+ - What is the maximum / minimum sum that can be achieved in a particular row / column after performing exactly $n$ moves?
361
+ - You perform $n$ moves where the open position is moved in the following way: up, left, .... What is the maximum / minimum / sum of the numbers in the row / column that now has the open position?
362
+
363
+ We compute the answer using breadth-first search in all the cases. We show an example of the puzzle in Figure 16.
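An illustrative sketch for the first question style, expanding the set of board states level by level for exactly $n$ moves (boards reached by different move sequences of the same length are deduplicated); the board encoding is our own.

```python
def positions_after_n_moves(board, n):
    """Count the distinct board configurations reachable after performing
    exactly `n` moves of the open position (0 marks the open tile)."""
    rows, cols = len(board), len(board[0])

    def neighbours(state):
        grid = [list(row) for row in state]
        r, c = next((i, j) for i in range(rows) for j in range(cols)
                    if grid[i][j] == 0)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
                yield tuple(tuple(row) for row in grid)
                grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]  # undo

    frontier = {tuple(tuple(row) for row in board)}
    for _ in range(n):
        frontier = {nxt for state in frontier for nxt in neighbours(state)}
    return len(frontier)

# A 3x3 board one move away from the solved state (0 is the open position).
start = [[1, 2, 3],
         [4, 5, 6],
         [7, 0, 8]]
print(positions_after_n_moves(start, 2))  # -> 6
```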
364
+
365
+ # A.12 Rotting Fruit
366
+
367
+ The puzzle is inspired by a LeetCode problem<sup>7</sup>. The problem states that there is a rectangular grid with some fruits. Initially, there is a single rotten fruit. As time passes, it influences surrounding fruits to become rotten. The objective is to find out the earliest time at which all fruits become rotten. A breadth-first search algorithm can be used to find the solution.
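An illustrative BFS sketch in the style of the well-known LeetCode solution; the grid encoding (0 empty, 1 fresh, 2 rotten) and the small example grid are our own and do not reproduce Figure 17.

```python
from collections import deque

def minutes_until_all_rotten(grid):
    """BFS from the initially rotten fruit(s): returns the number of minutes
    until no fresh fruit is left, or -1 if some fruit can never rot."""
    rows, cols = len(grid), len(grid[0])
    queue = deque((r, c, 0) for r in range(rows) for c in range(cols)
                  if grid[r][c] == 2)
    fresh = sum(row.count(1) for row in grid)
    minutes = 0
    while queue:
        r, c, minutes = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                grid[nr][nc] = 2      # the adjacent fresh fruit becomes rotten
                fresh -= 1
                queue.append((nr, nc, minutes + 1))
    return minutes if fresh == 0 else -1

# A small illustrative 3x3 grid with one rotten fruit in the top-left corner.
kiwis = [[2, 1, 1],
         [0, 1, 1],
         [1, 1, 1]]
print(minutes_until_all_rotten(kiwis))  # -> 4
```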
368
+
369
+ ![](images/ebb9cf51952892ba5d75da133014c037d326e976dce75bfe64a97b960eae006b.jpg)
370
+ Figure 16: Question: The board shown in the image is a sliding puzzle of $4 \times 4$ tile dimensions. It has 15 numbered tiles and one unoccupied (open) position. Tiles in the same row or column of the open position can be moved by sliding them horizontally or vertically, respectively. All tiles always stay and move inside the red boundary wall, as shown in the image. A move is defined as moving the open position by one tile unit in any available direction. You start from the board position shown in the image and perform exactly 1 move. What is the minimum sum that you can achieve across the top most row in the final board position? Gold Answer: 22
371
+ Figure 17: Question: You are given a $3 \times 3$ grid in which each cell can contain either no kiwi, one fresh kiwi, or one rotten kiwi. Every minute, any fresh kiwi that is 4-directionally adjacent to a rotten kiwi also becomes rotten. What is the minimum number of minutes that must elapse until no cell has a fresh kiwi? Gold Answer: 3
372
+
373
+ # A.13 Rubik's Cube
374
+
375
+ Rubik's Cube is a mathematical combination toy invented by Erno Rubik in 1974. We consider an initial state of the cube and show it as the visual context. The movements of the cube are generally denoted using the letters B, D, F, L, R, and U, denoting
376
+
377
+ ![](images/7a3c38ba46ef06198db9bf099bc68528c3e221dffdcc2fd61cb31e229ef210c2.jpg)
378
+ Figure 18: Question: A 3 * 3 Rubik's Cube has six different coloured panels: red, green, blue, yellow, orange, and grey. The initial state of the cube in terms of the different colour positions in its six faces is shown in the image. To represent the movements of the cube we use six letters: U for Up, D for Down, L for Left, R for Right, F for Front, B for Back. These letters are used in sequence where you need to perform each letter in the sequence from left to right. Each letter tells you to move that face clockwise by 90 degrees. A number 'n' immediately after a letter denotes that you need to move that face clockwise by 90 * n degrees. For example, 'U R3' would mean rotating the up face 90 degrees clockwise and then rotating the right face 270 degrees clockwise. You perform the move sequence 'D2 B' starting from the state shown in the image. What would be the number of small 1 * 1 red squares in the down face after completing the move sequence? Gold Answer: 2
379
+
380
+ clockwise movements of the back, down, front, left, right, and up faces, respectively. We first provide information about this notation in the textual context. We then ask for the number of small squares of one of the six colours on one of the six faces after completing a cube move sequence. The colour and the face in the question are chosen randomly for each instance. We show an example of the puzzle in Figure 18.
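+
+ We omit the full face-rotation simulation here; the sketch below only illustrates how the move notation described above (e.g. 'D2 B' or 'U R3') could be expanded into elementary clockwise quarter turns. It is our own illustration, not the dataset code:
+
+ ```python
+ def parse_moves(sequence):
+     """Expand a move string into (face, quarter_turns) pairs; a trailing number n
+     after a face letter means a clockwise rotation of 90 * n degrees."""
+     moves = []
+     for token in sequence.split():
+         face, count = token[0], token[1:]
+         moves.append((face, int(count) if count else 1))
+     return moves
+
+ print(parse_moves("D2 B"))  # [('D', 2), ('B', 1)]
+ print(parse_moves("U R3"))  # [('U', 1), ('R', 3)]
+ ```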
381
+
382
+ # A.14 Think A Dot
383
+
384
+ Think-a-Dot is a mathematical toy invented by Joseph Weisbecker<sup>8</sup>. It has three holes on its top through which a ball bearing can be dropped. It also has eight coloured disks, each displaying a blue or yellow face. When a ball is dropped through the toy, it flips the mechanisms of the disks it passes, and these mechanisms determine whether the ball is deflected to the left or the right. We show an example of the puzzle in Figure 19. We
385
+
386
+ ![](images/1515c769ecba94b6b7125a98516885bb3cea4449fe3282fc34ea50d0c8860d33.jpg)
387
+ Figure 19: Question: The toy shown in the figure has eight coloured disks on its front, and three holes on its top - left, right, and center - through which a ball bearing could be dropped. Each disk would display either a yellow or blue face. When a ball passes through a disc it tips the disk mechanism which flips the face colour. The tipping of the disc mechanism determines whether the ball would be deflected to the left or to the right. The vertical walls between the discs would then determine the path of motion of the ball. A dropped ball always passes through exactly one disc in each of the top and the bottom row. Depending on the configuration of the top three discs it may or may not pass through the middle row. Finally, when the ball falls to the bottom it would exit either to a hole on the left or the right of the device. Four balls are dropped in sequence through the following holes: left, left, right, right. Consider the toy configuration after all the balls have been dropped and they have exited from the bottom. How many yellow faces can be seen in total in all the rows now? Gold Answer: 6
388
+
389
+ start with an initial configuration of the toy with a specific combination of disk colours. We choose between 1 and 4 balls and a sequence for dropping them through the left, center and right holes. We ask how many blue / yellow disk faces can be seen in the top row / middle row / bottom row / all rows after dropping the balls in that sequence. We write an algorithm to determine the answer from the initial configuration and the sequence of drops, considering the state of the rows, the flipped disks and the positions of the walls.
390
+
391
+ # A.15 Tower of Hanoi
392
+
393
+ The Tower of Hanoi is a well-known mathematical game often used in teaching the fundamentals of computer programming. The game consists of 3 rods and $n$ disks of various diameters, which can slide onto any rod. In the original version of the puzzle, we start with all the disks stacked on one rod in order of decreasing size. The goal is
394
+
395
+ ![](images/c25e67e07bc90db46d4b3d525acdae1813c4eee1503aa1e33f2bf261990fd129.jpg)
396
+ Figure 20: Question: You are playing a Tower of Hanoi game with 3 rods and 5 disks of various diameters, which can slide onto any rod. You are given the starting and ending configuration of the game as shown in the top and the bottom of the image, respectively. The game has the following rules: i) Only one disk may be moved at a time; ii) Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod; and iii) No disk can be placed on top of a disk that is smaller than it. What is the minimum number of moves required to go from the starting to the ending configuration? Gold Answer: 2
397
+
398
+ to move the full stack of disks to another rod. Moving the disks between the rods is constrained by the following rules: (i) only one disk may be moved at a time; (ii) each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod; (iii) no disk can be placed on top of a disk that is smaller than it. With no other constraints, the original version of the puzzle can be solved in a minimum of $2^{n} - 1$ moves.
399
+
400
+ We consider the number of disks to be between 3 and 6 and generate the optimal solution for the original problem definition. We then select two random configurations of the game from this optimal solution that are at most $k = 6$ moves away from each other and treat them as the starting and ending configurations. As the original solution is optimal, the sequence of moves required to reach the ending configuration from the starting configuration is also optimal. We ask for the minimum number of moves required to reach the ending configuration from the starting configuration; the answer is $k$. We show an example of the puzzle in Figure 20.
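+
+ A minimal sketch of how the optimal move sequence and the intermediate configurations along it might be generated (our own illustration; rod indices and the sampled offsets are arbitrary):
+
+ ```python
+ def hanoi_moves(n, source=0, target=2, spare=1):
+     """Optimal move sequence (2**n - 1 moves) for the classic 3-rod puzzle."""
+     if n == 0:
+         return []
+     return (hanoi_moves(n - 1, source, spare, target)
+             + [(source, target)]
+             + hanoi_moves(n - 1, spare, target, source))
+
+ def configurations(n):
+     """Rod contents after every move of the optimal solution (disk n is the largest)."""
+     rods = [list(range(n, 0, -1)), [], []]
+     states = [tuple(tuple(r) for r in rods)]
+     for src, dst in hanoi_moves(n):
+         rods[dst].append(rods[src].pop())
+         states.append(tuple(tuple(r) for r in rods))
+     return states
+
+ states = configurations(5)           # 2**5 = 32 configurations along the optimal path
+ start, end = states[10], states[12]  # two configurations that are exactly 2 moves apart
+ ```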
401
+
402
+ # A.16 Water Jugs
403
+
404
+ This puzzle belongs to a class of measuring puzzles involving a finite collection of water jugs with integer capacities. We provide the initial amount of water present in each jug as the visual context.
405
+
406
+ ![](images/185740564d0b7d945d747ea64f1d4dafa1c47bcd52ac97154177f5a907f36527.jpg)
407
+ Figure 21: Question: You are given 3 jugs of capacities 6, 5, 1 litres. Initially, the amount of water that is contained in each jar is shown in the image. A single step of water pouring from one jug to another is constrained by the following rules: i) take a non-empty jug and pour water from it to another non-full jug until the first one becomes empty or the second one becomes full, and ii) no water can be spilt while pouring. The objective is to reach the amounts of 4, 3, 0 litres of water in the jugs from left to right, respectively. What is the minimum number of water pouring steps required to achieve the objective? Gold Answer: 1
408
+
409
+ We then ask how many steps of water pouring are required to reach a goal state defined in terms of specific quantities of water present in each jug. The water pouring steps are constrained by two rules: i) water can be poured from a non-empty jug to another non-full jug until the first one becomes empty or the second one becomes full, and ii) no water can be spilt during pouring.
410
+
411
+ We consider the number of jugs to be between 3 and 5, each initially holding between 1 and 14 litres of water. We create a pool of random goal states that preserve the total quantity of water, with each jug holding no more than its capacity. We only consider goal states that are reachable from the initial state, as verified using breadth-first search. We show an example of the puzzle in Figure 21.
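+
+ A rough sketch of the breadth-first search over jug states used to check reachability and count pouring steps (our own illustration; the example capacities and states are made up):
+
+ ```python
+ from collections import deque
+
+ def min_pour_steps(capacities, start, goal):
+     """Minimum number of pouring steps from `start` to `goal`, or -1 if unreachable."""
+     start, goal = tuple(start), tuple(goal)
+     queue, seen = deque([(start, 0)]), {start}
+     while queue:
+         state, steps = queue.popleft()
+         if state == goal:
+             return steps
+         for i in range(len(state)):         # pour from jug i ...
+             for j in range(len(state)):     # ... into jug j
+                 if i == j or state[i] == 0 or state[j] == capacities[j]:
+                     continue
+                 amount = min(state[i], capacities[j] - state[j])
+                 nxt = list(state)
+                 nxt[i] -= amount
+                 nxt[j] += amount
+                 nxt = tuple(nxt)
+                 if nxt not in seen:
+                     seen.add(nxt)
+                     queue.append((nxt, steps + 1))
+     return -1
+
+ print(min_pour_steps([6, 5, 1], [5, 3, 0], [4, 3, 1]))  # 1
+ ```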
412
+
413
+ # A.17 Wheel of Fortune
414
+
415
+ We design this spinning wheel puzzle to assess the spatial reasoning ability of foundation models. We sketch a wheel with 6, 8 or 10 segments with different colours and associate each of them with a prize. The angular span of the segments is chosen to be either uniform or random. We show the initial position of the wheel and the position of a
416
+
417
+ ![](images/c2a4a9ca9f1539bd6c781e16b69df063f2dcd6e6239c941b81f883bbc50943f1.jpg)
418
+ Figure 22: Question: A fortune wheel has 6 segments of different colour. The initial position of the wheel is shown in the figure. Each segment is associated with a prize as shown in the embedded text within the segment. The axis of rotation of the wheel passes through its center and is perpendicular to the surface of the wheel. You spin the wheel clockwise and it rotates 1695 degrees before stopping. You are going to win the prize for the segment that now falls in front of the brown arrow. What is your prize? Gold Answer: Laptop
419
+
420
+ fixed arrow as the visual context. We ask what the prize would be (from the segment in front of the arrow) after the wheel has been rotated by a certain number of degrees or full rotations in the clockwise / anti-clockwise direction. We determine the answer using simple rotational mechanics. We show an example of the puzzle in Figure 22.
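+
+ A small sketch of the rotational arithmetic, assuming segments are listed in clockwise order starting from the one initially in front of the arrow; the wheel, prizes and spans below are invented for illustration:
+
+ ```python
+ def winning_prize(segments, degrees_clockwise):
+     """Prize of the segment in front of the arrow after a clockwise spin."""
+     # The point now under the arrow started `degrees_clockwise` counter-clockwise
+     # from it, i.e. at this clockwise offset in the initial frame:
+     offset = (-degrees_clockwise) % 360
+     cumulative = 0.0
+     for prize, span in segments:
+         cumulative += span
+         if offset < cumulative:
+             return prize
+     return segments[-1][0]  # guard against floating-point round-off
+
+ wheel = [("Laptop", 60), ("Phone", 60), ("Car", 60),
+          ("Watch", 60), ("Bicycle", 60), ("Camera", 60)]
+ print(winning_prize(wheel, 1695))  # 1695 degrees = 4 full turns + 255 degrees clockwise
+ ```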
421
+
422
+ # A.18 Wood Slide
423
+
424
+ This puzzle is inspired by the Klotski $^{9}$ sliding puzzle. We consider a puzzle grid of size $5 \times 4$ units that has 9 wooden blocks of various sizes: one $2 \times 2$ , four $1 \times 2$ , two $2 \times 1$ , and two $1 \times 1$ . The other two spaces are empty. There is a version of the Klotski puzzle known as the Pennant Puzzle where we start with the largest $2 \times 2$ block residing at the top left. The objective is to bring this piece to the bottom left by sliding the available pieces. The shortest solution of this puzzle consists of 83 moves. We first use breadth-first search to find this optimal solution. For each instance in our dataset, we choose two board positions encountered in this solution such that they are at most 5 moves away from each other. We consider these positions as the starting and ending configuration of the board. We then create the question that asks the minimum number of moves required to reach the ending configuration from the starting configuration. We show an example of the puzzle in Figure 23.
425
+
426
+ ![](images/301f928d14085c2188e1af0dd2992a7420f4d0b6ed270556c5bffbc375243f43.jpg)
427
+ Starting Configuration
428
+
429
+ ![](images/3ff0f83afbf4aa9e1ae885386eaf1287c76169db1e9ceb5f544c87ad9bdad5fd.jpg)
430
+ Ending Configuration
431
+ Figure 23: Question: Consider a sliding block puzzle of grid size $5 \times 4$ units. It has 9 wooden blocks of varying sizes: one $2 \times 2$, four $1 \times 2$, two $2 \times 1$, and two $1 \times 1$. The grid also has two empty $1 \times 1$ spaces. The blocks cannot be removed from the grid, and may only be slid horizontally and vertically within its boundary. A move is defined as selecting a block that is slideable, and moving it by 1 unit either horizontally or vertically, whichever is possible. The image shows the starting and ending configurations of the puzzle grid. The wooden blocks are shown in various shades of brown and the empty spaces are shown in white. What is the minimum number of moves required to reach the ending configuration from the starting configuration? Gold Answer: 3
432
+
433
+ # B Negative Choice Generation for MCQA
434
+
435
+ The negative choices are constrained to Yes and No for Board Tiling, and to the prizes that appear on the wheel for Wheel of Fortune. All other puzzles in ALGOPUZZLEVQA have numerical answers. For these, we use the heuristic of randomly sampling numbers of the same magnitude as the gold answer to create the negative choices. For example, if the gold answer is less than or equal to 6, we choose negatives between 1 and 6; otherwise, if the gold answer is less than or equal to 10, we choose negatives between 1 and 10. The ranges step up to 50 and 100 next.
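+
+ A rough sketch of this bucketed sampling heuristic (the function name and the number of distractors are our own choices):
+
+ ```python
+ import random
+
+ def negative_choices(gold, k=3):
+     """Sample k numerical distractors from the same magnitude bucket as the gold answer."""
+     for upper in (6, 10, 50, 100):
+         if gold <= upper:
+             break
+     pool = [x for x in range(1, upper + 1) if x != gold]
+     return random.sample(pool, k)
+
+ print(negative_choices(22))  # gold answer 22 falls into the 1-50 bucket
+ ```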
436
+
437
+ # C Model Configurations
438
+
439
+ GPT-4V We use the publicly available API to query the gpt-4-vision-preview version of the model.
440
+
441
+ Gemini Pro We use the publicly available API to query the gemini-pro-vision version of the model.
algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:896ae8d1b4c4e3af87da515225e108981d9a9b8d67c3db84c9a6c4356f6bcdcb
3
+ size 729209
algopuzzlevqadiagnosingmultimodalreasoningchallengesoflanguagemodelswithalgorithmicmultimodalpuzzles/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22625fdf5dd6c9bd70f7f878433fb9812914773dc027f72e5f92afa24357ee0f
3
+ size 531689
aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/953b6389-7a3c-46d4-8089-0acfdc5cb164_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec7203b15cf2522caa5fb82cdb44e684870ebb89db95b60cedb601fada93c855
3
+ size 108009
aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/953b6389-7a3c-46d4-8089-0acfdc5cb164_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1f4b1d09d30678cb3f63248d7fa91cf21c8950c7c8c94b5b37a29ed6fc8d1bb
3
+ size 130198
aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/953b6389-7a3c-46d4-8089-0acfdc5cb164_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aab0ea6dc5d428558d9b3ef23b988eafb53802ca5b4a70715e30700ae273f4c6
3
+ size 2126292
aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/full.md ADDED
@@ -0,0 +1,450 @@
 
 
 
 
1
+ # Aligning Sentence Simplification with ESL Learner's Proficiency for Language Acquisition
2
+
3
+ Guanlin Li $^{1,2}$ , Yuki Arase $^{2}$ , Noel Crespi $^{1}$
4
+
5
+ <sup>1</sup>Samovar, Telecom SudParis, Institut Polytechnique de Paris, France
6
+
7
+ $^{2}$ School of Computing, Institute of Science Tokyo, Japan
8
+
9
+ {guanlin_li, noel.crespi}@telecom-sudparis.eu, arase@c.titech.ac.jp
10
+
11
+ # Abstract
12
+
13
+ Text simplification is crucial for improving accessibility and comprehension for English as a Second Language (ESL) learners. This study goes a step further and aims to facilitate ESL learners' language acquisition by simplification. Specifically, we propose simplifying complex sentences to appropriate levels for learners while also increasing vocabulary coverage of the target level in the simplifications. We achieve this without a parallel corpus by conducting reinforcement learning on a large language model. Our method employs token-level and sentence-level rewards, and iteratively trains the model on its self-generated outputs to guide the model to search for simplification hypotheses that satisfy the target attributes. Experiment results on CEFR-SP and TurkCorpus datasets show that the proposed method can effectively increase the frequency and diversity of vocabulary of the target level by more than $20\%$ compared to baseline models, while maintaining high simplification quality.
14
+
15
+ # 1 Introduction
16
+
17
+ Controlled text simplification considers audience-targeted attributes when generating simplified texts, so that the generated texts do not only meet the criteria of simplicity, but also preserve desired attributes for the targeted audiences. Recent studies on controlled text simplification aimed to help reading comprehension for language learners and employed school grade levels annotated in the training corpus as the simplification target (Scarton and Specia, 2018; Sheang and Saggion, 2021; Agrawal and Carpuat, 2023) or text features (number of words, character-level Levenshtein similarity etc.) between source and target sentences (Nishihara et al., 2019; Martin et al., 2020).
18
+
19
+ Different from these studies, we aim to aid language learning and education for English as a Second
20
+
21
+ Language (ESL) learners by simplifying sentences while preserving desirable attributes for language acquisition. We use the Common European Framework of Reference for Languages (CEFR), the world standard definition of language proficiency. Our method is motivated by two classic L2 learning theories: the input hypothesis (Krashen, 1981) and the frequency effect (Ellis, 2002). The input hypothesis states that textual input that is either too simple or too complex for learner comprehension will not be useful for acquisition. If a learner's current competence is $i$, then comprehensible input should contain both $i$ and $(i + 1)$ content (Mitchell et al., 2019). Frequency theory holds that the frequency of words and phrases in the input is a key determinant of acquisition, and words that appear more frequently in usage tend to be easier to acquire (Ellis and Ferreira-Junior, 2009). The key challenge here is the lack of a parallel corpus for training that provides complex-simple sentence pairs labelled with their levels. Parallel sentences of this kind are naturally scarce, and worse, annotation of difficulty levels, in particular CEFR, is non-trivial and requires language education experts (Arase et al., 2022).
22
+
23
+ To achieve sentence simplification for aiding language learning without a parallel corpus, we propose reinforcement learning on a pre-trained large language model (LLM). Based on the aforementioned L2 learning theories, the proposed method simplifies the complex sentences to the one corresponding to the learner's proficiency level or one level higher ( $i$ and $i + 1$ levels) and increases the coverage (frequency and diversity) of the corresponding level's vocabulary in the generated simplifications. Specifically, we reformulate the controlled simplification task as a lookahead search problem: in the decoding step $t$ , the model searches for the token that satisfies the target vocabulary constraint while also ensuring that future tokens
24
+
25
+ increase the target vocabulary coverage as much as possible, and the final hypothesis falls into the desired CEFR level. We combine a simple word-match-based heuristic with the supervised sentence-level signal to guide decoding and train the model iteratively using gradient policy optimization to memorize the search strategy that maximizes the overall reward. Remarkably, we eliminate the need for a parallel corpus by utilizing LLMs' language generation capacity for simplification via reinforcement learning. Experimental results show that the method significantly increases the coverage and diversity of the target vocabulary in the outputs by up to $20\%$ compared to the baselines, while maintaining high simplification quality.
26
+
27
+ Our primary contributions are twofold. First, we propose the sentence simplification method that aligns generated simplifications with ESL learners' proficiency level on word, phrase and sentence levels and preserves attributes effective for facilitating language learning. Second, our method is easy to deploy and does not require a parallel corpus that is often expensive to create.
28
+
29
+ # 2 Related Work
30
+
31
+ We briefly summarize two lines of simplification methods, controlled simplification and reinforcement learning based simplification.
32
+
33
+ Controlled Simplification attaches tokens or prompts to the input to control the simplification-related attributes during generation (Yang et al., 2023; Agrawal and Carpuat, 2023; Sheang and Saggion, 2021; Martin et al., 2020, 2022; Scarton and Specia, 2018; Chi et al., 2023). While these methods learn to control levels of simplified sentences using a parallel corpus with annotated difficulties, our method controls attributes useful for language learning without a parallel corpus. As opposed to the training-time controlling, Kew and Ebling (2022) adopted FUDGE (Yang and Klein, 2021), which adjusts the logits of the text generation model during decoding using a classifier, to directly control the attribute of the simplification in the decoding time.
34
+
35
+ Reinforcement Learning based Simplification has explored controllability by defining rewards based on simplicity-related criteria (Zhang and Lapata, 2017; Guo et al., 2018; Nakamachi et al., 2020; Laban et al., 2021). The rewards for the objectives are constructed using supervised or unsupervised
36
+
37
+ evaluation metrics for simplicity, adequacy and fluency. In contrast, we aim to control attributes useful for language learning and education. Furthermore, while RL tends to suffer from unstable training and sensitivity to the choice of hyperparameters, our method achieves training stability by adopting entropy regularization in the model optimization process and introducing a dynamic reward that adjusts based on the data distribution.
38
+
39
+ # 3 Problem Definition
40
+
41
+ We aim to facilitate language learning by simplification targeted at ESL learners. In this study, we use CEFR levels as a representative measure for the learners' proficiency and model the target level based on the vocabulary $^2$ (words, phrases, idioms) and sentence CEFR levels $^3$ .
42
+
43
+ Our problem is thus defined as follows. We assume that learners know their own CEFR level $i$ . Given a sentence above the learner's level $i$ , we generate its simplified version that (a) contains as much vocabulary of the level $i$ and $i + 1$ as possible, and (b) corresponds to the target (learner's) level $i$ at the sentence level.
44
+
45
+ # 3.1 Constraint Formalization
46
+
47
+ Generating simplified texts subject to vocabulary constraints can be approached as a lexical-constrained text generation task (Zetsu et al., 2022). Traditionally, lexical constraints in text generation involve a short list of required words, which Lu et al. (2021) expressed as a Conjunctive Normal Form (CNF), such as $\underbrace{(D_1 \vee D_2 \vee \cdots)}_{C_1} \wedge \cdots \wedge$ $\underbrace{(D_{m-1} \vee D_m)}_{C_m}$ in which $D_m$ stands for a single constraint, and all clauses must be satisfied, imposing hard constraints on the generation process.
48
+
49
+ In our setting, however, this formulation is no longer applicable because the vocabulary constraint is as large as the size of the vocabulary of a specific level. In addition, we aim to satisfy as many clauses as possible. Therefore, we formalize constraints as Disjunctive Normal Form (DNF), indicating words and phrases suitable for the target proficiency level: $D = \underbrace{(D_1)}_{C_1} \vee \underbrace{(D_2 \wedge D_3 \wedge \cdots)}_{C_2} \vee \cdots \vee \underbrace{(D_m)}_{C_m}$
50
+
51
+ where the form stands for the word list of the target
52
+
53
+ language level, a single $D_{m}$ represents a word, and the conjunctive clauses represent several words, namely phrases. Notably, this form of constraints allows for the control of discontinuous phrases, which is difficult in previous methods.
54
+
55
+ # 3.2 Optimization Function
56
+
57
+ Based on the DNF constraints, our task imposes soft constraints that aim to include as many clauses as possible. Given the simplification hypotheses $\{\mathrm{seq}_1,\mathrm{seq}_2,\dots ,\mathrm{seq}_n\}$ , the goal is to maximize:
58
+
59
+ $$
60
+ \sum_{j=1}^{m} \sum_{k=1}^{n} \operatorname{count}\left(C_{j}, \operatorname{seq}_{k}\right), \tag{1}
61
+ $$
62
+
63
+ where $\mathrm{count}(C_j, \mathrm{seq}_k)$ indicates the number of clauses $C_j$ satisfied by $\mathrm{seq}_k$ . Consequently, the target during the generation process is to search for the next token that:
64
+
65
+ - simplifies the original text;
66
+ - is contained in $\exists C_i \in D$ ;
67
+ - leads to future tokens that satisfy $C_i$ ;
68
+ - leads to complete phrases or phrases with slots (discontinuity) that satisfy $C_i$ .
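+
+ The clause-satisfaction count in Eq. 1 can be illustrated with a small sketch (our own, simplified to a binary satisfied / not-satisfied check per clause on lemmatized tokens):
+
+ ```python
+ def count_satisfied(clauses, tokens):
+     """Number of DNF clauses satisfied by a generated sequence. Each clause is a set of
+     lemmas: a singleton for a word, several lemmas for a (possibly discontinuous) phrase."""
+     tokens = set(tokens)
+     return sum(1 for clause in clauses if clause <= tokens)
+
+ level_b = [{"achieve"}, {"take", "part"}, {"improve"}]   # toy vocabulary clauses
+ print(count_satisfied(level_b, ["she", "take", "part", "in", "the", "race"]))  # 1
+ ```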
69
+
70
+ # 4 Proposed Method
71
+
72
+ To search for a hypothesis that better satisfies predetermined constraints, some previous methods use rollout in decoding that generates partial future sequences (Chaffin et al., 2022; Lu et al., 2022). These methods become infeasible for large models due to the inefficiency of sampling in decoding time and handling the large vocabulary constraints in our task. To effectively and efficiently search for the tokens that satisfy our constraints, we instead consider sampling in the training time and formulate the lookahead search problem using RL search (Fickinger et al., 2021) (see Fig. 1).
73
+
74
+ # 4.1 RL Search
75
+
76
+ Consider the text generation process as a Markov Decision Process (MDP), at each timestep $t$ of the generation, the language model observes a state $s_t$ which is the generated partial sequence $\mathrm{seq}_{t-1}$ , and takes an action $a_t$ to choose the token from its vocabulary. When the EOS token is reached, a reward $R$ for the generated sequence is calculated and used to update the model. In this setting, the language model is the policy function that searches for a token $v_i \in \mathcal{V}$ where $\mathcal{V}$ is the vocabulary, and we can use any policy gradient algorithm to guide the language model to search for the generations that
77
+
78
+ # Algorithm 1 Training Procedure
79
+
80
+ # Input: Complex sentences;
81
+
82
+ 1: Generate simplified texts from the complex sentences using current policy (rollout);
83
+ 2: Evaluate the current policy and produce rewards to guide the search;
84
+ 3: Optimize the policy model using the rewards
85
+ 4: Iteratively perform steps 1-3 till converge.
86
+
87
+ maximize the constraint satisfaction. Algorithm 1 indicates our training procedure.
88
+
89
+ # 4.2 Policy Model
90
+
91
+ The policy model generates a simplified sentence seq given a complex counterpart as a prompt pmt. The policy model is initialized from an instruction tuned language model, which unsupervisedly provides robust text simplifications (Kew et al., 2023).
92
+
93
+ By design, the rewards for the policy model across different proficiency levels are varied. For instance, given the same model response, a positive reward for C level could correspond to a negative reward for A level. Therefore, using the original language model as the backbone, we train separate copies of the policy model for A, B and C levels by adding and updating distinct LoRA parameters to the backbone parameters (Hu et al., 2022), while keeping the backbone frozen.
94
+
95
+ # 4.3 Reward Models
96
+
97
+ Inspired by the L2 learning theories, we design two types of rewards at lexical and sentence levels.
98
+
99
+ # 4.3.1 Lexical Constraint Reward
100
+
101
+ We use a simple heuristic to guide the search for generations that satisfy the lexical constraints:
102
+
103
+ $$
104
+ H(\operatorname{seq}) = \sum_{C_{j} \in D} r\left(\operatorname{count}\left(C_{j}, \operatorname{seq}\right)\right), \tag{2}
105
+ $$
106
+
107
+ where $C$ is a clause from $D$ , $r$ denotes the reward according to the number of satisfactions of $C$ in seq, and $H$ denotes the reward score for the generated sequence seq in the current decoding step. To calculate the match counts, we remove basic stop words from the sentence after lemmatization.
108
+
109
+ As a simple baseline, we define $r$ as a constant value of 1 for a word and 1.5 for a phrase, to encourage the model to generate more phrases and idioms. However, we found that this simple baseline is easily hacked by the model after a few steps of training,
110
+
111
+ ![](images/5921741f3d02d2f6f50985f507322785e5a7b63e79215ec1f05f5adcb4a58cf2.jpg)
112
+ Figure 1: (better viewed in color) The overall framework of the proposed method: the simplification model is initialized from a pretrained large language model which is also used as a frozen $(\ast)$ reference model to provide entropy regularization (part 0.); top-k sampling is adopted in the decoding process to sample varied simplifications for the complex sentence (part 1.a.); the generated simplifications are evaluated based on the language proficiency level (vocabulary level and sentence level) of the target audience, which is used as rewards to update the simplification model (part 1.b.) to adopt better decoding strategy.
113
+
114
+ i.e., the model only generates a limited set of frequent words that were learnt to produce rewards. To encourage the model to explore more diverse words and improve the overall coverage of target-level words in the generations, the reward should intuitively encourage maximizing the entropy for the clauses in $D$ , so that all the clauses are evenly distributed. Accordingly, we adjust the reward $r$ for the count of $C_j$ as a dynamic reward:
115
+
116
+ $$
117
+ r = \begin{cases} 1 & \text{if } 0 \leq p_{j} < \frac{1}{m}, \\ e^{-\alpha p_{j}} & \text{if } \frac{1}{m} \leq p_{j} \leq 1, \end{cases} \tag{3}
118
+ $$
119
+
120
+ where $p_j$ is:
121
+
122
+ $$
123
+ p_{j} = \frac{\sum_{k=1}^{n} \operatorname{count}\left(C_{j}, \operatorname{seq}_{k}\right)}{\sum_{k=1}^{n} \sum_{j=1}^{m} \operatorname{count}\left(C_{j}, \operatorname{seq}_{k}\right)}, \tag{4}
124
+ $$
125
+
126
+ to discourage the model from exploiting the same clause. Here, $m$ denotes the total number of clauses in $D$ and $\alpha$ is a constant to adjust the penalty degree for too frequent clauses. Eq. 4 is calculated after each epoch and the reward is adjusted accordingly. If matched clauses are above the target level, we give a constant negative score of $-1$ .
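+
+ A sketch of how the dynamically adjusted reward of Eqs. 3-4 might be computed after each epoch (the value of $\alpha$ and the toy match counts are invented for illustration):
+
+ ```python
+ import math
+
+ def clause_reward(p_j, m, alpha=5.0):
+     """Eq. 3: clauses matched no more often than the uniform share 1/m keep reward 1;
+     over-used clauses are decayed exponentially."""
+     return 1.0 if p_j < 1.0 / m else math.exp(-alpha * p_j)
+
+ def clause_frequencies(match_counts):
+     """Eq. 4: each clause's share of all matches observed over the sampled generations."""
+     total = sum(match_counts.values())
+     return {c: n / total for c, n in match_counts.items()} if total else {}
+
+ freqs = clause_frequencies({"achieve": 40, "take part": 5, "improve": 5})
+ print({c: round(clause_reward(p, m=3), 3) for c, p in freqs.items()})
+ ```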
127
+
128
+ # 4.3.2 Sentence Level Reward
129
+
130
+ To go beyond words and guide the simplification model's search for a sentence of the target level, we incorporate a sentence-level reward model by simulating human experts' judgment for the sentence's CEFR level. We use pairwise ranking loss to train the reward model, since the class distribution for the CEFR-SP data is imbalanced (Arase et al., 2022). The ranking loss has been shown to be able to encourage the model to only pay attention to the focused class (Henning et al., 2023), thus may mitigate the class imbalance problem.
131
+
132
+ Consequently, we construct sentence pairs prioritizing the level we focus on generating: for a collection of sentences $\mathcal{S} = \{s_1,s_2,\dots ,s_n\}$ , each sentence $s_i$ is evaluated by human experts and annotated with a language level $l$ . Given the level we want to generate, we select the sentences with the target level $S_{\mathrm{tgt}} = \{s_i\in \mathcal{S}\mid l_i = \mathrm{level}_{\mathrm{tgt}}\}$ and randomly sample sentences from other levels to construct a negative set $S_{\mathrm{non - tgt}} = \{s_j\in \mathcal{S}\mid l_j\neq \mathrm{level}_{\mathrm{tgt}}\}$ . Then, we construct sentence pairs $\mathcal{P} = \{(s_i,s_j)\mid s_i\in S_{\mathrm{tgt}},s_j\in S_{\mathrm{non - tgt}}\}$ by randomly selecting from $S_{\mathrm{tgt}}$ and $S_{\mathrm{non - tgt}}$ .
133
+
134
+ Notably, we do not require the pair to be parallel;
135
+
136
+ they just need to be at different levels. By this design, we disentangle the adequacy requirement for the simplification from the target-level search process. The former is handled by the underlying LLM, and the latter is dealt with by the reward model by level judgment.
137
+
138
+ With the constructed sentence pairs, we train a sentence-level reward model $r_{\theta}$ . The training objective is to minimize loss:
139
+
140
+ $$
141
+ \mathcal{L}(\theta) = -\sum_{(s_{i}, s_{j}) \in \mathcal{P}} \log \sigma\left(r_{\theta}(s_{i}) - r_{\theta}(s_{j})\right) \tag{5}
142
+ $$
143
+
144
+ where $\sigma$ is the sigmoid function. After training the reward model, for a generated sentence seq, we take $r_l = \sigma(r_\theta(s))$ as the reward, and use a linear combination of the lexical reward and the sentence-level reward as the overall reward:
145
+
146
+ $$
147
+ R = \lambda r + \gamma r _ {l} \tag {6}
148
+ $$
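+
+ A minimal PyTorch-style sketch of the pairwise ranking objective in Eq. 5 (batch-averaged; the scalar scores are toy values):
+
+ ```python
+ import torch
+
+ def ranking_loss(scores_target, scores_non_target):
+     """Push reward-model scores of target-level sentences above non-target ones."""
+     return -torch.nn.functional.logsigmoid(scores_target - scores_non_target).mean()
+
+ loss = ranking_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.5, 0.9]))
+ print(loss.item())
+ ```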
149
+
150
+ # 4.4 Stabilized RL Training
151
+
152
+ The original instruct-tuned model is used as a frozen reference model, providing an entropy regularization for the updated policy model to ensure training stability during the search process. Specifically, the simplification seq' produced by the frozen backbone model $f^{\prime}$ is added as an entropy regularization to the overall reward:
153
+
154
+ $$
155
+ R^{\prime} = R - \log\left(\frac{p_{f}(\operatorname{seq} \mid \mathrm{pmt})}{p_{f^{\prime}}(\operatorname{seq}^{\prime} \mid \mathrm{pmt})}\right). \tag{7}
156
+ $$
157
+
158
+ By doing so, we may keep the LLM's strong paraphrasing ability while letting it acquire controllability in CEFR levels.
159
+
160
+ The policy model $f$ , namely the simplification model is then updated to search for the generations that maximize the reward. In this study, we adopt Proximal Policy Optimization (Schulman et al., 2017) to update the policy model, which achieves stable training and faster convergence.
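+
+ Reading Eq. 7 as the usual log-probability ratio penalty between the policy and the frozen reference (a simplified view in which both models score the sampled simplification), the adjustment reduces to a one-liner; the numbers below are toy values:
+
+ ```python
+ def regularized_reward(reward, logp_policy, logp_reference):
+     """Penalise sequences that drift far from the frozen reference model's distribution."""
+     return reward - (logp_policy - logp_reference)
+
+ print(regularized_reward(reward=2.0, logp_policy=-12.3, logp_reference=-14.1))  # ~0.2
+ ```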
161
+
162
+ # 5 Experiment Settings
163
+
164
+ We aim to evaluate the effectiveness of the proposed model in generating high-quality simplifications that align with the target vocabulary and sentence-based CEFR level. This section provides details of the experiment settings.
165
+
166
+ # 5.1 Resource and Implementation
167
+
168
+ Sentence CEFR Level To train the sentence-level reward model, we used CEFR-SP (Arase et al., 2022), which provides labels of six CEFR levels
169
+
170
+ <table><tr><td></td><td>A1</td><td>A2</td><td>B1</td><td>B2</td><td>C1</td><td>C2</td></tr><tr><td>Train</td><td>248</td><td>1284</td><td>2479</td><td>2226</td><td>889</td><td>52</td></tr><tr><td>Val</td><td>79</td><td>276</td><td>485</td><td>336</td><td>149</td><td>40</td></tr><tr><td>Test</td><td>71</td><td>289</td><td>540</td><td>369</td><td>150</td><td>39</td></tr></table>
171
+
172
+ Table 1: Statistics on CEFR-SP w/o Newsela
173
+
174
+ for a total of $17k$ sentences annotated by experts. We only used the publicly available subset of the dataset (excluding data based on Newsela (Xu et al., 2015)), which resulted in $10k$ sentences with labels. The statistics of the dataset are given in Table 1. We fine-tuned GPT-2 (Radford et al.) using the annotated CEFR levels.
175
+
176
+ Vocabulary List For the lexical constraint reward model, we need vocabulary lists per CEFR level. We downloaded the English Vocabulary Profile (EVP) data and used it as a dictionary of words and phrases annotated with their corresponding CEFR levels. Since our goal is to generate the simplifications in $i$ and $i + 1$ levels, we always aggregate the vocabulary lists in two levels. For clarity, we consider A1+A2, B1+B2, and C1+C2 levels. In total, we got 1076 words for A level, 3823 words for B level, 3612 words for C level.
177
+
178
+ Complex Sentence Collection We trained the policy model to iteratively learn to search for a hypothesis that maximizes rewards based on its own generations. The only requirement for our training corpus is a supply of complex sentences that warrant simplification, because sufficiently simple sentences without the need for simplification may disturb the learning. Cegin et al. (2023) showed that large language models are highly capable of paraphrasing. Following this study, we used GPT-4 $^6$ to synthesize complex sentences from the CEFR-SP training set to create our training corpus. We manually prepared prompts to ensure that the outputs are always at least as complex as the highest C2 level. More details are in Appendix B.
179
+
180
+ We trained separate models for A, B and C levels since different levels require different rewards (see Section 4.2). For computational efficiency, we adopted a relatively small Phi-3-mini-3b model (Abdin et al., 2024). More implementation details
181
+
182
+ <table><tr><td>CEFR-SP</td><td>A-Frequency</td><td>A-Diversity</td><td>B-Frequency</td><td>B-Diversity</td><td>C-Frequency</td><td>C-Diversity</td></tr><tr><td>Reference</td><td>0.292</td><td>0.527</td><td>0.283</td><td>0.465</td><td>0.080</td><td>0.102</td></tr><tr><td>phi3-3b-vanilla</td><td>0.252</td><td>0.665</td><td>0.215</td><td>0.435</td><td>0.041</td><td>0.172</td></tr><tr><td>T5+grade-A</td><td>0.194</td><td>0.438</td><td>0.269</td><td>0.271</td><td>0.072</td><td>0.114</td></tr><tr><td>FUDGE-A</td><td>0.257</td><td>0.215</td><td>0.207</td><td>0.069</td><td>0.043</td><td>0.018</td></tr><tr><td>phi3-A</td><td>0.299</td><td>0.684</td><td>0.196</td><td>0.403</td><td>0.038</td><td>0.141</td></tr><tr><td>T5+grade-B</td><td>0.204</td><td>0.447</td><td>0.275</td><td>0.266</td><td>0.069</td><td>0.110</td></tr><tr><td>FUDGE-B</td><td>0.223</td><td>0.226</td><td>0.231</td><td>0.084</td><td>0.049</td><td>0.027</td></tr><tr><td>phi3-B</td><td>0.151</td><td>0.677</td><td>0.262</td><td>0.538</td><td>0.064</td><td>0.251</td></tr><tr><td>T5+grade-C</td><td>0.203</td><td>0.441</td><td>0.276</td><td>0.271</td><td>0.074</td><td>0.114</td></tr><tr><td>FUDGE-C</td><td>0.239</td><td>0.217</td><td>0.220</td><td>0.077</td><td>0.052</td><td>0.025</td></tr><tr><td>phi3-C</td><td>0.171</td><td>0.658</td><td>0.263</td><td>0.275</td><td>0.189</td><td>0.365</td></tr></table>
183
+
184
+ Table 2: Results on target attribute controllability on CEFR-SP-Test. For “Reference”, the frequency and diversity metrics were calculated using a subset of each grade level to show distributions in sentences of specific levels.
185
+
186
+ can be found in Appendix C.
187
+
188
+ # 5.2 Evaluation Datasets
189
+
190
+ To evaluate the simplification outputs, we need parallel corpora of complex sentences and reference simplifications. The following describes the resources we used for the evaluation.
191
+
192
+ CEFR-SP-Test As the formal evaluation dataset, we used CEFR-SP. We expanded its test set to be parallel because CEFR-SP is a non-parallel corpus. Specifically, we generated complex sentences for each sentence in the CEFR-SP test set using the same method described in Section 5.1. These complex sentences were input to models, and outputs were evaluated by comparing them to the original CEFR-SP sentences as references.
193
+
194
+ TurkCorpus To assess the applicability of the proposed method to a general simplification task, we also evaluated models on another widely-used dataset, TurkCorpus (Xu et al., 2016). We used the test set of the corpus, which includes 359 complex sentences, each with 8 human-written simplified sentences as references. Note that TurkCorpus does not provide any level annotations.
195
+
196
+ # 5.3 Evaluation Metrics
197
+
198
+ We evaluated simplification outputs from two perspectives: simplification quality and target audience attributes by both automatic and human assessments. Simplification quality was assessed across three dimensions: Simplicity; Fluency; and Adequacy. As automatic metrics for simplicity, we employed LENS (Maddela et al., 2023) and SALSA (Heineman et al., 2023), which are two recently proposed model-based evaluation methods. For fluency and adequacy, we employed an instruction-tuned language model as an off-the-shelf evaluation model, which was shown to be
199
+
200
+ effective in automatic translation quality evaluations (Kocmi and Federmann, 2023). Target audience attributes were measured in terms of target vocabulary coverage and sentence CEFR level, in which vocabulary coverage includes both frequency and diversity of target vocabulary. For the evaluation of sentence CEFR-level, we used human evaluation. For more details on evaluation metrics, please refer to Appendix A.
201
+
202
+ # 5.4Baselines
203
+
204
+ Overall, we choose two lines of work as the baselines for comparison.
205
+
206
+ Controlled Simplification There are limited variants in controlled simplification methods which mostly employ control tokens with supervised learning. Based on previous literature, we implemented two baselines for controlling the target level of the simplified texts: a supervised baseline of T5+grade (Scarton and Specia, 2018) that attaches CEFR levels as control tokens and an unsupervised baseline of FUDGE that uses a discriminator at decoding time (Yang and Klein, 2021).
207
+
208
+ Non-controlled Simplification The Turk corpus was used to evaluate the effectiveness of the proposed method in general simplification. As opposed to controlled simplification, this task does not consider controlling attributes, such as grade levels, during the simplification. For this line of models, we choose the following methods: DRESS (Zhang and Lapata, 2017), DMASS (Zhao et al., 2018), EditNTS (Dong et al., 2019), ACCESS (Martin et al., 2020), IterativEdit (Kumar et al., 2020). We used outputs of these models shared in the EASSE package (Alva-Manchego et al., 2019). In addition, we also compare the vanilla phi3-3b instruction-tuned model as a baseline, under zero-shot setting without fine-tuning on simplification.
209
+
210
+ <table><tr><td>TURK</td><td>A-Frequency</td><td>A-Diversity</td><td>B-Frequency</td><td>B-Diversity</td><td>C-Frequency</td><td>C-Diversity</td></tr><tr><td>Reference</td><td>0.176</td><td>0.229</td><td>0.227</td><td>0.132</td><td>0.056</td><td>0.046</td></tr><tr><td>phi3-3b-vanilla</td><td>0.166</td><td>0.180</td><td>0.177</td><td>0.083</td><td>0.034</td><td>0.023</td></tr><tr><td>T5+grade-A</td><td>0.187</td><td>0.180</td><td>0.217</td><td>0.088</td><td>0.051</td><td>0.028</td></tr><tr><td>FUDGE-A</td><td>0.175</td><td>0.177</td><td>0.175</td><td>0.069</td><td>0.034</td><td>0.018</td></tr><tr><td>phi3-A</td><td>0.216</td><td>0.208</td><td>0.153</td><td>0.063</td><td>0.031</td><td>0.018</td></tr><tr><td>T5+grade-B</td><td>0.201</td><td>0.190</td><td>0.217</td><td>0.085</td><td>0.052</td><td>0.028</td></tr><tr><td>FUDGE-B</td><td>0.163</td><td>0.177</td><td>0.178</td><td>0.077</td><td>0.039</td><td>0.022</td></tr><tr><td>phi3-B</td><td>0.126</td><td>0.201</td><td>0.330</td><td>0.112</td><td>0.066</td><td>0.035</td></tr><tr><td>T5+grade-C</td><td>0.187</td><td>0.194</td><td>0.225</td><td>0.090</td><td>0.050</td><td>0.026</td></tr><tr><td>FUDGE-C</td><td>0.171</td><td>0.174</td><td>0.181</td><td>0.076</td><td>0.037</td><td>0.019</td></tr><tr><td>phi3-C</td><td>0.151</td><td>0.178</td><td>0.193</td><td>0.092</td><td>0.091</td><td>0.041</td></tr></table>
211
+
212
+ Table 3: Results on target attribute controllability on TurkCorpus
213
+
214
+ <table><tr><td>Complex Sentence The considerable distance, compounded by Jamie&#x27;s current condition of pregnancy, which inexorably engenders a state of increased fatigue, renders the prospect of ambulation to said location prohibitively challenging for her.</td></tr><tr><td>Ref. (level B) It is too far for Jamie to walk to, especially because she is pregnant and easily exhausted.</td></tr><tr><td>Simplifications
215
+ Level A: Jamie is too tired to walk far because she is pregnant.
216
+ Level B: Jamie&#x27;s pregnancy makes it very hard for her to walk to the location due to the long distance.
217
+ Level C: Jamie&#x27;s pregnancy leads to fatigue, making it hard for her to walk to the distant place.</td></tr></table>
218
+
219
+ Table 4: A randomly selected example from the simplification result of the proposed method. The target vocabulary of the corresponding level is marked in italic font.
220
+
221
+ # 6 Experiment Results
222
+
223
+ This section analyses experiment results of automatic and human evaluations, and ablation study.
224
+
225
+ # 6.1 Automatic Evaluation Results
226
+
227
+ Target Attributes Tables 2 and 3 show the evaluation results for the target vocabulary coverage. These results demonstrate that compared to the baseline models, the proposed model significantly increases the frequency of target vocabulary in simplified sentences while also improving vocabulary diversity. Notably, the proposed method successfully increases the frequency and diversity of A and C-level vocabulary, which should be harder than B-level due to the scarcity of level A and C samples (Arase et al., 2022).
228
+
229
+ Simplification Quality Tables 5 and 6 show the evaluation results for the simplification quality. Overall, these results indicate that our models can produce high-quality simplifications, greatly outperforming the baseline models. Recall that our model has no reward that explicitly encourages it to satisfy the adequacy requirement. We attribute these improvements to the entropy regularization imposed by the reference model, which preserves the strong paraphrasing capability of the LLM. Table 4 shows a randomly
230
+
231
+ picked example of simplification by our method; Appendix E provides more.
232
+
233
+ # 6.2 Human Evaluation Results
234
+
235
+ We perform a human evaluation to assess the simplification quality from human perspectives. We recruited three graduate-level students majoring in linguistics to perform the evaluation. The evaluators were first trained with the background knowledge and then given a guideline to evaluate the following aspects of the samples: fluency, simplicity, adequacy, and CEFR sentence level.
236
+
237
+ We asked annotators to make binary judgements for fluency, simplicity, and adequacy. For sentence level, because CEFR-level judgements require expertise in language education, we simplified the task to collect reliable decisions. We asked the evaluators to judge if a simplified sentence matches the desired sentence level (denoted as "Level"). We showed a reference with its CEFR level and requested the evaluators to judge if the model output matches the reference's simplicity. In addition, we asked them if a simplification output is preferable in terms of its CEFR level compared to the one generated by a model targeting a different level (denoted as "Prefer"). For example, an evaluator judges if an output of the A-level model is preferable to that of the C-level compared to the
238
+
239
+ <table><tr><td>CEFR-SP</td><td>LENS</td><td>SALSA</td><td>Fluency</td><td>Adequacy</td></tr><tr><td>Reference</td><td>43.57</td><td>59.54</td><td>0.829</td><td>0.624</td></tr><tr><td>phi3-3b-vanilla</td><td>63.37</td><td>74.18</td><td>0.897</td><td>0.538</td></tr><tr><td>T5+grade-A</td><td>41.37</td><td>58.98</td><td>0.547</td><td>0.291</td></tr><tr><td>FUDGE-A</td><td>60.84</td><td>70.16</td><td>0.780</td><td>0.447</td></tr><tr><td>phi3-A</td><td>67.29</td><td>76.23</td><td>0.827</td><td>0.604</td></tr><tr><td>T5+grade-B</td><td>40.15</td><td>58.43</td><td>0.535</td><td>0.290</td></tr><tr><td>FUDGE-B</td><td>53.33</td><td>68.69</td><td>0.823</td><td>0.540</td></tr><tr><td>phi3-B</td><td>64.61</td><td>72.21</td><td>0.871</td><td>0.768</td></tr><tr><td>T5+grade-C</td><td>41.67</td><td>59.12</td><td>0.538</td><td>0.277</td></tr><tr><td>FUDGE-C</td><td>60.50</td><td>70.48</td><td>0.830</td><td>0.473</td></tr><tr><td>phi3-C</td><td>57.06</td><td>70.93</td><td>0.913</td><td>0.615</td></tr></table>
240
+
241
+ Table 5: Simplification quality on CEFR-SP-Test per levels; T5-grade, FUDGE and proposed method were evaluated using subsets of specific levels (A, B and C level references, respectively).
242
+
243
+ <table><tr><td>TURK</td><td>LENS</td><td>SALSA</td><td>Fluency</td><td>Adequacy</td></tr><tr><td>Reference</td><td>35.20</td><td>64.96</td><td>0.732</td><td>0.901</td></tr><tr><td>ACCESS</td><td>49.90</td><td>62.68</td><td>0.576</td><td>0.780</td></tr><tr><td>DMASS</td><td>46.52</td><td>58.97</td><td>0.515</td><td>0.665</td></tr><tr><td>DRESS</td><td>59.76</td><td>62.63</td><td>0.807</td><td>0.615</td></tr><tr><td>DRESS-LS</td><td>60.56</td><td>62.92</td><td>0.838</td><td>0.657</td></tr><tr><td>EditNTS</td><td>57.71</td><td>64.86</td><td>0.752</td><td>0.710</td></tr><tr><td>IterativEdit</td><td>37.35</td><td>49.74</td><td>0.409</td><td>0.607</td></tr><tr><td>phi3-3b-vanilla</td><td>65.08</td><td>71.93</td><td>0.830</td><td>0.807</td></tr><tr><td>phi3-A</td><td>64.92</td><td>73.68</td><td>0.720</td><td>0.708</td></tr><tr><td>phi3-B</td><td>70.25</td><td>69.05</td><td>0.855</td><td>0.952</td></tr><tr><td>phi3-C</td><td>62.24</td><td>70.43</td><td>0.869</td><td>0.872</td></tr></table>
244
+
245
+ A-level reference. $^{7}$ For each CEFR level, 30 simplifications of the CEFR-SP-Test were randomly sampled and annotated "Level" and "prefer" judgements. We report the ratios of positive judgements as evaluation scores. The details of the annotation guideline and interface are presented in Appendix D.
246
+
247
+ Table 7 shows the results; the simplicity score is generally high, close to 1, across models. This is expected as the source sentences were generated to be highly complex. The adequacy measurement results are consistent with automatic evaluation; identifying our proposed models as the most adequate. Furthermore, the proposed method achieves the best controllability on sentence levels compared to the baselines as indicated by significantly higher "Level" and "Prefer" scores.
248
+
249
+ # 6.3 Ablation Study
250
+
251
+ In this section, we show how each part of the proposed rewards contributes to the final performance. We compare the following models: vanilla phi3 model, reward using only target vocabulary counts,
252
+
253
+ Table 6: Simplification quality on TurkCorpus; all models evaluated on the entire sentences as TurkCorpus does not annotate levels.
254
+
255
+ <table><tr><td>Model</td><td>Simplicity</td><td>Adequacy</td><td>Fluency</td><td>Prefer</td><td>Level</td></tr><tr><td>Reference</td><td>1.00</td><td>0.89</td><td>0.99</td><td>0.87</td><td>-</td></tr><tr><td>T5+grade-A</td><td>0.83</td><td>0.16</td><td>0.47</td><td>0.40</td><td>0.10</td></tr><tr><td>T5+grade-B</td><td>0.90</td><td>0.13</td><td>0.50</td><td>0.43</td><td>0.17</td></tr><tr><td>T5+grade-C</td><td>0.80</td><td>0.16</td><td>0.60</td><td>0.40</td><td>0.17</td></tr><tr><td>FUDGE-A</td><td>1.00</td><td>0.50</td><td>0.80</td><td>0.50</td><td>0.43</td></tr><tr><td>FUDGE-B</td><td>0.96</td><td>0.43</td><td>0.83</td><td>0.57</td><td>0.47</td></tr><tr><td>FUDGE-C</td><td>1.00</td><td>0.47</td><td>0.83</td><td>0.57</td><td>0.33</td></tr><tr><td>Ours phi3-A</td><td>1.00</td><td>0.76</td><td>0.90</td><td>0.67</td><td>0.83</td></tr><tr><td>Ours phi3-B</td><td>1.00</td><td>0.83</td><td>0.90</td><td>0.70</td><td>0.63</td></tr><tr><td>Ours phi3-C</td><td>0.96</td><td>0.80</td><td>1.00</td><td>0.80</td><td>0.60</td></tr></table>
256
+
257
+ Table 7: Human evaluation results
258
+
259
+ ![](images/b5adf6cdf24715ae96750e80bd9716cfece11eedab4b4436350e48bba7dea996.jpg)
260
+ Figure 2: Reward effects on target vocabulary coverage
261
+
262
+ reward using dynamically adjusted vocabulary coverage rates, and reward using both dynamic vocabulary coverage rate and sentence levels (proposed method). The frequency and diversity evaluation results for A and B level models are presented in Fig. 2. Complete results can be found in Appendix E. It can be seen that changing the simple match count reward to a dynamically adjusted reward indeed encourages the model to increase the entropy inside the target vocabulary and largely improve the vocabulary diversity.
263
+
264
+ # 7 Conclusion
265
+
266
+ In this paper, we target ESL learners as the audience for text simplification, aiming to facilitate foreign language learning. Drawing on the input hypothesis and the frequency effect theory in L2 learning, we propose a reinforcement learning method on an LLM that controls the simplification model to generate outputs satisfying vocabulary and sentence-level constraints. Experiment results show that the proposed method increases the target vocabulary coverage in the generated simplifications, and human evaluation confirms that the simplified texts generally preserve the targeted CEFR levels.
267
+
268
+ In practice, different individuals have varied levels
269
+
270
+ of knowledge of the language. We plan to extend the method to generate personalized simplifications targeted at individual learners in the future.
271
+
272
+ # Limitations
273
+
274
+ This work assumes the target vocabulary for the learner is accessible, which in reality may not be the case, as the target vocabulary varies across individual learners and has to be estimated first. Although it is out of the scope of this paper, this direction constitutes our future work. Besides, we currently do not control the frequency to a specific proportion, such as $95\%$ level-$i$ and $5\%$ level-$(i+1)$ vocabulary, which is an important aspect to consider according to L2 learning theory.
275
+
276
+ The control for target vocabulary and sentence level is implemented separately for each level rather than in a single model, causing heavier computational loads. In the future, we seek to improve the design of the reward model to integrate rewards for different proficiency levels into one model, and to explore finer control over the frequency of the generated vocabulary.
277
+
278
+ # References
279
+
280
+ Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.
281
+ Sweta Agrawal and Marine Carpuat. 2023. Controlling pre-trained language models for grade-specific text simplification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12807-12819, Singapore. Association for Computational Linguistics.
282
+ Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 49-54, Hong Kong, China. Association for Computational Linguistics.
283
+ Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2021. The (un)suitability of automatic evaluation metrics for text simplification. Computational Linguistics, 47(4):861-889.
284
+ Yuki Arase, Satoru Uchida, and Tomoyuki Kajiwara. 2022. CEFR-based sentence difficulty annotation and assessment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6206-6219, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
287
+ Jan Cegin, Jakub Simko, and Peter Brusilovsky. 2023. ChatGPT to replace crowdsourcing of paraphrases for intent classification: Higher diversity and comparable model robustness. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1889-1905, Singapore. Association for Computational Linguistics.
288
+ Antoine Chaffin, Vincent Claveau, and Ewa Kijak. 2022. PPL-MCTS: Constrained textual generation through discriminator-guided MCTS decoding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2953-2967, Seattle, United States. Association for Computational Linguistics.
289
+ Alison Chi, Li-Kuang Chen, Yi-Chen Chang, Shu-Hui Lee, and Jason S. Chang. 2023. Learning to paraphrase sentences to different complexity levels. Transactions of the Association for Computational Linguistics, 11:1332-1354.
290
+ Liam Cripwell, Joel Legrand, and Claire Gardent. 2023. Simplicity level estimate (SLE): A learned referenceless metric for sentence simplification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12053-12059, Singapore. Association for Computational Linguistics.
291
+ Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-Interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393-3402, Florence, Italy. Association for Computational Linguistics.
292
+ Nick C Ellis. 2002. Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in second language acquisition, 24(2):143-188.
293
+ Nick C Ellis and Fernando Ferreira-Junior. 2009. Construction learning as a function of frequency, frequency distribution, and function. The Modern language journal, 93(3):370-385.
294
+ Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, and Noam Brown. 2021. Scalable online planning via reinforcement learning fine-tuning. Advances in Neural Information Processing Systems, 34:16951-16963.
295
+ Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Dynamic multi-level multi-task learning for sentence simplification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 462-476, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
296
+
297
+ David Heineman, Yao Dou, Mounica Maddela, and Wei Xu. 2023. Dancing between success and failure: Edit-level simplification evaluation using SALSA. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3466-3495, Singapore. Association for Computational Linguistics.
298
+ Sophie Henning, William Beluch, Alexander Fraser, and Annemarie Friedrich. 2023. A survey of methods for addressing class imbalance in deep-learning based natural language processing. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 523-540, Dubrovnik, Croatia. Association for Computational Linguistics.
299
+ Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
300
+ Tannon Kew, Alison Chi, Laura Vásquez-Rodríguez, Sweta Agrawal, Dennis Aumiller, Fernando Alva-Manchego, and Matthew Shardlow. 2023. BLESS: Benchmarking large language models on sentence simplification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13291-13309, Singapore. Association for Computational Linguistics.
301
+ Tannon Kew and Sarah Ebling. 2022. Target-level sentence simplification as controlled paraphrasing. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), pages 28–42, Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
302
+ J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel.
303
+ Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 193-203, Tampere, Finland. European Association for Machine Translation.
304
+ Stephen Krashen. 1981. Second language acquisition. Second Language Learning, 3(7):19-39.
305
+ Dhruv Kumar, Lili Mou, Lukasz Golab, and Olga Vechtomova. 2020. Iterative edit-based unsupervised sentence simplification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7918-7928, Online. Association for Computational Linguistics.
306
+ Philippe Laban, Tobias Schnabel, Paul Bennett, and Marti A. Hearst. 2021. Keep it simple: Unsupervised simplification of multi-paragraph text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6365-6378, Online. Association for Computational Linguistics.
307
309
+ Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, and Yejin Choi. 2022. NeuroLogic a*esque decoding: Constrained text generation with lookahead heuristics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 780-799, Seattle, United States. Association for Computational Linguistics.
310
+ Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. NeuroLogic decoding: (un)supervised neural text generation with predicate logic constraints. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4288-4299, Online. Association for Computational Linguistics.
311
+ Mounica Maddela, Yao Dou, David Heineman, and Wei Xu. 2023. LENS: A learnable evaluation metric for text simplification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16383-16408, Toronto, Canada. Association for Computational Linguistics.
312
+ Louis Martin, Éric de la Clergerie, Benoit Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4689-4698, Marseille, France. European Language Resources Association.
313
+ Louis Martin, Angela Fan, Éric de la Clergerie, Antoine Bordes, and Benoit Sagot. 2022. MUSS: Multilingual unsupervised sentence simplification by mining paraphrases. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1651-1664, Marseille, France. European Language Resources Association.
314
+ Rosamond Mitchell, Florence Myles, and Emma Marsden. 2019. Second language learning theories. Routledge.
315
+ Akifumi Nakamachi, Tomoyuki Kajiwara, and Yuki Arase. 2020. Text simplification with reinforcement learning using supervised rewards on grammaticality, meaning preservation, and simplicity. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 153-159, Suzhou, China. Association for Computational Linguistics.
316
+ Daiki Nishihara, Tomoyuki Kajiwara, and Yuki Arase. 2019. Controllable text simplification with lexical constraint loss. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 260-266, Florence, Italy. Association for Computational Linguistics.
317
319
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
320
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
321
+ Carolina Scarton and Lucia Specia. 2018. Learning simplifications for specific target audiences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 712-718, Melbourne, Australia. Association for Computational Linguistics.
322
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
323
+ Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified text-to-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341-352, Aberdeen, Scotland, UK. Association for Computational Linguistics.
324
+ Regina Stodden, Omar Momen, and Laura Kallmeyer. 2023. DEplain: A German parallel corpus with intralingual translations into plain language for sentence and document simplification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16441-16463, Toronto, Canada. Association for Computational Linguistics.
325
+ Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Semantic structural evaluation for text simplification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 685-696, New Orleans, Louisiana. Association for Computational Linguistics.
326
+ Renliang Sun, Hanqi Jin, and Xiaojun Wan. 2021. Document-level text simplification: Dataset, criteria and baseline. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7997-8013, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
327
+ Wei Xu, Chris Callison-Burch, and Courtney Naples. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297.
328
+
329
+ Wei Xu, Courtney Naples, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.
330
+ Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511-3535, Online. Association for Computational Linguistics.
331
+ Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Mingfeng Xue, Boxing Chen, and Jun Xie. 2023. Tailor: A soft-prompt-based approach to attribute-based controlled text generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 410-427, Toronto, Canada. Association for Computational Linguistics.
332
+ Tatsuya Zetsu, Tomoyuki Kajiwara, and Yuki Arase. 2022. Lexically constrained decoding with edit operation prediction for controllable text simplification. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), pages 147-153, Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
333
+ Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
334
+ Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584-594, Copenhagen, Denmark. Association for Computational Linguistics.
335
+ Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164-3173, Brussels, Belgium. Association for Computational Linguistics.
336
+
337
+ # A Evaluation Metrics
338
+
339
+ In this section, the evaluation metrics are explained in detail: fluency, adequacy, target vocabulary frequency, diversity, and target sentence level.
340
+
341
+ Previous studies have introduced various metrics for evaluating simplicity, which we summarize in Table 8. Among these metrics, SARI (Xu et al., 2016) is the most commonly employed in the literature. However, recent studies show that SARI may not be an optimal measure for assessing the quality of simplicity (Alva-Manchego et al., 2021; Maddela et al., 2023; Stodden et al., 2023). Thus, we chose to use LENS (Maddela et al., 2023) and SALSA (Heineman et al., 2023), two recently proposed metrics, to measure simplicity.
344
+
345
+ <table><tr><td>Metric</td><td>Scope</td><td>Reference</td></tr><tr><td>BLEU (Papineni et al., 2002)</td><td>semantic similarity</td><td>Y</td></tr><tr><td>FKGL (Kincaid et al., 1975)</td><td>readability</td><td>N</td></tr><tr><td>FKBLEU (Xu et al., 2016)</td><td>readability, similarity</td><td>Y</td></tr><tr><td>SARI (Xu et al., 2016)</td><td>keep, add, delete</td><td>Y</td></tr><tr><td>D-SARI (Sun et al., 2021)</td><td>keep, add, delete</td><td>Y</td></tr><tr><td>SAMSA (Sulem et al., 2018)</td><td>semantic structural similarity</td><td>N</td></tr><tr><td>BERTScore (Zhang* et al., 2020)</td><td>semantic similarity</td><td>Y</td></tr><tr><td>SLE (Cripwell et al., 2023)</td><td>human rating + FKGL</td><td>N</td></tr><tr><td>LENS (Maddela et al., 2023)</td><td>human rating</td><td>Y</td></tr><tr><td>SALSA (Heineman et al., 2023)</td><td>human rating</td><td>N</td></tr></table>
346
+
347
+ Table 8: Metrics used in recent literature. Scope denotes the aspect that each metric aims to evaluate, and Reference indicates whether the metric is computed using references.
348
+
349
+ For adequacy and fluency, the ideal approach is human evaluation; however, this is impractical due to the large dataset size. Instead, we employed large language models to assess these two aspects. With a capable language model $f$ , the generated simplification sentence $s$ is evaluated as:
350
+
351
+ $$
352
+ \operatorname{score}(s) = \sum_{v \in V_{y}} f(v \mid (pmt, s)) \tag{8}
353
+ $$
354
+
355
+ where $pmt$ is a prompt designed for the model to output "yes" if the model judges $s$ to be adequate or fluent, and $V_y$ is a vocabulary subset for "yes" with $V_y = \{\text{YES}, \text{Yes}, \text{yes}\}$ . We use the Llama-3-8b-instruct<sup>8</sup> model as the evaluation model in our experiment.
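+
+ As a concrete illustration, the sketch below scores a simplification by summing the probability mass that the evaluation model assigns to the "yes" variants at the next-token position, following Eq. 8. The model identifier and the prompt wording are illustrative assumptions rather than the exact ones used in our experiments.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Illustrative model id; any capable instruction-tuned causal LM can play the role of f.
+ MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
+ model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")
+
+ YES_VARIANTS = ["YES", "Yes", "yes"]  # the vocabulary subset V_y in Eq. 8
+
+ def yes_score(simplification: str, aspect: str = "fluent") -> float:
+     """Sum the next-token probabilities of the "yes" variants given the prompt (Eq. 8)."""
+     # The prompt wording below is a hypothetical stand-in for pmt.
+     prompt = (f"Is the following sentence {aspect}? Answer yes or no.\n"
+               f"Sentence: {simplification}\nAnswer:")
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+     with torch.no_grad():
+         logits = model(**inputs).logits[0, -1]  # distribution over the next token
+     probs = torch.softmax(logits.float(), dim=-1)
+     yes_ids = [tokenizer.encode(" " + v, add_special_tokens=False)[0] for v in YES_VARIANTS]
+     return sum(probs[i].item() for i in yes_ids)
+ ```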
356
+
357
+ To measure target vocabulary frequency, we took the ratio between the total count of matched target words and the total count of generated words.
358
+
359
+ $$
360
+ \sum_{j=1}^{m} \sum_{k=1}^{n} \operatorname{count}(C_{j}, \operatorname{seq}_{k}) / \sum_{k=1}^{n} \operatorname{count}(\mathrm{token}, \operatorname{seq}_{k}) \tag{9}
361
+ $$
362
+
363
+ To measure vocabulary diversity, we took the ratio between the number of matched words and the number of words in the word list.
364
+
365
+ $$
366
+ \sum_{j \in D} \mathbf{1}\left(\bigvee_{k=1}^{n} \mathbf{1}_{C_{j}}(\operatorname{seq}_{k})\right) / |D| \tag{10}
367
+ $$
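+
+ For illustration, a minimal sketch of how Eqs. 9 and 10 can be computed is given below; whitespace tokenization and a flat single-word target list are simplifying assumptions made for brevity (the actual word-list entries $C_j$ may be multi-word phrases).
+
+ ```python
+ def vocab_frequency(target_words, sequences):
+     """Eq. 9: matched target-word count divided by the total generated word count."""
+     targets = set(w.lower() for w in target_words)
+     matched = sum(1 for seq in sequences for tok in seq.lower().split() if tok in targets)
+     total = sum(len(seq.split()) for seq in sequences)
+     return matched / max(total, 1)
+
+ def vocab_diversity(target_words, sequences):
+     """Eq. 10: fraction of the word list that appears in at least one generated sequence."""
+     generated = set(tok for seq in sequences for tok in seq.lower().split())
+     covered = sum(1 for w in target_words if w.lower() in generated)
+     return covered / max(len(target_words), 1)
+
+ # Toy usage with two generated simplifications and a four-word target list.
+ outputs = ["Let's try to understand the text well.", "Do you worry about the food you eat?"]
+ words = ["understand", "worry", "food", "journey"]
+ print(vocab_frequency(words, outputs), vocab_diversity(words, outputs))
+ ```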
368
+
369
+ # B Complex Sentence Generations
370
+
371
+ To generate complex sentences, we prompted the GPT-4 model to rephrase sentences of varied levels into highly complex sentences. To ensure the diversity of the generated complex data, we initially created a variety of seed prompts manually and instructed GPT-4 to generate additional prompts based on these seed prompts. GPT-4 was then prompted to generate complex sentences based on these diversified prompts, as sketched below. The 5 manually written seed prompts and 10 model-generated prompts are presented in Table 9. In total, 15 prompts were used to generate complex sentences; for each generation, one of the prompts was randomly selected. We present samples of the generated complex sentences together with their simplifications in Table 10 and Table 11.
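+
+ A minimal sketch of this generation step is shown below; it assumes the OpenAI Python client (v1 interface) with the API key set in the environment, and the model name and sampling temperature are illustrative.
+
+ ```python
+ import random
+ from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment
+
+ client = OpenAI()
+
+ def complexify(sentence: str, prompts: list[str]) -> str:
+     """Rewrite a sentence into a highly complex one using a randomly selected prompt (Table 9)."""
+     instruction = random.choice(prompts)
+     response = client.chat.completions.create(
+         model="gpt-4",  # illustrative model name
+         messages=[{"role": "user", "content": f"{instruction}\n{sentence}"}],
+         temperature=0.7,
+     )
+     return response.choices[0].message.content.strip()
+ ```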
374
+
375
+ # C Training Details
376
+
377
+ Implementation Details of Baselines We implemented the baseline models using the transformers library $^{10}$ . T5-s2s models require a parallel corpus of complex-simple sentences, for which we used the pseudo-parallel pairs of generated complex sentences and their original sentences, and prepended level tokens for level control during training and evaluation. We implemented the FUDGE simplification control model with a Llama-3-8b-instruct model as the generation model, and its logits during inference were adjusted using the CEFR level classification model released by Arase et al. (2022).
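+
+ To make the FUDGE-style control concrete, the sketch below rescores the top-$k$ candidate next tokens with a level classifier's log-probability, in the spirit of Yang and Klein (2021); the classifier interface and the weighting are illustrative assumptions, not our exact implementation.
+
+ ```python
+ import torch
+
+ def fudge_adjust(lm_logits, prefix_ids, level_logprob, target_level, top_k=50, weight=1.0):
+     """Rescore the top-k next-token candidates with a CEFR-level classifier, FUDGE-style.
+     `level_logprob(ids, level)` is a stand-in for the classifier of Arase et al. (2022):
+     it should return log P(level | prefix extended with the candidate token)."""
+     adjusted = lm_logits.clone()
+     topk = torch.topk(lm_logits, top_k)
+     for score, tok in zip(topk.values, topk.indices):
+         candidate = torch.cat([prefix_ids, tok.view(1)])
+         adjusted[tok] = score + weight * level_logprob(candidate, target_level)
+     return adjusted  # used in place of the raw logits when choosing the next token
+ ```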
378
+
379
+ Implementation Details of Proposed Method The PPO training algorithm was implemented using the TRL library with a learning rate of $3e - 5$ . For the dynamic reward model used in training, we set $\alpha$ to 1.2, as we empirically found that a value slightly greater than 1 performs better; the reward for phrases is always set to 1.5 times that for words, to encourage phrase generation. For the overall reward, $\lambda$ was set to 1.5 to compensate for the vocabulary reward penalty, and $\gamma$ was set to 1. During training, we used the following prompt for the model to generate simplifications: "Given a complex sentence {}, generate a simplified version for it:"
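+
+ A simplified sketch of this training loop is given below. It follows the classic TRL `PPOTrainer` interface (the exact API differs across TRL versions); apart from the learning rate, the hyperparameters, data iterator, generation length, and reward function are placeholders for the components described above.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer
+ from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer
+
+ model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative policy model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ tokenizer.pad_token = tokenizer.eos_token
+ model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
+
+ config = PPOConfig(learning_rate=3e-5, batch_size=8, mini_batch_size=2)
+ ppo_trainer = PPOTrainer(config, model, ref_model=None, tokenizer=tokenizer)
+
+ def dynamic_reward(simplification: str) -> torch.Tensor:
+     return torch.tensor(0.0)  # placeholder for the dynamic vocabulary + sentence-level reward
+
+ template = "Given a complex sentence {}, generate a simplified version for it:"
+ for batch in complex_sentence_batches:  # assumed iterable of raw complex sentences
+     queries = [tokenizer.encode(template.format(s), return_tensors="pt").squeeze(0) for s in batch]
+     responses = [ppo_trainer.generate(q, max_new_tokens=64).squeeze(0)[q.shape[0]:] for q in queries]
+     rewards = [dynamic_reward(tokenizer.decode(r, skip_special_tokens=True)) for r in responses]
+     ppo_trainer.step(queries, responses, rewards)
+ ```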
380
+
381
+ Training Performance The performance of the sentence-level reward model is shown in Fig. 3. Fig. 4 shows the mean reward and KL change over the training steps with and without the dynamic reward.
382
+
383
+ ![](images/3700c19b11c14c7cfd6f82e794c556b50d18c9421a0b33fc2451ca086b53889f.jpg)
384
+ Figure 3: Sentence level reward model evaluation accuracy
385
+
386
+ Objective/KL indicates the deviation of the simplification model from the reference model; an excessively high KL indicates model collapse, and a burst in mean reward indicates that the model has collapsed and only produces a limited set of vocabulary. It can be observed that using the dynamic reward helps stabilize the training, while using the match count alone causes the model to be over-optimized and collapse to a limited vocabulary subset.
387
+
388
+ ![](images/03122b3996a83b79a81ee115406b4fa0c13556e1019794bf4998a89bdcb6662f.jpg)
389
+ Figure 4: Training stability w/wo dynamic reward
390
+
391
+ # D Human Evaluation Details
392
+
393
+ In this section, the annotation guidelines that evaluators used to evaluate the generated simplifications, as well as the annotation interface, are presented. The annotation guidelines contain the definition of each aspect to be evaluated, the criteria for the evaluation, and instructions for the annotation interface. The annotation interface is a binary-choice form: for each aspect, evaluators tick a box to indicate that the simplification satisfies that aspect, and leave it unticked otherwise. The evaluation results are then used to calculate the binary accuracy for each evaluated aspect. The evaluation guidelines are shown in Fig. 5(a) and the evaluation interface is shown in Fig. 5(b).
394
+
395
+ ![](images/6df1ac7aebe5ed51b31db733a70aed9f4330ef7f86a212180c31ed84c7363dd9.jpg)
396
+ (a) Screenshot of annotation guidelines shown to the evaluators
397
+
398
+ ![](images/c680c854e122910dd545ecacdcc0b6565e6a8c4c27a8514621be606f22906f1f.jpg)
399
+ (b) Screenshot of annotation interface shown to the evaluators
400
+ Figure 5: Screenshots of the annotation guidelines and the annotation interface
401
+
402
+ # E More Evaluation Results
403
+
404
+ Fig. 6 shows the ablation study results on all three levels of models.
405
+
406
+ Tables 10 and 11 present example outputs: the complex sentences, reference sentences, and simplified sentences. For each complex sentence, there are three versions of the simplified sentence, corresponding to the A, B, and C levels, each generated by the model targeting that level.
407
+
408
+ ![](images/8165baaefe1275ccf720f03edc4c398009a2493c79e81fd0bde28977a986ba74.jpg)
409
+ Figure 6: Vocabulary targeting w/ different rewards, all levels
410
+
411
+ <table><tr><td>Manual Composed Prompts</td></tr><tr><td>You are an expert in academic writing, renowned for your ability to compose intricate and sophisticated sentences. Please rephrase the following sentence,so that it&#x27;s a complex, hard to follow sentence that would usually appear in a journal article, without loss of original meaning:</td></tr><tr><td>You are an experienced English teacher. Please rephrase the following sentence,to make it a complicated, very hard sentence to read that a English learner may encounter in daily reading, without loss of original meaning:</td></tr><tr><td>You are a successful postmodernism theater and book critic. You used varied writing styles in your articles. Please rephrase the following sentence,to make it a complex and very difficult to understand sentence,without loss of original meaning:</td></tr><tr><td>You are a philosopher and literature professor. You usually make intricate perception and sharp insight in your writing. Please rephrase the following sentence,to make it a short but complex and very hard to follow,without loss of original meaning:</td></tr><tr><td>You are an editor of social and financial news and journals. Please rephrase the following sentence,so that the sentence has complex compositions and advanced words, that normal readers cannot understand, without loss of original meaning:</td></tr><tr><td>Model Generated Prompts</td></tr><tr><td>You are a legal scholar with extensive experience in drafting complex legal documents. Please rephrase the following sentence,to make it a complex and legally intricate sentence,without loss of original meaning:</td></tr><tr><td>You are a renowned scientist known for writing dense and comprehensive research papers. Please rephrase the following sentence,to make it a complex and highly technical sentence,without loss of original meaning:</td></tr><tr><td>You are a seasoned journalist known for crafting elaborate and detailed investigative reports. Please rephrase the following sentence,to make it a complex and deeply investigative sentence,without loss of original meaning:</td></tr><tr><td>You are a literary critic who writes for a prestigious literary journal, known for your sophisticated language. Please rephrase the following sentence,to make it a complex and highly sophisticated sentence,without loss of original meaning:</td></tr><tr><td>You are a historian known for your detailed and intricate historical analyses. Please rephrase the following sentence,to make it a complex and historically detailed sentence,without loss of original meaning:</td></tr><tr><td>You are an expert in technical writing, specializing in creating elaborate and detailed user manuals. Please rephrase the following sentence,to make it a complex and technically detailed sentence,without loss of original meaning:</td></tr><tr><td>You are a linguist with expertise in creating intricate and multifaceted linguistic analyses. Please rephrase the following sentence,to make it a complex and linguistically intricate sentence,without loss of original meaning:</td></tr><tr><td>You are a political theorist known for your dense and intricate political analyses. Please rephrase the following sentence,to make it a complex and politically intricate sentence,without loss of original meaning:</td></tr><tr><td>You are an economist renowned for your detailed and complex economic analyses. 
Please rephrase the following sentence,to make it a complex and economically detailed sentence,without loss of original meaning:</td></tr><tr><td>You are a theologian known for your intricate and deeply philosophical theological writings. Please rephrase the following sentence,to make it a complex and theologically intricate sentence,without loss of original meaning:</td></tr></table>
412
+
413
+ Table 9: Prompts used to generate complex sentences
414
+
415
+ <table><tr><td>Complex Sentence
416
+ Let us endeavor to delve into the intricacies and nuances of the text, striving to comprehend the underlying themes and implications inherent within, as we embark on this journey of intellectual exploration.</td></tr><tr><td>Reference, level: A
417
+ Let&#x27;s try to read.</td></tr><tr><td>Simplifications
418
+ Let&#x27;s try to understand the text well.
419
+ Let&#x27;s try to understand the text&#x27;s main ideas and meanings as we read.
420
+ Let&#x27;s understand the text&#x27;s themes and meanings.</td></tr><tr><td>Complex Sentence
421
+ Are you experiencing any concerns or apprehensions regarding the composition, quality, or safety of the food you consume?</td></tr><tr><td>Reference, level: A
422
+ Do you worry about what you eat?</td></tr><tr><td>Simplifications
423
+ Do you have worries about the food you eat?
424
+ Do you have worries about the food you eat&#x27;s quality or safety?
425
+ Do you have concerns about your food&#x27;s composition, quality, or safety?</td></tr><tr><td>Complex Sentence
426
+ The considerable distance, compounded by Jamie&#x27;s current condition of pregnancy, which inexorably engenders a state of increased fatigue, renders the prospect of ambulation to said location prohibitively challenging for her.</td></tr><tr><td>Reference, level: B
427
+ It is too far for Jamie to walk to , especially because she is pregnant and easily exhausted.</td></tr><tr><td>Simplifications
428
+ Jamie is too tired to walk far because she is pregnant.
429
+ Jamie&#x27;s pregnancy makes it very hard for her to walk to the location due to the long distance.
430
+ Jamie&#x27;s pregnancy leads to fatigue, making it hard for her to walk to the distant place.</td></tr><tr><td>Complex Sentence
431
+ With meticulous care, the young girl positioned the doll on her bed, her actions imbued with a quiet reverence, before softly pressing her lips to its cheek, an act infused with such tenderness that one might easily imagine she believed the doll to be a living boy.</td></tr><tr><td>Reference, level: B
432
+ The little girl placed the doll carefully on her bed and kissed its cheek as gently as if it were a real boy.</td></tr><tr><td>Simplifications
433
+ The girl put the doll on her bed and kissed it gently.
434
+ The girl carefully placed the doll on her bed, kissed its cheek gently, and seemed to love it like a boy.
435
+ The girl gently kissed the doll out of affection.</td></tr></table>
436
+
437
+ Table 10: Complex sentences, references and model simplified sentences in A, B, C levels, respectively
438
+
439
+ <table><tr><td>Complex Sentence
440
+ Numerous physical characteristics of alkenes and alkanes exhibit remarkable congruity; manifesting as chromatically imperceptible, possessing nonpolarity, and exhibiting combustibility.</td></tr><tr><td>Reference, level: C
441
+ Many of the physical properties of alkenes and alkanes are similar : they are colorless , nonpolar, and combustable .</td></tr><tr><td>Simplifications
442
+ Alkenes and alkanes have similar physical traits. They are hard to see color differences, not attracted to water, and burn easily.
443
+ Alkenes and alkanes have similar physical traits. They are colorless, nonpolar, and burn easily.
444
+ Alkenes and alkanes share traits like colorlessness, nonpolarity, and flammability.</td></tr><tr><td>By the month of May in the year 2008, the phenomenon known as Rickrolling had ascended to the status of an internet sensation, infiltrating the digital landscape with such pervasive enthusiasm that it swiftly garnered the attention of mainstream media outlets, which began to extensively cover this viral trend.</td></tr><tr><td>Reference, level: C
445
+ By May 2008 , Rickrolling had become an internet hit or phenomenon and very soon Rickrolling was being covered on mainstream media .</td></tr><tr><td>Simplifications
446
+ In 2008, Rickrolling became very popular on the internet and in the news.
447
+ In 2008, Rickrolling became a big internet trend.
448
+ In 2008, Rickrolling became an internet sensation, leading to widespread media coverage.</td></tr></table>
449
+
450
+ Table 11: Complex sentences, references and model simplified sentences in A, B, C levels, respectively
aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c07d4d1682747ef55289a0ef9a75db62f620fe88ca95015ed600f333c4ef912b
3
+ size 1619740
aligningsentencesimplificationwithesllearnersproficiencyforlanguageacquisition/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4027c69440499ad354655ccf8005821d41ae5d4bae928da69beedfb60d384649
3
+ size 474768
amphistabidirectionalmultiheaddecodingforacceleratingllminference/c43b5d34-1d0e-41f4-b5d7-782ab331bb8c_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89729889f428d0bc3882c6618a499c8fb88a7df3aa5d8b05d931becca9cff97b
3
+ size 93904
amphistabidirectionalmultiheaddecodingforacceleratingllminference/c43b5d34-1d0e-41f4-b5d7-782ab331bb8c_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3aa563c6b8f2687b865abb0175ece11a30542941c8a2b6e6f722e8bedf08044
3
+ size 112440
amphistabidirectionalmultiheaddecodingforacceleratingllminference/c43b5d34-1d0e-41f4-b5d7-782ab331bb8c_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0be15190b9145d67fc7183be3c346a40e495315fb0aa4229d17450a96e68fdab
3
+ size 3158808
amphistabidirectionalmultiheaddecodingforacceleratingllminference/full.md ADDED
@@ -0,0 +1,358 @@
 
 
1
+ # Amphista: Bi-directional Multi-head Decoding for Accelerating LLM Inference
2
+
3
+ Zeping Li $^{1*}$ , Xinlong Yang $^{12*}$ , Ziheng Gao $^{1}$ , Ji Liu $^{1}$ , Guanchen Li $^{1}$ , Zhuang Liu $^{1}$ , Dong Li $^{1}$ , Jinzhang Peng $^{1}$ , Lu Tian $^{1}$ , Emad Barsoum $^{1}$
4
+
5
+ <sup>1</sup> Advanced Micro Devices, Inc. <sup>2</sup> Peking University {zeping.li, emad.barsoum} @amd.com
6
+
7
+ # Abstract
8
+
9
+ Large Language Models (LLMs) inherently use autoregressive decoding, which lacks parallelism in inference and results in significantly slow inference speed. While methods such as Medusa construct parallelized heads, they lack adequate information interaction across different prediction positions. To overcome this limitation, we introduce Amphista, an enhanced speculative decoding framework that builds upon Medusa. Specifically, Amphista models an Auto-embedding Block capable of parallel inference, incorporating bi-directional attention to enable interaction between different drafting heads. Additionally, Amphista integrates Staged Adaptation Layers, which ensure a seamless transition of semantic information from the target model's autoregressive inference to the drafting heads' non-autoregressive inference, effectively achieving paradigm shift and feature fusion. Experimental results on Vicuna models using MT-Bench and Spec-Bench demonstrate that Amphista achieves substantial acceleration while maintaining generation quality. On MT-Bench, Amphista delivers up to $2.75 \times$ speedup over vanilla autoregressive decoding and $1.40 \times$ over Medusa on Vicuna 33B in wall-clock time.
10
+
11
+ # 1 Introduction
12
+
13
+ Generative large language models (LLMs) have made remarkable advances in language processing by scaling the transformer decoder block, offering a potential pathway toward Artificial General Intelligence (AGI) (OpenAI, 2022; Touvron et al., 2023). However, the autoregressive nature of next-token prediction and the large parameter size of foundational models result in low inference efficiency, marked by high latency per token and low throughput per second during decoding.
14
+
15
+ In this context, acceleration during inference has become a burgeoning research area.
16
+
17
+ ![](images/18c966cf72b815a2ae7907b8cbc88f6b062215ad8ec33a4308dbf6ceb4efac2a.jpg)
18
+ Figure 1: Top-1/5 accuracy for different heads of Medusa and Amphista. We perform testing with randomly sampled $5\%$ ShareGPT conversation data. Amphista far outperforms Medusa in terms of head accuracy, especially for the latter two heads.
19
+
20
+ ![](images/eb5e18d54005ed16b7af70c53a1cbc625cb57056baeab971a94a1b2ff6bc0629.jpg)
21
+
22
+ Speculative decoding (Stern et al., 2018; Chen et al., 2023) uses a draft model for preliminary multi-step speculative inference and a target model to verify the speculative predictions, emerging as a very promising algorithmic strategy. Notably, by employing a rejection sampling strategy (Leviathan et al., 2023), the generation quality and accuracy of the speculate-and-verify framework are consistent with those of the target model, making speculative decoding a lossless acceleration framework. Medusa decoding (Cai et al., 2024) innovatively uses the target model's last hidden states to implement a multi-head inference framework. It is widely adopted for its efficient acceleration and simple structure.
23
+
24
+ Nonetheless, as illustrated in Figure 1, we find that the prediction accuracy of separately independent Medusa heads is relatively low, which progressively worsens and adversely impacts acceleration performance in downstream tasks. To mitigate these inaccuracies stemming from the absence of feature dependencies while maintaining parallel inference, we first introduce the Auto-embedding Block, which integrates a bi-directional self-attention mechanism (Vaswani et al., 2017). This structure not only allows earlier heads to attend to subsequent ones, but more importantly, enables backward heads to leverage information from preceding heads. This enhancement allows drafting heads to better capture contextual information, thereby improving the acceptance rate of their predictions. Moreover, in the multi-step drafting framework, this non-autoregressive structure achieves lower drafting latency compared to an autoregressive approach.
27
+
28
+ Additionally, we identify a significant gap between the autoregressive target model and the non-autoregressive draft model in their prediction paradigms. To bridge this discrepancy and further enhance feature representations across different drafting heads, we propose the Staged Adaptation Layers. These layers serve as an intermediary module to facilitate feature integration and transformation between the target model and draft heads. Once adopted, semantically enriched features are passed through MLP activations and fed into the auto-embedding block. This enhances the bi-directional attention mechanism's ability to fuse features across heads, ultimately boosting acceptance rates and reducing wall-clock time.
29
+
30
+ Lastly, to further align the draft model with the target model with minimal computational cost, we incorporate the sampled token from the target model's latest prediction into the staged adaptation layers. This critically integrated information harmonizes Amphista with the target model, yielding a significant improvement in performance.
31
+
32
+ On MT-Bench, Amphista achieves up to $2.75 \times$ speedup over vanilla autoregressive decoding and $1.40 \times$ over Medusa on Vicuna 33B, along with consistently high head accuracy (see Figure 1). To summarize, our contributions are as follows:
33
+
34
+ - We present Amphista, a non-autoregressive and innovatively cost-efficient inference acceleration framework, built upon the foundational principles of Medusa decoding.
35
+ - We introduce the Auto-embedding Block, which enables bi-directional interaction among different heads by facilitating collaborative information exchange during the drafting phase. Additionally, the Staged Adaptation Layers are introduced to bridge the gap between autoregressive and non-autoregressive token prediction through a two-stage adaptation process. Finally, the integration of a sampled token from the target model further aligns the draft and target models with minimal computational overhead.
36
+ - We perform comprehensive evaluations on a diverse set of foundational models. The results show that Amphista consistently outperforms Medusa in terms of both acceptance rate and speed-up, across various generation tasks.
39
+
40
+ # 2 Preliminaries
41
+
42
+ In this section, we introduce some preliminary background related to our work as follows:
43
+
44
+ Speculative Decoding. Speculative decoding has been successfully applied to LLM decoding algorithm recently (Leviathan et al., 2023; Chen et al., 2023). The core idea is to leverage a small, lower-quality model (draft model) together with a large, higher-quality model (target model) to accelerate token generation. Concretely, in each decoding step, the algorithm first uses the draft model to autoregressively generate a sequence of future tokens. These drafted tokens are then verified by the target model in a single forward pass. During the verification process, a certain strategy is applied to determine which tokens are accepted by the target model and which are rejected and discarded. Previous work (Leviathan et al., 2023) has theoretically and empirically demonstrated that the token output distribution of speculative decoding is consistent with the autoregressive generation of original target model, but with fewer decoding steps, thus enhancing generation efficiency.
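+
+ For reference, a schematic sketch of the verification step is shown below; it follows the standard speculative-sampling acceptance rule of Leviathan et al. (2023) and is simplified (e.g., the bonus token drawn after full acceptance is omitted, and nonzero draft probabilities are assumed).
+
+ ```python
+ import torch
+
+ def verify_draft(target_probs, draft_probs, draft_tokens):
+     """Accept each drafted token with probability min(1, p/q); on the first rejection,
+     resample a corrected token from the normalized residual (p - q)+ and stop.
+     target_probs, draft_probs: [k, vocab] distributions at the k drafted positions."""
+     accepted = []
+     for i, tok in enumerate(draft_tokens.tolist()):
+         p, q = target_probs[i, tok], draft_probs[i, tok]
+         if torch.rand(()) < torch.clamp(p / q, max=1.0):
+             accepted.append(tok)
+         else:
+             residual = torch.clamp(target_probs[i] - draft_probs[i], min=0.0)
+             accepted.append(torch.multinomial(residual / residual.sum(), 1).item())
+             break
+     return accepted
+ ```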
45
+
46
+ Medusa Decoding. Medusa Decoding (Cai et al., 2024) represents an efficient speculative decoding algorithm based on the draft-and-verify principle, inheriting principles from blockwise parallel decoding (Stern et al., 2018). Specifically, Medusa integrates independent MLP layers, called drafting heads, with the target model to form a unified architecture. In each decoding step, the target model's $\mathsf{lm\_head}$ samples the next token, while the next-i MLP heads predict tokens at subsequent positions. These drafted tokens are then verified by the target model's forward pass to decide their acceptance. By leveraging lightweight MLP layers, Medusa strikes an effective balance between computational efficiency and prediction accuracy, leading to substantial acceleration. Hydra (Ankner et al., 2024), which is a subsequent state-of-the-art optimization based on Medusa, transforms the independent MLP heads into sequentially dependent MLP heads, further enhancing the predictive accuracy.
47
+
48
+ Tree Attention. Tree attention (Miao et al., 2024; Cai et al., 2024) enables parallel computation of attention scores for multiple draft candidates.
49
+
50
+ ![](images/eb4655d4144688957da03560a0efc8d1d9e9dcdd8f84482ad45671ee03663450.jpg)
51
+ Figure 2: The Framework of Amphista Decoding. Our method improves Medusa in two folds: (1) We introduce staged adaptation layers, consisting of a group of causal Transformer Decoder layers built upon the target model, to adapt the target model's hidden states and the sampled token in two stages. This module ensures that the adapted features contain richer contextual information, supporting multiple-token predictions rather than focusing solely on the immediate next-token prediction. (2) We introduce an auto-embedding block, which is a bi-directional Transformer Encoder module with positional encoding. This block allows each head to attend to others, fostering cooperative predictions and thereby enhancing the speculative accuracy during the drafting stage.
52
+
53
+ Medusa uses a tree causal mask, allowing each node to attend only to its ancestors, efficiently processing multiple candidate sequences simultaneously (see Appendix A.1 for details).
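+
+ A minimal sketch of how such a mask can be built is shown below; the parent-array representation of the draft tree is an illustrative choice.
+
+ ```python
+ import torch
+
+ def tree_attention_mask(parents):
+     """Build a tree causal mask: node i may attend only to itself and its ancestors.
+     `parents[i]` is the index of node i's parent, or -1 for a root node."""
+     n = len(parents)
+     mask = torch.zeros(n, n, dtype=torch.bool)
+     for i in range(n):
+         j = i
+         while j != -1:  # walk up to the root, enabling attention to every ancestor
+             mask[i, j] = True
+             j = parents[j]
+     return mask  # True = attention allowed
+
+ # Example: a root with two children, the first of which has one child of its own.
+ print(tree_attention_mask([-1, 0, 0, 1]).int())
+ ```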
54
+
55
+ # 3 Amphista
56
+
57
+ The overview of Amphista is shown in Figure 2. Building its pipeline upon the target model, Amphista contains two main modules: (1) Staged Adaptation Layers. They are causal Transformer Decoder layers that adapt the target model's hidden states and sampled token embedding in two stages, each focusing on different drafting positions. This adaptation process results in hidden states that are enhanced with position-aware contextual information, improving overall prediction accuracy, especially for the latter steps. (2) Auto-embedding Block. It is a Transformer Encoder module that conducts bidirectional self-attention computations among the representations of different draft heads, allowing each head to be attended by the others. This facilitates collaborative prediction among these heads, thereby improving overall prediction accuracy.
58
+
59
+ # 3.1 Staged Adaptation Layers
60
+
61
+ Figure 2 demonstrates the relevant details of our staged adaptation layers. Although the target model's hidden states contain semantically rich information, there are still differences in the representation requirements between the target model and the draft heads. Specifically, the hidden states of the target model are trained only for predicting the next token, while draft heads need more contextual and position-aware hidden states to perform multi-step speculation. To address this problem, Medusa-2 applies LoRA (Hu et al., 2021) for joint training of the target model and draft heads, which may compromise generality on downstream tasks. Hydra employs a single prefix layer for all positions, lacking targeted adaptation for different positions. We propose an effective adaptation method that incorporates two adaptation layers to transform and adapt the strong semantic information from the target model in stages. Specifically, given the hidden states $h_t$ at position $t$ from the target model's final layer and the embedding of the token $e_{t+1}$ sampled from $h_t$, we use the two adaptation layers to transform them in stages as below:
64
+
65
+ $$
66
+ h_{t}^{1} = \mathrm{SAL}^{1}\left(fc^{1}([h_{t}; e_{t+1}]), kv_{1:t-1}^{1}\right), \tag{1}
67
+ $$
68
+
69
+ $$
70
+ h_{t}^{2} = \mathrm{SAL}^{2}\left(fc^{2}([h_{t}^{1}; e_{t+1}]), kv_{1:t-1}^{2}\right).
71
+ $$
72
+
73
+ $SAL^1$ stands for the Stage-one Adaptation Layer that adapts the target model's hidden states and the sampled token embedding, while $SAL^2$ stands for the Stage-two Adaptation Layer that adapts $SAL^1$'s output hidden states as well as the sampled token embedding. The functions $fc^1$ and $fc^2$ are fully connected layers employed to transform features derived from the concatenation of hidden states and token embeddings. The terms $kv_{1:t-1}^1$ and $kv_{1:t-1}^2$ represent the key-value caches for each adaptation layer. Subsequently, the adapted hidden states $h_t^1$ and $h_t^2$ are fed into the first and second halves of the drafting heads respectively, ensuring that each adaptation layer focuses on adapting the target model's semantic representations to specific future locations.
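+
+ The sketch below illustrates this two-stage adaptation as a PyTorch module; it is a simplified rendering of Eq. 1 in which the per-layer key-value caches are replaced by an explicit context tensor and the decoder-layer hyperparameters are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class StagedAdaptation(nn.Module):
+     """Simplified sketch of the staged adaptation layers (Eq. 1)."""
+     def __init__(self, d_model: int, n_heads: int = 8):
+         super().__init__()
+         self.fc1 = nn.Linear(2 * d_model, d_model)
+         self.fc2 = nn.Linear(2 * d_model, d_model)
+         self.sal1 = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
+         self.sal2 = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
+
+     def forward(self, h_t, e_next, context):
+         # h_t:     [B, 1, d] last hidden state of the target model at position t
+         # e_next:  [B, 1, d] embedding of the token sampled from h_t
+         # context: [B, T, d] preceding states, standing in for the kv_{1:t-1} caches
+         h1 = self.sal1(self.fc1(torch.cat([h_t, e_next], dim=-1)), context)
+         h2 = self.sal2(self.fc2(torch.cat([h1, e_next], dim=-1)), context)
+         return h1, h2  # routed to the first / second halves of the drafting heads
+ ```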
76
+
77
+ # 3.2 Auto-embedding Block
78
+
79
+ Figure 2 shows the detailed design of our Auto-embedding Block. Given a set of $K$ drafting MLP heads, $\mathrm{MLP}_k$ head is tasked with predicting the token in the $(t + k + 1)$ -th position. Upon obtaining adapted hidden states $h_t^1$ and $h_t^2$ from the first and second staged adaptation layers, we first utilize the MLP layers to project them into more position-aware and semantically rich hidden states:
80
+
81
+ $$
82
+ h_{k}^{\prime} = \operatorname{MLP}_{k}\left(h_{t}^{1}\right), \quad k = 1, 2, \dots, \lfloor K/2 \rfloor, \tag{2}
83
+ $$
84
+
85
+ $$
86
+ h_{k}^{\prime} = \operatorname{MLP}_{k}\left(h_{t}^{2}\right), \quad k = \lfloor K/2 \rfloor + 1, \dots, K,
87
+ $$
88
+
89
+ where $\mathrm{MLP}_k\in \mathbb{R}^{d\times d}$ , and $d$ is the dimension of the target model's hidden states. We then concatenate these $K$ hidden states along the seq_len dimension:
90
+
91
+ $$
92
+ H^{\prime} = \mathrm{concat}([h_{1}^{\prime}, h_{2}^{\prime}, h_{3}^{\prime}, \dots, h_{K}^{\prime}]), \tag{3}
93
+ $$
94
+
95
+ where $H^{\prime}\in \mathbb{R}^{K\times d}$ . In order to further enhance the relative positional information among different heads, we introduce additional positional encodings. Specifically, we introduce a learnable positional embedding $PE\in \mathbb{R}^{K\times d}$ , and the position-encoded hidden states $H_{p}$ are expressed as:
96
+
97
+ $$
98
+ H _ {p} = H ^ {\prime} + P E. \tag {4}
99
+ $$
100
+
101
+ Finally, we employ an effective and efficient bidirectional self-attention module to enable mutual awareness among the drafting heads, and use additional learnable LM heads to sample the top- $k$ draft tokens at each position:
102
+
103
+ $$
104
+ \mathrm{attn}_{o} = \operatorname{Self\text{-}Attention}\left(H_{p}\right), \tag{5}
105
+ $$
106
+
107
+ $$
108
+ d\_\mathrm{logits}_{k} = \operatorname{LM\text{-}Head}_{k}\left(\mathrm{attn}_{o}[k]\right), \quad k = 1, \dots, K. \tag{6}
109
+ $$
110
+
111
+ In the end, these draft tokens are organized into a draft tree and then verified by the LLM through tree attention. Unlike the independent heads in Medusa and the sequentially dependent heads in Hydra, Amphista adopts bi-directionally dependent heads. This approach enhances overall prediction accuracy while maintaining a non-autoregressive mechanism, potentially reducing the substantial computation overhead associated with sequential calculations (i.e., autoregressive manner).
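+
+ The sketch below renders the auto-embedding block of Eqs. 2-6 as a PyTorch module; the number of heads, the attention configuration, and the weight shapes are illustrative simplifications.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AutoEmbeddingBlock(nn.Module):
+     """Simplified sketch of the auto-embedding block (Eqs. 2-6)."""
+     def __init__(self, d_model: int, vocab_size: int, num_heads: int = 4, n_attn_heads: int = 8):
+         super().__init__()
+         self.num_heads = num_heads
+         self.mlps = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(num_heads)])
+         self.pos_emb = nn.Parameter(torch.zeros(num_heads, d_model))  # PE in Eq. 4
+         self.encoder = nn.TransformerEncoderLayer(d_model, n_attn_heads, batch_first=True)
+         self.lm_heads = nn.ModuleList([nn.Linear(d_model, vocab_size, bias=False)
+                                        for _ in range(num_heads)])
+
+     def forward(self, h1, h2):
+         # h1, h2: [B, d] adapted hidden states from the two staged adaptation layers
+         half = self.num_heads // 2
+         feats = [self.mlps[k](h1 if k < half else h2) for k in range(self.num_heads)]
+         H = torch.stack(feats, dim=1) + self.pos_emb        # Eqs. 2-4
+         attn_out = self.encoder(H)                          # Eq. 5: bi-directional, no causal mask
+         return [self.lm_heads[k](attn_out[:, k]) for k in range(self.num_heads)]  # Eq. 6
+ ```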
112
+
113
+ # 3.3 Training Objective
114
+
115
+ Our loss function integrates two components to achieve a dual objective. First, we employ a Cross-Entropy (CE) loss between the logits of Amphista and those of the target model to align their output token distributions. Second, we utilize a language modeling (LM) loss to quantify the discrepancy between Amphista's outputs and the ground truth tokens. This approach enables Amphista not only to emulate the target model but also to assimilate predictive capabilities from the real corpus.
116
+
117
+ $$
118
+ \mathcal{L}_{\text{Amphista}} = \lambda_{1} \mathcal{L}_{\text{alignment}} + \lambda_{2} \mathcal{L}_{\mathrm{lm}}, \tag{7}
119
+ $$
120
+
121
+ $$
122
+ \mathcal{L}_{\text{alignment}} = \mathrm{CE}\left(d\_\mathrm{logits}_{i}, \mathrm{logits}_{T_{t+1+i}}\right), \tag{8}
123
+ $$
124
+
125
+ $$
126
+ \mathcal{L}_{\mathrm{lm}} = \mathrm{CE}\left(d\_\mathrm{logits}_{i}, gt_{t+1+i}\right). \tag{9}
127
+ $$
128
+
129
+ Note that $d\_logits_{i}$ and $\text{logits}_{T_{t+1+i}}$ are the logits from Amphista and the target model for token $T_{t+1+i}$ , while $gt_{t+1+i}$ represents the ground-truth label of token $T_{t+1+i}$ . The terms $\lambda_{1}$ and $\lambda_{2}$ are weighting factors for the two objectives.
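+
+ A minimal sketch of this objective is shown below; the soft cross-entropy against the target distribution stands in for Eq. 8, and the weights passed in are placeholders rather than the values used in training.
+
+ ```python
+ import torch.nn.functional as F
+
+ def amphista_loss(draft_logits, target_logits, gt_ids, lambda1=1.0, lambda2=1.0):
+     """Eq. 7: per-head alignment loss (Eq. 8) plus language-modeling loss (Eq. 9).
+     draft_logits / target_logits: lists of [B, vocab] tensors, one per drafting head;
+     gt_ids: list of [B] ground-truth token ids. Soft targets need PyTorch >= 1.10."""
+     loss = 0.0
+     for d_logits, t_logits, gt in zip(draft_logits, target_logits, gt_ids):
+         align = F.cross_entropy(d_logits, F.softmax(t_logits, dim=-1))  # match target distribution
+         lm = F.cross_entropy(d_logits, gt)                              # match ground-truth tokens
+         loss = loss + lambda1 * align + lambda2 * lm
+     return loss / len(draft_logits)
+ ```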
130
+
131
+ # 4 Experiments
132
+
133
+ # 4.1 Experimental Settings
134
+
135
+ Models and Baselines. Following (Cai et al., 2024; Li et al., 2024; Ankner et al., 2024), we use the Vicuna family of models (Zheng et al., 2024) as our target models. Specifically, we implement our method on Vicuna 7B, 13B, and 33B models with four drafting heads. As baselines, we choose original speculative decoding, Lookahead (Fu et al., 2024), Medusa (Cai et al., 2024), and Hydra (Ankner et al., 2024) for comparison.
136
+
137
+ Training and Datasets. For the training stage, again following (Cai et al., 2024; Ankner et al., 2024), we use the ShareGPT ${}^{1}$ dataset to fine-tune our proposed module while keeping the target model frozen. Training is conducted using the HuggingFace
138
+
139
+ Table 1: The speed-up comparison on MT-Bench and Spec-Bench between different methods under greedy sampling setting (Temperature = 0). We regard the speed-up of vanilla autoregressive decoding as $1.00 \times$ .
140
+
141
+ <table><tr><td rowspan="2">Model Size</td><td rowspan="2">Method</td><td rowspan="2">MT-Bench</td><td colspan="5">Spec-Bench</td><td rowspan="2">Avg</td></tr><tr><td>Translation</td><td>Summarization</td><td>QA</td><td>Math</td><td>RAG</td></tr><tr><td rowspan="6">7B</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Spec-decoding</td><td>1.62×</td><td>1.11×</td><td>1.66×</td><td>1.46×</td><td>1.45×</td><td>1.61×</td><td>1.45×</td></tr><tr><td>Lookahead</td><td>1.44×</td><td>1.15×</td><td>1.26×</td><td>1.25×</td><td>1.56×</td><td>1.13×</td><td>1.27×</td></tr><tr><td>Medusa</td><td>1.87×</td><td>1.42×</td><td>1.42×</td><td>1.50×</td><td>1.74×</td><td>1.39×</td><td>1.50×</td></tr><tr><td>Hydra++</td><td>2.37×</td><td>1.92×</td><td>1.80×</td><td>1.94×</td><td>2.43×</td><td>2.04×</td><td>2.03×</td></tr><tr><td>Amphista (ours)</td><td>2.44×</td><td>1.96×</td><td>2.11×</td><td>1.94×</td><td>2.45×</td><td>2.20×</td><td>2.13×</td></tr><tr><td rowspan="6">13B</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Spec-decoding</td><td>1.66×</td><td>1.17×</td><td>1.75×</td><td>1.44×</td><td>1.59×</td><td>1.73×</td><td>1.53×</td></tr><tr><td>Lookahead</td><td>1.34×</td><td>1.08×</td><td>1.23×</td><td>1.15×</td><td>1.51×</td><td>1.15×</td><td>1.22×</td></tr><tr><td>Medusa</td><td>1.85×</td><td>1.55×</td><td>1.55×</td><td>1.53×</td><td>1.88×</td><td>1.51×</td><td>1.60×</td></tr><tr><td>Hydra++</td><td>2.34×</td><td>1.75×</td><td>1.85×</td><td>1.85×</td><td>2.31×</td><td>1.86×</td><td>1.92×</td></tr><tr><td>Amphista (ours)</td><td>2.49×</td><td>1.88×</td><td>2.14×</td><td>1.88×</td><td>2.41×</td><td>2.04×</td><td>2.07×</td></tr><tr><td rowspan="6">33B</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Spec-decoding</td><td>1.73×</td><td>1.28×</td><td>1.76×</td><td>1.54×</td><td>1.71×</td><td>1.69×</td><td>1.60×</td></tr><tr><td>Lookahead</td><td>1.32×</td><td>1.09×</td><td>1.21×</td><td>1.16×</td><td>1.55×</td><td>1.16×</td><td>1.24×</td></tr><tr><td>Medusa</td><td>1.97×</td><td>1.72×</td><td>1.62×</td><td>1.66×</td><td>2.06×</td><td>1.61×</td><td>1.73×</td></tr><tr><td>Hydra++</td><td>2.54×</td><td>1.93×</td><td>2.10×</td><td>2.04×</td><td>2.63×</td><td>2.17×</td><td>2.17×</td></tr><tr><td>Amphista (ours)</td><td>2.75×</td><td>2.11×</td><td>2.49×</td><td>2.12×</td><td>2.83×</td><td>2.44×</td><td>2.40×</td></tr></table>
142
+
143
+ Trainer with the AdamW optimizer $(\beta_{1} = 0.9, \beta_{2} = 0.999)$ and a cosine learning rate schedule with a warmup strategy; the initial learning rate is set to 1e-3 and we train for 4 epochs. At the evaluation stage, we use MT-Bench (Zheng et al., 2024) and Spec-Bench (Xia et al., 2024) as our benchmarks. MT-Bench is an open-source multi-turn conversation benchmark. Spec-Bench is a well-acknowledged and comprehensive benchmark designed for assessing speculative decoding methods across diverse application scenarios.
144
+
145
+ Metrics. Following previous speculative decoding work, we choose tokens/s and tokens/step as our main metrics. Tokens/step measures the average token length accepted per forward pass of the target LLM. Tokens/s represents the overall throughput of the acceleration algorithm, which is influenced by both the prediction accuracy and the drafting latency of the draft model.
146
+
147
+ # 4.2 Evaluation of Amphista
148
+
149
+ Amphista builds on Medusa to support parallel decoding, distinctly diverging from autoregressive drafting methods. Thus, the representative parallel-drafting work (i.e., Lookahead) and the state-of-the-art Medusa-based work (i.e., Hydra) are chosen as competitive baselines for comparison. Specifically, Hydra's best-performing model (i.e., Hydra++) is used for fair evaluation, and vicuna-68m (Yang et al., 2024) is used as the draft model for the vanilla speculative decoding method. We conduct all experiments on A100 40G GPUs, and all experimental settings are kept the same for fair comparison.
152
+
153
+ Table 1 and Table 2 present the speed-up comparison on MT-Bench and Spec-Bench under greedy and random sampling settings (see A.2 for more experimental results). Overall, Amphista demonstrates significant performance superiority over Medusa and surpasses Hydra's best results by a considerable margin across a variety of generation tasks, while also greatly exceeding the speed-up achieved by vanilla speculative decoding. In detail, Amphista achieves a $2.44 \times -2.75 \times$ speed-up on MT-Bench and a $2.13 \times -2.40 \times$ speed-up on Spec-Bench under the greedy decoding setting. Similarly, under the random sampling setting, Amphista achieves a $2.37 \times -2.85 \times$ speed-up and a $1.99 \times -2.43 \times$ speed-up on MT-Bench and Spec-Bench with different target model sizes. These robust results demonstrate that enhanced non-autoregressive drafting can surpass autoregressive drafting in terms of speed-up, highlighting the efficiency of our Amphista architecture. During the drafting stage, all computations in non-autoregressive modeling (i.e., Amphista) can be processed in parallel, better leveraging the parallel computing capabilities of modern GPU accelerators. This leads to a more optimal trade-off between drafting acceptance rate and drafting latency.
154
+
155
+ Table 2: The speed-up comparison on MT-Bench and Spec-bench between different methods under random sampling setting (Temperature = 0.7). We regard the speed-up of vanilla autoregressive decoding as $1.00 \times$ .
156
+
157
+ <table><tr><td rowspan="2">Model Size</td><td rowspan="2">Method</td><td rowspan="2">MT-Bench</td><td colspan="5">Spec-Bench</td><td rowspan="2">Avg</td></tr><tr><td>Translation</td><td>Summarization</td><td>QA</td><td>Math</td><td>RAG</td></tr><tr><td rowspan="6">7B</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Spec-decoding</td><td>1.39×</td><td>1.02×</td><td>1.41×</td><td>1.24×</td><td>1.32×</td><td>1.43×</td><td>1.28×</td></tr><tr><td>Lookahead</td><td>1.28×</td><td>1.05×</td><td>1.21×</td><td>1.12×</td><td>1.25×</td><td>1.14×</td><td>1.16×</td></tr><tr><td>Medusa</td><td>1.86×</td><td>1.51×</td><td>1.47×</td><td>1.57×</td><td>1.89×</td><td>1.43×</td><td>1.57×</td></tr><tr><td>Hydra++</td><td>2.35×</td><td>1.81×</td><td>1.81×</td><td>1.97×</td><td>2.41×</td><td>1.74×</td><td>1.95×</td></tr><tr><td>Amphista (ours)</td><td>2.37×</td><td>1.81×</td><td>1.92×</td><td>1.96×</td><td>2.43×</td><td>1.79×</td><td>1.99×</td></tr><tr><td rowspan="6">13B</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Spec-decoding</td><td>1.52×</td><td>1.08×</td><td>1.57×</td><td>1.33×</td><td>1.42×</td><td>1.46×</td><td>1.37×</td></tr><tr><td>Lookahead</td><td>1.30×</td><td>1.07×</td><td>1.19×</td><td>1.15×</td><td>1.38×</td><td>1.14×</td><td>1.19×</td></tr><tr><td>Medusa</td><td>2.01×</td><td>1.65×</td><td>1.62×</td><td>1.71×</td><td>2.01×</td><td>1.57×</td><td>1.71×</td></tr><tr><td>Hydra++</td><td>2.57×</td><td>1.90×</td><td>1.99×</td><td>2.12×</td><td>2.56×</td><td>2.04×</td><td>2.12×</td></tr><tr><td>Amphista (ours)</td><td>2.65×</td><td>1.93×</td><td>2.16×</td><td>2.17×</td><td>2.64×</td><td>2.15×</td><td>2.22×</td></tr><tr><td rowspan="6">33B</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Spec-decoding</td><td>1.58×</td><td>1.21×</td><td>1.62×</td><td>1.48×</td><td>1.59×</td><td>1.54×</td><td>1.48×</td></tr><tr><td>Lookahead</td><td>1.29×</td><td>1.04×</td><td>1.18×</td><td>1.15×</td><td>1.52×</td><td>1.14×</td><td>1.21×</td></tr><tr><td>Medusa</td><td>2.06×</td><td>1.71×</td><td>1.79×</td><td>1.76×</td><td>2.10×</td><td>1.79×</td><td>1.83×</td></tr><tr><td>Hydra++</td><td>2.74×</td><td>2.01×</td><td>2.24×</td><td>2.24×</td><td>2.82×</td><td>2.26×</td><td>2.31×</td></tr><tr><td>Amphista (ours)</td><td>2.85×</td><td>2.05×</td><td>2.51×</td><td>2.29×</td><td>2.90×</td><td>2.39×</td><td>2.43×</td></tr></table>
158
+
159
+ Moreover, Amphista exhibits a discernible upward trend in speed-up when employed on larger target models. This can be attributed to Amphista's cost-efficient non-autoregressive modeling and effective transformation of semantic information from the target model. Amphista allows for appropriate increases in accepted token length without introducing excessive additional inference costs. For more exploration on the performance potential of Amphista, please refer to A.2.4. For more exploration on the parameter complexity optimization, please refer to A.2.5.
160
+
161
+ Last but not least, we further provide the actual throughput of different methods on MT-Bench with a batch size of 1. As depicted in Figure 3, Amphista achieves an actual throughput of approximately 120 tokens/s with a 7B target model and about 80 tokens/s with a 13B target model under both temperature settings. This performance surpasses that of Medusa and Hydra, underscoring Amphista's advantages in practical deployment.
162
+
163
+ # 4.3 Generation Quality of Amphista
164
+
165
+ We perform evaluation on XSUM (Narayan et al., 2018) and CNN/DM (See et al., 2017) to validate the generation quality of Amphista (more results can be found in Appendix A.2.1). From the ROUGE-1/2/L scores (Lin, 2004) in Table 3, we find that Amphista preserves the output quality while achieving a $2.10 \times -2.31 \times$ speed-up compared with vanilla auto-regressive decoding.
166
+
167
+ ![](images/57ca0ae4209b909a4448918e43740651c1ec386133dfdcfae83d880d875c0363.jpg)
168
+ Figure 3: Throughput (tokens/s) on MT-Bench with different target model sizes and temperatures.
169
+
170
+ ![](images/73d6512b916a2e69bdeda7e366b8f771f88faaa87ae71b10cc97101273b5c8ea.jpg)
171
+
172
+ Table 3: Results on CNN/DM and XSUM with different temperatures, AR means Auto-Regressive decoding.
173
+
174
+ <table><tr><td>Benchmark</td><td>Temp</td><td>Method</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td><td>Speed-up</td></tr><tr><td rowspan="4">CNN/DM</td><td rowspan="2">0.0</td><td>AR</td><td>18.74</td><td>8.44</td><td>12.59</td><td>1.00×</td></tr><tr><td>Amphista</td><td>18.70</td><td>8.44</td><td>12.59</td><td>2.15×</td></tr><tr><td rowspan="2">0.7</td><td>AR</td><td>17.92</td><td>7.65</td><td>11.93</td><td>1.00×</td></tr><tr><td>Amphista</td><td>17.91</td><td>7.65</td><td>11.92</td><td>2.31×</td></tr><tr><td rowspan="4">XSUM</td><td rowspan="2">0.0</td><td>AR</td><td>17.32</td><td>5.05</td><td>12.16</td><td>1.00×</td></tr><tr><td>Amphista</td><td>17.30</td><td>5.05</td><td>12.15</td><td>2.25×</td></tr><tr><td rowspan="2">0.7</td><td>AR</td><td>15.99</td><td>4.44</td><td>11.42</td><td>1.00×</td></tr><tr><td>Amphista</td><td>15.96</td><td>4.43</td><td>11.40</td><td>2.10×</td></tr></table>
175
+
176
+ compared with vanilla auto-regressive decoding.
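+ As an illustration of how such ROUGE numbers can be reproduced, below is a minimal sketch (our own, not the paper's evaluation script) using the `rouge_score` package; `references` and `candidates` are assumed to be parallel lists of strings.
+
+ ```python
+ from rouge_score import rouge_scorer
+
+ # ROUGE-1/2/L with stemming, reported as F-measures averaged over the dataset.
+ scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
+
+ def average_rouge(references, candidates):
+     totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
+     for reference, candidate in zip(references, candidates):
+         scores = scorer.score(reference, candidate)
+         for key in totals:
+             totals[key] += scores[key].fmeasure
+     return {key: value / len(references) for key, value in totals.items()}
+ ```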
177
+
178
+ # 4.4 Multi-Batching Exploration
179
+
180
+ In this section, we evaluate the speed-up of Amphista in multi-batch scenarios (batch size $>1$ ). For varying sentence lengths within a batch, we use padding to align them and always track the position of the last valid token for each sentence. The experimental results, presented in Table 6, are based on randomly sampled prompts from MT-Bench to generate various batch sizes. Generally, as batch
181
+
182
+ Table 4: Ablation experiments of different model variants on MT-Bench and Spec-Bench, with the target model being Vicuna 7B and the evaluation metric being speed-up. Medusa can be considered as Amphista w/o any added modules, and Hydra can be seen as Medusa w/ sequential dependency heads.
183
+
184
+ <table><tr><td rowspan="2">Method Variants</td><td rowspan="2">MT-Bench</td><td colspan="5">Spec-Bench</td><td rowspan="2">Avg</td></tr><tr><td>Translation</td><td>Summary</td><td>QA</td><td>Math</td><td>RAG</td></tr><tr><td>Medusa</td><td>1.86×</td><td>1.51×</td><td>1.47×</td><td>1.57×</td><td>1.89×</td><td>1.43×</td><td>1.57×</td></tr><tr><td>Hydra++</td><td>2.37×</td><td>1.92×</td><td>1.80×</td><td>1.94×</td><td>2.43×</td><td>2.04×</td><td>2.03×</td></tr><tr><td>Amphista w/o Auto-embedding</td><td>2.30×</td><td>1.82×</td><td>2.00×</td><td>1.81×</td><td>2.25×</td><td>1.99×</td><td>1.97×</td></tr><tr><td>Amphista w/o Position-Encoding</td><td>2.42×</td><td>1.96×</td><td>2.08×</td><td>1.92×</td><td>2.42×</td><td>2.18×</td><td>2.11×</td></tr><tr><td>Amphista w/o Staged-Adaptation</td><td>2.14×</td><td>1.85×</td><td>1.75×</td><td>1.78×</td><td>2.10×</td><td>1.91×</td><td>1.88×</td></tr><tr><td>Amphista w/ One-Adaptation-Layer</td><td>2.31×</td><td>1.90×</td><td>1.99×</td><td>1.83×</td><td>2.35×</td><td>2.14×</td><td>2.04×</td></tr><tr><td>Amphista w/o Sampled-Token</td><td>2.25×</td><td>1.88×</td><td>1.80×</td><td>1.81×</td><td>2.26×</td><td>2.01×</td><td>1.95×</td></tr><tr><td>Amphista (ours)</td><td>2.44×</td><td>1.96×</td><td>2.11×</td><td>1.94×</td><td>2.45×</td><td>2.20×</td><td>2.13×</td></tr></table>
185
+
186
+ Table 5: Ablation experiments of different model variants on MT-Bench and Spec-Bench, with the target model being Vicuna 7B and evaluation metric being average accepted length. Medusa can be considered as Amphista w/o any added modules, and Hydra can be seen as Medusa w/ sequential dependency heads.
187
+
188
+ <table><tr><td rowspan="2">Method Variants</td><td rowspan="2">MT-Bench</td><td colspan="5">Spec-Bench</td><td rowspan="2">Avg</td></tr><tr><td>Translation</td><td>Summary</td><td>QA</td><td>Math</td><td>RAG</td></tr><tr><td>Medusa</td><td>2.52</td><td>2.12</td><td>2.01</td><td>2.05</td><td>2.48</td><td>2.09</td><td>2.15</td></tr><tr><td>Hydra++</td><td>3.58</td><td>2.80</td><td>2.70</td><td>2.91</td><td>3.61</td><td>2.90</td><td>2.98</td></tr><tr><td>Amphista w/o Auto-embedding</td><td>3.16</td><td>2.41</td><td>2.66</td><td>2.40</td><td>3.11</td><td>2.49</td><td>2.60</td></tr><tr><td>Amphista w/o Position-Encoding</td><td>3.47</td><td>2.61</td><td>2.90</td><td>2.78</td><td>3.47</td><td>2.91</td><td>2.93</td></tr><tr><td>Amphista w/o Staged-Adaptation</td><td>2.91</td><td>2.42</td><td>2.24</td><td>2.30</td><td>2.85</td><td>2.38</td><td>2.43</td></tr><tr><td>Amphista w/ One-Adaptation-Layer</td><td>3.36</td><td>2.49</td><td>2.68</td><td>2.71</td><td>3.37</td><td>2.75</td><td>2.80</td></tr><tr><td>Amphista w/o Sampled-Token</td><td>3.11</td><td>2.43</td><td>2.48</td><td>2.45</td><td>3.15</td><td>2.55</td><td>2.61</td></tr><tr><td>Amphista (ours)</td><td>3.50</td><td>2.62</td><td>3.01</td><td>2.80</td><td>3.50</td><td>2.96</td><td>2.98</td></tr></table>
189
+
190
+ size increases, the GPU's idle computational resources gradually decrease, resulting in a reduced speed-up. Additionally, despite the additional computational overhead from different multi-batching strategies, Amphista consistently achieves around $2 \times$ speed-up using the simplest padding method, demonstrating its acceleration advantage in multi-batch settings.
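+ A minimal sketch of the padding strategy described above (our own illustration; names such as `pad_id` are not taken from the released code): each sequence is right-padded to the longest prompt in the batch, and the index of its last valid token is recorded so that drafting can continue from the correct position.
+
+ ```python
+ import torch
+
+ def pad_batch(token_id_lists, pad_id=0):
+     max_len = max(len(ids) for ids in token_id_lists)
+     input_ids = torch.full((len(token_id_lists), max_len), pad_id, dtype=torch.long)
+     attention_mask = torch.zeros_like(input_ids)
+     for row, ids in enumerate(token_id_lists):
+         input_ids[row, : len(ids)] = torch.tensor(ids, dtype=torch.long)
+         attention_mask[row, : len(ids)] = 1
+     # Index of the last non-padding token per sequence.
+     last_valid = attention_mask.sum(dim=1) - 1
+     return input_ids, attention_mask, last_valid
+ ```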
191
+
192
+ Table 6: Speed-up evaluation of Amphista on MT-Bench with batch size $> 1$ .
193
+
194
+ <table><tr><td>Batch Size</td><td>1</td><td>2</td><td>4</td><td>6</td><td>8</td></tr><tr><td>Speed-up</td><td>2.32×</td><td>2.30×</td><td>2.23×</td><td>2.11×</td><td>2.06×</td></tr></table>
195
+
196
+ # 4.5 Ablation Study
197
+
198
+ Diverging from other approaches based on speculative sampling and Medusa, Amphista's main insight lies in adapting the target model's features through Staged Adaptation Layers and enhancing their integration via the non-autoregressive Auto-embedding Block, which together strengthen the semantic information derived from the target model. In this section, we conduct comprehensive ablation experiments
199
+
200
+ based on the Vicuna 7B model to validate the effectiveness of each proposed module in Amphista. Specifically, we construct five model variants as follows: (1) Amphista w/o Auto-embedding, which removes the Auto-embedding Block. (2) Amphista w/o Position-Encoding, which removes the additional position embedding matrix in the Auto-embedding Block. (3) Amphista w/o Staged-Adaptation, which removes the staged adaptation layers. (4) Amphista w/ One-Adaptation-Layer, which uses only one adaptation layer for all the drafting heads. (5) Amphista w/o Sampled-Token, which removes the sampled token during the adaptation process. The experimental results are presented in Tables 4 and 5. From these comparative results, we make the following observations:
201
+
202
+ - Amphista w/o Auto-embedding exhibits an approximate $5\% - 8\%$ decrease in speed-up performance and about a $10\% - 12\%$ reduction in average accepted length. This highlights the effectiveness of the Auto-embedding Block in mitigating inaccuracies arising from the independent speculation of Medusa heads, and demonstrates the
203
+
204
+ efficiency of non-autoregressive drafting computations. Additionally, Amphista w/o Position-Encoding exhibits a slight performance decline, with an approximate $2\%$ decrease in inference speed-up, suggesting that position encoding provides additional benefits.
205
+
206
+ - Amphista w/o Staged-Adaptation leads to a more significant decline in speed-up (14%) and average accepted length (16%). This emphasizes the importance of bridging the feature gap between the target model and drafting heads, and further underscores the critical role of the staged adaptation layer in enhancing the auto-embedding block. Additionally, it is noteworthy that Amphista w/ One-Adaptation-Layer utilizes only a single adaptation layer for all drafting positions. In contrast to staged adaptation, this approach poses greater challenges to the adaptation process, resulting in some performance degradation, thereby validating the rationale behind our staged adaptation design.
207
+
208
+ - Amphista w/o Sampled-Token also causes an approximate $8\%$ performance decline. Unlike previous works (e.g., Hydra), we do not use the sampled token directly for the next step of prediction. Instead, we adapt it along with the target model's hidden states. This not only indicates that the sampled token, in addition to target model hidden states, contains important semantic information, but also demonstrates the effectiveness of our staged adaptation approach.
209
+
210
+ - Thanks to its autoregressive formulation and the substantial number of parameters in its MLP layers, Hydra performs strongly in average accepted length. However, the computational overhead of auto-regressive drafting is substantial, which erodes much of this gain when translated into the final speed-up. In contrast, Amphista achieves an average accepted length comparable to Hydra's, and owing to the parallelism and efficiency of its non-autoregressive computations, it ultimately attains a more favorable overall trade-off.
211
+
212
+ # 5 Related Work
213
+
214
+ Increasing techniques have been proposed to enhance the inference speed of large language models (LLMs), covering aspects of system hardware, model architecture, and decoding algorithms. A significant branch of these techniques is Model Compression, which includes methods such as
215
+
216
+ model quantization (Yao et al., 2023; Dettmers et al., 2024; Liu et al., 2023a; Ma et al., 2024), pruning (Belcak and Wattenhofer, 2023; Liu et al., 2023b; Zhong et al., 2024), and distillation (Zhou et al., 2024; Sun et al., 2024; Touvron et al., 2021). Additionally, techniques like kv-cache (Ge et al., 2023; Kwon et al., 2023), flash-attention (Dao et al., 2022), and early exiting (Bae et al., 2023; Elhoushi et al., 2024; Liu et al., 2024a) have also significantly reduced inference overhead. Another important line is Speculative Decoding, which our work is based on. It can be broadly categorized into two types. The first treats the target model and draft model separately and independently, involving the use of a small language model (Kim et al., 2024; Leviathan et al., 2023; Liu et al., 2024b; Monea et al., 2023; Chen et al., 2024; Du et al., 2024), external database, or n-grams pool (He et al., 2024; Fu et al., 2024; Kou et al., 2024; Ou et al., 2024) to generate candidate token sequences or token trees (Miao et al., 2024), which the LLM then verifies. The second type views the draft model as a dependent approximation of the target model, deriving the draft model directly from the target model or building additional modules on top of the target model for drafting (Stern et al., 2018; Zhang et al., 2023, 2024; Li et al., 2024; Cai et al., 2024; Kim et al.; Xiao et al., 2024; Ankner et al., 2024). Unlike these approaches, we propose a novel method using an auto-embedding block combined with staged adaptation layers to further enhance acceleration.
217
+
218
+ # 6 Conclusion
219
+
220
+ We propose Amphista, an efficient non-autoregressive speculative decoding framework that accelerates inference through parallel decoding and improves alignment between target and draft models via feature adaptation. Amphista integrates two core components: the Auto-embedding Block, leveraging bi-directional self-attention for collaborative speculation among drafting heads, and Staged Adaptation Layers, transforming target model semantics for multi-step predictions. Additionally, Amphista exploits sampled tokens to further optimize alignment. Extensive experiments confirm the superiority of Amphista, showcasing the promise of non-autoregressive methods in speculative decoding.
221
+
222
+ # Limitations
223
+
224
+ While we have settled on bi-directional self-attention as an efficient structure for non-autoregressive modeling, we have not yet fully explored the optimal design of the Auto-embedding Block. Specifically, this includes experimenting with different intermediate sizes (i.e., the hidden dimensions used in self-attention computations) and increasing the number of self-attention layers within the Auto-embedding Block to enhance its modeling depth (see A.2.4). Both of these structural optimizations could potentially improve Amphista's acceleration performance within the current framework. Additionally, this work primarily focuses on scenarios where the batch size equals one, with limited optimization for larger batch sizes. We leave these directions as future work and hope that researchers interested in non-autoregressive inference acceleration will build upon this foundation.
225
+
226
+ # Acknowledgement
227
+
228
+ We acknowledge the helpful discussions from Kolorin Yan, Fuwei Yang, Ethan Yang, Xiandong Zhao, Mahdi Kamani, and Vikram Appia during the writing process of this work.
229
+
230
+ # References
231
+
232
+ Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, and William Brandon. 2024. Hydra: Sequentially-dependent draft heads for medusa decoding. Preprint, arXiv:2402.05109.
233
+ Sangmin Bae, Jongwoo Ko, Hwanjun Song, and Se-Young Yun. 2023. Fast and robust early-exiting framework for autoregressive language models with synchronized parallel decoding. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5910-5924, Singapore.
234
+ Peter Belcak and Roger Wattenhofer. 2023. Exponentially faster language modelling. arXiv preprint arXiv:2311.10770.
235
+ Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple llm inference acceleration framework with multiple decoding heads. Preprint, arXiv:2401.10774.
236
+ Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. Preprint, arXiv:2302.01318.
237
+
238
+ Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidi Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code.
239
+ Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, and Beidi Chen. 2024. Sequoia: Scalable, robust, and hardware-aware speculative decoding. Preprint, arXiv:2402.12374.
240
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
241
+ Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness. Preprint, arXiv:2205.14135.
242
+ Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. Qlora: Efficient finetuning of quantized lms. Advances in Neural Information Processing Systems, 36.
243
+ Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, and Yang You. 2024. Glide with a cape: A low-hassle method to accelerate speculative decoding. Preprint, arXiv:2402.02082.
244
+ Mostafa Elhoushi, Akshit Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, and Carole-Jean Wu. 2024. Layerskip: Enabling early exit inference and self-speculative decoding. Preprint, arXiv:2404.16710.
245
+ Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. 2024. Break the sequential dependency of llm inference using lookahead decoding. Preprint, arXiv:2402.02057.
246
+ Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, and Jianfeng Gao. 2023. Model tells you
247
+
248
+ what to discard: Adaptive kv cache compression for llms. arXiv preprint arXiv:2310.01801.
249
+ Zhenyu He, Zexuan Zhong, Tianle Cai, Jason D. Lee, and Di He. 2024. Rest: Retrieval-based speculative decoding. Preprint, arXiv:2311.08252.
250
+ Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations.
251
+ Sehoon Kim, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W Mahoney, Amir Gholami, and Kurt Keutzer. 2024. Speculative decoding with big little decoder. Advances in Neural Information Processing Systems, 36.
252
+ Taehyeon Kim, Ananda Theertha Suresh, Kishore A Papineni, Michael Riley, Sanjiv Kumar, and Adrian Benton. Exploring and improving drafts in blockwise parallel decoding. In Workshop on Efficient Systems for Foundation Models II@ ICML2024.
253
+ Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, and Hao Zhang. 2024. Cllms: Consistency large language models. Preprint, arXiv:2403.00835.
254
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611-626.
255
+ Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274-19286. PMLR.
256
+ Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024. Eagle: Speculative sampling requires rethinking feature uncertainty. Preprint, arXiv:2401.15077.
257
+ Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
258
+ Fangcheng Liu, Yehui Tang, Zhenhua Liu, Yunsheng Ni, Kai Han, and Yunhe Wang. 2024a. Kangaroo: Lossless self-speculative decoding via double early exiting. Preprint, arXiv:2404.18911.
259
+ Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, and Hao Zhang. 2024b. Online speculative decoding. Preprint, arXiv:2310.07177.
260
+ Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. 2023a. Llm-qat: Data-free quantization aware training for large language models. Preprint, arXiv:2305.17888.
261
+
262
+ Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al. 2023b. Deja vu: Contextual sparsity for efficient llms at inference time. In International Conference on Machine Learning, pages 22137-22176. PMLR.
263
+ Shuming Ma, Hongyu Wang, Lingxiao Ma, Lei Wang, Wenhui Wang, Shaohan Huang, Li Dong, Ruiping Wang, Jilong Xue, and Furu Wei. 2024. The era of 1-bit llms: All large language models are in 1.58 bits. Preprint, arXiv:2402.17764.
264
+ Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, et al. 2024. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pages 932-949.
265
+ Giovanni Monea, Armand Joulin, and Edouard Grave. 2023. Pass: Parallel speculative sampling. arXiv preprint arXiv:2311.13581.
266
+ Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. ArXiv, abs/1808.08745.
267
+ OpenAI. 2022. ChatGPT: Optimizing language models for dialogue.
268
+ Jie Ou, Yueming Chen, and Wenhong Tian. 2024. Lossless acceleration of large language model via adaptive n-gram parallel decoding. arXiv preprint arXiv:2404.08698.
269
+ Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.
270
+ Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. 2018. Blockwise parallel decoding for deep autoregressive models. Advances in Neural Information Processing Systems, 31.
271
+ Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. 2024. Spectr: Fast speculative decoding via optimal transport. Advances in Neural Information Processing Systems, 36.
272
+ Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. 2021. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, pages 10347-10357. PMLR.
273
+
274
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
275
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
276
+ Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. 2024. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. Preprint, arXiv:2401.07851.
277
+ Bin Xiao, Chunan Shi, Xiaonan Nie, Fan Yang, Xiangwei Deng, Lei Su, Weipeng Chen, and Bin Cui. 2024. Clover: Regressive lightweight speculative decoding with sequential knowledge. arXiv preprint arXiv:2405.00263.
278
+ Sen Yang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2024. Multi-candidate speculative decoding. Preprint, arXiv:2401.06706.
279
+ Zhewei Yao, Cheng Li, Xiaoxia Wu, Stephen Youn, and Yuxiong He. 2023. A comprehensive study on post-training quantization for large language models. arXiv preprint arXiv:2303.08302.
280
+ Aonan Zhang, Chong Wang, Yi Wang, Xuanyu Zhang, and Yunfei Cheng. 2024. Recurrent drafter for fast speculative decoding in large language models. Preprint, arXiv:2403.09919.
281
+ Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, and Sharad Mehrotra. 2023. Draft & verify: Lossless large language model acceleration via self-speculative decoding. arXiv preprint arXiv:2309.08168.
282
+ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36.
283
+ Shuzhang Zhong, Zebin Yang, Meng Li, Ruihao Gong, Runsheng Wang, and Ru Huang. 2024. Propd: Dynamic token tree pruning and generation for llm parallel decoding. arXiv preprint arXiv:2402.13485.
284
+ Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-Francois Kagy, and Rishabh Agarwal. 2024. Distillspec: Improving speculative decoding via knowledge distillation. Preprint, arXiv:2310.08461.
285
+
286
+ # A Appendix
287
+
288
+ # A.1 Draft Tree
289
+
290
+ For a fully fair comparison, we adopt the same draft tree structure as Medusa and Hydra. As shown in Figure 4, this tree is a sparse structure with a depth of 4, corresponding to the four drafting heads, and comprises a total of 64 nodes, including the root node (the token sampled in the final step of the target model). Each layer's nodes represent the tokens obtained by top-k sampling from the corresponding drafting head. The entire tree is constructed on an auxiliary dataset by maximizing the acceptance probability of the whole tree (Cai et al., 2024). Moreover, a specially designed tree mask is used to correctly compute attention scores while simultaneously handling multiple paths, as described in Figure 5.
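+ The tree mask can be thought of as an ancestor mask over the draft tree. The following is a minimal sketch (our own illustration; the actual 64-node topology is chosen offline as described above) of building such a mask from parent pointers.
+
+ ```python
+ import torch
+
+ def build_tree_mask(parents):
+     """parents[i] is the parent index of node i, with -1 for the root."""
+     num_nodes = len(parents)
+     mask = torch.zeros(num_nodes, num_nodes, dtype=torch.bool)
+     for i in range(num_nodes):
+         node = i
+         while node != -1:  # each node attends to itself and all of its ancestors
+             mask[i, node] = True
+             node = parents[node]
+     return mask  # True = attend, False = masked out
+
+ # e.g. parents = [-1, 0, 0, 1, 1, 2] encodes a root with two children,
+ # where node 1 has two children and node 2 has one.
+ ```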
291
+
292
+ However, in some cases, due to a lack of spare computational capacity (such as in high-throughput inference serving scenarios) or parallel accelerators, an excessive number of tree nodes may lead to significant computation overhead, thereby affecting the acceleration efficiency of the algorithm. Consequently, we configure varying numbers of draft tree nodes without changing the tree depth for a more comprehensive comparison, and the experimental results are shown in Table 7. From these results we observe that as the number of tree nodes decreases, the width of the tree reduces, leading to a decrease in speed-up for all compared methods. However, the decline is slightly less pronounced for Amphista, owing to its higher head accuracy. Furthermore, across various tree node configurations, Amphista consistently achieves the best performance, demonstrating the advantages of our algorithm in practical deployment and low-resource scenarios.
293
+
294
+ Table 7: Speed-up comparison on MT-Bench with varying number of draft tree nodes.
295
+
296
+ <table><tr><td>Method</td><td>Node = 22</td><td>Node = 35</td><td>Node = 45</td><td>Node = 64</td></tr><tr><td>Medusa</td><td>1.71×</td><td>1.80×</td><td>1.87×</td><td>1.87×</td></tr><tr><td>Hydra++</td><td>2.17×</td><td>2.26×</td><td>2.28×</td><td>2.37×</td></tr><tr><td>Amphista</td><td>2.29×</td><td>2.37×</td><td>2.42×</td><td>2.44×</td></tr></table>
297
+
298
+ # A.2 Additional Experiments Results
299
+
300
+ # A.2.1 Evaluation on XSUM and CNN/DM
301
+
302
+ We use XSUM (Narayan et al., 2018) and CNN/DM (See et al., 2017) to evaluate the generation quality of Amphista; the target model
303
+
304
+ ![](images/57e2b9ce3e445257ea2e840a4db6e7a04424e9012a9d50a4407f4c18cf8c82ba.jpg)
305
+ Figure 4: Draft tree used in Medusa, Hydra and our Amphista.
306
+
307
+ ![](images/0f1f74e6a5f8e2cc725bd0ead89c0f2e242a1dd646ad733f9ad1af6f2808e87c.jpg)
308
+ Figure 5: An Illustration of Tree Attention. Assuming Medusa has only 2 heads, where head-1 generates the top-2 tokens and head-2 generates the top-3 tokens, resulting in 6 candidate sequences (e.g., ABD). Additionally, a special tree mask is designed to ensure causal relationships among the top-k nodes of each head.
309
+
310
+ is Vicuna 7B. Specifically, we perform zero-shot evaluation, and the input prompt is constructed as 'Article: ' + original text + ' Summary:'. Additionally, for input prompts exceeding a length of 2048, we apply truncation to meet the target model's input requirements.
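+ A minimal sketch of this prompt construction and truncation (an assumption about the setup rather than the released code; `tokenizer` is any tokenizer exposing `encode`/`decode`, e.g. the target model's Hugging Face tokenizer):
+
+ ```python
+ def build_prompt(article, tokenizer, max_prompt_tokens=2048):
+     prefix, suffix = "Article: ", "\nSummary:"
+     budget = max_prompt_tokens - len(tokenizer.encode(prefix + suffix))
+     article_ids = tokenizer.encode(article)
+     if len(article_ids) > budget:
+         # Truncate the article so that the trailing "Summary:" cue is preserved.
+         article = tokenizer.decode(article_ids[:budget])
+     return prefix + article + suffix
+ ```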
311
+
312
+ Table 8: The speed-up comparison on HumanEval and GSM8K between different methods under the greedy setting. The target models are Vicuna 7B and 13B, and we regard the speed-up of vanilla auto-regressive decoding as $1.00 \times$ .
313
+
314
+ <table><tr><td>Model Size</td><td>Benchmark</td><td>Vanilla AR</td><td>Medusa</td><td>Hydra++</td><td>Amphista</td></tr><tr><td rowspan="2">7B</td><td>HumanEval</td><td>1.00×</td><td>2.40×</td><td>2.76×</td><td>3.02×</td></tr><tr><td>GSM8K</td><td>1.00×</td><td>1.87×</td><td>2.14×</td><td>2.32×</td></tr><tr><td rowspan="2">13B</td><td>HumanEval</td><td>1.00×</td><td>2.11×</td><td>2.75×</td><td>3.00×</td></tr><tr><td>GSM8K</td><td>1.00×</td><td>1.98×</td><td>2.39×</td><td>2.68×</td></tr></table>
315
+
316
+ # A.2.2 Code Generation and Math Reasoning
317
+
318
+ In this section, we provide more experimental results on code generation and math reasoning. We choose the public HumanEval (Chen et al., 2021) and GSM8K (Cobbe et al., 2021) benchmarks for evaluation
319
+
320
+ Table 9: The speed-up and average accepted length comparison with the target model being Vicuna 7B. We regard the speed-up of vanilla auto-regressive decoding as $1.00 \times$ .
321
+
322
+ <table><tr><td rowspan="2">Metric</td><td rowspan="2">Method</td><td rowspan="2">MT-Bench</td><td colspan="5">Spec-Bench</td><td rowspan="2">Avg</td></tr><tr><td>Translation</td><td>Summarization</td><td>QA</td><td>Math</td><td>RAG</td></tr><tr><td rowspan="5">Speed-up</td><td>Vanilla</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.00×</td></tr><tr><td>Hydra++</td><td>2.37×</td><td>1.92×</td><td>1.80×</td><td>1.94×</td><td>2.43×</td><td>2.04×</td><td>2.03×</td></tr><tr><td>EAGLE</td><td>2.58×</td><td>1.94×</td><td>2.21×</td><td>2.02×</td><td>2.57×</td><td>2.30×</td><td>2.21×</td></tr><tr><td>Amphista</td><td>2.44×</td><td>1.96×</td><td>2.11×</td><td>1.94×</td><td>2.45×</td><td>2.20×</td><td>2.13×</td></tr><tr><td>Amphista-α</td><td>2.63×</td><td>2.09×</td><td>2.23×</td><td>2.06×</td><td>2.61×</td><td>2.34×</td><td>2.27×</td></tr><tr><td rowspan="5">Average Accepted Length</td><td>Vanilla</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1.00</td><td>1.00</td></tr><tr><td>Hydra++</td><td>3.58</td><td>2.80</td><td>2.70</td><td>2.91</td><td>3.61</td><td>2.90</td><td>2.98</td></tr><tr><td>EAGLE</td><td>3.84</td><td>2.92</td><td>3.32</td><td>3.14</td><td>3.93</td><td>3.31</td><td>3.32</td></tr><tr><td>Amphista</td><td>3.50</td><td>2.62</td><td>3.01</td><td>2.80</td><td>3.50</td><td>2.96</td><td>2.98</td></tr><tr><td>Amphista-α</td><td>3.58</td><td>2.70</td><td>3.14</td><td>2.90</td><td>3.62</td><td>3.08</td><td>3.09</td></tr></table>
323
+
324
+ The target models are Vicuna 7B and Vicuna 13B. According to the results in Table 8, we can observe that, owing to the relatively uniform templates and notation of code generation and mathematical reasoning, almost all compared methods achieve a higher speed-up. Furthermore, the Amphista algorithm consistently attains the best performance, demonstrating the superiority of our approach.
325
+
326
+ # A.2.3 Exploring The Optimal Number of Adaptation Stages
327
+
328
+ In this section, we analyze the impact of varying the number of adaptation stages on the final performance of Amphista (see Table 10). The staged adaptation approach was introduced to facilitate the progressive transformation of features, and our ablation study has demonstrated that employing two adaptation stages outperforms a single-stage design. This indicates that using multiple adaptation layers to adapt to different positions is more effective than relying on a single layer for all positions.
329
+
330
+ Although it is theoretically possible to further increase the number of adaptation stages, our experiments reveal two critical limitations. First, as the number of layers increases, the parameter count grows significantly, leading to higher model complexity. Second, the additional adaptation layers provide diminishing returns, with only minimal improvements in mean accepted length, while introducing higher inference-time overhead and reducing the overall speed-up. Given these trade-offs, we identify the two-layer approach as the optimal balance, effectively enhancing performance without unnecessary computational burden.
331
+
332
+ Table 10: Experimental results for different numbers of staged adaptation layers.
333
+
334
+ <table><tr><td>Stages</td><td>1</td><td>2</td><td>3</td><td>4</td></tr><tr><td>Accept Length</td><td>3.36</td><td>3.50</td><td>3.46</td><td>3.51</td></tr><tr><td>Speed-up</td><td>2.31×</td><td>2.44×</td><td>2.23×</td><td>2.24×</td></tr></table>
335
+
336
+ # A.2.4 Exploring The Potential of Amphista
337
+
338
+ In this section, we conduct a preliminary exploration of Amphista's scaling ability to demonstrate its potential for performance enhancement. By leveraging the efficiency of non-autoregressive modeling, we increase the number of auto-embedding blocks, which are essential modules within Amphista, while maintaining parallel inference. This approach yields remarkable results, detailed in Table 9. Specifically, we employ two layers of self-attention in the auto-embedding module and denote this variant as Amphista- $\alpha$ . This adjustment leads to an increase of approximately 0.1-0.2 tokens in average accepted length and a notable $5\% - 8\%$ improvement in overall speed-up, highlighting Amphista's potential for further performance growth. We anticipate this to be a highly promising attribute of Amphista.
339
+
340
+ Table 11: Experimental results of the LoRA-like LM head optimization. Note that we regard the speed-up of the full-rank LM head as $1.00 \times$ .
341
+
342
+ <table><tr><td>Benchmark</td><td>rank=4096 (full)</td><td>rank=64</td><td>rank=128</td><td>rank=256</td><td>rank=512</td></tr><tr><td>MT-bench</td><td>1.00×</td><td>0.98×</td><td>1.01×</td><td>1.02×</td><td>0.99×</td></tr><tr><td>Spec-bench</td><td>1.00×</td><td>1.00×</td><td>1.00×</td><td>1.01×</td><td>0.98×</td></tr></table>
343
+
344
+ # A.2.5 Parameter Complexity Optimization of Amphista
345
+
346
+ In this part, we propose LoRA-like drafting LM heads to further optimize the original learnable LM
347
+
348
+ ![](images/8f9abcf9a79dbfbc4d507d49496f5132638cfba84a5336d90d458f8a29c00a9b.jpg)
349
+ Figure 6: Case Study on Code Generation. Tokens in red are those generated by Amphista, and tokens in black are those generated by the target model itself.
350
+
351
+ ![](images/bddd97579f8ac0ad62da68d26fb85e37f41e392fe4e57973d875e2e8f5e2f594.jpg)
352
+ Figure 7: Case Study on Text Generation. Tokens in red are those generated by Amphista, and tokens in black are those generated by the target model itself.
353
+
354
+ heads of Amphista, which significantly reduces the parameter count and complexity. Specifically, we use two low-rank matrices to replace the original LM head matrix. The experimental results are shown in Table 11; we choose Vicuna 7B as the target model, so the parameter count of each LM head is 4096 * 32000. Across the tested ranks, we can reduce the number of learnable parameters by up to $45\%$ while maintaining almost the same performance, which greatly reduces the complexity of the model parameters and reflects the advantages and potential of Amphista in practical deployment.
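+ A minimal sketch of this low-rank factorization (our own illustration of the idea; the rank values mirror those tested in Table 11):
+
+ ```python
+ import torch.nn as nn
+
+ class LowRankLMHead(nn.Module):
+     def __init__(self, hidden_size=4096, vocab_size=32000, rank=256):
+         super().__init__()
+         self.down = nn.Linear(hidden_size, rank, bias=False)  # 4096 -> r
+         self.up = nn.Linear(rank, vocab_size, bias=False)     # r -> 32000
+
+     def forward(self, hidden_states):
+         return self.up(self.down(hidden_states))
+
+ # Full-rank head: 4096 * 32000 ≈ 131M parameters per drafting head.
+ # Rank-256 head: (4096 + 32000) * 256 ≈ 9.2M parameters per drafting head.
+ ```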
355
+
356
+ # A.3 Case Study
357
+
358
+ Here we show some real case studies (see Figures 6 and 7) of Amphista inference; the target model is Vicuna 7B. Note that we do not apply any special processing to the tokenizer's output, preserving the original results. Tokens highlighted in red represent those generated by Amphista during each step of decoding, while tokens in black indicate those generated by the target model. From these practical examples, we can observe that in the vast majority of cases, Amphista generates at least two tokens per decoding step. This generally results in a stable speed-up of at least $2\times$, demonstrating the efficiency of our algorithm. Additionally, Amphista's output is consistent with the target model's auto-regressive decoding output, ensuring the generation quality of Amphista.
amphistabidirectionalmultiheaddecodingforacceleratingllminference/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b03a204ab4baf6b2417dc271c6f1ce9578f669ed4af77d7b47e3f139a023b67c
3
+ size 722701
amphistabidirectionalmultiheaddecodingforacceleratingllminference/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0195c7786851a45d1f4c738151d0f473643a925ceda5ce683ea300814abb50da
3
+ size 404964
analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/19b26c58-d911-416d-bc63-d19c951c0f10_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0858f131352e529f830b6f0a11fe3de8af39013776da9afe1b1057a3534f0c28
3
+ size 144628
analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/19b26c58-d911-416d-bc63-d19c951c0f10_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:773f99753e691e92f1d69aa3732be223c4de7a433dba2a785978cce609b6c4a4
3
+ size 174594
analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/19b26c58-d911-416d-bc63-d19c951c0f10_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1dbe1d307af5d4951a41bb0feb69c2297ef41c31e86e0d02d75d1ffc40ca5622
3
+ size 2077926
analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/full.md ADDED
@@ -0,0 +1,574 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Analyzing and Evaluating Correlation Measures in NLG Meta-Evaluation
2
+
3
+ Mingqi Gao*, Xinyu Hu*, Li Lin, Xiaojun Wan
4
+
5
+ Wangxuan Institute of Computer Technology, Peking University
6
+
7
+ {gaomingqi,huxinyu,wanxiaojun}@pku.edu.cn
8
+
9
+ efsotr_l@stu.pku.edu.cn
10
+
11
+ # Abstract
12
+
13
+ The correlation between NLG automatic evaluation metrics and human evaluation is often regarded as a critical criterion for assessing the capability of an evaluation metric. However, different grouping methods and correlation coefficients result in various types of correlation measures used in meta-evaluation. In specific evaluation scenarios, prior work often directly follows conventional measure settings, but the characteristics and differences between these measures have not gotten sufficient attention. Therefore, this paper analyzes 12 common correlation measures using a large amount of real-world data from six widely-used NLG evaluation datasets and 32 evaluation metrics, revealing that different measures indeed impact the meta-evaluation results. Furthermore, we propose three perspectives that reflect the capability of meta-evaluation: discriminative power, ranking consistency, and sensitivity to score granularity. We find that the measure using global grouping and Pearson correlation coefficient exhibits the best performance in both discriminative power and ranking consistency. Besides, the measures using system-level grouping or Kendall correlation are the least sensitive to score granularity.
14
+
15
+ # 1 Introduction
16
+
17
+ Automatic evaluation metrics (e.g., BLEU (Papineni et al., 2002) and BERTScore (Zhang et al., 2020)) are widely used in natural language generation (NLG) evaluation to assess the quality of content generated by a system or model for a specific task. Furthermore, the evaluation of the performance of these evaluation metrics is referred to as meta-evaluation, which typically uses the correlation between the metrics and human evaluation as a crucial criterion, as human evaluation is generally considered the gold standard. However, the implementation of the correlation measure is not uniform
18
+
19
+ because it involves two elements that have different possible selections: the grouping method of the evaluation scores (e.g., system level (Bhandari et al., 2020) and input level (Deutsch et al., 2021)) and the correlation coefficient (e.g., Pearson's $r$ and Spearman's $\rho$ ).
20
+
21
+ Our relevant preliminary experiments show that different correlation measures can indeed lead to different meta-evaluation results. However, prior studies have rarely paid attention to the relationships and differences between different measures; instead, they often simply follow the conventional practices of related work or authoritative competitions on evaluation. Some studies even do not clearly describe the correlation measure they used, particularly in terms of grouping methods, let alone explain the reasons for selecting that measure. Moreover, the correlation measures used in some authoritative competitions are constantly changing. For instance, WMT22 (Freitag et al., 2022) used segment-level correlations that include three different grouping methods, whereas WMT21 (Freitag et al., 2021b) and WMT23 (Freitag et al., 2023) only used one. These issues of non-transparency and inconsistency indicate that the correlation measure and meta-evaluation in NLG require more in-depth understanding.
22
+
23
+ On the other hand, large language models (LLMs) have been increasingly used in automatic evaluation, including both prompting proprietary LLMs for NLG evaluation (Liu et al., 2023; Chiang and Lee, 2023; Kocmi and Federmann, 2023) and fine-tuned LLM evaluators (Wang et al., 2023; Xu et al., 2023; Jiang et al., 2023; Li et al., 2023). Unlike traditional continuous evaluation metrics, LLM-based evaluators typically output discrete scores and can assess on different evaluation scales based on flexible requirements (e.g., 1-5, 0-100). This introduces more evaluation ties and varying degrees of the granularity or discretization of evaluation scores, which may affect the fairness of com
24
+
25
+ parisons in certain correlation measures (Deutsch et al., 2023), making the already confusing selection of correlation measures even more complex.
26
+
27
+ However, it is by no means easy to strictly determine whether a particular correlation measure is reasonable, as it depends on the specific scenario and research objectives. For example, Deutsch et al. (2023) believe that when using the multidimensional quality metric (MQM) (Freitag et al., 2021a) for machine translation evaluation, fine-grained tied scores should be trusted and used for meta-evaluation, while traditional Kendall correlation coefficients cannot handle these situations and are therefore unsuitable. However, in coarser-grained evaluations (such as Likert scale ratings from 1 to 5), ties may not be as reliable, rendering the above conclusion invalid. Therefore, this paper focuses on the comparison and analysis of different correlation measures and their meta-evaluation capabilities, primarily around the following three characteristic perspectives and research questions:
28
+
29
+ - RQ1 ( $\S 4.1$ ): For discriminative power, which correlation measures can more effectively distinguish between pairs of automatic evaluation metrics?
30
+ - RQ2 ( $\S 4.2$ ): For ranking consistency, which correlation measures can provide more stable rankings for a set of evaluation metrics?
31
+ - RQ3 ( $\S 4.3$ ): For granularity sensitivity, which correlation measures can better handle different evaluation score granularities?
32
+
33
+ To achieve more comprehensive and realistic analyses, we collect six commonly-used NLG evaluation datasets, including 30 different subsets, and calculate and annotate the results of 32 different automatic evaluation metrics (including evaluators using LLMs such as GPT-4). Based on such large amounts of real data, we design specific testing algorithms to analyze the above three questions and summarize the corresponding conclusions. Our contributions are summed up as follows:
34
+
35
+ 1. We point out the necessity for a more thorough understanding of correlation measures in NLG meta-evaluation and demonstrate that the different selections of measures do affect meta-evaluation results.
36
+ 2. We propose three characteristics reflecting meta-evaluation capabilities as well as cor
37
+
38
+ responding methods to analyze different correlation measures and experiment on a large amount of real-world data from six evaluation datasets and 32 evaluation metrics.
39
+
40
+ 3. Our experimental results show that the measure using global grouping and Pearson correlation exhibits the best meta-evaluation capabilities. We hope our work can deepen the understanding of correlation measures, thereby clarifying their usages in future research<sup>1</sup>.
41
+
42
+ # 2 Background
43
+
44
+ In NLG evaluation, we usually focus on the quality of the output generated by a system or model for a given task and input. For example, in automatic news summarization, the input is a news article, and the output is a summary. There are two ways to evaluate the output: human evaluation and automatic evaluation, both typically expressed as scores. Human evaluation scores are considered the gold standard, and their consistency with automatic evaluation metrics is used to assess and compare the performance of different metrics, which is calculated by the specific correlation measure. This process is referred to as meta-evaluation and can be formalized as follows:
45
+
46
+ Assume that there are $N$ systems, $\{s_i\}_{i=1}^N$ and $M$ inputs, $\{d_j\}_{j=1}^M$ . Each system $s_i$ generates an output $h_{ij}$ for each input $d_j$ and the human evaluation score for each output $h_{ij}$ is denoted as $z_{ij}$ . These form a meta-evaluation dataset $D = \{\{d_j\}_{j=1}^M, \{(h_{ij}, z_{ij})\}_{i=1,j=1}^{N,M}\}$ . In most meta-evaluation datasets, $N \ll M$ , and generally, the range of $N$ is a few to dozens, while the range of $M$ is tens to thousands.
47
+
48
+ On the other hand, an automatic evaluation metric $m$ typically requires the input, output, and other related optional content, such as the reference for evaluation. For each output $h_{ij}$ , the score given by the automatic evaluation metric is denoted as $x_{ij}$ . If there are $K$ metrics to be evaluated in meta-evaluation, they are denoted as $\{m_k\}_{k=1}^K$ , and their output scores are denoted as $\{x_{ij}^k\}_{k=1}^K$ (also denoted as matrices $\{X_k\}_{k=1}^K$ ).
49
+
50
+ The correlation $C(X,Z)$ between $\{x_{ij}\}_{i = 1,j = 1}^{N,M}$ (i.e., $X$ ) and $\{z_{ij}\}_{i = 1,j = 1}^{N,M}$ (i.e., $Z$ ) is used to evaluate the quality of a certain evaluation metric $m$ . And there are multiple ways to measure this correlation, which can be divided into four categories
51
+
52
+ based on the grouping method, where $c$ denotes a specific correlation coefficient, commonly Pearson's $r$ , Spearman's $\rho$ , and Kendall's $\tau$ :
53
+
54
+ - Global Level: Flatten matrices of evaluation scores into vectors and calculate the correlation coefficient between two $N \times M$ -dimensional vectors, $C_{global}(X,Z) = c((x_{ij})_{i=1,j=1}^{N,M}, (z_{ij})_{i=1,j=1}^{N,M})$ .
55
+ - Input Level: For each input, calculate the correlation coefficient between two $N$ -dimensional vectors, and then average the $M$ correlation coefficients, $C_{input}(X,Z) = \frac{1}{M}\sum_{j=1}^{M}c((x_{ij})_{i=1}^{N}, (z_{ij})_{i=1}^{N})$ .
56
+ - Item Level<sup>2</sup>: For each system, calculate the correlation coefficient between two $M$ -dimensional vectors, and then average the $N$ correlation coefficients, $C_{item}(X,Z) = \frac{1}{N}\sum_{i=1}^{N}c((x_{ij})_{j=1}^{M}, (z_{ij})_{j=1}^{M})$ .
57
+ - System Level<sup>3</sup>: First average the scores of each system across $M$ inputs, and then calculate the correlation coefficient between the two $N$ -dimensional vectors, $C_{system}(X,Z) = c\left(\left(\frac{1}{M}\sum_{j=1}^{M}x_{ij}\right)_{i=1}^{N}, \left(\frac{1}{M}\sum_{j=1}^{M}z_{ij}\right)_{i=1}^{N}\right)$ .
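+ A minimal sketch of these four groupings (our own illustration, using SciPy's correlation coefficients; $X$ and $Z$ are $N \times M$ arrays of metric and human scores, with rows indexing systems and columns indexing inputs):
+
+ ```python
+ import numpy as np
+ from scipy.stats import pearsonr, spearmanr, kendalltau
+
+ COEFFS = {
+     "pearson": lambda a, b: pearsonr(a, b)[0],
+     "spearman": lambda a, b: spearmanr(a, b)[0],
+     "kendall": lambda a, b: kendalltau(a, b)[0],
+ }
+
+ def correlate(X, Z, grouping="global", coeff="pearson"):
+     X, Z = np.asarray(X, dtype=float), np.asarray(Z, dtype=float)
+     c = COEFFS[coeff]
+     if grouping == "global":   # one coefficient over all N*M flattened scores
+         return c(X.ravel(), Z.ravel())
+     if grouping == "input":    # average over the M inputs (columns)
+         return np.mean([c(X[:, j], Z[:, j]) for j in range(X.shape[1])])
+     if grouping == "item":     # average over the N systems (rows)
+         return np.mean([c(X[i, :], Z[i, :]) for i in range(X.shape[0])])
+     if grouping == "system":   # correlate per-system mean scores
+         return c(X.mean(axis=1), Z.mean(axis=1))
+     raise ValueError(f"unknown grouping: {grouping}")
+ ```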
58
+
59
+ It can be seen that the correlation measure includes two parts: the grouping method and the correlation coefficient. Considering the four common grouping methods and three correlation coefficients mentioned above, there are $4 \times 3 = 12$ different correlation measures in total. For two evaluation metrics $m_{1}$ and $m_{2}$ and a specific correlation measure $C$ , it is generally considered that $m_{1}$ outperforms $m_{2}$ if $C(X_{1},Z) > C(X_{2},Z)$ . Furthermore, the ranking of $K$ metrics $\{m_k\}_{k=1}^K$ can also be determined by comparing the values in $\{C(X_{k},Z)\}_{k=1}^{K}$ .
60
+
61
+ To verify whether different correlation measures would affect the results of meta-evaluation,
62
+
63
+ we conduct preliminary experiments of 12 measures on common evaluation datasets. Specifically, for two correlation measures $C_1$ and $C_2$ , as well as the evaluation metrics to be meta-evaluated $\{m_k\}_{k=1}^K$ , we calculate the consistency of metric rankings under the two measures using $\tau((C_1(X_k,Z))_{k=1}^K, (C_2(X_k,Z))_{k=1}^K)$ . Since we are concerned with rankings here, we choose Kendall's $\tau$ as the correlation coefficient. The results on SummEval (Fabbri et al., 2021) are presented in Figure 1, with the details of data introduced in Section 3. Any two different correlation measures lead to different meta-evaluations of metric rankings, with greater inconsistency when different grouping methods are used. Similar results are observed on other datasets, which are presented in Figures 6-10 in the appendix, indicating that the selection of correlation measures is indeed important and requires further analysis.
64
+
65
+ ![](images/088cbb56cae1a1ff66c697779a68f064b0fe0cfff5838ce71c689f065d9af4a8.jpg)
66
+ Figure 1: The consistency of evaluation metric rankings using different correlation measures on SummEval, calculated through Kendall's correlation coefficient.
67
+
68
+ # 3 Meta-Evaluation Data
69
+
70
+ For more comprehensive and realistic analyses on correlation measures, we collect six widely used evaluation datasets, as well as the results of 32 common automatic evaluation metrics on them. Our subsequent experiments on three research questions are based on this large-scale real evaluation data.
71
+
72
+ # 3.1 Datasets
73
+
74
+ As shown in Table 1, we select and preprocess six common evaluation datasets from five typical NLG tasks: summarization, story generation, dialogue,
75
+
76
+ <table><tr><td>Task</td><td>Name</td><td>#Subsets</td><td>#Aspects</td><td>#Systems</td><td>#Inputs</td></tr><tr><td>Summarization</td><td>SummEval (Fabbri et al., 2021)</td><td>1</td><td>4</td><td>16</td><td>100</td></tr><tr><td>Translation</td><td>WMT23-ZH2EN-NEWS (Freitag et al., 2023)</td><td>1</td><td>1</td><td>16</td><td>376</td></tr><tr><td>Story Generation</td><td>HANNA (Chhun et al., 2022)</td><td>1</td><td>6</td><td>6</td><td>60</td></tr><tr><td>Story Generation</td><td>MANS(Guan et al., 2021)</td><td>2</td><td>1</td><td>5</td><td>200</td></tr><tr><td>Dialogue</td><td>USR (Mehri and Eskénazi, 2020)</td><td>2</td><td>6</td><td>5</td><td>60</td></tr><tr><td>Data-to-text</td><td>WebNLG2020 (Castro Ferreira et al., 2020)</td><td>1</td><td>5</td><td>16</td><td>178</td></tr></table>
77
+
78
+ Table 1: Information and statistics of different evaluation datasets.
79
+
80
+ data-to-text, and translation. Due to the large volume of WMT23 data, we only use news domain data from ZH2EN. Following convention (Guan et al., 2021; Mehri and Eskenazi, 2020), we split the original datasets according to subsets and aspects, resulting in a total of 30 meta-evaluation datasets. Since our primary focus is on analyzing the general characteristics of correlation measures across different scenarios rather than on specific datasets, we label these datasets as D1-D30 for brevity and will summarize their overall performance in subsequent experiments. More details and the correspondence are shown in Table 4.
81
+
82
+ # 3.2 Automatic Evaluation Metrics
83
+
84
+ We select 14 common non-LLM evaluation metrics, including string-based BLEU (Papineni et al., 2002), ROUGE-(1,2,L) $^5$ (Lin, 2004), CHRF (Popovic, 2015), and model-based BERTScore(p,r,f1) (Zhang et al., 2020), MoverScore (Zhao et al., 2019), BARTScore-(s-h, r-h, h-r) (Yuan et al., 2021), BLEURT (Sellam et al., 2020), and COMET (Rei et al., 2020). For LLM-based evaluators, we employ 18 experimental settings to prompt proprietary LLMs to score outputs based on task descriptions and aspect definitions, resulting in 18 evaluation metrics: three different proprietary LLMs from OpenAI $^6$ (gpt-3.5-turbo, gpt-4-turbo, gpt-4o); different prompting strategies of three evaluation scales (1-5, 1-10, 0-100); and two sampling settings (temperature T=0 and sampling once, temperature T=1 and sampling ten times with results averaged). In total, there are $K = 32$ automatic evaluation metrics, and more information including the detailed prompts and implementations are shown in Appendix B.
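+ The exact prompts and aspect definitions are given in Appendix B; purely as an illustration of the two sampling settings, a sketch with the OpenAI Python client might look as follows (the score-parsing rule here is an assumption, not the paper's implementation).
+
+ ```python
+ import re
+ from openai import OpenAI
+
+ client = OpenAI()
+
+ def llm_score(prompt, model="gpt-4o", temperature=1.0, n=10):
+     response = client.chat.completions.create(
+         model=model,
+         messages=[{"role": "user", "content": prompt}],
+         temperature=temperature,
+         n=n,
+     )
+     scores = []
+     for choice in response.choices:
+         match = re.search(r"\d+(?:\.\d+)?", choice.message.content)
+         if match:
+             scores.append(float(match.group()))
+     return sum(scores) / len(scores)
+
+ # T=0 setting: llm_score(prompt, temperature=0.0, n=1)
+ # T=1 setting: llm_score(prompt, temperature=1.0, n=10), averaging the ten samples
+ ```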
85
+
86
+ # 4 Analyses of Correlation Measures
87
+
88
+ To analyze and compare the meta-evaluation capabilities of different correlation measures, we consider three important perspectives: discriminative power, ranking consistency, and sensitivity to score granularity. In practice, correlation measures are mainly used to assess the performance of automatic evaluation metrics, with two core applications: comparing two metrics and ranking a set of metrics. For the former, the key lies in the discriminative power of the correlation measure, i.e., whether it can distinguish various metric pairs as effectively as possible. For the latter, the consistency of the correlation measure in ranking the same set of metrics is crucial, i.e., whether the ranking remains stable. In addition, recently emerging LLM-based evaluators feature a flexible and discrete scoring pattern similar to human evaluation, so the sensitivity of correlation measures to varying levels of score granularity of the same metric should meet certain expectations. In the following subsections, we will design specific tests and conduct experimental analyses based on these three capability aspects to identify the characteristics of different correlation measures.
89
+
90
+ # 4.1 Discriminative Power
91
+
92
+ In the fields of information retrieval (Sakai, 2013) and recommendation systems (Anelli et al., 2019; Ashkan and Metzler, 2019; Valcarce et al., 2020), discriminative power is widely used to evaluate evaluation measures. Inspired by this, we adapted this method to evaluate correlation measures in NLG meta-evaluation.
93
+
94
+ Specifically, for a given correlation measure, a meta-evaluation dataset (including human scores $Z$ ), and the scores of $K$ automatic evaluation metrics on it $\{X_{k}\}_{k = 1}^{K}$ , we obtain the two-sided p-value for each pair of automatic evaluation metrics through hypothesis testing. The smaller the p-value, the more likely we are to reject the null hypothesis
95
+
96
+ ![](images/c0ad435f38030f35c08a457be77b125e8004dd6e605f2bff2821299df676f316.jpg)
97
98
+ Figure 2: DP values of different correlation measures on all meta-evaluation datasets using the permutation test, the lower the better. Each column "Dn" shows the result on one dataset, which corresponds to the original dataset as shown in Table 4. The first column presents the overall performance with the averaged results of all datasets.
99
+
100
+ that there is no difference between the two metrics' correlation values with human scores. Thus, a highly discriminative correlation measure should yield many very small p-values. For convenience of comparison, similar to Valcarce et al. (2020), we define the DP value as the average of the p-values of all metric pairs, ranging from 0 to 1, with smaller values indicating stronger discriminative power of a correlation measure. Algorithm 1 shows the pseudocode for calculating the DP value. Regarding the hypothesis testing method used here, we follow previous work and employ the Perm-Both algorithm proposed by Deutsch et al. (2021)<sup>7</sup>, a non-parametric permutation test (Noreen, 1989) that Deutsch et al. (2021) have shown to have higher power in summarization meta-evaluation. Figure 2 shows the DP values of correlation measures across all meta-evaluation datasets, and the overall performance and rankings are summarized in Table 2. The complete results are shown in Table 6 in the appendix.
101
+
102
+ In addition to giving an overall value, we can also look more closely at the discriminative power of different correlation measures through p-value curves, presented in Figures 11-40 in the appendix. After obtaining the p-values for each pair of evaluation metrics, we sort them in descending order. With the number of evaluation metric pairs on the x-axis and the p-values on the y-axis, we can plot the p-value curves of different correlation measures
103
+
104
+ on a meta-evaluation dataset. The closer the curve is to the coordinate axis, the stronger the discriminative power of the corresponding correlation measure. The DP value numerically equals the area enclosed by the p-value curve and the coordinate axis divided by the number of metric pairs.
105
+
106
+ Algorithm 1 Discriminative Power
107
+ Input: $X_{1},\ldots ,X_{K},Z\in \mathbb{R}^{N\times M},T\in \mathbb{N},C$
108
+ Output: DP value
109
+ $v\gets 0$
+ $n\gets K\times (K - 1) / 2$
110
+ for $i\in \{1,\dots ,K - 1\}$ do
+ for $j\in \{i + 1,\dots ,K\}$ do
+ $p_{ij}\gets \mathrm{PERMUTATIONTEST}(X_i,X_j,Z,T,C)$
+ $v\gets v + p_{ij}$
+ end for
111
+ end for
112
+ return $v / n$
113
+ function PERMUTATIONTEST( $X,Y,Z,T,C$ )
+ $q\gets 0$
+ $\delta \leftarrow C(X,Z) - C(Y,Z)$
114
+ for $T$ iterations do
+ $X_{s},Y_{s}\gets$ empty $N\times M$ matrices
+ for $(i,j)\in \{1,\dots ,N\} \times \{1,\dots ,M\}$ do
+ if random Boolean is true then
+ $X_{s}[i,j]\gets Y[i,j]$
+ $Y_{s}[i,j]\gets X[i,j]$
+ else
+ $X_{s}[i,j]\gets X[i,j]$
+ $Y_{s}[i,j]\gets Y[i,j]$
115
+ end if
116
+ end for
117
+ $\delta_s\gets C(X_s,Z) - C(Y_s,Z)$
118
+ if $|\delta_s| > |\delta |$ then
+ $q\gets q + 1$
119
+ end if
120
+ end for
121
+ return $q / T$
122
+ end function
123
+
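+ To make the procedure concrete, the following is a minimal NumPy/SciPy sketch of Algorithm 1 (an illustrative re-implementation under our reading of the pseudocode, not the authors' released code). It assumes metric and human scores are stored as $N\times M$ arrays and that `corr_fn` implements one of the correlation measures studied here; `global_pearson` is given as an example choice.
+
+ ```python
+ import numpy as np
+ from scipy.stats import pearsonr
+
+ def global_pearson(X, Z):
+     # Example corr_fn: global-level grouping with Pearson's r,
+     # i.e., correlate the flattened N x M score matrices directly.
+     return pearsonr(X.ravel(), Z.ravel())[0]
+
+ def perm_both_pvalue(X, Y, Z, corr_fn, trials=1000, rng=None):
+     # Two-sided p-value for the difference in correlation with human scores Z
+     # between metric scores X and Y: in each round, every entry is swapped
+     # between X and Y with probability 0.5 (Perm-Both style permutation test).
+     rng = np.random.default_rng(rng)
+     delta = corr_fn(X, Z) - corr_fn(Y, Z)
+     count = 0
+     for _ in range(trials):
+         swap = rng.random(X.shape) < 0.5
+         Xs, Ys = np.where(swap, Y, X), np.where(swap, X, Y)
+         if abs(corr_fn(Xs, Z) - corr_fn(Ys, Z)) > abs(delta):
+             count += 1
+     return count / trials
+
+ def dp_value(metric_scores, Z, corr_fn, trials=1000):
+     # Average p-value over all metric pairs; lower means the correlation
+     # measure corr_fn discriminates metric pairs more strongly.
+     K = len(metric_scores)
+     pvals = [perm_both_pvalue(metric_scores[i], metric_scores[j], Z, corr_fn, trials)
+              for i in range(K) for j in range(i + 1, K)]
+     return float(np.mean(pvals))
+ ```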
124
+ ![](images/9ab4f9810760649563c1baa7e1bbda046152d0ed8105ecd960fa5f6dd6120f12.jpg)
125
+ Figure 3: RC values of different correlation measures on all meta-evaluation datasets, the higher the better, with the representation of columns similar to Figure 2.
126
+
127
+ Takeaways The overall discriminative power of different correlation measures can be summarized and ranked based on different grouping methods and correlation coefficients as follows:
128
+
129
+ - Grouping: Global > Input > Item > System
130
+ - Correlation Coefficient: Pearson's $r >$ Spearman's $\rho >$ Kendall's $\tau$
131
+
132
+ # Algorithm 2 Ranking Consistency
133
+
134
+ Input: $X_{1},\ldots ,X_{K},Z\in \mathbb{R}^{N\times M},T\in \mathbb{N},C.$
135
+
136
+ Output: RC value
137
+
138
+ $v\gets 0$
139
+
140
+ for $T$ iterations do
141
+
142
+ $M_{1}\gets \lfloor M / 2\rfloor$
143
+
144
+ $M_{2}\gets M - \lfloor M / 2\rfloor$
145
+
146
+ $D_{1}\gets$ sample $\{1,\dots ,M\}$ w/o repl. $M_{1}$ times
147
+
148
+ $D_{2}\gets \{1,\dots ,M\} \setminus D_{1}$
149
+
150
+ $R_{1}, R_{2} \gets$ empty $K$ -dimensional arrays
151
+
152
+ for $k\in \{1,\ldots ,K\}$ do
153
+
154
+ $X_{1}^{s},Z_{1}^{s}\gets$ empty $N\times M_1$ matrices
155
+
156
+ $X_{2}^{s},Z_{2}^{s}\gets$ empty $N\times M_2$ matrices
157
+
158
+ for $i\in \{1,\ldots ,N\}$ do
159
+
160
+ for $j\in \{1,\ldots ,M_1\}$ do
161
+
162
+ $X_{1}^{s}[i,j]\gets X_{k}[i,D_{1}[j]]$
163
+
164
+ $Z_{1}^{s}[i,j]\gets Z[i,D_{1}[j]]$
165
+
166
+ end for
167
+
168
+ for $j\in \{1,\ldots ,M_2\}$ do
169
+
170
+ $X_2^s [i,j]\gets X_k[i,D_2[j]]$
171
+
172
+ $Z_2^s [i,j]\gets Z[i,D_2[j]]$
173
+
174
+ end for
175
+
176
+ end for
177
+
178
+ $R_{1}[k]\gets C(X_{1}^{s},Z_{1}^{s})$
179
+
180
+ $R_{2}[k]\gets C(X_{2}^{s},Z_{2}^{s})$
181
+
182
+ end for
183
+
184
+ $\tau^s \gets \tau(R_1, R_2)$
185
+
186
+ $v\gets v + \tau^s$
187
+
188
+ end for
189
+
190
+ return $v / T$
191
+
192
+ # 4.2 Ranking Consistency
193
+
194
+ Inspired by the evaluation of evaluation measures in ordinal classification (Sakai, 2021a) and information retrieval (Sakai, 2021b), we proceed as follows: for a given correlation measure, we randomly split the human scores and evaluation metric outputs in half, derive the rankings of the evaluation metrics on the two halves, and calculate the similarity of the two rankings using Kendall's $\tau$ as a measure of ranking consistency. We define the RC value as the average result obtained from repeating this process $T = 1000$ times. Algorithm 2 presents the pseudocode for the calculation. Figure 3 and Table 2 show the RC values of correlation measures across all meta-evaluation datasets, with the complete results presented in Table 7 in the appendix. According to Table 2, ranking consistency and discriminative power exhibit similar trends for correlation measures, meaning that correlation measures with high discriminative power generally also have high ranking consistency.
195
+
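+ A corresponding sketch of Algorithm 2, under the same assumptions and naming conventions as the DP sketch above (illustrative only, not the authors' code):
+
+ ```python
+ import numpy as np
+ from scipy.stats import kendalltau
+
+ def rc_value(metric_scores, Z, corr_fn, trials=1000, rng=None):
+     # Ranking consistency: randomly split the M inputs in half, rank the K
+     # metrics on each half by corr_fn, and average Kendall's tau between the
+     # two rankings over `trials` random splits.
+     rng = np.random.default_rng(rng)
+     M = Z.shape[1]
+     total = 0.0
+     for _ in range(trials):
+         perm = rng.permutation(M)
+         half1, half2 = perm[: M // 2], perm[M // 2:]
+         r1 = [corr_fn(X[:, half1], Z[:, half1]) for X in metric_scores]
+         r2 = [corr_fn(X[:, half2], Z[:, half2]) for X in metric_scores]
+         total += kendalltau(r1, r2)[0]
+     return total / trials
+ ```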
196
+ Takeaways The overall ranking consistency of different correlation measures can be summarized and ranked based on different grouping methods and correlation coefficients as follows:
197
+
198
+ - Grouping: Global > Input > Item > System
199
+ - Correlation Coefficient: Pearson's $r >$ Spearman's $\rho >$ Kendall's $\tau$
200
+
201
+ # 4.3 Sensitivity to Score Granularity
202
+
203
+ We first introduce the concepts of evaluation scale and score granularity, which apply to both human
204
+
205
+ ![](images/c285e0465d8d13ed4f14950a76da69d308813d96760b2f3e8f6248b1a43d0bef.jpg)
206
+ Figure 4: As $G^{m}$ changes, the correlations between the GPT-4-Turbo evaluator and human evaluation using different measures on SummEval (left) and WMT23 (right) with the fixed evaluation scale of 1-5.
207
+
208
+ ![](images/8fee60ba0034372cceaaef0972ea5f6859af6e050d169e3dceefb94beed3df5c.jpg)
209
+
210
+ <table><tr><td>Grouping</td><td>Correlation</td><td>DP value ↓</td><td>RC value ↑</td></tr><tr><td>System</td><td>Pearson</td><td>0.136 (5)</td><td>0.698 (7)</td></tr><tr><td>System</td><td>Spearman</td><td>0.339 (11)</td><td>0.485 (12)</td></tr><tr><td>System</td><td>Kendall</td><td>0.381 (12)</td><td>0.486 (11)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.128 (4)</td><td>0.768 (3)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.150 (7)</td><td>0.738 (6)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.149 (6)</td><td>0.739 (5)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.107 (1)</td><td>0.774 (1)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.118 (2)</td><td>0.770 (2)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.125 (3)</td><td>0.753 (4)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.168 (8)</td><td>0.653 (8)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.180 (9)</td><td>0.643 (9)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.192 (10)</td><td>0.618 (10)</td></tr></table>
211
+
212
+ Table 2: Overall performance and rankings (in brackets) of different correlation measures with the averaged results of all meta-evaluation datasets in terms of the discriminative power (DP) and ranking consistency (RC).
213
+
214
+ evaluation and emerging LLM-based evaluators. In practice, human evaluation scores are not continuous values like traditional metrics (e.g., BLEU, BERTScore). Evaluators typically select a discrete value as the evaluation score based on the scale (also called range) required by the evaluation guidelines (e.g., 1-5, 0-100). This scale is referred to as the evaluation scale. On the other hand, multiple evaluators are often involved to enhance the reliability of the evaluation, and the final human score is the average of the scores from multiple evaluators, leading to more diverse values. The combination of the chosen evaluation scale and the number of evaluators determines the number of all possible final scores, which is called score granularity. This reflects the degree of discretization in the evaluation and potential ties in the scores. For example, SummEval uses a 5-point Likert scale for
215
+
216
+ human evaluation, and each sample is evaluated by three annotators, resulting in 13 possible averaged scores. Here, the evaluation scale is 1-5, and the score granularity is 13.
217
+
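+ As a quick check of this count (an illustrative calculation, not part of the original datasets): with $a$ annotators on an $s$-point scale, the sum of ratings ranges over $a(s-1)+1$ consecutive integers, so the averaged score can take $a(s-1)+1$ distinct values.
+
+ ```python
+ # Distinct averaged scores for a annotators on an s-point Likert scale:
+ # sums range from a*1 to a*s, so averages take a*(s-1) + 1 distinct values.
+ a, s = 3, 5
+ print(a * (s - 1) + 1)  # 13, matching SummEval's score granularity
+ ```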
218
+ Moreover, LLMs with strong instruction-following capabilities like GPT-4 have been increasingly used in automatic evaluation, following the same scoring pattern as human evaluators (Kocmi and Federmann, 2023; Chiang and Lee, 2023). Therefore, their output scores are also discrete, and they can simulate multiple evaluators through repeated sampling, which likewise involves the evaluation scale and score granularity. We present statistics for these two quantities in human scores and LLM-based metric scores across different datasets in Tables 4 and 5 in the appendix. The differences in evaluation scales primarily affect the difficulty of evaluation and the ability of the evaluators, which are not the focus of our study. For example, prompting GPT-4-Turbo with the scale of 1-5 or 1-10 is often seen as yielding two distinct evaluation metrics that possess different capabilities. Our main focus, instead, is the impact of score granularity differences on different correlation measures when the evaluation scale and metric remain the same, which we examine with both real-world data and statistical simulation. We denote the score granularity of humans and metrics as $G^{h}$ and $G^{m}$ , respectively, with the former depending on the datasets and the latter depending on the evaluation settings.
219
+
220
+ # 4.3.1 Real-world Data
221
+
222
+ We select two typical datasets from Table 1 for experiments: SummEval and WMT23, whose $G^{h}$ values are 13 and 251, respectively. In addition,
223
+
224
+ ![](images/517a769c680c5ecaebd8a7ea608ec6c7502ad8bc968058a73e9ff2c4b1af6e21.jpg)
225
+ Figure 5: As $G^{m}$ changes, the correlations between metrics and humans using different measures in the statistical simulation with $G^{h} = 13$ .
226
+
227
+ we use the three proprietary LLMs and two evaluation scales (1-5, 1-10) introduced in Section 3.2, with the temperature set to $\mathrm{T} = 1$ . The number of repeated samplings varies from 1 to 10 to obtain results for different levels of metric score granularity $G^{m}$ . Figure 4 shows the consistency of GPT-4-Turbo with human evaluations under different correlation measures, with other results included in Figures 41-50 in the appendix. We believe that when the score granularity of the metric $G^{m}$ is no greater than that of humans $G^{h}$ , the evaluation capability of the metric is limited, as fewer possible scores are available. Consequently, as $G^{m}$ increases until it equals $G^{h}$ , the consistency between the metric and humans should increase. However, when $G^{m}$ exceeds $G^{h}$ , the situation becomes complex, and it is difficult to intuitively estimate the expected trend of the consistency, which we leave for future exploration. The results illustrate that the measures using system-level grouping or Kendall's $\tau$ are barely affected by the score granularity: their consistency values remain almost unchanged, which does not meet the expectation stated above. The situation is generally similar for the two datasets with different LLM-based evaluators and different evaluation scales.
228
+
229
+ # 4.3.2 Statistical Simulation
230
+
231
+ Although real-world datasets and evaluation metrics are more reliable for analysis compared to synthetic data, the scale of their data is limited and some variables are difficult to adjust flexibly. Therefore, we also employ statistical simulation as an auxiliary method to observe whether similar
232
+
233
+ conclusions can be drawn. Based on our collected datasets and certain hypotheses, we model the NLG meta-evaluation data and conduct experiments with parameters estimated from real data. The detailed process, including the sampling algorithm and experimental settings, is described in Appendix C. Figure 5, as well as Figure 51 in the appendix, shows the results, which generally align with the observations obtained before.
234
+
235
+ Takeaways Among the different grouping methods and correlation coefficients, the measures using system-level grouping or the Kendall correlation are influenced the least by score granularity, which is not as expected. Under the other correlation measures, the consistency between metrics and humans continuously increases as the score granularity of the metric increases.
236
+
237
+ # 5 Related Work
238
+
239
+ In the field of NLP, there is limited research analyzing correlation measures in NLG meta-evaluation. Mathur et al. (2020) found that the introduction of an outlier system can distort the system-level Pearson correlation between machine translation evaluation metrics and human evaluation, and quantified this impact. Deutsch et al. (2022) proposed two different ways of calculating system-level correlations by changing the input sets or system pairs and showed they can lead to more precise estimates of metric performance in real-world scenarios. Recently, Deutsch et al. (2023) pointed out that the segment-level Kendall correlation coefficient, widely used in machine translation evaluation, does not handle ties in human scores and metric outputs as expected and thus needs to be calibrated. It is worth mentioning that Perrella et al. (2024) designed several sentinel metrics for machine translation meta-evaluation and showed that global-level grouping and system-level grouping may introduce unfairness.
240
+
241
+ In the research on automatic evaluation metrics, some works have commented on correlation measures based on experimental results when presenting the performance of different evaluation metrics. Owczarzak et al. (2012) found that, in the domain of summarization evaluation, system-level correlation is more robust to inconsistent human annotations. Freitag et al. (2022) discovered that system-level correlation can hardly distinguish between different machine translation evaluation metrics. Liu et al. (2023) and Xu et al. (2023) attributed some of their experimental results to the inappropriate handling of
242
+
243
+ ties by the Kendall correlation when comparing the performance of different metrics. Additionally, Wei and Jia (2021) demonstrated that the output scores of automatic evaluation metrics have less variance than human scores at the system level.
244
+
245
+ In contrast, we focus on the properties and capabilities of typical correlation measures from generic NLG evaluation perspectives, not limited to specific tasks and evaluation metrics.
246
+
247
+ # 6 Conclusions
248
+
249
+ We analyze and compare the characteristics and capabilities of 12 typical correlation measures in NLG meta-evaluation through three proposed perspectives. Based on various experiments with large-scale real-world data from six NLG datasets and 32 evaluation metrics, we find that measures using global-level grouping and the Pearson correlation have better overall meta-evaluation capability, while those using system-level grouping and the Kendall correlation show the opposite, even though all of them have been used on common evaluation datasets. We hope that our work can deepen the understanding of correlation measures and draw more attention to related research in the future.
250
+
251
+ # Limitations
252
+
253
+ We mainly analyzed the capabilities of correlation measures through empirical experiments without conducting theoretical analysis. Besides, although our experiments have covered many common and typical evaluation datasets to provide as general analyses as possible, it is impossible to encompass all tasks and evaluation aspects in NLG. Therefore, the conclusions we obtained about the meta-evaluation capability of different correlation measures may be limited to a certain context. Additionally, our work requires quite a few resources, including the API cost of using proprietary LLMs to annotate data and high-performance computation for conducting large-scale empirical evaluations and statistical simulations, which could be improved in the future.
254
+
255
+ # Acknowledgments
256
+
257
+ This work was supported by Beijing Science and Technology Program (Z231100007423011) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We appreciate the
258
+
259
+ anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.
260
+
261
+ # References
262
+
263
+ Vito Walter Anelli, Tommaso Di Noia, Eugenio Di Sciaccio, Claudio Pomo, and Azzurra Ragone. 2019. On the discriminative power of hyper-parameters in cross-validation and how to choose them. In Proceedings of the 13th ACM Conference on Recommender Systems, RecSys 2019, Copenhagen, Denmark, September 16-20, 2019, pages 447-451. ACM.
264
+ Azin Ashkan and Donald Metzler. 2019. Revisiting online personal search metrics with the user in mind. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 625-634. ACM.
265
+ Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu, and Graham Neubig. 2020. Reevaluating evaluation in text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9347-9359. Association for Computational Linguistics.
266
+ Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussalam, and Anastasia Shimorina, editors. 2020. Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+). Association for Computational Linguistics, Dublin, Ireland (Virtual).
267
+ Cyril Chhun, Pierre Colombo, Fabian M. Suchanek, and Chloe Clavel. 2022. Of human criteria and automatic metrics: A benchmark of the evaluation of story generation. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 5794-5836. International Committee on Computational Linguistics.
268
+ David Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 15607-15631. Association for Computational Linguistics.
269
+ Daniel Deutsch, Rotem Dror, and Dan Roth. 2021. A statistical analysis of summarization evaluation metrics using resampling methods. Trans. Assoc. Comput. Linguistics, 9:1132-1146.
270
+ Daniel Deutsch, Rotem Dror, and Dan Roth. 2022. Re-examining system-level correlations of automatic summarization evaluation metrics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle,
271
+
272
+ WA, United States, July 10-15, 2022, pages 6038-6052. Association for Computational Linguistics.
273
+ Daniel Deutsch, George F. Foster, and Markus Freitag. 2023. Ties matter: Meta-evaluating modern metrics with pairwise accuracy and tie calibration. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 12914-12929. Association for Computational Linguistics.
274
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
275
+ Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2021. Summeval: Re-evaluating summarization evaluation. Trans. Assoc. Comput. Linguistics, 9:391-409.
276
+ Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Preprint, arXiv:2104.14478.
277
+ Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frédéric Blain, Daniel Deutsch, Craig Stewart, Chrysoula Zerva, Sheila Castilho, Alon Lavie, and George F. Foster. 2023. Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent. In Proceedings of the Eighth Conference on Machine Translation, WMT 2023, Singapore, December 6-7, 2023, pages 578-628. Association for Computational Linguistics.
278
+ Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George F. Foster, Alon Lavie, and André F. T. Martins. 2022. Results of WMT22 metrics shared task: Stop using BLEU - neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation, WMT 2022, Abu Dhabi, United Arab Emirates (Hybrid), December 7-8, 2022, pages 46-68. Association for Computational Linguistics.
279
+ Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George F. Foster, Alon Lavie, and Ondrej Bojar. 2021b. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In Proceedings of the Sixth Conference on Machine Translation, WMT@EMNLP 2021, Online Event, November 10-11, 2021, pages 733-774. Association for Computational Linguistics.
280
+ Jian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie Fan, and Minlie Huang. 2021. Openmeva: A benchmark for evaluating open-ended story generation metrics. In Proceedings of the 59th Annual Meeting of the Association for
281
+
282
+ Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6394-6407. Association for Computational Linguistics.
283
+ Xinyu Hu, Li Lin, Mingqi Gao, Xunjian Yin, and Xiao-jun Wan. 2024. Themis: Towards flexible and interpretable NLG evaluation. CoRR, abs/2406.18365.
284
+ Dongfu Jiang, Yishan Li, Ge Zhang, Wenhao Huang, Bill Yuchen Lin, and Wenhu Chen. 2023. Tigerscore: Towards building explainable metric for all text generation tasks. CoRR, abs/2310.00752.
285
+ M. G. Kendall. 1945. The treatment of ties in ranking problems. Biometrika, 33(3):239-251.
286
+ Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, EAMT 2023, Tampere, Finland, 12-15 June 2023, pages 193-203. European Association for Machine Translation.
287
+ Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2023. Generative judge for evaluating alignment. CoRR, abs/2310.05470.
288
+ Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
289
+ Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: NLG evaluation using gpt-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 2511-2522. Association for Computational Linguistics.
290
+ Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4984-4997. Association for Computational Linguistics.
291
+ Shikib Mehri and Maxine Eskenazi. 2020. USR: an unsupervised and reference free evaluation metric for dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 681-707. Association for Computational Linguistics.
292
+ Eric W Noreen. 1989. Computer intensive methods for hypothesis testing: An introduction. Wiley, New York, 19:21.
293
+ Takahiro Onoshima, Kenpei Shiina, Takashi Ueda, and Saori Kubo. 2019. Decline of Pearson's r with categorization of variables: a large-scale simulation. Behaviormetrika, 46:389-399.
294
+
295
+ Karolina Owczarzak, Peter A. Rankel, Hoa Trang Dang, and John M. Conroy. 2012. Assessing the effect of inconsistent assessors on summarization evaluation. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, July 8-14, 2012, Jeju Island, Korea - Volume 2: Short Papers, pages 359-362. The Association for Computer Linguistics.
296
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL.
297
+ Stefano Perrella, Lorenzo Proietti, Alessandro Scirè, Edoardo Barba, and Roberto Navigli. 2024. Guardians of the machine translation meta-evaluation: Sentinel metrics fall in! In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16216-16244, Bangkok, Thailand. Association for Computational Linguistics.
298
+ Maja Popovic. 2015. chrf: character n-gram f-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, WMT@EMNLP 2015, 17-18 September 2015, Lisbon, Portugal, pages 392-395. The Association for Computer Linguistics.
299
+ Ricardo Rei, Craig Stewart, Ana C. Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2685-2702. Association for Computational Linguistics.
300
+ Tetsuya Sakai. 2013. Metrics, statistics, tests. In Bridging Between Information Retrieval and Databases - PROMISE Winter School 2013, Bressanone, Italy, February 4-8, 2013. Revised Tutorial Lectures, volume 8173 of Lecture Notes in Computer Science, pages 116-163. Springer.
301
+ Tetsuya Sakai. 2021a. Evaluating evaluation measures for ordinal classification and ordinal quantification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 2759-2769. Association for Computational Linguistics.
302
+ Tetsuya Sakai. 2021b. On the instability of diminishing return IR measures. In Advances in Information Retrieval - 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28 - April 1, 2021, Proceedings, Part I, volume 12656 of Lecture Notes in Computer Science, pages 572-586. Springer.
303
+ Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: learning robust metrics for text
304
+
305
+ generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7881-7892. Association for Computational Linguistics.
306
+ Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, and Lidong Bing. 2023. Large language models are not yet human-level evaluators for abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 4215-4233. Association for Computational Linguistics.
307
+ C. Spearman. 1904. The proof and measurement of association between two things. The American Journal of Psychology, 15(1):72-101.
308
+ Daniel Valcarce, Alejandro Bellogin, Javier Parapar, and Pablo Castells. 2020. Assessing ranking metrics in top-n recommendation. Inf. Retr. J., 23(4):411-448.
309
+ Yaqing Wang, Jiepu Jiang, Mingyang Zhang, Cheng Li, Yi Liang, Qiaozhu Mei, and Michael Bendersky. 2023. Automated evaluation of personalized text generation using large language models. CoRR, abs/2310.11593.
310
+ Johnny Tian-Zheng Wei and Robin Jia. 2021. The statistical advantage of automatic NLG metrics at the system level. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6840-6854. Association for Computational Linguistics.
311
+ Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Wang, and Lei Li. 2023. INSTRUCTSCORE: towards explainable text generation evaluation with automatic feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 5967-5994. Association for Computational Linguistics.
312
+ Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 27263-27277.
313
+ Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
314
+ Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in
315
+
316
+ Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 563-578. Association for Computational Linguistics.
317
+
318
+ # A Definitions of Pearson's $r$ , Spearman's $\rho$ , and Kendall's $\tau$
319
+
320
+ Pearson correlation coefficient measures the linear relationship between two variables. The formula is:
321
+
322
+ $$
323
+ r (x, y) = \frac {\sum_ {i = 1} ^ {N} (x _ {i} - \bar {x}) (y _ {i} - \bar {y})}{\sqrt {\sum_ {i = 1} ^ {N} (x _ {i} - \bar {x}) ^ {2} \sum_ {i = 1} ^ {N} (y _ {i} - \bar {y}) ^ {2}}}
324
+ $$
325
+
326
+ Where $x$ and $y$ are $N$ -dimensional vectors, $x_{i}$ and $y_{i}$ are the elements of $x$ and $y$ , and $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_{i}$ and $\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_{i}$ are their means.
327
+
328
+ Spearman correlation coefficient (Spearman, 1904) measures the rank-order correlation between two variables. The formula is:
329
+
330
+ $$
331
+ \rho (x, y) = 1 - \frac {6 \sum_ {i = 1} ^ {N} d _ {i} ^ {2}}{N (N ^ {2} - 1)}
332
+ $$
333
+
334
+ Where $d_{i} = \mathrm{rank}(x_{i}) - \mathrm{rank}(y_{i})$ is the difference between the ranks of corresponding values in the two vectors.
335
+
336
+ Kendall correlation coefficient (KENDALL, 1945) measures the ordinal association between two variables. It considers concordant and discordant pairs, with $\tau_{b}$ adjusted for ties. The formula is:
337
+
338
+ $$
339
+ \tau_ {b} (x, y) = \frac {C - D}{\sqrt {(C + D + T _ {x}) (C + D + T _ {y})}}
340
+ $$
341
+
342
+ Where $C$ is the number of concordant pairs, i.e. pairs of elements $(i,j)$ where $(x_{i} > x_{j}\wedge y_{i} > y_{j})\lor (x_{i} < x_{j}\wedge y_{i} < y_{j})$ . $D$ is the number of discordant pairs, i.e. pairs of elements $(i,j)$ where $(x_{i} > x_{j}\wedge y_{i} < y_{j})\vee (x_{i} < x_{j}\wedge y_{i} > y_{j})$ . $T_{x}$ and $T_{y}$ are the numbers of pairs tied only in $x$ and only in $y$ , respectively.
343
+
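+ For reference, all three coefficients are available in SciPy; the following usage sketch (with purely illustrative numbers) shows the calls that correspond to the definitions above. Note that `kendalltau` computes the ties-adjusted $\tau_{b}$ by default.
+
+ ```python
+ import numpy as np
+ from scipy.stats import pearsonr, spearmanr, kendalltau
+
+ x = np.array([1.0, 2.0, 2.0, 4.0, 5.0])   # e.g., metric scores (with a tie)
+ y = np.array([1.2, 1.9, 2.5, 3.9, 5.1])   # e.g., human scores
+
+ r = pearsonr(x, y)[0]        # Pearson's r: linear relationship
+ rho = spearmanr(x, y)[0]     # Spearman's rho: rank correlation
+ tau_b = kendalltau(x, y)[0]  # Kendall's tau-b, adjusted for ties
+ ```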
344
+ # B Details of Selected Evaluation Metrics
345
+
346
+ # B.1 Non-LLM evaluation metrics
347
+
348
+ For CHRF and BLEU, we use the implementation of TorchMetrics<sup>8</sup>. For ROUGE, BERTSCORE,
349
+
350
+ and BLEURT, we use the evaluation package of Huggingface with the default parameters. For MoverScore $^{9}$ , BARTScore $^{10}$ , and COMET $^{11}$ , we use the code from the original GitHub repositories and the default models. We check the licenses of all open source programs to ensure that our use is compliant.
351
+
352
+ # B.2 Evaluation Prompts for LLMs
353
+
354
+ We used the same prompts to instruct GPT-3.5, GPT-4, and GPT-4o for NLG evaluation. To save space, we present a template of our prompt in Table 3. We filled the aspect part of the prompt with definitions from original datasets. When the original dataset lacked these definitions, we composed them based on our understanding.
355
+
356
+ # B.3 Reasons not to use non-proprietary LLMs
357
+
358
+ We originally intended to include some open-source LLMs (e.g., Llama 3 (Dubey et al., 2024)). However, we found that their adherence to "evaluation instructions" was not strong enough, often resulting in non-compliant responses. We also considered including some fine-tuned LLM evaluators (e.g., Themis (Hu et al., 2024)). However, their limitations due to fine-tuning settings prevented us from requiring them to score with different evaluation scales via prompts. Therefore, we finally only chose proprietary LLMs in our experiments.
359
+
360
+ # C Process of Statistical Simulation
361
+
362
+ Based on the real-world data and some hypotheses, we first establish a probabilistic model for NLG meta-evaluation and then obtain results through the proposed algorithm and repeated sampling.
363
+
364
+ # C.1 Modeling NLG Meta-Evaluation
365
+
366
+ We posit that the capability of an evaluation metric is primarily reflected in two aspects: the ability to evaluate the overall level of different systems and the ability to evaluate different output texts from a given system. In practice, system-level correlation can estimate the former, while item-level correlation can estimate the latter. Therefore, we treat these two quantities as the control parameters during modeling. We assume that the scores of the evaluation metric and human evaluation here are continuous, and for a system $s_i$ , the scores of the outputs it generates for various inputs
367
+
368
+ <sup>9</sup>https://github.com/AIPHES/emnlp19-moverscore
369
+ <sup>10</sup>https://github.com/neulab/BARTScore
370
+ <sup>11</sup>https://github.com/Unbabel/COMET
371
+
372
+ <table><tr><td>Prompts and Instructions</td></tr><tr><td>########Instruction#####</td></tr><tr><td>Please act as an impartial and helpful evaluator for natural language generation (NLG), and the audience is an expert in the field.</td></tr><tr><td>Your task is to evaluate the quality of {task} strictly based on the given evaluation criterion.</td></tr><tr><td>Begin the evaluation by providing your analysis concisely and accurately, and then on the next line, start with &quot;Rating:&quot; followed by your rating on a Likert scale from {scale} (higher means better).</td></tr><tr><td>You MUST keep to the strict boundaries of the evaluation criterion and focus solely on the issues and errors involved; otherwise, you will be penalized.</td></tr><tr><td>Make sure you read and understand these instructions, as well as the following evaluation criterion and example content, carefully.</td></tr><tr><td>######Evaluation Criterion######</td></tr><tr><td>{aspect}</td></tr><tr><td>#######Example######</td></tr><tr><td>{source_des}:</td></tr><tr><td>{source}</td></tr><tr><td>{target_des}:</td></tr><tr><td>{target}</td></tr><tr><td>######Your Evaluation######</td></tr></table>
373
+
374
+ Table 3: Prompts and instructions used for LLMs to evaluate and annotate NLG tasks.
375
+
376
+ follow a bivariate normal distribution: $x_{ij}, z_{ij} \sim \mathcal{N}(\mu_i^m, \mu_i^h, \sigma_i^m, \sigma_i^h, \rho_i)$ , where $\rho_i$<sup>12</sup> controls the correlation between metrics and humans within a single system. Based on our observations and those of Shen et al. (2023), this correlation varies across different systems for most evaluation metrics. For simplicity, we assume $\rho_i$ follows a truncated normal distribution: $\rho_i \sim \mathcal{N}(\mu_{\rho_{item}}, \sigma_{\rho_{item}})$ . Since item-level correlation is defined as the mean correlation coefficient across different systems, its value in existing datasets can be viewed as an estimate of $\mu_{\rho_{item}}$ . Furthermore, we assume that $\mu_i^m$ and $\mu_i^h$ of the above bivariate normal distribution follow another bivariate normal distribution: $\mu_i^m, \mu_i^h \sim \mathcal{N}(\mu^m, \mu^h, \sigma^m, \sigma^h, \rho_{sys})$ , where $\rho_{sys}$ controls the correlation between $\mu_i^m$ and $\mu_i^h$ . Similarly, the system-level correlation in real-world scenarios can be seen as an estimate of $\rho_{sys}$ , because $\frac{1}{M} \sum_{j=1}^{M} x_{ij}$ and $\frac{1}{M} \sum_{j=1}^{M} z_{ij}$ in the definition of system-level correlation are viewed as estimates of $\mu_i^m$ and $\mu_i^h$ .
377
+
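+ Putting these assumptions together, the hierarchical sampling model can be summarized as:
+
+ $$
+ (\mu_i^m, \mu_i^h) \sim \mathcal{N}(\mu^m, \mu^h, \sigma^m, \sigma^h, \rho_{sys}), \quad \rho_i \sim \mathcal{N}(\mu_{\rho_{item}}, \sigma_{\rho_{item}}), \quad (x_{ij}, z_{ij}) \sim \mathcal{N}(\mu_i^m, \mu_i^h, \sigma_i^m, \sigma_i^h, \rho_i),
+ $$
+
+ where $\rho_i$ is truncated so that it remains a valid correlation coefficient.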
378
+ Although the assumption of continuous scores makes it convenient to model and sample data, we need to analyze the impact of score granularity, which requires further discretization of the data. We therefore follow the practice of Onoshima et al. (2019), who deal with a similar situation, and sample $G^{m} - 1$ and $G^{h} - 1$ thresholds from the uniform distributions
379
+
380
+ $U(\mu^{m} - \sigma^{m},\mu^{m} + \sigma^{m})$ and $U(\mu^h -\sigma^h,\mu^h +\sigma^h)$ , respectively, to discretize them. Algorithm 3 shows the pseudocode for the entire sampling process.
381
+
382
+ Algorithm 3 Statistical Simulation
383
+ Input: $\mu^m,\mu^h,\sigma^m,\sigma^h,\sigma_1^m,\dots,\sigma_N^m,\sigma_1^h,\dots,\sigma_N^h\in \mathbb{R},N,$ $M,G^{m},G^{h},T_{1},T_{2}\in \mathbb{N},\rho_{sys},\mu_{\rho_{item}},\sigma_{\rho_{item}},C.$
384
+ Output: Correlation coefficient
385
+ $R\gets$ an empty list
386
+ for $T_{1}$ iterations do
387
+ $X^{s},Z^{s}\gets$ empty $N\times M$ matrices
388
+ for $i\in \{1,\ldots ,N\}$ do
389
+ sample $\mu_i^m,\mu_i^h\sim \mathcal{N}(\mu^m,\mu^h,\sigma^m,\sigma^h,\rho_{sys})$
390
+ sample $\rho_{i}\sim \mathcal{N}(\mu_{\rho_{item}},\sigma_{\rho_{item}})$
391
+ for $j\in \{1,\dots ,M\}$ do
+ sample $x_{ij},z_{ij}\sim \mathcal{N}(\mu_i^m,\mu_i^h,\sigma_i^m,\sigma_i^h,\rho_i)$
+ $X^{s}[i,j]\leftarrow x_{ij}$
+ $Z^{s}[i,j]\leftarrow z_{ij}$
392
+ end for
393
+ end for
394
+ for $T_{2}$ iterations do
395
+ sample $\{t_n^m\}_{n = 1}^{G^m -1}\sim U(\mu^m -\sigma^m,\mu^m +\sigma^m)$
396
+ sample $\{t_n^h\}_{n = 1}^{G^h -1}\sim U(\mu^h -\sigma^h,\mu^h +\sigma^h)$
+ $X^{s}\gets$ DISCRETIZE( $X^{s}$ , $\{t_n^m\}_{n = 1}^{G^m -1}$ )
+ $Z^{s}\gets$ DISCRETIZE( $Z^{s}$ , $\{t_n^h\}_{n = 1}^{G^h -1}$ )
+ $C^{s}\gets C(X^{s},Z^{s})$
397
+ Add $C^s$ to $R$
398
+ end for
399
+ end for
400
+ return AVG(R)
401
+ function DISCRETIZE( $X$ , $\{t_n\}_{n = 1}^{G}$ )
402
+ $Y\gets$ zero $N\times M$ matrix
403
+ for $(i,j)\in \{1,\dots ,N\} \times \{1,\dots ,M\}$ do
404
+ for $k\in \{1,\dots ,G\}$ do
+ if $X[i,j]\leq t_k$ then
405
+ $Y[i,j]\gets k$
+ BREAK
406
+ end if
407
+ end for
408
+ if $Y[i,j]$ is 0 then
409
+ $Y[i,j]\gets G + 1$
410
+ end if
411
+ end for
412
+ return Y
+ end function
413
+
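+ A compact NumPy sketch of one sampling round of Algorithm 3 is given below (an illustrative re-implementation with our own function and variable names). It adopts the simplifications actually used in Appendix C.2, namely a single shared $\sigma^{m}$ and $\sigma^{h}$ across systems, and clips $\rho_i$ to $[-1, 1]$ as a stand-in for the truncated normal distribution.
+
+ ```python
+ import numpy as np
+
+ def simulate_once(mu_m, mu_h, sigma_m, sigma_h, rho_sys,
+                   mu_rho_item, sigma_rho_item, N, M, G_m, G_h,
+                   corr_fn, rng=None):
+     # One simulated meta-evaluation dataset: per-system means from a bivariate
+     # normal, per-item metric/human scores from another bivariate normal,
+     # then discretization to G_m / G_h levels with random thresholds.
+     rng = np.random.default_rng(rng)
+     X, Z = np.empty((N, M)), np.empty((N, M))
+     sys_cov = [[sigma_m**2, rho_sys * sigma_m * sigma_h],
+                [rho_sys * sigma_m * sigma_h, sigma_h**2]]
+     for i in range(N):
+         mu_im, mu_ih = rng.multivariate_normal([mu_m, mu_h], sys_cov)
+         rho_i = float(np.clip(rng.normal(mu_rho_item, sigma_rho_item), -1.0, 1.0))
+         item_cov = [[sigma_m**2, rho_i * sigma_m * sigma_h],
+                     [rho_i * sigma_m * sigma_h, sigma_h**2]]
+         samples = rng.multivariate_normal([mu_im, mu_ih], item_cov, size=M)
+         X[i], Z[i] = samples[:, 0], samples[:, 1]
+     # G - 1 random thresholds around the global means, following Onoshima et al. (2019)
+     t_m = np.sort(rng.uniform(mu_m - sigma_m, mu_m + sigma_m, G_m - 1))
+     t_h = np.sort(rng.uniform(mu_h - sigma_h, mu_h + sigma_h, G_h - 1))
+     return corr_fn(np.digitize(X, t_m), np.digitize(Z, t_h))
+ ```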
414
+ # C.2 Experimental Settings
415
+
416
+ For the data collected in Section 3, all evaluation scores from humans and metrics are normalized to the 0-1 scale for parameter estimation, with results shown in Tables 4 and 5. We use the data where GPT-4-Turbo serves as the evaluation metric to estimate the parameters, averaging the values across different meta-evaluation sets: $\mu^m = \mu_1^m = \dots = \mu_N^m = 0.47$ , $\mu^h = \mu_1^h = \dots = \mu_N^h = 0.65$ , $\sigma^m = \sigma_1^m = \dots = \sigma_N^m = 0.16$ , $\sigma^h = \sigma_1^h = \dots = \sigma_N^h = 0.14$ , $\sigma_{\rho_{item}} = 0.14$ , $\rho_{sys} = 0.92$ and $\mu_{\rho_{item}} = 0.35$ . For the numbers of systems and input documents, we consider $N = 16$ and $M = 100$ , which is typical and the same as SummEval. The corresponding $G^h$ is set to two common values, 13 and 250, and $G^m$ is selected from 2 to 15 and from 5 to 100, respectively. Due to the huge amount of computation, we set $T_1 = T_2 = 100$ .
417
+
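+ For instance, one point of the simulation curves can be approximated with the illustrative `simulate_once` and `global_pearson` sketches above by plugging in the estimated parameters and averaging over repeated rounds (a simplified loop that resamples everything each round, rather than nesting $T_1$ and $T_2$ as in Algorithm 3):
+
+ ```python
+ results = [simulate_once(mu_m=0.47, mu_h=0.65, sigma_m=0.16, sigma_h=0.14,
+                          rho_sys=0.92, mu_rho_item=0.35, sigma_rho_item=0.14,
+                          N=16, M=100, G_m=5, G_h=13, corr_fn=global_pearson)
+            for _ in range(100)]
+ estimate = sum(results) / len(results)
+ ```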
418
+ # D Other Figures and Tables
419
+
420
+ Given the limited space of the main text, we present the complete experimental results here.
421
+
422
+ ![](images/1ce132a6523ecc40f994d32d679fd606adacffd91d68f45198f8ad489df498ac.jpg)
423
+ Figure 6: The consistency of evaluation metric rankings using different correlation measures on HANNA, calculated through Kendall's correlation coefficient.
424
+
425
+ ![](images/d44518720d499331ede15baabe937eb186d7842f4b8d96a934dcf391b8fa397b.jpg)
426
+ Figure 7: The consistency of evaluation metric rankings using different correlation measures on MANS, calculated through Kendall's correlation coefficient.
427
+
428
+ ![](images/4a4c6c43b98dc7e49ad2c66068a24334174cfec8b26d959013d72b3b0237209b.jpg)
429
+ Figure 8: The consistency of evaluation metric rankings using different correlation measures on USR, calculated through Kendall's correlation coefficient.
430
+
431
+ ![](images/7614519d6adb04dbd6fd90cc9b394a22a9272ec26543859bb6d17778541447aa.jpg)
432
+
433
+ ![](images/c09bdfc94d883cbd7fb267ed1ce9a7566931b222d122b24e4f4111f4e505f47c.jpg)
434
+ Figure 9: The consistency of evaluation metric rankings using different correlation measures on WebNLG2020, calculated through Kendall's correlation coefficient.
435
+ Figure 10: The consistency of evaluation metric rankings using different correlation measures on WMT23-ZH2EN-NEWS, calculated through Kendall's correlation coefficient.
436
+
437
+ <table><tr><td>Dataset</td><td>Subset</td><td>Aspect</td><td>Label</td><td>μh</td><td>σh</td><td>Gh</td><td>Scale</td><td>Tie Ratio</td></tr><tr><td>SummEval</td><td>CNN/DM</td><td>Coherence</td><td>D1</td><td>0.60</td><td>0.15</td><td>13</td><td>1-5</td><td>0.10</td></tr><tr><td>SummEval</td><td>CNN/DM</td><td>Consistency</td><td>D2</td><td>0.92</td><td>0.14</td><td>13</td><td>1-5</td><td>0.67</td></tr><tr><td>SummEval</td><td>CNN/DM</td><td>Fluency</td><td>D3</td><td>0.92</td><td>0.09</td><td>13</td><td>1-5</td><td>0.53</td></tr><tr><td>SummEval</td><td>CNN/DM</td><td>Relevance</td><td>D4</td><td>0.69</td><td>0.09</td><td>13</td><td>1-5</td><td>0.13</td></tr><tr><td>WMT23</td><td>GeneralMT2023_NEWS</td><td>Overall Quality</td><td>D5</td><td>0.84</td><td>0.05</td><td>251</td><td>0-250</td><td>0.11</td></tr><tr><td>HANNA</td><td>WP</td><td>Coherence</td><td>D6</td><td>0.54</td><td>0.13</td><td>13</td><td>1-5</td><td>0.13</td></tr><tr><td>HANNA</td><td>WP</td><td>Complexity</td><td>D7</td><td>0.36</td><td>0.14</td><td>13</td><td>1-5</td><td>0.12</td></tr><tr><td>HANNA</td><td>WP</td><td>Empathy</td><td>D8</td><td>0.32</td><td>0.09</td><td>13</td><td>1-5</td><td>0.13</td></tr><tr><td>HANNA</td><td>WP</td><td>Engagement</td><td>D9</td><td>0.42</td><td>0.13</td><td>13</td><td>1-5</td><td>0.12</td></tr><tr><td>HANNA</td><td>WP</td><td>Relevance</td><td>D10</td><td>0.41</td><td>0.14</td><td>13</td><td>1-5</td><td>0.10</td></tr><tr><td>HANNA</td><td>WP</td><td>Surprise</td><td>D11</td><td>0.28</td><td>0.10</td><td>13</td><td>1-5</td><td>0.15</td></tr><tr><td>MANS</td><td>ROC</td><td>Overall</td><td>D12</td><td>0.38</td><td>0.15</td><td>21</td><td>1-3</td><td>0.06</td></tr><tr><td>MANS</td><td>WP</td><td>Overall</td><td>D13</td><td>0.45</td><td>0.08</td><td>21</td><td>1-3</td><td>0.08</td></tr><tr><td>USR</td><td>Persona-Chat</td><td>Engaging</td><td>D14</td><td>0.76</td><td>0.13</td><td>7</td><td>1-3</td><td>0.24</td></tr><tr><td>USR</td><td>Persona-Chat</td><td>Maintains Context</td><td>D15</td><td>0.74</td><td>0.19</td><td>7</td><td>1-3</td><td>0.31</td></tr><tr><td>USR</td><td>Persona-Chat</td><td>Natural</td><td>D16</td><td>0.89</td><td>0.08</td><td>7</td><td>1-3</td><td>0.48</td></tr><tr><td>USR</td><td>Persona-Chat</td><td>Overall</td><td>D17</td><td>0.69</td><td>0.19</td><td>13</td><td>1-5</td><td>0.11</td></tr><tr><td>USR</td><td>Persona-Chat</td><td>Understandable</td><td>D18</td><td>0.96</td><td>0.04</td><td>4</td><td>0-1</td><td>0.84</td></tr><tr><td>USR</td><td>Persona-Chat</td><td>Uses Knowledge</td><td>D19</td><td>0.45</td><td>0.35</td><td>4</td><td>0-1</td><td>0.38</td></tr><tr><td>USR</td><td>Topical-Chat</td><td>Engaging</td><td>D20</td><td>0.57</td><td>0.25</td><td>7</td><td>1-3</td><td>0.15</td></tr><tr><td>USR</td><td>Topical-Chat</td><td>Maintains Context</td><td>D21</td><td>0.62</td><td>0.23</td><td>7</td><td>1-3</td><td>0.17</td></tr><tr><td>USR</td><td>Topical-Chat</td><td>Natural</td><td>D22</td><td>0.64</td><td>0.21</td><td>7</td><td>1-3</td><td>0.17</td></tr><tr><td>USR</td><td>Topical-Chat</td><td>Overall</td><td>D23</td><td>0.54</td><td>0.27</td><td>13</td><td>1-5</td><td>0.08</td></tr><tr><td>USR</td><td>Topical-Chat</td><td>Understandable</td><td>D24</td><td>0.67</td><td>0.23</td><td>4</td><td>0-1</td><td>0.33</td></tr><tr><td>USR</td><td>Topical-Chat</td><td>Uses 
Knowledge</td><td>D25</td><td>0.55</td><td>0.24</td><td>4</td><td>0-1</td><td>0.33</td></tr><tr><td>WebNLG2020</td><td>WebNLG2020</td><td>Correctness</td><td>D26</td><td>0.88</td><td>0.07</td><td>401</td><td>0-100</td><td>0.03</td></tr><tr><td>WebNLG2020</td><td>WebNLG2020</td><td>Data Coverage</td><td>D27</td><td>0.9</td><td>0.06</td><td>401</td><td>0-100</td><td>0.04</td></tr><tr><td>WebNLG2020</td><td>WebNLG2020</td><td>Fluency</td><td>D28</td><td>0.83</td><td>0.06</td><td>401</td><td>0-100</td><td>0.01</td></tr><tr><td>WebNLG2020</td><td>WebNLG2020</td><td>Relevance</td><td>D29</td><td>0.91</td><td>0.05</td><td>401</td><td>0-100</td><td>0.05</td></tr><tr><td>WebNLG2020</td><td>WebNLG2020</td><td>Text Structure</td><td>D30</td><td>0.87</td><td>0.05</td><td>401</td><td>0-100</td><td>0.02</td></tr></table>
438
+
439
+ Table 4: Information of all the meta-evaluation datasets and estimated parameters.
440
+
441
+ <table><tr><td>Metric</td><td>μm</td><td>σm</td><td>ρsys</td><td>μρitem</td><td>σρitem</td><td>rN</td><td>rN×M</td><td>Gm</td><td>Scale</td><td>Tie Ratio</td></tr><tr><td>GPT3.5_T=0_0_100</td><td>0.44</td><td>0.08</td><td>0.88</td><td>0.22</td><td>0.13</td><td>0.41</td><td>0.38</td><td>101</td><td>0-100</td><td>0.35</td></tr><tr><td>GPT3.5_T=0_1_10</td><td>0.33</td><td>0.08</td><td>0.83</td><td>0.20</td><td>0.13</td><td>0.41</td><td>0.35</td><td>10</td><td>1-10</td><td>0.46</td></tr><tr><td>GPT3.5_T=0_1_5</td><td>0.32</td><td>0.10</td><td>0.86</td><td>0.21</td><td>0.12</td><td>0.40</td><td>0.35</td><td>5</td><td>1-5</td><td>0.49</td></tr><tr><td>GPT3.5_T=1_0_100</td><td>0.45</td><td>0.08</td><td>0.89</td><td>0.27</td><td>0.14</td><td>0.48</td><td>0.44</td><td>1001</td><td>0-100</td><td>0.03</td></tr><tr><td>GPT3.5_T=1_1_10</td><td>0.33</td><td>0.08</td><td>0.88</td><td>0.26</td><td>0.13</td><td>0.46</td><td>0.41</td><td>91</td><td>1-10</td><td>0.10</td></tr><tr><td>GPT3.5_T=1_1_5</td><td>0.32</td><td>0.09</td><td>0.88</td><td>0.25</td><td>0.13</td><td>0.46</td><td>0.41</td><td>41</td><td>1-5</td><td>0.14</td></tr><tr><td>GPT4_T=0_0_100</td><td>0.51</td><td>0.15</td><td>0.92</td><td>0.35</td><td>0.14</td><td>0.56</td><td>0.55</td><td>101</td><td>0-100</td><td>0.22</td></tr><tr><td>GPT4_T=0_1_10</td><td>0.45</td><td>0.15</td><td>0.92</td><td>0.34</td><td>0.13</td><td>0.55</td><td>0.53</td><td>10</td><td>1-10</td><td>0.28</td></tr><tr><td>GPT4_T=0_1_5</td><td>0.44</td><td>0.18</td><td>0.92</td><td>0.31</td><td>0.14</td><td>0.55</td><td>0.51</td><td>5</td><td>1-5</td><td>0.45</td></tr><tr><td>GPT4_T=1_0_100</td><td>0.51</td><td>0.14</td><td>0.93</td><td>0.39</td><td>0.14</td><td>0.59</td><td>0.58</td><td>1001</td><td>0-100</td><td>0.02</td></tr><tr><td>GPT4_T=1_1_10</td><td>0.45</td><td>0.15</td><td>0.93</td><td>0.38</td><td>0.13</td><td>0.59</td><td>0.56</td><td>91</td><td>1-10</td><td>0.06</td></tr><tr><td>GPT4_T=1_1_5</td><td>0.43</td><td>0.17</td><td>0.92</td><td>0.36</td><td>0.14</td><td>0.58</td><td>0.55</td><td>41</td><td>1-5</td><td>0.18</td></tr><tr><td>GPT4o_T=0_0_100</td><td>0.48</td><td>0.14</td><td>0.92</td><td>0.37</td><td>0.13</td><td>0.57</td><td>0.55</td><td>101</td><td>0-100</td><td>0.21</td></tr><tr><td>GPT4o_T=0_1_10</td><td>0.42</td><td>0.15</td><td>0.91</td><td>0.35</td><td>0.12</td><td>0.55</td><td>0.53</td><td>10</td><td>1-10</td><td>0.27</td></tr><tr><td>GPT4o_T=0_1_5</td><td>0.40</td><td>0.17</td><td>0.91</td><td>0.32</td><td>0.13</td><td>0.54</td><td>0.50</td><td>5</td><td>1-5</td><td>0.43</td></tr><tr><td>GPT4o_T=1_0_100</td><td>0.48</td><td>0.14</td><td>0.92</td><td>0.39</td><td>0.13</td><td>0.59</td><td>0.58</td><td>1001</td><td>0-100</td><td>0.02</td></tr><tr><td>GPT4o_T=1_1_10</td><td>0.42</td><td>0.14</td><td>0.92</td><td>0.38</td><td>0.13</td><td>0.59</td><td>0.56</td><td>91</td><td>1-10</td><td>0.05</td></tr><tr><td>GPT4o_T=1_1_5</td><td>0.40</td><td>0.16</td><td>0.91</td><td>0.35</td><td>0.13</td><td>0.57</td><td>0.54</td><td>41</td><td>1-5</td><td>0.16</td></tr><tr><td>BERTScore-f</td><td>0.87</td><td>0.04</td><td>0.60</td><td>-0.02</td><td>0.10</td><td>0.26</td><td>0.22</td><td>/</td><td>/</td><td>0.01</td></tr><tr><td>BERTScore-p</td><td>0.87</td><td>0.04</td><td>0.53</td><td>-0.02</td><td>0.10</td><td>0.23</td><td>0.20</td><td>/</td><td>/</td><td>0.01</td></tr><tr><td>BERTScore-r</td><td>0.88</td><td>0.04</td><td>0.67</td><td>-0.01</td><td>0.11</td><td>0.29</td><td>0.24</td><td>/</td><td>/</td><td>0.01</td></tr><tr><td>BLEU</td><td>0.15</td><td>0.24</td
><td>0.52</td><td>-0.01</td><td>0.09</td><td>0.24</td><td>0.21</td><td>/</td><td>/</td><td>0.53</td></tr><tr><td>CHRF</td><td>0.37</td><td>0.20</td><td>0.58</td><td>0.00</td><td>0.10</td><td>0.27</td><td>0.24</td><td>/</td><td>/</td><td>0.02</td></tr><tr><td>COMET</td><td>0.60</td><td>0.12</td><td>0.73</td><td>-0.01</td><td>0.11</td><td>0.32</td><td>0.27</td><td>/</td><td>/</td><td>0.00</td></tr><tr><td>MoverScore</td><td>0.61</td><td>0.11</td><td>0.54</td><td>-0.02</td><td>0.11</td><td>0.24</td><td>0.22</td><td>/</td><td>/</td><td>0.00</td></tr><tr><td>ROUGE-1</td><td>0.41</td><td>0.20</td><td>0.53</td><td>-0.02</td><td>0.11</td><td>0.25</td><td>0.22</td><td>/</td><td>/</td><td>0.02</td></tr><tr><td>ROUGE-2</td><td>0.23</td><td>0.23</td><td>0.54</td><td>-0.01</td><td>0.10</td><td>0.25</td><td>0.22</td><td>/</td><td>/</td><td>0.15</td></tr><tr><td>ROUGE-L</td><td>0.33</td><td>0.21</td><td>0.53</td><td>-0.03</td><td>0.10</td><td>0.23</td><td>0.20</td><td>/</td><td>/</td><td>0.02</td></tr></table>
442
+
443
+ Table 5: Metrics information and estimated parameters averaged across meta-evaluation datasets. We do not use the output of BLEURT and BARTScore-(s-h, r-h, h-r) to estimate the parameters because their scores do not have clear ranges.
444
+
445
+ <table><tr><td>Grouping</td><td>Correlation</td><td>D1</td><td>D2</td><td>D3</td><td>D4</td><td>D5</td><td>D6</td><td>D7</td><td>D8</td><td>D9</td><td>D10</td></tr><tr><td>System</td><td>Pearson</td><td>0.074 (5)</td><td>0.054 (2)</td><td>0.148 (10)</td><td>0.150 (10)</td><td>0.056 (7)</td><td>0.092 (3)</td><td>0.035 (1)</td><td>0.065 (1)</td><td>0.055 (1)</td><td>0.125 (7)</td></tr><tr><td>System</td><td>Spearman</td><td>0.137 (9)</td><td>0.141 (11)</td><td>0.135 (7)</td><td>0.174 (11)</td><td>0.159 (11)</td><td>0.195 (11)</td><td>0.174 (9)</td><td>0.213 (11)</td><td>0.175 (8)</td><td>0.209 (11)</td></tr><tr><td>System</td><td>Kendall</td><td>0.178 (12)</td><td>0.138 (10)</td><td>0.172 (12)</td><td>0.228 (12)</td><td>0.171 (12)</td><td>0.251 (12)</td><td>0.214 (12)</td><td>0.263 (12)</td><td>0.219 (12)</td><td>0.246 (12)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.053 (1)</td><td>0.064 (3)</td><td>0.059 (2)</td><td>0.082 (2)</td><td>0.077 (9)</td><td>0.106 (6)</td><td>0.085 (5)</td><td>0.144 (7)</td><td>0.093 (3)</td><td>0.127 (8)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.074 (4)</td><td>0.086 (6)</td><td>0.096 (5)</td><td>0.103 (6)</td><td>0.085 (10)</td><td>0.117 (7)</td><td>0.097 (6)</td><td>0.121 (5)</td><td>0.117 (7)</td><td>0.096 (2)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.074 (6)</td><td>0.070 (4)</td><td>0.099 (6)</td><td>0.107 (8)</td><td>0.061 (8)</td><td>0.105 (5)</td><td>0.101 (7)</td><td>0.122 (6)</td><td>0.116 (6)</td><td>0.107 (4)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.064 (2)</td><td>0.054 (1)</td><td>0.058 (1)</td><td>0.079 (1)</td><td>0.035 (3)</td><td>0.089 (2)</td><td>0.070 (3)</td><td>0.121 (4)</td><td>0.085 (2)</td><td>0.084 (1)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.071 (3)</td><td>0.114 (8)</td><td>0.090 (3)</td><td>0.100 (5)</td><td>0.023 (1)</td><td>0.095 (4)</td><td>0.067 (2)</td><td>0.095 (2)</td><td>0.100 (5)</td><td>0.100 (3)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.085 (7)</td><td>0.084 (5)</td><td>0.096 (4)</td><td>0.097 (4)</td><td>0.053 (6)</td><td>0.084 (1)</td><td>0.079 (4)</td><td>0.115 (3)</td><td>0.093 (4)</td><td>0.117 (5)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.135 (8)</td><td>0.102 (7)</td><td>0.146 (9)</td><td>0.094 (3)</td><td>0.038 (4)</td><td>0.138 (8)</td><td>0.162 (8)</td><td>0.175 (8)</td><td>0.175 (9)</td><td>0.122 (6)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.147 (10)</td><td>0.134 (9)</td><td>0.149 (11)</td><td>0.106 (7)</td><td>0.027 (2)</td><td>0.141 (9)</td><td>0.180 (10)</td><td>0.177 (9)</td><td>0.179 (11)</td><td>0.132 (9)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.151 (11)</td><td>0.147 (12)</td><td>0.145 (8)</td><td>0.108 (9)</td><td>0.041 (5)</td><td>0.147 (10)</td><td>0.206 (11)</td><td>0.190 (10)</td><td>0.179 (10)</td><td>0.148 (10)</td></tr><tr><td>Grouping</td><td>Correlation</td><td>D11</td><td>D12</td><td>D13</td><td>D14</td><td>D15</td><td>D16</td><td>D17</td><td>D18</td><td>D19</td><td>D20</td></tr><tr><td>System</td><td>Pearson</td><td>0.062 (1)</td><td>0.214 (10)</td><td>0.170 (7)</td><td>0.171 (5)</td><td>0.162 (10)</td><td>0.147 (4)</td><td>0.244 (10)</td><td>0.451 (10)</td><td>0.209 (1)</td><td>0.101 (4)</td></tr><tr><td>System</td><td>Spearman</td><td>0.188 (8)</td><td>0.433 (11)</td><td>0.424 (11)</td><td>0.374 (11)</td><td>0.355 (11)</td><td>0.359 (11)</td><td>0.375 (11)</td><td>0.544 (11)</td><td>0.349 (3)</td><td>0.579 (11)</td></tr><tr><td>System</td><td>Kendall</td><td>0.225 (12)</td><td>0.444 (12)</td><td>0.505 (12)</td><td>0.386 
(12)</td><td>0.358 (12)</td><td>0.374 (12)</td><td>0.396 (12)</td><td>0.623 (12)</td><td>0.361 (4)</td><td>0.651 (12)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.135 (5)</td><td>0.102 (4)</td><td>0.135 (4)</td><td>0.167 (4)</td><td>0.119 (3)</td><td>0.170 (5)</td><td>0.116 (3)</td><td>0.351 (7)</td><td>0.336 (2)</td><td>0.092 (2)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.123 (4)</td><td>0.114 (6)</td><td>0.141 (5)</td><td>0.182 (6)</td><td>0.145 (7)</td><td>0.186 (6)</td><td>0.158 (7)</td><td>0.392 (9)</td><td>0.414 (5)</td><td>0.138 (6)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.151 (7)</td><td>0.106 (5)</td><td>0.145 (6)</td><td>0.189 (7)</td><td>0.157 (8)</td><td>0.194 (9)</td><td>0.164 (8)</td><td>0.373 (8)</td><td>0.414 (6)</td><td>0.143 (7)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.136 (6)</td><td>0.072 (1)</td><td>0.107 (1)</td><td>0.093 (1)</td><td>0.127 (5)</td><td>0.108 (1)</td><td>0.088 (1)</td><td>0.165 (1)</td><td>0.540 (9)</td><td>0.080 (1)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.105 (2)</td><td>0.072 (2)</td><td>0.116 (3)</td><td>0.134 (3)</td><td>0.094 (1)</td><td>0.121 (3)</td><td>0.092 (2)</td><td>0.219 (3)</td><td>0.487 (8)</td><td>0.101 (3)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.107 (3)</td><td>0.095 (3)</td><td>0.110 (2)</td><td>0.129 (2)</td><td>0.110 (2)</td><td>0.119 (2)</td><td>0.128 (4)</td><td>0.193 (2)</td><td>0.483 (7)</td><td>0.104 (5)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.201 (9)</td><td>0.148 (7)</td><td>0.192 (9)</td><td>0.215 (8)</td><td>0.127 (4)</td><td>0.192 (8)</td><td>0.148 (5)</td><td>0.232 (4)</td><td>0.569 (10)</td><td>0.192 (8)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.214 (10)</td><td>0.152 (8)</td><td>0.186 (8)</td><td>0.224 (9)</td><td>0.140 (6)</td><td>0.192 (7)</td><td>0.152 (6)</td><td>0.265 (6)</td><td>0.603 (12)</td><td>0.227 (9)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.219 (11)</td><td>0.182 (9)</td><td>0.196 (10)</td><td>0.234 (10)</td><td>0.157 (9)</td><td>0.210 (10)</td><td>0.180 (9)</td><td>0.262 (5)</td><td>0.601 (11)</td><td>0.232 (10)</td></tr><tr><td>Grouping</td><td>Correlation</td><td>D21</td><td>D22</td><td>D23</td><td>D24</td><td>D25</td><td>D26</td><td>D27</td><td>D28</td><td>D29</td><td>D30</td></tr><tr><td>System</td><td>Pearson</td><td>0.157 (7)</td><td>0.140 (6)</td><td>0.134 (6)</td><td>0.149 (7)</td><td>0.159 (1)</td><td>0.092 (3)</td><td>0.076 (2)</td><td>0.155 (10)</td><td>0.109 (3)</td><td>0.127 (10)</td></tr><tr><td>System</td><td>Spearman</td><td>0.563 (11)</td><td>0.616 (11)</td><td>0.642 (11)</td><td>0.573 (11)</td><td>0.602 (11)</td><td>0.262 (11)</td><td>0.155 (10)</td><td>0.314 (11)</td><td>0.472 (11)</td><td>0.282 (11)</td></tr><tr><td>System</td><td>Kendall</td><td>0.595 (12)</td><td>0.669 (12)</td><td>0.711 (12)</td><td>0.622 (12)</td><td>0.698 (12)</td><td>0.290 (12)</td><td>0.173 (12)</td><td>0.408 (12)</td><td>0.532 (12)</td><td>0.337 (12)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.117 (4)</td><td>0.110 (3)</td><td>0.106 (3)</td><td>0.107 (1)</td><td>0.263 (6)</td><td>0.110 (6)</td><td>0.115 (3)</td><td>0.080 (3)</td><td>0.123 (4)</td><td>0.092 (5)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.158 (8)</td><td>0.136 (4)</td><td>0.123 (5)</td><td>0.147 (6)</td><td>0.271 (7)</td><td>0.138 (10)</td><td>0.145 (9)</td><td>0.101 (8)</td><td>0.184 (10)</td><td>0.125 (9)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.167 (10)</td><td>0.140 (5)</td><td>0.135 (7)</td><td>0.145 (5)</td><td>0.259 (5)</td><td>0.128 (8)</td><td>0.129 
(6)</td><td>0.084 (5)</td><td>0.174 (9)</td><td>0.106 (8)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.117 (3)</td><td>0.096 (1)</td><td>0.085 (1)</td><td>0.114 (2)</td><td>0.225 (3)</td><td>0.064 (1)</td><td>0.064 (1)</td><td>0.054 (1)</td><td>0.074 (1)</td><td>0.061 (2)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.106 (2)</td><td>0.104 (2)</td><td>0.104 (2)</td><td>0.117 (3)</td><td>0.233 (4)</td><td>0.088 (2)</td><td>0.120 (5)</td><td>0.068 (2)</td><td>0.135 (5)</td><td>0.058 (1)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.096 (1)</td><td>0.153 (7)</td><td>0.109 (4)</td><td>0.118 (4)</td><td>0.221 (2)</td><td>0.107 (4)</td><td>0.140 (8)</td><td>0.093 (7)</td><td>0.155 (6)</td><td>0.077 (3)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.129 (5)</td><td>0.168 (8)</td><td>0.178 (9)</td><td>0.193 (10)</td><td>0.332 (8)</td><td>0.127 (7)</td><td>0.118 (4)</td><td>0.083 (4)</td><td>0.108 (2)</td><td>0.095 (6)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.164 (9)</td><td>0.197 (9)</td><td>0.178 (8)</td><td>0.180 (8)</td><td>0.364 (9)</td><td>0.108 (5)</td><td>0.134 (7)</td><td>0.091 (6)</td><td>0.171 (7)</td><td>0.087 (4)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.157 (6)</td><td>0.247 (10)</td><td>0.182 (10)</td><td>0.191 (9)</td><td>0.367 (10)</td><td>0.135 (9)</td><td>0.163 (11)</td><td>0.110 (9)</td><td>0.174 (8)</td><td>0.104 (7)</td></tr></table>
446
+
447
+ Table 6: The complete DP values of different correlation measures on all meta-evaluation datasets, computed with the permutation test (lower is better). These values are visualized in Figure 2.
448
+
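+ To make the permutation test concrete: the DP statistic itself is not defined in this excerpt, so the snippet below is only a minimal sketch of the standard label-shuffling permutation test it relies on, estimating a two-sided p-value for the correlation between metric and human scores. The function name `permutation_pvalue` and the defaults `n_perm=1000`, `seed=0` are illustrative choices, not the paper's implementation.
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def permutation_pvalue(metric_scores, human_scores, corr="pearson",
+                        n_perm=1000, seed=0):
+     """Two-sided permutation test for a correlation measure.
+
+     The observed correlation is compared against the correlations
+     obtained after repeatedly shuffling the human scores.
+     """
+     corr_fn = {"pearson": stats.pearsonr,
+                "spearman": stats.spearmanr,
+                "kendall": stats.kendalltau}[corr]
+     metric_scores = np.asarray(metric_scores, dtype=float)
+     human_scores = np.asarray(human_scores, dtype=float)
+
+     observed = corr_fn(metric_scores, human_scores)[0]
+     rng = np.random.default_rng(seed)
+     hits = 0
+     for _ in range(n_perm):
+         shuffled = rng.permutation(human_scores)
+         if abs(corr_fn(metric_scores, shuffled)[0]) >= abs(observed):
+             hits += 1
+     # Add-one smoothing keeps the estimated p-value strictly positive.
+     return (hits + 1) / (n_perm + 1)
+ ```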
449
+ <table><tr><td>Grouping</td><td>Correlation</td><td>D1</td><td>D2</td><td>D3</td><td>D4</td><td>D5</td><td>D6</td><td>D7</td><td>D8</td><td>D9</td><td>D10</td></tr><tr><td>System</td><td>Pearson</td><td>0.844 (4)</td><td>0.878 (2)</td><td>0.673 (12)</td><td>0.637 (10)</td><td>0.853 (8)</td><td>0.777 (7)</td><td>0.928 (1)</td><td>0.825 (1)</td><td>0.876 (1)</td><td>0.643 (10)</td></tr><tr><td>System</td><td>Spearman</td><td>0.695 (8)</td><td>0.756 (10)</td><td>0.724 (7)</td><td>0.622 (11)</td><td>0.657 (12)</td><td>0.581 (11)</td><td>0.680 (8)</td><td>0.542 (11)</td><td>0.669 (8)</td><td>0.553 (11)</td></tr><tr><td>System</td><td>Kendall</td><td>0.666 (11)</td><td>0.766 (8)</td><td>0.724 (8)</td><td>0.603 (12)</td><td>0.681 (11)</td><td>0.533 (12)</td><td>0.680 (9)</td><td>0.532 (12)</td><td>0.637 (12)</td><td>0.534 (12)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.901 (1)</td><td>0.876 (3)</td><td>0.887 (1)</td><td>0.843 (1)</td><td>0.854 (7)</td><td>0.844 (2)</td><td>0.842 (5)</td><td>0.736 (7)</td><td>0.828 (2)</td><td>0.784 (5)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.867 (2)</td><td>0.842 (5)</td><td>0.824 (3)</td><td>0.809 (2)</td><td>0.852 (9)</td><td>0.815 (6)</td><td>0.841 (6)</td><td>0.789 (4)</td><td>0.798 (6)</td><td>0.852 (1)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.862 (3)</td><td>0.873 (4)</td><td>0.815 (4)</td><td>0.799 (3)</td><td>0.897 (3)</td><td>0.836 (4)</td><td>0.833 (7)</td><td>0.791 (3)</td><td>0.792 (7)</td><td>0.833 (2)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.843 (5)</td><td>0.889 (1)</td><td>0.876 (2)</td><td>0.791 (4)</td><td>0.880 (4)</td><td>0.833 (5)</td><td>0.854 (4)</td><td>0.752 (6)</td><td>0.820 (4)</td><td>0.825 (3)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.836 (6)</td><td>0.765 (9)</td><td>0.804 (5)</td><td>0.752 (5)</td><td>0.925 (1)</td><td>0.839 (3)</td><td>0.877 (2)</td><td>0.819 (2)</td><td>0.810 (5)</td><td>0.805 (4)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.803 (7)</td><td>0.825 (6)</td><td>0.789 (6)</td><td>0.751 (6)</td><td>0.845 (10)</td><td>0.862 (1)</td><td>0.860 (3)</td><td>0.780 (5)</td><td>0.823 (3)</td><td>0.773 (6)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.679 (9)</td><td>0.775 (7)</td><td>0.679 (10)</td><td>0.746 (7)</td><td>0.861 (6)</td><td>0.749 (9)</td><td>0.669 (10)</td><td>0.652 (9)</td><td>0.657 (9)</td><td>0.741 (7)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.673 (10)</td><td>0.704 (11)</td><td>0.677 (11)</td><td>0.730 (8)</td><td>0.911 (2)</td><td>0.754 (8)</td><td>0.653 (11)</td><td>0.669 (8)</td><td>0.654 (10)</td><td>0.731 (8)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.655 (12)</td><td>0.673 (12)</td><td>0.680 (9)</td><td>0.722 (9)</td><td>0.878 (5)</td><td>0.740 (10)</td><td>0.601 (12)</td><td>0.639 (10)</td><td>0.654 (11)</td><td>0.709 (9)</td></tr><tr><td>Grouping</td><td>Correlation</td><td>D11</td><td>D12</td><td>D13</td><td>D14</td><td>D15</td><td>D16</td><td>D17</td><td>D18</td><td>D19</td><td>D20</td></tr><tr><td>System</td><td>Pearson</td><td>0.822 (1)</td><td>0.595 (12)</td><td>0.554 (10)</td><td>0.674 (7)</td><td>0.557 (12)</td><td>0.671 (6)</td><td>0.589 (12)</td><td>0.000 (12)</td><td>0.613 (2)</td><td>0.782 (5)</td></tr><tr><td>System</td><td>Spearman</td><td>0.584 (10)</td><td>0.771 (7)</td><td>0.473 (12)</td><td>0.889 (1)</td><td>0.562 (10)</td><td>0.538 (12)</td><td>0.751 (6)</td><td>0.259 (10)</td><td>0.606 (3)</td><td>0.132 (12)</td></tr><tr><td>System</td><td>Kendall</td><td>0.569 (12)</td><td>0.771 (8)</td><td>0.487 (11)</td><td>0.886 
(2)</td><td>0.558 (11)</td><td>0.592 (11)</td><td>0.752 (5)</td><td>0.150 (11)</td><td>0.619 (1)</td><td>0.148 (11)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.755 (5)</td><td>0.803 (5)</td><td>0.695 (4)</td><td>0.684 (6)</td><td>0.777 (3)</td><td>0.689 (4)</td><td>0.809 (3)</td><td>0.382 (7)</td><td>0.336 (4)</td><td>0.853 (1)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.795 (4)</td><td>0.788 (6)</td><td>0.685 (5)</td><td>0.670 (8)</td><td>0.748 (6)</td><td>0.679 (5)</td><td>0.737 (7)</td><td>0.284 (9)</td><td>0.196 (5)</td><td>0.771 (6)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.733 (6)</td><td>0.805 (4)</td><td>0.671 (6)</td><td>0.651 (9)</td><td>0.723 (8)</td><td>0.659 (7)</td><td>0.723 (9)</td><td>0.318 (8)</td><td>0.192 (6)</td><td>0.763 (7)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.720 (7)</td><td>0.859 (2)</td><td>0.770 (3)</td><td>0.807 (3)</td><td>0.745 (7)</td><td>0.784 (1)</td><td>0.826 (2)</td><td>0.662 (1)</td><td>-0.082 (9)</td><td>0.841 (2)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.803 (2)</td><td>0.878 (1)</td><td>0.772 (2)</td><td>0.745 (5)</td><td>0.828 (1)</td><td>0.780 (2)</td><td>0.834 (1)</td><td>0.588 (3)</td><td>0.065 (7)</td><td>0.823 (3)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.798 (3)</td><td>0.834 (3)</td><td>0.782 (1)</td><td>0.747 (4)</td><td>0.792 (2)</td><td>0.777 (3)</td><td>0.759 (4)</td><td>0.630 (2)</td><td>0.053 (8)</td><td>0.816 (4)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.606 (8)</td><td>0.701 (10)</td><td>0.574 (9)</td><td>0.580 (11)</td><td>0.765 (4)</td><td>0.644 (9)</td><td>0.724 (8)</td><td>0.560 (4)</td><td>-0.112 (10)</td><td>0.618 (8)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.590 (9)</td><td>0.712 (9)</td><td>0.604 (7)</td><td>0.583 (10)</td><td>0.750 (5)</td><td>0.659 (8)</td><td>0.716 (10)</td><td>0.541 (6)</td><td>-0.150 (11)</td><td>0.570 (9)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.573 (11)</td><td>0.648 (11)</td><td>0.574 (8)</td><td>0.562 (12)</td><td>0.721 (9)</td><td>0.623 (10)</td><td>0.663 (11)</td><td>0.550 (5)</td><td>-0.155 (12)</td><td>0.552 (10)</td></tr><tr><td>Grouping</td><td>Correlation</td><td>D21</td><td>D22</td><td>D23</td><td>D24</td><td>D25</td><td>D26</td><td>D27</td><td>D28</td><td>D29</td><td>D30</td></tr><tr><td>System</td><td>Pearson</td><td>0.721 (7)</td><td>0.748 (6)</td><td>0.721 (7)</td><td>0.725 (7)</td><td>0.655 (1)</td><td>0.761 (7)</td><td>0.847 (2)</td><td>0.596 (10)</td><td>0.725 (7)</td><td>0.663 (10)</td></tr><tr><td>System</td><td>Spearman</td><td>0.179 (11)</td><td>0.169 (12)</td><td>0.085 (12)</td><td>0.115 (11)</td><td>0.177 (11)</td><td>0.295 (12)</td><td>0.698 (11)</td><td>0.373 (12)</td><td>0.039 (12)</td><td>0.390 (12)</td></tr><tr><td>System</td><td>Kendall</td><td>0.150 (12)</td><td>0.184 (11)</td><td>0.092 (11)</td><td>0.113 (12)</td><td>0.167 (12)</td><td>0.391 (11)</td><td>0.722 (10)</td><td>0.391 (11)</td><td>0.062 (11)</td><td>0.418 (11)</td></tr><tr><td>Input</td><td>Pearson</td><td>0.794 (3)</td><td>0.809 (2)</td><td>0.823 (2)</td><td>0.827 (1)</td><td>0.518 (5)</td><td>0.803 (3)</td><td>0.802 (4)</td><td>0.861 (2)</td><td>0.786 (2)</td><td>0.846 (2)</td></tr><tr><td>Input</td><td>Spearman</td><td>0.728 (6)</td><td>0.781 (4)</td><td>0.813 (4)</td><td>0.754 (6)</td><td>0.503 (7)</td><td>0.778 (5)</td><td>0.765 (6)</td><td>0.849 (4)</td><td>0.726 (6)</td><td>0.807 (6)</td></tr><tr><td>Input</td><td>Kendall</td><td>0.704 (9)</td><td>0.775 (5)</td><td>0.781 (6)</td><td>0.758 (5)</td><td>0.514 (6)</td><td>0.803 
(2)</td><td>0.806 (3)</td><td>0.882 (1)</td><td>0.749 (3)</td><td>0.843 (3)</td></tr><tr><td>Global</td><td>Pearson</td><td>0.774 (4)</td><td>0.804 (3)</td><td>0.834 (1)</td><td>0.774 (4)</td><td>0.523 (4)</td><td>0.835 (1)</td><td>0.857 (1)</td><td>0.854 (3)</td><td>0.835 (1)</td><td>0.842 (4)</td></tr><tr><td>Global</td><td>Spearman</td><td>0.809 (2)</td><td>0.810 (1)</td><td>0.816 (3)</td><td>0.789 (2)</td><td>0.530 (3)</td><td>0.799 (4)</td><td>0.775 (5)</td><td>0.830 (5)</td><td>0.738 (5)</td><td>0.855 (1)</td></tr><tr><td>Global</td><td>Kendall</td><td>0.829 (1)</td><td>0.712 (7)</td><td>0.809 (5)</td><td>0.780 (3)</td><td>0.542 (2)</td><td>0.763 (6)</td><td>0.740 (7)</td><td>0.784 (6)</td><td>0.709 (8)</td><td>0.818 (5)</td></tr><tr><td>Item</td><td>Pearson</td><td>0.742 (5)</td><td>0.665 (8)</td><td>0.658 (8)</td><td>0.657 (9)</td><td>0.284 (8)</td><td>0.686 (10)</td><td>0.739 (8)</td><td>0.783 (7)</td><td>0.746 (4)</td><td>0.761 (8)</td></tr><tr><td>Item</td><td>Spearman</td><td>0.699 (10)</td><td>0.623 (9)</td><td>0.658 (9)</td><td>0.674 (8)</td><td>0.202 (9)</td><td>0.751 (8)</td><td>0.735 (9)</td><td>0.769 (8)</td><td>0.668 (9)</td><td>0.782 (7)</td></tr><tr><td>Item</td><td>Kendall</td><td>0.717 (8)</td><td>0.541 (10)</td><td>0.656 (10)</td><td>0.656 (10)</td><td>0.185 (10)</td><td>0.698 (9)</td><td>0.676 (12)</td><td>0.736 (9)</td><td>0.663 (10)</td><td>0.751 (9)</td></tr></table>
450
+
451
+ Table 7: The complete RC values of different correlation measures on all meta-evaluation datasets (higher is better). These values are visualized in Figure 3.
452
+
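+ Tables 6 and 7 report each correlation measure under four groupings (System, Input, Global, Item), whose exact definitions are not included in this excerpt. As a sketch only, assuming the conventional meta-evaluation setup where each (system, input) pair has one metric score and one human score, the snippet below shows two commonly used variants: a system-level correlation over per-system averages and a global correlation over all pooled pairs. The data layout and function names are hypothetical.
+
+ ```python
+ import pandas as pd
+ from scipy import stats
+
+ # Assumed layout: one row per (system, input) pair with columns
+ # "system", "input", "metric", and "human".
+
+ def _corr(x, y, corr):
+     fn = {"pearson": stats.pearsonr,
+           "spearman": stats.spearmanr,
+           "kendall": stats.kendalltau}[corr]
+     return fn(x, y)[0]
+
+ def system_level_corr(df: pd.DataFrame, corr: str = "spearman") -> float:
+     """Average metric and human scores per system, then correlate
+     the two vectors of per-system averages (one point per system)."""
+     per_system = df.groupby("system")[["metric", "human"]].mean()
+     return _corr(per_system["metric"], per_system["human"], corr)
+
+ def global_corr(df: pd.DataFrame, corr: str = "spearman") -> float:
+     """Correlate metric and human scores over all (system, input)
+     pairs pooled together, with no grouping."""
+     return _corr(df["metric"], df["human"], corr)
+ ```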
453
+ ![](images/cf9077e2f5499d40895ac6db72ceb39144689c5b5b10ca052dfac790e5dcb3af.jpg)
454
+ Figure 11: The p-value curves of correlation measures on meta-evaluation D1.
455
+
456
+ ![](images/27137218907964111b8046128e5332a0452a7cfc9a4ca16bf22703dd3f233f2f.jpg)
457
+ Figure 14: The p-value curves of correlation measures on meta-evaluation D4.
458
+
459
+ ![](images/957527220d1313e1e966dde4ba6dbf2405358af0329505a872c4ae4dcfa93421.jpg)
460
+ Figure 12: The p-value curves of correlation measures on meta-evaluation D2.
461
+
462
+ ![](images/362508bdafcafa315c485a012ea63f61c9cfd43f08fb40cc6c8128d969f3c3a0.jpg)
463
+ Figure 15: The p-value curves of correlation measures on meta-evaluation D5.
464
+
465
+ ![](images/5866c4b4ded32b53fc78b7f40bcdc33d78006b0c781a24c9f6b7cd17b2a4e2c5.jpg)
466
+ Figure 13: The p-value curves of correlation measures on meta-evaluation D3.
467
+
468
+ ![](images/4738e930ad31e9d165a9aedbd7b704921f73c60db2f97e259aecaf4ffb0d1744.jpg)
469
+ Figure 16: The p-value curves of correlation measures on meta-evaluation D6.
470
+
471
+ ![](images/bddad8f69ce256519aa0a0a14afbbc248d479d417ed222b839fd2e9f8e59ebc0.jpg)
472
+ Figure 17: The p-value curves of correlation measures on meta-evaluation D7.
473
+
474
+ ![](images/f33f7be784e407c1259c8e16a427a96227d6f3d1c8d9285c86712508cd3cfc80.jpg)
475
+ Figure 20: The p-value curves of correlation measures on meta-evaluation D10.
476
+
477
+ ![](images/c687ae465982fb6460d50f3ef6983f17644349cf38dcb642a1d618982db33d49.jpg)
478
+ Figure 18: The p-value curves of correlation measures on meta-evaluation D8.
479
+
480
+ ![](images/05aa9ce71eb980df1c31ae8acc8804be38b83a29bbe3c52dd6f80a65483b6d2e.jpg)
481
+ Figure 21: The p-value curves of correlation measures on meta-evaluation D11.
482
+
483
+ ![](images/d1391be63481b9da313ce365eaa73e3e26611cb44002e5bbe8f7d36a599fc3d6.jpg)
484
+ Figure 19: The p-value curves of correlation measures on meta-evaluation D9.
485
+
486
+ ![](images/de91358e0a60f946d48616144a74676da34d58b139da9d717ea2f5753a7b35be.jpg)
487
+ Figure 22: The p-value curves of correlation measures on meta-evaluation D12.
488
+
489
+ ![](images/1c25ba11c6674f1ae8b6f49f9afac31c35ba97860eabe0420ba57805d4b3a639.jpg)
490
+ Figure 23: The p-value curves of correlation measures on meta-evaluation D13.
491
+
492
+ ![](images/6264b020086fead39a7f1404a15e7bb433c66f042b7bde07a2f598ed2d54b6d1.jpg)
493
+ Figure 26: The p-value curves of correlation measures on meta-evaluation D16.
494
+
495
+ ![](images/dbd03b6b059f79ada087d87b867ba24a4c6d24bc47bfcbbc7ba75ed7b680a41f.jpg)
496
+ Figure 24: The p-value curves of correlation measures on meta-evaluation D14.
497
+
498
+ ![](images/1fd2e227319569cf824806b3393919c9a930e4abe5ee147e338ed0b14344b21b.jpg)
499
+ Figure 27: The p-value curves of correlation measures on meta-evaluation D17.
500
+
501
+ ![](images/441eb701b9c81b6c489a05e3dd7e4b6704ffb11d7d61c5ed2c5f2bc2f4c2f1cc.jpg)
502
+ Figure 25: The p-value curves of correlation measures on meta-evaluation D15.
503
+
504
+ ![](images/ce347c109aee275dbc5b46bf3708df4a2177ddc8118480cb97c41074935e0de3.jpg)
505
+ Figure 28: The p-value curves of correlation measures on meta-evaluation D18.
506
+
507
+ ![](images/5db32965400663c775871d6a5737c0738b479910ab8309c4a391df17bae48204.jpg)
508
+ Figure 29: The p-value curves of correlation measures on meta-evaluation D19.
509
+
510
+ ![](images/9a303dcf1fafcc2ae442201adff32c541bf1dd5ee4e87c69f49a1fdf77bbf847.jpg)
511
+ Figure 32: The p-value curves of correlation measures on meta-evaluation D22.
512
+
513
+ ![](images/41241b504d63bf2ed9d753f810b070bf3cb2ad84f86a325b91b95b054afb3c44.jpg)
514
+ Figure 30: The p-value curves of correlation measures on meta-evaluation D20.
515
+
516
+ ![](images/54b2aba836d321ce82a5d0b3cc4342a49648a3de8707b7f78be92400d4232605.jpg)
517
+ Figure 33: The p-value curves of correlation measures on meta-evaluation D23.
518
+
519
+ ![](images/f44faaa60c23e2563b9bec67befe260a0e31a7b5bfca78b8bcd7b313c04c7e80.jpg)
520
+ Figure 31: The p-value curves of correlation measures on meta-evaluation D21.
521
+
522
+ ![](images/33c47085c436bab8e3f15f77cad4814bd8a0ecf76035eab8df6655d324a29cae.jpg)
523
+ Figure 34: The p-value curves of correlation measures on meta-evaluation D24.
524
+
525
+ ![](images/9df090ab369516f06f4ab07e16c3f41faf9496983df39c554d5debbfaf708f5a.jpg)
526
+ Figure 35: The p-value curves of correlation measures on meta-evaluation D25.
527
+
528
+ ![](images/78e42bab8a68af4566801db8e1d16075339204095d01d5986f924596e13c76e7.jpg)
529
+ Figure 38: The p-value curves of correlation measures on meta-evaluation D28.
530
+
531
+ ![](images/fcfa9d053a3709caebdf4303988ad716861867540ee7745ed7771d3d08de9ea3.jpg)
532
+ Figure 36: The p-value curves of correlation measures on meta-evaluation D26.
533
+
534
+ ![](images/db10d460a8df0a72b2b0ee557e318efaea1217a22c5a2d29e429673c3aa4eb6d.jpg)
535
+ Figure 39: The p-value curves of correlation measures on meta-evaluation D29.
536
+
537
+ ![](images/c70676a60777b7b1cb985f321525c3e94dcb212c838b73d58f5e12d3f92362b2.jpg)
538
+ Figure 37: The p-value curves of correlation measures on meta-evaluation D27.
539
+
540
+ ![](images/a9c2dee4e8d227dcb004454842abfd902c8ed51a4b3b92b3a4ccca612061c8b6.jpg)
541
+ Figure 40: The p-value curves of correlation measures on meta-evaluation D30.
542
+
543
+ ![](images/05150d6f23437ab3bf0bd9432a12e5bc22b6a7fd200fc09f3c7151670a8ceec1.jpg)
544
+ Figure 41: As $G^{m}$ changes, the correlations between the GPT-3.5-Turbo evaluator and human evaluation using different measures on SummEval, with a fixed evaluation scale of 1-5.
545
+
546
+ ![](images/91a341111ec963739e698a3bff960afc01b3d96dc8ec27b9f80a4625e9ab704f.jpg)
547
+ Figure 44: As $G^{m}$ changes, the correlations between the GPT-3.5-Turbo evaluator and human evaluation using different measures on WMT23, with a fixed evaluation scale of 1-10.
548
+
549
+ ![](images/91e11d1616dc527749cc63842c90c2ebb77c76656d7a40a3e63372693bad2f70.jpg)
550
+ Figure 42: As $G^{m}$ changes, the correlations between the GPT-3.5-Turbo evaluator and human evaluation using different measures on SummEval, with a fixed evaluation scale of 1-10.
551
+
552
+ ![](images/ec709cfeebffc9d43e8b036fa18cd7e61e24d0c28ee6c6b93bd8a7e73ea79d44.jpg)
553
+ Figure 45: As $G^{m}$ changes, the correlations between the GPT-4o evaluator and human evaluation using different measures on SummEval, with a fixed evaluation scale of 1-5.
554
+
555
+ ![](images/b4b6a377d335d81accbe5b365a4fe5e23fb92fd3ed56b23cc02f96f46c885b17.jpg)
556
+ Figure 43: As $G^{m}$ changes, the correlations between the GPT-3.5-Turbo evaluator and human evaluation using different measures on WMT23, with a fixed evaluation scale of 1-5.
557
+
558
+ ![](images/993a2b93607df706c589d327f0214e9da7a7424dcd5db9e59faca34dd169d2e4.jpg)
559
+ Figure 46: As $G^{m}$ changes, the correlations between the GPT-4o evaluator and human evaluation using different measures on SummEval, with a fixed evaluation scale of 1-10.
560
+
561
+ ![](images/74ae82cfb8f32c908038693a1d004474e7164274eeec9a1b402d7daf04a19036.jpg)
562
+ Figure 47: As $G^{m}$ changes, the correlations between the GPT-4o evaluator and human evaluation using different measures on WMT23, with a fixed evaluation scale of 1-5.
563
+
564
+ ![](images/a270e0b9e6d5f5ce70ef4c342390a11631c464eb2e18d393dd348f1fa5c1ec56.jpg)
565
+ Figure 48: As $G^{m}$ changes, the correlations between the GPT-4o evaluator and human evaluation using different measures on WMT23, with a fixed evaluation scale of 1-10.
566
+
567
+ ![](images/56dda0fc67b1b38fe1464ce81f47c8bd1ace9d3ab3a43385d6e036a1cc27d593.jpg)
568
+ Figure 49: As $G^{m}$ changes, the correlations between the GPT-4-Turbo evaluator and human evaluation using different measures on SummEval, with a fixed evaluation scale of 1-10.
569
+
570
+ ![](images/633435a3f9b2fc98e506651ae897356dff3cd6a66480192dd7d137026a764e05.jpg)
571
+ Figure 50: As $G^{m}$ changes, the correlations between the GPT-4-Turbo evaluator and human evaluation using different measures on WMT23, with a fixed evaluation scale of 1-10.
572
+
573
+ ![](images/341d70b501aee46d7d0e2908d611c8ce46bed4734344d435fb0b1f70f448bde7.jpg)
574
+ Figure 51: As $G^{m}$ changes, the correlations between metrics and humans using different measures in the statistical simulation with $G^{h} = 250$.
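+
+ Figures 41-51 track how these correlations change with $G^{m}$, and Figure 51 additionally fixes $G^{h} = 250$ in a statistical simulation. This excerpt does not define $G^{m}$ and $G^{h}$; assuming, for illustration only, that they denote the number of distinct score levels available to the metric and to the human annotators, the sketch below simulates correlated latent quality scores, quantizes them to those granularities, and reports the three correlations. All names and defaults here are hypothetical.
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def quantize(x, levels):
+     """Map scores in [0, 1] onto `levels` equally spaced values."""
+     return np.round(x * (levels - 1)) / (levels - 1)
+
+ def simulate(g_m, g_h=250, n=1000, rho=0.6, seed=0):
+     """Simulate correlated metric/human scores, quantize them to
+     g_m and g_h levels, and return the three correlation measures."""
+     rng = np.random.default_rng(seed)
+     cov = [[1.0, rho], [rho, 1.0]]
+     latent = rng.multivariate_normal([0.0, 0.0], cov, size=n)
+     # Map the latent Gaussian scores to [0, 1] before quantizing.
+     human = quantize(stats.norm.cdf(latent[:, 0]), g_h)
+     metric = quantize(stats.norm.cdf(latent[:, 1]), g_m)
+     return {
+         "pearson": stats.pearsonr(metric, human)[0],
+         "spearman": stats.spearmanr(metric, human)[0],
+         "kendall": stats.kendalltau(metric, human)[0],
+     }
+
+ # Example: compare a coarse 5-level metric scale with a fine 100-level one.
+ # print(simulate(g_m=5), simulate(g_m=100))
+ ```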
analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c1e2ef296095b9f191809c2466c96cf3a4ac89427082e189e10053d9e19e4306
3
+ size 3411495
analyzingandevaluatingcorrelationmeasuresinnlgmetaevaluation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17b1fafc533c197b1f982e3810b6ec3fdcb5cde5010d4acf172d7ae98ed95d99
3
+ size 780599
analyzingandimprovingcoherenceoflargelanguagemodelsinquestionanswering/67398457-5763-4b62-8745-979ad4f8746f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:86fba0461ecbed5c9530b2594cf24c1fd478ed425392352e86debbe69965a899
3
+ size 120061
analyzingandimprovingcoherenceoflargelanguagemodelsinquestionanswering/67398457-5763-4b62-8745-979ad4f8746f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f7e1ba62f41d105ad0c3a48929b5a18cbda43ae16982665069bc4574a7290ba2
3
+ size 151341