Chelsea707 committed on
Commit f7b7b21 · verified · 1 Parent(s): e4af6d2

Add Batch 82a6505b-0a40-4bd6-9307-195922f03a08 data

Files changed (25)
  1. .gitattributes +4 -0
  2. 2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_content_list.json +0 -0
  3. 2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_model.json +0 -0
  4. 2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_origin.pdf +3 -0
  5. 2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/full.md +466 -0
  6. 2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/images.zip +3 -0
  7. 2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/layout.json +0 -0
  8. 2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_content_list.json +0 -0
  9. 2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_model.json +0 -0
  10. 2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_origin.pdf +3 -0
  11. 2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/full.md +0 -0
  12. 2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/images.zip +3 -0
  13. 2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/layout.json +0 -0
  14. 2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/81aca763-e861-40de-ad3f-640af6cf3d30_content_list.json +0 -0
  15. 2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/81aca763-e861-40de-ad3f-640af6cf3d30_model.json +0 -0
  16. 2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/81aca763-e861-40de-ad3f-640af6cf3d30_origin.pdf +3 -0
  17. 2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/full.md +421 -0
  18. 2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/images.zip +3 -0
  19. 2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/layout.json +0 -0
  20. 2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_content_list.json +0 -0
  21. 2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_model.json +0 -0
  22. 2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_origin.pdf +3 -0
  23. 2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/full.md +599 -0
  24. 2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/images.zip +3 -0
  25. 2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/layout.json +0 -0
.gitattributes CHANGED
@@ -3078,3 +3078,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
3078
  2025/WiCkeD_[[:space:]]A[[:space:]]Simple[[:space:]]Method[[:space:]]to[[:space:]]Make[[:space:]]Multiple[[:space:]]Choice[[:space:]]Benchmarks[[:space:]]More[[:space:]]Challenging/bf7da2a6-e19c-4145-af83-91a11e265552_origin.pdf filter=lfs diff=lfs merge=lfs -text
3079
  2025/WinSpot_[[:space:]]GUI[[:space:]]Grounding[[:space:]]Benchmark[[:space:]]with[[:space:]]Multimodal[[:space:]]Large[[:space:]]Language[[:space:]]Models/1bdd66fd-4b90-40c0-b56c-7487b424be1b_origin.pdf filter=lfs diff=lfs merge=lfs -text
3080
  2025/Zero-Shot[[:space:]]Text-to-Speech[[:space:]]for[[:space:]]Vietnamese/836fe461-bc2f-496a-902d-624f10ca9fa9_origin.pdf filter=lfs diff=lfs merge=lfs -text
3081
+ 2025/Chumor[[:space:]]2.0_[[:space:]]Towards[[:space:]]Better[[:space:]]Benchmarking[[:space:]]Chinese[[:space:]]Humor[[:space:]]Understanding[[:space:]]from[[:space:]](Ruo[[:space:]]Zhi[[:space:]]Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_origin.pdf filter=lfs diff=lfs merge=lfs -text
3082
+ 2025/CipherBank_[[:space:]]Exploring[[:space:]]the[[:space:]]Boundary[[:space:]]of[[:space:]]LLM[[:space:]]Reasoning[[:space:]]Capabilities[[:space:]]through[[:space:]]Cryptography[[:space:]]Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_origin.pdf filter=lfs diff=lfs merge=lfs -text
3083
+ 2025/CitaLaw_[[:space:]]Enhancing[[:space:]]LLM[[:space:]]with[[:space:]]Citations[[:space:]]in[[:space:]]Legal[[:space:]]Domain/81aca763-e861-40de-ad3f-640af6cf3d30_origin.pdf filter=lfs diff=lfs merge=lfs -text
3084
+ 2025/ClaimPKG_[[:space:]]Enhancing[[:space:]]Claim[[:space:]]Verification[[:space:]]via[[:space:]]Pseudo-Subgraph[[:space:]]Generation[[:space:]]with[[:space:]]Lightweight[[:space:]]Specialized[[:space:]]LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/5abdc7b1-9bd7-4584-b650-ffb46a145cb6_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b33a9ae872a841a764ea57f4ee011b6f0cad588715303957ebb7ffbe7a428750
3
+ size 2441661
2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/full.md ADDED
@@ -0,0 +1,466 @@
1
+ # Chumor 2.0: Towards Better Benchmarking Chinese Humor Understanding from 弱智吧 (Ruo Zhi Ba)
2
+
3
+ Ruiqi He, Yushu He, Longju Bai, Jiarui Liu, Zhenjie Sun, Zenghao Tang, He Wang, Hanchen Xia, Rada Mihalcea, Naihao Deng
4
+
5
+ University of Michigan, Carnegie Mellon University, Shanghai Jiao Tong University {ruiqih, dnaihao}@umich.edu
6
+
7
+ # Abstract
8
+
9
+ Existing humor datasets and evaluations predominantly focus on English, leaving limited resources for culturally nuanced humor in non-English languages like Chinese. To address this gap, we construct Chumor, the first and the largest Chinese humor explanation dataset. Chumor is sourced from Ruo Zhi Ba (RZB, 弱智吧), a Chinese Reddit-like platform known for sharing intellectually challenging and culturally specific jokes. We test ten LLMs through direct and chain-of-thought prompting, revealing that Chumor poses significant challenges to existing LLMs, with their accuracy slightly above random and far below human. In addition, our analysis highlights that human-annotated humor explanations are significantly better than those generated by GPT-4o and ERNIE $_{4\text{-turbo}}$ . We release Chumor at https://huggingface.co/datasets/MichiganNLP/Chumor, our project page is at https://github.com/MichiganNLP/Chumor-2.0, our leaderboard is at https://huggingface.co/spaces/MichiganNLP/Chumor-leaderboard, and our codebase is at https://github.com/MichiganNLP/Chumor-2.0.
10
+
11
+ # 1 Introduction
12
+
13
+ Humor is an intrinsic human trait that touches the core of our social and emotional lives, making it a rich field of study across various disciplines (Lefcourt, 2001; Mihalcea and Strapparava, 2005; Gelkopf et al., 2011; Hessel et al., 2023). With the advent of Large Language Models (LLMs), researchers have evaluated LLMs' performance on diverse tasks (Liu et al., 2023a; Deng et al., 2024; Wu et al., 2023) and observed LLMs' extraordinary performance on many (Zhang et al., 2024b). In contrast, researchers have observed that LLMs still fail to understand humor (Ghanadian et al., 2023). However, despite all these studies on humor,
14
+
15
+ most evaluations remain in English (Radev et al., 2016; Hasan et al., 2019). This presents a significant gap, particularly for non-English languages like Chinese, where culturally nuanced humor understanding is unexamined.
16
+
17
+ In this paper, we try to bridge this gap by constructing Chumor, a funny and challenging Chinese humor understanding dataset sourced from Ruo Zhi Ba (RZB, "弱智吧" in Chinese), a Chinese Reddit-like platform known for sharing intellectually challenging and culturally specific jokes. This platform provides a set of unique Chinese jokes that incorporate the subtleties and intricacies of Chinese humor. Table 1 provides examples of the jokes from RZB. In addition, Bai et al. (2024) reveal that tuning LLMs on RZB data yields the best performance on Chinese reasoning tasks compared to other data sources, highlighting the significant value of jokes from RZB.
18
+
19
+ Unlike existing datasets that focus on tasks such as humor detection, punchline identification, or humor generation, Chumor addresses the challenge of humor explanation. This involves not just identifying humor but understanding the reasoning behind it, a task that requires both linguistic and cultural knowledge. Specifically, Chumor tasks the LLMs with determining whether an explanation fully explains the joke. We source the explanations from GPT-4o and ERNIE $_{4\text{-turbo}}$ , and have the entire dataset manually annotated by five native Chinese speakers. We evaluate ten LLMs from various model families, and reveal that all models perform poorly, lagging significantly behind humans on Chumor. We observe that chain-of-thought prompting does not necessarily improve models' performance and can sometimes confuse their reasoning process. In addition, we conduct a case study in which one of the authors annotates the entire dataset, followed by A/B testing conducted by six native Chinese speakers to compare explanations from GPT-4o versus human, and
20
+
21
+ <table><tr><td colspan="2">Cultural</td></tr><tr><td>Desc.</td><td>Require knowledge of specific historical, social, or linguistic contexts.</td></tr><tr><td>Ex.</td><td>(zh)小明在正月接发竟导致舅舅复活。 (en) Xiaoming got hair extensions during the first lunar month, which astonishingly brought his uncle back to life.</td></tr><tr><td colspan="2">Situational</td></tr><tr><td>Desc.</td><td>Involve humor derived from specific contexts, irony, or narrative setups.</td></tr><tr><td>Ex.</td><td>(zh)真可怕, 犯罪嫌疑人就在我们之中,被告席上一名法警对另一名法警说。 (en)“Terrifying, the criminal suspect is right between the two of us,” said one bailiff to another in the defendant&#x27;s dock.</td></tr><tr><td colspan="2">Pun-based</td></tr><tr><td>Desc.</td><td>Build on linguistic ambiguity and wordplay, require models to identify dual meanings.</td></tr><tr><td>Ex.</td><td>(zh)你可以在steam上找到GTA,所以水是DNA。 (en) You can find GTA on Steam, so water is DNA.</td></tr><tr><td colspan="2">Homophobic</td></tr><tr><td>Desc.</td><td>Rely on phonetic similarities between words or phrases to create humor.</td></tr><tr><td>Ex.</td><td>(zh)家里的猪油没了,小明只能把植物油倒快点当猪油用了。 (en) With the lard gone, Xiaoming had to pour the vegetable oil quickly to use it like lard.</td></tr><tr><td colspan="2">Glyph-based</td></tr><tr><td>Desc.</td><td>Exploit the structural or visual elements of Chinese characters to create humor.</td></tr><tr><td>Ex.</td><td>(zh)我把電串難題简化了,现在是电车难题。 (en) I simplified the trolley problem (in traditional Chinese), now it&#x27;s the trolley problem (in simplified Chinese).</td></tr><tr><td colspan="2">Cross-lingual</td></tr><tr><td>Desc.</td><td>Involve humor derived from linguistic or phonetic interplay across multiple languages.</td></tr><tr><td>Ex.</td><td>(zh)曹操于城楼上问夏侯惇:“你瞧到了什么。”夏侯惇说:“瞧到马岱。” (en) Cao Cao, from atop the city tower, asked Xia Houdun, “What did you see?” Xia Houdun replied, “I saw Ma Dai.”</td></tr></table>
22
+
23
+ Table 1: Different types of jokes. Descriptions (Desc.) explain humor mechanisms. Examples (Ex.) illustrate each category. The corresponding explanations can be found in figures referenced later in the paper (e.g., Figures 6, 7, and 8 for the Cultural, Pun-based, and Homophonic examples).
24
+
25
+ ERNIE $_{4}$ -turbo versus human. Our results indicate that human-annotated joke explanations are significantly better than those produced by GPT-4o or ERNIE $_{4}$ -turbo (Figure 4), with LLMs yielding winning rates of only $6.2\%$ for GPT-4o and $5.3\%$ for ERNIE $_{4}$ -turbo compared to humans.
26
+
27
+ In summary, our contributions are threefold:
28
+
29
+ 1. We construct Chumor, a funny and challenging Chinese humor understanding dataset, which is the largest Chinese humor explanation dataset.
30
+ 2. We evaluate ten LLMs on Chumor and reveal the significant challenges Chumor poses. We highlight that the best accuracy achieved by LLMs is $60.3\%$ , significantly lower than the human score of $78.3\%$ .
31
+ 3. We demonstrate that chain-of-thought prompting can hurt LLMs' performance in humor reasoning, and that human-annotated joke explanations are significantly better than those produced by GPT-4o and ERNIE $_{4\text{-turbo}}$ , urging future research on culturally specific humor understanding.
32
+
33
+ # 2 Related Work
34
+
35
+ Humor Datasets. Humor analysis in natural language processing (NLP) encompasses a wide range of tasks, each focused on different aspects of humor. For instance, researchers have proposed datasets
36
+
37
+ such as “16000 One-Liners” (Mihalcea and Strapparava, 2005), “Pun of the Day” (Yang et al., 2015), and “Ted Laughter” (Chen and Lee, 2017) that focus on humor detection, i.e., determining whether a given text is humorous or not. Datasets such as “Big Bang Theory” (Bertero and Fung, 2016) aim at pinpointing the punchline in a joke. Tasks for assessing humor intensity include humor level rating, comparison, and ranking. For example, datasets like HumorNorm (Engelthaler and Hills, 2018) and #Hashtag Wars (Potash et al., 2017) quantify humor scores and compare comedic elements, while UR-FUNNY ranks punchlines based on their perceived impact. Datasets such as “Humicroedit” (Hossain et al., 2019), “ $C^3$ ” (Wang et al., 2022), and “Talk-Funny” (Chen et al., 2024) focus on humor generation, the task of generating or rewriting humorous texts. In addition, we present a comprehensive overview of the existing datasets related to humor in Table 2. We highlight that most existing datasets are in English. Chinese humor, on the other hand, is less explored. Our dataset, Chumor, is the first humor explanation dataset in Chinese.
38
+
39
+ Culturally Specific Datasets. Recent works underscore the challenges of culturally specific reasoning in LLMs (Shen et al., 2024; AlKhamissi et al., 2024; Pawar et al., 2024; Vayani et al., 2024). These challenges stem from the overrepresentation of Western-centric knowledge and translation
40
+
41
+ <table><tr><td>Dataset</td><td>Sources</td><td>Lan.</td><td>#(k)</td><td>Tasks</td></tr><tr><td>One Liners (2005)</td><td>Web</td><td>en</td><td>16</td><td>HR</td></tr><tr><td>Pun of the Day (2015)</td><td>Web</td><td>en</td><td>4.8</td><td>HR PD</td></tr><tr><td>Big Bang Theory (2016)</td><td>TV</td><td>en</td><td>44</td><td>PD</td></tr><tr><td>Ted Laughter (2017)</td><td>TED</td><td>en</td><td>9.4</td><td>HR PD</td></tr><tr><td>#HashtagWars (2017)</td><td>TV</td><td>en</td><td>13</td><td>HC</td></tr><tr><td>HumorNorm (2018)</td><td>\( CS^† \)</td><td>en</td><td>5</td><td>HC</td></tr><tr><td>UR-FUNNY (2019)</td><td>TED</td><td>en</td><td>17</td><td>PD</td></tr><tr><td>Humicroedit (2019)</td><td>Reddit</td><td>en</td><td>15</td><td>HG</td></tr><tr><td>rJokes (2020)</td><td>Reddit</td><td>en</td><td>57</td><td>HC</td></tr><tr><td>Memotion (2020)</td><td>Memes</td><td>en</td><td>9.8</td><td>HC</td></tr><tr><td>MUMOR (2021)</td><td>TV</td><td>en zh</td><td>30</td><td>HR</td></tr><tr><td>NYT-Captions (2023)</td><td>NYT</td><td>en</td><td>0.7 2.6</td><td>HE HC</td></tr><tr><td>\( C^3 \) (2022)</td><td>Books</td><td>zh</td><td>9.3</td><td>HG</td></tr><tr><td>TalkFunny (2024)</td><td>Apps</td><td>zh</td><td>4.1</td><td>HG</td></tr><tr><td>TCHD (2023)</td><td>-</td><td>zh</td><td>26</td><td>HR HC PD</td></tr><tr><td>TTWS (2019)</td><td>Books</td><td>zh</td><td>9.1</td><td>PD</td></tr><tr><td>CHM (2020)</td><td>Apps Web</td><td>zh</td><td>3.3</td><td>HC</td></tr><tr><td>Memeplate (2022)</td><td>Apps Web</td><td>zh</td><td>5.2</td><td>HC</td></tr><tr><td>Chumor (us)</td><td>Web</td><td>zh</td><td>3.3</td><td>HE</td></tr></table>
42
+
43
+ Table 2: Existing datasets related to humor. Abbreviations denote the following tasks. HR: humor recognition; PD: punchline detection; HC: humor comparison; HG: humor generation; HE: humor explanation. †: crowd-sourced.
44
+
45
+ artifacts, which limit the fairness and effectiveness of multilingual evaluations (Mihalcea et al., 2024). Researchers have proposed various culturally specific datasets such as Global-MMLU (Singh et al., 2024) to evaluate LLMs' cultural knowledge. Chumor adds to this line of effort as it involves rich knowledge specific to Chinese culture.
46
+
47
+ # 3 Chumor Dataset
48
+
49
+ Data Collection. We construct our dataset by including RZB jokes from "Best Annual Threads" between 2018 and 2021 that have been previously crawled†. In addition, we directly collect all threads in the "Moderator's Recommendation" section from RZB. Each thread in RZB consists of "标题"(title),"一楼"(content), and several "跟帖"(follow-up posts). For threads from Best Annual Threads, the jokes are listed in the follow-up posts, which are selected by the forum moderator. For threads from Moderator's Recommendation, the jokes consist of the title and the content of each thread. We remove the content if it repeats the title.
50
+
51
+ Data Cleaning. We store both the title and the content of the raw data. However, due to the posting restrictions of the platform requiring non-empty content, many posts contain meaningless placeholder texts such as “:”, “!”, “0”, “RT”, and others. We automatically identify and remove these patterns, and only keep the title which is the joke itself. Due to the length limitations on the original platform, many post titles are truncated from the beginning parts of the content. We identify these instances and replace the truncated title with the complete content to get the joke. We also remove duplicates that appear both in the “Moderator’s Recommendation” and the “Best Annual Posts”.
52
+
53
+ We manually remove the threads related to forum management and rules, threads that include excessively offensive content, threads with incomplete content, and threads that focus more on philosophical insight rather than humor.
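A minimal sketch of the automatic cleaning steps described above is given below; the field names and placeholder set are illustrative assumptions rather than the exact implementation used to build the dataset.

```python
# Sketch of the automatic cleaning described above.
# The dict keys ("title", "content") are hypothetical, not the released schema.
from typing import Optional

PLACEHOLDERS = {":", "!", "0", "RT", ""}  # examples of meaningless filler content

def clean_thread(thread: dict) -> Optional[str]:
    """Return the joke text for a thread, or None if it should be dropped."""
    title = thread.get("title", "").strip()
    content = thread.get("content", "").strip()

    # Drop placeholder content, or content that merely repeats the title,
    # and keep only the title (the joke itself).
    if content in PLACEHOLDERS or content == title:
        content = ""

    # If the title is a truncated prefix of the content, the complete joke
    # lives in the content, so prefer it.
    if content and content.startswith(title) and len(content) > len(title):
        return content
    return title or None

def deduplicate(jokes: list) -> list:
    """Remove exact duplicates (e.g., threads appearing in both sections)."""
    seen, unique = set(), []
    for joke in jokes:
        if joke not in seen:
            seen.add(joke)
            unique.append(joke)
    return unique
```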
54
+
55
+ Humor Explanation Classification. We design a humor explanation classification task that can be easily used to test LLMs' capabilities in humor understanding. Specifically, we use two LLMs, GPT-4o and ERNIE $_{4}$ -turbo, to generate explanations for our collected jokes. We manually annotate the generated explanations as either "fully explain the joke" (good) or "partially explain or not explain the joke" (bad) based on a majority vote among five of the authors who are native Chinese speakers. Each joke, along with its explanation, forms an individual instance in Chumor, leading to a total of 3,339 instances. Among these, 1,454 items are labeled as good and 1,887 as bad explanations.
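As a concrete illustration of how the final labels can be derived, the sketch below takes a simple majority vote over the five authors' judgments; the label strings are illustrative, not the released field values.

```python
from collections import Counter

def aggregate_label(annotations: list) -> str:
    """Majority vote over five annotator judgments ('good' or 'bad')."""
    assert len(annotations) == 5, "each explanation receives five judgments"
    return Counter(annotations).most_common(1)[0][0]

# Example: three of five annotators judge the explanation as fully explaining
# the joke, so the instance is labeled 'good'.
print(aggregate_label(["good", "good", "bad", "good", "bad"]))  # -> good
```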
56
+
57
+ Data Examples from Chumor. We present examples from Chumor in Table 3.
58
+
59
+ Humor Categorization in Chumor. We categorize the jokes in RZB into six types in Table 1, with an example provided for each type.
60
+
61
+ # 4 Experiments
62
+
63
+ Models. We test ten LLMs, five from open-source LLM families and five from closed-source LLM families, all capable of handling Chinese. Specifically, we include the open-source LLMs of $\mathbf{Yi}_{34\mathbf{B}}$ (01.ai, 2024) from 01.AI, Nemotron $_{70\mathrm{B}}$ (NVIDIA, 2024) from NVIDIA, Athene $_{70\mathrm{B}}$ (Nexusflow, 2024) from Nexusflow, Qwen2.5 $_{72\mathrm{B}}$ (Qwen, 2024) from Alibaba, and Mistral $_{123\mathrm{B}}$ (Mistral.ai, 2024) from Mistral AI, alongside the closed-source LLMs of
64
+
65
+ <table><tr><td>Joke</td><td>Explanation</td><td>Label</td><td>Source</td></tr><tr><td>(zh) “你自己掰了一千吨玉米?”“哈哈我瞎掰的”</td><td>(zh) 这个笑话的幽默之处在于对话中的双关语和出人意料的回答。“掰了一千吨玉米”看似是一个惊人的壮举,而“哈哈我瞎掰的”则瞬间揭露了这其实是一个夸张的玩笑,这种巧妙的反转和语言的戏谑带来了幽默感。</td><td rowspan="2">good</td><td rowspan="2">ERNIE4-turbo</td></tr><tr><td>(en) “You shucked a thousand tons of corn all by yourself?”“Haha, let&#x27;s just say there&#x27;s been plenty of shuckin&#x27; going on.”</td><td>(en) The humor in this joke lies in the pun and the unexpected response in the conversation. “Shucked a thousand tons of corn” seems like an extraordinary achievement, but the reply, ‘haha, let&#x27;s just say there&#x27;s been plenty of shuckin&#x27; going on,” instantly reveals it to be an exaggerated joke. This clever twist and playful use of language create the comedic effect.</td></tr><tr><td>(zh) 小明突然出现在我面前以至于我突然出现在小明面前。</td><td>(zh) 这个笑话的幽默之处在于它利用了语言的对称性和意外性。小明突然出现导致叙述者的惊讶反应,而叙述者的惊讶反应又反过来让小明感到意外,形成了一个有趣的循环。</td><td rowspan="2">bad</td><td rowspan="2">GPT-4o</td></tr><tr><td>(en) Xiaoming suddenly appeared in front of me, causing me to suddenly appear in front of him.</td><td>(en) The humor in this joke lies in its use of linguistic symmetry and unexpectedness. Xiao Ming&#x27;s sudden appearance triggers a surprised reaction from the narrator, which in turn surprises Xiao Ming, creating an amusing loop.</td></tr></table>
66
+
67
+ Table 3: Examples from Chumor. The second example's explanation is bad because the joke does not "create an amusing loop". Instead, it relies on linguistic symmetry and the use of a straightforward fact to subvert expectations. The audience anticipates an unexpected outcome due to the setup, but the latter part "suddenly appear in front of him" flips the perspective by stating the straightforward fact that, because Xiao Ming is in front of the person, the person is also in front of Xiao Ming.
68
+
69
+ Gemini $_{1.5\text{-pro}}$ (Google, 2024) from Google, GLM $_{4\text{-plus}}$ (BigModel, 2024) from Tsinghua University, GPT-4 $_{\text{turbo}}$ and GPT-4o (OpenAI, 2023, 2024) from OpenAI, and ERNIE $_{4\text{-turbo}}$ (Baidu, 2024) from Baidu. For all the open-source LLMs, we use the instruction-tuned version in our evaluation.
70
+
71
+ Evaluation Methods. We evaluate these LLMs using two prompting methods: direct prompting (DP), shown below,
72
+
73
+ # Direct Prompting (DP)
74
+
75
+ 你将看到一个笑话以及对这个笑话的解释。请判断这个解释是否完全解释了笑话。根据判断,选择“完全解释”或“部分/没有解释”,不需要解释为什么对或者不对。
76
+
77
+ 笑话:[joke]
78
+
79
+ 笑话解释:[explanation]
80
+
81
+ # Translation
82
+
83
+ You will see a joke and an explanation of the joke. Please determine whether this explanation fully explains the joke. Based on your judgment, choose either "fully explain" or "partially/does not explain." You do not need to explain why it is correct or incorrect.
84
+
85
+ Joke: [joke]
86
+
87
+ Explanation: [explanation]
88
+
89
+ and chain-of-thought (CoT) prompting (Wei et al., 2022), which adds the phrase “请逐步思考,写下过程” (“Please think step by step and write down your reasoning process”) before the model determines the label. Appendix F provides the complete prompts.
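As a rough illustration of this setup, the sketch below fills in the DP template shown above and optionally appends the CoT instruction. It assumes the OpenAI Python SDK and GPT-4o as the queried model; the exact prompt layout and decoding settings are those given in Appendix F and may differ from this sketch.

```python
# Hedged sketch of querying a model under the DP / CoT settings.
# Decoding parameters and prompt layout are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

DP_TEMPLATE = (
    "你将看到一个笑话以及对这个笑话的解释。请判断这个解释是否完全解释了笑话。"
    "根据判断,选择“完全解释”或“部分/没有解释”,不需要解释为什么对或者不对。\n"
    "笑话:{joke}\n笑话解释:{explanation}"
)
COT_PHRASE = "请逐步思考,写下过程。"  # "Please think step by step and write down your reasoning process."

def judge_explanation(joke: str, explanation: str, use_cot: bool = False) -> str:
    """Ask the model whether the explanation fully explains the joke."""
    prompt = DP_TEMPLATE.format(joke=joke, explanation=explanation)
    if use_cot:
        prompt += "\n" + COT_PHRASE
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content
```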
90
+
91
+ We calculate accuracy scores as part of our evaluation. In addition, we provide the false positive rate (FPR), false negative rate (FNR), and Matthews Correlation Coefficient (MCC) in Table 4 in Appendix H. The MCC score considers true positives, true negatives, false positives, and false negatives, providing a score between -1 and +1. A score of +1 indicates perfect predictions, 0 reflects random guessing, and -1 means complete disagreement. The best MCC score achieved by LLMs is 0.29, which is close to random guessing, and is significantly lower than the human average of 0.60.
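For reference, MCC is computed from the binary confusion-matrix counts via its standard definition:

$$
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}
$$

so, unlike raw accuracy, a model that (nearly) always predicts a single label receives a score at or near 0.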
92
+
93
+ # 5 Results and Discussions
94
+
95
+ Overall Model Performance. Figure 1 presents the accuracy of different LLMs on Chumor in DP and CoT settings. Appendix H presents additional results and analysis.
96
+
97
+ Overall, we observe that all models perform poorly on Chinese humor comprehension, with accuracy scores ranging between $44.6\%$ and $60.3\%$ . ERNIE $_{4\text{-}\text{turbo}}$ and Gemini $_{1.5\text{-}\text{pro}}$ achieve the highest accuracy of $60.3\%$ , and are just 10 points above the random baseline and far below human performance of $78.3\%$ , highlighting the difficulty of Chumor and the limitations of these LLMs in understanding Chinese humor.
98
+
99
+ Error Analysis by Joke Type. To better understand how LLMs perform on each joke type listed in Table 1, we sample 200 jokes for error analysis. Figure 2 and Figure 17 in Appendix H present the results. The distribution of joke types can be found in Appendix G Figure 15.
100
+
101
+ ![](images/c82242bc9d064b5b1e48e0527762f6fc5982d614add9b8428f192d0426397d89.jpg)
102
+ Figure 1: The accuracy of different models' test results in the DP and CoT settings. ERNIE $_{4}$ -turbo and Gemini $_{1.5\text{-pro}}$ achieve the highest accuracy of $60.3\%$ .
103
+
104
+ We highlight that model performance varies significantly across different joke types. While models generally perform well on Situational jokes, achieving $60.0\%$ to $70.0\%$ accuracy in both DP and CoT settings, their performance on other joke types differs more markedly. For instance, GLM $_{4\text{-plus}}$ achieves $65.0\%$ accuracy on Homophonic jokes in the DP setting, whereas $\mathrm{Yi}_{34\mathrm{B}}$ only reaches $30.0\%$ . Nemotron $_{70\mathrm{B}}$ performs well on Cultural jokes in the CoT setting with $72.0\%$ accuracy, but Athene $_{70\mathrm{B}}$ and ERNIE $_{4\text{-turbo}}$ achieve only $43.0\%$ and $42.0\%$ , respectively. Such performance variance highlights LLMs' varied capabilities in specific domains such as cultural reasoning and situational reasoning, revealing the respective limitations of these LLMs.
105
+
106
+ # 5.1 Have LLMs achieved human-level understanding of humor?
107
+
108
+ Answer: No. To compare the performance of LLMs with humans, we conduct a human study involving three native Chinese speakers unfamiliar with this work to annotate a randomly chosen subset of 200 examples. Human annotators demonstrate significantly better performance, with an average accuracy of $78.3\%$ and an MCC score of 0.60, compared with the LLMs' best
109
+
110
+ ![](images/1751194a5476ee5570e3e726c0ef4c756014c6ce34f4a9fb82ab296870d8687f.jpg)
111
+ Figure 2: DP accuracy on different joke types $(\%)$ . Here, we sample 200 jokes for error analysis. We highlight that model performance varies significantly across different joke types.
112
+
113
+ performance of $60.3\%$ accuracy and an MCC score of 0.29 (Figure 16 in Appendix H). Our results indicate that there is substantial room for improvement for LLMs on Chinese humor understanding.
114
+
115
+ # 5.2 Does chain-of-thought (CoT) help LLMs' humor understanding?
116
+
117
+ <table><tr><td>Example</td><td colspan="2">(zh)“大人有大量,小孩倒是不多了。”食人族厨师清点食材说。
118
+ (en)“Adults are plenty, but children are quite scarce,” said the cannibal chef while counting ingredients.</td></tr><tr><td>Joke Explanation</td><td colspan="2">(zh)这个笑话的幽默之处在于其运用了惊悚与诙谐的巧妙结合,以及意料之外的语言转折。食人族厨师清点食材的场景已经足够诡异,但他却以“大人有大量,小孩倒是不多了”这样似乎很平常的口吻来评论,这种对比和反差产生了荒诞的幽默感,同时也在探讨一种颠覆性的、黑暗式的玩笑。
119
+ (en)The humor in this joke lies in its clever combination of horror and wit, as well as the unexpected linguistic twist. The scene of a cannibal chef counting ingredients is already eerie, but the casual remark, &quot;Adults are plenty, but children are quite scarce,&quot; delivered in an ordinary tone, creates a stark contrast. This juxtaposition generates absurd humor while exploring a subversive, dark style of comedy.</td></tr><tr><td>Model&#x27;s Answer DP✓</td><td>(zh)部分/没有解释
120
+ (en)Partially/Not Explained</td><td>Model&#x27;s Answer CoT X (zh) ...这句话使用了一个常用的谚语“大人有大量”来双关...比较您的理解和解释:笑话解释涵盖了我的分析要点,指出了对比和反差产生的荒诞幽默感,并提到了黑暗式幽默。解释完全涵盖了笑话的所有幽默点,因此选择“完全解释”。
121
+ (en) ...This sentence uses a common proverb, &quot;adults are plenty,&quot; as a pun... Comparison of your understanding and the explanation: The joke explanation covers the key points of my analysis, highlighting the absurd humor created by contrast and reversal, and mentioning the element of dark humor. The explanation fully addresses all aspects of the joke&#x27;s humor, so “Fully Explained” is selected.</td></tr><tr><td>CoT Failure Reasons</td><td colspan="2">Observing model&#x27;s explanation shows it recognizes the pun in the joke. However, the joke explanation being evaluated misses the pun. Model correctly identifies this in DP but, in CoT, despite noticing the pun, attempts to justify the flawed explanation, leading to an incorrect judgment.</td></tr></table>
122
+
123
+ Figure 3: Over-analyzing example by GPT-4o. The GPT-4o model chooses the correct answer under DP prompting, but chooses the incorrect answer under CoT prompting due to over-analysis.
124
+
125
+ Answer: No. We observe that CoT does not necessarily improve model performance and, in most cases, even leads to performance degradation. For instance, as shown in Figure 1, the accuracy of $\mathrm{ERNIE}_{4}$ -turbo decreases from $60.3\%$ to $45.2\%$ when we switch to CoT prompting; Mistral $_{123B}$ 's performance drops from $55.6\%$ to $51.2\%$ ; GPT-4o's performance drops from $51.9\%$ to $50.6\%$ ; and GPT-4 $_{\text{turbo}}$ 's performance falls from $52.3\%$ to $51.3\%$ . Moreover, the MCC scores present a clearer trend of performance decline under CoT prompting. As shown in Figure 16 in Appendix H, eight of the ten LLMs' MCC scores decrease under CoT prompting. We hypothesize that CoT prompts may not help the model's reasoning when the model lacks a fundamental grasp of humor understanding.
126
+
127
+ We observe that under CoT prompting, models like GPT-4o tend to justify incorrect explanations as "correct", leading to an increase in false-positive rate from $80.0\%$ for DP prompting to $85.0\%$ for CoT prompting (Table 4 in Appendix H). $\mathrm{ERNIE}_{4}$ -turbo exhibits the largest false-positive rate, rising from $59.8\%$ to $96.9\%$ (Table 4 in Appendix H). Figure 3 provides an example where CoT confuses the GPT-4o model. Under the DP prompting, the GPT-4o model chooses the answer correctly. However, CoT prompting causes the model to over-analyze and justify an incorrect explanation.
128
+
129
+ On the other hand, models like Nemotron $_{70\mathrm{B}}$ may be overly critical of explanations under CoT prompting, with the false-negative rate rising from $20.9\%$ under DP prompting to $46.1\%$ under CoT prompting (Table 4 in Appendix H). We highlight that a
130
+
131
+ recent work demonstrates that CoT can degrade performance in tasks requiring subtle comprehension (Sprague et al., 2024), which aligns with our findings on its limitations in humor interpretation. Figure 14 in Appendix E discusses an example corresponding to the model being overly critical.
132
+
133
+ # 5.3 Case study: can GPT-4o and ERNIE $_{4}$ -turbo explain jokes as well as humans?
134
+
135
+ Answer: No. Apart from testing multiple LLMs on Chumor, we conduct case studies on GPT-4o and ERNIE $_{4}$ -turbo to assess the quality of their joke explanations compared to humans. We prompt them to explain the humor in two sentences, consistent with the format of human explanations. Here is the prompt we feed to both LLMs:
136
+
137
+ # Prompt
138
+
139
+ 请用两句话解释这个笑话的幽默之处: [joke]
140
+
141
+ Please explain in two sentences what makes this joke funny: [joke]
142
+
143
+ Data Annotation. As demonstrated by Hessel et al. (2023), crowd-sourcing typically cannot produce high-quality explanations. Following Hessel et al. (2023), one of the authors annotates all the explanations to ensure quality and consistency.
144
+
145
+ This is a substantial effort: the author ended up annotating the explanations for 1,951 jokes. The resulting corpus has a mean of 78 Chinese characters of explanation per joke, and the total length, 151,730 Chinese characters, is comparable
146
+
147
+ ![](images/f29ecc8a0d970beaa864265793e8e2763bf420cfe75aaaa9c975bdd6a3d526dc.jpg)
148
+ Figure 4: Annotated preference for whether human explanation is preferred ("Human wins") or the explanation from LLMs is preferred ("LLM wins"). Humans' explanation is significantly preferred over LLMs'.
149
+
150
+ in length to a novella†.
151
+
152
+ Evaluation Setup. To fairly evaluate which explanation is better, we conduct A/B testing by presenting the humor explanation from one LLM and from a human to six college students, asking them to annotate their preferred explanation for each joke. These college students are native Chinese speakers who grew up in China, and therefore have a deep understanding of cultural terms and trending terms in China. We note that the preference annotation requires a substantial effort, as each annotator reads through a total of around 300k Chinese characters†. We end up with three preference annotations for each joke. The preference annotations achieve a $61.4\%$ agreement rate among annotators (Appendix B).
153
+
154
+ We use the winning rate as our measure to compare LLMs' explanation versus human explanation, taking the majority vote among all annotators for each example. In addition, if all annotators disagree, we assign an "Undecided" label. Appendix C provides the annotation instructions we present to the annotators.
155
+
156
+ Overall Results. Figure 4 reports the winning rate of explanations from humans versus GPT-4o and ERNIE $_{4\text{-turbo}}$ . We can see that human explanations are significantly better than those from both LLMs, with humans winning over $50\%$ of the time, while LLMs win in only $2 - 3\%$ of cases.
157
+
158
+ Error Analysis. Figure 5 shows the overall distribution of error types for GPT-4o and ERNIE $_{4\text{-}\text{turbo}}$ on Chumor in terms of their humor explanations. This error analysis is conducted by an individual who is not involved in writing the original explanations, ensuring an unbiased evaluation. GPT-4o
159
+
160
+ ![](images/a8d2f4eda47cac68bcfb4deb23736b0a2064f14ab806d5e36881a22d801cd584.jpg)
161
+ Figure 5: Distribution of error types for GPT-4o and ERNIE $_{4\text{-}\text{turbo}}$ . We sample 200 examples to calculate the distribution of these error types. We note that each example may correspond to multiple error types. We highlight that ERNIE $_{4\text{-}\text{turbo}}$ demonstrates a lower error rate on cultural jokes, while GPT-4o demonstrates a lower error rate on contextual or pun-based jokes.
162
+
163
+ is more prone to errors categorized as "cultural unawareness" (29.5% of all its explanations) compared to $\mathrm{ERNIE}_{4\text{-}\text{turbo}}$ (10.5%). We suspect that $\mathrm{ERNIE}_{4\text{-}\text{turbo}}$ is more familiar with Chinese culture as it is likely trained on a larger Chinese corpus than GPT-4o. However, GPT-4o performs better on cases requiring an understanding of contexts or puns, suggesting its strong reasoning ability. We provide three error cases for GPT-4o here and additional cases for both GPT-4o and $\mathrm{ERNIE}_{4\text{-}\text{turbo}}$ in Appendix E. In the following examples in Figure 6, Figure 7 and Figure 8, we highlight key phrases that induce humor in green, and underscore the errors in red.
164
+
165
+ Error Type I: Cultural Unawareness. LLMs may fail to explain a joke due to a lack of awareness of certain cultural knowledge. For instance, the example in Figure 6 requires knowledge of a superstitious belief in Chinese culture, namely that getting a haircut in the first lunar month brings death to one's uncle, and the explanation from GPT-4o fails to connect to this Chinese cultural belief. We hypothesize that while LLMs are pre-trained on Internet-scale corpora, such culturally specific knowledge can still be challenging for them to grasp. Moreover, even when they have acquired such cultural knowledge, they may fail to relate to it as we humans do during the reasoning process.
166
+
167
+ Error Type II: Pun-based Error. LLMs may fail to identify multiple meanings of a single word within a joke, causing them to fail on pun-based
168
+
169
+ <table><tr><td>Example</td><td>(zh) 小明在正月接发竟导致舅舅复活。 (en) Xiaoming got hair extensions during the first lunar month, which astonishingly brought his uncle back to life.</td></tr><tr><td>Correct Humor Explanation</td><td>“Getting a haircut in the first lunar month brings death to your uncle.” is a popular superstitious saying in Chinese culture. In this joke, Xiao Ming gets hair extensions in the first month, which reverses the original logic and absurdly results in “bringing his uncle back to life.”</td></tr><tr><td>GPT-4o's Answer</td><td>(zh) ...它利用了“接发”和“接发”的双关语:一方面是指理发店的接发服务,另一方面是指正月里“接福”的传统习俗。 (en) ...the pun on “接发”: it refers to both hair extension at a salon and the traditional practice of “receiving blessings” during the first lunar month.</td></tr><tr><td>Failure Reasons</td><td>Fail to grasp expressions, slang, or specific usages unique to a particular cultural context.</td></tr></table>
170
+
171
+ Figure 6: Cultural unawareness example.
172
+
173
+ <table><tr><td rowspan="2">Example</td><td>(zh) 你可以在steam上找到GTA,所以水是DNA。</td></tr><tr><td>(en) You can find GTA on Steam,so water is DNA.</td></tr><tr><td>Correct
174
+ Humor
175
+ Explanation</td><td>Normally,“Steam”refers to a gaming platform and “GTA”refers to the “Grand Theft Auto” game series. The joke uses a pun, where “steam”literally means the gaseous form of water, and “G”,“T”,and “A” represent Guanine, Thymine, and Adenine, respectively—three of the four nucleotides (AGTC) that make up DNA.</td></tr><tr><td rowspan="2">GPT-4o&#x27;s
176
+ Answer</td><td>(zh)...这种明显错误的类比制造了荒诞的效果,引人发笑。</td></tr><tr><td>(en)...This clearly wrong analogy creates an absurd effect.</td></tr><tr><td>Failure
177
+ Reasons</td><td>Fail to discover multiple meanings of a word in the jokes</td></tr></table>
178
+
179
+ jokes where humor lies in inverting the conventional usage of words. In Figure 7, GPT-4o fails to grasp the transition from the video game terms "Steam" and "GTA" to the scientific terms "G", "T", and "A" that form DNA. Typically, "Steam" refers to a game platform, and "GTA" refers to the game series "Grand Theft Auto". The joke employs a pun where "steam" in its literal sense means water vapor, and "GTA" can represent not only the video game, but also guanine, thymine, and adenine, which are nucleotides involved in the structure of DNA. Such jokes require LLMs to identify puns and reason about the association between the multiple meanings. Furthermore, the process requires LLMs to bridge the logic gap between these terms, such as
180
+
181
+ Figure 7: Pun-based error example.
182
+
183
+ <table><tr><td>Example</td><td>(zh) 家里的猪油没了,小明只能把 植物油倒快点 当猪油用了。 (en) With the lard gone, Xiaoming had to pour the vegetable oil quickly to use it like lard.</td></tr><tr><td>Correct
184
+ Humor
185
+ Explanation</td><td>When pronounced quickly,“vegetable” (pronounced as chih woo) sounds very similar to “pig” (pronounced as choo) in Chinese, leading Xiaoming to comically try using vegetable oil as lard by pouring it quickly.</td></tr><tr><td>GPT-4o&#x27;s
186
+ Answer</td><td>(zh) ...小明误解了猪油和植物油的区别,认为只要倒得快,植物油就能变成猪油。笑点在于他天真的逻辑和对常识的搞笑误解。 (en) ... Xiaoming misunderstood that if vegetable oil is poured quickly, it can turn into lard, highlighting his naive logic and misinterpretation of common sense.</td></tr><tr><td>Failure
187
+ Reasons</td><td>Fail to exploit homophones or near-homophones in the jokes.</td></tr></table>
188
+
189
+ Figure 8: Homophonic error example.
190
+
191
+ "steam" and "GTA", and an unusual context, like "water is DNA". The overall process requires both scientific knowledge and creative thinking, which are challenging for LLMs.
192
+
193
+ Error Type III: Homophonic Error. The example in Figure 8 requires LLMs to reason over the pronunciation as “植物” (pronounced as chih woo, meaning “vegetable”) sounds very similar to “猪” (pronounced as choo, meaning “pig”) in Chinese when we speak it fast enough. The humor arises from the contrast between the similarity in pronunciation and the disparity in meaning between the two terms. Such contrasts may be sparse in the training corpus of LLMs, and also demand a deep connection across different modalities to link pronunciation with the meaning behind these terms, which poses significant challenges to LLMs.
194
+
195
+ # 6 Conclusion
196
+
197
+ We introduce Chumor, a Chinese humor understanding dataset that captures intellectually challenging and culturally specific humor in Chinese. Our analysis reveals that Chumor remains difficult even for advanced LLMs, with a significant performance gap between LLMs and humans. Furthermore, we find that chain-of-thought reasoning does not improve LLMs' humor comprehension and, in some cases, leads to over-analysis and incorrect interpretations. Additionally, models such as GPT-4o and ERNIE $_{4}$ -turbo struggle to explain jokes as effectively as humans, highlighting fundamental challenges in humor reasoning. These findings
198
+
199
+ underscore the unique difficulties that Chinese humor presents to LLMs. We hope that Chumor can advance non-English humor research and contribute to evaluating LLMs' reasoning abilities across diverse cultural backgrounds.
200
+
201
+ # Limitations
202
+
203
+ We try our best to test the Chinese humor understanding ability of different LLMs. However, due to the limited budget and API access, we cannot evaluate all possible LLMs in this paper. We encourage future research to conduct further evaluations of humor understanding abilities in LLMs. In the meantime, we emphasize that our research focuses primarily on demonstrating how humor understanding remains a significant challenge, even for SOTA LLMs. Our work shows that along with many other problems (Ignat et al., 2024), humor understanding, especially non-English and culturally specific humor understanding, remains an unsolved problem in the era of LLMs. We hope Chumor can contribute to non-English humor understanding evaluations for future multilingual LLMs.
204
+
205
+ # Ethics Statement
206
+
207
+ We have made every effort to filter out excessively offensive content in RZB. However, due to the subjective nature of humor, some of our jokes may still be perceived as offensive by individuals with different cultural or personal standards. To address these concerns, we strongly recommend that researchers use Chumor with cultural sensitivity, recognizing that the jokes in the dataset reflect the sociocultural context in which they were created. We encourage users of Chumor to approach the dataset with caution, remaining mindful of its potential to cause offense or harm, particularly when applying it in research or applications that involve diverse audiences or address sensitive topics. We wish to foster an ethical and responsible approach to data collection and usage, and we welcome constructive feedback from the research community and stakeholders to continually improve Chumor and mitigate potential harm.
208
+
209
+ # Acknowledgement
210
+
211
+ The GPT experiments are supported by credits from OpenAI through OpenAI Researcher Access assigned to Naihao Deng. We appreciate Qiang Liu and Xiaoyue Shi for helping with the human study.
212
+
213
+ # References
214
+
215
+ 01.ai. 2024. Yi-34b model card. https://huggingface.co/01-ai/Yi-34B. Accessed: 2024-12-10.
216
+ Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, and 8 others. 2024. Phi-4 technical report. Preprint, arXiv:2412.08905.
217
+ Badr AlKhamissi, Muhammad ElNokrashy, Mai Alkhamissi, and Mona Diab. 2024. Investigating cultural alignment of large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12404-12422, Bangkok, Thailand. Association for Computational Linguistics.
218
+ Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, and 29 others. 2023. Qwen technical report. Preprint, arXiv:2309.16609.
219
+ Yuelin Bai, Xinrun Du, Yiming Liang, Yonggang Jin, Ziqiang Liu, Junting Zhou, Tianyu Zheng, Xincheng Zhang, Nuo Ma, Zekun Wang, and 1 others. 2024. Coig-cqia: Quality is all you need for chinese instruction fine-tuning. arXiv preprint arXiv:2403.18058.
220
+ Baidu. 2024. Ernie-4.0-turbo. https://cloud.baidu. com/doc/WENXINWORKSHOP/s/71xwwtafj. Accessed: 2024-12-10.
221
+ Dario Bertero and Pascale Fung. 2016. Deep learning of audio and language features for humor prediction. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 496-501, Portorož, Slovenia. European Language Resources Association (ELRA).
222
+ BigModel. 2024. Glm-4 model documentation. https://bigmodel.cn/dev/howuse/glm-4. Accessed: 2024-12-10.
223
+ Lei Chen and Chong Min Lee. 2017. Predicting audience's laughter during presentations using convolutional neural network. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 86-90, Copenhagen, Denmark. Association for Computational Linguistics.
224
+ Yuyan Chen, Zhixu Li, Jiaqing Liang, Yanghua Xiao, Bang Liu, and Yunwen Chen. 2023. Can pre-trained language models understand chinese humor? In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, WSDM '23, page 465-480, New York, NY, USA. Association for Computing Machinery.
225
+
226
+ Yuyan Chen, Yichen Yuan, Panjun Liu, Dayiheng Liu, Qinghao Guan, Mengfei Guo, Haiming Peng, Bang Liu, Zhixu Li, and Yanghua Xiao. 2024. Talk funny! a large-scale humor response dataset with chain-of-humor interpretation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16):17826-17834.
227
+ Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. Preprint, arXiv:1803.05457.
228
+ Peter T. Daniels and William Bright. 1996. The world's writing systems. Oxford University Press.
229
+ Naihao Deng, Zhenjie Sun, Ruiqi He, Aman Sikka, Yu-long Chen, Lin Ma, Yue Zhang, and Rada Mihalcea. 2024. Tables as images? exploring the strengths and limitations of llms on multimodal representations of tabular data. arXiv preprint arXiv:2402.12424.
230
+ Naihao Deng, Xinliang Zhang, Siyang Liu, Winston Wu, Lu Wang, and Rada Mihalcea. 2023. You are what you annotate: Towards better models through annotator representations. In *Findings of the Association for Computational Linguistics: EMNLP* 2023, pages 12475–12498, Singapore. Association for Computational Linguistics.
231
+ Xinrun Du, Zhouliang Yu, Songyang Gao, Ding Pan, Yuyang Cheng, Ziyang Ma, Ruibin Yuan, Xingwei Qu, Jiaheng Liu, Tianyu Zheng, and 1 others. 2024. Chinese tiny llm: Pretraining a chinese-centric large language model. arXiv preprint arXiv:2404.04167.
232
+ Tomas Engelthaler and Thomas T Hills. 2018. Humor norms for 4,997 english words. Behavior research methods, 50:1116-1124.
233
+ William F. Fry. 1994. The biology of humor. HUMOR, 7(2):111-126.
234
+ Marc Gelkopf and 1 others. 2011. The use of humor in serious mental illness: A review. Evidence-Based Complementary and Alternative Medicine, 2011.
235
+ Matthew Gervais and David Sloan Wilson. 2005. The evolution and functions of laughter and humor: A synthetic approach. *The Quarterly review of biology*, 80(4):395-430.
236
+ Hamideh Ghanadian, Isar Nejadgholi, and Hussein Al Osman. 2023. ChatGPT for suicide risk assessment on social media: Quantitative evaluation of model performance, potentials and limitations. In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, pages 172-183, Toronto, Canada. Association for Computational Linguistics.
237
+ Google. 2024. Gemini 1.5 pro model documentation. https://ai.google.dev/gemini-api/docs/ models/gemini#gemini-1.5-pro. Accessed: 2024-12-10.
238
+
239
+ Md Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, and Mohammed (Ehsan) Hoque. 2019. UR-FUNNY: A multimodal language dataset for understanding humor. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2046-2056, Hong Kong, China. Association for Computational Linguistics.
240
+ Jack Hessel, Ana Marasovic, Jena D. Hwang, Lillian Lee, Jeff Da, Rowan Zellers, Robert Mankoff, and Yejin Choi. 2023. Do androids laugh at electric sheep? humor "understanding" benchmarks from the new yorker caption contest. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688-714, Toronto, Canada. Association for Computational Linguistics.
241
+ Nabil Hossain, John Krumm, and Michael Gamon. 2019. "president vows to cut <taxes> hair": Dataset and analysis of creative text editing for humorous headlines. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133-142, Minneapolis, Minnesota. Association for Computational Linguistics.
242
+ Oana Ignat, Zhijing Jin, Artem Abzaliev, Laura Biester, Santiago Castro, Naihao Deng, Xinyi Gao, Aylin Ece Gunal, Jacky He, Ashkan Kazemi, Muhammad Khalifa, Namho Koh, Andrew Lee, Siyang Liu, Do June Min, Shinka Mori, Joan C. Nwatu, Veronica Perez-Rosas, Siqi Shen, and 3 others. 2024. Has it all been solved? open NLP research questions not solved by large language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8050-8094, Torino, Italia. ELRA and ICCL.
243
+ Herbert M Lefcourt. 2001. *Humor: The psychology of living buoyantly*. Springer Science & Business Media.
244
+ Zefeng Li, Hongfei Lin, Liang Yang, Bo Xu, and Shaowu Zhang. 2022. Memeplate: A chinese multimodal dataset for humor understanding in meme templates. In *Natural Language Processing and Chinese Computing*, pages 527-538, Cham. Springer International Publishing.
245
+ Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. 2023a. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439.
246
+ Siyang Liu, Naihao Deng, Sahand Sabour, Yilin Jia, Minlie Huang, and Rada Mihalcea. 2023b. Task-adaptive tokenization: Enhancing long-form text generation efficacy in mental health and beyond. In Proceedings of the 2023 Conference on Empirical Methods
247
+
248
+ in Natural Language Processing, pages 15264-15281, Singapore. Association for Computational Linguistics.
249
+ Xin Liu, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Min Zhang, Haiying Zhang, and Jinsong Su. 2021. Bridging subword gaps in pretrain-finetune paradigm for natural language generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6001-6011, Online. Association for Computational Linguistics.
250
+ Paul E McGhee. 1971. Development of the humor response: A review of the literature. Psychological Bulletin, 76(5):328.
251
+ Rada Mihalcea, Oana Ignat, Longju Bai, Angana Borah, Luis Chiruzzo, Zhijing Jin, Claude Kwizera, Joan Nwatu, Soujanya Poria, and Thamar Solorio. 2024. Why ai is weird and should not be this way: Towards ai for everyone, with everyone, by everyone. arXiv preprint arXiv:2410.16315.
252
+ Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recognition. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 531-538, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
253
+ Mistral.ai. 2024. Mistral-large-instruct-2407 model card. https://huggingface.co/mistralai/Mistral-Large-Instruct-2407. Accessed: 2024-12-10.
254
+ Nexusflow. 2024. Athene-70b model card. https://huggingface.co/Nexusflow/Athene-70B. Accessed: 2024-12-10.
255
+ NVIDIA. 2024. Llama-3.1-nemotron-70b-instruct-hf model card. https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF. Accessed: 2024-12-10.
256
+ OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
257
+ OpenAI. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.
258
+ Siddhesh Pawar, Junyeong Park, Jiho Jin, Arnav Arora, Junho Myung, Srishti Yadav, Faiz Ghifari Haznitrama, Inhwa Song, Alice Oh, and Isabelle Augenstein. 2024. Survey of cultural awareness in language models: Text and beyond. arXiv preprint arXiv:2411.00860.
259
+ Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. SemEval-2017 task 6: #HashtagWars: Learning a sense of humor. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 49-57, Vancouver, Canada. Association for Computational Linguistics.
260
+
261
+ Qwen. 2024. Qwen2.5-72b-instruct model card. https://huggingface.co/Qwen/Qwen2.5-72B-Instruct. Accessed: 2024-12-10.
262
+ Dragomir Radev, Amanda Stent, Joel Tetreault, Aasish Pappu, Aikaterini Iliakopoulou, Agustin Chanfreau, Paloma de Juan, Jordi Vallmitjana, Alejandro Jaimes, Rahul Jha, and Robert Mankoff. 2016. Humor in collective discourse: Unsupervised funniness detection in the new yorker cartoon caption contest. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 475-479, Portorož, Slovenia. European Language Resources Association (ELRA).
263
+ David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2023. Gpqa: A graduate-level google-proof q&a benchmark. Preprint, arXiv:2311.12022.
264
+ Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas PYKL, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Björn Gambäck. 2020. SemEval-2020 task 8: Memotion analysis: the visuo-lingual metaphor! In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 759-773, Barcelona (online). International Committee for Computational Linguistics.
265
+ Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, and Rada Mihalcea. 2024. Understanding the capabilities and limitations of large language models for cultural commonsense. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5668-5680, Mexico City, Mexico. Association for Computational Linguistics.
266
+ Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, Peerat Limkonchotiwat, Kelly Marchisio, Wei Qi Leong, Yosephine Susanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Madeline Smith, Antoine Bosselut, Alice Oh, Andre F. T. Martins, Leshem Choshen, Daphne Ippolito, and 4 others. 2024. Global mmlu: Understanding and addressing cultural and linguistic biases in multilingual evaluation. Preprint, arXiv:2412.03304.
267
+ Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. 2024. To cot or not to cot? chain-of-thought helps mainly on math and symbolic reasoning. Preprint, arXiv:2409.12183.
268
+ Honglin Sun and Daniel Jurafsky. 2004. Shallow semantic parsing of Chinese. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 249-256, Boston, Massachusetts, USA. Association for Computational Linguistics.
269
+
270
+ Weiwei Sun, Zhifang Sui, Meng Wang, and Xin Wang. 2009. Chinese semantic role labeling with shallow parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1475-1483, Singapore. Association for Computational Linguistics.
271
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. Preprint, arXiv:2302.13971.
272
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023b. Llama 2: Open foundation and fine-tuned chat models. Preprint, arXiv:2307.09288.
273
+ Yuen-Hsien Tseng, Wun-Syuan Wu, Chia-Yueh Chang, Hsueh-Chih Chen, and Wei-Lun Hsu. 2020. Development and validation of a corpus for machine humor comprehension. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1346-1352, Marseille, France. European Language Resources Association.
274
+ Ashmal Vayani, Dinura Dissanayake, Hasindri Watawana, Noor Ahsan, Nevasini Sasikumar, Omkar Thawakar, Henok Biadglin Ademtew, Yahya Hmaiti, Amandeep Kumar, Kartik Kuckreja, and 1 others. 2024. All languages matter: Evaluating lmms on culturally diverse 100 languages. arXiv preprint arXiv:2411.16508.
275
+ Benyou Wang, Xiang Wu, Xiaokang Liu, Jianquan Li, Prayag Tiwari, and Qianqian Xie. 2022. Can language models make fun? a case study in chinese comical crosstalk. In Annual Meeting of the Association for Computational Linguistics.
276
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
277
+ Orion Weller and Kevin Seppi. 2020. The rJokes dataset: a large scale humor collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6136-6141, Marseille, France. European Language Resources Association.
278
+ Jiaming Wu, Hongfei Lin, Liang Yang, and Bo Xu. 2021. Mumor: A multimodal dataset for humor detection in conversations. In *Natural Language Processing and Chinese Computing: 10th CCF International Conference*, NLPCC 2021, Qingdao, China, October 13–17, 2021, Proceedings, Part I, page 619–627, Berlin, Heidelberg. Springer-Verlag.
279
+
280
+ Yufan Wu, Yinghui He, Yilin Jia, Rada Mihalcea, Yulong Chen, and Naihao Deng. 2023. Hi-ToM: A benchmark for evaluating higher-order theory of mind reasoning in large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10691-10706, Singapore. Association for Computational Linguistics.
281
+ Diyi Yang, Alon Lavie, Chris Dyer, and Eduard Hovy. 2015. Humor recognition and humor anchor extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2367-2376, Lisbon, Portugal. Association for Computational Linguistics.
282
+ Dongyu Zhang, Heting Zhang, Xikai Liu, Hongfei Lin, and Feng Xia. 2019. Telling the whole story: A manually annotated chinese dataset for the analysis of humor in jokes. In Conference on Empirical Methods in Natural Language Processing.
283
+ Min Zhang, Jianfeng He, Taoran Ji, and Chang-Tien Lu. 2024a. Don't go to extremes: Revealing the excessive sensitivity and calibration limitations of llms in implicit hate speech detection. Preprint, arXiv:2402.11406.
284
+ Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. 2024b. Benchmarking Large Language Models for News Summarization. Transactions of the Association for Computational Linguistics, 12:39-57.
285
+ Jun Zhao, Zhihao Zhang, Qi Zhang, Tao Gui, and Xuanjing Huang. 2024. Llama beyond english: An empirical study on language capability transfer. arXiv preprint arXiv:2401.01055.
286
+ Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 2023. Instruction-following evaluation for large language models. Preprint, arXiv:2311.07911.
287
+
288
+ # A Contributions
289
+
290
+ Idea Proposal. Naihao Deng proposed the high-level idea of constructing a humor understanding benchmark sourced from RZB data.
291
+
292
+ Background Survey. Ruiqi He surveyed the humor-related tasks.
293
+
294
+ Data Processing. Ruiqi He crawled and processed the jokes from RZB.
295
+
296
+ Annotation. Ruiqi He annotated the explanations for the RZB jokes. Yushu He, Longju Bai, Jiarui Liu, Zhenjie Sun, Zhenghao Tang, He Wang, and Naihao Deng conducted the preference annotations.
297
+
298
+ Experiments. Ruiqi He, Hanchen Xia, and Naihao Deng conducted the experiments.
299
+
300
+ Result Aggregation. Ruiqi He, Naihao Deng, Yushu He aggregated the results.
301
+
302
+ Paper Writing. Ruiqi He and Naihao Deng drafted the paper. Other authors provided revisions and feedback on the paper.
303
+
304
+ Naihao Deng organized the research.
305
+
306
+ # B Agreement Rate Calculation
307
+
308
+ We calculate the percentage agreement rate among annotators who annotate their preferences between explanations from LLMs and humans. The results show an average inter-annotator agreement of $61.9\%$ for GPT-4o and $60.9\%$ for $\mathrm{ERNIE}_{4}$-turbo. Given the inherent subjectivity of humor interpretation tasks (Deng et al., 2023), the combined average agreement percentage of $61.4\%$ is decent.
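+
+ To make the computation above concrete, the following is a minimal sketch of how such an average pairwise percentage agreement can be computed. The label names and the toy data are purely illustrative and are not taken from our annotation files.
+
+ ```python
+ from itertools import combinations
+
+ def percentage_agreement(annotations):
+     """Average pairwise percentage agreement across annotators.
+
+     `annotations` is a list of per-annotator label lists (one label per item),
+     e.g. labels in {"explanation_1", "explanation_2", "equally_good"}.
+     """
+     pair_scores = []
+     for a, b in combinations(annotations, 2):
+         matches = sum(1 for x, y in zip(a, b) if x == y)
+         pair_scores.append(matches / len(a))
+     return sum(pair_scores) / len(pair_scores)
+
+ # Toy example with three annotators labeling four jokes.
+ annotators = [
+     ["explanation_1", "equally_good", "explanation_2", "explanation_1"],
+     ["explanation_1", "explanation_2", "explanation_2", "explanation_1"],
+     ["equally_good", "equally_good", "explanation_2", "explanation_1"],
+ ]
+ print(f"average pairwise agreement: {percentage_agreement(annotators):.3f}")
+ ```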
309
+
310
+ # C Annotation Instructions for Preference Annotation
311
+
312
+ We include the following instructions for the preference annotations of the joke explanations:
313
+
314
+ # Instruction
315
+
316
+ “在这个标注中,你将会看到一个笑话和对这个笑话的幽默之处的两个解释,请你比较哪个解释更好的解释了这个笑话的幽默之处,并从以下三个标签中选择:
317
+
318
+ 1. 解释1
319
+ 2. 解释2
320
+ 3. 一样好”
321
+
322
+ # Translation
323
+
324
+ "In this annotation task, you will see a joke along with two explanations of its humor. Please compare which explanation better explains the reason why this joke is funny and choose from the following three labels:
325
+
326
+ 1. Explanation 1
327
+ 2. Explanation 2
328
+ 3. Both are equally good."
329
+
330
+ For each example, we randomly assign the explanations from the LLMs and the human as Explanation 1 and Explanation 2 to ensure a fair comparison.
331
+
332
+ # D Discussion on Evaluation Setting
333
+
334
+ Why Zero-Shot w.o. SFT? The primary research objective of this paper is to determine how well foundational LLMs can understand Chinese humor without relying on supervised fine-tuning for this binary classification task. The focus is on investigating the innate humor-understanding ability of these models through zero-shot and zero-shot CoT prompting. This aligns with the overarching goal of examining their general capabilities without additional task-specific training. From a human behavior perspective, individuals generally acquire a sense of humor through exposure and experience rather than explicit classroom instruction (McGhee, 1971; FRY, 1994; Gervais and Wilson, 2005). Analogously, our purpose lies in evaluating the models' intrinsic ability to recognize and interpret humor without deliberate, task-specific fine-tuning. Moreover, the experimental design follows practices from other benchmarks, such as GPQA (Rein et al., 2023), AI2ARC (Clark et al., 2018), and IFEVAL (Zhou et al., 2023), which do not provide predefined train/dev/test splits. Under these conditions, it is common to assess models in a zero-shot manner to directly evaluate their capabilities on each respective task (Touvron et al., 2023a,b; Bai et al., 2023; Abdin et al., 2024).
335
+
336
+ <table><tr><td rowspan="2">Example</td><td>(zh)真可怕,犯罪嫌疑人就在我们之中,被告席上一名法警对另一名法警说。</td></tr><tr><td>(en)“Terrifying, the criminal suspect is right between the two of us,” said one bailiff to another in the defendant's dock.</td></tr><tr><td>Correct Humor Explanation</td><td>“between us” can refer to “either one of us” or literally mean the actual physical position.</td></tr><tr><td rowspan="2">GPT-4o's Answer</td><td>(zh)…暗示他们自己可能是犯罪嫌疑人…</td></tr><tr><td>(en)…it suggests the bailiffs themselves might be criminal suspects…</td></tr><tr><td>Failure Reasons</td><td>Fail to address the literal meaning.</td></tr></table>
337
+
338
+ # E More Error Cases
339
+
340
+ We note that many examples here encompass multiple error types, highlighting the complexity of Chumor.
341
+
342
+ Insufficient Contextual Understanding. LLMs may fail to ground their responses to the context when they explain the joke. For instance, in the example in Figure 9, "between us" typically means "either you or me", but it also has the literal meaning to indicate the person standing "between us", which is the right interpretation given that the two bailiffs are talking about the criminal. However, GPT-4o only reasons that "the criminal is either you or me" but fails to capture the literal meaning from the context. We hypothesize that in the pretraining corpus, "between us" most likely acquires the meaning of "either you or me" rather than the literal meaning in a scenario like this, which creates a bias that prevents the model from reasoning about the literal interpretation required for this specific explanation.
343
+
344
+ Glyph-based Error. LLMs may fail to interpret the visual puns based on the shape or form of Chinese characters. Glyph-based humor in Chinese leverages its logographic writing system, where characters integrate both semantic and visual elements. Unlike the phonemic alphabet used in English, Chinese characters' pictorial and ideographic nature allows for visual puns in jokes (Daniels and Bright, 1996). In the example in Figure 10, "simplify" does not refer to simplifying the trolley problem conceptually, but to simplifying the traditional Chinese characters to simplified Chinese
345
+
346
+ Figure 9: Insufficient contextual understanding example.
347
+
348
+ <table><tr><td rowspan="2">Example</td><td>(zh)我把電車難題簡化了,現在是电车难题。</td></tr><tr><td>(en)I simplified the trolley problem (in traditional Chinese), now it's the trolley problem (in simplified Chinese).</td></tr><tr><td>Correct Humor Explanation</td><td>The joke does not actually simplify the scenario of the trolley problem; instead, it converts the traditional Chinese characters “電車難題” into the simplified Chinese characters “电车难题”.</td></tr><tr><td rowspan="2">GPT-4o's Answer</td><td>(zh)...这个笑话的幽默之处在于通过将“電車難題”中的“難”字去掉,变成了“电车題”。</td></tr><tr><td>(en)...The humor of this joke lies in removing the character “hard” from “the hard trolley problem”, turning it into “the trolley problem”.</td></tr><tr><td>Failure Reasons</td><td>Fail to interpret the visual puns in the shape or form of the characters.</td></tr></table>
349
+
350
+ Figure 10: Glyph-based error example.
351
+
352
+ <table><tr><td rowspan="2">Example</td><td>(zh) 小王订了张飞去北京的机票,给张飞省了一大笔钱。</td></tr><tr><td>(en) Xiaowang booked an airline ticket to Beijing, saving Zhang Fei a lot of money.</td></tr><tr><td rowspan="2">Correct
353
+ Humor
354
+ Explanation</td><td>There are two ways to parse the first half of the sentence:
355
+ (1) 小王/订了/张/飞去/北京的/机票 XiaoWang/ booked/ a/ ticket to fly to Beijing.
356
+ (2) 小王/订了/张飞/去北京的/机票 XiaoWang/ booked/ Zhangfei/ a ticket/ to Beijing.</td></tr><tr><td>Typically, people would interpret it in the first way, as Zhang Fei is a historical figure and does not appear in daily conversation. However, the second half of the sentence confirms that the second way of parsing turns out to be correct. Such contrast is hilarious.</td></tr><tr><td rowspan="2">GPT-4o's
357
+ Answer</td><td>(zh) ...由于历史人物张飞已经不在世,所以给他省了一大笔钱。</td></tr><tr><td>(en) ... since the historical figure Zhang Fei is dead and cannot take a flight, the joke humorously suggests that he saved a lot of money.</td></tr><tr><td>Failure
358
+ Reasons</td><td>Fail to parse textual elements in a sentence dynamically.</td></tr></table>
359
+
360
+ Figure 11: Parsing error example.
361
+
362
+ characters, as the traditional Chinese characters are also referred to as "complicated characters". However, LLMs struggle to reason about such graphemic differences, as there are no explicit connections between the textual meaning and the visual representations of the glyphs.
363
+
364
+ Parsing Error. LLMs often fail to parse sentences in multiple ways simultaneously, leading to difficulties in explaining jokes that require different parsing for the same sentence. In the example in Figure 11, the humor hinges on the ambiguity of
365
+
366
+ the phrase "张飞", which can be interpreted either as part of a verb phrase implying "a ticket flying to Beijing" or as a proper noun referring to the historical figure Zhang Fei. This ambiguity stems from the flexibility of the Chinese language, where each character can function independently as a word or combine with others to form new words or phrases. There are decades of research on parsing Chinese (Sun and Jurafsky, 2004; Sun et al., 2009). Recently, researchers have proposed task-specific tokenization approaches that adapt the parsing process to better align with downstream tasks (Liu et al., 2021, 2023b). However, incorporating multiple ways of parsing a sentence at the same time remains challenging.
367
+
368
+ <table><tr><td>Example</td><td>(zh) 五步蛇的危险性是李白的两倍。 (en) The danger of a five-step snake is twice that of Li Bai.</td></tr><tr><td rowspan="2">Correct Humor Explanation</td><td>“Kill one in ten steps” is a line from a poem by the ancient Chinese poet Li Bai, suggesting that while it takes Li Bai ten steps to kill, a person can die within five steps of being bitten by the “Five-Step Snake”, hence it is twice as dangerous as Li Bai.</td></tr><tr><td>(zh) …李白的名字听起来像“二步”,所以“五步蛇”的危险性被戏称为是“二步”的两倍。</td></tr><tr><td>GPT-4o's Answer</td><td>(en) …Li Bai's (pronounced as lee bye) name sounds similar to “two steps” (pronounced as r boo) in Chinese, so the danger of “Five-Step Snake” is humorously claimed to be twice that of “two steps”</td></tr><tr><td>Failure Reasons</td><td>Fabricate ungrounded details or facts.</td></tr></table>
369
+
370
+ Figure 12: Hallucinations example.
371
+
372
+ Hallucinations. LLMs may fabricate ungrounded details or facts in joke explanations. For instance, in the explanation in Figure 12, GPT-4o claims that "Li Bai's name sounds similar to two steps", while "Li Bai" (pronounced as lee bye) does not sound like "two steps" (pronounced as r boo).
373
+
374
+ On the other hand, the correct explanation requires an understanding of a line from a poem by Li Bai, "十步杀一人" (The warrior kills a person for every ten steps). This line praises the courage of the soldiers, but the joke deliberately portrays this as a characteristic of Li Bai. Therefore, compared to Li Bai, who can kill a person in ten steps, a five-step snake, which can kill a person in five steps, is twice as dangerous as Li Bai. Such an explanation requires LLMs to have a deep understanding of Chinese
375
+
376
+ culture and reason over cultural references, posing a great challenge to current LLMs. Although recent works have made progress towards building LLMs beyond English (Du et al., 2024; Zhao et al., 2024), building an LLM that can comprehend such nuanced Chinese cultural terms can be extremely hard.
377
+
378
+ <table><tr><td>Example</td><td>(zh)曹操于城楼上问夏侯惇:“你瞧到了什么。”夏侯惇说:“瞧到马岱。”(en) Cao Cao, from atop the city tower, asked Xiahou Dun, “What did you see?” Xiahou Dun replied, “I saw Ma Dai.”</td></tr><tr><td>Correct Humor Explanation</td><td>Cao Cao, Xiahou Dun, and Ma Dai were historical figures from the Three Kingdoms period in ancient China. The phrase “saw Ma Dai” (pronounced as chuh dow ma die) not only literally refers to spotting Ma Dai but also plays on its similar pronunciation to the Japanese phrase “ちょっと待って” (chotto matte), which means “wait a moment.”</td></tr><tr><td>GPT-4o's Answer</td><td>(zh) ... 这个笑话的幽默之处在于利用了“马岱”这个名字的谐音梗,听起来像“妈的”,让人误以为夏侯惇在骂人。(en) ... a pun involving the name "Ma Dai," which sounds like "damn" in Chinese. It sounds like Xiahou Dun is swearing at Cao Cao.</td></tr><tr><td>Failure Reasons</td><td>Fail to recognize information/elements across different languages.</td></tr></table>
379
+
380
+ Cross-lingual Error. LLMs may fail to recognize elements or information across different languages. In the explanation in Figure 13, GPT-4o attempts to link the pronunciation of "Ma Dai" to other Chinese terms but fails to identify the similar pronunciations of the Chinese phrase "瞧到马岱" (pronounced as chuh dow ma die, meaning "saw Ma Dai") and the Japanese phrase "ちょっと待って" (chotto matte, meaning "wait a moment"). Such cases require LLMs to connect pronunciations across languages, which may be rare in the LLMs' pre-training corpus and poses significant challenges to current LLMs.
381
+
382
+ Overcritical. Figure 14 shows that when the model overly criticizes an explanation, it focuses excessively on minor details, neglecting the major point and ultimately misjudging the explanation. The joke relies on common sense about life, death, and testing rather than cultural knowledge. Under CoT prompting, the model recognizes that the explanation identifies the main punchlines but overly criticizes the lack of cultural context.
383
+
384
+ Excessive Sensitivity. In addition to errors across all the aforementioned categories, $\mathrm{ERNIE}_{4}$-turbo also demonstrates excessive sensitivity to certain examples. Specifically, when content includes language related to hate speech but used in a non-harmful context, $\mathrm{ERNIE}_{4}$-turbo refuses to provide an explanation.
385
+
386
+ Figure 13: Cross-lingual error example.
387
+
388
+ <table><tr><td>Example</td><td>(zh)网上传疯了的超准测试题!20道题轻松判断出你有没有死!(en)The viral online quiz! 20 questions to easily determine if you're alive or dead!</td></tr><tr><td>Joke Explanation</td><td>(zh)这个笑话的幽默之处在于它用夸张的方式提出了一个显而易见的问题——判断自己是否还活着,这本身就不需要测试。其次,标题中“超准测试题”和“轻松判断”这样的描述让人觉得荒谬可笑,因为活着与否是最基本的感知,不需要任何测试。</td></tr><tr><td>Model's Answer DP</td><td>(en)The humor in this joke lies in its exaggerated premise of asking an obvious question—determining whether one is still alive, which inherently doesn't require any test. Additionally, the descriptions in the title, such as "super accurate quiz" and "easily determine," come across as absurdly funny because being alive is the most basic perception, requiring no testing at all.</td></tr><tr><td rowspan="3">Model's Answer CoT</td><td>(zh)完全解释(en)Fully Explained</td></tr><tr><td>(zh)...最终判断:部分/没有解释理由:虽然解释准确地指出了笑话的主要笑点(夸张性和标题的矛盾),但它没有深入探讨这些元素为什么会产生幽默效果,也没有讨论潜在的文化或认知背景...</td></tr><tr><td>(en)...Final judgment: Partially/Not explained Reason: While the explanation accurately identifies the main punchlines of the joke (exaggeration and the contradiction in the title), it does not delve into why these elements create a humorous effect or discuss the potential cultural or cognitive context behind it...</td></tr></table>
389
+
390
+ Figure 14: Overcritical example by Nemotron$_{70\mathrm{B}}$. The Nemotron$_{70\mathrm{B}}$ model selects the correct answer under DP prompting, but selects the incorrect answer due to being overly critical under CoT prompting.
391
+
392
+ During our evaluation, we observe this excessive sensitivity in $\mathrm{ERNIE}_{4}$-turbo's responses to humor related to medical ethics and political discussions. This suggests that correctly understanding the context and the language toxicity remains an open challenge (Zhang et al., 2024a). Such issues are particularly critical for humor explanation, as misclassifying non-toxic context can cause the responses to deviate from the intended humor.
393
+
394
+ # F Prompts for DP and CoT in Chumor
395
+
396
+ This section outlines the prompts used in Chumor to evaluate whether an explanation fully explains a joke. Two prompting strategies are adopted: Direct Prompting (DP) and Chain of Thought (CoT). Below are the details of each approach:
397
+
398
+ # Direct Prompting (DP)
399
+
400
+ 你将看到一个笑话以及对这个笑话的解释。请判断这个解释是否完全解释了笑话。根据判断,选择“完全解释”或“部分/没有解释”,不需要解释为什么对或者不对。
401
+
402
+ 笑话:[joke]
403
+
404
+ 笑话解释:[explanation]
405
+
406
+ # Translation
407
+
408
+ You will see a joke and an explanation of the joke. Please determine whether this explanation fully explains the joke. Based on your judgment, choose either "fully explain" or "partially/does not explain." You do not need to explain why it is correct or incorrect.
409
+
410
+ Joke: [joke]
411
+
412
+ Explanation: [explanation]
413
+
414
+ The DP prompt is designed to encourage concise decision-making. It directly asks the model to evaluate the completeness of the explanation without requiring reasoning or justification.
415
+
416
+ # Chain of Thought (CoT)
417
+
418
+ 你将看到一个笑话以及对这个笑话的解释。请逐步思考,写下过程并最终判断这个解释是否完全解释了笑话。根据判断,选择“完全解释”或“部分/没有解释”。
419
+
420
+ 笑话:[joke]
421
+
422
+ 笑话解释:[explanation]
423
+
424
+ # Translation
425
+
426
+ You will see a joke and an explanation of the joke. Please think step by step, write down your reasoning process, and finally determine whether this explanation fully explains the joke. Based on your judgment, choose either "fully explains" or "partially/does not explain."
427
+
428
+ Joke: [joke]
429
+
430
+ Explanation: [explanation]
431
+
432
+ The CoT prompt, in contrast, requires the model to reason step by step before reaching a conclusion. This approach aims to improve transparency by explicitly documenting the thought process behind the evaluation.
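+
+ As a rough illustration of how these two templates can be instantiated, the snippet below fills the DP or CoT prompt with a joke and a candidate explanation. The function name and the placeholder strings are ours; only the template wording comes from the prompts above.
+
+ ```python
+ DP_TEMPLATE = (
+     "你将看到一个笑话以及对这个笑话的解释。请判断这个解释是否完全解释了笑话。"
+     "根据判断,选择“完全解释”或“部分/没有解释”,不需要解释为什么对或者不对。\n\n"
+     "笑话:{joke}\n\n笑话解释:{explanation}"
+ )
+
+ COT_TEMPLATE = (
+     "你将看到一个笑话以及对这个笑话的解释。请逐步思考,写下过程并最终判断这个解释是否完全解释了笑话。"
+     "根据判断,选择“完全解释”或“部分/没有解释”。\n\n"
+     "笑话:{joke}\n\n笑话解释:{explanation}"
+ )
+
+ def build_prompt(joke: str, explanation: str, use_cot: bool = False) -> str:
+     """Fill the DP or CoT template with one joke/explanation pair."""
+     template = COT_TEMPLATE if use_cot else DP_TEMPLATE
+     return template.format(joke=joke, explanation=explanation)
+
+ # Example usage with placeholder strings.
+ print(build_prompt("<笑话文本>", "<对笑话的解释>", use_cot=True))
+ ```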
433
+
434
+ # G Joke Type Distribution in Chumor
435
+
436
+ We sampled 200 datapoints from Chumor to analyze the distribution of joke types, as shown in Figure 15. Note that a single joke may belong to
437
+
438
+ ![](images/326993549316df513bb9806d9d31515b487772d05a316df9b370565647db6178.jpg)
439
+ Figure 15: Distribution of Joke Types in 200 Sampled Datapoints.
440
+
441
+ multiple categories, as it can exhibit features of more than one joke type.
442
+
443
+ ![](images/6df1b3c527229283c9d7de199eea2d6285b3eca5b0cdc18e78ecc11bf3f068e7.jpg)
444
+ # H Detailed Results of Experiments
445
+ Figure 16: The Matthew's correlation coefficient of different models' test results in DP and CoT.
446
+
447
+ For evaluation, we input each prompt into the model and collect its responses, comparing them to the labels in Chumor. A model's response is considered correct if it matches the reference label. If the model provides an incorrect answer or doesn't generate a response at all (due to safety protocols or
448
+
449
+ filtering sensitive terms), it is marked as incorrect. This scenario is rare, occurring only 21 times in our experiments, and exclusively with GLM-4plus.
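+
+ The aggregate metrics reported in Table 4 (MCC, ACC, FPR, and FNR) can be computed directly from these binary decisions. Below is a small sketch using scikit-learn; the assumption that "fully explained" is encoded as the positive class (1) is ours and only serves the illustration.
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import accuracy_score, confusion_matrix, matthews_corrcoef
+
+ def evaluate(y_true, y_pred):
+     """Return MCC, ACC (%), FPR (%), and FNR (%) for binary labels (1 = positive)."""
+     tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
+     fpr = fp / (fp + tn) if (fp + tn) else 0.0
+     fnr = fn / (fn + tp) if (fn + tp) else 0.0
+     return {
+         "MCC": matthews_corrcoef(y_true, y_pred),
+         "ACC": 100 * accuracy_score(y_true, y_pred),
+         "FPR": 100 * fpr,
+         "FNR": 100 * fnr,
+     }
+
+ # Toy example: 1 = "fully explained", 0 = "partially/not explained".
+ labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
+ preds = np.array([1, 1, 1, 0, 0, 1, 1, 0])
+ print(evaluate(labels, preds))
+ ```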
450
+
451
+ We highlight that CoT prompting in most cases degrades the models' performance on Chumor. As shown in Figure 16, only $\mathrm{Athene}_{70\mathrm{B}}$ achieves a significant improvement. However, this is offset by its poorest performance under DP prompting among the models. GPT-4o shows a slight improvement, with its MCC score increasing from 0.19 to 0.20. All eight other models exhibit varying degrees of performance decline.
452
+
453
+ ![](images/6141e183d684833465b3a4c899603522d9953ad32c94768a7106eaf7f5d4f1c1.jpg)
454
+ Figure 17: CoT accuracy on different joke types $(\%)$ .
455
+
456
+ <table><tr><td rowspan="2">Model</td><td colspan="4">DP</td><td colspan="4">CoT</td></tr><tr><td>MCC</td><td>ACC (%)</td><td>FPR (%)</td><td>FNR (%)</td><td>MCC</td><td>ACC (%)</td><td>FPR (%)</td><td>FNR (%)</td></tr><tr><td>Yi34B</td><td>0.10</td><td>44.95</td><td>97.24</td><td>0.21</td><td>0.09</td><td>47.17</td><td>89.30</td><td>5.44</td></tr><tr><td>Nemotron70B</td><td>0.19</td><td>56.30</td><td>61.26</td><td>20.87</td><td>0.14</td><td>57.17</td><td>40.28</td><td>46.14</td></tr><tr><td>Athene70B</td><td>0.08</td><td>44.59</td><td>97.83</td><td>0.28</td><td>0.12</td><td>47.26</td><td>91.10</td><td>2.89</td></tr><tr><td>ERNIE4-turbo</td><td>0.29</td><td>60.29</td><td>59.83</td><td>13.57</td><td>0.11</td><td>45.16</td><td>96.93</td><td>0.14</td></tr><tr><td>QWen2.572B</td><td>0.19</td><td>48.46</td><td>90.67</td><td>0.69</td><td>0.17</td><td>49.45</td><td>86.91</td><td>3.31</td></tr><tr><td>Mistral123B</td><td>0.22</td><td>55.56</td><td>69.26</td><td>12.19</td><td>0.16</td><td>51.18</td><td>79.92</td><td>8.40</td></tr><tr><td>Gemini1.5-pro</td><td>0.24</td><td>54.00</td><td>77.42</td><td>5.17</td><td>0.19</td><td>60.32</td><td>33.81</td><td>47.31</td></tr><tr><td>GLM-4plus</td><td>0.24</td><td>55.56</td><td>72.28</td><td>8.26</td><td>0.14</td><td>58.13</td><td>32.96</td><td>53.44</td></tr><tr><td>GPT-4o</td><td>0.19</td><td>51.87</td><td>80.02</td><td>6.68</td><td>0.20</td><td>50.64</td><td>85.00</td><td>3.03</td></tr><tr><td>GPT-4turbo</td><td>0.20</td><td>52.32</td><td>79.28</td><td>6.61</td><td>0.17</td><td>51.27</td><td>80.87</td><td>6.96</td></tr></table>
457
+
458
+ Table 4: Performance metrics for explanation evaluation including Matthew's correlation coefficient (MCC), accuracy (ACC), false positive rate (FPR), and false negative rate (FNR).
459
+
460
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Source</td><td colspan="4">DP</td><td colspan="4">CoT</td></tr><tr><td>MCC</td><td>ACC(%)</td><td>FPR(%)</td><td>FNR(%)</td><td>MCC</td><td>ACC(%)</td><td>FPR(%)</td><td>FNR(%)</td></tr><tr><td rowspan="3">Athene70B</td><td>Overall</td><td>0.08</td><td>44.59</td><td>97.83</td><td>0.28</td><td>0.12</td><td>47.26</td><td>91.10</td><td>2.89</td></tr><tr><td>ERNIE Bot</td><td>0.12</td><td>52.38</td><td>97.15</td><td>0.00</td><td>0.15</td><td>54.24</td><td>91.13</td><td>2.13</td></tr><tr><td>GPT-4o</td><td>0.03</td><td>33.90</td><td>98.51</td><td>0.86</td><td>0.08</td><td>37.67</td><td>91.06</td><td>4.50</td></tr><tr><td rowspan="3">ERNIE-turbo</td><td>Overall</td><td>0.29</td><td>60.29</td><td>59.83</td><td>13.57</td><td>0.11</td><td>45.16</td><td>96.93</td><td>0.14</td></tr><tr><td>ERNIE Bot</td><td>0.23</td><td>58.64</td><td>78.14</td><td>5.99</td><td>0.16</td><td>53.47</td><td>94.83</td><td>0.10</td></tr><tr><td>GPT-4o</td><td>0.27</td><td>62.54</td><td>41.38</td><td>29.55</td><td>0.04</td><td>33.76</td><td>99.04</td><td>0.21</td></tr><tr><td rowspan="3">Gemini1.5-pro</td><td>Overall</td><td>0.24</td><td>54.00</td><td>77.42</td><td>5.17</td><td>0.19</td><td>60.32</td><td>33.81</td><td>47.31</td></tr><tr><td>ERNIE Bot</td><td>0.27</td><td>60.66</td><td>74.13</td><td>5.89</td><td>0.23</td><td>60.87</td><td>28.62</td><td>49.24</td></tr><tr><td>GPT-4o</td><td>0.21</td><td>44.85</td><td>80.74</td><td>3.64</td><td>0.17</td><td>59.56</td><td>39.04</td><td>43.25</td></tr><tr><td rowspan="3">GLM-4plus</td><td>Overall</td><td>0.24</td><td>55.56</td><td>72.28</td><td>8.26</td><td>0.14</td><td>58.13</td><td>32.96</td><td>53.44</td></tr><tr><td>ERNIE Bot</td><td>0.25</td><td>59.83</td><td>74.97</td><td>6.70</td><td>0.15</td><td>57.56</td><td>37.06</td><td>47.61</td></tr><tr><td>GPT-4o</td><td>0.21</td><td>49.68</td><td>69.57</td><td>11.56</td><td>0.06</td><td>58.92</td><td>28.83</td><td>65.74</td></tr><tr><td rowspan="3">GPT-4turbo</td><td>Overall</td><td>0.20</td><td>52.32</td><td>79.28</td><td>6.61</td><td>0.17</td><td>51.27</td><td>80.87</td><td>6.96</td></tr><tr><td>ERNIE Bot</td><td>0.20</td><td>57.25</td><td>80.99</td><td>5.99</td><td>0.22</td><td>58.75</td><td>76.14</td><td>7.72</td></tr><tr><td>GPT-4o</td><td>0.18</td><td>45.56</td><td>77.55</td><td>7.92</td><td>0.13</td><td>41.01</td><td>85.64</td><td>5.35</td></tr><tr><td rowspan="3">GPT-4o</td><td>Overall</td><td>0.19</td><td>51.87</td><td>80.02</td><td>6.68</td><td>0.20</td><td>50.64</td><td>85.00</td><td>3.03</td></tr><tr><td>ERNIE Bot</td><td>0.21</td><td>57.82</td><td>79.41</td><td>6.40</td><td>0.24</td><td>58.07</td><td>82.47</td><td>2.94</td></tr><tr><td>GPT-4o</td><td>0.16</td><td>43.71</td><td>80.64</td><td>7.28</td><td>0.15</td><td>40.44</td><td>87.55</td><td>3.21</td></tr><tr><td rowspan="3">Nemotron70B</td><td>Overall</td><td>0.19</td><td>56.30</td><td>61.26</td><td>20.87</td><td>0.14</td><td>57.17</td><td>40.28</td><td>46.14</td></tr><tr><td>ERNIE Bot</td><td>0.22</td><td>60.66</td><td>56.81</td><td>22.54</td><td>0.14</td><td>57.04</td><td>39.18</td><td>46.60</td></tr><tr><td>GPT-4o</td><td>0.18</td><td>50.32</td><td>65.74</td><td>17.34</td><td>0.13</td><td>57.36</td><td>41.38</td><td>45.18</td></tr><tr><td rowspan="3">Mistral123B</td><td>Overall</td><td>0.22</td><td>55.56</td><td>69.26</td><td>12.19</td><td>0.16</td><td>51.18</td><td>79.92</td><td>8.40</td></tr><tr><td>ERNIE 
Bot</td><td>0.25</td><td>61.13</td><td>65.15</td><td>13.60</td><td>0.18</td><td>57.04</td><td>79.73</td><td>7.61</td></tr><tr><td>GPT-4o</td><td>0.20</td><td>47.90</td><td>73.40</td><td>9.21</td><td>0.12</td><td>43.14</td><td>80.11</td><td>10.06</td></tr><tr><td rowspan="3">Qwen2.572B</td><td>Overall</td><td>0.19</td><td>48.46</td><td>90.67</td><td>0.69</td><td>0.17</td><td>49.45</td><td>86.91</td><td>3.31</td></tr><tr><td>ERNIE Bot</td><td>0.19</td><td>54.45</td><td>92.61</td><td>0.30</td><td>0.18</td><td>55.54</td><td>88.07</td><td>2.54</td></tr><tr><td>GPT-4o</td><td>0.17</td><td>40.23</td><td>88.72</td><td>1.50</td><td>0.14</td><td>41.08</td><td>85.74</td><td>4.93</td></tr><tr><td rowspan="3">Yi34B</td><td>Overall</td><td>0.10</td><td>44.95</td><td>97.24</td><td>0.21</td><td>0.09</td><td>47.17</td><td>89.30</td><td>5.44</td></tr><tr><td>ERNIE Bot</td><td>0.15</td><td>53.42</td><td>94.72</td><td>0.30</td><td>0.11</td><td>53.99</td><td>88.38</td><td>5.28</td></tr><tr><td>GPT-4o</td><td>0.03</td><td>33.33</td><td>99.79</td><td>0.00</td><td>0.07</td><td>37.81</td><td>90.21</td><td>5.78</td></tr></table>
461
+
462
+ Table 5: Detailed performance metrics for explanation evaluation, broken down by explanation source, including Matthew's correlation coefficient (MCC), accuracy (ACC), false positive rate (FPR), and false negative rate (FNR).
463
+
464
+ <table><tr><td>Model</td><td>Prompting</td><td>Cross-lingual</td><td>Glyph-based</td><td>Homophonemic</td><td>Pun-based</td><td>Situational</td><td>Cultural</td></tr><tr><td rowspan="2">Athene70B</td><td>DP</td><td>0.00</td><td>0.00</td><td>30.00</td><td>44.00</td><td>61.00</td><td>42.00</td></tr><tr><td>CoT</td><td>0.00</td><td>25.00</td><td>30.00</td><td>44.00</td><td>59.00</td><td>43.00</td></tr><tr><td rowspan="2">ERNIE4-turbo</td><td>DP</td><td>50.00</td><td>50.00</td><td>60.00</td><td>61.00</td><td>70.00</td><td>63.00</td></tr><tr><td>CoT</td><td>0.00</td><td>0.00</td><td>30.00</td><td>43.00</td><td>59.00</td><td>42.00</td></tr><tr><td rowspan="2">Gemini1.5-pro</td><td>DP</td><td>50.00</td><td>50.00</td><td>55.00</td><td>63.00</td><td>67.00</td><td>61.00</td></tr><tr><td>CoT</td><td>50.00</td><td>75.00</td><td>70.00</td><td>61.00</td><td>66.00</td><td>69.00</td></tr><tr><td rowspan="2">GLM-4plus</td><td>DP</td><td>50.00</td><td>25.00</td><td>65.00</td><td>60.00</td><td>69.00</td><td>60.00</td></tr><tr><td>CoT</td><td>50.00</td><td>100.00</td><td>75.00</td><td>64.00</td><td>60.00</td><td>61.00</td></tr><tr><td rowspan="2">GPT-4turbo</td><td>DP</td><td>50.00</td><td>25.00</td><td>40.00</td><td>57.00</td><td>67.00</td><td>55.00</td></tr><tr><td>CoT</td><td>50.00</td><td>25.00</td><td>45.00</td><td>54.00</td><td>62.00</td><td>56.00</td></tr><tr><td rowspan="2">GPT-4o</td><td>DP</td><td>0.00</td><td>50.00</td><td>35.00</td><td>49.00</td><td>63.00</td><td>54.00</td></tr><tr><td>CoT</td><td>0.00</td><td>50.00</td><td>35.00</td><td>50.00</td><td>62.00</td><td>53.00</td></tr><tr><td rowspan="2">Nemotron70B</td><td>DP</td><td>50.00</td><td>50.00</td><td>65.00</td><td>63.00</td><td>62.00</td><td>60.00</td></tr><tr><td>CoT</td><td>100.00</td><td>100.00</td><td>65.00</td><td>66.00</td><td>60.00</td><td>72.00</td></tr><tr><td rowspan="2">Mistral123B</td><td>DP</td><td>50.00</td><td>50.00</td><td>55.00</td><td>61.00</td><td>65.00</td><td>61.00</td></tr><tr><td>CoT</td><td>50.00</td><td>0.00</td><td>40.00</td><td>53.00</td><td>66.00</td><td>55.00</td></tr><tr><td rowspan="2">Qwen2.572B</td><td>DP</td><td>0.00</td><td>50.00</td><td>35.00</td><td>47.00</td><td>64.00</td><td>51.00</td></tr><tr><td>CoT</td><td>0.00</td><td>50.00</td><td>40.00</td><td>53.00</td><td>63.00</td><td>53.00</td></tr><tr><td rowspan="2">Yi34B</td><td>DP</td><td>0.00</td><td>0.00</td><td>30.00</td><td>43.00</td><td>60.00</td><td>44.00</td></tr><tr><td>CoT</td><td>0.00</td><td>25.00</td><td>40.00</td><td>49.00</td><td>63.00</td><td>52.00</td></tr></table>
465
+
466
+ Table 6: Explanation evaluation accuracy (%) by joke type.
2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:79cbd84c0370412138750dee0a2c00e06d928e1aeee9b375d3354567859c8110
3
+ size 1748890
2025/Chumor 2.0_ Towards Better Benchmarking Chinese Humor Understanding from (Ruo Zhi Ba)/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/f6a4ee7e-a5b2-4a68-bf9b-6d717be3f8fe_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c2c26f35bcd0fc26f1b226fe2e54c436ec0a5ab5f4853f731cb542f4c74ff1b
3
+ size 772601
2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c5e11ad3fb4478319bacd134121d2ee181d8121cdc09be8386e3a777b1ad30d7
3
+ size 2052010
2025/CipherBank_ Exploring the Boundary of LLM Reasoning Capabilities through Cryptography Challenge/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/81aca763-e861-40de-ad3f-640af6cf3d30_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/81aca763-e861-40de-ad3f-640af6cf3d30_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/81aca763-e861-40de-ad3f-640af6cf3d30_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6a196eb6bb662138c7cea0c8cb70ee8e33d451cfcd962746ef4cd95da7bfa062
3
+ size 666158
2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/full.md ADDED
@@ -0,0 +1,421 @@
 
 
 
 
1
+ # CitaLaw: Enhancing LLM with Citations in Legal Domain
2
+
3
+ Kepu Zhang $^{1}$ , Weijie Yu $^{2*}$ , Sunhao Dai $^{1}$ , Jun Xu $^{1}$
4
+
5
+ <sup>1</sup>Gaoling School of Artificial Intelligence, Renmin University of China
6
+
7
+ $^{2}$ University of International Business and Economics
8
+
9
+ kepuzhang@ruc.edu.cn, yu@uibep.edu.cn
10
+
11
+ # Abstract
12
+
13
+ In this paper, we propose CitaLaw, the first benchmark designed to evaluate LLMs' ability to produce legally sound responses with appropriate citations. CitaLaw features a diverse set of legal questions for both laypersons and practitioners, paired with a comprehensive corpus of law articles and precedent cases as a reference pool. This framework enables LLM-based systems to retrieve supporting citations from the reference corpus and align these citations with the corresponding sentences in their responses. Moreover, we introduce syllogism-inspired evaluation methods to assess the legal alignment between retrieved references and LLM-generated responses, as well as their consistency with user questions. Extensive experiments on 2 open-domain and 7 legal-specific LLMs demonstrate that integrating legal references substantially enhances response quality. Furthermore, our proposed syllogism-based evaluation method exhibits strong agreement with human judgments.
14
+
15
+ # 1 Introduction
16
+
17
+ Generating responses supported by citations, such as relevant law articles and precedent cases, is essential for ensuring the trustworthiness of large language models (LLMs) in legal tasks. For laypersons seeking legal advice (Fei et al., 2023), LLM-generated responses grounded in citations provide verifiable information, fostering trust in the system. Conversely, for legal practitioners such as lawyers and judges, citations serve as supportive evidence that aids in analyzing complex cases, validating legal arguments, and ensuring decisions align with established legal principles (Li et al., 2024; Zhong et al., 2020; Abdallah et al., 2023).
18
+
19
+ Recently, a growing body of benchmark research (Gao et al., 2023a; Li et al., 2023) has focused on enabling LLMs to provide citations for the
20
+
21
+ ![](images/8b18c275924d2d7c798f99f411d1b1dd24b37d58177b2cd07259e65b86050300.jpg)
22
+ Figure 1: The framework of our CitaLaw.
23
+
24
+ statements they generate. For instance, ALCE (Gao et al., 2023b) introduces a benchmark designed to evaluate the ability of LLMs to generate citation-supported outputs, aiming to improve factual accuracy. WebCiteS (Deng et al., 2024) provides a curated database of manually annotated summaries and citations to enhance performance in text summarization and citation generation.
25
+
26
+ While these studies have made notable progress in general domains, they face significant challenges when applied to the legal domain. First, laypersons and legal practitioners interact with LLMs differently and have distinct expectations for citations. Laypersons typically seek legal advice and rely on citations to verify the accuracy of LLM responses, whereas legal practitioners pose more complex queries, using LLMs for legal reasoning, with citations serving as supportive evidence. Existing studies fail to address these differences, leading to unsatisfactory performance in real-world applications. Second, existing methods often fall short in providing the diverse references required in legal contexts, such as law articles and precedent cases. Law articles establish the foundational legal framework, while precedent cases offer concrete examples and interpretive guidance. These two types of references inherently align with the distinct characteristics of civil and common law systems. Third, traditional citation evaluation measures, such as ROUGE (Lin, 2004), rely on surface-level similarities
27
+
28
+ ![](images/b0903bfb6decf7c8b09ba1de2359ba276ce2f70217a00854a97e5ed3d8807b8a.jpg)
29
+ Figure 2: Examples from the two subsets of CitaLaw, with text in red, blue, and yellow representing the three dimensions of the syllogism: major premise, minor premise (circumstances, illegal acts), and conclusion (legal decisions), respectively. [A] and [C] denote citations to relevant law articles and precedent cases, respectively.
30
+
31
+ and are often insufficient to assess the alignment between references and LLM-generated responses. In the legal domain, effective evaluation requires a deeper understanding of logical and semantic relationships.
32
+
33
+ To overcome the above challenges, we propose CitaLaw, the first benchmark tailored to evaluate LLMs' capabilities in generating legally grounded responses supported by accurate and context-aware citations. As shown in Figure 1, CitaLaw incorporates four distinct legal-specific features:
34
+
35
+ (1) CitaLaw has two subsets tailored for laypersons and practitioners, with examples in Figure 2. Laypersons typically ask shorter, conversational questions, while practitioners often pose specialized, detailed questions.
36
+ (2) CitaLaw includes a retrieval corpus comprising two commonly used references: law articles, which provide clear and concise guidelines for addressing user questions, and precedent cases, which offer legal reasoning and support for judicial decisions. Recognizing the distinct needs of laypersons and practitioners, we provide only law articles for laypersons to ensure clarity, while practitioners have access to both law articles and precedent cases to support more complex legal reasoning.
37
+ (3) In addition to traditional global-level metrics such as MAUVE (Pillutla et al., 2021), we propose a syllogism-based evaluation method to assess both the response correctness and the citation quality. This method provides a more granular evaluation by focusing on three key dimensions: circumstances, illegal acts, and legal decisions.
38
+
39
+ (4) We consider two types of response generation methods. The first type, Citation-Guided Generation (CGG), involves generating responses by incorporating retrieved references during generation. The second type, Answer Refinement Generation (ARG), refines the LLMs' initial response (CloseBook) by retrieving and incorporating reference information. This category includes ARG-Q, which retrieves citations using only the user query, and ARG-QA, which retrieves citations using both the user query and the LLM's initial response.
40
+
41
+ Extensive experiments on two open-domain and seven legal-specific LLMs reveal the following key insights: 1) Incorporating legal references into the LLM significantly improves the quality of responses; 2) Including references as part of the LLM's input consistently outperforms answer-refinement methods; 3) Leveraging references to refine the LLM's responses yields better alignment of responses and references. 4) For fine-tuning LLMs in legal scenarios, incorporating law articles, syllogistic reasoning, and full-scale fine-tuning achieves promising performance. 5) Open-domain LLMs surprisingly outperform legal-specific LLMs in certain scenarios; 6) Human evaluations show a strong correlation with our syllogism-based methods.
42
+
43
+ In summary, our contributions are as follows:
44
+
45
+ - To the best of our knowledge, CitaLaw is the first benchmark designed to evaluate the capability of LLMs to generate legally grounded responses with accurate and context-aware citations. CitaLaw includes questions tailored to both laypersons and practitioners, paired
46
+
47
+ with a citation corpus comprising law articles and precedent cases.
48
+
49
+ - We propose a two-level evaluation framework that combines global-level metrics with a syllogism-based reasoning approach. Additionally, we explore two mainstream methods for legal response generation: citation-guided and answer refinement.
50
+ - Through extensive experiments on two open-domain and seven legal-specific LLMs, we demonstrate the effectiveness of integrating legal references into response generation and validate our syllogism-based evaluation method. Additionally, we provide actionable insights for the practical deployment of LLMs in legal scenarios.
51
+
52
+ # 2 Related Work
53
+
54
+ LLMs for Legal Tasks. A large body of work has explored applying LLMs to legal tasks (Savelka et al., 2023; Wu et al., 2023b; Yu et al., 2022a; Blair-Stanek et al., 2023). Building LLMs tailored for legal scenarios is a popular direction (Yue et al., 2023; Wu et al., 2023a; He et al., 2023). There are also some benchmarks that explore the capabilities of LLMs in legal tasks. LawBench (Fei et al., 2023) evaluates LLMs' legal knowledge across three cognitive aspects. LAiW (Dai et al., 2023) assesses LLMs' legal reasoning abilities based on legal practice logic. LexEval (Li et al., 2024) evaluates LLMs' legal capabilities based on a new legal cognitive ability classification system. However, none of them have considered enhancing the trustworthiness of LLMs in legal scenarios by generating outputs with citations.
55
+
56
+ Citation in LLMs. Attribution (Li et al., 2023) in LLMs refers to providing supporting evidence for the answers generated by the model, presented in the form of citations. ALCE (Gao et al., 2023b) is an automated benchmark for evaluating LLMs' ability to generate outputs with citations, aimed at improving the factual accuracy of the generated responses. WebCiteS (Deng et al., 2024) provides a database containing 7,000 manually annotated summaries and citations to enhance LLMs' capabilities in summarization and citation. RARR (Gao et al., 2023a) enhances LLM outputs by automatically adding citations and modifying the responses. ExpertQA (Malaviya et al., 2024) verifies and modifies citations through expert review to ensure reliability.
57
+
58
+ In contrast to the above works, CitaLaw focuses specifically on citation in legal scenarios.
59
+
60
+ # 3 Task Setup and Dataset Construction
61
+
62
+ Suppose we have a legal corpus $D$, which consists of either a collection of law articles $(D_{l})$ or precedent cases $(D_{c})$. Given a user question $x$ posed by either a layperson or a practitioner, the LLM-based system is tasked with retrieving supportive citations from $D$ and generating a legally grounded response $y$. The response $y$ comprises a list of $n$ sentences, i.e., $y = [s_1,\dots ,s_n]$, where each sentence $s_i$ refers to at most one corresponding citation. As illustrated in Figure 2, the system is further required to attach each citation to its relevant sentence, with "[A]" and "[C]" denoting references to law articles and precedent cases, respectively.
63
+
64
+ To enable the evaluation of this task, we construct the specialized dataset (Table 1 shows the statistics) as follows:
65
+
66
+ To simulate the behavior of laypersons, we include questions that are more conversational, lack detailed case descriptions, and are relatively short in length. We use the consultation section from LawBench (Fei et al., 2023), which collects user queries from the Hualv website<sup>1</sup> and answers provided by lawyers or legal consulting firms.
67
+
68
+ To simulate the behavior of legal practitioners, we include questions that are more professional, often accompanied by detailed case descriptions, and generally longer. For this purpose, we use the open-ended question section from LexEval (Li et al., 2024), which consists of subjective questions from the National Uniform Legal Profession Qualification Examination. These questions are particularly challenging for LLMs, requiring them to understand the case fully and apply legal knowledge accurately to generate answers.
69
+
70
+ In terms of the corpus, we construct a comprehensive corpus from multiple sources, including law articles and precedent cases. Specifically, for law articles, we collect approximately 50,000 documents from LexiLaw $^2$ , covering areas such as Civil Law, Criminal Law, and judicial interpretations. For precedent cases, we include both criminal and civil cases. Criminal cases are sourced from the LeCaRD legal retrieval dataset (Ma et al., 2021b), ELAM (Yu et al., 2022b), and civil cases from the CAIL legal summary
71
+
72
+ <table><tr><td>Dataset</td><td>#Q</td><td>LenQ</td><td>LenA</td><td>Q Type</td></tr><tr><td>Layperson</td><td>500</td><td>57.62</td><td>107.40</td><td>Question</td></tr><tr><td>Practitioner</td><td>500</td><td>618.96</td><td>193.46</td><td>Case + Question</td></tr></table>
73
+
74
+ Table 1: Dataset statistics. #Q indicates the number of questions, $\mathrm{Len}_Q$ and $\mathrm{Len}_A$ denote the average lengths of questions and gold answers, and Q Type refers to the question type.
75
+
76
+ dataset, LJP-MSJudge (Ma et al., 2021a), and the pre-training data of fuzi.mingcha (Wu et al., 2023a). As a supplement to precedent cases, we also incorporate question-and-answer pairs from fine-tuning datasets of legal LLMs as part of the precedent cases. These QA pairs are collected from DISC-LawLLM (Yue et al., 2023), LawGPT_zh (Liu et al., 2023), and HanFei (He et al., 2023). In total, the constructed corpus contains approximately 500,000 documents, ensuring sufficient coverage of both law articles and precedent cases to support diverse legal tasks.
77
+
78
+ # 4 Method
79
+
80
+ # 4.1 Response Generation
81
+
82
+ We consider two types of methods in this study.
83
+
84
+ Citation-Guided Generation (CGG) produces a response $y_{\mathrm{cgg}}$ given a user question $x$ by referring to retrieved relevant document(s) $D_R$:
85
+
86
+ $$
87
+ y_{\mathrm{cgg}} = f_{\mathrm{LLM}}\left(x, D_{R}, p_{1}\right), \tag{1}
88
+ $$
89
+
90
+ where $f_{\mathrm{LLM}}$ denotes an open-domain or a legal-specific LLM; $p_1$ is the direct generation prompt. All prompt settings are detailed in Appendix A.
91
+
92
+ Answer Refinement Generation (ARG) is a two-stage method that generates the final response $y_{\mathrm{arg}}$ by refining the LLM's initial response $y_{\mathrm{init}}$ through the retrieval and incorporation of reference information. This process can be formulated as:
93
+
94
+ $$
95
+ y_{\mathrm{init}} = f_{\mathrm{LLM}}(x, p_{2}), \tag{2}
96
+ $$
97
+
98
+ where $p_2$ is the prompt instructing the LLM to directly generate an initial response without reference information. We refer to this step as CloseBook. The initial response $y_{\mathrm{init}}$ is then refined as:
99
+
100
+ $$
101
+ y_{\mathrm{arg}} = f_{\mathrm{LLM}}\left(y_{\mathrm{init}}, D_{R}, p_{3}\right), \tag{3}
102
+ $$
103
+
104
+ where $p_3$ is the prompt guiding the LLM to refine $y_{\mathrm{init}}$ using the retrieved documents $D_R$.
105
+
106
+ Laypersons and practitioners interact with LLMs differently and have distinct expectations for citations. When $x$ is submitted by a layperson, the corresponding $D_{R}$ consists of relevant law articles. In
107
+
108
+ contrast, when $x$ is submitted by a practitioner, the corresponding $D_R$ includes both relevant law articles and precedent cases. The process for retrieving $D_R$ from $D$ is detailed in the next subsection.
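+
+ The two generation modes can be summarized by the following sketch, which wraps an abstract `llm(prompt)` callable and a `retrieve(query)` function. The prompt wording and helper names are placeholders and do not correspond to the exact prompts $p_1$, $p_2$, and $p_3$ in Appendix A.
+
+ ```python
+ from typing import Callable, List
+
+ def cgg(llm: Callable[[str], str], question: str, references: List[str]) -> str:
+     """Citation-Guided Generation (Eq. 1): references are part of the input prompt."""
+     refs = "\n".join(f"[{i + 1}] {r}" for i, r in enumerate(references))
+     prompt = f"References:\n{refs}\n\nQuestion: {question}\nAnswer with citations:"
+     return llm(prompt)
+
+ def arg(llm: Callable[[str], str], question: str,
+         retrieve: Callable[[str], List[str]], use_init_answer: bool = True) -> str:
+     """Answer Refinement Generation (Eqs. 2-3): refine a CloseBook answer with references.
+
+     use_init_answer=True corresponds to ARG-QA (retrieve with question + initial answer);
+     False corresponds to ARG-Q (retrieve with the question only).
+     """
+     y_init = llm(f"Question: {question}\nAnswer:")          # CloseBook step (Eq. 2)
+     query = f"{question} {y_init}" if use_init_answer else question
+     refs = "\n".join(f"[{i + 1}] {r}" for i, r in enumerate(retrieve(query)))
+     prompt = (f"Initial answer: {y_init}\nReferences:\n{refs}\n\n"
+               "Refine the answer and attach citations:")
+     return llm(prompt)                                      # refinement step (Eq. 3)
+
+ # Usage with trivial stand-ins for the LLM and the retriever.
+ dummy_llm = lambda prompt: f"<response to: {prompt[:30]}...>"
+ dummy_retrieve = lambda q: ["law article ...", "precedent case ..."]
+ print(arg(dummy_llm, "房东不退押金,我可以起诉吗?", dummy_retrieve))
+ ```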
109
+
110
+ # 4.2 Citation Retrieval
111
+
112
+ We explore state-of-the-art open-domain dense retriever BGE (Xiao et al., 2023), along with two legal-specific dense retrievers, CriminalBERT (Zhong et al., 2019) and Civil-BERT (Zhong et al., 2019). We also investigate two types of retrieval queries: $x$ (the user question alone, ARG-Q) and $[x; y_{\mathrm{init}}]$ (the concatenation of the user query $x$ and the initial response $y_{\mathrm{init}}$ , where $[]$ denotes the concatenation operation, ARG-QA). The impact of different retrieval models on performance will be analyzed in the experiments.
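+
+ As an illustration of this retrieval step, the sketch below encodes the query and the corpus with a dense encoder from the sentence-transformers library and returns the top-$k$ documents by cosine similarity. The model name and the toy corpus are placeholders; any BGE-style or legal-specific encoder could be substituted.
+
+ ```python
+ import numpy as np
+ from sentence_transformers import SentenceTransformer
+
+ encoder = SentenceTransformer("BAAI/bge-large-zh-v1.5")  # illustrative dense encoder
+
+ def retrieve(query: str, corpus: list, top_k: int = 3) -> list:
+     """Return the top-k corpus documents by cosine similarity to the query."""
+     doc_emb = encoder.encode(corpus, normalize_embeddings=True)
+     q_emb = encoder.encode([query], normalize_embeddings=True)
+     scores = (q_emb @ doc_emb.T).ravel()   # cosine similarity via normalized dot product
+     top_idx = np.argsort(-scores)[:top_k]
+     return [corpus[i] for i in top_idx]
+
+ # ARG-Q uses the question alone as the query; ARG-QA concatenates question and initial answer.
+ corpus = ["<law article text>", "<precedent case text>", "<another law article>"]
+ print(retrieve("房东不退押金怎么办?", corpus, top_k=2))
+ ```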
113
+
114
+ # 4.3 Citation Attachment
115
+
116
+ Building on the retrieved citations, this subsection outlines the process of attaching these law articles or precedents to specific sentences in the LLM-generated responses. This process involves answering two key questions:
117
+
118
+ What kind of sentences can be associated with citations? We utilize co-occurring words and legal entity extraction to identify sentences that explicitly reference legal concepts, actions, or terms relevant to the retrieved citations. Specifically, we construct a pool of legal terminologies using THUOCL<sup>3</sup> and LaWGPT (Zhou et al., 2024). A sentence is considered eligible if it contains any of the terminologies from this pool. Additionally, we use SpaCy (Honnibal et al., 2020) to extract legal entities from each sentence. If a sentence includes legal entities, it is also deemed eligible for citation attachment.
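+
+ The eligibility test described above can be sketched as follows. The spaCy pipeline name and the tiny terminology set are stand-ins for the full THUOCL/LaWGPT-derived pool used in practice.
+
+ ```python
+ import spacy
+
+ nlp = spacy.load("zh_core_web_sm")                      # standard spaCy Chinese pipeline
+ LEGAL_TERMS = {"合同", "押金", "违约", "起诉", "赔偿"}   # illustrative subset of the term pool
+
+ def is_citable(sentence: str) -> bool:
+     """A sentence is eligible for citation attachment if it contains a legal
+     terminology match or any entity recognized by the NER pipeline."""
+     if any(term in sentence for term in LEGAL_TERMS):
+         return True
+     return len(nlp(sentence).ents) > 0
+ ```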
119
+
120
+ How are citations attached to the identified sentences? If a sentence is deemed eligible for citation attachment, we associate it with retrieved citations as follows. For the laypersons, the retrieved law article $c_{l} \in D_{l}$ is attached to the most relevant sentence $s_{k} \in y$ :
121
+
122
+ $$
123
+ C_{\mathrm{Lay}} = \left\{ (s_{k}, c_{l}) \mid s_{k} = \underset{s_{i} \in y}{\arg\max}\, \mathrm{sim}(s_{i}, c_{l}) \right\}, \tag{4}
124
+ $$
125
+
126
+ where $(s_k, c_l)$ represents attaching the reference $c_l$ to the sentence $s_k$, and $\mathrm{sim}(\cdot)$ is computed using sentence-BERT (Reimers, 2019). We set $|C_{\mathrm{Lay}}| = 1$ because, typically, a layperson's query pertains to only one specific legal article. For practitioners,
127
+
128
+ we attach the retrieved law article in the same way as for laypersons. Additionally, we associate the retrieved precedent cases $c_{c} \in D_{c}$ with each $s_{i} \in y$ , which is formulated as:
129
+
130
+ $$
131
+ \begin{aligned} C_{\mathrm{Pra}} = {} & \left\{ (s_{k}, c_{l}) \mid s_{k} = \underset{s_{i} \in y}{\arg\max}\, \mathrm{sim}(s_{i}, c_{l}) \right\} \\ & \cup \left\{ (s_{i}, c_{c}) \mid c_{c} = \underset{c_{j} \in D_{c}}{\arg\max}\, \mathrm{sim}(s_{i}, c_{j}) \right\}, \end{aligned} \tag{5}
132
+ $$
133
+
134
+ where $|D_c| = 3$ , meaning each response $y$ can be associated with up to three precedents<sup>4</sup>.
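+
+ The attachment rules in Eqs. (4) and (5) can be sketched as follows, using a sentence-transformers encoder as a stand-in for sentence-BERT; the model name is illustrative.
+
+ ```python
+ import numpy as np
+ from sentence_transformers import SentenceTransformer
+
+ encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # stand-in encoder
+
+ def attach_law_article(sentences: list, law_article: str) -> int:
+     """Eq. (4): attach the retrieved law article to the single most similar sentence."""
+     s_emb = encoder.encode(sentences, normalize_embeddings=True)
+     a_emb = encoder.encode([law_article], normalize_embeddings=True)
+     return int(np.argmax(s_emb @ a_emb.T))   # index of the sentence receiving the citation
+
+ def attach_precedents(sentences: list, precedents: list) -> dict:
+     """Second term of Eq. (5): pair each sentence with its most similar retrieved precedent."""
+     s_emb = encoder.encode(sentences, normalize_embeddings=True)
+     c_emb = encoder.encode(precedents, normalize_embeddings=True)
+     sims = s_emb @ c_emb.T
+     return {i: int(np.argmax(sims[i])) for i in range(len(sentences))}
+ ```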
135
+
136
+ # 5 Evaluation
137
+
138
+ CitaLaw provides a comprehensive evaluation framework incorporating metrics for fluency, correctness, and citation quality. This framework is divided into two levels of analysis: global level and the proposed syllogism level.
139
+
140
+ Syllogism, a foundational framework in legal reasoning, comprises three key components: the major premise, the minor premise, and the conclusion. In our legal context, these correspond to the relevant law article or precedent case (major premise), the factual circumstances and actions of a specific case (minor premise), and the resulting legal decision (conclusion). By integrating this syllogistic framework, CitaLaw goes beyond surface-level correctness to evaluate the logical coherence and alignment of LLM-generated responses with established legal principles.
141
+
142
+ # 5.1 Fluency (Style Consistency)
143
+
144
+ To ensure the LLM-generated responses align with the user's requirements, the system must adapt its style based on the user's background. For laypersons, responses should avoid excessive technical jargon to ensure accessibility and comprehension. Conversely, responses for legal practitioners should adopt a formal and professional tone to maintain credibility and utility. To achieve this aim, we concatenate the user query and the LLM-generated response and apply MAUVE (Pillutla et al., 2021) to assess their style consistency.
145
+
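+ A hedged sketch of this style-consistency check is given below. The exact pairing of texts passed to MAUVE (here, query plus labeled response versus query plus generated response) and the use of the Qwen2-1.5B feature model from Appendix C are assumptions about the configuration, not a definitive implementation.
+ 
+ ```python
+ # Sketch: MAUVE over concatenated (query, response) pairs as a style-consistency proxy.
+ import mauve
+ 
+ def style_consistency(queries, gold_responses, generated_responses):
+     p_text = [q + " " + r for q, r in zip(queries, gold_responses)]
+     q_text = [q + " " + r for q, r in zip(queries, generated_responses)]
+     out = mauve.compute_mauve(
+         p_text=p_text,
+         q_text=q_text,
+         featurize_model_name="Qwen/Qwen2-1.5B",  # assumed feature model (Appendix C)
+         max_text_length=512,
+         verbose=False,
+     )
+     return out.mauve
+ ```
+ 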
146
+ # 5.2 Correctness
147
+
148
+ At the global level, we use established metrics ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019). ROUGE measures word-level overlap between the generated and labeled responses, with scores reported for ROUGE-1, ROUGE-2, and
149
+
150
+ ROUGE-L. BERTScore captures semantic similarity between the generated and labeled responses, and we report the F-score (BERT-F) for evaluation. These metrics assess the overall correctness of LLM-generated responses.
151
+
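+ As an illustration, the global-level scores can be computed with the `rouge` (v1.0.1) and `bert_score` packages roughly as below; the character-level whitespace tokenization for Chinese ROUGE is an assumption, since the exact tokenization is not specified here.
+ 
+ ```python
+ # Sketch of global-level correctness: ROUGE-1/2/L plus BERTScore F1.
+ from rouge import Rouge
+ from bert_score import score as bert_score
+ 
+ def global_correctness(generated: str, reference: str) -> dict:
+     # Insert spaces between characters so ROUGE's word-level overlap is defined for Chinese.
+     r = Rouge().get_scores(" ".join(generated), " ".join(reference))[0]
+     # lang="zh" selects bert-base-chinese, the model named in Appendix C.
+     _, _, f1 = bert_score([generated], [reference], lang="zh", verbose=False)
+     return {
+         "rouge-1": r["rouge-1"]["f"],
+         "rouge-2": r["rouge-2"]["f"],
+         "rouge-l": r["rouge-l"]["f"],
+         "bert-f": float(f1.mean()),
+     }
+ ```
+ 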
152
+ At the syllogism level, we leverage the Qwen2 (Yang et al., 2024) to extract key components, including the circumstances, illegal acts, and legal decisions. We use sentence-BERT (Reimers, 2019) to measure the alignment between the labeled responses and the generated outputs across these dimensions, resulting in $\mathrm{Correct}_{\mathrm{c}}$ , $\mathrm{Correct}_{\mathrm{a}}$ , and $\mathrm{Correct}_{\mathrm{d}}$ . This syllogism-level evaluation allows us to assess the logical coherence of the responses, ensuring that they align with the underlying legal reasoning principles.
153
+
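+ A simplified sketch of the syllogism-level correctness scores follows, assuming the three components have already been extracted from both the labeled and the generated responses (for example, with a Qwen2 prompt along the lines of the hypothetical EXTRACT_PROMPT below); only the sentence-BERT alignment step is shown.
+ 
+ ```python
+ # Sketch: component-wise similarity yielding Correct_c, Correct_a, Correct_d.
+ from sentence_transformers import SentenceTransformer, util
+ 
+ # Hypothetical extraction prompt that would be sent to Qwen2 beforehand.
+ EXTRACT_PROMPT = (
+     "Extract from the following answer: (1) the circumstances, "
+     "(2) the illegal acts, and (3) the legal decision.\nAnswer: {answer}"
+ )
+ 
+ model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
+ 
+ def syllogism_correctness(gold: dict, generated: dict) -> dict:
+     # Both dicts are expected to hold the three component keys after extraction.
+     scores = {}
+     for key in ("circumstances", "acts", "decision"):
+         e1 = model.encode(gold[key], convert_to_tensor=True)
+         e2 = model.encode(generated[key], convert_to_tensor=True)
+         scores[key] = float(util.cos_sim(e1, e2))
+     return scores
+ ```
+ 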
154
+ # 5.3 Citation Quality
155
+
156
+ As previously discussed, we assume that a question submitted by laypersons typically corresponds to a specific law article. Therefore, at the global level, we evaluate the citation quality of the retrieved law article (premise) by measuring its entailment with the associated sentence in the LLM's response (hypothesis). Specifically, we use an NLI model to compute $\text{Cita}_{\text{Law}}$ , which quantifies the degree to which the law article entails the attached sentence. This metric reflects how effectively the response aligns with the cited law article. We employ DISC-LawLLM (Yue et al., 2023) as the NLI model due to its strong agreement with human evaluations (as discussed in Sec. 6.3) and its superior performance compared to other NLI models (as detailed in Sec. 6.5).
157
+
158
+ At the syllogism level, we evaluate the quality of precedent case citations by examining three key components: circumstances, illegal acts, and legal decisions. After extracting these elements from both the retrieved cases and the associated sentence in the LLM's response, we utilize DISC-LawLLM to assess the entailment for each component. This evaluation yields three distinct scores: $\text{Cita}_{\text{c}}$ for circumstances, $\text{Cita}_{\text{a}}$ for illegal acts, and $\text{Cita}_{\text{d}}$ for legal decisions, providing a more detailed and nuanced assessment of citation quality within the syllogism framework.
159
+
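+ One way to obtain an entailment probability from a legal LLM is sketched below: the premise (citation) and hypothesis (attached sentence) are placed in a prompt, and the probability mass assigned to a "yes" answer is read from the next-token distribution. The prompt wording, the yes/no readout, and the Hugging Face model id are illustrative assumptions rather than the exact procedure.
+ 
+ ```python
+ # Sketch: scoring entailment with a causal legal LLM via yes/no next-token probabilities.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ MODEL_ID = "ShengbinYue/DISC-LawLLM"   # assumed Hugging Face id
+ tok = AutoTokenizer.from_pretrained(MODEL_ID)
+ llm = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16, device_map="auto")
+ 
+ def entailment_prob(premise: str, hypothesis: str) -> float:
+     prompt = (f"Premise: {premise}\nHypothesis: {hypothesis}\n"
+               "Does the premise entail the hypothesis? Answer yes or no: ")
+     ids = tok(prompt, return_tensors="pt").to(llm.device)
+     with torch.no_grad():
+         logits = llm(**ids).logits[0, -1]          # next-token distribution
+     probs = torch.softmax(logits.float(), dim=-1)
+     yes = tok.encode("yes", add_special_tokens=False)[0]
+     no = tok.encode("no", add_special_tokens=False)[0]
+     return float(probs[yes] / (probs[yes] + probs[no]))
+ ```
+ 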
160
+ # 6 Experiments
161
+
162
+ We conduct extensive experiments on our CitaLaw using the proposed two-level evaluation methods.
163
+
164
+ <table><tr><td colspan="2">Metric</td><td>Fluency</td><td colspan="8">Correctness</td><td>Citation</td><td>All</td></tr><tr><td>Category</td><td>Model</td><td>Mauve</td><td>Rouge-1</td><td>Rouge-2</td><td>Rouge-L</td><td>BERT-F</td><td>Correcte</td><td>Correcta</td><td>Correctd</td><td>CitaLaw</td><td>Avg</td><td></td></tr><tr><td rowspan="4">Llama3 (Llam3-8B-Instruct)</td><td>CloseBook</td><td>22.63</td><td>16.47</td><td>1.95</td><td>13.34</td><td>58.46</td><td>73.05</td><td>68.24</td><td>66.87</td><td>67.38</td><td>43.15</td><td></td></tr><tr><td>CGG</td><td>61.01</td><td>23.97</td><td>6.05</td><td>17.91</td><td>65.94</td><td>67.29</td><td>77.31</td><td>74.95</td><td>86.70</td><td>53.46</td><td></td></tr><tr><td>ARG-Q</td><td>61.27</td><td>23.17</td><td>5.65</td><td>17.83</td><td>64.23</td><td>69.04</td><td>75.45</td><td>74.47</td><td>79.10</td><td>52.24</td><td></td></tr><tr><td>ARG-QA</td><td>51.83</td><td>23.73</td><td>6.96</td><td>18.53</td><td>64.84</td><td>71.37</td><td>74.81</td><td>74.66</td><td>80.80</td><td>51.95</td><td></td></tr><tr><td rowspan="4">Qwen2 (Qwen2-7B-Instruct)</td><td>CloseBook</td><td>21.04</td><td>15.29</td><td>2.27</td><td>11.31</td><td>58.39</td><td>70.89</td><td>71.71</td><td>69.93</td><td>72.35</td><td>43.69</td><td></td></tr><tr><td>CGG</td><td>75.10</td><td>22.26</td><td>4.77</td><td>15.41</td><td>65.28</td><td>67.50</td><td>78.62</td><td>77.82</td><td>77.59</td><td>53.82</td><td></td></tr><tr><td>ARG-Q</td><td>66.55</td><td>20.86</td><td>4.50</td><td>15.42</td><td>64.59</td><td>66.96</td><td>77.82</td><td>75.66</td><td>81.48</td><td>52.65</td><td></td></tr><tr><td>ARG-QA</td><td>66.80</td><td>21.73</td><td>4.78</td><td>16.34</td><td>64.85</td><td>69.31</td><td>76.35</td><td>75.05</td><td>82.83</td><td>53.11</td><td></td></tr><tr><td rowspan="7">Legal LLM (CGG)</td><td>DISC-LawLLM</td><td>72.70</td><td>22.46</td><td>4.14</td><td>15.48</td><td>65.06</td><td>65.21</td><td>78.55</td><td>76.17</td><td>83.46</td><td>53.69</td><td></td></tr><tr><td>fuzi.mingcha</td><td>56.58</td><td>24.54</td><td>5.70</td><td>17.48</td><td>65.86</td><td>63.28</td><td>79.56</td><td>77.94</td><td>81.64</td><td>52.51</td><td></td></tr><tr><td>LexiLaw</td><td>71.89</td><td>24.96</td><td>6.25</td><td>18.91</td><td>65.68</td><td>68.89</td><td>78.12</td><td>76.72</td><td>82.42</td><td>54.87</td><td></td></tr><tr><td>Tailing</td><td>13.95</td><td>15.93</td><td>4.13</td><td>12.89</td><td>59.47</td><td>72.00</td><td>69.11</td><td>68.38</td><td>82.67</td><td>44.28</td><td></td></tr><tr><td>zhihai</td><td>37.50</td><td>20.98</td><td>4.59</td><td>13.69</td><td>64.54</td><td>67.75</td><td>77.68</td><td>76.99</td><td>77.16</td><td>48.99</td><td></td></tr><tr><td>LawGPT_zh</td><td>51.60</td><td>23.33</td><td>5.28</td><td>16.17</td><td>65.14</td><td>63.72</td><td>79.43</td><td>77.52</td><td>86.18</td><td>52.04</td><td></td></tr><tr><td>Hanfei</td><td>51.12</td><td>23.95</td><td>5.19</td><td>18.76</td><td>65.12</td><td>70.83</td><td>75.01</td><td>74.21</td><td>76.97</td><td>51.24</td><td></td></tr></table>
165
+
166
+ Table 2: Performance comparisons on the Layperson dataset. The best performance is indicated in bold.
167
+
168
+ # 6.1 Experimental Settings
169
+
170
+ # 6.1.1 Evaluated Models
171
+
172
+ We selected two categories of LLMs for testing: The legal LLMs include (1) fuzi.mingcha (6B) (Wu et al., 2023a), (2) LexiLaw<sup>5</sup> (6B), (3) Tailing<sup>6</sup> (7B), (4) DISC-LawLLM (13B) (Yue et al., 2023), (5) zhihai (7B) (Wu et al.), (6) LawGPT_zh (6B) (Liu et al., 2023), (7) HanFei (7B) (He et al., 2023). The open-domain LLMs include Qwen2 (7B) (Yang et al., 2024) and Llama3 (8B) (AI@Meta, 2024). For these models, we tested all methods mentioned in Sec. 4, including: (1) CloseBook, (2) CGG, (3) ARG-Q and (4) ARG-QA. For the legal LLMs, we generate responses using CGG. Appendix B has the details.
173
+
174
+ # 6.1.2 Implementation Details
175
+
176
+ Our implementation is based on the Huggingface Transformers library (Wolf et al., 2020) with PyTorch. We use bge-base-zh-v1.5 (Xiao et al., 2023) as the retrieval model and conduct all experiments on Nvidia A6000 GPUs. Additional details are provided in Appendix C and https://github.com/ke-01/CitaLaw.
177
+
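+ For reference, a minimal dense-retrieval sketch with bge-base-zh-v1.5 is shown below. Loading the model through sentence-transformers is an implementation choice made for brevity (the FlagEmbedding toolkit would work as well), and the corpus is simply a list of law-article or case strings.
+ 
+ ```python
+ # Sketch: dense retrieval of law articles / precedent cases with BGE embeddings.
+ from sentence_transformers import SentenceTransformer, util
+ 
+ retriever = SentenceTransformer("BAAI/bge-base-zh-v1.5")
+ 
+ def retrieve(query: str, corpus: list, top_k: int = 3) -> list:
+     q_emb = retriever.encode(query, normalize_embeddings=True, convert_to_tensor=True)
+     d_emb = retriever.encode(corpus, normalize_embeddings=True, convert_to_tensor=True)
+     hits = util.semantic_search(q_emb, d_emb, top_k=top_k)[0]
+     return [corpus[h["corpus_id"]] for h in hits]
+ ```
+ 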
178
+ # 6.2 Main Results
179
+
180
+ The results on the Layperson and Practitioner datasets are presented in Table 2 and Table 3. We analyze the results from three perspectives:
181
+
182
+ # 6.2.1 Performance of Open-Domain LLM
183
+
184
+ Legal references improve the response quality. Compared to CloseBook, the overall performance in CGG, ARG-Q, and ARG-QA has improved. This indicates that incorporating references into the
185
+
186
+ LLM helps it better understand both the question and the required direction for the answer, thereby enhancing performance in terms of style consistency, correctness, and citation quality.
187
+
188
+ CGG achieves better response quality. We observe that CGG achieves optimal performance, especially in response correctness, suggesting that incorporating legal references into the LLM input is more effective than refining the LLM's response. This is because including legal knowledge as input allows the LLM to consider relevant context when generating replies, whereas refining the response might lead to excessive alterations.
189
+
190
+ ARG improves the alignment of responses and references. We can observe that ARG outperforms CGG in citation-related metrics overall. This is because CGG merely incorporates reference information as input, which may lead the model to overlook some reference details during the generation process. In contrast, ARG modifies the answer based on the references after generation, making it easier to ensure the completeness of citations.
191
+
192
+ Chinese data fine-tuning can bring benefits. Both the Layperson and Practitioner datasets are in Chinese. Qwen2, which is fine-tuned on more Chinese data, achieves better performance than Llama3, demonstrating the benefits of fine-tuning on Chinese data.
193
+
194
+ CloseBook tends to state circumstances. CloseBook performs better in terms of correctness regarding circumstances compared to the other dimensions. This suggests that when judicial knowledge references are not used, the LLM is more likely to restate the circumstances themselves, rather than provide an appropriate response to the illegal acts and the legal decision.
195
+
196
+ <table><tr><td colspan="2">Metric</td><td>Fluency</td><td colspan="8">Correctness</td><td colspan="4">Citation</td><td>All</td></tr><tr><td>Category</td><td>Model</td><td>Mauve</td><td>Rouge-1</td><td>Rouge-2</td><td>Rouge-L</td><td>BERT-F</td><td>\( Correct_c \)</td><td>\( Correct_a \)</td><td>\( Correct_d \)</td><td>\( Cita_{Law} \)</td><td>\( Cita_c \)</td><td>\( Cita_a \)</td><td>\( Cita_d \)</td><td>Avg</td><td></td></tr><tr><td rowspan="4">Llama3 (Llam3-8B-Instruct)</td><td>CloseBook</td><td>23.81</td><td>23.05</td><td>7.29</td><td>19.23</td><td>62.83</td><td>76.30</td><td>71.05</td><td>70.32</td><td>63.49</td><td>66.95</td><td>68.83</td><td>65.46</td><td>51.55</td><td></td></tr><tr><td>CGG</td><td>36.37</td><td>26.15</td><td>7.84</td><td>19.55</td><td>65.60</td><td>67.19</td><td>76.36</td><td>77.73</td><td>73.58</td><td>68.23</td><td>67.87</td><td>67.65</td><td>54.51</td><td></td></tr><tr><td>ARG-Q</td><td>42.65</td><td>20.39</td><td>5.07</td><td>15.75</td><td>62.82</td><td>70.49</td><td>73.67</td><td>72.00</td><td>68.61</td><td>69.48</td><td>70.51</td><td>68.34</td><td>53.31</td><td></td></tr><tr><td>ARG-QA</td><td>36.94</td><td>18.64</td><td>4.56</td><td>14.63</td><td>61.50</td><td>71.07</td><td>72.38</td><td>70.32</td><td>69.40</td><td>68.95</td><td>70.42</td><td>69.51</td><td>52.36</td><td></td></tr><tr><td rowspan="4">Qwen2 (Qwen2-7B-Instruct)</td><td>CloseBook</td><td>61.91</td><td>30.44</td><td>10.54</td><td>23.53</td><td>67.55</td><td>74.35</td><td>79.84</td><td>78.52</td><td>68.55</td><td>68.03</td><td>70.30</td><td>69.71</td><td>58.61</td><td></td></tr><tr><td>CGG</td><td>39.66</td><td>31.01</td><td>10.75</td><td>23.43</td><td>69.06</td><td>73.49</td><td>80.11</td><td>81.11</td><td>70.37</td><td>67.82</td><td>69.53</td><td>70.01</td><td>57.20</td><td></td></tr><tr><td>ARG-Q</td><td>41.02</td><td>20.57</td><td>5.14</td><td>15.62</td><td>63.31</td><td>67.84</td><td>74.71</td><td>73.94</td><td>73.01</td><td>68.96</td><td>73.20</td><td>73.64</td><td>54.25</td><td></td></tr><tr><td>ARG-QA</td><td>21.97</td><td>16.67</td><td>3.06</td><td>12.47</td><td>60.70</td><td>67.49</td><td>71.16</td><td>70.88</td><td>71.76</td><td>69.01</td><td>71.04</td><td>71.33</td><td>50.63</td><td></td></tr><tr><td rowspan="7">Legal LLM (CGG 
)</td><td>DISC-LawLLM</td><td>38.11</td><td>21.37</td><td>6.75</td><td>16.96</td><td>60.84</td><td>73.42</td><td>72.14</td><td>71.79</td><td>63.92</td><td>67.42</td><td>68.22</td><td>65.45</td><td>52.20</td><td></td></tr><tr><td>fuzi.mingcha</td><td>66.55</td><td>28.95</td><td>9.51</td><td>22.69</td><td>67.06</td><td>70.73</td><td>76.66</td><td>77.47</td><td>65.92</td><td>66.94</td><td>69.28</td><td>68.69</td><td>57.54</td><td></td></tr><tr><td>LexiLaw</td><td>57.74</td><td>29.01</td><td>8.93</td><td>23.83</td><td>65.63</td><td>70.36</td><td>76.67</td><td>75.97</td><td>65.28</td><td>66.93</td><td>68.89</td><td>68.03</td><td>56.44</td><td></td></tr><tr><td>Tailing</td><td>50.16</td><td>26.52</td><td>9.16</td><td>22.44</td><td>65.35</td><td>75.96</td><td>73.83</td><td>70.30</td><td>64.65</td><td>66.94</td><td>67.56</td><td>66.09</td><td>54.91</td><td></td></tr><tr><td>zhihai</td><td>26.29</td><td>21.38</td><td>6.00</td><td>15.53</td><td>64.47</td><td>65.59</td><td>76.38</td><td>77.37</td><td>67.93</td><td>66.30</td><td>63.17</td><td>59.82</td><td>50.85</td><td></td></tr><tr><td>LawGPT_zh</td><td>47.10</td><td>29.16</td><td>8.92</td><td>22.55</td><td>67.64</td><td>69.48</td><td>79.37</td><td>80.23</td><td>66.90</td><td>68.38</td><td>67.55</td><td>68.94</td><td>56.35</td><td></td></tr><tr><td>HanFei</td><td>75.72</td><td>32.98</td><td>12.46</td><td>26.91</td><td>68.72</td><td>73.25</td><td>78.63</td><td>78.11</td><td>67.03</td><td>67.45</td><td>68.63</td><td>67.73</td><td>59.80</td><td></td></tr></table>
197
+
198
+ Table 3: Performance comparisons on the Practitioner dataset. The best performance is indicated in bold.
199
+
200
+ # 6.2.2 Performance of Legal LLM
201
+
202
+ Law article training achieves gains. In the Layperson dataset, LexiLaw achieves optimal performance overall. This is because the questions in the Layperson dataset often require only law articles to provide answers clearly, and LexiLaw's training explicitly used law articles, allowing it to effectively handle such questions.
203
+
204
+ Full-parameter training offers advantages. Hanfei achieves the best results in the Practitioner dataset, as it is a fully parameter-trained legal LLM. Full-parameter fine-tuning allows it to effectively simulate a legal expert, thus performing well.
205
+
206
+ Syllogistic reasoning is useful. fuzi.mingcha performs well on syllogism evaluation metrics, particularly on the Layperson dataset. This is due to its fine-tuning on syllogism judgment data.
207
+
208
+ # 6.2.3 Open Domain LLM vs. Legal LLM
209
+
210
+ Impact of LLM Backbone. We can observe that some legal LLMs perform worse than open-domain LLMs. This is because Qwen2 and Llama3 are the latest open-domain LLMs, and their overall capabilities have significantly improved. In contrast, most legal LLMs are built on earlier generations of LLMs, which have weaker base models, leading to poorer overall performance.
211
+
212
+ Effectiveness of legal knowledge. Overall, the upper limit of legal LLMs is higher than that of open-domain LLMs. This is because legal LLMs, after extensive training on legal knowledge, have developed strong capabilities in solving legal issues. As a result, even though their base models are outdated, they can still perform effectively.
213
+
214
+ # 6.3 Human Evaluation
215
+
216
+ In this section, we compared the syllogism-level metric with human evaluation. Details of legal human annotators can be found in Appendix D.
217
+
218
+ The syllogism-level evaluation of citation quality is divided into two stages: Stage 1: Extracting key components. Stage 2: Assessing the entailment using an NLI model.
219
+
220
+ Stage 1: We randomly selected 50 questions each from the Layperson and Practitioner datasets. After splitting the cases into individual clauses, annotators were provided with the full case and its clauses and performed a three-class classification of each clause (circumstances, illegal acts, or legal decisions). Qwen2's annotations were then compared with the human annotations. The Cohen's kappa coefficient (Cohen, 1960) of 0.7876 indicates substantial agreement (0.61-0.80) between the model's and the human annotators' labels.
221
+
222
+ Stage 2: We randomly selected 50 questions from the Practitioner dataset and used Qwen2 to extract key components of pairs of responses and citations. Annotators assessed the degree to which the citations entailed the corresponding response components using a 5-point scale (1: low, 5: high), with descriptions provided in Appendix D. The entailment probabilities given by DISC-LawLLM, which range from 0 to 1, were scaled to the same 1-5 range by multiplying by 5 and rounding. We then compared the scaled model outputs with the human evaluations and calculated Cohen's kappa coefficient. The kappa score of 0.6923 again indicates substantial agreement (0.61-0.80) between the model and human judgments.
223
+
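+ The scaling and agreement computation in Stage 2 can be sketched as follows; the values are toy examples, and clipping a zero probability up to the minimum score of 1 is an assumption.
+ 
+ ```python
+ # Sketch: map model entailment probabilities to the 1-5 scale and compute Cohen's kappa.
+ from sklearn.metrics import cohen_kappa_score
+ 
+ def scale_to_likert(prob: float) -> int:
+     return max(1, min(5, round(prob * 5)))
+ 
+ model_probs = [0.92, 0.40, 0.77]                 # toy entailment probabilities
+ human_scores = [5, 2, 4]                         # toy human ratings
+ model_scores = [scale_to_likert(p) for p in model_probs]
+ kappa = cohen_kappa_score(human_scores, model_scores)
+ ```
+ 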
224
+ # 6.4 Effects on Different Retrieval Models
225
+
226
+ We selected BGE as the retrieval model in the main experiment. In this section, we explore the impact of using different retrieval models. Specifically, we evaluate Criminal-BERT (Zhong et al., 2019) and Civil-BERT (Zhong et al., 2019), two legal domain models based on BERT, fine-tuned on large-scale criminal and civil law documents, respectively. We replaced the retrieval model and tested the CGG
227
+
228
+ ![](images/bb02b0dcacfe9e2343a7e3dce9d3c1df14239b907d3877ece25c69828cb44528.jpg)
229
+ Figure 3: Performance of different retrieval models. Lay is short for Layperson dataset and Pra is short for Practitioner dataset.
230
+
231
+ ![](images/1fbc1d82ad26023059c2cbac91cbdf2370eca84e69adf8ce3919ae5ef0ed0727.jpg)
232
+
233
+ ![](images/2710aec7dc9ffc3bb3faa79cb8e962183333b9a0069341b63d3806ae4b8136a3.jpg)
234
+ (a) Methods for CitaLaw metric with Layperson dataset.
235
+ (b) Metrics for CGG method with Practitioner dataset.
236
+ Figure 4: The performance of different NLI models when the LLM is Llama.
237
+
238
+ method on the Layperson dataset. The average results across all metrics are shown in Figure 3, with detailed metric results provided in Appendix E.
239
+
240
+ As shown, on the Layperson dataset, BGE significantly outperforms the other two models. This is because the dataset consists of questions from laypersons, which are more everyday in nature. In contrast, the two legal BERT models, having been trained extensively on legal cases, show a distributional mismatch with open-domain data, leading to poorer performance. On the Practitioner dataset, which features professional legal questions, BGE still achieves the best performance. This can be attributed to its extensive training on diverse data, likely including some legal data, and its use of more advanced model architectures and techniques. However, the two legal BERT models perform comparably to BGE, showcasing the benefits of their specialized training on legal data.
241
+
242
+ # 6.5 Effects on Different NLI Models
243
+
244
+ We opted to use legal LLMs as the NLI model in our experiments, as they support longer input lengths and incorporate substantial legal knowledge.
245
+
246
+ In Section 6.3, we verified that DISC-LawLLM achieves good consistency with human judgments. In this section, we explore the performance of several legal LLMs on the NLI task. Besides DISC-LawLLM, we evaluated LexiLaw, LawGPT_zh, and Hanfei, which demonstrated strong performance in the main experiments.
247
+
248
+ In Figure 4 (a), we examined the ability of four legal LLMs to evaluate Llama across the CloseBook, CGG, ARG-Q, and ARG-QA methods using the $\text{Cita}_{\text{Law}}$ metric on the Layperson dataset. In Figure 4 (b), we investigated the performance of the four legal LLMs in evaluating the CGG method applied to Llama across the metrics $\text{Cita}_{\text{Law}}$ , $\text{Cita}_{\text{c}}$ , $\text{Cita}_{\text{a}}$ , and $\text{Cita}_{\text{d}}$ on the Practitioner dataset.
249
+
250
+ We can observe that Hanfei provides lower entailment scores across both datasets. This is because it is a fully parameter-tuned legal LLM, which results in a diminished capability to handle the general task of entailment reasoning. Additionally, we found that on the Practitioner dataset, other legal LLMs achieved results closer to those of DISC-LawLLM, while on the Layperson dataset, the performance gap was significantly larger. This is because the Practitioner dataset is more judicially oriented, aligning with the knowledge seen during the fine-tuning of legal LLMs. In contrast, due to limited training on general-purpose data, other legal LLMs struggle to accurately determine entailment relationships in the Layperson dataset. Similar conclusions can be drawn when the LLM is Qwen in Appendix F.
251
+
252
+ # 7 Conclusion
253
+
254
+ We introduce CitaLaw, a benchmark designed to evaluate the ability of LLMs to generate responses with citations in legal scenarios, thus improving the trustworthiness of LLMs. CitaLaw includes two categories of questions: those from laypersons and those from practitioners. For laypersons, CitaLaw provides law articles as citations to help them clearly understand the LLM's response. For practitioners, both law articles and precedent cases are provided as citations, better supporting their needs for complex reasoning. CitaLaw offers global-level and syllogism-level metrics and supports both integrating citations into LLM inputs to guide generation and using citations to refine the LLM's responses. We conducted extensive experiments on 7 legal-domain LLMs and 2 popular open-domain LLMs, providing valuable insights for the deployment of LLMs in legal scenarios.
255
+
256
+ # 8 Limitations
257
+
258
+ While CitaLaw provides a robust framework for evaluating LLMs in legal scenarios, several limitations should be acknowledged to guide future extensions of this work.
259
+
260
+ First, the datasets used in CitaLaw are primarily sourced from the Chinese legal system, which may limit the benchmark's applicability to other jurisdictions. However, by incorporating both law articles and precedent cases to align with the principles of civil and common law systems, CitaLaw demonstrates strong potential for adaptation to diverse legal contexts.
261
+
262
+ Second, the syllogism-based evaluation framework simplifies legal reasoning into three key components: the major premise (law articles or precedent cases), the minor premise (case circumstances and actions), and the conclusion (legal decision). While this structured approach is effective for systematic evaluation, real-world legal reasoning may encompass additional complexities.
263
+
264
+ # 9 Ethical Considerations
265
+
266
+ Data Privacy and Confidentiality. The legal datasets used in CitaLaw include law articles, precedent cases, user questions, and golden responses. These documents were sourced from publicly available databases, ensuring compliance with data privacy and confidentiality standards. We carefully reviewed the datasets to ensure that no personally identifiable information (PII) or sensitive details about individuals were inadvertently included.
267
+
268
+ Alignment with Legal Standards. Legal AI systems must align with the ethical and professional standards of the legal domain. Our work emphasizes the need for syllogism-based reasoning to ensure logical consistency and adherence to legal principles.
269
+
270
+ Transparency and Explainability. Legal reasoning must be transparent and interpretable, particularly when used in sensitive or high-stakes domains. The metrics proposed in CitaLaw, including syllogism-based evaluation, aim to improve explainability by breaking down the reasoning process into logical components.
271
+
272
+ Responsibility in System Deployment. CitaLaw is intended as a research benchmark and should not be directly deployed in high-stakes legal decision-making without human oversight. While the benchmark aims to enhance the trustworthiness
273
+
274
+ of LLM-generated responses, legal professionals should always verify the citations and legal interpretations provided by such systems. Misuse of automated systems without adequate validation could lead to inaccurate legal advice or unintended consequences in legal proceedings.
275
+
276
+ # 10 Acknowledgements
277
+
278
+ This work was funded by the National Key R&D Program of China (2023YFA1008704), the National Natural Science Foundation of China (62472426). Supported by fund for building world-class universities (disciplines) of Renmin University of China. Work partially done at Beijing Key Laboratory of Research on Large Models and Intelligent Governance, and Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE. Supported by the Beijing Social Science Foundation Planning Project (Grant No. 24GLC041), the Fundamental Research Funds for the Central Universities in UIBE (Grant No. 24QN06, 24PYTS22).
279
+
280
+ # References
281
+
282
+ Abdelrahman Abdallah, Bhawna Piryani, and Adam Jatowt. 2023. Exploring the state of the art in legal qa systems. Journal of Big Data, 10(1):127.
283
+ AI@Meta. 2024. Llama 3 model card.
284
+ Andrew Blair-Stanek, Nils Holzenberger, and Benjamin Van Durme. 2023. Can gpt-3 perform statutory reasoning? In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law, pages 22-31.
285
+ Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46.
286
+ Yongfu Dai, Duanyu Feng, Jimin Huang, Haochen Jia, Qianqian Xie, Yifang Zhang, Weiguang Han, Wei Tian, and Hao Wang. 2023. Laiw: A chinese legal large language models benchmark (a technical report). arXiv preprint arXiv:2310.05620.
287
+ Haolin Deng, Chang Wang, Xin Li, Dezhang Yuan, Junlang Zhan, Tianhua Zhou, Jin Ma, Jun Gao, and Ruifeng Xu. 2024. Websites: Attributed query-focused summarization on chinese web search results with citations. arXiv preprint arXiv:2403.01774.
288
+ Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
289
+ Zhiwei Fei, Xiaoyu Shen, Dawei Zhu, Fengzhe Zhou, Zhuo Han, Songyang Zhang, Kai Chen, Zongwen
290
+
291
+ Shen, and Jidong Ge. 2023. Lawbench: Benchmarking legal knowledge of large language models. arXiv preprint arXiv:2309.16289.
292
+ Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023a. Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477-16508.
293
+ Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6465-6488.
294
+ Wanwei He, Jiabao Wen, Lei Zhang, Hao Cheng, Bowen Qin, Yunshui Li, Feng Jiang, Junying Chen, Benyou Wang, and Min Yang. 2023. Hanfei-1.0. https://github.com/siat-nlp/HanFei.
295
+ Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
296
+ Dongfang Li, Zetian Sun, Xinshuo Hu, Zhenyu Liu, Ziyang Chen, Baotian Hu, Aiguo Wu, and Min Zhang. 2023. A survey of large language models attribution. arXiv preprint arXiv:2311.03731.
297
+ Haitao Li, You Chen, Qingyao Ai, Yueyue Wu, Ruizhe Zhang, and Yiqun Liu. 2024. Lexeval: A comprehensive chinese legal benchmark for evaluating large language models. Preprint, arXiv:2409.20288.
298
+ Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
299
+ Hongcheng Liu, Yusheng Liao, Yutong Meng, and Yuhao Wang. 2023. Xiezhi: Chinese law large language model. https://github.com/LiuHC0428/LAW_GPT.
300
+ Luyao Ma, Yating Zhang, Tianyi Wang, Xiaozhong Liu, Wei Ye, Changlong Sun, and Shikun Zhang. 2021a. Legal judgment prediction with multi-stage case representation learning in the real court setting. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 993-1002.
301
+ Yixiao Ma, Yunqiu Shao, Yueyue Wu, Yiqun Liu, Ruizhe Zhang, Min Zhang, and Shaoping Ma. 2021b. Lecard: A legal case retrieval dataset for chinese law system. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2342-2348.
302
+ Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, and Dan Roth. 2024. Expertqa: Expert-curated questions and attributed answers. In Proceedings of the 2024 Conference of the North
303
+
304
+ American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3025-3045.
305
+ Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. Advances in Neural Information Processing Systems, 34:4816-4828.
306
+ N Reimers. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
307
+ Jaromir Savelka, Kevin D Ashley, Morgan A Gray, Hannes Westermann, and Huihui Xu. 2023. Explaining legal concepts with augmented large language models (gpt-4). arXiv preprint arXiv:2306.09525.
308
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38-45.
309
+ Shiguang Wu, Zhongkun Liu, Zhen Zhang, Zheng Chen, Wentao Deng, Wenhao Zhang, Jiyuan Yang, Zhitao Yao, Yougang Lyu, Xin Xin, Shen Gao, Pengjie Ren, Zhaochun Ren, and Zhumin Chen. 2023a. fuzi.mingcha. https://github.com/irlab-sdu/fuzi.mingcha.
310
+ Yiquan Wu, Yuhang Liu, Yifei Liu, Ang Li, Siying Zhou, and Kun Kuang. wisdominterrogatory. Available at GitHub.
311
+ Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023b. Precedent-enhanced legal judgment prediction with llm and domain-model collaboration. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12060-12075.
312
+ Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff, Defu Lian, and Jian-Yun Nie. 2023. C-pack: Packaged resources to advance general chinese embedding. arXiv preprint arXiv:2309.07597.
313
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu
314
+
315
+ Cui, Zhenru Zhang, and Zhihao Fan. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
316
+ Fangyi Yu, Lee Quartey, and Frank Schilder. 2022a. Legal prompting: Teaching a language model to think like a lawyer. arXiv preprint arXiv:2212.01326.
317
+ Weijie Yu, Zhongxiang Sun, Jun Xu, Zhenhua Dong, Xu Chen, Hongteng Xu, and Ji-Rong Wen. 2022b. Explainable legal case matching via inverse optimal transport-based rationale extraction. In Proceedings of the 45th international ACM SIGIR conference on research and development in information retrieval, pages 657-668.
318
+ Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen, Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, Xuanjing Huang, and Zhongyu Wei. 2023. Disc-lawllm: Fine-tuning large language models for intelligent legal services. Preprint, arXiv:2309.11325.
319
+ Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.
320
+ Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. Jecqa: a legal-domain question answering dataset. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 9701-9708.
321
+ Haoxi Zhong, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2019. Open chinese language pretrained model zoo. Technical report.
322
+ Zhi Zhou, Jiang-Xin Shi, Peng-Xiao Song, Xiao-Wen Yang, Yi-Xuan Jin, Lan-Zhe Guo, and Yu-Feng Li. 2024. Lawgpt: A chinese legal knowledge-enhanced large language model. Preprint, arXiv:2406.04614.
323
+
324
+ # A The Used Prompts
325
+
326
+ Figure 5 illustrates the prompts used in this paper, including $p_1$ , $p_2$ , $p_3$ in Eq. 1, Eq. 2 and Eq. 3.
327
+
328
+ # B More Details of Evaluated Models and Datasets
329
+
330
+ For the Legal LLMs, we choose (1) fuzi.mingcha (6B) (Wu et al., 2023a): It leverages unsupervised judicial corpora for training and uses syllogistic reasoning judgment data for fine-tuning. (2) LexiLaw<sup>7</sup> (6B): It specifically utilizes legal articles and legal reference books for training. (3) Tailing<sup>8</sup> (7B): It uses judicial text validation data, information extraction data, and judgment data for training. (4) DISC-LawLLM (13B) (Yue et al., 2023): In addition to fine-tuning with pairs, it also uses triplet data for fine-tuning to enhance the model's ability to leverage external knowledge. (5) zhihai (7B) (Wu et al.): It utilizes ChatGPT to modify the existing dataset and then performs secondary pre-training. (6) LawGPT_zh (6B) (Liu et al., 2023): It primarily uses scenario-based dialogues and knowledge-based question-answering data for fine-tuning based on LoRA. (7) HanFei (7B) (He et al., 2023): It is the first fully parameter-trained legal LLM in China. Because CGG achieves the best overall performance in the main experiments, we generate responses for the legal LLMs using CGG.
331
+
332
+ Table 4 and Table 5 are the website URLs and corresponding licenses of the evaluated models and datasets.
333
+
334
+ # C More Details on Implementation
335
+
336
+ Considering the length of legal texts and the limited input window of the LLMs, all experiments in this paper are conducted in a zero-shot setting. We use Qwen2-1.5B (Yang et al., 2024)<sup>9</sup>, which performs well on Chinese, for the MAUVE calculations. For ROUGE, we use version 1.0.1 of the ROUGE package. For BERTScore, we use bert-base-chinese (Devlin, 2018)<sup>10</sup> to compute it. Regarding sentence-BERT, we employ paraphrase-multilingual-MiniLM-L12-v2 (Reimers, 2019)<sup>11</sup>.
337
+
338
+ # D Human Evaluation
339
+
340
+ We hired four legal annotators from a Chinese university, all of whom have legal education backgrounds and are familiar with the cases in the dataset they need to annotate. We explained to the annotators that the data they annotated would be used for scientific research and paid them a reasonable remuneration based on local conditions. They are all graduate students from the judicial field, with practical experience in the legal profession. Two are male, two are female, aged between 24 and 30, and all have over five years of judicial theory study. Two annotators were responsible for the first stage of annotation, while the other two were responsible for the second stage, with all working together on the annotation process.
341
+
342
+ Table 6 shows a detailed description of each level used to evaluate the agreement of the NLI model with human evaluations.
343
+
344
+ # E Different Retrieval Models
345
+
346
+ Tables 7 and 8 present the performance of different retrieval models, Criminal-BERT, Civil-BERT, and BGE, on each metric for the CGG method across the two datasets. It can be observed that when Llama3 and Qwen2 are used as LLMs, BGE achieves the best performance as the retrieval model. Comparing the two datasets, on the Layperson dataset, where the questions are more general, Criminal-BERT and Civil-BERT, which focus on legal cases, perform relatively poorly. In contrast, on the Practitioner dataset, despite no structural or training improvements, Criminal-BERT and Civil-BERT achieve results comparable to BGE, highlighting the importance of legal knowledge in judicial QA tasks.
347
+
348
+ The differences between the two datasets also underscore the significance of selecting an appropriate retrieval model.
349
+
350
+ # F Different NLI Models
351
+
352
+ Figures 6 (a) and (b) show the entailment scores given by the four legal LLMs as NLI models under different methods (CloseBook, CGG, ARG-Q, ARG-QA) and metrics ($\text{Cita}_{\text{Law}}$ , $\text{Cita}_{\text{c}}$ , $\text{Cita}_{\text{a}}$ , and $\text{Cita}_{\text{d}}$) when Qwen is used as the LLM. Similar conclusions to those in Section 6.5 can be drawn.
353
+
354
+ Answer the question based on the provided law article and cite it appropriately. Only output the answer and citations, without including any additional content. When citing the law article, use [A1] at the end of the relevant sentence.
355
+
356
+ Below is the provided law article: Law article [A1]: {Law article 1}
357
+
358
+ Question: {Question} Answer:
359
+
360
+ # (a) Layperson
361
+
362
+ Refine the text based on the references and only output the refined text.
363
+
364
+ Text: {Answer}
365
+ Reference: {References}
366
+
367
+ Refined text (only output the Refined text, without any additional content):
368
+
369
+ # (c) Response Refinement
370
+
371
+ Answer the question based on the provided documents and cite them appropriately.
372
+
373
+ Only output the answer and citations, without including any additional content.
374
+
375
+ When citing precedent cases, use [C1], [C2], or [C3] at the end of the sentence. When citing the law article, use [A1].
376
+
377
+ Below are the provided documents:
378
+ Precedent case [C1]: {Precedent case 1}
379
+
380
+ Precedent case [C2]: {Precedent case 2}
381
+
382
+ Precedent case [C3]: {Precedent case 3}
383
+ Law article [A1]: {Law article 1}
384
+
385
+ Question: {Question} Answer:
386
+
387
+ # (b) Practitioner
388
+
389
+ Answer the question. Only output the answer without any additional content.
390
+
391
+ Question: {Question} Answer:
392
+
393
+ # (d) Without Reference
394
+
395
+ <table><tr><td>Type</td><td>LLM</td><td>URL</td><td>Licence</td></tr><tr><td rowspan="2">Open domain</td><td>Qwen2-7B-Instruct</td><td>https://huggingface.co/Qwen/Qwen2-7B-Instruct</td><td>Apache-2.0 license</td></tr><tr><td>Llam3-8B-Instruct</td><td>https://github.com/meta-llama/llama3</td><td>META LLAMA 3 COMMUNITY License</td></tr><tr><td rowspan="7">Legal Domain</td><td>fuzi.mingcha</td><td>https://github.com/irlab-sdu/fuzi.mingcha</td><td>Apache-2.0 license</td></tr><tr><td>DISC-LawLLM</td><td>https://github.com/FudanDISC/DISC-LawLLM</td><td>Apache-2.0 license</td></tr><tr><td>LawGPT_zh</td><td>https://github.com/LiuHC0428/LAW-GPT</td><td></td></tr><tr><td>Hanfei</td><td>https://github.com/siat-nlp/HanFei</td><td>Apache-2.0 license</td></tr><tr><td>Tailing</td><td>https://github.com/DUTIR-LegalIntelligence/Tailing</td><td></td></tr><tr><td>LexiLaw</td><td>https://github.com/CSHaitao/LexiLaw</td><td>MIT license</td></tr><tr><td>zhihai</td><td>https://github.com/zhihaiLLM/wisdomInterrogatory</td><td>Apache-2.0 license</td></tr></table>
396
+
397
+ Table 4: The LLM source URLs and licenses used by CitaLaw. The parts where the license is listed as empty indicate that the author has not provided a License.
398
+
399
+ ![](images/1b8aaf2ed2a7460c159e4afcce3fcd90678b2d5fe400a1644ae0c81d1b896487.jpg)
400
+ Figure 5: Prompts used in this paper. (a) The prompt $p_1$ is used to retrieve one law article in the Layperson dataset. (b) The prompt $p_1$ is used to retrieve one law article and three precedent cases in the Practitioner dataset. (c) The prompt $p_3$ is used to refine the LLM's answer based on references. (d) The prompt $p_2$ is used for LLM responses without references.
401
+
402
+ ![](images/1871806c373c10877618a242a1d2f56ced23ad1f3ac4caa451cbf99ee3a98601.jpg)
403
+ (a) Metrics for CGG method with Layperson dataset.
404
+ (b) Methods for CitaLaw metric with Practitioner dataset.
405
+ Figure 6: The performance of different NLI models when the LLM is Qwen.
406
+
407
+ <table><tr><td>Type</td><td>Dataset</td><td>URL</td><td>Licence</td></tr><tr><td rowspan="2">Question</td><td>Layperson</td><td>https://github.com/open-compass/LawBench</td><td>Apache-2.0 license</td></tr><tr><td>Practitioner</td><td>https://github.com/CSHaitao/LexEval</td><td>MIT License</td></tr><tr><td rowspan="8">Corpus</td><td>LeCaRD</td><td>https://github.com/myx666/LeCaRD</td><td>MIT License</td></tr><tr><td>ELAM</td><td>https://github.com/ruc-wjyu/IOT-Match</td><td>MIT License</td></tr><tr><td>CAIL2021-sfzy</td><td>https://github.com/china-ai-law-challenge/CAIL2021</td><td></td></tr><tr><td>LJP-MSJudge</td><td>https://github.com/mly-nlp/LJP-MSJudge</td><td></td></tr><tr><td>fuzi.mingcha</td><td>https://github.com/irlab-sdu/fuzi.mingcha</td><td>Apache-2.0 license</td></tr><tr><td>DISC-LawLLM</td><td>https://github.com/FudanDISC/DISC-LawLLM</td><td>Apache-2.0 license</td></tr><tr><td>LawGPT_zh</td><td>https://github.com/LiuHC0428/LAW-GPT</td><td></td></tr><tr><td>Hanfei</td><td>https://github.com/siat-nlp/HanFei</td><td>Apache-2.0 license</td></tr></table>
408
+
409
+ Table 5: The dataset source URLs and licenses used by CitaLaw. The parts where the license is listed as empty indicate that the author has not provided a License.
410
+
411
+ <table><tr><td>Score</td><td>Description</td></tr><tr><td>1</td><td>No Entailment: The former does not entail the latter at all, with no logical connection between the two.</td></tr><tr><td>2</td><td>Weak Entailment: A partial entailment where the former somewhat relates to the latter, but the connection is weak and not fully conclusive.</td></tr><tr><td>3</td><td>Moderate Entailment: A moderate degree of entailment, meaning the former generally leads to the latter in most cases, but exceptions exist.</td></tr><tr><td>4</td><td>Strong Entailment: A strong logical relationship where the former can derive the latter in the vast majority of cases.</td></tr><tr><td>5</td><td>Complete Entailment: The former fully entails the latter in all cases, with an unambiguous and definitive logical connection between them.</td></tr></table>
412
+
413
+ Table 6: Scoring Criteria for Human Evaluation of Entailment.
414
+
415
+ <table><tr><td colspan="2">Metric</td><td>Fluency</td><td colspan="7">Correctness</td><td>Citation</td><td>All</td></tr><tr><td>Category</td><td>Retriever</td><td>Mauve</td><td>Rouge-1</td><td>Rouge-2</td><td>Rouge-L</td><td>BERT-F</td><td>Correctc</td><td>Correcta</td><td>Correctd</td><td>CitaLaw</td><td>Avg</td></tr><tr><td rowspan="3">Llama3 (Llam3-8B-Instruct)</td><td>Criminal</td><td>37.44</td><td>18.07</td><td>2.18</td><td>13.15</td><td>61.71</td><td>64.03</td><td>63.56</td><td>64.36</td><td>80.34</td><td>44.98</td></tr><tr><td>Civil</td><td>56.16</td><td>18.27</td><td>2.34</td><td>13.44</td><td>61.90</td><td>63.22</td><td>63.89</td><td>63.35</td><td>80.97</td><td>47.06</td></tr><tr><td>BGE</td><td>61.01</td><td>23.97</td><td>6.05</td><td>17.91</td><td>65.94</td><td>67.29</td><td>77.31</td><td>74.95</td><td>86.70</td><td>53.46</td></tr><tr><td rowspan="3">Qwen2 (Qwen2-7B-Instruct)</td><td>Criminal</td><td>55.26</td><td>21.09</td><td>4.53</td><td>14.32</td><td>64.73</td><td>63.10</td><td>64.89</td><td>65.85</td><td>61.60</td><td>46.15</td></tr><tr><td>Civil</td><td>52.44</td><td>20.48</td><td>4.16</td><td>13.81</td><td>64.45</td><td>61.79</td><td>64.94</td><td>65.62</td><td>59.88</td><td>45.29</td></tr><tr><td>BGE</td><td>75.10</td><td>22.26</td><td>4.77</td><td>15.41</td><td>65.28</td><td>67.50</td><td>78.62</td><td>77.82</td><td>77.59</td><td>53.82</td></tr></table>
416
+
417
+ Table 7: Performance comparisons on retrieval models in the Layperson dataset when the method is CGG. The best performance is indicated in bold.
418
+
419
+ <table><tr><td colspan="2">Metric</td><td>Fluency</td><td colspan="7">Correctness</td><td colspan="4">Citation</td><td>All</td></tr><tr><td>Category</td><td>Retriever</td><td>Mauve</td><td>Rouge-1</td><td>Rouge-2</td><td>Rouge-L</td><td>BERT-F</td><td>Correctc</td><td>Correcta</td><td>Correctd</td><td>CitaLaw</td><td>CitaC</td><td>Citaa</td><td>CitaD</td><td>Avg</td></tr><tr><td rowspan="3">Llama3 (Llam3-8B-Instruct)</td><td>Criminal</td><td>34.25</td><td>25.79</td><td>7.86</td><td>19.42</td><td>65.03</td><td>66.27</td><td>76.30</td><td>76.82</td><td>70.59</td><td>66.41</td><td>70.09</td><td>69.47</td><td>54.03</td></tr><tr><td>Civil</td><td>39.84</td><td>26.39</td><td>8.07</td><td>20.02</td><td>65.27</td><td>65.41</td><td>75.78</td><td>75.73</td><td>69.21</td><td>67.52</td><td>69.54</td><td>69.16</td><td>54.33</td></tr><tr><td>BGE</td><td>36.37</td><td>26.15</td><td>7.84</td><td>19.55</td><td>65.60</td><td>67.19</td><td>76.36</td><td>77.73</td><td>73.58</td><td>68.23</td><td>67.87</td><td>67.65</td><td>54.51</td></tr><tr><td rowspan="3">Qwen2 (Qwen2-7B-Instruct)</td><td>Criminal</td><td>32.49</td><td>31.79</td><td>11.09</td><td>23.93</td><td>69.79</td><td>72.00</td><td>80.81</td><td>81.53</td><td>68.42</td><td>68.42</td><td>71.86</td><td>71.54</td><td>56.97</td></tr><tr><td>Civil</td><td>33.37</td><td>31.67</td><td>11.06</td><td>23.84</td><td>69.63</td><td>73.35</td><td>80.57</td><td>81.27</td><td>69.11</td><td>66.41</td><td>70.09</td><td>69.47</td><td>56.65</td></tr><tr><td>BGE</td><td>39.66</td><td>31.01</td><td>10.75</td><td>23.43</td><td>69.06</td><td>73.49</td><td>80.11</td><td>81.11</td><td>70.37</td><td>67.82</td><td>69.53</td><td>70.01</td><td>57.20</td></tr></table>
420
+
421
+ Table 8: Performance comparisons on retrieval models in the Practitioner dataset when the method is CGG. The best performance is indicated in bold.
2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ef3d44807091a05db9f73e2dbb186e7fc266aaeb98f3004f93dfbbdfbad3727
3
+ size 785144
2025/CitaLaw_ Enhancing LLM with Citations in Legal Domain/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/08384e56-2d5c-4ecb-b64c-ecb74bcfc53b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d5c0363cf7c4506bcf7f15bde91fbe35f1e9385dc77a1aec2b6a1b59c05cfca5
3
+ size 2470739
2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/full.md ADDED
@@ -0,0 +1,599 @@
 
 
 
 
1
+ # ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM
2
+
3
+ Hoang Pham*, Thanh-Do Nguyen*, Khac-Hoai Nam Bui†
4
+
5
+ Viettel Artificial Intelligence and Data Services Center,
6
+
7
+ Viettel Group, Vietnam
8
+
9
+ {hoangpv4, dont15, nambkh} @ viettel.com.vn
10
+
11
+ # Abstract
12
+
13
+ Integrating knowledge graphs (KGs) to enhance the reasoning capabilities of large language models (LLMs) is an emerging research challenge in claim verification. While KGs provide structured, semantically rich representations well-suited for reasoning, most existing verification methods rely on unstructured text corpora, limiting their ability to effectively leverage KGs. Additionally, despite possessing strong reasoning abilities, modern LLMs struggle with multi-step modular pipelines and reasoning over KGs without adaptation. To address these challenges, we propose ClaimPKG<sup>1</sup>, an end-to-end framework that seamlessly integrates LLM reasoning with structured knowledge from KGs. Specifically, the main idea of ClaimPKG is to employ a lightweight, specialized LLM to represent the input claim as pseudo-subgraphs, guiding a dedicated subgraph retrieval module to identify relevant KG subgraphs. These retrieved subgraphs are then processed by a general-purpose LLM to produce the final verdict and justification. Extensive experiments on the FactKG dataset demonstrate that ClaimPKG achieves state-of-the-art performance, outperforming strong baselines in this research field by $9\% - 12\%$ accuracy points across multiple categories. Furthermore, ClaimPKG exhibits zero-shot generalizability to unstructured datasets such as HoVer and FEVEROUS, effectively combining structured knowledge from KGs with LLM reasoning across various LLM backbones.
14
+
15
+ # 1 Introduction
16
+
17
+ In today's rapidly evolving information landscape, distinguishing fact from misinformation is becoming more challenging, especially with the rise of AI-generated content. Robust claim verification
18
+
19
+ ![](images/1f79aa990ca6454f337d823732bf436f1207f9d509390dd3c4d4aaae0fda0a94.jpg)
20
+
21
+ ![](images/101c2011dbb6cd268f3b14346c625c2dd2859bef56228ac675c763c1aa6f5077.jpg)
22
+ c) Our Method - ClaimPKG
23
+ Figure 1: Different claim verification paradigms: (a) Unstructured Text-based methods focusing on claim decomposition and sequential reasoning over text, (b) KG-based methods facing challenges in entity resolution and structured reasoning, and (c) ClaimPKG's unified framework with specialized modules for pseudosubgraph generation, retrieval, and general reasoning.
24
+
25
+ systems, leveraging NLP methods to automatically assess the veracity of claims (Glockner et al., 2022a,b; Thorne and Vlachos, 2018), are essential to ensure information reliability. Effective methods require not only accuracy but also transparency, necessitating strong reasoning to identify evidence and provide clear justifications (Pan et al., 2023).
26
+
27
+ Most existing verification approaches focus on unstructured text corpora, using techniques like chain-of-thought (CoT) reasoning (Wei et al., 2022) to break down claims for verification. Approaches like ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023) employ modular pipelines to verify claims against text-based knowledge bases (Figure 1(a)). However, the inherent limitations of text representation pose challenges. Specifically, ambiguous entity references and complex multi-hop relationships make it difficult to perform rigorous verification against unstructured text.
28
+
29
+ In contrast, Knowledge Graphs (KGs) provide
30
+
31
+ structured relationships for effective reasoning (Luo et al., 2024; Sun et al., 2024), yet their use in claim verification remains limited. Existing KG-based approaches (Figure 1(b)) (Kim et al., 2023b; Zhou et al., 2019; Kim et al., 2023a) lack end-to-end solutions, often requiring pre-extracted entities via modules like entity or relation extraction. Meanwhile, despite excelling at general reasoning, LLMs struggle with KG-specific tasks like entity resolution and multi-hop reasoning (Cao et al., 2021; Aly et al., 2021), suggesting the need for a system combining LLM capabilities with KG-based inference.
32
+
33
+ Overall, solving claim verification problems is hindered by the following major limitations: (1) Entity Ambiguity: Systems must accurately disambiguate entities within claims to identify relevant evidence (Aly et al., 2021); (2) Multihop Reasoning: Complex claims often require reasoning across multiple pieces of evidence from different sources (Pan et al., 2023; Wang and Shu, 2023); and (3) Limited Integration of KGs and LLMs: Current approaches underexplore the potential of combining structured representations with the strong inference capabilities of LLMs (Kim et al., 2023a).
34
+
35
+ To address these challenges, we propose ClaimPKG (Claim Verification using Pseudo-Subgraph in Knowledge Graphs), a novel end-to-end framework that synergizes the adaptability and generalization strengths of LLMs with the structured and rigorous representation of KGs to enable robust and transparent claim verification. As specified in Figure 1(c), ClaimPKG operates through three phases: (1) Pseudo-Subgraphs Generation: A KG-specialized lightweight LLM generates pseudo subgraphs as the representations of input claims under a Trie-based KG-Entity Constraint, ensuring the correctness of extracted entities; (2) Subgraphs Retrieval: A retrieval algorithm considers generated pseudo subgraphs as queries to identify actual relevant KG subgraphs as evidence; and (3) General Reasoning: A general-purpose LLM reasons over the retrieved KG subgraphs to produce the verdict and human-readable justifications. Through extensive experiments on the FactKG dataset, ClaimPKG achieves state-of-the-art performance, demonstrating its effectiveness over various claim types with a small number of training samples. Furthermore, its zero-shot generalizability to unstructured datasets (HoVer, FEVEROUS) highlights its robustness.
36
+
37
+ Our contributions can be summarized as follows: (1) We introduce ClaimPKG, a holistic framework
38
+
39
+ that integrates LLMs and KGs for accurate and interpretable claim verification, handling various types of claims in a unified manner; (2) We develop a lightweight specialized LLM with a corresponding decoding algorithm for pseudo-subgraph generation and pair it with general-purpose LLMs to achieve robust reasoning; and (3) We validate the effectiveness of ClaimPKG through extensive experiments, achieving state-of-the-art performance on structured datasets and generalizing to unstructured datasets.
40
+
41
+ # 2 Related Work
42
+
43
+ Claim Verification Approaches. Claim verification systems utilize knowledge bases that can be categorized into unstructured and structured formats. In the unstructured domain, text-based verification methods predominate, with systems designed to verify claims against textual evidence, as demonstrated in the FEVER dataset (Thorne et al., 2018). Recent advances have focused on handling specialized verification scenarios, including ambiguous question-answer pairs (Park et al., 2022), detecting factual changes (Schuster et al., 2021), and processing multiple documents concurrently (Jiang et al., 2020). For structured verification, research has primarily focused on tables and graphs, with early work developing specialized architectures: graph neural networks for knowledge graph processing (Zhou et al., 2020), table-specific transformers (Herzig et al., 2020), and tree-structured decoders for hierarchical data (Wang et al., 2020).
44
+
45
+ Claim Verification over Knowledge Graphs (KGs). The emergence of Large Language Models (LLMs) has simplified direct reasoning over textual corpora for claim verification, as demonstrated by ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023). However, structured data sources like tables and graphs can provide more grounded and robust verification results (Kim et al., 2023b). Knowledge graphs are particularly advantageous as they enable explicit representation of reasoning processes through logical rules over nodes and edges. FactKG (Kim et al., 2023b) established a foundation in this direction by introducing a comprehensive dataset for evaluating modern verification methods. KG-GPT (Kim et al., 2023a) followed this work by demonstrating performance gains through a pipeline that performs sentence decomposition, subgraph retrieval, and logical inference. Additionally, while not directly addressing
46
+
47
+ claim verification, StructGPT (Jiang et al., 2023) and RoG (Luo et al., 2024) achieved promising results in related tasks (e.g., Knowledge Base Question Answering) by collecting relevant evidence, such as subgraphs in KGs, then leveraging LLMs for complex reasoning in particular scenarios.
48
+
49
+ # 3 Preliminary
50
+
51
+ Knowledge Graph: A Knowledge Graph (KG) $\mathcal{G}$ represents facts as triplets of the form $t = (e,r,e')$, where entities $e,e'\in \mathcal{E}$ are connected by a relation $r\in \mathcal{R}$; such a triplet can also be written as $r(e,e')$.
52
+
53
+ Claim Verification: Given a claim $c$, a verification model $\mathcal{F}$ determines its veracity $v$ as Supported or Refuted based on an external knowledge base $\kappa$, while also providing a justification $j$ to explain the predicted label. This work specifically considers the scenario where $\kappa$ is structured as a Knowledge Graph $\mathcal{G}$, enabling reasoning over graph knowledge to infer $v$ and $j$. Formally, the verification process is defined as: $(v,j) = \mathcal{F}(c,\mathcal{G})$.
54
+
55
+ Trie-based Constrained Decoding: A Trie (Wikipedia, 2025b) indexes predefined token sequences, where each root-to-node path represents a prefix. During LLM generation, this structure restricts token selection to only valid Trie paths, ensuring reliable output.
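+
+ To make this constraint concrete, the following is a minimal sketch (not the authors' implementation) of a token-level Trie that indexes pre-tokenized entity names and exposes the valid next tokens for a given prefix:
+
+ ```python
+ class EntityTrie:
+     """Minimal token-level Trie over pre-tokenized entity names."""
+
+     def __init__(self):
+         self.children = {}   # token id -> child EntityTrie node
+         self.is_end = False  # True if a full entity name ends at this node
+
+     def insert(self, token_ids):
+         node = self
+         for tok in token_ids:
+             node = node.children.setdefault(tok, EntityTrie())
+         node.is_end = True
+
+     def allowed_next_tokens(self, prefix_ids):
+         """Token ids that keep the prefix on a valid root-to-node path."""
+         node = self
+         for tok in prefix_ids:
+             if tok not in node.children:
+                 return []        # prefix left the Trie: no valid continuation
+             node = node.children[tok]
+         return list(node.children.keys())
+
+ # Hypothetical usage with made-up token ids for two entity names.
+ trie = EntityTrie()
+ trie.insert([101, 7, 42])
+ trie.insert([101, 9])
+ print(trie.allowed_next_tokens([101]))  # -> [7, 9]
+ ```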
56
+
57
+ # 4 ClaimPKG
58
+
59
+ # 4.1 Formulation of ClaimPKG
60
+
61
+ We formulate the ClaimPKG framework using a probabilistic approach. Given a claim $c$ and a prebuilt KG $\mathcal{G}$, our objective is to model the distribution $p_{\theta}(v,j|c,\mathcal{G})$, where $v$ denotes the verdict and $j$ the justification. However, directly computing this distribution is infeasible, as reasoning over the entire KG is impractical given its size. To address this, we propose to select $S_{c}$, a subgraph of $\mathcal{G}$ relevant to $c$ that contains the information necessary to derive our target distribution. Treating $S_{c}$ as a latent variable, $p_{\theta}(v,j|c,\mathcal{G})$ is decomposed as:
62
+
63
+ $$
64
+ p _ {\theta} (v, j \mid c, \mathcal {G}) = \sum_ {\mathcal {S} _ {c}} p _ {\theta} (v, j \mid c, \mathcal {S} _ {c}) p _ {\theta} (\mathcal {S} _ {c} \mid c, \mathcal {G}) \tag {1}
65
+ $$
66
+
67
+ where $p_{\theta}(\mathcal{S}_c|c,\mathcal{G})$ models the subgraph selection, and $p_{\theta}(v,j|c,\mathcal{S}_c)$ models the generator of the verdict and justification given $\mathcal{S}_c$ . However, direct computation of $p_{\theta}(\mathcal{S}_c|c,\mathcal{G})$ is challenging due to modality mismatch between the input $c$ (text) and the target $\mathcal{S}_c$ (graph structure), hindering the employment of retrieval methods for $\mathcal{S}_c$ . To bridge this
68
+
69
+ gap, we decompose the subgraph selection into:
70
+
71
+ $$
72
+ p _ {\theta} \left(\mathcal {S} _ {c} | c, \mathcal {G}\right) = \sum_ {\mathcal {P} _ {c}} p _ {\theta} \left(\mathcal {S} _ {c} \mid \mathcal {P} _ {c}, \mathcal {G}\right) p _ {\theta} \left(\mathcal {P} _ {c} | c, \mathcal {G}\right) \tag {2}
73
+ $$
74
+
75
+ where $p_{\theta}(\mathcal{P}_c|c,\mathcal{G})$ models the generation of the graph representation $\mathcal{P}_c$, which we refer to as a "pseudo subgraph", from a textual claim $c$, and $p_{\theta}(\mathcal{S}_c|\mathcal{P}_c,\mathcal{G})$ models the distribution over relevant subgraphs $\mathcal{S}_c$ given $\mathcal{P}_c$. While Equations 1 and 2 establish our theoretical framework for ClaimPKG, computing exact probabilities by summing over all possible $(\mathcal{S}_c,\mathcal{P}_c)$ pairs is intractable. To address this, we propose two approximations: (1) We infer the veracity using only the most relevant subgraph $\mathcal{S}_c^*$:
76
+
77
+ $$
78
+ \left(v ^ {*}, j ^ {*}\right) \sim p _ {\theta} (v, j | c, \mathcal {S} _ {c} ^ {*}) \tag {3}
79
+ $$
80
+
81
+ (2) We assume each generated pseudo subgraph is plausible with high probability, allowing us to approximate the subgraph selection in Equation 2 as:
82
+
83
+ $$
84
+ \mathcal {S} _ {c} ^ {(i)} = \underset {\mathcal {S} _ {c}}{\arg \max }\, p _ {\theta} \left(\mathcal {S} _ {c} \mid \mathcal {P} _ {c} ^ {(i)}, \mathcal {G}\right) \tag {4}
85
+ $$
86
+
87
+ where $\mathcal{P}_c^{(i)}$ is the $i$-th generated pseudo subgraph. We then construct $\mathcal{S}_c^*$ by aggregating the retrieved subgraphs, specifically $\mathcal{S}_c^* = \bigcup_i \mathcal{S}_c^{(i)}$.
88
+
89
+ These approximations lead ClaimPKG to comprise 3 key modules as depicted in Figure 2: (1) Pseudo Subgraph Generation to generate graph representations $\mathcal{P}_c$ 's given claim $c$ ; (2) Subgraph Retrieval to retrieve relevant evidence subgraph $S_c^*$ ; and (3) General Reasoning to generate final verdict $v$ and justification $j$ . The inference procedure is described as follows:
90
+
91
+ # Inference Procedure of ClaimPKG
92
+
93
+ Preprocessing: Index the entities of the KG $\mathcal{G}$ into an Entity-Trie for efficient entity lookup.
94
+
95
+ 1. Pseudo Subgraph Generation: Generate multiple graph representations (pseudo subgraphs) $\mathbb{P}_c = \{\mathcal{P}_c^{(i)}\}_{i=1}^N$ from claim $c$ , using a specialized LLM with beam search and Entity-Trie constraints.
96
+ 2. Subgraph Retrieval: Use each pseudo subgraph in $\mathbb{P}_c$ to query its most relevant subgraph $S_{c}^{(i)}$ in the KG $\mathcal{G}$, resulting in a set $\{S_c^{(i)}\}_{i = 1}^N$ following Equation 4, then aggregate them to form $S_{c}^{*} = \bigcup_{i = 1}^{N}S_{c}^{(i)}$.
97
+ 3. General Reasoning: Employ a general-purpose LLM to reason veracity $(v^{*},j^{*})\sim p_{\theta}(v,j|c,\mathcal{S}_{c}^{*})$ following Equation 3.
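+
+ As a compressed, illustrative sketch (all callables below are placeholders for the components described above, not the released implementation), the three phases compose as follows:
+
+ ```python
+ def claimpkg_verify(claim, generate_pseudo_subgraphs, retrieve_subgraph, reason, n_beams=5):
+     """End-to-end sketch of the three ClaimPKG phases.
+
+     The three callables stand in for the Specialized LLM, the Subgraph
+     Retrieval algorithm, and the General LLM, respectively.
+     """
+     # (1) Pseudo Subgraph Generation under the Entity-Trie constraint (beam search).
+     pseudo_subgraphs = generate_pseudo_subgraphs(claim, num_beams=n_beams)
+     # (2) Subgraph Retrieval: query the KG with each pseudo subgraph, then union them (S_c*).
+     evidence = set()
+     for p in pseudo_subgraphs:
+         evidence |= set(retrieve_subgraph(p))
+     # (3) General Reasoning: justification first, then verdict (Equations 3 and 7).
+     verdict, justification = reason(claim, sorted(evidence))
+     return verdict, justification
+ ```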
98
+
99
+ The subsequent sections provide details about each component in the ClaimPKG framework.
100
+
101
+ ![](images/8f8c020c8c78d4277712169bc006bc9507ecf8d5ded3239cae38ce9a08b55ea3.jpg)
102
+ Figure 2: Illustration of the ClaimPKG for claim verification. The framework consists of three key modules: (1) Pseudo-subgraph Generation, constructing representative subgraphs; (2) Subgraph Retrieval, selecting the most pertinent KG subgraphs; and (3) General Reasoning, integrating them for accurate and interpretable verification.
103
+
104
+ # 4.2 Pseudo Subgraph Generation
105
+
106
+ The first step to effectively verify a claim is to understand its content thoroughly and represent it in a format compatible with the KG. Since the evidence comes from the KG, representing claims in graph format is crucial: it captures hypothetical relations among entities in a form that allows direct comparison with KG subgraphs during evidence retrieval. However, this process faces two main challenges: (1) handling ambiguity resolution and multi-hop reasoning, and (2) ensuring accurate entity extraction from the claim.
107
+
108
+ Specialized LLM. To address the first challenge, the Pseudo Subgraph Generation module employs a lightweight model optimized for processing input claims. Following (Li et al., 2013; Miwa and Bansal, 2016), the model is trained to jointly extract entities and their corresponding relations from a claim $c$. Specifically, from $c$ the model constructs a pseudo subgraph $\mathcal{P}_c$ comprising triplets of the form head_entity||relation||tail_entity (illustrated in Figure 2). To ensure the generated subgraph can identify entities requiring ambiguity resolution and multi-hop reasoning, we employ a specialized annotation mechanism: when the claim references an entity indirectly, either without explicit naming or through relations to other entities, we denote it as unknown_i, with the index i keeping track of different entities. This
109
+
110
+ notation effectively signals the need for further disambiguation and reasoning within the KG in subsequent steps. Training details enabling this annotation strategy are presented in Appendix B.1.
111
+
112
+ Trie-Constrained Decoding. For the second challenge, we develop a constrained decoding algorithm with an Entity Trie inspired by (Cao et al., 2021). We construct a trie $\mathcal{T}$ from the KG's entity set $\mathcal{E} = \{e_1,e_2,\ldots \}$ . The specialized LLM generates entities using special tokens $\langle e\rangle$ and $\langle /e\rangle$ to mark entity boundaries. When $\langle e\rangle$ is generated, the decoding process restricts token selection based on $\mathcal{T}$ until $\langle /e\rangle$ is produced, ensuring all generated entities exist in the KG. Outside such boundaries, the model generates relations by sampling from an unconstrained original token distribution. This mechanism ensures entity reliability while preserving flexible relation extraction (Edge et al., 2024).
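+
+ A minimal sketch of how such a constraint could be applied at each decoding step is shown below; it reuses the EntityTrie sketch from Section 3, and the `<e>`/`</e>` token ids, the shape of `logits` (a 1-D tensor over the vocabulary), and the masking scheme are illustrative assumptions rather than the exact decoding hook used in ClaimPKG:
+
+ ```python
+ import torch
+
+ def constrain_entity_logits(logits, generated_ids, trie, e_open_id, e_close_id):
+     """Mask next-token logits so entity spans follow Entity-Trie paths."""
+     last_open = max((i for i, t in enumerate(generated_ids) if t == e_open_id), default=-1)
+     last_close = max((i for i, t in enumerate(generated_ids) if t == e_close_id), default=-1)
+     if last_open <= last_close:
+         return logits  # outside an <e>...</e> span: relations are generated freely
+
+     # Walk the Trie along the tokens emitted since the last <e>.
+     node = trie
+     for tok in generated_ids[last_open + 1:]:
+         node = node.children.get(tok)
+         if node is None:            # should not happen if earlier steps were constrained
+             return logits
+     allowed = list(node.children.keys())
+     if node.is_end:
+         allowed.append(e_close_id)  # a full entity was produced: allow </e>
+
+     mask = torch.full_like(logits, float("-inf"))
+     mask[allowed] = 0.0
+     return logits + mask
+ ```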
113
+
114
+ Multiple Representations. In order to capture different semantic views of a claim, we employ beam search along with the described sampling strategy, which is shown to improve the coverage of extracted triplets (Table 7), resulting in multiple representations $\mathbb{P}_c = \{\mathcal{P}_c^{(i)}\}_{i = 1}^N$ for an input claim.
115
+
116
+ In summary, each of the claim's graph representations satisfies the following properties: (1) it effectively captures the underlying graph structure of the claim, and (2) its entities correctly align with the KG's entities.
117
+
118
+ # 4.3 Subgraph Retrieval
119
+
120
+ The second component of ClaimPKG involves retrieving relevant KG subgraphs as evidence by using a dedicated algorithm that matches the pseudo-subgraphs $\mathcal{P}_c$'s from the previous step to actual subgraphs in the KG. We present a high-level description of the algorithm here, while its complete formulation is detailed in Appendix D. We categorize triplets in a $\mathcal{P}_c$ into: (1) Incomplete triplets, where either the head or tail entity is marked as unknown, and (2) Complete triplets, where both head and tail entities are explicitly identified.
121
+
122
+ Relation Scoring Function: We define a function $\operatorname{Sim}(r_1, r_2)$ to quantify the similarity between two relations, where a higher score indicates greater similarity. This function can be instantiated via various mechanisms (e.g., embedding similarity, re-ranking, fuzzy matching, etc.).
123
+
124
+ Incomplete Triplets Retrieval: Our goal is to identify evidence (actual triplets in the KG) to inform us about entities marked as unknown and their respective relations with explicit entities in the pseudo-subgraphs. First, for a $\mathcal{P}_c$, we group triplets sharing the same unknown entity $u$ into a group $g$ (e.g., in Figure 2, triplets associated with unknown_0 are grouped together). Subsequently, for each group $g$ characterized by the unknown entity $u$, we denote: $\mathcal{E}_u = \{e_{u1}, \ldots, e_{un}\}$ as the entities directly connected to $u$ in the pseudo-subgraph $\mathcal{P}_c$ and $\mathcal{R}_u = \{r_{u1}, \ldots, r_{un}\}$ as the relations from $u$ to the corresponding entities in $\mathcal{E}_u$. In $g$, for each explicit entity $e_{ui} \in \mathcal{E}_u$, we first retrieve a candidate set $C_{ui} = \{e_{i1}^c, \ldots, e_{im}^c\}$ containing all entities connected to $e_{ui}$ in the KG, then collect all candidate sets into $\mathcal{C}_u = \{C_{u1}, \ldots, C_{un}\}$.
125
+
126
+ To determine the best candidates for resolving $u$, we propose an Entity Scoring mechanism, which is based on two assumptions: (1) since $u$ has pseudo relations with all entities in $\mathcal{E}_u$, a candidate $e^c$ connected to more entities in $\mathcal{E}_u$ is more likely to resolve $u$; and (2) because all information related to $e_{ui}$ and $u$ is crucial for verifying the initial claim, each candidate set $C_{ui}$ must contribute to the final verification. Note that an entity can appear in multiple candidate sets, hence we compute a "global" score for each $e_{ij}^c$ in a candidate set $C_{ui}$:
127
+
128
+ $$
129
+ \operatorname {score} \left(e _ {i j} ^ {c}\right) = \sum_ {r \in R _ {i j} ^ {u}} \operatorname {Sim} \left(r _ {u i}, r\right) \tag {5}
130
+ $$
131
+
132
+ where $R_{ij}^{u} = \bigcup_{i = 1}^{\left|\mathcal{E}_{u}\right|}\{r(e_{ui},e_{ij}^{c})\mid e_{ij}^{c}\in C_{ui}\}$ is the set of all relations across the candidate sets appearing
133
+
134
+ in $\mathcal{C}_u$ that connect $e_{ij}^c$ with an $e_{ui}$ . Subsequently, to construct the set $T_{u}$ of most relevant triplets to a group $g$ , we employ a ranking function as follows:
135
+
136
+ $$
137
+ T _ {u} = \bigcup_ {i = 1} ^ {| \mathcal {C} _ {u} |} \underset {\text {triplet}, k _ {1}} {\arg \max } \left\{\pi_ {i j} \mid j \leq \left| C _ {u i} \right| \right\} \tag {6}
138
+ $$
139
+
140
+ where $\pi_{ij}$ is simply $\operatorname{score}(e_{ij}^{c})$ and (triplet, $k_{1}$) denotes the selection of the top $k_{1}$ triplets $(e_{ui}, r, e^{c})$ with the highest global scores from each set in $\mathcal{C}_{u}$.
141
+
142
+ While Equation 5 ensures that candidates appearing in multiple candidate sets with high similarity scores are prioritized, Equation 6 ensures that every entity in $\mathcal{E}_u$ has at least $k_{1}$ triplets; both design choices realize assumptions (1) and (2).
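+
+ The following is a simplified sketch of this group-wise procedure (Equations 5 and 6); `kg_neighbors` and `sim` are placeholder callables for KG neighborhood lookup and the Relation Scoring Function, and the code is illustrative rather than the exact retrieval algorithm of Appendix D:
+
+ ```python
+ from collections import defaultdict
+
+ def retrieve_incomplete_group(entities_u, relations_u, kg_neighbors, sim, k1=3):
+     """Resolve one unknown-entity group: score KG candidates and keep top-k1 per set.
+
+     entities_u:   explicit entities [e_u1, ..., e_un] linked to the unknown entity u.
+     relations_u:  pseudo relations [r_u1, ..., r_un] aligned with entities_u.
+     kg_neighbors: callable e -> list of (relation, neighbor) pairs in the KG.
+     sim:          relation scoring function Sim(r1, r2) -> float.
+     """
+     candidate_sets, linking = [], defaultdict(list)
+     for e_ui, r_ui in zip(entities_u, relations_u):
+         cands = kg_neighbors(e_ui)               # candidate set C_ui
+         candidate_sets.append((e_ui, cands))
+         for rel, cand in cands:
+             linking[cand].append((r_ui, rel))    # relations connecting cand to some e_ui
+
+     # Equation 5: the global score sums Sim over every relation linking the candidate.
+     score = {c: sum(sim(r_ui, rel) for r_ui, rel in links) for c, links in linking.items()}
+
+     # Equation 6: keep the top-k1 triplets from every candidate set.
+     triplets = set()
+     for e_ui, cands in candidate_sets:
+         ranked = sorted(cands, key=lambda rc: score[rc[1]], reverse=True)[:k1]
+         triplets.update((e_ui, rel, cand) for rel, cand in ranked)
+     return triplets
+ ```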
143
+
144
+ Complete Triplets Retrieval: For each triplet $(e_1, r, e_2)$ in a $\mathcal{P}_c$, we first find the top $k_2$ most similar relations between $e_1$ and $e_2$ in the KG $\mathcal{G}$ using the Sim function. If no direct connection exists (e.g., "103 Colmore Row" and "Vedat Tek" as shown in Figure 2), the triplet is decomposed into two: $(e_1, r, \text{unknown}_0)$ and $(\text{unknown}_0, r, e_2)$. These are then handled via Incomplete Triplets Retrieval.
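+
+ A matching sketch for complete triplets, under the same placeholder conventions (`kg_relations_between` and `sim` are assumed helpers):
+
+ ```python
+ def retrieve_complete_triplet(e1, r, e2, kg_relations_between, sim, k2=1):
+     """Return KG triplets for (e1, r, e2), or decomposed incomplete triplets."""
+     kg_rels = kg_relations_between(e1, e2)
+     if kg_rels:
+         top = sorted(kg_rels, key=lambda kr: sim(r, kr), reverse=True)[:k2]
+         return [(e1, kr, e2) for kr in top], []
+     # No direct connection: split around a placeholder entity and defer to
+     # Incomplete Triplets Retrieval.
+     return [], [(e1, r, "unknown_0"), ("unknown_0", r, e2)]
+ ```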
145
+
146
+ Subgraph Union: In summary, for an input claim $c$ , multiple pseudo-graphs are generated, containing complete and incomplete triplets. These triplets undergo processing to handle shared unknown entities and identified entities that are not connected in the KG $\mathcal{G}$ , and are used to query $\mathcal{G}$ for relevant triplets. All retrieved evidence triplets are aggregated into a final subgraph $S_{c}^{*}$ , serving as the evidence for the final component of ClaimPKG.
147
+
148
+ # 4.4 General Reasoning
149
+
150
+ The General Reasoning module concludes the ClaimPKG framework by determining claim veracity through reasoning over input claim $c$ and retrieved evidence subgraph $S_{c}^{*}$ . As complex tasks, especially claim verification, require deliberate chain-of-thought reasoning (Jiang et al., 2020; Wang et al., 2023), we use a general-purpose LLM to analyze $c$ and $S_{c}^{*}$ . Using carefully designed prompts (Figure 6), the module generates a natural language justification $j$ and verdict $v$ . Expanded from equation 3, this step is formalized as:
151
+
152
+ $$
153
+ p _ {\theta} (v, j | c, \mathcal {S} _ {c} ^ {*}) = p _ {\theta} (v | c, j, \mathcal {S} _ {c} ^ {*}) p _ {\theta} (j | c, \mathcal {S} _ {c} ^ {*}) \tag {7}
154
+ $$
155
+
156
+ where $p_{\theta}(j|c, \mathcal{S}_c^*)$ produces the justification and $p_{\theta}(v|c, j, \mathcal{S}_c^*)$ determines the veracity. This model-agnostic design enables integration with state-of-the-art LLMs (e.g., Llama, Qwen, and GPT-4) for zero-shot reasoning.
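+
+ As an illustration of how this step can be issued to an off-the-shelf chat model, a sketch of a prompt builder is given below; the wording is illustrative and not the exact template shown in Figure 6, and the example claim and triplet are for demonstration only:
+
+ ```python
+ def build_reasoning_prompt(claim, evidence_triplets):
+     """Format the claim and retrieved subgraph S_c* for a general-purpose LLM.
+     Asking for the justification before the verdict mirrors Equation 7."""
+     evidence = "\n".join(f"({h}, {r}, {t})" for h, r, t in evidence_triplets)
+     return (
+         "You are verifying a claim against knowledge-graph evidence.\n"
+         f"Claim: {claim}\n"
+         f"Evidence triplets:\n{evidence}\n"
+         "First write a short justification grounded in the triplets, "
+         "then output a final verdict: SUPPORTED or REFUTED."
+     )
+
+ prompt = build_reasoning_prompt(
+     "103 Colmore Row was designed by an architect born in Turkey.",
+     [("103 Colmore Row", "architect", "John Madin")],
+ )
+ ```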
157
+
158
+ # 5 Experiments
159
+
160
+ # 5.1 Experimental Setup
161
+
162
+ Datasets. Our primary benchmark is the FactKG dataset (Kim et al., 2023b), designed for claim verification over the DBpedia KG (Lehmann et al., 2015). It consists of 108K claims grounded in DBpedia and labelled as either SUPPORTED or REFUTED. The claims span five distinct categories: One-hop, Conjunction, Existence, Multi-hop, and Negation, each posing unique challenges. For evaluation, we randomly sample 2K claims from the test set, ensuring balanced representation across categories while keeping computation manageable. To assess the generalizability of ClaimPKG beyond structured benchmarks, we also evaluate on HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021), two widely used unstructured benchmarks requiring multi-hop reasoning and evidence aggregation from Wikipedia. Additional statistics of the datasets are provided in Appendix A.
163
+
164
+ Metrics. We use Accuracy as the primary metric along with Entity Correctness, which measures whether the entities extracted from a claim are valid in the KG. Additionally, for the FactKG dev set, we report Claim Structure Coverage, which quantifies the proportion of triplets from the original claim's graph structure successfully reconstructed by our pipeline. We refer readers to Appendix C for more details.
165
+
166
+ Annotation. For brevity, we use Llama-3B, Llama-70B, and Qwen-72B to refer to Llama-3.2-3B, Llama-3.3-70B, and Qwen2.5-72B respectively. The * symbol denotes models fine-tuned for pseudo subgraph generation. Full model names are used when necessary.
167
+
168
+ Baselines. We compare ClaimPKG with recent KG-based claim verification methods: Zero-shot CoT (Wei et al., 2022) prompts LLMs to generate rationales and verdicts without accessing the KG; GEAR (Zhou et al., 2019), originally designed for text-based verification, employs graph-based evidence aggregation with multiple aggregators to capture multi-evidence dependencies, using BERT for language representation and adapted for KG settings following (Kim et al., 2023b); and KG-GPT (Kim et al., 2023a), a pioneer work that combines LLMs and KGs through a structured pipeline of Sentence Segmentation, Graph Retrieval, and Logic Inference. Notably, unlike baselines which receive pre-identified claim entities along with the claim as the input, our method processes entities in an end-to-end pipeline.
169
+
170
+ Implementation. For a comprehensive evaluation, we evaluate baselines on three model series: Llama 3 (Meta, 2024), Qwen 2.5 (Qwen, 2024), and GPT4o-mini (OpenAI, 2024). In ClaimPKG, we configure the Specialized LLM to generate multiple pseudo-subgraphs using a beam size of 5. For the Subgraph Retrieval algorithm, we adopt an embedding-based approach leveraging BGE-Large-EN-v1.5 (Xiao et al., 2023) to compute dot-product similarity for the Relation Scoring Function; we set the primary hyperparameters to $k_{1} = 3$ and $k_{2} = 1$. Detailed justification is provided in Appendix C.
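+
+ As a concrete but unofficial instantiation of this Relation Scoring Function, one could encode relation names with the public BAAI/bge-large-en-v1.5 checkpoint via the sentence-transformers package and score pairs by dot product (with normalized embeddings this equals cosine similarity):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ encoder = SentenceTransformer("BAAI/bge-large-en-v1.5")
+
+ def sim(r1: str, r2: str) -> float:
+     """Relation Scoring Function Sim(r1, r2) via dot product of embeddings."""
+     emb = encoder.encode([r1, r2], normalize_embeddings=True)
+     return float(emb[0] @ emb[1])
+
+ print(sim("associated band", "associated musical artist"))
+ ```
+
+ In practice, the KG relations would be embedded once and cached, so only the query relation needs encoding at inference time (see also Appendix C.5).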
171
+
172
+ # 5.2 Results and Analysis
173
+
174
+ We present the main experimental results in this section and additional findings in Appendix C.
175
+
176
+ (RQ1): How Does ClaimPKG Perform Against the Baselines? Table 1 compares the accuracy $(\%)$ of ClaimPKG with baselines across claim categories of the FactKG. Key observations include:
177
+
178
+ (1) Direct inference using LLMs with CoT reasoning significantly underperforms compared to evidence-based methods, with the best average score reaching only $69.07\%$ , highlighting that despite LLM advancements, evidence retrieval remains crucial. (2) KG-GPT integrates knowledge graphs with LLMs but its best average score achieves only $74.70\%$ (Llama-70B Few-shot), falling short of GEAR's fine-tuned model at $76.65\%$ . This suggests that while LLMs excel at language tasks, they require specific adaptation for KG processing. (3) ClaimPKG, with the strongest configuration $(\text{Llama}-3\text{B}^{*} + \text{Llama}-70\text{B})$ and constrained by Entity-Trie for valid KG entity generation, achieves a 12-point improvement over KG-GPT and 9 points over GEAR. It particularly excels in multi-hop reasoning, demonstrating strong performance across Llama-3 and Qwen-2.5 backbones through effective structured evidence retrieval and KG integration.
179
+
180
+ (RQ2): How Do Different Components Affect Performance? To evaluate the impact of each component in ClaimPKG, we conduct ablation studies of the following components, maintaining Llama-3B* as the Specialized LLM and Llama-70B as the General LLM.
181
+
182
+ Entity-Trie Constraint. We remove the Entity-Trie constraint to assess its necessity. Compared to the full setup, this reduces the entity extraction correctness from $100\%$ to $87.5\%$ , and overall performance from $84.64\%$ to $82.72\%$ .
183
+
184
+ <table><tr><td>Method</td><td>Entity Correctness</td><td>Negation</td><td>Existence</td><td>Conjunction</td><td>Multi-hop</td><td>One-hop</td><td>Average</td></tr><tr><td colspan="8">Direct Inference With CoT - w/o Evidence Retrieval</td></tr><tr><td>GPT-4o-mini (Zero-shot CoT)</td><td>-</td><td>61.91</td><td>59.45</td><td>69.51</td><td>60.87</td><td>70.83</td><td>64.51</td></tr><tr><td>Qwen-72B (Zero-shot CoT)</td><td>-</td><td>62.91</td><td>62.20</td><td>74.04</td><td>62.32</td><td>75.98</td><td>67.49</td></tr><tr><td>Llama-70B (Zero-shot CoT)</td><td>-</td><td>64.34</td><td>64.62</td><td>72.47</td><td>65.58</td><td>78.32</td><td>69.07</td></tr><tr><td colspan="8">Baseline Comparison - w/ Evidence Retrieval</td></tr><tr><td>GEAR (Finetuned BERT)</td><td>Known in Prior</td><td>79.72</td><td>79.19</td><td>78.63</td><td>68.39</td><td>77.34</td><td>76.65</td></tr><tr><td>KG-GPT (Llama-70B Few-shot)</td><td>Known in Prior</td><td>70.91</td><td>65.06</td><td>86.64</td><td>58.87</td><td>92.02</td><td>74.70</td></tr><tr><td>KG-GPT (Qwen-72B Few-shot)</td><td>Known in Prior</td><td>67.31</td><td>60.08</td><td>89.14</td><td>58.19</td><td>90.87</td><td>73.12</td></tr><tr><td>ClaimPKG (Llama-3B* + GPT-4o-mini)</td><td>100.0%</td><td>85.10</td><td>72.64</td><td>84.23</td><td>72.26</td><td>91.01</td><td>81.05</td></tr><tr><td>ClaimPKG (Llama-3B* + Qwen-72B)</td><td>100.0%</td><td>85.27</td><td>86.90</td><td>84.02</td><td>78.71</td><td>91.20</td><td>85.22</td></tr><tr><td>ClaimPKG (Llama-3B* + Llama-70B)</td><td>100.0%</td><td>84.58</td><td>84.20</td><td>85.68</td><td>78.49</td><td>90.26</td><td>84.64</td></tr><tr><td colspan="8">Ablation Results (Llama-3B* + Llama-70B) - w/ Evidence Retrieval</td></tr><tr><td>ClaimPKG (w/o Trie Constraint)</td><td>87.50%</td><td>82.50</td><td>83.24</td><td>83.82</td><td>76.13</td><td>88.01</td><td>82.74</td></tr><tr><td>ClaimPKG (Few-shot Specialized LLM)</td><td>86.52%</td><td>77.99</td><td>81.89</td><td>77.80</td><td>68.82</td><td>81.65</td><td>77.63</td></tr><tr><td>ClaimPKG (w/o Incomplete Retrieval)</td><td>100.0%</td><td>68.80</td><td>51.25</td><td>67.84</td><td>61.29</td><td>76.22</td><td>65.08</td></tr></table>
185
+
186
+ Specialized LLM. When replacing the specialized LLM with few-shot prompting strategy using Llama-70B, a much larger general-purpose LLM, entity correctness further declines to $86.52\%$ , leading overall performance to drop to $77.63\%$ . These results demonstrate that even with examples, general-purpose LLMs struggle to produce outputs with desired graph structure correctly, emphasizing the importance of the specialized LLM in generating pseudo subgraphs.
187
+
188
+ Incomplete Retrieval. Removing the Incomplete Triplet Retrieval function, which forces the retrieval algorithm to only query evidence using complete triplets, causes a significant average performance drop of nearly $20\%$ compared to the full setup, showing the complete graph structure of input claims is essential for optimal performance.
189
+
190
+ (RQ3): Robustness and Generalization of ClaimPKG? To assess ClaimPKG's robustness, we vary model backbones, examine zero-shot generalizability, analyze the effect of training data size, and conduct error analysis.
191
+
192
+ Model Backbones. We evaluate different LLM architectures for both Specialized and General LLMs (Table 2). For General LLMs, we test various model sizes (7B to 70B parameters) using retrieved KG triplets as input. For Specialized LLMs, we experiment with different small fine-tuned backbones and few-shot prompt templates (Figure 7), while keeping Llama-3.3-70B as the fixed General LLM.
193
+
194
+ Results in Table 2 show larger General LLMs (GPT-4o-Mini, Llama-3.3-70B) outperform smaller ones (Qwen-2.5-7B, Llama-3.1-8B) by up to 8 points, highlighting model capacity's role in aggregating subgraph evidence.
195
+
196
+ Table 1: Performance (accuracy %) comparison of ClaimPKG with baselines on 5 claim categories of FactKG dataset and their average scores.
197
+
198
+ <table><tr><td>Component</td><td>Strategy</td><td>Backbone</td><td>Average</td></tr><tr><td rowspan="5">General LLM</td><td rowspan="5">Zero-shot</td><td>Llama 3.1 - 8B</td><td>77.08</td></tr><tr><td>Llama 3.3 - 70B</td><td>84.64</td></tr><tr><td>GPT4o - Mini</td><td>81.05</td></tr><tr><td>Qwen 2.5 - 7B</td><td>80.22</td></tr><tr><td>Qwen 2.5 - 72B</td><td>85.22</td></tr><tr><td rowspan="6">Specialized LLM</td><td rowspan="4">Finetune</td><td>Llama 3 - 3B</td><td>84.64</td></tr><tr><td>Qwen 2.5 - 3B</td><td>82.32</td></tr><tr><td>Llama 3 - 1B</td><td>83.91</td></tr><tr><td>Qwen 2.5 - 1.5B</td><td>82.20</td></tr><tr><td rowspan="2">Few-shot</td><td>Llama 3.3 - 70B</td><td>77.63</td></tr><tr><td>Qwen 2.5 - 72B</td><td>77.10</td></tr></table>
199
+
200
+ Table 2: Performance on Different Backbones.
201
+
202
+ Notably, a fine-tuned 1B Specialized LLM outperforms the general 70B counterpart, demonstrating fine-tuning's effectiveness in processing graph data. This supports the need to combine powerful General LLMs with adapted Specialized LLMs for optimal performance.
203
+
204
+ Zero-shot Generalizability.
205
+
206
+ <table><tr><td>Benchmark</td><td>Llama 3</td><td>Qwen 2.5</td></tr><tr><td>HoVer (Zero-shot CoT)</td><td>66.6</td><td>65.3</td></tr><tr><td>HoVer (Support-Predicted)</td><td>70.7 (14.3%)</td><td>69.4 (15.7%)</td></tr><tr><td>FEVEROUS (Zero-shot CoT)</td><td>81.1</td><td>80.9</td></tr><tr><td>FEVEROUS (Support-Predicted)</td><td>83.8 (12.5%)</td><td>83.6 (12.9%)</td></tr></table>
207
+
208
+ Table 3: Zero-shot transfer performance on unstructured benchmarks, evaluated on Support-Predicted samples, with Support-Predicted rates shown in parentheses.
209
+
210
+ To assess ClaimPKG's zero-shot generalizability, we test transfer to the HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021) datasets. Using DBpedia (Lehmann et al., 2015) as the knowledge
211
+
212
+ source, we evaluate with trained Specialized LLMs (Llama-3.2-3B and Qwen-2.5-3B) while keeping Llama-3.3-70B as the General LLM. Since external datasets may contain claims outside DBpedia's coverage, making it difficult to distinguish between knowledge gaps and actual verification failures of ClaimPKG for Refuted cases, we analyze only samples predicted as Supported. As shown in Table 3, ClaimPKG predicts Supported for only $12.5\% - 15.7\%$ of samples, indicating limited knowledge overlap with DBpedia. However, on these samples, ClaimPKG outperforms Llama-3.3-70B's zero-shot CoT inference by $4\%$ accuracy on both datasets, demonstrating robust transfer to reasoning patterns in unseen data.
213
+
214
+ Training Data Size.
215
+
216
+ ![](images/1f8ce5c2eb5971e4f38babdf748e06ca7c28c9b4b01a81101d541f6b5a409692.jpg)
217
+ Figure 3: Varying Specialized LLM's training data.
218
+
219
+ To assess the impact of training data on the Specialized LLM, we vary the number of training samples from 0.1K to 10K, using two configurations: Llama-3.2-3B and Qwen-2.5-3B as the Specialized LLM, while keeping Llama-3.3-70B as the General LLM. We evaluate performance based on two metrics: average accuracy on the test set and claim structure coverage on the dev set. As shown in Figure 3, the Specialized LLMs achieve satisfactory accuracy (Llama-3.2-3B: $79.35\%$, Qwen-2.5-3B: $77.62\%$) with just 100 training samples, demonstrating efficiency and low training costs for KG adaptation. While both structure coverage and accuracy improve up to 5K samples, coverage plateaus thereafter and accuracy begins to decline, indicating overfitting, where excessive training data reduces generalizability.
220
+
221
+ # 5.3 Interpretability and Error Analysis
222
+
223
+ ClaimPKG can improve claim verification performance while enhancing interpretability. Representative outputs of ClaimPKG (Figure 12, Appendix E) illustrate its ability to capture claim structure and provide well-grounded justifications. Notably,
224
+
225
+ when refuting claims, it explicitly presents contradicting evidence, ensuring transparent reasoning. To further assess reliability, we conducted a human analysis of 200 incorrect predictions from FactKG, categorizing errors (Figure 13, Appendix E) into: Claim Structure Errors, which fail to capture the underlying claim structure; Retrieval Errors, which fail to retrieve the evidence necessary for claim verification; and Reasoning Errors, incorrect logical inferences by the general LLM when judging the verdict.
226
+
227
+ Specifically, there are 0 (0%) Claim Structure Errors, 57 (28.5%) Retrieval Errors, and 143 (71.5%) Reasoning Errors. These results suggest that, given multiple chances (beams) to generate pseudo-subgraphs, the Specialized LLM can effectively capture the structural representation of claims. However, the general-purpose LLM, despite its strong reasoning capabilities, still struggles with certain complex reasoning scenarios that require specific handling. Moreover, the retrieval errors highlight cases where additional implicit reasoning is necessary; we hypothesize that direct subgraph retrieval failed to provide a comprehensive picture of the required evidence in these cases. These findings point to future improvements focused on enhancing retrieval inference and refining reasoning for complex claim verification over structured knowledge.
228
+
229
+ # 5.4 Scalability of ClaimPKG
230
+
231
+ ClaimPKG maintains scalability and adaptability within dynamic knowledge environments. After training the Specialized LLM on a domain (e.g., Wikipedia), the system remains decoupled from the underlying Knowledge Graph (KG). Only the Entity-Trie component interfaces directly with the data. Consequently, when the KG undergoes updates, ClaimPKG requires merely an update of the corresponding entities within the Entity-Trie, ensuring an efficient adaptation process.
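+
+ A small sketch of this update path, reusing the EntityTrie sketch from Section 3 (the tokenizer argument and bookkeeping set are assumptions of the sketch, not a prescribed interface):
+
+ ```python
+ def refresh_entity_trie(trie, tokenizer, added_entities, indexed_entities):
+     """Insert only the entity names that are new since the last KG snapshot."""
+     for name in added_entities:
+         if name in indexed_entities:
+             continue
+         trie.insert(tokenizer.encode(name, add_special_tokens=False))
+         indexed_entities.add(name)
+     return trie
+ ```
+
+ Entity removals would still require rebuilding the affected Trie paths, but no retraining of the Specialized LLM is needed in either case.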
232
+
233
+ # 6 Conclusion
234
+
235
+ In this work, we present ClaimPKG, a novel claim verification framework combining the structure of Knowledge Graphs with the adaptability and reasoning of Large Language Models. Through Pseudo-subgraph Generation, Subgraph Retrieval, and General Reasoning, it addresses the limitations of prior approaches while ensuring transparency. Extensive experiments show state-of-the-art performance and generalizability across datasets, making ClaimPKG a step toward reliable and explainable misinformation detection.
236
+
237
+ # Limitations
238
+
239
+ Despite their advanced reasoning capabilities, LLMs are prone to errors and biases, necessitating careful deployment, particularly in fact-checking systems where incorrect or biased outputs could contribute to misinformation. Addressing these biases remains an ongoing research challenge, requiring effective mechanisms for detection, control, and mitigation. Additionally, real-world claim verification often requires implicit reasoning, where further knowledge related to a problem must be inferred; improving the pipeline components to handle this type of information is therefore important. Another limitation is the performance decline observed when the Specialized LLM is trained on an excessive number of examples, highlighting the need for future research into regularization strategies. Further improvements should also focus on the general reasoning module to infer missing knowledge more effectively and to better handle intricate and nuanced claim verification cases over structured knowledge.
240
+
241
+ # References
242
+
243
+ Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: fact extraction and verification over unstructured and structured information. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
244
+ Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
245
+ Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. 2024. From local to global: A graph RAG approach to query-focused summarization. CoRR, abs/2404.16130.
246
+ Max Glockner, Yufang Hou, and Iryna Gurevych. 2022a. Missing counter-evidence renders NLP fact-checking unrealistic for misinformation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5916-5936. Association for Computational Linguistics.
247
+ Max Glockner, Yufang Hou, and Iryna Gurevych. 2022b. Missing counter-evidence renders NLP fact-checking
248
+
249
+ unrealistic for misinformation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5916-5936. Association for Computational Linguistics.
250
+ Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320-4333, Online. Association for Computational Linguistics.
251
+ Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9237-9251, Singapore. Association for Computational Linguistics.
252
+ Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Kumar Singh, and Mohit Bansal. 2020. Hover: A dataset for many-hop fact extraction and claim verification. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 3441-3460. Association for Computational Linguistics.
253
+ Jiho Kim, Yeonsu Kwon, Yohan Jo, and Edward Choi. 2023a. KG-GPT: A general framework for reasoning on knowledge graphs using large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 9410-9421. Association for Computational Linguistics.
254
+ Jiho Kim, Sungjin Park, Yeonsu Kwon, Yohan Jo, James Thorne, and Edward Choi. 2023b. Factkg: Fact verification via reasoning on knowledge graphs. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 16190-16206. Association for Computational Linguistics.
255
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.
256
+ Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Soren Auer, and Christian Bizer. 2015. Dbpedia - A large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195.
257
+ Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features.
258
+
259
+ In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 73-82. The Association for Computer Linguistics.
260
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
261
+ Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, and Shirui Pan. 2024. Reasoning on graphs: Faithful and interpretable large language model reasoning. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.
262
+ Meta. 2024. Build the future of AI with Meta Llama 3.
263
+ Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguistics.
264
+ OpenAI. 2024. Hello GPT-4o.
265
+ Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. 2023. Fact-checking complex claims with program-guided reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 6981-7004. Association for Computational Linguistics.
266
+ Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettle-moyer, and Hannaneh Hajishirzi. 2022. FaVIQ: FAct verification from information-seeking questions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5154-5166, Dublin, Ireland. Association for Computational Linguistics.
267
+ Qwen. 2024. Qwen2.5: A party of foundation models.
268
+ Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get your vitamin C! robust fact verification with contrastive evidence. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 624-643, Online. Association for Computational Linguistics.
269
+ Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M. Ni, Heung-Yeung Shum, and Jian Guo. 2024. Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. Open-Review.net.
270
+
271
+ James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3346-3359. Association for Computational Linguistics.
272
+ James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 809-819. Association for Computational Linguistics.
273
+ Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567-7578, Online. Association for Computational Linguistics.
274
+ Haoran Wang and Kai Shu. 2023. Explainable claim verification via knowledge-grounded reasoning with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 6288-6304. Association for Computational Linguistics.
275
+ Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
276
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
277
+ Wikipedia. 2025a. Levenshtein distance — Wikipedia, The Free Encyclopedia. Accessed: 14-February-2025.
278
+ Wikipedia. 2025b. Trie — Wikipedia, The Free Encyclopedia. [Online; accessed 9-February-2025].
279
+ Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-pack: Packaged resources to advance general chinese embedding. Preprint, arXiv:2309.07597.
280
+ Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2020. Graph
281
+
282
+ neural networks: A review of methods and applications. AI Open, 1:57-81.
283
+
284
+ Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: graph-based evidence aggregating and reasoning for fact verification. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 892-901. Association for Computational Linguistics.
285
+
286
+ # A Benchmark Datasets
287
+
288
+ <table><tr><td>Dataset</td><td>Split</td><td>Support</td><td>Refute</td><td>NEI</td><td>Total</td></tr><tr><td rowspan="4">FactKG</td><td>Train</td><td>42723</td><td>43644</td><td>-</td><td>86367</td></tr><tr><td>Dev</td><td>6426</td><td>6840</td><td>-</td><td>13266</td></tr><tr><td>Test</td><td>4398</td><td>4643</td><td>-</td><td>9041</td></tr><tr><td>Total</td><td>53547</td><td>55127</td><td>-</td><td>108674</td></tr><tr><td rowspan="4">HoVer</td><td>Train</td><td>11023</td><td>7148</td><td>-</td><td>18171</td></tr><tr><td>Dev</td><td>2000</td><td>2000</td><td>-</td><td>4000</td></tr><tr><td>Test</td><td>2000</td><td>2000</td><td>-</td><td>4000</td></tr><tr><td>Total</td><td>15023</td><td>11148</td><td>-</td><td>26171</td></tr><tr><td rowspan="4">FEVEROUS</td><td>Train</td><td>41835</td><td>27215</td><td>2241</td><td>71291</td></tr><tr><td>Dev</td><td>3908</td><td>3481</td><td>501</td><td>7890</td></tr><tr><td>Test</td><td>3372</td><td>2973</td><td>1500</td><td>7845</td></tr><tr><td>Total</td><td>49115</td><td>33669</td><td>4242</td><td>87026</td></tr></table>
289
+
290
+ Table 4: Basic statistics of the HoVer, FEVEROUS, and FactKG datasets.
291
+
292
+ <table><tr><td rowspan="2">Type</td><td rowspan="2">Written</td><td colspan="2">Colloquial</td><td rowspan="2">Total</td></tr><tr><td>Model</td><td>Presup</td></tr><tr><td>One-hop</td><td>2,016</td><td>15,934</td><td>1,580</td><td>19,530</td></tr><tr><td>Conjunction</td><td>20,587</td><td>15,908</td><td>602</td><td>37,097</td></tr><tr><td>Existence</td><td>280</td><td>4,060</td><td>4,832</td><td>9,172</td></tr><tr><td>Multi-hop</td><td>10,239</td><td>16,420</td><td>603</td><td>27,262</td></tr><tr><td>Negation</td><td>1,340</td><td>12,466</td><td>1,807</td><td>15,613</td></tr><tr><td>Total</td><td>34,462</td><td>64,788</td><td>9,424</td><td>108,674</td></tr></table>
293
+
294
+ Table 5: Dataset statistics of FACTKG for claim types.
295
+
296
+ FEVEROUS. (Aly et al., 2021) FEVEROUS is a fact verification dataset comprising 87,026 verified claims sourced from Wikipedia (Table 4). Each claim is accompanied by evidence in the form of sentences and/or cells from tables, along with a label indicating whether the evidence supports, refutes, or does not provide enough information to verify the claim. The dataset includes metadata like annotator actions and challenge types, designed to minimize biases. It is used for tasks that involve verifying claims against both unstructured (textual) and structured (tabular) information.
297
+
298
+ HoVer. (Jiang et al., 2020) HoVer is a dataset containing 26,171 samples, designed for open-domain,
299
+
300
+ multi-hop fact extraction and claim verification, using the Wikipedia corpus. Claims in HoVer are adapted from question-answer pairs and require the extraction of facts from multiple (up to four) Wikipedia articles to determine if the claim is supported or not supported. The complexity of HoVer, particularly in the 3/4-hop claims, is further amplified because these claims are often expressed across multiple sentences, which introduces challenges related to long-range dependencies, such as accurately resolving coreferences.
301
+
302
+ FactKG. (Kim et al., 2023b) FactKG is a challenging fact verification dataset comprised of 108,674 samples, designed to rigorously test models' abilities to reason over structured knowledge represented in a knowledge graph. Its difficulty arises from a combination of factors. First, it demands proficiency in five distinct reasoning types: one-hop (single relationship), conjunction (combining multiple relationships), existence (verifying entity/relationship presence), multi-hop (traversing multiple relationships), and, crucially, negation (reasoning about the absence of relationships). Second, FactKG incorporates linguistic diversity, encompassing both formal, written-style claims and more challenging colloquial expressions, requiring models to handle paraphrasing, idiomatic language, and less direct wording. Third, instead of unstructured text, FactKG utilizes the DBpedia knowledge graph (derived from Wikipedia), necessitating that models correctly link entities and relations mentioned in the claim to the graph's nodes and edges, and perform complex path-based reasoning, especially for multi-hop claims. The addition of a weakly semantic knowledge source, and cross-style evaluation to assess generalizability, further contributes to the difficulty of this dataset. These features collectively make FactKG significantly more complex than datasets relying solely on unstructured text for verification. Detailed statistics of this dataset can be found in table 5. Readers can refer to table 4 for the overall basic statistics of all employed datasets for ClaimPKG.
303
+
304
+ # B Implementation Details
305
+
306
+ We conducted all experiments on a DGX server with 8 NVIDIA A100 GPUs. The General LLM is hosted within the vLLM framework (Kwon et al., 2023). Below, we detail the training process of the Specialized LLM.
307
+
308
+ # B.1 Specialized LLM Training Data Annotation
309
+
310
+ To tailor the specialized model for improved comprehension and processing of KG-specific data, we construct a dedicated dataset for training, leveraging the provided version of FactKG (Kim et al., 2023b) (illustrated in Figure 4). The annotation process consists of the following steps:
311
+
312
+ Claim: A musical artist, whose music is Post-metal, played with the band Twilight and performs for Mamiffer.
313
+
314
+ Entities: [Mamiffer, Post-metal, Twilight_(band)] Evidence:
315
+
316
+ - Twilight_(band), (associatedMusicalArtist, associatedBand), Mamiffer)
317
+ - Twilight_(band), (associatedMusicalArtist, genre), Post-metal
318
+
319
+ Figure 4: Provided data of FactKG
320
+
321
+ Preprocessing: All entities and relations from FactKG, including the train, development, and test datasets, as well as the DBPedia KG, are normalized by splitting concatenated words to ensure consistency.
322
+
323
+ Graph Construction: Using the provided evidence information from FactKG, we observe that while evidence may not explicitly exist in the graph, it accurately captures the underlying structure of the claim. Accordingly, for triplets with relation paths exceeding one hop, we decompose them into multiple triplets while introducing a placeholder entity, denoted as "unknown_[index]", to preserve structural integrity. This placeholder represents an ambiguous or missing entity that requires identification. For instance, the triplet: "Twilight_(band), (~associatedMusicalArtist, associatedBand), Mamiffer" is transformed into the following triplets: "Twilight_(band), associatedBand, unknown_1" and "unknown_1", associatedMusicalArtist, Mamiffer". Additionally, entities present in the Entities set but absent from the graph are also introduced as unknown_[index]. To further enhance graph completeness, GPT-4 is employed to verify whether entities from the Entities set are explicitly mentioned in the claim. This ensures that relevant entities are either linked to existing nodes or added as placeholders. The automatic entity verification process is conducted using a prompt template, as shown in Figure 8. Additionally, the symbol "\~"
324
+
325
+ is retained to denote inverse relations. A random shuffle of the constructed triplets, preserving the sequential order of the "unknown" entities, is applied to improve the robustness of the model being trained.
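+
+ A minimal sketch of this decomposition rule (ignoring the extra handling of inverse "~" relations and of entities absent from the graph) is shown below; the function name and signature are illustrative:
+
+ ```python
+ def decompose_evidence(head, relation_path, tail, start_index=0):
+     """Split a multi-hop evidence tuple into 1-hop pseudo triplets,
+     introducing unknown_[index] placeholders for intermediate entities."""
+     placeholders = [f"unknown_{start_index + i}" for i in range(len(relation_path) - 1)]
+     nodes = [head] + placeholders + [tail]
+     return [(nodes[i], rel, nodes[i + 1]) for i, rel in enumerate(relation_path)]
+
+ print(decompose_evidence("Twilight_(band)",
+                          ["associatedBand", "associatedMusicalArtist"],
+                          "Mamiffer", start_index=1))
+ # [('Twilight_(band)', 'associatedBand', 'unknown_1'),
+ #  ('unknown_1', 'associatedMusicalArtist', 'Mamiffer')]
+ ```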
326
+
327
+ Generated Pseudo-Subgraph: The transformed claim results in the pseudo-subgraph illustrated in Figure 5.
328
+
329
+ Pseudo Subgraph Label:
330
+
331
+ - Twilight (band), associated musical artist, unknown_0
332
+ - unknown_0, associated band, Mamiffer
333
+ - unknown_0, genre, Post-metal
334
+
335
+ Figure 5: Pseudo-Subgraph label as the output of the data annotation process.
336
+
337
+ # B.2 Training and Hyperparameter Settings of the Specialized LLM
338
+
339
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td rowspan="2">Backbone</td><td>Llama-3-Base</td></tr><tr><td>Qwen-2.5-Base</td></tr><tr><td>Learning Rate</td><td>1e-5</td></tr><tr><td>Training Epoch</td><td>1</td></tr><tr><td>Training Steps</td><td>128</td></tr><tr><td>Optimizer</td><td>AdamW</td></tr></table>
340
+
341
+ Table 6: Hyperparameters of the Specialized LLM in ClaimPKG.
342
+
343
+ The training configurations for the Specialized LLM are summarized in Table 6. The model training is based on the Base versions of Llama 3 (Llama-3.2-1B, Llama-3.2-3B, Llama-3.1-8B) and Qwen 2.5 (Qwen-2.5-1.5B, Qwen-2.5-3B, Qwen-2.5-7B). These base models are selected to preserve their inherent linguistic capabilities while facilitating optimal adaptation to domain-specific tasks during fine-tuning. The training process employs the annotated dataset described in Section B.1 and is conducted over a single epoch using the AdamW (Loshchilov and Hutter, 2019) optimizer. This strategy enables the generation of multiple variants of the Specialized LLM, ensuring task-specific adaptation while maintaining robust generalization across diverse linguistic structures.
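+
+ For reference, a minimal causal-LM fine-tuning step consistent with Table 6 might look as follows; the model identifier, the prompt format, and the absence of prompt-token masking are simplifying assumptions of this sketch, not the exact training pipeline:
+
+ ```python
+ import torch
+ from torch.optim import AdamW
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")  # hypothetical choice
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
+ optimizer = AdamW(model.parameters(), lr=1e-5)
+
+ def training_step(claim, pseudo_subgraph_label):
+     """One step on a (claim -> pseudo-subgraph) pair from the annotated data."""
+     text = f"Claim: {claim}\nPseudo subgraph:\n{pseudo_subgraph_label}"
+     batch = tokenizer(text, return_tensors="pt")
+     loss = model(**batch, labels=batch["input_ids"]).loss
+     loss.backward()
+     optimizer.step()
+     optimizer.zero_grad()
+     return loss.item()
+ ```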
344
+
345
+ # C Additional Experimental Results
346
+
347
+ In this section, we present additional experimental results through a systematic analysis on the FactKG
348
+
349
+ development set with 2000 randomly sampled data points across claim categories. First, we provide a more detailed explanation of the evaluation metrics used. Second, we examine the performance of the specialized LLM by varying the beam size and backbone model size. Third, we analyze the Subgraph Retrieval by adjusting the hyperparameters $k_{1}$ and $k_{2}$, as explained in Section 4.3, which influence the diversity and correctness of the retrieved subgraphs.
350
+
351
+ # C.1 Metrics
352
+
353
+ The specialized LLM's generation of pseudo-subgraphs plays a crucial role in ClaimPKG's performance. We evaluated the specialized LLM using four metrics: claim structure coverage (coverage), entity correctness (correctness), unique triplet count, and average end-to-end accuracy. While the final metric is straightforward, the first three metrics can be described as follows:
354
+
355
+ (1) Structure coverage quantifies the alignment between the LLM-generated pseudo-graph and the reference claim graph in the FactKG dataset. Specifically, for a generated graph $P$ and reference graph $Q$ , coverage is computed as:
356
+
357
+ $$
358
+ \operatorname {coverage} (P, Q) = \frac {\# (P.\text {triplets} \cap Q.\text {triplets})}{\# (Q.\text {triplets})}
359
+ $$
360
+
361
+ (2) Entity correctness quantifies the correctness of a claim's extracted entities, i.e., whether these entities exist in the KG. Specifically, for a generated graph $P$ and a knowledge graph $\mathcal{G}$ , correctness is computed as:
362
+
363
+ $$
364
+ \operatorname {correctness} (P, \mathcal {G}) = \frac {\# (P.\text {entities} \cap \mathcal {G}.\text {entities})}{\# (P.\text {entities})}
365
+ $$
366
+
367
+ (3) Unique triplet count measures the diversity of generated graph structures, with higher counts potentially enabling better subgraph retrieval through increased coverage of possible relationships.
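+
+ For clarity, the three metrics can be computed directly from sets of triplets and entities; the following sketch assumes triplets are hashable tuples and entities are strings:
+
+ ```python
+ def coverage(pred_triplets, ref_triplets):
+     """Claim structure coverage: fraction of reference triplets reconstructed."""
+     ref = set(ref_triplets)
+     return len(set(pred_triplets) & ref) / len(ref)
+
+ def entity_correctness(pred_entities, kg_entities):
+     """Fraction of predicted entities that exist in the KG."""
+     pred = set(pred_entities)
+     return len(pred & set(kg_entities)) / len(pred)
+
+ def unique_triplet_count(pseudo_subgraphs):
+     """Number of distinct triplets across all generated pseudo subgraphs."""
+     return len({t for graph in pseudo_subgraphs for t in graph})
+ ```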
368
+
369
+ # C.2 Different Beam Sizes of the Specialized LLM
370
+
371
+ To evaluate the LLM's decoding strategy across different beam sizes, we utilized three metrics: average accuracy, structure coverage, and unique triplet count. Table 7 details the impact of beam size on these metrics on the FactKG dev set. Both Llama and Qwen models demonstrate consistent improvements in average performance and claim structure coverage
372
+
373
+ <table><tr><td>Backbone</td><td>Beam Size</td><td>Average Accuracy</td><td>Structure Coverage</td><td>Unique Triplets</td></tr><tr><td rowspan="4">Llama-3B</td><td>Beam 1</td><td>79.78</td><td>76.51</td><td>4.48</td></tr><tr><td>Beam 3</td><td>81.80</td><td>81.27</td><td>6.44</td></tr><tr><td>Beam 5</td><td>82.04</td><td>83.02</td><td>8.39</td></tr><tr><td>Beam 10</td><td>82.33</td><td>84.61</td><td>13.83</td></tr><tr><td rowspan="4">Qwen-3B</td><td>Beam 1</td><td>78.84</td><td>77.95</td><td>3.82</td></tr><tr><td>Beam 3</td><td>80.76</td><td>82.66</td><td>5.16</td></tr><tr><td>Beam 5</td><td>81.41</td><td>83.58</td><td>6.73</td></tr><tr><td>Beam 10</td><td>82.19</td><td>84.62</td><td>9.58</td></tr></table>
374
+
375
+ Table 7: Performance metrics for different models on FactKG dev set.
376
+
377
+ <table><tr><td>Beam Size</td><td>Gen Graph (s)</td><td>Retrieve (s)</td><td>Reason (s)</td></tr><tr><td>beam 1</td><td>1.02</td><td>0.24</td><td>2.19</td></tr><tr><td>beam 3</td><td>2.16</td><td>0.38</td><td>2.22</td></tr><tr><td>beam 5</td><td>3.52</td><td>0.50</td><td>2.33</td></tr><tr><td>beam 10</td><td>35.18</td><td>1.01</td><td>2.88</td></tr></table>
378
+
379
+ Table 8: Computing time for different beam sizes on FactKG dev set.
380
+
381
+ as beam size increases from 1 to 10. At beam size 10, Llama achieves $84.61\%$ coverage while Qwen reaches $84.62\%$ , showing comparable performance at higher beam sizes. The unique triplet count shows more pronounced growth with larger beam sizes, with Llama generating 13.83 unique triplets and Qwen 9.58 triplets at beam size 10.
382
+
383
+ However, Table 8 shows that this improved performance comes with significant computational overhead, detailing the time taken for generating pseudo-graphs, retrieving subgraphs, and reasoning with the retrieved evidence. Most notably, while the time required for retrieving subgraphs and reasoning with evidence increases only marginally as the beam size grows, the time for pseudo-graph generation increases dramatically, from 1.02s at beam size 1 to 35.18s at beam size 10 - a $34.5 \times$ increase. Based on this measurement, our official framework selects beam size $= 5$ to balance the performance gain and computational cost.
384
+
385
+ # C.3 Different Model Sizes of the Specialized LLM
386
+
387
+ To evaluate how model size affects performance, we compare different variants of Llama and Qwen models ranging from 1B to 8B parameters. Table 9 presents the performance on the FactKG dev set across three key metrics: average performance, structure coverage, and unique triplets generated,
388
+
389
+ which were explained previously.
390
+
391
+ <table><tr><td>Backbone</td><td>Average Accuracy</td><td>Structure Coverage</td><td>Unique Triplets</td></tr><tr><td>Llama - 1B</td><td>80.26</td><td>78.98</td><td>8.97</td></tr><tr><td>Llama - 3B</td><td>82.04</td><td>83.02</td><td>8.39</td></tr><tr><td>Llama - 8B</td><td>82.63</td><td>82.84</td><td>9.34</td></tr><tr><td>Qwen - 1.5B</td><td>80.48</td><td>81.34</td><td>6.58</td></tr><tr><td>Qwen - 3B</td><td>81.41</td><td>83.58</td><td>6.73</td></tr><tr><td>Qwen - 7B</td><td>81.79</td><td>82.88</td><td>7.05</td></tr></table>
+
+ Table 9: Performance metrics for different models on the FactKG dev set.
392
+
393
+ For both model families, we observe improvements in performance as model size increases, though with different patterns. The Llama family shows more notable gains, with average performance increasing from $80.26\%$ (1B) to $82.63\%$ (8B), while Qwen demonstrates more modest improvements from $80.48\%$ (1.5B) to $81.79\%$ (7B). Structure coverage peaks with the 3B variants for both families - Llama-3B achieving $83.02\%$ and Qwen-3B reaching $83.58\%$. The triplet generation patterns differ slightly: Llama maintains relatively stable unique triplet counts (8.39 - 9.34) across sizes, while Qwen's counts increase modestly (6.58 - 7.05) with model size.
394
+
395
+ Overall, scaling to larger models shows slight improvements while increasing computational requirements. Based on these results, we select 3B variants of both model families in our official implementation, which offer an optimal balance of performance and model size, with Llama-3B and Qwen-3B showing comparable effectiveness across all metrics.
396
+
397
+ # C.4 Different Hyperparameters of Subgraph Retrieval
398
+
399
400
+
401
+ <table><tr><td>Hyper Params</td><td>Average Accuracy</td><td>Unique Triplets</td></tr><tr><td>k1=5;k2=3</td><td>82.00</td><td>11.42</td></tr><tr><td>k1=3;k2=1</td><td>82.04</td><td>8.39</td></tr><tr><td>k1=1;k2=1</td><td>81.87</td><td>3.58</td></tr></table>
+
+ Table 10: Performance of different subgraph retrieval configurations $k_{1}$ and $k_{2}$ with Llama-3.2-3B + Llama-3.3-70B on the FactKG dev set.
402
+
403
+ To assess the impact of the subgraph retrieval algorithm's hyperparameters on overall performance, we systematically vary these hyperparameters while keeping the specialized LLM and the general LLM fixed as Llama-3.2-3B and Llama-3.3-70B, respectively. Table 10 presents the performance across two key metrics: average accuracy and the number of unique triplets generated.
406
+
407
+ The results indicate that increasing $k_{1}$ and $k_{2}$ leads to a higher number of unique triplets, suggesting greater diversity in the retrieved evidence. However, this increase does not consistently translate into overall performance gains, which remain in the range of 81.87 - 82.04. Notably, performance peaks at $k_{1} = 3$ and $k_{2} = 1$, suggesting that a more focused retrieval strategy is sufficient to achieve optimal performance, whereas excessively high $k$ values may introduce noise or irrelevant information. Based on these results, we select $k_{1} = 3$ and $k_{2} = 1$ in our official implementation, balancing information discovery against the computation required.
408
+
409
+ # C.5 Different Methods for Relation Scoring Function
410
+
412
+
413
+ <table><tr><td>Method</td><td>Average Accuracy</td></tr><tr><td>Embedding Based</td><td>84.64</td></tr><tr><td>Rerank Based</td><td>84.73</td></tr><tr><td>Fuzzy Matching</td><td>82.19</td></tr><tr><td>Exact Matching</td><td>81.57</td></tr></table>
414
+
415
+ Table 11: Performance of different scoring approaches for Subgraph Retrieval on the FactKG test set.
416
+
417
+ To assess the impact of different scoring mechanisms on performance, we vary the scoring function and evaluate on the FactKG test set while keeping the Specialized LLM and the General LLM fixed. Specifically, we explore multiple strategies for the Relation Scoring Function (Sim), as described in Section 4.3, including embedding-based retrieval, reranking, fuzzy text matching (Wikipedia, 2025a), and exact matching.
418
+
419
+ For the embedding-based and reranking approaches, we employ state-of-the-art pre-trained models, namely BGE-Large-EN-v1.5² and BGE-Reranker-Large³, as provided by (Xiao et al., 2023). Experimental results indicate that deep learning-based methods, i.e., embedding and reranking, achieve superior performance, with accuracy scores of 84.64 and 84.73, respectively. In contrast, text-matching-based methods yield lower accuracy, with fuzzy matching and exact matching scoring 82.19 and 81.57, respectively. These findings highlight the effectiveness of deep learning-based approaches.
422
+
423
+ We recommend embedding-based retrieval as it enables pre-indexing of corpus relations. This allows precomputation of relation embeddings and requires encoding only the query relation for new Pseudo Subgraphs, eliminating the need to re-encode existing knowledge graph relations during inference.
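As a concrete illustration of this pre-indexing strategy, here is a minimal sketch assuming the `sentence-transformers` wrapper around the BGE encoder named above; the toy relation list and the `score_relation` helper are illustrative, not the paper's implementation.

```python
# Minimal sketch: pre-index KG relation embeddings once, then score a query relation.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("BAAI/bge-large-en-v1.5")

# Offline step: encode every relation name in the knowledge graph a single time.
kg_relations = ["location", "operator", "birth place", "death place", "leader name"]
relation_index = encoder.encode(kg_relations, normalize_embeddings=True)  # shape (N, d)

def score_relation(query_relation: str) -> dict:
    """Cosine similarity between a pseudo-subgraph relation and all indexed KG relations."""
    q = encoder.encode([query_relation], normalize_embeddings=True)  # shape (1, d)
    scores = (relation_index @ q.T).squeeze(-1)  # normalized vectors: dot product = cosine
    return dict(zip(kg_relations, scores.tolist()))

# At inference time only the new query relation needs encoding, e.g.:
print(score_relation("is led by"))  # "leader name" should score highest
```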
424
+
425
+ # D Algorithm Details
426
+
427
+ The detailed implementation of the Entity Trie-constrained decoding algorithm is provided as pseudo-code in Algorithm 1, and Algorithm 2 details the implementation of the Subgraph Retrieval procedure.
428
+
429
+ # E Case Study
430
+
431
+ We present the case study results of ClaimPKG on the FactKG dataset in Tables 12 and 13. Each table includes the claim $c$, pseudo-subgraphs $P_{s}$, retrieved subgraphs $S_{c}$, final justification $j$, and verdict $v$. Table 12 showcases correctly predicted examples, demonstrating ClaimPKG's ability to accurately capture claim structures and generate well-grounded justifications. Conversely, Table 13 highlights incorrectly predicted cases of the two error types detailed in Section 5.3: the first two examples illustrate Reasoning Errors, while the third represents a Retrieval Error. These insights highlight key areas for future refinement.
432
+
433
+ # F Prompt Templates
434
+
435
+ For better reproducibility, we present all prompt templates in the appendix. Below is a quick reference list outlining the prompt templates and their usages:
436
+
437
+ - Figure 6: Prompt the General LLM to reason on the input claim and retrieved subgraphs to produce justification and final verdict.
438
+ - Figure 7: Few-shot prompt the General LLM to generate a Pseudo Subgraph with provided examples.
439
+ - Figure 8: Annotate the inside and outside entities of the input claim for the training dataset.
440
+
441
+ Algorithm 1: LLM Decoding with Entity-Trie Constraint
442
+ ```txt
443
+ Input: Specialized LLM, input claim $c$, Entity Trie $\mathcal{T}$
+ Output: Pseudo-Subgraph $\mathcal{P}$
+
+ Initialize: $\mathcal{P}\gets \emptyset$ ;  // Initialize pseudo subgraph
+ $h_0\gets$ InitializeHiddenStates(); constrained $\gets$ False;
+
+ Function ConstrainedDecoding(LLM, $c$, $\mathcal{T}$):
+   while True do
+     $p_t, h_t\gets$ LLM($\mathcal{P}$, $c$, $h_{t-1}$);  // Compute token probabilities and update hidden states
+     if constrained then
+       prefix $\gets$ ExtractPrefix($\mathcal{P}$);  // Retrieve tokens from the last unclosed <e> to the end
+       allowed $\gets \mathcal{T}$.lookup(prefix);  // Retrieve allowed tokens from valid continuations in $\mathcal{T}$
+       $p_t\gets$ MaskProb($p_t$, allowed);  // Set probabilities of invalid tokens to 0
+     new_token $\gets \arg\max p_t$;  // Select the next token for $\mathcal{P}$
+     $\mathcal{P}\gets \mathcal{P}\cup$ {new_token};
+     if new_token $==$ <e> then constrained $\gets$ True;
+     if new_token $==$ </e> then constrained $\gets$ False;
+     if new_token $==$ EOS then break;
+   return $\mathcal{P}$
454
+ ```
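To connect Algorithm 1 to a concrete decoding stack, the sketch below shows one way such a constraint can be wired into Hugging Face generation via `prefix_allowed_tokens_fn`. It is an illustrative reimplementation under stated assumptions, not the paper's exact code: `<e>` and `</e>` are registered as special tokens, `gpt2` stands in for the fine-tuned specialized LLM, and span handling is simplified (for example, `</e>` is only offered once the trie has no further continuation).

```python
# Minimal sketch: a token-level trie over KG entity names used to constrain decoding.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer
tokenizer.add_special_tokens({"additional_special_tokens": ["<e>", "</e>"]})
E_OPEN = tokenizer.convert_tokens_to_ids("<e>")
E_CLOSE = tokenizer.convert_tokens_to_ids("</e>")

class TokenTrie:
    def __init__(self):
        self.children: dict[int, "TokenTrie"] = {}

    def add(self, token_ids: list[int]) -> None:
        node = self
        for t in token_ids:
            node = node.children.setdefault(t, TokenTrie())

    def allowed(self, prefix: list[int]) -> list[int]:
        node = self
        for t in prefix:
            if t not in node.children:
                return []            # prefix is not a valid entity prefix
            node = node.children[t]
        return list(node.children)   # valid next tokens

# Build the trie over all entity surface forms in the KG (toy subset here).
trie = TokenTrie()
for entity in ["Agra Airport", "Indian Air Force", "Gettysburg, Pennsylvania"]:
    trie.add(tokenizer.encode(entity, add_special_tokens=False))

def _last_index(ids: list[int], tok: int) -> int:
    return max((i for i, t in enumerate(ids) if t == tok), default=-1)

def prefix_allowed_tokens_fn(batch_id, input_ids):
    ids = input_ids.tolist()
    open_i, close_i = _last_index(ids, E_OPEN), _last_index(ids, E_CLOSE)
    if open_i > close_i:                    # inside an unclosed <e> ... </e> span
        prefix = ids[open_i + 1:]
        return trie.allowed(prefix) or [E_CLOSE]
    return list(range(len(tokenizer)))      # unconstrained outside entity spans

# Wired into beam-search generation as, for example:
# model.generate(**inputs, num_beams=5, num_return_sequences=5,
#                prefix_allowed_tokens_fn=prefix_allowed_tokens_fn)
```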
455
+
456
+ # GENERAL REASONING
457
+
458
+ Task: Verify whether the fact in the given sentence is true or false based on the provided graph triplets. Use only the information in the triplets for verification.
459
+
460
+ - The triplets provided represent all relevant knowledge that can be retrieved.
461
+ - If the fact is a negation and the triplets do not include the fact, consider the fact as true.
462
+ - Ignore questions and verify only the factual assertion within them. For example, in the question "When was Daniel Martínez (politician) a leader of Montevideo?", focus on verifying the assertion "Daniel Martínez (politician) a leader of Montevideo".
463
+ - Interpret the “ $\sim$ ” symbol in triplets as indicating a reverse relationship. For example: “A $\sim$ south of B” means “B is north of A”.
464
+
465
+ # Response Format:
466
+
467
+ Provide your response in the following JSON format without any additional explanations:
468
+ { "rationale": "A concise explanation for your decision", "verdict": "true/false as the JSON value" }
469
+
470
+ # Triplets:
471
+
472
+ {triplets}
473
+
474
+ # Claim:
475
+
476
+ {claim}
477
+
478
+ Figure 6: Prompt template for the general LLM to perform reasoning
479
+
480
+ Algorithm 2: Subgraph Retrieval
+ Input: Knowledge graph $\mathcal{G}$, Pseudo-Subgraph list $\mathcal{P}_c$, top-$k_1$ candidate unknown entities, top-$k_2$ complete triplets
+ Output: Combined subgraph $S_c$
+ Function SubgraphRetrieval($\mathcal{G}$, $\mathcal{P}_c$, $k_1$, $k_2$):
+   $S\gets \emptyset$ ;
+   foreach $\mathcal{P}\in \mathcal{P}_c$ do
+     $S\gets S\cup$ RetrieveSingleSubgraph($\mathcal{G}$, $\mathcal{P}$, $k_1$, $k_2$);  // Process each pseudo subgraph
+   return JoinSubgraphs($S$);  // Combine subgraphs
+ Function RetrieveSingleSubgraph($\mathcal{G}$, $\mathcal{P}$, $k_1$, $k_2$):
+   $(T_{comp}, T_{inc})\gets$ CategorizeTriplets($\mathcal{P}$);  // Split into complete/incomplete triplets
+   $S_{inc}\gets$ RetrieveIncomplete($\mathcal{G}$, $T_{inc}$, $k_1$);
+   $S_{comp}\gets$ RetrieveComplete($\mathcal{G}$, $T_{comp}$, $k_1$, $k_2$);
+   return $S_{inc}\cup S_{comp}$
+ Function RetrieveIncomplete($\mathcal{G}$, $T_{inc}$, $k_1$):
+   $S\gets \emptyset$ ;
+   $G\gets$ GroupTripletsByUnknown($T_{inc}$);  // Group triplets by unknown entity
+   foreach $g\in G$ do
+     $(E_u, R_u)\gets$ ExtractPseudoStructure($g$);  // Entities and relations attached to the unknown entity
+     $C\gets \emptyset$ ;
+     foreach $(e, r)\in (E_u, R_u)$ do
+       $(C_e, \mathrm{scores})\gets$ GetCandidatesAndScores($\mathcal{G}$, $e$, $r$);
+       $C\gets C\cup \{(C_e, \mathrm{scores})\}$ ;
+     $C\gets$ AggregateGlobalScore($C$);  // Aggregate candidate scores globally
+     $C^{*}\gets$ RankTopKCandidates($C$, $k_1$);  // Select top-$k_1$ candidates
+     $S\gets S\cup$ GetTriplets($C^{*}$, $g$);
+   return $S$
+ Function GetCandidatesAndScores($\mathcal{G}$, $e$, $r$):
+   $R_{act}\gets$ RetrieveActualConnectedRelations($\mathcal{G}$, $e$);
+   $E_{act}\gets$ RetrieveActualConnectedEntities($\mathcal{G}$, $e$);
+   r_scores $\gets$ RelationScore($r$, $R_{act}$);
+   $S\gets \emptyset$ ;
+   foreach $e^{\prime}\in E_{act}$ do
+     $s\gets$ MaxRelatedRelationScores($e^{\prime}$, r_scores);
+     $S\gets S\cup \{(e^{\prime}, s)\}$ ;
+   return $S$  // Score entities connected to $e$
+ Function AggregateGlobalScore($C$):
+   // Recompute and reassign the score of every candidate in each $C_e$
+   foreach $(C_e, \mathrm{scores})\in C$ do
+     foreach $(c, s)\in (C_e, \mathrm{scores})$ do
+       $s\gets$ Sum([$s^{\prime}(c)$ for $(C^{\prime}, s^{\prime})$ in $C$ if $c\in C^{\prime}$]);
+   return $C$
+ Function RankTopKCandidates($C$, $k_1$):
+   $C^{*}\gets \emptyset$ ;
+   foreach $(C_e, \mathrm{scores})\in C$ do
+     $C_e^{*}\gets$ TopKCandidates($C_e$, scores, $k_1$);
+     $C^{*}\gets C^{*}\cup C_e^{*}$ ;
+   return $C^{*}$  // Rank candidates per unknown entity
+ Function RetrieveComplete($\mathcal{G}$, $T_{comp}$, $k_1$, $k_2$):
+   $S\gets \emptyset$ ;
+   foreach $(h, r, t)\in T_{comp}$ do
+     if ExistConnection($h$, $t$) then
+       $T\gets$ GetTripletsWithHeadAndTail($\mathcal{G}$, $h$, $t$);
+       scores $\gets \emptyset$ ;
+       foreach $t^{\prime}\in T$ do
+         scores $\gets$ scores $\cup$ RelationScore($r$, $t^{\prime}$.relation);
+       $S\gets S\cup$ TopK($T$, scores, $k_2$);  // Rank connected triplets
+     else
+       $S_1\gets$ RetrieveIncomplete($\mathcal{G}$, [(unknown, $r$, $t$)], $k_1$);
+       $S_2\gets$ RetrieveIncomplete($\mathcal{G}$, [($h$, $r$, unknown)], $k_1$);
+       $S\gets S\cup (S_1\cup S_2)$;  // Handle missing connection
+   return $S$
529
+
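To complement the pseudo-code above, here is a heavily simplified, illustrative sketch of the RetrieveIncomplete path; the toy KG and the fuzzy-string stand-in for the relation scoring function Sim are assumptions, not the paper's implementation. Candidates for one unknown entity are scored per (known entity, relation) pair, aggregated globally by summation, and the top-$k_1$ are kept.

```python
# Minimal sketch of RetrieveIncomplete: score, aggregate, and rank candidate entities.
from collections import defaultdict
from difflib import SequenceMatcher

# Toy KG as an adjacency map: entity -> list of (relation, neighbour) pairs.
KG = {
    "Agra Airport": [("operator", "Indian Air Force"),
                     ("location", "India"),
                     ("location", "Uttar Pradesh")],
    "India": [("capital", "New Delhi")],
}

def sim(r1: str, r2: str) -> float:
    """Placeholder relation similarity; an embedding-based scorer is preferable in practice."""
    return SequenceMatcher(None, r1, r2).ratio()

def retrieve_incomplete(group, k1=3):
    """group: list of (known_entity, relation) pairs attached to a single unknown node."""
    per_pair = []
    for entity, relation in group:
        candidates = defaultdict(float)
        for actual_rel, neighbour in KG.get(entity, []):
            # GetCandidatesAndScores: keep the best-matching relation score per neighbour.
            candidates[neighbour] = max(candidates[neighbour], sim(relation, actual_rel))
        per_pair.append(candidates)
    # AggregateGlobalScore: sum each candidate's scores over all (entity, relation) pairs.
    total = defaultdict(float)
    for candidates in per_pair:
        for neighbour, score in candidates.items():
            total[neighbour] += score
    # RankTopKCandidates: keep the k1 best candidates for the unknown entity.
    return sorted(total.items(), key=lambda kv: kv[1], reverse=True)[:k1]

# Pseudo-triplet "Agra Airport | operated by | unknown_0":
print(retrieve_incomplete([("Agra Airport", "operated by")], k1=3))
```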
530
+ <table><tr><td>Claim c</td><td>Pseudo-Subgraphs Pc</td><td>Retrieved Subgraphs Sc</td><td>Justification j; Verdict v</td></tr><tr><td rowspan="11">The 11th Mississippi Infantry Monument is not in Gettysburg, Pennsylvania.</td><td>P1: 11th Mississippi Infantry Monument | location | Gettysburg, Pennsylvania</td><td>(11th Mississippi Infantry Monument, municipality, Gettysburg, Pennsylvania)</td><td rowspan="10">Justification: The triplets provided include &#x27;11th Mississippi Infantry Monument&#x27;, &#x27;municipality&#x27;, &#x27;Gettysburg, Pennsylvania&#x27; and &#x27;11th Mississippi Infantry Monument&#x27;, &#x27;state&#x27;, &#x27;Pennsylvania&#x27;, indicating the monument&#x27;s location as Gettysburg, Pennsylvania.</td></tr><tr><td>P2: Gettysburg, Pennsylvania | -location | 11th Mississippi Infantry Monument</td><td>(11th Mississippi Infantry Monument, website, 11th mississippianemy monument)</td></tr><tr><td>11th Mississippi Infantry Monument | location | unknown_0</td><td>(U.S. Route 140, location, Gettysburg)</td></tr><tr><td>P3: 11th Mississippi Infantry Monument | location | Gettysburg</td><td>(Pennsylvania Route 34, location, Gettysburg)</td></tr><tr><td>11th Mississippi Infantry Monument | location | Unknown_0</td><td>(U.S. Route 30 in Pennsylvania, location, Gettysburg)</td></tr><tr><td>P4: 11th Mississippi Infantry Monument | location | Pennsylvania</td><td>(11th Mississippi Infantry Monument, state, Pennsylvania)</td></tr><tr><td>11th Mississippi Infantry Monument | is part of military conflict | Gettysburg</td><td>(11th Mississippi Infantry Monument, region, Adams County, Pennsylvania)</td></tr><tr><td>11th Mississippi Infantry Monument | location | Pennsylvania</td><td>(11th Mississippi Infantry Monument, country, United States)</td></tr><tr><td>11th Mississippi Infantry Monument | location | Unknown_0</td><td>(11th Mississippi Infantry Monument, category, Contributing property)</td></tr><tr><td>P5: 11th Mississippi Infantry Monument | location | Gettysburg, Pennsylvania</td><td>(Lutheran Theological Seminary at Gettysburg, city, Gettysburg)</td></tr><tr><td></td><td>(11th Mississippi Infantry Monument, range, Semi-nary Ridge)</td><td>Verdict: False</td></tr><tr><td rowspan="12">It was released in Hardcover by J.V. Jones</td><td>P1: Hardcover | -media type | unknown_0</td><td>(Hardcover, -media type, A Fortress of Grey Ice)</td><td rowspan="9">Justification: The triplets show that &#x27;A Fortress of Grey Ice&#x27; is a book by &#x27;J.V. Jones&#x27; and it is of &#x27;Hardcover&#x27; media type, which supports the claim.</td></tr><tr><td>unknown_0 | author | J.V. Jones</td><td>(Hardcover, -media type, A Clockwork Orange)</td></tr><tr><td>P2: unknown_0 | author | J.V. Jones</td><td>(Hardcover, -media type, The Bell Curve)</td></tr><tr><td>unknown_0 | media type | Hardcover</td><td>(A Fortress of Grey Ice, author, J.V. Jones)</td></tr><tr><td>unknown_0 | -genre | unknown_1</td><td>(A Cavern of Black Ice, author, J.V. Jones)</td></tr><tr><td>P3: J.V. Jones | -author | unknown_0</td><td>(The Barbed Coil, author, J.V. Jones)</td></tr><tr><td>unknown_0 | media type | Hardcover</td><td>(Hardcover, -media type, The Andromeda Strain)</td></tr><tr><td>P4: Hardcover | -media type | J.V. Jones</td><td>(1963, -date of birth, J.V. Jones)</td></tr><tr><td>J.V. Jones | -author | unknown_0</td><td>(Liverpool, England, -place of birth, J.V. 
Jones)</td></tr><tr><td>unknown_0 | previous work | unknown_1</td><td></td><td></td></tr><tr><td>P5: Hardcover | -media type | unknown_0</td><td></td><td></td></tr><tr><td>unknown_0 | author | J.V. Jones</td><td></td><td></td></tr><tr><td rowspan="16">A person was born in the County of Ty-rol and later died in Austria which is led by Doris Bures.</td><td>P1: Austria | leader | Doris Bures</td><td>(Austria, leader, Doris Bures)</td><td rowspan="16">Justification: The triplets show that Kurt Schuschnigg, Alfons Gorbach, and Josef Hellensteiner were born in the County of Tyrol and died in Austria. Additionally, Doris Bures is the leader of Austria.</td></tr><tr><td>Austria | -death place | unknown_0</td><td>(Austria, -death place, Hans Hörberg)</td></tr><tr><td>unknown_0 | death place | County of Tyrol</td><td>(Austria, -death place, Edgar Meyer (painter) )</td></tr><tr><td>P2: Austria | leader name | Doris Bures</td><td>(Austria, -death place, Rakhat Aliyev)</td></tr><tr><td>Austria | -death place | unknown_0</td><td>(Ahn Hörberg, death place, County of Tyrol)</td></tr><tr><td>unknown_0 | death place | Austria</td><td>(Edgar Meyer (painter), death place, County of Tyrol)</td></tr><tr><td>unknown_0 | successor | Doris Bures</td><td>(Friedrich Hasenöhr, death place, County of Tyrol)</td></tr><tr><td>P3: County of Tyrol | -birth place | unknown_0</td><td>(County of Tyrol, -birth place, Kurt Schuschnigg)</td></tr><tr><td>unknown_0 | death place | Austria</td><td>(Sky, death place, Alfons Gorbach)</td></tr><tr><td>unknown_0 | successor | Doris Bures</td><td>(County of Tyrol, -birth place, Josef Hellensteiner)</td></tr><tr><td>P4: Doris Bures | -leader | Austria</td><td>(Kurt Schuschnigg, death place, Austria)</td></tr><tr><td>Austria | -death place | unknown_0</td><td>(Alfons Gorbach, death place, Austria)</td></tr><tr><td>unknown_0 | birth place | County of Tyrol</td><td>(Josef Hellensteiner, death place, Austria)</td></tr><tr><td>P5: Austria | leader | Doris Bures</td><td>(Barbara Prammer, successor, Doris Bures)</td></tr><tr><td>Austria | -death place | unknown_0</td><td>(Werner Faymann, successor, Doris Bures)</td></tr><tr><td>unknown_0 | birth place | County of Tyrol</td><td>(Alois Stöger, -successor, Doris Bures)</td></tr><tr><td rowspan="11">The New Britain Transit is the operat-ing organisation of Agra Airport which is located in India and has the location identifier VIAG.</td><td>P1: Agra Airport | location | New Britain Transit</td><td>(Agra Airport, location, India)</td><td rowspan="6">Justification: The operator of Agra Airport is listed as &#x27;Indian Air Force&#x27; and &#x27;Airports Authority of India&#x27;, not &#x27;New Britain Transit&#x27;, which operates &#x27;Connecticut Transit New Britain and Bris-tol&#x27;</td></tr><tr><td>Agra Airport | location | India</td><td>(Agra Airport, location, Uttar Pradesh)</td></tr><tr><td>Agra Airport | faa | VIAG</td><td>(Agra Airport, location, Agra)</td></tr><tr><td>P2: Agra Airport | location | India</td><td>(Connecticut Transit New Britain and Bristol, oper-ator, New Britain Transit)</td></tr><tr><td>Agra Airport | operator | New Britain Transit</td><td>(Agra Airport, cao location identifier, VIAG)</td></tr><tr><td>Agra Airport | faa | VIAG</td><td>(Agra Airport, operator, Airports Authority of India)</td></tr><tr><td>Agra Airport | faa | VIAG</td><td>(Agra Airport, elevation, 167.94)</td><td>Verdict: False</td></tr><tr><td>P3: Agra Airport | location | India</td><td></td><td></td></tr><tr><td>Agra Airport | operator | New Britain 
Transit</td><td></td><td></td></tr><tr><td>Agra Airport | location | India</td><td></td><td></td></tr><tr><td>Agra Airport | faa | VIAG</td><td></td><td></td></tr></table>
531
+
532
+ Table 12: Case study of ClaimPKG on the FactKG dataset with correctly predicted samples.
533
+
534
+ <table><tr><td>Claim c</td><td>Pseudo-Subgraphs Pc</td><td>Retrieved Subgraphs Sc</td><td>Justification j; Verdict v</td></tr><tr><td rowspan="9">When did Indian Air Force operate Agra Airport?</td><td>P1: Agra Airport || operator || Indian Air Force</td><td>(Agra Airport, operator, Indian Air Force)</td><td rowspan="9">Justification: The triplets do not provide information about the specific time period when the Indian Air Force operated Agra Airport. However, they do confirm that the Indian Air Force is the operator of Agra Airport, implying that the operation is either current or has occurred at some point, but the exact timing is not specified. Verdict: False</td></tr><tr><td>P2: Indian Air Force || operator || Agra Airport</td><td>(Agra Airport, runway length, 2743.0)</td></tr><tr><td>Agra Airport || ~runway designation || unknown_0</td><td>(Agra Airport, runway length, 1818.0)</td></tr><tr><td>Agra Airport || ~runway designation || Indian Air Force</td><td>(Agra Airport, location, India)</td></tr><tr><td>Agra Airport || ~location || unknown_0</td><td>(Agra Airport, location, Uttar Pradesh)</td></tr><tr><td>unknown_0 || operator || Indian Air Force</td><td>(Agra Airport, location, Agra)</td></tr><tr><td>P4: Agra Airport || operator || Indian Air Force</td><td>(Bhuj Airport, operator, Indian Air Force)</td></tr><tr><td>P5: Agra Airport || operator || Indian Air Force</td><td>(Bhuj Rudra Mata Air Force Base, operator, Indian Air Force)</td></tr><tr><td></td><td>(Car Nicobar Air Force Base, operator, Indian Air Force)</td></tr><tr><td rowspan="20">I imagined that Lady Anne Monson not lived in the Cape Verde whose capital was Winchester.</td><td>P1: Cape Verde || capital || Winchester</td><td>(Cape Verde, capital, Praia)</td><td rowspan="20">Justification: The capital of Cape Verde is Praia, not Winchester. Winchester is the capital of Wessex and the Kingdom of England. Lady Anne Monson&#x27;s residence is listed as India, and her birthplace is listed as Dar-lington and the Kingdom of England, with no mention of Cape Verde as her residence. 
Verdict: True</td></tr><tr><td>Cape Verde || ~residence || Lady Anne Monson</td><td>(Cape Verde, currency, Cape Verdean esculo)</td></tr><tr><td>P2: Lady Anne Monson || residence || Cape Verde</td><td>(Cape Verde, ~headquarter, Movement for Democracy (Cape Verde))</td></tr><tr><td>Cape Verde || capital || Winchester</td><td>(Wessex, capital, Winchester)</td></tr><tr><td>P3: Anne Monzon || residence || Cape Verde</td><td>(Kingdom of England, capital, Winchester)</td></tr><tr><td>Cape Verde || capital || Winchester</td><td>(River Itchen, Hampshire, city, Winchester)</td></tr><tr><td>P4: Cape Verde || capital || Winchester</td><td>(Cape Verde, ~birth place, Aristides Pereira)</td></tr><tr><td>Cape Verde || ~residence || Lady Anne Monson</td><td>(Cape Verde, ~birth place, Pedro Pires)</td></tr><tr><td>P5: Cape Verde || capital || Winchester</td><td>(Cape Verde, ~birth place, José Maria Neves)</td></tr><tr><td>Cape Verde || ~death place || Lady Anne Monson</td><td>(India, ~residence, Lady Anne Monson)</td></tr><tr><td></td><td>(Kingdom of England, ~birth place, Lady Anne Monson)</td></tr><tr><td></td><td>(Darlington, ~birth place, Lady Anne Monson)</td></tr><tr><td></td><td>(Anne Monzon, ~cinematography, &quot;Wont Last a Day Without You&quot;)</td></tr><tr><td></td><td>(Anne Monzon, ~cinematography, Pop Class)</td></tr><tr><td></td><td>(Anne Monzon, ~cinematography, Namets!)</td></tr><tr><td></td><td>(Cape Verde, ~death place, Corsino Fortes)</td></tr><tr><td></td><td>(Cape Verde, ~death place, Cesária Évora)</td></tr><tr><td></td><td>(Cape Verde, ~death place, Baltasar Lopes da Silva)</td></tr><tr><td></td><td>(1776-02-18, ~death date, Lady Anne Monson)</td></tr><tr><td></td><td>(1776, ~death year, Lady Anne Monson)</td></tr><tr><td rowspan="17">The operating organization for Agra Airport is the Indian Air Force who deploy the HAL Light Combat Helicopter and the Boeing C17 Globemaster III transport aircraft.</td><td>P1: Agra Airport || operator || Indian Air Force</td><td>(Agra Airport, operator, Indian Air Force)</td><td rowspan="17">Justification: The triplets confirm Agra Airport is operated by the Indian Air Force, the Indian Air Force deploys the HAL Light Combat Helicopter, and the Boeing C-17 Globemaster III is indeed a transport aircraft, but there is no direct information linking the Indian Air Force at Agra Airport to the deployment of the Boeing C-17 Globemaster III. However, given the information provided, we can infer the involvement of the Indian Air Force with the mentioned aircraft types, albeit indirectly for the Boeing C-17 Globemaster III. 
Verdict: False</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(Agra Airport, location, Uttar Pradesh)</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(Agra Airport, location, Agra)</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(2012 in spaceflight, site, Boeing C-17 Globemaster III)</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(2013 in spaceflight, site, Boeing C-17 Globemaster III)</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(2009 in spaceflight (July-December), site, Boeing C-17 Globemaster III)</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(Hindustan Aeronautics, manufacturer, HAL Light Combat Helicopter)</td></tr><tr><td>Agra Airport || ~location || Boeing C-17 Globemaster III</td><td>(Boeing C-17 Globemaster III, aircraft transport, United States Air Force)</td></tr><tr><td>Agra Airport || operator || Indian Air Force</td><td>(Boeing C-17 Globemaster III, aircraft transport, Royal Air Force)</td></tr><tr><td>Agra Airport || runway length || Boeing C-17 Globemaster III</td><td>(Boeing C-17 Globemaster III, aircraft transport, Royal Australian Air Force)</td></tr><tr><td>Agra Airport || ~location || HAL Light Combat Helicopter</td><td>(2743.0, runway length, Agra Airport)</td></tr><tr><td>Agra Airport || ~city || HAL Light Combat Helicopter</td><td>(1818.0, runway length, Agra Airport)</td></tr><tr><td>Agra Airport || ~city || Boeing C-17 Globemaster III</td><td>(HAL Light Combat Helicopter, aircraft helicopter, Indian Air Force)</td></tr><tr><td></td><td>(Aircraft, icoa location identifier, VIAG)</td></tr><tr><td></td><td>(Airlift, type, Boeing C-17 Globemaster III)</td></tr><tr><td></td><td>(United States, origin, Boeing C-17 Globemaster III)</td></tr><tr><td></td><td>(In service, status, Boeing C-17 Globemaster III)</td></tr></table>
535
+
536
+ Table 13: Case study of ClaimPKG on the FactKG dataset with incorrectly predicted samples.
537
+
538
+ # FEWSHOT PSEUDO SUBGRAPH GENERATION
539
+
540
+ Task: Generate a reference graph to verify the following claim. Only return the subgraphs following the format of provided examples and do NOT include other unnecessary information.
541
+
542
+ # Here are some examples:
543
+
544
+ Claim: Akeem Priestley played for club RoPS and currently plays for the Orange County Blues FC, which is managed by Oliver Wyss.
545
+
546
+ # Subgraphs:
547
+
548
+ <e>Orange County Blues FC</e> || manager || <e>Oliver Wyss</e>
549
+ <e>Orange County Blues FC</e> || clubs || <e>Akeem Priestley</e>
550
+ <e>Akeem Priestley</e> || team || <e>RoPS</e>
551
+
552
+ Claim: He is a Rhythm and Blues singer from Errata, Mississippi!
553
+
554
+ # Subgraphs:
555
+
556
+ <e>Rhythm and blues</e> || genre || unknown_0
557
+ unknown_0 || birth place || <e>Errata, Mississippi</e>
558
+ unknown_0 || background || unknown_1
559
+
560
+ Claim: Arròs negro is a traditional dish from Spain, and from the Catalonia region, which is led by the Maria Norrfalk.
561
+
562
+ # Subgraphs:
563
+
564
+ <e>Arròs negro</e> || country || <e>Spain</e>
565
+ <e>Arròs negro</e> || region || <e>Catalonia</e>
566
+ <e>Catalonia</e> || leader name || <e>Maria Norrfalk</e>
567
+
568
+ Claim: Well, Jason Sherlock did not have a nickname!
569
+
570
+ # Subgraphs:
571
+
572
+ <e>Jason Sherlock</e> || nickname || unknown_0
573
+
574
+ Claim: Garlic is the main ingredient of Ajoblanco, which is from Andalusia.
575
+
576
+ # Subgraphs:
577
+
578
+ <e>Ajoblanco</e> || region || <e>Andalusia</e>
579
+ <e>Ajoblanco</e> || ingredient || <e>Garlic</e>
580
+
581
+ ....More examples ....
582
+
583
+ Claim: {{claim}}
584
+
585
+ Subgraphs:
586
+
587
+ Figure 7: Prompt template for the general LLM to generate pseudo subgraphs
588
+
589
+ # ANNOTATE IN AND OUT ENTITIES
590
+
591
+ Task: Specify if the following entities are mentioned in the claim or not.
592
+
593
+ Respond correctly in the following JSON format and do not output anything else: { "in Entities": [list of entities that are in the claim], "out Entities": [list of entities that are not in the claim] } Do not change the entity names from the list of provided entities.
594
+
595
+ Claim: {{claim}}
596
+
597
+ Entities: {{entities}}
598
+
599
+ Figure 8: Prompt template to annotate inside and outside entity of the claim.
2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:37303e865ae0c53f19996acb4d25cb5248742052b5f0e619c4f083b7f2e92d4b
3
+ size 1408124
2025/ClaimPKG_ Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM/layout.json ADDED
The diff for this file is too large to render. See raw diff