| text string | source string |
|---|---|
significant increase in CD8[+] T cells’ ability to produce IFN-γ. (True/False) False. Biology & Earth Science: ... provided in Table S3. Upon analysing the dyeing performance depicted in Fig. 3A, it becomes clear that both the wool fibers with and without mordant exhibited comparable chlorophyll uptakes, with the unmordanted wool fibers even demonstrating higher chlorophyll uptake values. These results are particularly intriguing, as the fixation of natural dyes in textile fibers typically requires mordanting processes prior to the dyeing cycle in order to enhance the dye uptake (Guesmi et al., 2013; Zhao et al., 2020a). Moreover, achieving a natural dye uptake exceeding 70% without any type of optimization ... What solvent is primarily used in the ABS process to extract chlorophyll? Ethanol. Which components contribute to the recovery process in ABS? (Select all that apply) a) CuSO4, b) Chlorophyll derivatives, c) Sodium hydroxide, d) Ethanol. What was observed regarding dye uptake in unmordanted fibers compared to mordanted fibers? b) Unmordanted fibers showed equal or greater dye uptake. The ABS process in the study was shown to have potential health risks associated with pollution. (True/False) True. Table 17: Qualitative results for each topic category I. Category / Paragraph (excerpt) / Question / Answer. Physics & Mathematics: ... We then show posterior distributions obtained, respectively, with runs adopting 3k, 6k, and 10k live points, demonstrating the gradual convergence to slightly lower values of radii and larger uncertainties ... the main mode identified with the ST-U model and reported in panel (A) of the same Figure. The omitting component is always associated with the smaller, closer-to-the-equator hot spot (labeled as primary in panel (A) of Figure 3). The location and size of the masking element can vary significantly within the identified mode ...
... The NICER instrument collected 1.936 Ms (megaseconds) of data from PSR J0030+0451 over a specific time period. Convert this time into days. (1 Ms = 10^6 seconds) 22.4 days. What components are included in the ST+PST model? (Select all that apply) a) Primary hot spot, b) Secondary hot spot emitting, c) Secondary hot spot masking. The data for PSR J0030+0451 included multiple inference runs with various live point (LP) settings. If one inference run used 10,000 live points and another used half that amount, how many live points did the second run use? c) 5,000. Multimodal structures in a posterior surface suggest the existence of multiple solutions or interpretations for a model’s parameters. (True/False) True. Social Science: When management seeks to incorporate all stakeholders and their demands, this may only be feasible in a sequential manner where some stakeholders lose with regard to some aspects in the short term and others gain, but then sequential negotiations can help create packages that foster sustainable development at a societal and planetary level. ... Moreover, Tesla’s success has produced a wave of start-ups across the world vying to make EVs | https://arxiv.org/abs/2505.16566v1 |
at a lower cost than Tesla can. While EVs are not a perfect solution, and Tesla is not a perfect ... How do firms engage in sustainable entrepreneurship by working with others? Collaborative innovation. What actions are essential to enhance sustainability? (Select all that apply) a) Co-creating policies, c) Engaging in responsible lobbying. Which action is essential for improving sustainability in corporate practices? c) Engaging in stakeholder collaboration. Firms can solve complex sustainability issues solely on their own without any external collaboration. (True/False) False. Humanities, Literature & Arts: Navigational capital refers to the ability to maneuver through institutions created to exclude groups or classes of people (i.e., the Dominican education system, which both symbolically and physically excludes people of Haitian descent). Social capital refers to people and relationships that provide emotional and instrumental support when navigating systems, like schools and government bureaucracies. Linguistic capital includes the cognitive flexibility and social skills that come with the ability to navigate multiple languages. Familial capital involves the history, memory and cultural intuition that one gains through an extended ... What specific challenge related to documentation impacts school participation for Dominican females of Haitian descent? Lack of documentation. What factors contribute to the educational challenges faced by Dominican girls of Haitian descent? (Select all that apply) a) Cultural attitudes like machismo, b) Economic hardship. What impact can the lack of documentation have on youth education? a) Denial of access to national exams. The absence of documentation does not affect the educational success of Dominican females of Haitian descent. (True/False) False. Table 18: Qualitative results for each topic category II. | https://arxiv.org/abs/2505.16566v1 |
URLs Help, Topics Guide: Understanding Metadata Utility in LLM Training. Dongyang Fan, Vinko Sabolčec, Martin Jaggi. EPFL, Switzerland. firstname.lastname@epfl.ch. Abstract: Large Language Models (LLMs) are commonly pretrained on vast corpora of text without utilizing contextual metadata such as source, quality, or topic, leading to a context-free learning paradigm. While recent studies suggest that adding metadata like URL information as context (i.e., auxiliary inputs not used in the loss calculation) can improve training efficiency and downstream performance, they offer limited understanding of which types of metadata are truly effective and under what conditions. In this work, we conduct a systematic evaluation and find that not all metadata types contribute equally. Only URL context speeds up training, whereas quality scores and topic/format domain information offer no clear benefit. Furthermore, the improved downstream performance of URL conditioning emerges only when longer prompts are used at inference time. In addition, we demonstrate that context-aware pretraining enables more controllable generation than context-free pretraining, in a classifier-free guidance fashion. Although topic and format metadata do not accelerate training, they are effective for steering outputs, offering human-interpretable control over generation. 1 Introduction. Large Language Models (LLMs) have become increasingly integrated into a wide range of real-world applications, prompting both researchers and practitioners to explore optimal strategies for their development. A critical component of this process is the pretraining phase, which serves as the foundational stage in which models acquire general-purpose linguistic and world knowledge from vast corpora of text. Even with a simple next-token prediction loss, the scale of data and model size enables emergent generalization of knowledge [1].
Conventional LLM pretraining is typically context-free, in the sense that training data consists solely of raw textual inputs, without any associated metadata or external contextual signals. Meta information, such as document source, author identity, timestamps, or topic tags, although potentially valuable, is systematically discarded during preprocessing. As a result, the model learns to predict the next token based on the intrinsic structure and statistical patterns present within the text itself, without leveraging any auxiliary semantic or structural cues. A natural research question arises: Can pretraining be improved if contextual information is included? While prior studies [2, 3] suggest that incorporating URLs as context (auxiliary inputs not used in the loss calculation) during training can accelerate pretraining and enhance downstream performance, recent findings by Higuchi et al. [4] caution that this is not always the case. (Preprint. Under review. arXiv:2505.16570v1 [cs.CL], 22 May 2025.) Existing work lacks a comprehensive understanding of which types of metadata are most impactful and under what conditions they offer the greatest benefit. Our study aims to fill this gap, providing empirical evidence based on real-world experiments. Fundamentally, context-aware pretraining endows LMs with the ability to perform conditional autoregressive generation. This conditional generation capability, developed through exposure to metadata, can potentially be leveraged to steer the model’s behavior using a classifier-free guidance approach [5]; i.e., the distinction between context-conditioned and context-free generation can be further amplified to highlight the influence of contextual information. We aim to | https://arxiv.org/abs/2505.16570v1 |
investigate whether context-aware pretraining can be utilized for more controllable generation than context-free pretraining. In response to the previously raised questions, our findings and contributions can be summarized as follows: • We conduct a comprehensive study on context-conditioned LLM pretraining, highlighting its potential benefits and limitations at both pretraining and inference stages (Section 3). • We examine different types of metadata and find that only URL as context can speed up pretraining (Section 4.1.1). When it comes to downstream evaluation, URL-conditioned pretraining only enhances performance with longer prompts (Section 4.1.2). • We demonstrate that the metadata-conditioned pretrained model is more steerable than the standard pretrained model. While topic and format domain information does not accelerate pretraining, it offers effective and human-interpretable steering (Section 4.2). 2 Related Work. Metadata for LLM pretraining. Allen-Zhu and Li [2] demonstrate through controlled synthetic experiments that adding a special token to high-credibility data can be beneficial. They further hypothesize that appending domain names (e.g., wikipedia.org) to each document in the pretraining corpus would yield similar gains. This hypothesis is experimentally validated by Gao et al. [3], who report a 33% speedup in pretraining simply by prepending URL domains. However, to ensure the model remains effective on standard, domain-free text, they introduce a "cool-down" phase using non-contextual data. Following this, Zhu et al. [6] formally prove that context-enhanced learning can be exponentially more sample-efficient when the model is capable of in-context learning, that is, when it can effectively utilize contextual information presented during inference to perform tasks. More recently, Higuchi et al. [4] caution that metadata conditioning is not universally effective.
Building on the experimental setup of [2], they find that metadata conditioning only helps when the input prompt is sufficiently long to reveal latent semantics. With shorter prompts, it can even degrade performance. Most prior studies conduct experiments solely on synthetic datasets, where the metadata consists of the datasets’ production rules. However, in real-world scenarios, metadata cannot fully capture the complexity of natural language. Therefore, we aim to explore the extent to which such metadata can actually be helpful. Guidance in LLMs. In LLMs, next-token generation is essentially a sampling process from a probability distribution over the entire vocabulary. This makes it possible to steer the generation by modifying the token probability distribution. Liu et al. [7] guide the generation of a base LLM by leveraging the difference between the outputs of an expert model and an anti-expert model. Li et al. [8] adjust the sampling distribution by down-weighting tokens favored by an amateur model. Sanchez et al. [9] introduced the idea of classifier-free guidance (CFG), originally used in text-to-image generation, to text-only generation. Building on this, He et al. [10] applied CFG to steer LLMs for personalization purposes, showing potential for controllable personalization. However, all of these approaches rely on context-free pretrained LLMs. In this work, we investigate whether context-conditioned pretrained LLMs can enable more effective context steering. [Figure 1 content: example sequences, each starting with <s> followed by a context segment (<boc>ctx1<eoc>, <boc>ctx2<eoc>, an empty <boc><eoc>, ..., <boc>ctxN<eoc>) and a document, e.g., "In 1492, Columbus sailed across the Atlantic Ocean."; "He tightened his shoelaces before starting the race."; "Photosynthesis is the process by which plants make food using sunlight, water, | https://arxiv.org/abs/2505.16570v1 |
and carbon dioxide, producing oxygen as a byproduct for other organisms."; "JavaScript is a versatile language used for web development, especially interactive user interfaces online." ...] Figure 1: An example of our context-aware tokenization. Each document begins with a default beginning-of-sequence (<s>) token. For each sequence, a context segment wrapped in beginning-of-context (<boc>) and end-of-context (<eoc>) tokens is inserted after <s> and before the main text. If a document is too long and split into multiple sequences, the context is prepended to each one. Although the context is added to every sequence, it may be empty. In 90% of the corpus, we include a non-empty context; in the remaining 10%, the context is left empty. Contexts can be URLs, quality scores, or topic/format domains, depending on user choice. 3 Method. 3.1 Context-Conditioned Pretraining. To organize the contextual information properly, we introduce two new tokens: <|context_begin|> and <|context_end|> (see footnote 1). The meta information is inserted between these two tokens and prepended before the document contents. During training, we mask out the loss calculation for the prepended contexts, so the loss is calculated solely over the standard texts, making it comparable to standard context-free pretraining. To avoid confusion, we use "context" to refer to the auxiliary meta information not used in the loss calculation at training time, and "prompts" for the texts given to the pretrained model for generation. Figure 2: Diagram of our two-stage investigation. During pretraining, we feed a uniform mixture of 90% context-prepended texts and 10% context-free standard texts into the model. During inference, we compare three different generation sampling methods. Context is introduced at the sequence level and is specific to each document.
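As a concrete illustration of the loss masking described above, the construction of one training sequence can be sketched as follows. This is a minimal sketch, not the authors' implementation: token strings stand in for token ids, the example URL is made up, and `IGNORE_INDEX = -100` follows the common convention for excluding positions from a cross-entropy loss.

```python
# Sketch: build one context-prepended training sequence with a loss mask
# that excludes the context segment (illustrative, not the authors' code).
import random

BOS, BOC, EOC = "<s>", "<boc>", "<eoc>"
IGNORE_INDEX = -100  # label value excluded from the cross-entropy loss

def build_sequence(doc_tokens, ctx_tokens, p_context=0.9):
    """Prepend <boc>ctx<eoc> after <s>; with probability 1 - p_context the
    context is left empty, so the model also sees context-free sequences."""
    ctx = ctx_tokens if random.random() < p_context else []
    tokens = [BOS, BOC, *ctx, EOC, *doc_tokens]
    # Loss is computed only over the document tokens: everything up to and
    # including <eoc> receives IGNORE_INDEX and does not contribute to loss.
    n_prefix = 2 + len(ctx) + 1  # <s> <boc> ctx... <eoc>
    labels = [IGNORE_INDEX] * n_prefix + list(doc_tokens)
    return tokens, labels

tokens, labels = build_sequence(["Photosynthesis", "is", "..."],
                                ["https://example.com/biology"],
                                p_context=1.0)
# tokens: <s> <boc> https://example.com/biology <eoc> Photosynthesis is ...
# labels: -100  -100  -100                      -100  Photosynthesis is ...
```

Setting `p_context=0.9` mirrors the 90%:10% uniform mixture of context-prepended and context-free sequences used during pretraining.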
To ensure the model remains capable of handling context-free text at inference time, we uniformly interleave context-prepended and context-free documents during pretraining, using a 90%:10% ratio. The context-enhanced corpus is illustrated in Figure 1, where one document can be split into several sequences and contexts are prepended to each of them. Unlike the MeCo approach [3], which applies 90% context-prepended training followed by a 10% context-free cooldown phase, our uniform mixture strategy allows any intermediate checkpoint to be used immediately for context-free inference. 3.2 Context-Aware Generation. At inference time, leveraging the conditioning capabilities learned during pretraining, we can guide the LLM to generate text based on a given context. Next-token prediction involves sampling from a probability distribution ($P_{\mathrm{LLM}}$) over the entire vocabulary. To incorporate contextual signals, we modify this distribution accordingly. We define the following three sampling methods for generation (footnote 1: for illustration purposes, we use <boc> and <eoc> to represent these two tokens): 1) Context-Free sampling: the standard sampling given by an LLM, conditioning on empty contexts: $x_t \sim P_{\mathrm{LLM}}(x \mid x_{1:t-1}, \emptyset)$ (ctx-free). 2) Context-Conditioned sampling: for each generation, we prepend <boc>ctx<eoc> to the prompt: $x_t \sim P_{\mathrm{LLM}}(x \mid x_{1:t-1}, \mathrm{ctx})$ (ctx-conditioned). 3) Context-Guided sampling: following the classifier-free guidance in diffusion models [5], we amplify the impact of the context with the following sampling probability: $x_t \sim P_{\mathrm{LLM}}(x \mid x_{1:t-1}, \emptyset) \cdot \left(\frac{P_{\mathrm{LLM}}(x \mid x_{1:t-1}, \mathrm{ctx})}{P_{\mathrm{LLM}}(x \mid x_{1:t-1}, \emptyset)}\right)^{\gamma}$ (ctx-guided). During inference, in practice, | https://arxiv.org/abs/2505.16570v1 |
we modify the generation at the logit ($\Pi$) level, that is, $x_t \sim \Pi_{\mathrm{LLM}}(x \mid x_{1:t-1}, \emptyset) + \gamma \left(\Pi_{\mathrm{LLM}}(x \mid x_{1:t-1}, \mathrm{ctx}) - \Pi_{\mathrm{LLM}}(x \mid x_{1:t-1}, \emptyset)\right)$ (1). When $\gamma = 0$, this coincides with context-free generation, and when $\gamma = 1$, it coincides with context-conditioned generation. With $\gamma > 1$, we amplify the guidance of the context. It is worth noting that the softmax output after context guidance is no longer a faithful (see footnote 2) probability distribution, while context-free and context-conditioned generation still follow a faithful probability distribution. 4 Experiments. Model. We adopt the Llama model architecture [11] with 16 layers, a hidden size of 2048, a sequence length of 4096, and a batch size of 504 (about 2.06 million tokens per batch). The model has 1.5 billion parameters. We follow a cosine learning-rate schedule with 2000 warmup steps. The AdamW optimizer is used with regularization strength 0.1 [12]. A maximum learning rate of 3e-4 is applied, decaying to 3e-5 at the end of training. To train the models, we use the Megatron-LM framework [13]. Dataset. We use the FineWeb-Edu dataset [14], a high-quality English-only dataset. The dataset provides rich metadata, such as URL source, quality score, and token counts per document. We sub-sample FineWeb-Edu and tokenize it using the Nemo tokenizer (see footnote 3), with the two additionally introduced tokens. Throughout training, we randomly sample 100B tokens. Evaluation Benchmarks. As standard practice, we evaluate models on general knowledge understanding using LM-Eval-Harness, developed by Gao et al. [15]. The benchmarks used are Arc-Easy [16], Arc-Challenge [16], CommonSense QA (CSQA, 17), MMLU [18], PIQA [19], Social IQA (SIQA, 20), HellaSwag (HS, 21), Lambada (LBD, 22), and Winogrande (WG, 23). We compare the following runs to gain full insight into how contexts can potentially enhance LLM pretraining. 1. Baselines. • Standard Pretraining: Each document is simply prepended with a begin-of-sequence token <s>.
• Positional Token Ablation: Standard pretraining where each sequence is prepended with <boc><eoc> tokens. This setup tests whether introducing a positional token at the start of each sequence alone has a beneficial effect. (Footnote 2: By "faithful," we mean a probability distribution that can be trusted as a true representation of uncertainty, without distortion, manipulation, or miscalibration. Footnote 3: https://mistral.ai/news/mistral-nemo) Figure 3: Training perplexity versus the amount of consumed tokens. Prepending the URL leads to a faster decrease in perplexity. • MeCo: A two-phase approach proposed by Gao et al. [3]: 90% of training is URL-conditioned, followed by a 10% cooldown phase using standard (unconditioned) data. 2. Types of Metadata. • Full URL (URL): The original source from which the document was crawled. • Quality Score (QS): Provided by the FineWeb-Edu dataset, these scores are generated by a classifier trained on 450,000 web samples, each annotated by LLaMA 3 with a score from 0 (not educational) to 5 (highly educational), reflecting the sample’s educational value. • Domain Information (DI): Each document is labeled with | https://arxiv.org/abs/2505.16570v1 |
topic and format domains derived from WebOrganizer [24], where topic and format domains are returned by pretrained classifiers. There are 24 categories per taxonomy; in total, we have 576 different Domain Information types. 3. Combinations of Metadata. • Full URL + Quality Score: Combines the document’s source with its educational quality. • Full URL + Domain Information: Combines the source with topic and format domain labels. • Full URL + Quality Score + Domain Information: Incorporates all available metadata to condition the model on source, quality, and content structure. 4.1 Not All Metadata Conditioning Speeds up Pretraining. 4.1.1 Training Speed-up with URL Conditioning. We begin by investigating whether metadata can facilitate training; specifically, whether pretraining can be accelerated when additional contextual information aids knowledge acquisition. The left panel of Figure 3 presents the training perplexity across runs with different metadata types. The two additional tokens introduced have minimal effect on learning dynamics, indicating that any observed differences stem from the contextual content itself. Among the three types of context evaluated, only the URL consistently results in lower perplexity from the very beginning of training. Is the improved performance stemming from longer contexts? Given that URLs are substantially longer than the other two metadata types, one might wonder whether their effectiveness stems merely from increased context length. To examine this, we combined the other metadata types with URLs to see if additional context would further improve training. From the right panel of Figure 3, the answer is no. This indicates that it is not the increased context length that speeds up training, but the context content itself. Does the acceleration translate into downstream performance as well? With five-shot evaluation, we see better performance for the URL-conditioned model in every task.
For the 9-task average, the URL-conditioned model can match the performance of the standard pretrained model on 100B tokens with only 60B tokens, a significant 40% acceleration. The acceleration in terms of downstream performance is greater than that in training loss. Figure 4: URL-conditioned pretraining achieves the same downstream evaluation performance as 100B-token standard pretraining with only 60B tokens. The same plots for all tasks are provided in Figure 6. [Panels: arc_challenge/acc, mmlu_continuation/acc, piqa/acc, winogrande/acc, and the average, each plotted over 20B-100B trained tokens for standard vs. w/ URL.] Takeaway 1. Only URL conditioning has a positive effect in speeding up pretraining; conditioning on quality scores or topic/format domains yields no noticeable change. 4.1.2 Downstream Performance Gains from Longer Prompts. When does URL conditioning work? To better understand the impact of context conditioning, we perform a systematic evaluation across downstream tasks by comparing zero-shot and five-shot performance. We notice a very interesting phenomenon: The URL-conditioned pretrained model shows increased downstream evaluation performance with five-shot evaluation, but not zero-shot. For the rest, we do not see substantial improvement over standard pretraining. Our results confirm recent findings from [4], where longer prompts are needed to infer latent semantics and thus to help evaluation. | https://arxiv.org/abs/2505.16570v1 |
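The difference between the two evaluation regimes is purely one of prompt construction; the sketch below illustrates why k-shot prompts are longer and carry more latent context for the model to infer from. The Question/Answer template and demonstrations are hypothetical, not the exact format used by LM-Eval-Harness.

```python
# Sketch of k-shot prompt assembly (illustrative template): k solved
# demonstrations are prepended, so larger k yields a longer prompt.
def make_kshot_prompt(question, demonstrations):
    demos = "".join(f"Question: {q}\nAnswer: {a}\n\n" for q, a in demonstrations)
    return demos + f"Question: {question}\nAnswer:"

zero_shot = make_kshot_prompt("What is the capital of France?", [])
five_shot = make_kshot_prompt(
    "What is the capital of France?",
    [("What is 2 + 2?", "4")] * 5,  # hypothetical demonstrations
)
# The 5-shot prompt strictly contains the 0-shot prompt as a suffix.
assert five_shot.endswith(zero_shot) and len(five_shot) > len(zero_shot)
```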
Conflicting signals of different metadata types. Interestingly, the benefits of URL conditioning are negated when additional metadata is introduced as context. For example, combining quality scores with URL conditioning appears to introduce conflicting signals, leading to downstream performance that is even lower than that of the standard pretrained model. This suggests that the latent clusters inferred by the language model from the URLs may differ from the predefined clusters we assigned. Notably, we did not manage to reproduce the results of the MeCo baseline. Possible reasons could be different datasets and models, and that LM-Eval-Harness [15] is used as the evaluation framework in our experiments, while OLMES [25] was used in the MeCo paper. Nonetheless, we still observe the speed-up effect of URL prepending.

Table 1: 0-shot evaluation results. When evaluating, context-free generation is used, i.e., only <boc><eoc> tokens are prepended and the context string is empty.

| Run | Arc-C | Arc-E | CSQA | MMLU | PIQA | SIQA | HS | LBD | WG | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| standard | 32.9 | 68.9 | 42.3 | 31.2 | 71.3 | 40.6 | 42.6 | 33.0 | 57.1 | 46.7 |
| + <boc><eoc> | 33.1 | 68.4 | 41.8 | 32.0 | 72.1 | 40.2 | 42.4 | 33.4 | 56.6 | 46.7 |
| + <boc> URL <eoc> | 33.5 | 69.8 | 41.6 | 32.0 | 73.4 | 41.3 | 42.8 | 33.2 | 54.4 | 46.9 |
| + <boc> QS <eoc> | 30.9 | 68.4 | 38.3 | 30.9 | 72.4 | 41.0 | 42.0 | 34.4 | 53.5 | 45.8 |
| + <boc> DI <eoc> | 32.1 | 68.4 | 41.9 | 31.8 | 71.3 | 41.1 | 42.1 | 34.2 | 53.4 | 46.3 |
| + <boc> URL, QS <eoc> | 31.2 | 68.7 | 38.2 | 31.7 | 71.3 | 40.1 | 42.2 | 34.6 | 54.2 | 45.8 |
| + <boc> URL, DI <eoc> | 31.2 | 68.1 | 37.8 | 31.8 | 71.9 | 40.6 | 42.0 | 33.7 | 56.5 | 46.0 |
| + <boc> URL, QS, DI <eoc> | 32.4 | 69.4 | 40.8 | 31.8 | 71.8 | 39.8 | 42.0 | 33.0 | 54.9 | 46.2 |
| MeCo | 33.0 | 68.9 | 39.7 | 31.8 | 71.3 | 40.1 | 41.9 | 33.0 | 55.6 | 46.2 |

Table 2: 5-shot evaluation results. When evaluating, context-free generation is used, i.e., only <boc><eoc> tokens are prepended and the context string is empty.
| Run | Arc-C | Arc-E | CSQA | MMLU | PIQA | SIQA | HS | LBD | WG | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| standard | 33.9 | 68.6 | 45.1 | 31.8 | 71.7 | 41.5 | 41.9 | 30.6 | 54.9 | 46.7 |
| + <boc><eoc> | 34.0 | 69.2 | 43.5 | 31.9 | 71.9 | 41.5 | 42.5 | 30.8 | 55.2 | 46.7 |
| + <boc> URL <eoc> | 35.5 | 71.7 | 46.8 | 32.8 | 73.0 | 41.7 | 42.5 | 30.8 | 55.9 | 47.8 |
| + <boc> QS <eoc> | 32.7 | 69.6 | 41.0 | 31.8 | 72.1 | 41.6 | 43.0 | 32.4 | 55.2 | 46.6 |
| + <boc> DI <eoc> | 34.0 | 69.8 | 42.6 | 32.1 | 71.9 | 40.3 | 42.0 | 31.8 | 55.5 | 46.7 |
| + <boc> URL, QS <eoc> | 32.5 | 67.8 | 42.8 | 31.6 | 71.4 | 41.7 | 41.8 | 32.0 | 53.0 | 46.1 |
| + <boc> URL, DI <eoc> | 33.9 | 67.4 | 42.2 | 31.4 | 71.8 | 41.5 | 42.3 | 32.9 | 57.0 | 46.7 |
| + <boc> URL, QS, DI <eoc> | 32.9 | 68.6 | 44.6 | 31.9 | 71.6 | 39.8 | 41.9 | 31.8 | 51.0 | 46.0 |
| MeCo | 32.6 | 69.7 | 46.3 | 32.5 | 72.0 | 39.9 | 41.6 | 31.3 | 54.0 | 46.7 |

Takeaway 2. URL-conditioned pretraining only benefits evaluation with longer prompts, for example 5-shot. There is no noticeable distinction for 0-shot evaluation. 4.2 Context-Aware Generation. 4.2.1 Context-Conditioned Generation. The results in Table 2 are from context-free generation using a context-conditioned model. We now test whether performance can be improved further if correct contexts are given at test time. For each task, we condition the generation on contexts that match the task content. We manually select these contexts. For the Quality Score context, we simply prepend Quality Score: 5 at | https://arxiv.org/abs/2505.16570v1 |
inference time. The contexts we used are presented in Table 10 in the Appendix. Conditioning on the provided contexts, we observe increased downstream performance, especially for the URL- and DI-conditioned pretrained models.

Table 3: Enhanced downstream task performance with context-conditioned generation compared to context-free generation. Three different contexts are tested: URL, Domain Information (DI), and Quality Score (QS).

| Model | Context | Arc-C | Arc-E | CSQA | MMLU | PIQA | SIQA | HS | LBD | WG | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| URL-conditioned | w/o context | 35.5 | 71.7 | 46.8 | 32.8 | 73.0 | 41.7 | 42.5 | 30.8 | 55.9 | 47.8 |
| URL-conditioned | w/ URL context | 35.5 | 72.2 | 47.7 | 32.9 | 72.3 | 43.0 | 43.0 | 32.5 | 56.0 | 48.3 ↑ |
| DI-conditioned | w/o context | 34.0 | 69.8 | 42.6 | 32.1 | 71.9 | 40.3 | 42.0 | 31.8 | 55.5 | 46.7 |
| DI-conditioned | w/ DI context | 33.9 | 70.9 | 46.0 | 32.3 | 72.4 | 41.7 | 42.2 | 33.2 | 55.1 | 47.5 ↑ |
| QS-conditioned | w/o context | 32.7 | 69.6 | 41.0 | 31.8 | 72.1 | 41.6 | 43.0 | 32.4 | 55.2 | 46.6 |
| QS-conditioned | w/ QS context | 32.0 | 70.0 | 44.3 | 32.1 | 72.7 | 42.4 | 42.0 | 32.9 | 54.4 | 47.0 ↑ |

4.2.2 Context-Guided Generation. Can the impact of context be further amplified? The accuracy calculation in Table 3 relies on log-likelihoods, which require probability distributions. Classifier-free guidance modifies the output logits before the softmax to steer generation. Because the logits are manually shifted, the resulting probability distribution is no longer grounded in the model’s knowledge or data, breaking the probabilistic interpretation; likelihood-based evaluation therefore does not work, and we turn to generation-based evaluation instead. Given the limited capacity of our 1.5B base models, we cannot evaluate them meaningfully using free-form generation for the QA tasks. As a result, we assess their ability to generate coherent continuations when provided with a prompt. Figure 5: LLM judge scores (with standard deviation) vs. different values of the steering factor γ. We gather prompts in the health and history domains and let our pretrained models finish the prompts.
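The logit-level guidance of Eq. (1) can be sketched as follows. This is an illustrative toy implementation over plain float lists; the function names are ours, and a real implementation would operate on the model's logit tensors from two forward passes (one with an empty context, one with the context prepended).

```python
# Toy sketch of context-guided sampling, Eq. (1) (not the authors' code).
import math

def guided_logits(logits_free, logits_ctx, gamma):
    """logits_free + gamma * (logits_ctx - logits_free).
    gamma = 0 recovers context-free sampling, gamma = 1 context-conditioned
    sampling, and gamma > 1 amplifies the influence of the context."""
    return [lf + gamma * (lc - lf) for lf, lc in zip(logits_free, logits_ctx)]

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Toy 3-token vocabulary: logits from a context-free and a context-conditioned pass.
lf = [2.0, 1.0, 0.0]
lc = [0.0, 3.0, 0.0]
assert guided_logits(lf, lc, 0.0) == lf   # gamma = 0: context-free
assert guided_logits(lf, lc, 1.0) == lc   # gamma = 1: context-conditioned
probs = softmax(guided_logits(lf, lc, 1.5))  # gamma > 1: amplified guidance
```

With γ > 1, probability mass shifts toward context-favored tokens beyond what context-conditioning alone would give; note that, as the text points out, these shifted logits no longer yield a faithful probability distribution.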
We let gpt-4o evaluate the continuations over four dimensions: coherence, correctness, reasonableness, and relevance. Scores range from 1 to 10, with higher scores being more desirable. For context-guided generation, we apply the steering factor γ = 1.5 unless otherwise mentioned, chosen via the hyperparameter grid search in Figure 5. Over 50 prompts, the gpt-4o evaluation is presented in Table 4, where we compare the three different generation methods using both our context-conditioned pretrained model and the standard context-free pretrained model. Given that LLM judges may hallucinate, we run a 3-seed evaluation and report the mean and standard deviation. Compared to the standard pretrained model, we observe that the context-conditioned pretrained model is more steerable, as showcased by improved performance with context-guided generation. Which metadata type is efficient for guidance? Among the three different context types, we notice that even though topic and format domains are not effective in speeding up training, they offer effective steering possibilities. We provide an example of different continuations from the three sampling methods in Table 5. Conditioning on Topic: Health, Format: Knowledge Article, while context-free generation offers very generic answers, context-conditioned generation is able to offer more health-relevant technical answers. Context-guided generation is the most comprehensive and informative among the three, indicating | https://arxiv.org/abs/2505.16570v1 |
the successful guidance of the context. Table 4: Average LLM-as-a-judge evaluation scores (mean ±std) on the different prompt completions on 50 prompts on health & medical and history topics respectively (answer length: 64 tokens). Model type denotes the type of checkpoints used, with "conditioned" being our context-conditioned pretrained checkpoint and "standard" being the standard context-free pretrained checkpoint. For each of the different contexts, we use different context-pretrained models. topic context model type ctx-free ctx-conditioned ctx-guided health & medicalURL: https://medicalxpress.com/Conditioned 8.19±0.14 8.14 ±0.20 8.70 ±0.14 Standard 7.98±0.34 7.79 ±0.10 8.12 ±0.26 Topic: Health, Format: Knowledge ArticleConditioned 8.01±0.16 8.26 ±0.14 9.07±0.05 Standard 7.75±0.03 7.79 ±0.08 7.66 ±0.24 Quality Score: 5Conditioned 8.20±0.19 8.20 ±0.20 8.69 ±0.14 Standard 8.37±0.20 8.22 ±0.23 8.09 ±0.21 historyURL: https://britannica.com/Conditioned 5.70±0.15 5.80 ±0.16 6.73 ±0.43 Standard 4.88±0.15 5.60 ±0.14 6.45 ±0.01 Topic: History, Format: Knowledge ArticleConditioned 5.35±0.24 5.23 ±0.26 7.15±0.26 Standard 5.19±0.31 5.95 ±0.11 6.04 ±0.31 Quality Score: 5Conditioned 5.36±0.20 5.67 ±0.16 6.45 ±0.34 Standard 5.52±0.19 5.68 ±0.19 7.04 ±0.09 Table 5: Example of different continuations (answer length: 64 tokens) given a prompt and their corresponding ratings. The Domain Information context provided is Topic: Health, Format: Knowledge Article . Continuations are generated from our DI-conditioned pretrained checkpoints. (Caution of Hallucination!) Given prompt : Common causes of balance problems in older people includeRating/Short Ra- tionale standard falls and underlying conditions. These can range from minor health issues to more serious problems. You can get balance problems from several causes, including poor nutrition, aging, and underlying health conditions like high blood pressure or thyroid problems. 
It’s also possible to suffer from balance problems suddenly. [Rating 5.5: clear but general and slightly repetitive.] context-conditioned: medications, ear problems, and spinal and neck problems. Vestibular rehabilitation physical therapy, which uses special exercises to enhance brain signals that help maintain balance, can be used to treat some types of balance disorders. In most cases, medical attention is necessary to treat this type of dizziness or imbalance. [Rating 7.0: more technical, introduces helpful treatment concepts.] context-guided: ageing-related changes in the inner ear, neurological problems, and problems with medication. Other reasons for balance problems include head injury, Parkinson’s disease, multiple sclerosis, and stroke. Vitamin B12 deficiency can cause balance problems. [Rating 9.0: most comprehensive and informative.] Controllable guidance level. We further demonstrate various steering possibilities enabled by our method. For example, it is possible to guide the generation toward a desired topic by increasing the guidance factor value (γ). When conditioning on Topic: Fashion & Beauty, Format: Personal Blog and setting γ > 1, LLMs sometimes produce humorous or absurd outputs in Table 6, such as stating that Julius Caesar was assassinated because "of his taste in makeup" or because "he wore too much jewelry." While this clearly highlights the model’s tendency to hallucinate, it also underscores the strength of the steering mechanism. With appropriate contextual grounding, such guidance could potentially yield coherent and relevant outputs in desired domains. Table 6: Example of different context-guided continuations given a prompt and a steering factor (γ). The Domain Information context is Topic: Fashion & Beauty, Format: Personal Blog. | https://arxiv.org/abs/2505.16570v1 |
Continuations are generated from our pretrained DI-conditioned checkpoints. (Caution of Hallucination!)

Given prompt: Julius Caesar was assassinated because

γ = -2: he'd killed too many people and he'd become too dangerous. He was assassinated by senators and officials who were on the wrong side of the Senate. Caesar feared they might take power away from him and start an anarchy.

γ = 0: he was trying to become king of the Roman Empire. He was forced to flee to Greece to escape the Roman legions. This was a major mistake, because he was already an experienced general and knew that he could not defeat them in the long run.

γ = 1: his assassins feared he would be a threat to their power. At the time, the average Roman man's wardrobe consisted of a tunic and a short-sleeved shirt. But as soon as Julius Caesar was assassinated, Rome's dress code became stricter.

γ = 2: of his taste in makeup. Back then, makeup was not only for looks. It also had medicinal purposes. The Romans used makeup to soothe and protect the skin from the sun. It also had medicinal purposes. They believed it could heal certain skin conditions and infections.

γ = 4: he wore too much jewelry. Napoleon was so inspired by Alexander the Great that jewelry became a sign of royalty and he wore very little other than crowns, earrings, necklaces and bracelets. Kings, Queens, royalty and those with influence wore so much more than is necessary.

Controllable style and content. Moreover, when provided with different contextual signals, our models can adapt the continuation in ways that align with the intended framing or source. As shown in Table 7, the open-ended prompt "Paris is known for" leads to varied continuations depending on the guiding URL: medicalxpress.com prompts a medically-oriented response, nytimes.com results in a factual and formal description by listing facts, and reddit.com encourages a casual, entertainment-focused tone.
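Concretely, the steering factor γ in Table 6 can be read as a classifier-free-guidance weight applied to next-token logits. The minimal sketch below (the function name and toy numbers are ours, not from the paper) shows how γ interpolates between the context-free and context-conditioned distributions, and extrapolates beyond them when γ > 1 or γ < 0:

```python
def cfg_logits(cond_logits, uncond_logits, gamma):
    """Combine context-free (unconditioned) and context-conditioned
    next-token logits with a steering factor gamma.
    gamma = 0 recovers the context-free logits, gamma = 1 the
    context-conditioned ones, gamma > 1 extrapolates beyond the context
    (strengthening, and possibly exaggerating, its influence), and
    negative gamma steers away from the context."""
    return [u + gamma * (c - u) for c, u in zip(cond_logits, uncond_logits)]

# Toy 4-token vocabulary; the numbers are made up for illustration.
uncond = [2.0, 1.0, 0.5, 0.1]  # logits without any context
cond = [0.5, 1.0, 2.5, 0.1]    # logits given e.g. "Topic: Fashion & Beauty"

print(cfg_logits(cond, uncond, 0.0))  # → [2.0, 1.0, 0.5, 0.1] (context-free)
print(cfg_logits(cond, uncond, 1.0))  # → [0.5, 1.0, 2.5, 0.1] (context-conditioned)
print(cfg_logits(cond, uncond, 2.0))  # → [-1.0, 1.0, 4.5, 0.1] (extrapolated)
```

With γ = 2, the token the context favors (logit 2.5 vs. 0.5) gets pushed even further up, which matches the increasingly context-dominated continuations in Table 6.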
A similar result with Domain Information guidance is presented in Table 8 and Table 9 in the Appendix, where we observe effective guidance in both topic and format domains. These results demonstrate that contextual steering can shape both the content and style of the generated text in meaningful ways.

Takeaway 3. With context-conditioned pretraining, a larger improvement is observed in context-guided generation compared to context-conditioned generation.

Takeaway 4. Although domain information does not accelerate pretraining, it provides more interpretable and efficient guidance during inference.

5 Conclusion

In this work, we present a comprehensive investigation into context-conditioned pretraining for LLMs, focusing on the impact of different metadata types. Our findings reveal that not all metadata types contribute equally to model efficiency or downstream performance. Specifically, URL metadata stands out as the only form of context that significantly accelerates training and improves performance, particularly when longer prompts are used at inference time. In contrast, quality scores and topic/format domain information do not yield measurable benefits during pretraining.

Table 7: Example of different continuations given a prompt conditioned on different URLs. The continuations are generated in a context-guided fashion using our URL-conditioned pretrained checkpoints. (Caution of
Hallucination!)

Given prompt: Paris is known for

https://medicalxpress.com/ : 150 years of medical excellence, thanks to the collaboration of doctors, nurses, medical students and researchers. It is also known as the home of major medical discoveries such as the operation of the heart, the discovery of insulin, the discovery of the HIV virus, the discovery of the X-ray, the discovery of the...

https://nytimes.com/ : 4 things: 1. Big. It's the capital of France and also the largest city in Europe. 2. Beautiful. It's a very beautiful city. 3. Rich. It's one of the richest cities in the world. 4. Old. It's one of the oldest cities in the world.

https://reddit.com/ : 4 things, its cuisine, its fashion, its nightlife and its cinema. And I don't mean that only. What if I told you that there is a community of actors who actually act out the scenes of French movies? This community is called REDdit!

Nevertheless, we demonstrate that these latter forms of metadata have strong value during inference, enabling more controllable and interpretable text generation through classifier-free guidance mechanisms. This dual insight, that certain metadata can optimize training efficiency while others enhance controllability, opens promising directions for more flexible and human-aligned language model applications.

Overall, our results suggest that context-aware pretraining, especially when paired with dynamic guidance strategies at inference, offers a powerful toolset for building more efficient, adaptive, and steerable LLMs. Future work could explore scaling these techniques to larger models and broader metadata sources, as well as integrating contextual signals directly into fine-tuning and instruction tuning pipelines.

Acknowledgment. This work was supported as part of the Swiss AI Initiative by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID a06 on Alps.
A Limitations and Future Work

In this work, we investigated three different types of metadata; this is by no means an exhaustive study. Furthermore, the results we present in the paper are all from 1.5B-parameter models trained on 100B tokens of the FineWeb-Edu dataset.
It remains to be seen whether our findings hold when scaling up model sizes and token counts or when switching to different data corpora.

Our experiments surface a number of interesting observations, but the mechanics behind the improved performance brought by URL conditioning are still unclear. Our experimental results show that it is not simply due to the topic/format indicated by the URLs, although it may be that the LMs internalize topic and format domains differently from our labels. Future work should investigate the changes in activations between context-free and context-conditioned pretrained models to better understand this effect.

B Broader Impacts

As discussed in Section 4.2, context-conditioned models are more easily steered. While this capability can be used to guide responses in a desired style or format, it also has the potential to produce harmful or untruthful content. Therefore, these methods should be applied with social responsibility.

C More Experimental Details

Each model training run takes around 800 GPU hours on GH200 (120 GB).

C.1 Missing figures and tables

In addition to Figure 4, we provide the evaluation with respect to the remaining tasks in Figure 6. The acceleration is consistent across different tasks.

[Figure 6: Training acceleration in terms of downstream performances. Panels plot accuracy against training tokens (20B–100B) for arc_easy, lambada_standard, commonsense_qa, social_iqa, and hellaswag, comparing the standard run with the w/ URL run.]

We further present the steering results on the DI-conditioned model in Table 8 and Table 9. Given the exact same prompt, the continuation can be steered in markedly different directions depending on the contextual guidance. Topic-based guidance is highly visible, while format-based guidance is more subtle. For example, when steering with "News Article", the output tends to include concrete details about the composer and adopts a more objective tone, different from the more casual and subjective tone of "Personal Blog". Similarly, when guided by "Spam/Ads", the output adopts a promotional style, often highlighting a specific fair in an advertising manner.

C.2 Prepended context in inference time

Table 8: Example of different continuations given a prompt conditioned on different topics, while the format domain is kept as Knowledge Article. The continuations are generated in a context-guided fashion using our pretrained checkpoints. (Caution of Hallucination!)

Given prompt: Paris is known for

Crime & Law: its cultural diversity, and LGBTQ+ people, women, and people of color are often underrepresented. The city is home to many LGBTQ+ neighborhoods, which are underrepresented in many other cities and towns. However, these neighborhoods are particularly vulnerable to violence, abuse, and harassment.

Finance & Business: its economic prosperity. In the late 19th and early 20th centuries, Paris was a hotbed of the industrial revolution in Europe, and it continues to be a hub for commerce. The city's robust economy is largely fueled by its high-tech sector, which includes both the manufacturing and technology industries ...

Entertainment: its rich musical traditions, with a thriving musical scene that has left an indelible mark on the cultural landscape. From the classical era to the modern era, Paris has been a city of musicians and composers who have shaped the music of the world.

Education & Jobs: its prestigious education system. Universities are renowned for their research capabilities, cutting-edge facilities, and excellent faculty.
To ensure that universities meet the highest standards of academic excellence, France has implemented various quality assurance mechanisms, including external examinations. These examinations assess the overall quality of a university's academic programs, faculty, and student body.

Table 9: Example of different continuations given a prompt conditioned on different formats, while the topic domain is fixed as Entertainment. The continuations are generated in a context-guided fashion using our pretrained checkpoints. (Caution of Hallucination!)

Given prompt: Paris is known for

News Article: its rich music history, and the city has produced many renowned composers who have left their mark on the world of music. One such composer is Jules Massenet, a French composer who lived from 1842 to 1912. Massenet is considered one of the most important composers of French opera...

Personal Blog: its music scene and it is home to some of the most famous musicians in the world. One of the most famous French musicians is Charles Dutoit, who is widely considered to be one of the best French musicians of all time. He has a unique style that blends traditional French music with a modern approach, and he...

Spam/Ads: its many wonderful fairs, particularly the fairs at the Les Demoiselles d'Avignon, but also
at the many other fairs throughout the year. In music, the term "chanson" is often used, especially in the 18th century, to refer to a genre of music that was particularly...

C.3 LLM Judge

We use gpt-4o from the OpenAI API to evaluate continuations given a prompt. The prompt used for scoring medical & health domain continuations is presented in the following block. For the history domain, we modify it correspondingly to suit the topic.

Table 10: Different contexts for Table 3.

Task    | URL context                          | DI context                                        | QS context
Arc-E/C | www.factmonster.com/                 | Topic: Science & Tech., Format: Knowledge Article | Quality Score: 5
CSQA    | www.factmonster.com/                 | Topic: Social Life, Format: Knowledge Article     | Quality Score: 5
MMLU    | www.reference.com/                   | Topic: Science & Tech., Format: Knowledge Article | Quality Score: 5
PIQA    | www.howstuffworks.com/               | Topic: Social Life, Format: Knowledge Article     | Quality Score: 5
SIQA    | www.reddit.com/r/AskSocialScience/   | Topic: Social Life, Format: Knowledge Article     | Quality Score: 5
HS      | www.wikihow.com/                     | Topic: Home & Hobbies, Format: Knowledge Article  | Quality Score: 5
LBD     | www.wikihow.com/                     | Topic: Literature, Format: Creative Writing       | Quality Score: 5
WG      | www.reddit.com/r/explainlikeimfive/  | Topic: Social Life, Format: Personal Blog         | Quality Score: 5

You will be given a prompt and a completion that attempts to continue the prompt. Your task is to judge the quality of the completion based on the following criteria:

Coherence: Does the continuation flow logically and grammatically from the prompt?
Correctness: Is the information medically or scientifically accurate?
Reasonableness: Is the claim plausible and not misleading or exaggerated?
Relevance: Does it directly relate to and complete the idea expressed in the prompt?

Rate the continuation on a scale from 1 to 10, where:
1 = incoherent, irrelevant, or medically incorrect
5 = somewhat relevant or plausible, but lacking clarity or precision
10 = fully coherent, factually accurate, and medically relevant

You should ignore the irrelevant answer if the relevant part makes good continuation. You must not hallucinate or invent facts.
Use only information in the prompt and completion.

Respond in this format:
Prompt: question
Continuation: answer
Rating: (your rating, as a float between 1 and 10)
Rationale: (Brief explanation of the score)
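As a small illustration of how judge replies in this format can be consumed downstream, the helper below (our own utility, not part of the paper's released code) extracts the numeric rating and the rationale from a response that follows the template above:

```python
import re

def parse_judge_response(text):
    """Pull the float rating and the rationale out of a judge reply that
    follows the 'Rating: ... / Rationale: ...' response format."""
    rating_match = re.search(r"Rating:\s*([0-9]+(?:\.[0-9]+)?)", text)
    if rating_match is None:
        raise ValueError("no 'Rating:' field found in judge response")
    rationale_match = re.search(r"Rationale:\s*(.+)", text, re.DOTALL)
    rationale = rationale_match.group(1).strip() if rationale_match else ""
    return float(rating_match.group(1)), rationale

reply = (
    "Prompt: Common causes of balance problems in older people include\n"
    "Continuation: medications, ear problems, and spinal and neck problems.\n"
    "Rating: 7.0\n"
    "Rationale: More technical; introduces helpful treatment concepts."
)
rating, rationale = parse_judge_response(reply)
print(rating)     # → 7.0
print(rationale)  # → More technical; introduces helpful treatment concepts.
```

Ratings parsed this way can then be averaged across prompts and runs to produce the mean ± std entries reported in Table 4.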
EMULATE: A Multi-Agent Framework for Determining the Veracity of Atomic Claims by Emulating Human Actions

Spencer Hong, Meng Luo, Xinyi Wan
National University of Singapore

Abstract

Determining the veracity of atomic claims is an imperative component of many recently proposed fact-checking systems. Many approaches tackle this problem by first retrieving evidence by querying a search engine and then performing classification by providing the evidence set and atomic claim to a large language model, but this process deviates from what a human would do in order to perform the task. Recent work attempted to address this issue by proposing iterative evidence retrieval, allowing for evidence to be collected several times and only when necessary. Continuing along this line of research, we propose a novel claim verification system, called EMULATE, which is designed to better emulate human actions through the use of a multi-agent framework where each agent performs a small part of the larger task, such as ranking search results according to predefined criteria or evaluating webpage content. Extensive experiments on several benchmarks show clear improvements over prior work, demonstrating the efficacy of our new multi-agent framework. Our code is available at https://github.com/qqqube/EMULATE .

1 Introduction

To prevent the spread of misinformation, a multitude of automated fact-checking systems have recently been proposed in the natural language processing community (Xie et al., 2025; Wang et al., 2024; Singal et al., 2024; Kim et al., 2024; Chen et al., 2024; Chern et al., 2023; Pan et al., 2023; Wang and Shu, 2023). For example, Chern et al. (2023) and Wang et al. (2024) introduced frameworks that break texts into atomic claims and equip LLMs with the ability to use web search tools to retrieve evidence for verifying the claims.
Other works have also considered taking iterative approaches for search query generation (Wei et al., 2024) as well as for the whole retrieval/verification process (Xie et al., 2025).

Continuing along this line of research, we propose a novel system, called EMULATE, that takes an atomic claim as input and determines the veracity of the claim by retrieving evidence from the web and mimicking human actions. More specifically, we employ a multi-agent framework that consists of agents for generating search queries, determining the credibility and relevance of search results, evaluating webpage content, assessing collections of evidence, and performing classification. By having each agent execute a small part of the larger task, our system can successfully guide the underlying language models in retrieving important information from external resources, which ultimately leads to an improvement in classification performance.

The closest work to ours is FIRE (Xie et al., 2025), which consists of three components: one for either outputting the final answer or generating the next search query, another for making web searches and retrieving the snippets of the search results, and a third for final verification after a maximum number of retrieval steps has been reached. Though FIRE also makes use of several agents, our framework further breaks down
the fact verification process by trying to understand why additional evidence is needed at each step, which enables the system to ignore redundant and irrelevant search results and generate high-quality queries that can better enhance the system's evidence set.

To evaluate the efficacy of our framework, we perform experiments on a variety of fact-checking benchmarks. Our results show a clear improvement over prior work and demonstrate the effectiveness of using multi-agent systems to tackle complex tasks like fact verification.

arXiv:2505.16576v1 [cs.CL] 22 May 2025

Figure 1: Claim veracity classification algorithm. The input to the algorithm is an atomic claim c along with two values specifying the maximum number of search queries that can be made and the maximum number of search results returned per query. The output of the algorithm is a binary label indicating the veracity of the claim. Each LLM agent is highlighted in yellow.

2 Related Work

Many existing automatic fact-checking pipelines adopt the Decompose-Then-Verify paradigm, which first decomposes a text into several atomic claims and then verifies each claim individually (Hu et al., 2024; Wei et al., 2024; Song et al., 2024; Min et al., 2023). Several approaches for the latter task of verifying individual claims (which is the task that we focus on in this work) begin by retrieving evidence via web search and then feeding the evidence and the claim to a language model for final verification (Chern et al., 2023; Wang et al., 2024). Though this is sometimes effective, a shortcoming is the misalignment between this process and the process humans follow when doing the task. Recent works address this through iterative evidence retrieval (Xie et al., 2025), which allows for evidence to be collected several times and only when it is considered necessary. We build on this idea with EMULATE, which also incorporates iterative retrieval and verification.

3 Methodology

Emulating Human Actions.
Our multi-agent framework is designed to emulate human actions. If a human were trying to verify a claim using the Internet, they would start by making a search query that they think would be helpful, which will return many results/links. They would then select a link to click on based on the credibility of the source (which can be inferred from the URL) as well as the relevance (which can be guessed by looking at the title and snippet). After clicking on a link and reading the text, one of the following scenarios will occur:

(a) The document is self-contained and the human has sufficient information to determine the claim's veracity.
(b) The document is self-contained and the human was able to acquire knowledge that is helpful for the task, but more information is needed.
(c) The document is self-contained, but completely irrelevant.
(d) The document is not self-contained.

If scenario (a) occurs, the human is done with the task. If scenario (b) occurs, the human should retain the information acquired from the text and then think of additional search queries required for completing the task. In scenario (c), the human should visit another link that was returned in
the response to the initial search query. In scenario (d), the human would need to formulate additional search queries to fill in the gaps. To the best of our knowledge, our system is the first fact-checking algorithm to follow this process.

Figure 2: Results on the entire GPT-4.1 model family. The strongest/weakest model according to OpenAI is GPT-4.1/GPT-4.1-nano.

Dataset          #True  #False  Total
FacTool-KBQA       177      56    233
BingCheck          160      42    202
Factcheck-Bench    472     159    631

Table 1: Dataset statistics for FacTool-KBQA, BingCheck, and Factcheck-Bench.

A Novel Multi-Agent Framework. Our fact-checking algorithm is shown in Figure 1 and makes use of the following LLM-powered agents:

(1) InitialQueryGen: Generates a list of initial search queries given a claim.
(2) SearchRank: Given a query and a list of corresponding search results (each result consists of a title, a URL, and a snippet), outputs a sorted list of the results based on relevance and credibility.
(3) SelfContainedCheck: Given a claim, the evidence set so far, and a new search result, determines if the content of the new webpage is comprehensible (i.e., it is either self-contained or can be understood in light of the information already in the evidence set).
(4) DetHelpful: Given a claim, the evidence set so far, and a new comprehensible search result, determines if the search result provides new information that isn't already in the current evidence set and whether it would be helpful for veracity checking.
(5) SufficientEvidence: Given a claim and the evidence set so far, determines if there is sufficient evidence to perform classification.
(6) Classifier: Given a claim and the evidence set, outputs a classification label.
(7) AdditionalQueryGen: Given a claim and the evidence set, outputs a list of search queries to enhance the existing evidence set.
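The way these agents fit together can be sketched as follows. This is a hypothetical outline of the control flow described above, with each `agents.*` method standing in for an LLM call; the names mirror the agent list, but the signatures, the `search` callable, and the stub logic are illustrative assumptions rather than the paper's implementation.

```python
def verify_claim(claim, agents, search, max_queries=4, results_per_query=2):
    """Hypothetical sketch of the EMULATE control flow (Figure 1).

    `agents` bundles the seven LLM-backed agents; `search` maps a query
    string to a list of raw search results. All details here are
    illustrative assumptions, not the paper's actual implementation.
    """
    evidence, pending = [], []          # accepted results / non-self-contained ones
    queue = list(agents.initial_query_gen(claim))
    queries_made = 0
    while queue and queries_made < max_queries:
        query = queue.pop(0)
        queries_made += 1
        # rank raw results by credibility and relevance, keep the top ones
        ranked = agents.search_rank(query, search(query))[:results_per_query]
        for result in ranked:
            if not agents.self_contained_check(claim, evidence, result):
                pending.append(result)  # scenario (d): revisit at the end
            elif agents.det_helpful(claim, evidence, result):
                evidence.append(result)
        if agents.sufficient_evidence(claim, evidence):
            return agents.classifier(claim, evidence)
        queue.extend(agents.additional_query_gen(claim, evidence))
    # walk through stored non-self-contained results once the loop ends
    for result in pending:
        if agents.self_contained_check(claim, evidence, result):
            evidence.append(result)
    return agents.classifier(claim, evidence)
```

With stub agents substituted for the LLM calls, the same loop structure can be exercised end to end, which is how the design separates control flow from the individual subtasks.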
Note that in Figure 1, when the algorithm encounters scenario (d), it stores the result instead of making additional search queries to fill in the gaps, and walks through the stored results at the end if it has not yet terminated during the while loop, since something that was once not self-contained could become self-contained after the memory bank has changed. This design choice was made to prioritize processing self-contained evidence pieces and to minimize the number of queries that need to be made.

4 Experiments

Datasets and Metrics. We evaluate EMULATE along with other systems on three datasets that each provide annotations at the level of atomic claims: FacTool-KBQA (Chern et al., 2023), BingCheck (Li et al., 2024), and Factcheck-Bench (Wang et al., 2024). FacTool-KBQA is a subset of the dataset introduced in Chern et al. (2023) for the knowledge-based QA task, with 233 claims labeled as either True or False. BingCheck (Li et al., 2024) consists of atomic claims annotated with four possible labels (supported, refuted, partially supported, and not supported). We retain supported and refuted examples only and convert their labels to True and False, respectively. We also use only a portion of the supported examples to control the class imbalance. Factcheck-Bench (Wang et al., 2024) provides
661 checkworthy claims human-annotated as True, False, or Unknown. We ignore the Unknown examples and sample 631 claims for our experiments. See Table 1 for full dataset statistics.

To quantify performance, we report the precision, recall, and F1 scores for each class. We also provide the macro-F1 score, which aggregates the label-wise F1 scores by averaging, and the weighted-F1 score, which better accounts for class imbalance.

Baselines. We compare our multi-agent system with four baselines: (1) FACTOOL (Chern et al., 2023), (2) FACTCHECK-GPT (Wang et al., 2024), (3) SAFE (Wei et al., 2024), and (4) FIRE (Xie et al., 2025). Note that FIRE is the only baseline that was designed to take as input an atomic claim and output True or False (like EMULATE). In each of the other three baselines, checking the veracity of atomic claims is one step in the algorithm, which means that modifications are required to make comparisons.

Implementation. For our main experiments, we employ OpenAI's GPT-4.1 model (gpt-4.1-2025-04-14) with a

Dataset          Method          True              False             M-F1   W-F1
                                 P     R     F1    P     R     F1
BingCheck        FacTool         0.92  0.84  0.88  0.55  0.71  0.62  0.75   0.83
                 FactCheck-GPT   0.90  0.56  0.69  0.31  0.76  0.44  0.56   0.64
                 SAFE            0.88  0.72  0.79  0.37  0.62  0.46  0.62   0.72
                 FIRE            0.91  0.87  0.89  0.58  0.69  0.63  0.76   0.84
                 EMULATE         0.91  0.96  0.93  0.79  0.62  0.69  0.81   0.88
FacTool-KBQA     FacTool         0.91  0.84  0.87  0.59  0.73  0.65  0.76   0.82
                 FactCheck-GPT   0.86  0.47  0.61  0.31  0.75  0.44  0.53   0.57
                 SAFE            0.89  0.87  0.88  0.61  0.64  0.63  0.76   0.82
                 FIRE            0.90  0.88  0.89  0.63  0.68  0.66  0.78   0.83
                 EMULATE         0.89  0.92  0.91  0.72  0.64  0.68  0.80   0.85
Factcheck-Bench  FacTool         0.93  0.74  0.82  0.52  0.82  0.64  0.73   0.77
                 FactCheck-GPT   0.90  0.52  0.66  0.37  0.83  0.51  0.58   0.62
                 SAFE            0.92  0.78  0.84  0.55  0.79  0.65  0.74   0.79
                 FIRE            0.93  0.81  0.87  0.59  0.81  0.68  0.78   0.82
                 EMULATE         0.90  0.89  0.90  0.70  0.72  0.71  0.80   0.85

Table 2: For each claim verification system, we report the label-wise precision, recall, and F1 scores along with the
Macro-F1 (M-F1) and Weighted-F1 (W-F1) scores. The best results on each dataset are shown in bold, while the second best results are underlined.

Dataset          Method    True F1  False F1  Weighted-F1
FacTool-KBQA     RM-SR     0.87     0.57      0.80
                 RM-SCC    0.90     0.63      0.84
                 EMULATE   0.91     0.68      0.85
Factcheck-Bench  RM-SR     0.88     0.68      0.83
                 RM-SCC    0.88     0.66      0.82
                 EMULATE   0.90     0.71      0.85

Table 3: Ablation studies on FacTool-KBQA and Factcheck-Bench. RM-SR/RM-SCC means that SearchRank/SelfContainedCheck was removed from EMULATE.

temperature of 1 for all agents in EMULATE as well as the baseline systems. All EMULATE agents are provided with zero-shot prompts that contain instructions for the subtasks. Unless otherwise stated, MAX_SEARCH_QUERIES and MAX_SEARCH_RESULTS_PER_QUERY are set to 4 and 2, respectively. To make web searches, we invoke API calls with serper.dev.

5 Results

Our main results are presented in Table 2. From them, we can see that EMULATE outperforms all baselines on every dataset on 6 of the 8 metrics we compute. Notably, EMULATE consistently achieves the best results on both
label-wise F1 scores, the macro-F1 score, and the weighted-F1 score, which confirms the effectiveness of our design. We also observe that FIRE always achieves the second-best results, which is likely attributable to its iterative retrieval mechanism.

To gain a better understanding of the impact that different agents have on our system, we conduct ablation studies on FacTool-KBQA and Factcheck-Bench. In particular, we quantify the effect of removing (1) SearchRank and (2) SelfContainedCheck. From Table 3, we can see that excluding SearchRank leads to performance degradation on both datasets (more heavily on FacTool-KBQA), which tells us that the SearchRank agent can effectively sort a list of search results according to the aforementioned criteria. We also find degradation on both datasets when removing the SelfContainedCheck agent, which shows that this agent can effectively evaluate and filter search results.

Lastly, we run experiments on the entire GPT-4.1 model family to determine if EMULATE still works well when the underlying LLM of each agent is supplanted with a weaker model. According to Figure 2, as the underlying LLM weakens, the weighted-F1 scores decrease as well. Intuitively, weaker models are expected to be less performant on the subtasks in EMULATE, which can lead to suboptimal results. However, the performance of EMULATE when equipped with GPT-4.1-mini is sometimes close to the performance with GPT-4.1.

6 Conclusion

In this paper, we proposed a novel approach for determining the veracity of atomic claims, which is designed to emulate human actions through a multi-agent framework. Through extensive experiments, we showed that our system, EMULATE, outperforms previously introduced algorithms for the task and can work well even when used with a weaker base LLM. We also reported the results of ablation studies, which confirmed the effectiveness of several agents.
Limitations

Evaluation of our system requires datasets that have veracity annotations at the level of atomic claims. Due to the scarcity of such datasets, we were only able to evaluate on three, and each contained fewer than 1,000 examples. Additionally, these datasets have a class imbalance issue (i.e., there are significantly fewer False claims than True claims).

Another shortcoming lies in our design choice of processing documents that aren't self-contained at the end of the algorithm. Future work should investigate other alternatives, since for some claims it may not be possible to perform claim verification without providing search results that aren't self-contained as evidence.

References

Jifan Chen, Grace Kim, Aniruddh Sriram, Greg Durrett, and Eunsol Choi. 2024. Complex claim verification with evidence retrieved in the wild. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3569–3587.

I Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu, and 1 others. 2023. FacTool: Factuality detection in generative AI – a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528.

Qisheng Hu, Quanyu
Long, and Wenya Wang. 2024. Decomposition dilemmas: Does claim decomposition boost or burden fact-checking performance? arXiv preprint arXiv:2411.02400.

Kyungha Kim, Sangyun Lee, Kung-Hsiang Huang, Hou Pong Chan, Manling Li, and Heng Ji. 2024. Can LLMs produce faithful explanations for fact-checking? Towards faithful explainable fact-checking via multi-agent debate. arXiv preprint arXiv:2402.07401.

Miaoran Li, Baolin Peng, Michel Galley, Jianfeng Gao, and Zhu Zhang. 2024. Self-checker: Plug-and-play modules for fact-checking with large language models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 163–181.

Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore. Association for Computational Linguistics.

Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. 2023. Fact-checking complex claims with program-guided reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6981–7004, Toronto, Canada. Association for Computational Linguistics.

Ronit Singal, Pransh Patwa, Parth Patwa, Aman Chadha, and Amitava Das. 2024. Evidence-backed fact checking using RAG and few-shot in-context learning with LLMs. In Proceedings of the Seventh Fact Extraction and VERification Workshop (FEVER), pages 91–98.

Yixiao Song, Yekyung Kim, and Mohit Iyyer. 2024. VeriScore: Evaluating the factuality of verifiable claims in long-form text generation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9447–9474, Miami, Florida, USA. Association for Computational Linguistics.

Haoran Wang and Kai Shu.
2023. Explainable claim verification via knowledge-grounded reasoning with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6288–6304.

Yuxia Wang, Revanth Gangi Reddy, Zain Mujahid, Arnav Arora, Aleksandr Rubashevskii, Jiahui Geng, Osama Mohammed Afzal, Liangming Pan, Nadav Borenstein, Aditya Pillai, and 1 others. 2024. Factcheck-bench: Fine-grained evaluation benchmark for automatic fact-checkers. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14199–14230.

Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Zixia Hu, Jie Huang, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, and Quoc V Le. 2024. Long-form factuality in large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Zhuohan Xie, Rui Xing, Yuxia Wang, Jiahui Geng, Hasan Iqbal, Dhruv Sahnan, Iryna Gurevych, and Preslav Nakov. 2025. FIRE: Fact-checking with iterative retrieval and verification. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 2901–2914, Albuquerque, New Mexico. Association for Computational Linguistics.
arXiv:2505.16582v2 [cs.CL] 26 May 2025

O2-Searcher: A Searching-based Agent Model for Open-Domain Open-Ended Question Answering

Jianbiao Mei1,2,*, Tao Hu3,2,*, Daocheng Fu2,4,*, Licheng Wen2, Xuemeng Yang2, Rong Wu1,2, Pinlong Cai2, Xinyu Cai2, Xing Gao2, Yu Yang1, Chengjun Xie3, Botian Shi2,†, Yong Liu1,5,†, Yu Qiao2
1Zhejiang University  2Shanghai Artificial Intelligence Laboratory  3University of Science and Technology of China  4Fudan University  5State Key Laboratory of Industrial Control Technology

Abstract

Large Language Models (LLMs), despite their advancements, are fundamentally limited by their static parametric knowledge, hindering performance on tasks requiring open-domain, up-to-date information. While enabling LLMs to interact with external knowledge environments is a promising solution, current efforts primarily address closed-ended problems. Open-ended questions, which are characterized by lacking a standard answer or by admitting non-unique and diverse answers, remain underexplored. To bridge this gap, we present O2-Searcher, a novel search agent leveraging reinforcement learning to effectively tackle both open-ended and closed-ended questions in the open domain. O2-Searcher leverages an efficient, locally simulated search environment for dynamic knowledge acquisition, effectively decoupling external world knowledge from the model's sophisticated reasoning processes. It employs a unified training mechanism with meticulously designed reward functions, enabling the agent to identify problem types and adapt different answer generation strategies. Furthermore, to evaluate performance on complex open-ended tasks, we construct O2-QA, a high-quality benchmark featuring 300 manually curated, multi-domain open-ended questions with associated web page caches. Extensive experiments show that O2-Searcher, using only a 3B model, significantly surpasses leading LLM agents on O2-QA.
It also achieves SOTA results on various closed-ended QA benchmarks against similarly-sized models, while performing on par with much larger ones. Code is available at https://github.com/Acade-Mate/O2-Searcher.

1 Introduction

Large Language Models (LLMs) have demonstrated remarkable progress in diverse tasks such as mathematical reasoning [41] and code generation [26], owing to their advanced understanding and generative capabilities. Despite these advancements, the capability boundaries of current LLMs are fundamentally bounded by their parametric knowledge, i.e., the information encoded during pre-training, which serves as the basis for their responses [21]. This architecture faces inherent constraints: (1) Real-world knowledge is both dynamic and unbounded, making it impossible for any finite model to fully encapsulate its breadth and evolution. (2) Information becomes obsolete rapidly (e.g., news updates may invalidate prior facts overnight), novel disciplines and discoveries continually arise, and specialized domains often demand expertise far beyond the scope of general training data. As a result, dependence on this static "snapshot" knowledge inevitably hampers performance on open-domain tasks that require real-time updates, deep specialization, or cross-domain integration, frequently leading to factual inaccuracies or hallucinations [37, 60, 68, 21].

* equal contribution, † corresponding author
Preprint. Under review.

Figure 1: Illustration of the different characteristics of closed-ended and open-ended questions.

One way to address the above challenge is to let LLMs interact with external knowledge environments, decoupling the storage of external world knowledge from the model's core reasoning capabilities. The central idea of this strategy is to focus on training the model to master a set of capabilities for efficiently utilizing external knowledge resources.
In other words, we do not directly "feed" the model knowledge itself, but teach it how to find, | https://arxiv.org/abs/2505.16582v2 |
understand, and apply knowledge. Recently, several works [27, 69, 17] have begun exploring training LLMs with search engines to achieve adaptive interaction with external knowledge for open-domain tasks. However, these efforts primarily focus on evaluating model performance on so-called closed-ended or deterministic problems, which are typically defined by clear objectives and standard answers.

Unlike closed-ended questions that seek precise answers, many real-world tasks involve open-ended or exploratory questions. As shown in Fig. 1, such questions typically lack a single, definitive answer, often requiring extensive, multi-turn search and yielding comprehensive responses that encompass multiple key findings. Given the intrinsic nature of these open-ended problems, developing effective methods to efficiently train and rigorously evaluate LLM-generated responses remains an active area of exploration. While open-ended and closed-ended questions have different formats and answer characteristics, solving either requires LLMs to combine external information retrieval with internal reasoning processes.

To this end, we propose O2-Searcher, a reinforcement learning (RL)-based search agent for the open domain, primarily focused on tackling demanding open-ended problems, yet also demonstrating strong performance on closed-ended ones. Specifically, to enable O2-Searcher to acquire external information rapidly and cost-effectively, we develop a search environment where the agent can freely explore. This local environment simulates an online search engine, returning several relevant cached web pages based on queries generated by the agent. To guide the agent's evolutionary trajectory, we design a series of reward functions to ensure the model simultaneously develops the capability to solve both open-ended and closed-ended problems.
Furthermore, we construct a manually curated open-domain open-ended question-answering (QA) benchmark, named O2-QA, to effectively evaluate LLMs' performance on open-ended problems. Experimental results demonstrate that our O2-Searcher, despite using only a small 3B-parameter model, significantly surpasses the performance of state-of-the-art (SOTA) LLM agents on the O2-QA benchmark. Moreover, on multiple closed-ended QA benchmarks, our O2-Searcher not only achieves SOTA performance among models of comparable size but also performs on par with larger models. Our main contributions are summarized as follows:

• We introduce a novel RL-based search agent, O2-Searcher, which dynamically acquires and flexibly utilizes external knowledge via an efficient, local search environment. This design enables an effective decoupling of the LLM's internal knowledge from its sophisticated reasoning processes.
• We propose a unified training mechanism, allowing the agent to efficiently handle both open-ended and closed-ended question types. Through meticulously designed reward functions, O2-Searcher learns to identify problem types and adaptively adjust its answer generation strategies.
• We construct O2-QA, a high-quality open-domain QA benchmark specifically designed to evaluate LLMs' performance on complex open-ended questions. It comprises 300 manually curated open-ended questions from diverse domains, along with ~30k associated cached web pages.
• Extensive experiments show that O2-Searcher, which uses a 3B base LLM, significantly outperforms other SOTA LLM agents on O2-QA. It also achieves SOTA on multiple closed-ended QA benchmarks against comparably-sized models, with performance matching 7B models.

2 Methodology

2.1 Overview

In this section, we present O2-Searcher, which is capable of handling both open-ended and closed-ended questions in the open domain.
O2-Searcher interacts with the search environment to find, understand, and apply | https://arxiv.org/abs/2505.16582v2 |
corresponding knowledge, potentially engaging in knowledge-aware multi-round interactions to generate the final answer. Let q_0 be the initial input query, K_0 represent the initial internal knowledge state, and E be the search environment. In each round t, O2-Searcher autonomously evaluates its current knowledge state K_t and determines whether to formulate the final answer a_pred or to continue gathering information. If more information is needed, it identifies knowledge gaps within K_t, generates subsequent search queries Q_t = G(K_t), interacts with E to retrieve information I_t = S(E, Q_t), and updates its internal knowledge state to K_{t+1}. This cycle of knowledge-aware assessment, targeted search, and knowledge update repeats until K_t is deemed sufficient, at which point the final answer a_pred = G(K_t) is generated and output.

2.2 Search Environment

While the open web provides a suitable dynamic search environment for open-domain question answering, practical applications in training scenarios are limited by slow API responses and high costs at scale. To address this, we develop a specialized search environment E for our O2-Searcher that efficiently supports the discovery, processing, and utilization of external knowledge across various scenarios. This environment combines aggregated web pages for open-ended inquiries and structured Wikipedia-derived knowledge for closed-ended question answering. These heterogeneous sources are unified in a locally hosted knowledge corpus, enabling rapid information retrieval through dedicated tools, followed by knowledge condensation via our condensation module.

Knowledge corpus. To train O2-Searcher, we create a knowledge corpus for both open-ended exploration and closed-ended question answering.
For open-ended tasks, we develop a specialized dataset comprising 300 expert-curated questions across cutting-edge domains (including AI, philosophy, sociology, energy transition, geopolitics, education, healthcare, bioethics, Web3, and the metaverse). As shown in Fig. 2, each question undergoes systematic processing through a prompt-engineered LLM agent, leveraging commercial LLMs and the Serper API^1 for web retrieval. To model diverse retrieval behaviors, we execute randomized search trajectories per question. All retrieved content is cached to construct the retriever index.

Figure 2: The construction of the knowledge corpus for open-ended questions.

For closed-ended queries, we follow [17] and locally index the 2018 Wikipedia dump [20] to facilitate efficient structured knowledge retrieval. By integrating these two types of content, we establish a specialized search environment E with a locally hosted knowledge corpus, enabling O2-Searcher to efficiently acquire external knowledge during execution to generate answers in both closed-ended and open-ended question-solving.

Retrieval tools. In the context of open-ended retrieval, we utilize MeiliSearch^2 to query our curated web corpus, leveraging its high-throughput, low-latency performance and flexible ranking mechanisms to closely emulate real-world web search dynamics. For closed-ended retrieval, we implement an E5-based dense retriever [48] over the 2018 Wikipedia dump [20], maintaining consistency with [17] while benefiting from the high-precision passage ranking enabled by contrastively trained CCPairs embeddings. Both retrievers are configured to return the top-k relevant passages per query based on their respective relevance scores. The passages retrieved for the given queries in the t-th round of interaction are combined as the retrieval information I_t.

Information condensation.
The retrieval information I_t for open-ended queries typically contains extensive raw web content, creating a significant computational burden in subsequent
steps. Rather than directly incorporating raw content into the knowledge state K_{t+1}, we introduce a condensation

^1 https://serper.dev
^2 https://www.meilisearch.com

Figure 3 (example trajectories):

Closed-ended question:
System: SYSTEM_PROMPT
User: Initial query: when did Ford stop producing the 7.3 diesel?
Assistant: <think> To determine when Ford stopped producing the 7.3 diesel, we need to first understand the history of the 7.3 diesel engine in Ford vehicles. </think> <search> <query> when did Ford stop producing the 7.3 diesel </query> </search>
User (Env_feedback): <learnings> … As the 7.3L V8 was no longer able to comply with emissions regulations for diesel engines, it was discontinued midway through the 2003 model year. … </learnings> …
Assistant: <think> Now we know that Ford stopped producing the 7.3 diesel in 2003. Next, we can provide the answer. </think> <answer> 2003 </answer>

Open-ended question:
System: SYSTEM_PROMPT
User: Initial query: How can intelligent manufacturing achieve sustainable industrial upgrading through innovative paths?
Assistant: <think> … it is necessary to first understand the current status of intelligent manufacturing … </think> <search> <query> The current status of intelligent manufacturing in various industries </query> <query> Definition and connotation of sustainable industrial upgrading </query> … </search>
User (Env_feedback): <learnings> … sustainable industrial upgrading involves the integration of advanced materials … </learnings>
Assistant: <think> Based on the information obtained, … </think> <search> <query> Development of new energy vehicles in recent years </query> <query> … </query> … </search>
User (Env_feedback): <learnings> … value of new energy vehicles … </learnings> …
Assistant: <think> Based on the information obtained from multiple rounds of search … </think> <answer> - Sustainable industrial upgrading in intelligent manufacturing is supported by various innovative approaches, including the use of artificial intelligence, robotics, and automation.
- The adoption of innovative technologies in intelligent manufacturing plays a crucial role in promoting sustainable industrial upgrading, especially in reducing resource consumption. … </answer>

Figure 3: We use multi-round conversations for modeling action trajectories to enhance interactivity. The agent reasons in <think> tags, searches via <search> (with specific queries in <query>), and answers in <answer> when ready. The search content fed back by the search environment is presented in <learnings>. For the open-ended problem, multiple potential queries are generated.

module that extracts structured learnings from I_t. Through prompting commercial LLMs, the condensation module adaptively compresses input passages and extracts query-relevant items while maintaining semantic integrity. It employs length-aware compression: preserving most content with minimal restructuring for concise documents, while selectively retaining only salient information from extensive documents. This process eliminates redundant context while preserving query-relevant knowledge, reducing computational requirements and improving reasoning efficiency.

2.3 Training Recipe

In this section, we elaborate on the training process of our O2-Searcher for internalizing interaction and reasoning skills, enabling it to effectively find, understand, and apply relevant knowledge.

Chat template. As depicted in Fig. 3, we utilize a multi-round conversational methodology for modeling action trajectories to improve interactivity. The agent is instructed to conduct its reasoning within <think> tags prior to undertaking actions such as searching or answering. For information retrieval, the agent issues search actions encapsulated in <search> tags, which may
include up to five distinct queries, each delimited by <query> tags. Subsequently, the retrieved data is presented within <learnings> tags. Ultimately, once the model deems the gathered information sufficient, it delivers its answer within <answer> tags.

Cold start. Drawing inspiration from DeepSeek-R1 [9], we first apply a cold start to alleviate early instability and reduce the burden of learning the complex output format during RL training, especially for open-ended questions. This involves fine-tuning the instruction model on a small, curated dataset of CoT interaction trajectories to establish the initial RL actor. The data collection strategy varies by problem type. For open-ended questions, we employ the prompt-engineered LLM agent described in Section 2.2 to generate trajectories that capture detailed reasoning, search actions, retrieved data, and the final answer (e.g., key findings). For closed-ended questions, we apply Search-R1 [17] to create example trajectories using small, randomly sampled subsets (~1k samples) from the NQ [23] and HotpotQA [57] training datasets.

GRPO. As shown in Fig. 4, we utilize the Group Relative Policy Optimization (GRPO) algorithm [41] to reduce the computational resources required for RL training. For each input query in GRPO, a group of G rollout trajectories, denoted as τ = {y_i}_{i=1}^G, is generated using the preceding policy π_old; the current policy model π_θ is subsequently optimized by maximizing the objective function:

    L(θ) = E_{x ∼ D, {y_i}_{i=1}^G ∼ π_old(·|x)} [ (1/G) Σ_{i=1}^G min( r_i(θ) A_i, clip(r_i(θ), 1−ϵ, 1+ϵ) A_i ) − β · D_KL[π_θ ‖ π_ref] ]    (1)

where π_ref denotes the reference model and r_i(θ) = π_θ(y_i|x) / π_old(y_i|x).
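A minimal sketch of the group-relative computation behind Eq. (1): in standard GRPO practice (not spelled out in the text above), each rollout's advantage A_i is obtained by normalizing its reward against the mean and standard deviation of its group, and each trajectory then contributes the clipped term min(r·A, clip(r, 1−ϵ, 1+ϵ)·A). The normalization step is an assumption drawn from the GRPO literature rather than from this paper.

```python
import math


def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each rollout's reward by the
    mean/std of its group (standard GRPO practice; an assumption here)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [(r - mean) / (std + eps) for r in rewards]


def clipped_term(ratio, advantage, eps=0.2):
    """Per-trajectory term min(r*A, clip(r, 1-eps, 1+eps)*A) from Eq. (1)."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

For a group of four rollouts rewarded [1, 0, 1, 0], the advantages come out as roughly [1, −1, 1, −1], so better-than-group-average trajectories are reinforced and worse ones are suppressed.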
Figure 4: GRPO training with interaction with the search environment. The policy model is optimized using GRPO while interacting with the local search environment, leveraging a reference model and rollout outputs from the preceding policy model. For closed-ended questions, optimization is driven by a Factual reward. For open-ended questions targeting key findings, training is guided by composite reward signals consisting of Format, Diversity, and Factual rewards.

2.4 Reward Design

During the reinforcement training stage, we combine open-ended questions with closed-ended questions. Given the disparities in the format and characteristics of the answers for these two types of questions, we employ distinct reward functions to steer the training process. Specifically, when dealing with closed-ended questions with exact answers, the training reward is directly correlated with the format correctness and accuracy of the generated answer. Following [17], we only evaluate the final outcome and adopt rule-based criteria as the factual reward, which is formulated as follows:

    r_c = { I(a_pred = a_gt),  if the format is correct
          { 0,                 if the format is incorrect    (2)

where a_pred and a_gt are the predicted answer and the ground truth, both converted to lowercase, and I(·) is the indicator function. Regarding the format, an answer is considered correctly formatted when it is enclosed within <answer> tags.

For open-ended questions, we organize the answers into a set of key findings, which usually take the form of long-form content and cover multiple aspects.
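The rule-based factual reward of Eq. (2) for closed-ended questions can be sketched as follows; the regex-based check for the <answer> tags is an assumption about how the format condition might be implemented.

```python
import re


def factual_reward(response, ground_truth):
    """Sketch of Eq. (2): returns I(a_pred = a_gt) when the response
    contains a correctly formatted <answer> block, and 0 otherwise.
    Both sides are lowercased, as described in the text. The regex
    format check is an illustrative assumption."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    if match is None:  # format incorrect: no <answer> tags found
        return 0
    a_pred = match.group(1).strip().lower()
    a_gt = ground_truth.strip().lower()
    return int(a_pred == a_gt)
```

For the closed-ended example in Figure 3, a response ending in "<answer> 2003 </answer>" against the ground truth "2003" would earn a reward of 1, while an untagged or mismatched answer earns 0.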
To better guide the optimization process and generate high-quality answers in the desired format, we design a more sophisticated reward function, composed of the following three parts:

Format reward. In addition to the requirement of adhering to the basic format, where answers are enclosed within <answer> tags, we also expect the model to generate answers featuring diverse findings presented in Markdown list format. Consequently, answers with incorrectly formatted items are penalized. Moreover, to promote diversity among the items, we impose penalties for similarity and duplication between them. The detailed procedure is as follows:

r_{o,fm} = α_0 · (1 − n_err / n_tot) + α_1 · [1 − s(a_pred)]^δ − α_2 · (1 − n_ind / n_tot)    (3)

where n_tot is the total number of items generated, n_err counts items with incorrect formatting, n_val counts correctly formatted items (n_err + n_val = n_tot), and n_ind counts the unique items. Furthermore, α_0, α_1, α_2 are non-negative weighting hyper-parameters (potentially constrained, e.g., α_1 + α_2 = 1, treating α_2 as a separate penalty weight), and δ is an exponent. s(a_pred) evaluates the similarities among the finding items extracted from the generated answer a_pred:

s = w_0 · max(u) + w_1 · avg(u[u > s_thr]) + w_2 · avg(u)    (4)

where u is the vector of pairwise similarity scores derived from the vector representations of the items, and s_thr is a threshold for identifying highly similar pairs. The hyper-parameters w_0, w_1, and w_2 are non-negative, normalized weights.

Diversity reward. To encourage comprehensive information gathering through varied queries, we introduce a diversity reward r_{o,div}. This reward evaluates the semantic distinctiveness among the generated queries Q = {q_1, q_2, ..., q_{n_q}} during the rollout process. First, we employ a pre-trained language model [49] to encode each query q_i into an embedding vector ψ(q_i). We then compute the pairwise cosine similarities between all query embeddings, resulting in a query similarity matrix:

S^q_{i,j} = ψ(q_i)^⊤ ψ(q_j) / (‖ψ(q_i)‖ · ‖ψ(q_j)‖)    (5)

From this, we derive a query independence matrix T, where each element represents the dissimilarity between a pair of queries.
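The format reward of Eqs. (3)-(4) can be sketched as follows. The weight values, threshold, and exponent are illustrative placeholders, not the paper's settings; the similarity vector u is assumed to be precomputed from item embeddings.

```python
import numpy as np

def similarity_score(sim_pairs, s_thr=0.8, w=(0.4, 0.4, 0.2)):
    """s(a_pred) of Eq. (4): weighted mix of the max similarity, the mean
    of highly similar pairs (above s_thr), and the overall mean of the
    pairwise item similarities u. Weights w are hypothetical."""
    u = np.asarray(sim_pairs, dtype=float)
    high = u[u > s_thr]
    avg_high = high.mean() if high.size else 0.0
    return w[0] * u.max() + w[1] * avg_high + w[2] * u.mean()

def format_reward(n_tot, n_err, n_ind, s_pred, alpha=(0.4, 0.4, 0.2), delta=2.0):
    """Eq. (3): reward well-formed items, penalize inter-item similarity
    (via s_pred = s(a_pred)) and duplicated items."""
    a0, a1, a2 = alpha
    return (a0 * (1 - n_err / n_tot)
            + a1 * (1 - s_pred) ** delta
            - a2 * (1 - n_ind / n_tot))
```

With these placeholder weights, ten well-formed, fully unique, dissimilar items (n_err = 0, n_ind = n_tot, s_pred = 0) score α_0 + α_1 = 0.8, and any malformed or duplicated items reduce that score.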
The diversity score for each individual query q_i is then calculated as its average independence from all other queries:

s_i = (1 / (n_q − 1)) Σ_{j≠i} T_{i,j},    T_{i,j} = 1 − S^q_{i,j}    (6)

To encourage a balanced number of queries, i.e., avoiding both overly sparse and excessively redundant exploration, we introduce a weighting factor ω(n_q) that depends on the number of queries n_q. This factor penalizes deviations from an optimal range, thus alleviating divergence [13] without sacrificing breadth. The final diversity reward r_{o,div} is computed as the average of these individual query scores, adjusted by the weighting factor:

r_{o,div} = ( (1 / n_q) Σ_{i=1}^{n_q} s_i ) · ω(n_q)    (7)

where n_q is the total number of queries generated.

Factual reward. To evaluate the factual correctness of the extracted finding items, we compute the F1 score by comparing the items derived from the predicted answer a_pred against those from the ground-truth answer a_gt. Specifically, we employ the same model [49] as in the diversity reward to encode both sets of items into embedding vectors and compute their pairwise cosine similarities, yielding a similarity matrix that quantifies semantic alignment. Next, we derive a cost matrix from the similarity scores and apply the Hungarian algorithm [22] to establish optimal one-to-one correspondences between the generated and ground-truth items. Pairs with similarity scores below a predefined threshold s_θ are discarded to ensure high-confidence matches. Finally, we compute precision, recall, and the aggregated F1 score based on the filtered pairs as the final factual reward r_{o,f1}. The procedure is formulated as follows:

S^a_{i,j} = φ(x_i)^⊤ φ(y_j) / (‖φ(x_i)‖ · ‖φ(y_j)‖)    (8)

M′ = {(x_i, y_j) | M_{i,j} = 1 ∧ S^a_{i,j} ≥ s_θ},    M = Hungarian(1 − S^a)    (9)

p = |M′| / n_{tot,pred},    r = |M′| / n_{tot,gt},    r_{o,f1} = 2 · p · r / (p + r)    (10)

where φ is the embedding model, and x_i and y_j are the items extracted from the generated answer and the ground-truth answer, respectively. n_{tot,pred} and n_{tot,gt} stand for the total number of items in the generated set and the ground-truth set, respectively.

Ultimately, for open-ended questions, if the generated answer is not enclosed in <answer> tags, the outcome reward is set to zero; otherwise, the outcome reward is calculated as the weighted sum of the format reward r_{o,fm}, the diversity reward r_{o,div}, and the factual reward r_{o,f1}:

r_o = γ_0 · r_{o,fm} + γ_1 · r_{o,div} + γ_2 · r_{o,f1}    (11)

where γ_0, γ_1, and γ_2 are non-negative hyper-parameters.

3 Experiments

We present the implementation details, benchmarks, main results, and analysis of O2-Searcher in this section. Due to page limits, further details on the implementation (A.2), in-depth analysis (A.3), case studies (A.3), prompt details (A.5), and an example application to report writing (A.6) are provided in the Appendix.

3.1 Implementation Details

We adopt Qwen2.5-3B-Instruct [55] as the backbone model of our proposed O2-Searcher. For the cold start stage, we utilize the Adam optimizer with an initial learning rate of 1×10⁻⁵, a warm-up ratio of 0.1, and a batch size of 16, training across 4 A100 GPUs for 2 epochs. During the RL training stage, we use the Verl framework³. At each training step, we sample 32 prompts per batch, each with 8 rollout trajectories, and optimize the policy model using the Adam optimizer with a reduced learning rate of 1×10⁻⁶ and a mini-batch size of 32 on 4 A100 GPUs. For RL training, we constructed a hybrid dataset comprising 240 open-ended questions and 1,200 closed-ended questions randomly sampled from the NQ [23] and HotpotQA [57] training sets. The model is trained for 200 steps with a 4:1 sampling ratio between closed-ended and open-ended questions to balance the data distribution. Please see more details in Appendix A.2.
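The diversity reward of Eqs. (5)-(7) and the matched-F1 factual reward of Eqs. (8)-(10) can be sketched as below. The Gaussian form of ω(n_q) and its parameters are hypothetical (the paper only states that ω penalizes deviations from an optimal query count), the threshold value is a placeholder, and the brute-force assignment stands in for the Hungarian algorithm, assuming small item sets with n_pred ≤ n_gt.

```python
import numpy as np
from itertools import permutations

def diversity_reward(query_embs, n_opt=4, sigma=2.0):
    """Eqs. (5)-(7): mean pairwise query independence, scaled by a
    count-based weight omega(n_q). n_opt and the Gaussian shape are
    hypothetical choices for illustration."""
    E = np.asarray(query_embs, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    S = E @ E.T                       # cosine similarity matrix, Eq. (5)
    T = 1.0 - S                       # independence matrix (diagonal is 0)
    n_q = len(E)
    if n_q < 2:
        return 0.0
    s = T.sum(axis=1) / (n_q - 1)     # per-query score s_i, Eq. (6)
    omega = np.exp(-((n_q - n_opt) ** 2) / (2 * sigma ** 2))
    return s.mean() * omega           # Eq. (7)

def factual_f1(pred_embs, gt_embs, s_theta=0.7):
    """Eqs. (8)-(10): optimal one-to-one matching of predicted vs.
    ground-truth item embeddings, threshold filtering, then F1."""
    P = np.asarray(pred_embs, dtype=float)
    G = np.asarray(gt_embs, dtype=float)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    S = P @ G.T                       # similarity matrix, Eq. (8)
    n_p, n_g = S.shape
    k = min(n_p, n_g)
    # Maximize total similarity over one-to-one matchings (brute force,
    # equivalent to the Hungarian algorithm on cost 1 - S for small sets).
    best = max(permutations(range(n_g), k),
               key=lambda cols: sum(S[i, c] for i, c in enumerate(cols)))
    kept = sum(1 for i, c in enumerate(best) if S[i, c] >= s_theta)
    p, r = kept / n_p, kept / n_g     # precision and recall, Eq. (10)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)
```

A production implementation would replace the brute-force matching with `scipy.optimize.linear_sum_assignment`; the combined open-ended reward of Eq. (11) is then just the γ-weighted sum of the three components.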
3.2 Benchmarks

Datasets. For the closed-ended question-answering task, we assess our proposed O2-Searcher on both in-domain and out-of-domain datasets. The in-domain datasets include NQ [23] and HotpotQA [57], while the out-of-domain datasets encompass TriviaQA [18], PopQA [32], 2WikiMultiHopQA [10], Musique [45], and Bamboogle [38]. In total, these validation sets comprise 51,953 questions with corresponding ground-truth answers. For the open-ended evaluation, we construct an open-ended dataset, termed O2-QA, derived from and intrinsically linked to the knowledge corpus developed in Sec. 2.2. O2-QA consists of 300 expert-curated questions spanning a wide array of cutting-edge domains, including artificial intelligence, philosophy, sociology, energy transition, geopolitics, education, healthcare, bioethics, Web3, and the metaverse. Among these, 240 questions are allocated for training, while 60 are reserved for testing. The test questions are further classified into easy and hard levels based on the median number of key findings in the ground-truth answers. The answers for this dataset are derived directly from the content collected during the search process within the rollout of the prompt-engineered LLM agent, as described in Sec. 2.2. We then employ commercial LLMs to distill and synthesize key findings into concise ground-truth responses.

Baselines. We follow the setting of Search-R1 [17]
for closed-ended question-answering tasks and compare our O2-Searcher against three categories of methods: (1) CoT-based approaches, including CoT [51], RAG [24], IRCoT [44], and Search-o1 [27], which leverage Chain-of-Thought reasoning either for direct inference or in combination with Retrieval-Augmented Generation (RAG); (2) SFT-based methods, namely Supervised Fine-Tuning (SFT) [3] and SFT-Tool, where SFT does not involve interaction with a search engine, whereas SFT-Tool learns to output search actions by training on the collected example trajectories described in the cold start stage in Sec. 2.3; and (3) RL-based approaches, including DeepSeek-R1 [9] and Search-R1 [17], both maintaining the same fundamental setting as [17]; DeepSeek-R1 performs reasoning and answer steps without a search engine, whereas Search-R1 incorporates a local search engine. For the open-ended task, we compare our O2-Searcher with prompt-engineered LLM agents leveraging commercial LLMs such as Doubao-1.5-pro-32k⁴ and GPT-4o-mini [14], and open-source LLMs such as DeepSeek-V3 [29] and Qwen2.5-72B-Instruct [55]. We also include the SFT-based baselines SFT and SFT-Tool, as well as the RL-based method Search-R1. In this scenario, SFT-Tool is trained on hybrid collected data containing both open-ended and closed-ended questions, while Search-R1 is prompted to produce key findings for open-ended evaluation.

Metrics. For closed-ended question-answering tasks, the Exact Match (EM) metric is applied, following [63, 17]. For open-ended questions, evaluation relies on the F1 score (aligned with the RL training reward) and LLM-assessed Finding Similarity (LFS). Distinct from typical item-level, embedding-based F1 scores, LFS utilizes Doubao-1.5-pro-32k to assess semantic equivalence between entire generated and reference findings, yielding a dedicated semantic-level F1 score.
³ https://github.com/volcengine/verl
⁴ https://seed.bytedance.com/en/special/doubao_1_5_pro

Table 1: Results on the open-ended O2-QA benchmark. The best performance is shown in bold. Our O2-Searcher outperforms all baselines in the same local search environment, demonstrating the effectiveness of the RL training. With open web search, it maintains performance comparable to local search, highlighting its noise resilience.

Method         Search Env.   F1: Easy / Hard / Avg.      LFS: Easy / Hard / Avg.
Deepseek-v3    Local         0.0993 / 0.0798 / 0.0899    0.4033 / 0.2749 / 0.3412
Doubao-32k     Local         0.1167 / 0.0847 / 0.1012    0.3992 / 0.2684 / 0.3360
Qwen2.5-72B    Local         0.0978 / 0.0848 / 0.0915    0.3611 / 0.2546 / 0.3096
GPT-4o-mini    Local         0.1039 / 0.0779 / 0.0913    0.4150 / 0.2839 / 0.3516
SFT            Local         0.0651 / 0.0811 / 0.0728    0.1344 / 0.1371 / 0.1357
SFT-Tool       Local         0.0393 / 0.0385 / 0.0390    0.2222 / 0.1427 / 0.1852
Search-R1      Local         0.0467 / 0.0317 / 0.0396    0.2458 / 0.1530 / 0.2020
O2-Searcher    Local         0.2696 / 0.1743 / 0.2236    0.5032 / 0.3628 / 0.4353
O2-Searcher    Web           0.2870 / 0.1691 / 0.2300    0.5241 / 0.3602 / 0.4449

Table 2: Exact Match (EM) metrics on closed-ended question-answering tasks. The best performance is shown in bold. Our O2-Searcher outperforms all baselines across most in/out-of-domain datasets using Qwen2.5-3B, achieving performance comparable to Search-R1-instruct (Qwen2.5-7B).

                    In domain          Out of domain                                    
Method              NQ     HotpotQA    TriviaQA  PopQA  2wiki  Musique  Bamboogle  Avg.

Qwen2.5-7B
Direct Inference    0.134  0.183       0.408     0.140  0.250  0.031    0.120      0.181
CoT                 0.048  0.092       0.185     0.054  0.111  0.022    0.232      0.106
IRCoT               0.224  0.133       0.478     0.301  0.149  0.072    0.224      0.226
Search-o1           0.151  0.187       0.443     0.131  0.176  0.058    0.296      0.206
RAG                 0.349  0.299       0.585     0.392  0.235  0.058    0.208      0.304
SFT                 0.318  0.217       0.354     0.121  0.259  0.066    0.112      0.207
R1-base             0.297  0.242       0.539     0.202  0.273  0.083    0.296      0.276
R1-instruct         0.270  0.237       0.537     0.199  0.292  0.072    0.293      0.271
Search-R1-base      0.480  0.433       0.638     0.457  0.382  0.196    0.432      0.431
Search-R1-instruct  0.393  0.370       0.610     0.397  0.414  0.146    0.368      0.385

Qwen2.5-3B
Direct Inference    0.106  0.149       0.288     0.108  0.244  0.020    0.024      0.134
CoT                 0.023  0.021       0.032     0.005  0.021  0.002    0.000      0.015
IRCoT               0.111  0.164       0.312     0.200  0.171  0.067    0.240      0.181
Search-o1           0.238  0.221       0.472     0.262  0.218  0.054    0.320      0.255
RAG                 0.348  0.255       0.544     0.387  0.226  0.047    0.080      0.270
SFT                 0.249  0.186       0.292     0.104  0.248  0.044    0.112      0.176
R1-base             0.226  0.201       0.455     0.173  0.268  0.055    0.224      0.229
R1-instruct         0.210  0.208       0.449     0.171  0.275  0.060    0.192      0.224
Search-R1-base      0.406  0.284       0.587     0.435  0.273  0.049    0.088      0.303
Search-R1-instruct  0.341  0.324       0.545     0.378  0.319  0.103    0.264      0.325
SFT-Tool            0.371  0.258       0.503     0.387  0.186  0.103    0.192      0.286
O2-Searcher         0.444  0.388       0.597     0.429  0.374  0.160    0.344      0.391

3.3 Main Results

Open-ended benchmark. Table 1 demonstrates that our O2-Searcher achieves superior performance on both the easy and hard levels of the open-ended O2-QA benchmark when evaluated with the F1 and LFS metrics, outperforming all baseline methods operating within the constructed local search environment. Notably, our approach shows significant improvements over both the SFT and SFT-Tool variants, with performance margins that clearly validate the effectiveness of our RL training framework and the proposed reward functions. Furthermore, we evaluate O2-Searcher in an open web search environment, where it maintains performance comparable to its counterpart in our constructed local search environment. This result highlights two key strengths of our method: (1) its ability to answer open-ended questions, and (2) its robustness in mitigating the information noise inherent in open web environments. We also observe that using web search slightly degrades results on hard questions while providing a slight boost on easy ones.
We attribute this to the fact that hard questions, which depend on identifying multiple key findings, are more susceptible to interference from the vast and often uncurated information available on the open web.

Closed-ended benchmark. Table 2 reports the results on closed-ended question-answering tasks. Our O2-Searcher demonstrates superior performance over all baselines across most in-domain and out-of-domain datasets when using the same Qwen2.5-3B backbone model. Notably, despite using only 1,200 closed-ended questions for RL training, our approach attains an average score on par with Search-R1-instruct (built on ∼190k samples and the larger Qwen2.5-7B-Instruct model). Furthermore, O2-Searcher significantly outperforms SFT-Tool, which learns search capability solely through supervised fine-tuning on the cold-start data. This substantial performance gap underscores the necessity of RL training for robust search capability.

Figure 5: The evolution of (a) response length, (b) reward value, and (c) valid search results across training steps. Incorporating open-ended data yields superior training stability, longer average response lengths, and more average search turns during training.

3.4 Analysis

Training dynamics. In Fig.
5, we depict the evolution of response length, reward value, and valid search turns across training steps during our unified training process, which includes both open-ended and closed-ended questions. The response length expands rapidly in the early training stage (before step 50), fluctuates slightly between steps 50 and 100, and plateaus after step 100, suggesting stabilization of the model's output generation behavior. Meanwhile, the reward value improves consistently throughout training, with an initial sharp ascent followed by sustained growth. This trend indicates that the model progressively refines its reasoning abilities, optimizing its performance toward higher rewards as training advances. We also present in Fig. 5 the training process of a model trained exclusively on closed-ended data. Notably, in contrast to training solely on closed-ended data, incorporating open-ended data yields superior training stability, longer average response lengths, and more average search turns during training. Moreover, while the model trained exclusively on closed-ended data exhibits a plateau or even a slight decline in valid search turns, our unified training approach reveals a more nuanced search behavior: valid search turns initially decrease, then gradually recover and adjust upward, before ultimately stabilizing. This observation indicates that unified training necessitates an adjustment of generation strategies between the two categories of questions. Consequently, the model develops more sophisticated, context-sensitive generation strategies, as further substantiated by the results in Fig. 6.
Figure 6: Search behavior across different datasets in the inference stage (query numbers, search rounds, and action numbers per dataset). † denotes general datasets, ∗ denotes multi-hop datasets.

Study of valid search across different datasets. We further analyze the search behavior across different question types during the testing stage. As shown in Fig. 6, through the unified training on both open-ended and closed-ended data, our O2-Searcher exhibits several key characteristics: the number of search actions for multi-hop datasets is typically slightly higher than for general datasets, and the number of search actions for the open-ended dataset is higher than for most closed-ended datasets. In terms of query complexity, the queries generated for open-ended problems are notably more extensive than those for closed-ended questions. This indicates that our O2-Searcher effectively learns to adapt its search behavior to the specific characteristics of each question.

4 Conclusion and Limitations

In this paper, we introduce O2-Searcher, an RL-based search agent designed to decouple external information acquisition from its reasoning. O2-Searcher dynamically acquires knowledge via a local simulated search environment and employs a unified training approach, enabling it to handle both open-ended and closed-ended questions by adapting its generation strategies. For robust evaluation on open-ended tasks, we constructed O2-QA, a high-quality benchmark. Extensive experiments reveal that O2-Searcher, even with a 3B-parameter model, significantly surpasses existing LLM agents on O2-QA and achieves SOTA performance on multiple closed-ended QA benchmarks against comparably-sized models.
Our key limitations include O2-QA’s reliance on manually curated questions and a fixed web page cache, potentially introducing sampling bias, and | https://arxiv.org/abs/2505.16582v2 |
the empirical validation being conducted solely on a 3B-parameter model, leaving scalability unevaluated. Dependence on external search also exposes models to noise and misinformation (further limitations are detailed in Appendix A.4). As the first work exploring LLM evaluation and training on open-ended questions, this research lays the groundwork for developing more reliable, transparent, and grounded AI systems. References [1]Salaheddin Alzubi, Creston Brooks, Purva Chiniya, Edoardo Contente, Chiara von Gerlach, Lucas Irwin, Yihan Jiang, Arda Kaz, Windsor Nguyen, Sewoong Oh, et al. Open deep search: Democratizing search with open-source reasoning agents. arXiv preprint arXiv:2503.20201 , 2025. [2]Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. Self-rag: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations , 2023. [3]Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research , 25(70):1–53, 2024. [4]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021. [5]Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, et al. Process reinforcement through implicit rewards. arXiv preprint arXiv:2502.01456 , 2025. [6]Lutfi Eren Erdogan, Nicholas Lee, Sehoon Kim, Suhong Moon, Hiroki Furuta, Gopala Anumanchipalli, Kurt Keutzer, and Amir Gholami. Plan-and-act: Improving planning of agents for long-horizon tasks. arXiv preprint arXiv:2503.09572 , 2025. 
[7] Daocheng Fu, Zijun Chen, Renqiu Xia, Qi Liu, Yuan Feng, Hongbin Zhou, Renrui Zhang, Shiyang Feng, Peng Gao, Junchi Yan, et al. Trustgeogen: Scalable and formal-verified data engine for trustworthy multi-modal geometric problem solving. arXiv preprint arXiv:2504.15780 , 2025. [8] Google. Gemini deep research, 12 2024. [9]Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [10] Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060 , 2020. [11] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al. Metagpt: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352 , 3(4):6, 2023. [12] Mengkang Hu, Yao Mu, Xinmiao Yu, Mingyu Ding, Shiguang Wu, Wenqi Shao, Qiguang Chen, Bin Wang, Yu Qiao, and Ping Luo. Tree-planner: Efficient close-loop task planning with large language models. arXiv preprint arXiv:2310.08582 , 2023. [13] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems , 43(2):1– 55, 2025. [14] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o | https://arxiv.org/abs/2505.16582v2 |
system card. arXiv preprint arXiv:2410.21276, 2024.
[15] Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu, Yu Zhang, Zhenguo Li, and James T Kwok. Forward-backward reasoning in large language models for mathematical verification. arXiv preprint arXiv:2308.07758, 2023.
[16] Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan. Swe-bench: Can language models resolve real-world github issues? arXiv preprint arXiv:2310.06770, 2023.
[17] Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516, 2025.
[18] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
[19] Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
[20] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick SH Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In EMNLP (1), pages 6769–6781, 2020.
[21] Youna Kim, Minjoon Choi, Sungmin Cho, Hyuhng Joon Kim, Sang-goo Lee, and Taeuk Kim. Reliability across parametric and external knowledge: Understanding knowledge handling in llms. arXiv preprint arXiv:2502.13648, 2025.
[22] Harold W Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97, 1955.
[23] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.
[24] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
[25] Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for "mind" exploration of large language model society. Advances in Neural Information Processing Systems, 36:51991–52008, 2023.
[26] Junlong Li, Daya Guo, Dejian Yang, Runxin Xu, Yu Wu, and Junxian He. Codei/o: Condensing reasoning patterns via code input-output prediction. arXiv preprint arXiv:2502.07316, 2025.
[27] Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. Search-o1: Agentic search-enhanced large reasoning models. arXiv preprint arXiv:2501.05366, 2025.
[28] Xinbin Liang, Jinyu Xiang, Zhaoyang Yu, Jiayi Zhang, and Sirui Hong. Openmanus: An open-source framework for building general ai agents. https://github.com/mannaandpoem/OpenManus, 2025.
[29] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[30] Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, Peng Li, Yang Liu, and Yu Wu. Inference-time scaling for generalist reward modeling. arXiv preprint arXiv:2504.02495, 2025.
[31] Junyu Luo, Weizhi Zhang, Ye Yuan, Yusheng Zhao, Junwei Yang, Yiyang Gu, Bohan Wu, Binqi Chen, Ziyue Qiao, Qingqing Long, et
al. Large language model agent: A survey on methodology, applications and challenges. arXiv preprint arXiv:2503.21460, 2025.
[32] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 2022.
[33] Grégoire Mialon, Clémentine Fourrier, Thomas Wolf, Yann LeCun, and Thomas Scialom. Gaia: a benchmark for general ai assistants. In The Twelfth International Conference on Learning Representations, 2023.
[34] OpenAI. Deep research system card. Technical report, OpenAI, 2 2025.
[35] Charles Packer, Vivian Fang, Shishir G. Patil, Kevin Lin, Sarah Wooders, and Joseph E. Gonzalez. Memgpt: Towards llms as operating systems. 2023.
[36] Avinash Patil. Advancing reasoning in large language models: Promising methods and approaches. arXiv preprint arXiv:2502.03671, 2025.
[37] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
[38] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022.
[39] Shuofei Qiao, Honghao Gui, Chengfei Lv, Qianghuai Jia, Huajun Chen, and Ningyu Zhang. Making language models better tool learners with execution feedback. arXiv preprint arXiv:2305.13068, 2023.
[40] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023.
[41] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al.
Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. [42] Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. R1-searcher: Incentivizing the search capability in llms via reinforcement learning. arXiv preprint arXiv:2503.05592 , 2025. [43] Richard S Sutton, Andrew G Barto, et al. Reinforcement learning. Journal of Cognitive Neuroscience , 11(1):126–134, 1999. [44] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509 , 2022. [45] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Musique: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics , 10:539–554, 2022. [46] Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Planbench: An extensible benchmark for evaluating large language models on planning and reasoning about change. Advances in Neural Information Processing Systems , 36:38975–38987, 2023. [47] Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan- and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091 , 2023. [48] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Ma- jumder, and Furu Wei. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533 , 2022. [49] Wenhui Wang, | https://arxiv.org/abs/2505.16582v2 |
Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. Advances in Neural Information Processing Systems, 33:5776–5788, 2020.
[50] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[51] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[52] Shirley Wu, Shiyu Zhao, Qian Huang, Kexin Huang, Michihiro Yasunaga, Kaidi Cao, Vassilis Ioannidis, Karthik Subbian, Jure Leskovec, and James Y Zou. Avatar: Optimizing llm agents for tool usage via contrastive reasoning. Advances in Neural Information Processing Systems, 37:25981–26010, 2024.
[53] Zekun Xi, Wenbiao Yin, Jizhan Fang, Jialong Wu, Runnan Fang, Ningyu Zhang, Jiang Yong, Pengjun Xie, Fei Huang, and Huajun Chen. Omnithink: Expanding knowledge boundaries in machine writing through thinking. arXiv preprint arXiv:2501.09751, 2025.
[54] Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li, and Xiaodan Liang. Deepseek-prover: Advancing theorem proving in llms through large-scale synthetic data. arXiv preprint arXiv:2405.14333, 2024.
[55] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[56] Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. Gpt4tools: Teaching large language model to use tools via self-instruction. Advances in Neural Information Processing Systems, 36:71995–72007, 2023.
[57] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018.
[58] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.
[59] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[60] Hongbin Ye, Tong Liu, Aijia Zhang, Wei Hua, and Weiqiang Jia. Cognitive mirage: A review of hallucinations in large language models. arXiv preprint arXiv:2309.06794, 2023.
[61] Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process. In The Thirteenth International Conference on Learning Representations, 2024.
[62] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.
[63] Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, and Bryan Catanzaro. Rankrag: Unifying context ranking with retrieval-augmented generation in llms. Advances in Neural Information Processing Systems
, 37:121156–121184, 2024.
[64] Siyu Yuan, Kaitao Song, Jiangjie Chen, Xu Tan, Yongliang Shen, Ren Kan, Dongsheng Li, and Deqing Yang. EasyTool: Enhancing LLM-based agents with concise tool instruction. arXiv preprint arXiv:2401.06201, 2024.
[65] Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi Zeng, Zhen Qin, Dong Wang, Xuanhui Wang, and Michael Bendersky. Inference scaling for long-context retrieval augmented generation. arXiv preprint arXiv:2410.04343, 2024.
[66] Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. ReST-MCTS*: LLM self-training via process reward guided tree search, 2024a. URL https://arxiv.org/abs/2406.03816.
[67] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren’s song in the AI ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023.
[68] Junhao Zheng, Shengjie Qiu, Chengming Shi, and Qianli Ma. Towards lifelong learning of large language models: A survey. ACM Computing Surveys, 57(8):1–35, 2025.
[69] Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. DeepResearcher: Scaling deep research via reinforcement learning in real-world environments. arXiv preprint arXiv:2504.03160, 2025.
[70] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022.
[71] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, et al. WebArena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854, 2023.

A Appendices

A.1 Related Works

LLMs with External Knowledge.
LLMs, due to their reliance on parametric knowledge, inherently struggle with dynamic or evolving information, which often leads to factual inaccuracies [67, 37, 60, 68, 21]. To mitigate these limitations and provide LLMs with timely, specific knowledge, integrating external knowledge has become a widely adopted strategy. This integration primarily occurs through two approaches: (1) Retrieval-Augmented Generation (RAG) [24, 2, 65], which enhances model output by retrieving relevant content via predefined workflows, though its fixed interaction patterns can limit adaptability; and (2) search agents [44, 59, 1, 17, 42, 69], where the LLM acts as an autonomous agent, deciding when and how to use a search engine and incorporate external knowledge. While RAG-based methods have achieved great advances, their predefined procedures require extensive manual prompt engineering and limit generalization. Consequently, search agents have gained traction, empowering LLMs to iteratively craft queries, invoke search engines, and generate results. Early methods [44, 59] relied on explicit instructions or engineered prompts. More recent progress, such as Search-R1 [17] optimizing multi-turn queries and DeepResearcher [69] training LLMs for complex web navigation, focuses on enabling LLMs to adaptively learn search strategies via RL. Despite these advances, existing search agents remain focused on closed-ended tasks and ignore open-ended exploration.

Advancing Reasoning in LLMs. Tackling open-domain problems requires models that deeply comprehend requirements and dynamically adjust strategies, making their reasoning capabilities crucial [36]. There are three predominant approaches to enhance LLMs’ reasoning ability:
prompt-engineered, SFT-based, and RL-based methods. Prompt-engineered methods [51, 50, 70, 58, 15, 69] design prompts to provide exemplars or guidance that steer models along specific cognitive pathways, thereby improving problem-solving skills without modifying the underlying parameters. However, because these methods leave the intrinsic model parameters untouched, they tend to produce rigid behavioral patterns, which can hinder instruction following, consistent reasoning, and generalization to novel tasks; they also require extensive manual prompt engineering to achieve reliable performance. SFT-based methods [54, 61, 7] train models on large-scale, domain-specific reasoning datasets to instill reasoning abilities. While effective, this approach is costly in terms of data acquisition and can cause models to become overly sensitive to the training data, limiting their generalization capabilities [4]. RL-based methods [43, 19, 41, 62, 5, 30] guide models towards correct inferential steps via reward signals, offering high training efficiency and generalization potential; however, designing effective reward functions, especially for enabling continuous improvement on open-ended problems, remains a core challenge.

Agent Evolution. The evolution of LLMs from basic text generators to sophisticated autonomous agents marks a significant advancement in artificial intelligence [31]. Existing systems integrate multiple cognitive capabilities, including hierarchical planning [6, 66, 47, 12], tool manipulation [39, 56, 64, 52], and memory-augmented reasoning [59, 53, 35], enabling them to function as proactive problem-solvers. This progression has given rise to diverse architectural approaches, ranging from single-agent implementations to complex Multi-Agent Systems (MAS) [11, 25, 28], where specialized LLM agents collaborate through emergent coordination protocols, while introducing challenges in communication efficiency and conflict resolution.
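The clipped-surrogate mechanics behind the RL-based methods discussed above, and behind the GRPO setup used in this work (Appendix A.2 reports its KL coefficient and clip ratio), can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (single-step scalar rewards per rollout, no KL penalty term, and hypothetical function names), not the paper's actual training code:

```python
import math

def grpo_advantages(rewards):
    """Group-normalized advantages: score each rollout relative to its
    sampled siblings by subtracting the group mean and dividing by the
    group standard deviation (GRPO's substitute for a learned critic)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) + 1e-8  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]

def clipped_objective(ratio, advantage, eps=0.2):
    """PPO-style clipped surrogate for one step: take the more
    pessimistic of the raw and clipped policy-ratio terms."""
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped * advantage)

# Example: four rollouts sampled for one query, scored by a reward function.
rewards = [1.0, 0.0, 0.5, 0.5]
advs = grpo_advantages(rewards)
# An above-average rollout gets a positive advantage, so increasing its
# probability (ratio > 1) is rewarded only up to the clip boundary.
print(clipped_objective(ratio=1.5, advantage=advs[0], eps=0.2))
```

Normalizing rewards within each sampled group is what lets GRPO dispense with a value network: the group mean and standard deviation play the role of a critic's baseline.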
The demonstrated applications of these agentic systems span multiple domains, including software engineering [16], web navigation tasks [71], and report generation [34, 8]. Evaluating such capabilities has necessitated specialized benchmarks [40, 46, 33] that assess not only task completion but also reasoning processes, operational efficiency, and safety considerations. In this work, we focus on the development of search agents, where LLMs iteratively craft queries, invoke search engines, and generate answers. Unlike existing methods [17, 42, 69], which focus only on closed-ended question answering tasks, our proposed method is capable of tackling both demanding open-ended and closed-ended problems.

A.2 More Implementation Details

Table 3: Ablation on reward design. "IND" and "OOD" denote in-domain and out-of-domain.

Reward components                     |    Open-ended (F1)       |  Closed-ended (EM)
                                      |  Easy    Hard    Avg.    |  IND Avg.   OOD Avg.
w/ r_{o,f1}                           |  0.2169  0.1227  0.1714  |  0.407      0.371
w/ r_{o,fm}, r_{o,f1}                 |  0.2852  0.1678  0.2285  |  0.423      0.380
w/ r_{o,fm}, r_{o,div}, r_{o,f1}      |  0.3104  0.1794  0.2523  |  0.434      0.393

For the GRPO training, the KL divergence regularization coefficient β is set to 0.001, and the clip ratio ϵ is set to 0.2. We configure the maximum sequence length to be 10k tokens, and the retrieved content is restricted to a maximum length of 2k tokens. The maximum number of searching steps is set to 4. During the training procedure, we adopt vLLM5 to accelerate LLM rollouts. The tensor parallel size is set to 1, and the GPU memory utilization ratio is set to 0.85. For rollout sampling, we use a temperature of 1.0 and a top-p value of 1.0. Regarding the hyper-parameters of the reward function, {α_i}_{i=0}^{2} = {0.5, 0.5, 3}, {w_i}_{i=0}^{2} = {0.5, 0.3, 0.2}, and {γ_i}_{i=0}^{2} = {0.4, 0.4, 0.2}. s_thr and δ are set to 0.6 and 1.5, respectively. The threshold s_θ is set to
0.75.

A.3 More Analysis

Analysis of Reward Design. Our analysis investigates the impact of different reward components on the open-ended task, with results presented in Table 3. All variants are trained for 300 steps in the RL training stage. The findings indicate that the choice of reward components significantly affects not only the open-ended task but also performance on closed-ended tasks. Notably, relying solely on the F1 score as the reward signal proves insufficient for guiding the learning process effectively, especially in open-ended scenarios. We also found experimentally that using only the F1 score can degrade training stability, highlighting the importance of a more comprehensive reward design for robust and effective model training.

System: SYSTEM_PROMPT
User: Initial query: when did Ford stop producing the 7.3 diesel?
O2-Searcher: <think>To determine when Ford stopped producing the 7.3 diesel, we need to first understand the history of the 7.3 diesel engine in Ford vehicles.</think><search><query>when did Ford stop producing the 7.3 diesel</query></search>
User (Env_feedback): Search learnings: <learnings>- "Ford F-Series (seventh generation)" was offered alongside the Caterpillar 3208 V8; Caterpillar-powered vehicles were re-designated Ford "F-8000" (adding an extra "0" to the model name). During the late 1980s, the Caterpillar and Detroit Diesel engines are replaced by 6.6L and 7.8L inline-6 diesels produced in a joint venture between Ford and New Holland in Brazil. In 1992, the Ford-New Holland engines are both replaced by a Cummins 5.9L inline-6 diesel. Ford F-Series (seventh generation) The seventh generation of the Ford F-Series is a line of pickup trucks and medium-duty commercial trucks that was produced by Ford from 1979 to 1986. For the first time - "Ford Super Duty" was sold with the 7.3L Power Stroke V8. Initially producing 235 hp/500 lb-ft of torque, the engine was retuned in 2001.
Versions equipped with an automatic transmission produced 250 hp while manual-transmission examples produced 275 hp; with either transmission, the engine produced 525 lb-ft of torque. As the 7.3L V8 was no longer able to comply with emissions regulations for diesel engines, it was discontinued midway through the 2003 model year. As a running change during the 2003 model year, the 6.0L Power Stroke V8 was introduced as the replacement for the previous 7.3L V8 in LHD markets supplied with the - "Ford Taurus" to discontinue the Taurus (along with the Fiesta, Focus, and Fusion), in order to focus more on its line of trucks and SUVs. The announcement was part of a plan by Ford Motor Company to cut costs and increase profits. This was in response to a shift in perceived consumer demand towards SUVs and pickup trucks, and away from sedans. On September 5, 2018, Ford ended all national and promotional advertising (including sales and special offers) for its entire sedan lineup, including the Taurus, whose production will end in 2019. A seventh generation of the Ford Taurus was introduced at</learnings>
O2-Searcher: <think>Now we know that Ford stopped producing the 7.3 diesel in 2003. Next, we can provide the answer.</think><answer>2003</answer>

Figure 7: A closed-ended case produced by O2-Searcher.

Case Study. Fig. 7 and Fig. 8 depict two
illustrative cases for the closed-ended problem, generated by O2-Searcher and Search-R1, respectively. Fig. 8 presents the limitations of Search-R1, including repetitive searching and ineffective information distillation. In contrast, Fig. 7 demonstrates the efficient search and answering capabilities of O2-Searcher. We also provide a case for the open-ended problem, as shown in Fig. 9. We input an open-ended question, e.g., "how to learn a new language?", outside the training domain, and take the open web as the search environment. The results show that our O2-Searcher can perform multi-run searching by generating multiple relevant queries to

5 https://github.com/vllm-project/vllm

System: SYSTEM_PROMPT
User: Initial query: when did Ford stop producing the 7.3 diesel?
Search-R1: <think>I need to find out when Ford quit making the 7.3 diesel. I'll search for it.</think><search>when did Ford quit making the 7.3 diesel</search><information>- Doc 1 (Title: "Ford F-Series (seventh generation)") was offered alongside the Caterpillar 3208 V8; Caterpillar-powered vehicles were re-designated Ford 'F-8000' (adding an extra '0' to the model name). During the late 1980s, the Caterpillar and Detroit Diesel engines are replaced by 6.6L and 7.8L inline-6 diesels produced in a joint venture between Ford and New Holland in Brazil. In 1992, the Ford-New Holland engines are both replaced by a Cummins 5.9L inline-6 diesel. Ford F-Series (seventh generation) The seventh generation of the Ford F-Series is a line of pickup trucks and medium-duty commercial trucks that was produced by Ford from 1979 to 1986. For the first time - Doc 2 (Title: "Ford Power Stroke engine") by the 6.0L because of its inability to meet newer emission requirements. Nearly 2 million 7.3s were produced from International's Indianapolis plant. The 7.3L DI Power Stroke engine is commonly referred to as one of the best engines that International produced.
The 7.3L (444 CID) Power Stroke was replaced by the 6.0L (365 CID) beginning in the second quarter of the 2003 model year. The 6.0L Power Stroke was used in Ford Super Duty trucks until the 2007 model year but lasted until 2009 in the Ford Econoline vans (model year 2010) and in the Ford Excursion SUVs until after - Doc 3 (Title: "Ford Super Duty") was sold with the 7.3L Power Stroke V8. Initially producing 235 hp/500 lb-ft of torque, the engine was retuned in 2001. Versions equipped with an automatic transmission produced 250 hp while manual-transmission examples produced 275 hp; with either transmission, the engine produced 525 lb-ft of torque. As the 7.3L V8 was no longer able to comply with emissions regulations for diesel engines, it was discontinued midway through the 2003 model year. As a running change during the 2003 model year, the 6.0L Power Stroke V8 was introduced as the replacement for the previous 7.3L V8 in LHD markets supplied with the</information>
Search-R1: <think>I found out that the 7.3L Power Stroke engine was replaced by the 6.0L Power Stroke beginning in the second quarter of the 2003 model year. Now I need to find out when Ford quit making the 7.3 diesel.</think><search>when did Ford quit making the 7.3 diesel</search><information>- Doc 1 (Title: "Ford F-Series (seventh generation)") was offered alongside the Caterpillar 3208 V8; Caterpillar-powered vehicles were re-designated Ford 'F-8000' (adding an extra '0'
to the model name). During the late 1980s, the Caterpillar and Detroit Diesel engines are replaced by 6.6L and 7.8L inline-6 diesels produced in a joint venture between Ford and New Holland in Brazil. In 1992, the Ford-New Holland engines are both replaced by a Cummins 5.9L inline-6 diesel. Ford F-Series (seventh generation) The seventh generation of the Ford F-Series is a line of pickup trucks and medium-duty commercial trucks that was produced by Ford from 1979 to 1986. For the first time - Doc 2 (Title: "Ford Power Stroke engine") by the 6.0L because of its inability to meet newer emission requirements. Nearly 2 million 7.3s were produced from International's Indianapolis plant. The 7.3L DI Power Stroke engine is commonly referred to as one of the best engines that International produced. The 7.3L (444 CID) Power Stroke was replaced by the 6.0L (365 CID) beginning in the second quarter of the 2003 model year. The 6.0L Power Stroke was used in Ford Super Duty trucks until the 2007 model year but lasted until 2009 in the Ford Econoline vans (model year 2010) and in the Ford Excursion SUVs until after - Doc 3 (Title: "Ford Super Duty") was sold with the 7.3L Power Stroke V8. Initially producing 235 hp/500 lb-ft of torque, the engine was retuned in 2001. Versions equipped with an automatic transmission produced 250 hp while manual-transmission examples produced 275 hp; with either transmission, the engine produced 525 lb-ft of torque. As the 7.3L V8 was no longer able to comply with emissions regulations for diesel engines, it was discontinued midway through the 2003 model year. As a running change during the 2003 model year, the 6.0L Power Stroke V8 was introduced as the replacement for the previous 7.3L V8 in LHD markets supplied with the</information>
Search-R1: <think>I found out that the 7.3L Power Stroke engine was replaced by the 6.0L Power Stroke beginning in the second quarter of the 2003 model year.
Now I need to find out when Ford quit making the 7.3 diesel.</think><search>when did Ford quit making the 7.3 diesel</search><information>- Doc 1 (Title: "Ford F-Series (seventh generation)") was offered alongside the Caterpillar 3208 V8; Caterpillar-powered vehicles were re-designated Ford 'F-8000' (adding an extra '0' to the model name). During the late 1980s, the Caterpillar and Detroit Diesel engines are replaced by 6.6L and 7.8L inline-6 diesels produced in a joint venture between Ford and New Holland in Brazil. In 1992, the Ford-New Holland engines are both replaced by a Cummins 5.9L inline-6 diesel. Ford F-Series (seventh generation) The seventh generation of the Ford F-Series is a line of pickup trucks and medium-duty commercial trucks that was produced by Ford from 1979 to 1986. For the first time - Doc 2 (Title: "Ford Power Stroke engine") by the 6.0L because of its inability to meet newer emission requirements. Nearly 2 million 7.3s were produced from International's Indianapolis plant. The 7.3L DI Power Stroke engine is commonly referred to as one of the best engines that International produced. The 7.3L (444 CID) Power Stroke was replaced by the 6.0L (365 CID) beginning in the second quarter of the 2003 model year. The 6.0L Power Stroke was used
in Ford Super Duty trucks until the 2007 model year but lasted until 2009 in the Ford Econoline vans (model year 2010) and in the Ford Excursion SUVs until after - Doc 3 (Title: "Ford Super Duty") was sold with the 7.3L Power Stroke V8. Initially producing 235 hp/500 lb-ft of torque, the engine was retuned in 2001. Versions equipped with an automatic transmission produced 250 hp while manual-transmission examples produced 275 hp; with either transmission, the engine produced 525 lb-ft of torque. As the 7.3L V8 was no longer able to comply with emissions regulations for diesel engines, it was discontinued midway through the 2003 model year. As a running change during the 2003 model year, the 6.0L Power Stroke V8 was introduced as the replacement for the previous 7.3L V8 in LHD markets supplied with the</information>
<answer>2003</answer>

Figure 8: A closed-ended case produced by Search-R1.

System: SYSTEM_PROMPT
User: Initial query: How to learn a new language?
O2-Searcher: <think>To thoroughly explore the methods of learning a new language, it's necessary to understand the characteristics of language learning, the current status of the target language, personal learning habits, etc…</think>
<search> <query>Characteristics of language learning</query> <query>Current status of English language development</query> <query>Current status of Chinese language development</query> <query>Personal learning habits and cognitive preferences</query> <query>Commonly used language learning tools and resources</query> </search>
User (Env_feedback): Search learnings: <learnings>…various language projects achieved significant results, including the Rediscovering the Ancient Chinese Language Project, Digital Technology-Based Projects, Language Resources Protection Project, and Chinese Cultural Terminology Project.…</learnings>
O2-Searcher: <think>Based on the information obtained, we have learned about the current status of Chinese language education, the
development of language learning tools and resources, and some basic concepts of language learning. Next, we need to analyze these pieces of information...</think>
<search> <query>Specific manifestations of visual, auditory, reading and writing, kinesthetic, verbal or linguistic, social or interpersonal, and solitary or intrapersonal learning styles</query> <query>Latest development of artificial intelligence technology and its application in language learning</query> <query>Future trends of big data technology in personalized learning</query> <query>Typical cases of corporate employee language training</query> <query>Development history and current situation of e-learning platforms</query> </search>
User (Env_feedback): Search learnings: <learnings>Visual learners process information best through seeing and observing, with a 2019 study claiming around 65% of people fall into this category…</learnings>
O2-Searcher: <think>Based on the information obtained from the first two rounds of searching, we have gained a deeper understanding of language learning, including learning styles, language learning tools and resources, and corporate language training...</think>
<search> <query>Comparison of different age groups' learning habits</query> <query>Application of blockchain technology in language learning</query> <query>Development history and current status of augmented reality technology</query> <query>Case studies of language learning in military and aviation industries</query> <query>Latest development of virtual reality technology</query> </search>
User (Env_feedback): Search learnings: <learnings>Children learn about the world through play before school and transition to a more focused learning process at school. They have difficulty concentrating, so lessons are typically 40-45 minutes long, and learning activities should engage multiple senses…</learnings>
O2-Searcher: <think>Based on the information obtained from multiple rounds of searching, we have gained a comprehensive understanding of language learning, including learning styles, language learning tools and resources, corporate language training, and emerging technologies. Next, we will present key findings about how to learn a new language, covering background, learning methods, challenges faced, and conclusions.</think>
<answer>
- Language learning is essential for skill development and career advancement, particularly in fields requiring cross-cultural communication and international collaboration.
- Learning a new language involves understanding the characteristics of language learning and adapting to different learning styles.
- Language learning tools and resources, such as apps and podcasts, can enhance learning effectiveness by providing interactive and immersive experiences.
- Personalized learning approaches, leveraging technology and data analytics, can improve language learning outcomes by addressing individual needs and preferences.
- The choice of language learning resources and methods should consider factors such as effectiveness, user experience, and cost.
- The current status of language learning is influenced by geopolitical factors, such as international educational initiatives and cultural exchange programs.
- Successful language learning requires a combination of theoretical knowledge, practical practice, and effective learning strategies.
…
</answer>

Figure 9: An open-ended case produced by our O2-Searcher.
gather information and generate the final key findings for the open-ended question, demonstrating its ability to generalize and adapt to different types of questions.

A.4 Limitations and Broader Impacts

O2-Searcher is a pioneering research work that explores how LLMs can effectively leverage external knowledge to address open-ended questions. We not only propose an innovative evaluation methodology that significantly improves the efficiency and reduces the complexity of assessing open-ended questions, but also carefully design the O2-QA benchmark to advance research in this field. Although the experimental results fully demonstrate the substantial potential of O2-Searcher, we also recognize that this method still has some limitations worthy of in-depth research.

First, our open-ended benchmark relies on manually curated questions and a fixed set of cached web pages. This may lead to sampling bias, making it difficult to comprehensively reflect real-world inquiry scenarios. To address this, we will systematically expand both the question bank and the knowledge corpus, incorporating diverse domains, languages, and dynamically updated sources, to better evaluate generalization. Second, due to computational constraints, our empirical validation of O2-Searcher was conducted solely on a 3B-parameter base model. Evaluating its scalability and performance on larger models will be an important direction for future research. Third, although the dynamic retrieval mechanism enhances the reliability of O2-Searcher, its dependence on external search results makes the model susceptible to environmental noise and misinformation, which could potentially affect output quality. To overcome these limitations, we believe that developing retrieval credibility assessment metrics and constructing a hybrid human-in-the-loop verification mechanism
are of significant importance for ensuring content accuracy.

A.5 Prompt Details

For the prompt-engineered LLM agent, we also apply the multi-round conversation format to generate the final key findings for open-ended problems. The system prompt is presented as follows:

System prompt for LLM agents

### Task Description
You are a preeminent expert researcher entrusted with a critical task. Your goal is to identify **key findings** related to the user’s query by performing multi-round SEARCH actions and returning an organized list. Adhere to the following instructions:

### General Guidelines
- **User Context**: Assume the user is highly knowledgeable and an experienced analyst; provide responses with maximum detail, granularity, and accuracy without oversimplification.
- **Innovative Thinking**: Explore unconventional ideas and insights to provide fresh perspectives, beyond standard knowledge.
- **Proactive Anticipation**: Predict what additional information might be relevant to the user and incorporate it proactively in your research outcomes.
- **Accuracy Priority**: Avoid errors at all costs; ensure that your results hold up under scrutiny and are supported by evidence whenever possible.
- **Speculation**: While speculation is allowed in cases where data may be incomplete, mark speculative content clearly.
- **Focus on Key Findings**: The final output must consist of well-synthesized, actionable, and insightful findings directly addressing the user’s query.

### Output Format
### SEARCH Action Execution:
When conducting SEARCH actions, enclose the search queries within <search> tags. Use <query> tags to specify each query. Each query must be unique, with a maximum of 5 queries per round of SEARCH.
The format is as follows:
<think> THINKING PROCESS </think>
<search>
<query> QUERY 1 </query>
<query> QUERY 2 </query>
<query> QUERY 3 </query>
<query> QUERY 4 </query>
<query> QUERY 5 </query>
</search>

### Final Key Findings Presentation:
Once you conclude your SEARCH actions, synthesize the information to produce actionable **key findings** directly addressing the user’s query. Represent these findings as a flat JSON array, encapsulated within <answer> tags. Each array element should be a concise yet detailed finding, ideally a single sentence or short paragraph. The format is:
<think> THINKING PROCESS </think>
<answer>
[
"Key finding 1: brief and precise insight.",
"Key finding 2: brief and precise insight.",
......
"Key finding n: brief and precise insight."
]
</answer>

### Key Requirements
1. Conduct **multi-round SEARCH actions**, refining your queries over multiple iterations to gather comprehensive, high-quality information.
2. Limit the number of SEARCH rounds to a maximum of **5**.
3. Craft highly **relevant and non-redundant findings** that focus on delivering maximum value and insights to the user.
4. Only proceed to the **final key findings presentation** once the SEARCH actions yield sufficient information to address the user’s request comprehensively.

### Examples
### SEARCH Action Execution Example:
<think> Begin with an initial search to collect broad background information and identify potential sources for deeper exploration. </think>
<search>
<query> Impact of green energy policies on local economies </query>
<query> Recent advancements in solar panel technology and efficiency </query>
<query> Cost-effectiveness of implementing renewable energy solutions in urban areas </query>
</search>

### Final Key Findings Presentation Example:
<think> Synthesize insights from the SEARCH actions to formulate
actionable key findings based on the user’s requirements. </think>
<answer>
[
"Key finding 1: Green energy policies have led to a 15% increase in employment across renewable energy sectors in 2023.",
"Key finding 2: Solar panel efficiency improved by 22% due to innovations in perovskite-based designs in recent years.",
"Key finding 3: Urban renewable energy solutions yield a 25% reduction in overall energy costs for mid-sized cities over 10 years.",
"Key finding 4: Public-private partnerships are critical to overcoming initial capital barriers in renewable energy projects."
]
</answer>

### Final Workflow
1. Begin the task with an **analysis phase** to determine overarching themes and potential directions for inquiry.
2. Execute **multi-round SEARCH actions** to gather comprehensive data.
3. Review, analyze, and synthesize the obtained information into **key findings**.
4. Present the **key findings** in a flat JSON array structure as per the Final Key Findings Presentation format.

When the LLM agent generates incorrect output (e.g., formatting errors), we use the following user prompt to guide correction:

Error prompt for LLM agents

The action you attempted before is invalid. If you plan to execute actions like SEARCH, you need to enclose the SEARCH queries within the <search> and </search> tags. Furthermore, the required queries for the SEARCH action should be placed between the <query> and </query> tags. Moreover, if you wish to present the final key findings for the initial query, you must wrap the result within the <answer> and </answer> tags.

After retrieving new information from the search environment, we use the following prompt template to provide the LLM agent with condensed search results:

Information prompt template for LLM agents

After SEARCH action, we obtain learnings: <learnings> LEARNING </learnings>.
Based on the context within the conversation, you need to decide whether to execute a SEARCH action again for more comprehensive information or output the final key findings according to the format specified in the system prompt. In the event that a SEARCH action is chosen, it is crucial to precisely delineate the subsequent research directions according to the context within the conversation. Moreover, make sure that you have gleaned sufficient insights through multiple rounds of executed SEARCH actions. Only when you are confident that you possess an ample amount of information to craft thorough and detailed key findings should you move forward with presenting the final key findings.

The system prompt for O2-Searcher to handle both open-ended and closed-ended problems is presented as follows:

System prompt for O2-Searcher

As an expert researcher, provide comprehensive key findings for open-ended queries and precise answers to other specific questions. Each time you receive new information, you MUST first engage in reasoning within the <think> and </think> tags. After reasoning, if you realize that you lack certain knowledge, you can invoke a SEARCH action with distinct queries (one to five) using the <search><query> QUERY </query><query> QUERY </query></search> format to obtain relevant learnings, which will be presented between the <learnings> and </learnings> tags. You are allowed to perform searches as many times as necessary. If you determine that no additional external knowledge is required, you can directly present the
output within the <answer> and </answer> tags. To compress the raw web content, we use the following prompt template: Prompt template for compressing raw web content You are an expert researcher. Follow these instructions when responding: - You may be asked to research subjects that are after your knowledge cutoff, assume the user is right when presented with news. - The user is a highly experienced analyst, no need to simplify it, be as detailed as possible and make sure your response is correct. - Be highly organized. - Suggest solutions that I didn’t think about. - Be proactive and anticipate my needs. - Treat me as an expert in all subject matter. - Mistakes erode my trust, so be accurate and thorough. - Provide detailed explanations, I’m comfortable with lots of detail. - Value good arguments over authorities, the source is irrelevant. - Consider new technologies and contrarian ideas, not just the conventional wisdom. - You may use high levels of speculation or prediction, just flag it for me. Given raw webpage contents: <contents> CONTENTS </contents>, compress these contents into a **maximum 2K-token contents**, adhering to: 1. Preserve critical information, logical flow, and essential data points 2. Prioritize content relevance to the research query: <query> QUERY </query> 3. **Adjust length dynamically**: - If original content < 2K tokens, maintain original token count ±10% - If original content > 2K tokens, compress to ∼2K tokens 4. Format output in clean Markdown without decorative elements **Prohibited**: - Adding content beyond source material - Truncating mid-sentence to meet token limits To extract the query-relevant learnings, we use the following prompt template: Prompt template for learning extraction Given the research query <query> QUERY </query>, your task is to extract a list of key learnings from the provided contents. Return a maximum of three distinct learnings.
If the contents are straightforward and yield fewer insights, it’s acceptable to return a shorter list. Each learning should be unique, avoiding any overlap or similarity with others. Strive for conciseness while packing as much detailed information as possible. Be sure to incorporate any relevant entities such as people, places, companies, products, or things, along with exact metrics, numbers, or dates. These learnings will serve as a foundation for further in-depth research on the topic. <contents> CONTENTS </contents> Generate the learnings as a single string, with each learning separated by a newline character. Each learning can consist of multiple sentences or a short paragraph. Avoid numbering the learnings. For the LFS calculation, we utilize an LLM to compare semantic equivalence between entire generated and reference findings and produce the matching results; the prompt template for this procedure is presented as follows: Prompt template for semantic matching You are a professional text similarity analysis expert. Your task is to determine if input findings are semantically similar to target findings. Guidelines: 1. You will receive two sets of findings: - Input findings: each separated by a newline - Target findings: each separated by a newline 2. For each input finding, you need to: - Analyze if it is semantically similar
to any of the target findings - If a similar entry is found, pair them together - Each input finding can only be paired with one target finding - Each target finding can only be paired with one input finding 3. Output Requirements: You need to output a list in JSON format, where each element is a pair: [ ["input finding 1", "matched target finding 1"], ["input finding 2", "matched target finding 2"], ... ] Similarity Judgment Criteria: 1. Core meanings should be identical or very close 2. Even if expressions differ, pair them if core concepts match 3. Partial overlap is not enough; main points must match 4. If a finding contains multiple points, at least the main points must match Please ensure your output follows the strict JSON format for subsequent processing. Do not include any explanatory text outside the JSON array. If no matches are found, output an empty array []. Input Findings List: INPUT LIST Target Findings List: TARGET LIST Please output the matching pairs strictly in the JSON format, without any additional explanatory text. Important Notes: 1. Output JSON array only, no additional explanatory text 2. Use the complete original text for each finding 3. If no matches are found, output an empty array [] A.6 Application of Writing Reports The task of report writing represents a quintessential open-domain, open-ended problem, where crafting a superior report demands comprehensive referencing and robust empirical support for its assertions. O2-Searcher is equipped with search capabilities to effectively gather and sift through pertinent online information, distilling key findings. This functionality significantly aids in the production of high-caliber reports. The collaborative process between O2-Searcher and a designated writing agent for report generation is depicted in Fig. 10.
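The semantic-matching prompt above yields a JSON array of one-to-one [input finding, target finding] pairs. A minimal sketch of turning that matcher output into a findings score is shown below; the excerpt does not give the exact LFS formula, so the F1-style aggregation, the function name, and the string-equality check are all illustrative assumptions.

```python
import json

def lfs_from_matches(matcher_output: str, generated: list[str], reference: list[str]) -> float:
    """Score generated findings against reference findings from matcher output.

    `matcher_output` is the raw text the matching LLM returns: a JSON array
    of [input_finding, target_finding] pairs (possibly empty). The F1-style
    aggregation here is an assumption; the excerpt only says the LLM
    "produces the matching results".
    """
    pairs = json.loads(matcher_output)
    # Enforce the one-to-one constraint from the prompt: each input and
    # each target finding may appear in at most one pair.
    used_inputs, used_targets, n_matched = set(), set(), 0
    for inp, tgt in pairs:
        if inp in used_inputs or tgt in used_targets:
            continue  # drop pairs that violate one-to-one matching
        if inp in generated and tgt in reference:
            used_inputs.add(inp)
            used_targets.add(tgt)
            n_matched += 1
    if not generated or not reference or n_matched == 0:
        return 0.0
    precision = n_matched / len(generated)
    recall = n_matched / len(reference)
    return 2 * precision * recall / (precision + recall)
```

With two generated findings, three reference findings, and one valid matched pair, precision is 1/2, recall is 1/3, and the harmonic mean is 0.4.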
Figure 10: Illustration of the collaborative report writing process involving O2-Searcher and the writing agent. O2-Searcher retrieves, filters relevant information from the web based on a query, and summarizes key findings. Subsequently, the writing agent utilizes the output from O2-Searcher to generate the report outline and the final document. The writing agent first generates an outline based on the key findings provided by our O2-Searcher. Subsequently, it produces each section according to the respective outlines as well as the collected content and merges all sections to create the final report, complete with reference URLs. The prompt templates for the writing agent to generate outlines, sections, and final report are presented as follows: Prompt template for outline generation You are an expert researcher. Follow these instructions when responding: - You may be asked to research subjects that are after your knowledge cutoff, assume the user is right when presented with news. - The user is a highly experienced analyst, no need to simplify it, be as detailed as possible and make sure your response is correct. - Be highly organized. - Suggest solutions that I didn’t think about. - Be proactive and anticipate my needs. - Treat me as an expert in all
subject matter. - Mistakes erode my trust, so be accurate and thorough. - Provide detailed explanations, I’m comfortable with lots of detail. - Value good arguments over authorities, the source is irrelevant. - Consider new technologies and contrarian ideas, not just the conventional wisdom. - You may use high levels of speculation or prediction, just flag it for me. Given the following user query <query> QUERY </query> from the user. Generate a report outline to systematically solve the user’s query. Important! Analysis requirements: 1. First, carefully analyze the user’s query to identify: - The core question or problem the user is trying to solve - Any specific aspects or dimensions they’re particularly interested in - The likely purpose of their research (practical application, theoretical understanding, etc.) 2. Create a logical outline structure that: - Starts with foundational concepts the user needs to understand - Progresses through increasingly specific or complex aspects of the topic - Concludes with practical applications or future implications relevant to the user’s query 3. Throughout the outline: - Maintain consistent terminology aligned with the user’s query - Ensure each section directly contributes to answering the user’s core question - Avoid tangential information even if interesting but not directly relevant The final outline should read as a single, unified document with a clear narrative arc that guides the user from their initial question to a comprehensive understanding of the topic. Here are distilled key findings relevant to the user’s query: <contents> CONTENTS </contents>. Organize the generated content entries as a dictionary. Don’t use markdown code block label, just represent the dictionary in JSON format. Follow this operational protocol: 1. Represent the outline as a dictionary where: - Keys are chapter titles (e.g., "Introduction", "Analysis", "Conclusion"). - Values are lists of subsection titles or key points for each section. 2.
Create EXACTLY 5-7 first-level chapters (no more, no less). 3. For each chapter, include 1-4 second-level sections that explore specific aspects (no more, no less). 4. Outline must include only up to the SECOND level of the title, please DO NOT write the third level of the title or bullet points. 5. ALL outline items should be written in Chinese. 6. Do NOT include any explanations, notes. Don’t use markdown code block label, just represent the dictionary in JSON format. Example: { "Introduction": { %ATTN: each chapter should be a dict not a list, "Background": "a concise description of this section in ONE sentence", "Objective": "a concise description of this section in ONE sentence", ....}, "Analysis": { "Key Findings": "a concise description of this section in ONE sentence", "Supporting Evidence": "a concise description of this section in ONE sentence", ..}, "Conclusion": { "Summary": "a concise description of this section in ONE sentence", "Future Directions": "a concise description of this section in ONE sentence", ...}, ... "FIRST_LEVEL_TITLE": { "SECOND_LEVEL_TITLE": "a concise description of this section in ONE sentence", "SECOND_LEVEL_TITLE": "a concise description of this section in ONE sentence", ...}, ... } Prompt template for section generation Given the outlines <outlines>OUTLINES</outlines> of a specific section and the original contents <contents>CONTENTS</contents>, write
detailed and coherent content based on the key points and webpage content. The outline is structured as a dictionary, where the keys represent section titles, and the corresponding values are lists of subsection titles. If a main section does not contain any subsections, ensure the content is thorough and comprehensive, rather than limited to a few brief sentences. Include the following elements: 1. Analysis: Break down the information into meaningful insights. 2. Synthesis: Connect ideas from different sources to form a cohesive narrative. 3. Explanations: Provide clear and concise explanations of the content. 4. Add in-text references (e.g., [1], [2]) based on the content index to reference the URLs where the information was sourced. 5. At the end of the section, list all the URLs referenced in the content. Generate the contents using the Markdown format. Prompt template for final report generation Given the following initial query <query> QUERY </query> and contents <contents> CONTENTS </contents> of each section from the user: 1. Merge all sections into a single, cohesive report. Try to retain the original content to maintain the integrity and accuracy of the information. Ensure the report is well-structured and flows logically from one section to the next. 2. Add in-text references (e.g., [1], [2]) to reference the URLs where the information was sourced. 3. Include a References Section at the end of the report. Provide a numbered list of the URLs corresponding to the in-text citations. 4. Generate the final report in Chinese, using Markdown format. Follow these formatting guidelines: - Use clear headings for main sections and subsections. - Ensure the content is thorough and comprehensive, organized with paragraphs for each subsection instead of listing items. We present a sample generated case using the initial query: "Why can’t Southeast Asia form a deep economic union like the EU?"
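The merge-and-renumber step specified in the final-report prompt (combine per-section bodies, re-index the [1], [2] citations against one global reference list) can be sketched mechanically. The section representation and function name below are assumptions for illustration, not the paper's actual implementation, and the sketch skips the Chinese-output requirement.

```python
import re

def merge_report(title: str, sections: list[tuple[str, list[str]]]) -> str:
    """Merge (markdown_body, source_urls) sections into one report.

    Re-numbers in-text citations like [1], [2] so they index a single,
    de-duplicated global reference list. Assumes each section's body
    cites its own URL list in local order -- a simplification of the
    prompt's instructions.
    """
    references: list[str] = []
    merged_bodies: list[str] = []
    for body, urls in sections:
        # Map this section's local citation numbers to global ones,
        # de-duplicating URLs that several sections share.
        local_to_global = {}
        for i, url in enumerate(urls, start=1):
            if url not in references:
                references.append(url)
            local_to_global[i] = references.index(url) + 1
        merged_bodies.append(
            re.sub(r"\[(\d+)\]", lambda m: f"[{local_to_global[int(m.group(1))]}]", body)
        )
    ref_lines = "\n".join(f"{i}. {u}" for i, u in enumerate(references, start=1))
    return f"# {title}\n\n" + "\n\n".join(merged_bodies) + f"\n\n## References\n{ref_lines}"
```

When two sections cite the same URL, both citations collapse to one entry in the final numbered reference list.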
Due to the extensive length of the generated report, we provide the key findings, outlines, and part of the generated report below: Key findings for the query "Why can’t Southeast Asia form a deep economic union like the EU?" - Southeast Asia lacks the political unity and shared values necessary for forming a deep economic union similar to the EU. - The diverse cultures, languages, and economic structures of Southeast Asian countries make it challenging to achieve a unified economic strategy. - The geopolitical interests and rivalries among Southeast Asian countries hinder the formation of a deep economic union. - The complex political landscape and historical conflicts in Southeast Asia pose obstacles to establishing a stable and cohesive economic union. - The levels of economic development and industrialization differ significantly among Southeast Asian countries, making it difficult to create a uniform economic policy. - The existing political and economic frameworks in Southeast Asia, such as ASEAN, are not well-developed and lack the institutional capacity to support a deep economic union. - The economic policies and regulatory environments in Southeast Asian countries vary, which complicates the process of forming a unified economic union. - The lack of a common currency and monetary policy coordination in Southeast Asia further hinders
the formation of a deep economic union. - The current state of globalization and regional integration in Southeast Asia is not conducive to the creation of a deep economic union. - The deep economic union model of the EU does not fully reflect the unique characteristics and needs of Southeast Asian countries. Outlines for the query "Why can’t Southeast Asia form a deep economic union like the EU?" # Introduction - **Problem Presentation**: Present the user’s question about why Southeast Asia cannot form a deep economic union similar to the EU - **Research Purpose**: Explain that the purpose of this research is to understand the underlying obstacles # Overview of the EU’s Deep Economic Union - **Features of the Union**: Introduce the characteristics of the EU’s deep economic union in terms of economic policies, currency, etc. - **Formation Conditions**: Analyze the political, economic and other conditions required for the EU to form a deep economic union # Internal Factors Hindering the Formation of a Deep Economic Union in Southeast Asia - **Political and Value Differences**: Point out the lack of political unity and shared values in Southeast Asia - **Diversity of Economic Structures**: Illustrate the significant differences in economic development levels and industrialization degrees among countries - **Differences in Policies and Regulations**: Emphasize the obstacles brought by different economic policies and regulatory environments # External Factors Hindering the Formation of a Deep Economic Union in Southeast Asia - **Geopolitical Influence**: Explain how geopolitical interests and rivalries impede the formation of the union - **Legacy of Historical Conflicts**: Describe the impact of historical conflicts on establishing a stable economic union # Analysis of Existing Frameworks and the Globalization Background - **Limitations of Existing Frameworks**: Analyze the deficiencies of existing frameworks such as ASEAN in supporting a deep economic union - 
**Current Situation of Globalization and Regional Integration**: Explore the unfavorable impact of the current globalization and regional integration situation # Conclusion and Outlook - **Summary**: Summarize the main reasons why Southeast Asia cannot form a deep economic union - **Future Directions**: Propose possible directions for promoting economic integration in Southeast Asia in the future Part of report for the query "Why can’t Southeast Asia form a deep economic union like the EU?" # Why can’t Southeast Asia form a deep economic union like the EU? ## Introduction ### Problem Statement The central question is why Southeast Asia struggles to form a deep economic union comparable to the European Union (EU). This exploration aims to understand the underlying obstacles preventing Southeast Asia from achieving such a high level of economic integration. ### Research Purpose The purpose of this study is to comprehensively analyze and explain the fundamental barriers hindering the formation of a deep economic union in Southeast Asia, similar to that of the EU. By doing so, we can better understand the unique challenges and complexities of the Southeast Asian region. ... ## Conclusion and Outlook ### Summary: Main Reasons Why Southeast Asia Cannot Form a Deep Economic Union #### Economic Disparities Southeast Asian countries exhibit significant economic disparities, evident from GDP
and GDP per capita data... #### Political and Geopolitical Challenges - **Territorial Disputes**: The South China Sea is a major source of tension among Southeast Asian countries and between them and China... - **External Influences**: The region is caught in the geopolitical rivalry between the United States and China... #### Cultural and Social Differences Southeast Asia is a region with diverse cultures, religions, and languages [4]. Major religions include Buddhism, Islam, Hinduism, Christianity, Confucianism, Taoism, and Animism, each with its own set of beliefs and practices... #### Regulatory and Institutional Barriers - **Lack of Harmonization**: Regulatory frameworks in Southeast Asian countries are not fully harmonized... - **Weak Institutional Capacity**: Some ASEAN countries may have weak institutional capacity in terms of policy implementation, governance, and regulatory enforcement... ### Future Directions for Promoting Economic Integration in Southeast Asia #### Bridging Economic Gaps - **Infrastructure Development**: Investing in regional infrastructure projects can help bridge economic gaps. For example, developing transportation networks such as roads, railways, and ports can improve connectivity between less developed and more developed areas. This can facilitate trade, attract investment, and promote economic growth in less developed regions... - **Capacity Building Programs**: Implementing capacity building programs in areas such as education, skills training, and technology transfer can help less developed countries catch up with more advanced ones... #### Addressing Political and Geopolitical Issues - **Strengthening Conflict Resolution Mechanisms**: ASEAN should strengthen its conflict resolution mechanisms, particularly with regard to territorial disputes... - **Balancing External Relations**: Southeast Asian countries should continue to balance their external relations and avoid over-reliance on any single major power...
#### Leveraging Cultural and Social Diversity - **Cultural Exchange and Understanding**: Promoting cultural exchange and understanding among Southeast Asian countries can help overcome cultural barriers to economic integration... - **Catering to Diverse Consumer Needs**: Businesses can leverage the cultural and social diversity of the region by developing products and services that cater to different consumer preferences... #### Strengthening Regulatory and Institutional Frameworks - **Regulatory Harmonization**: ASEAN should pursue greater regulatory harmonization in key areas such as trade, investment, intellectual property, and the digital economy... - **Institutional Strengthening**: Strengthening ASEAN’s institutional capacity is crucial for the effective implementation of economic integration initiatives...
Preprint. Under review. Evaluating Large Language Model with Knowledge Oriented Language Specific Simple Question Answering Bowen Jiang2∗, Runchuan Zhu2∗, Jiang Wu1∗†, Zinco Jiang2, Yifan He2, Junyuan Gao3, Jia Yu1, Rui Min1, Yinfan Wang4, Haote Yang1, Songyang Zhang1, Dahua Lin1, Lijun Wu1, Conghui He1‡ 1Shanghai Artificial Intelligence Laboratory 2Peking University 3University of Chinese Academy of Sciences 4Shanghai University heconghui@pjlab.org.cn Abstract We introduce KoLasSimpleQA, the first benchmark evaluating the multilingual factual ability of Large Language Models (LLMs). Inspired by existing research, we created the question set with features such as single knowledge point coverage, absolute objectivity, unique answers, and temporal stability. These questions enable efficient evaluation using the LLM-as-judge paradigm, testing both the LLMs’ factual memory and self-awareness (“know what they don’t know”). KoLasSimpleQA expands existing research in two key dimensions: (1) Breadth (Multilingual Coverage): It includes 9 languages, supporting global applicability evaluation. (2) Depth (Dual Domain Design): It covers both the general domain (global facts) and the language-specific domain (such as history, culture, and regional traditions) for a comprehensive assessment of multilingual capabilities. We evaluated mainstream LLMs, including traditional LLMs and emerging Large Reasoning Models. Results show significant performance differences between the two domains, particularly in performance metrics, ranking, calibration, and robustness. This highlights the need for targeted evaluation and optimization in multilingual contexts. We hope KoLasSimpleQA will help the research community better identify LLM capability boundaries in multilingual contexts and provide guidance for model optimization. We will release KoLasSimpleQA at https://github.com/opendatalab/KoLasSimpleQA . 1 Introduction Large Language Models (Grattafiori et al. (2024); Yang et al.
(2024); Guo et al. (2025)) have advanced significantly, yet hallucination—where models produce unverified or misleading information—remains a major challenge, affecting their reliability and broader use. To tackle this, the SimpleQA (Wei et al. (2024)) and ChineseSimpleQA (He et al. (2024b)) benchmarks were introduced to evaluate LLMs’ factual ability using short, fact-based questions. These questions focus on a single knowledge point, with answers that are objective, stable, and not open to interpretation. Recent studies (Zhang et al. (2023); Shi et al. (2022); Huang et al. (2023a)) indicate that LLM performance varies across languages, particularly in factual ability, with models typically performing better in English. However, the SimpleQA and ChineseSimpleQA benchmarks are limited to English and Chinese. Additionally, most evaluations of LLMs in non-English contexts focus on general knowledge rather than language-specific content like history, culture, and local traditions. While LLMs excel in general knowledge, they often struggle with language-specific facts.

∗Equal contribution. †Project lead. ‡Corresponding author.
arXiv:2505.16591v1 [cs.CL] 22 May 2025

Benchmark                            Lang.     Lang. specific  Easy to evaluate
XTREME (Hu et al. (2020))            40 lang.  ✗               ✗
Okapi (Lai et al. (2023))            en        ✗               ✗
SimpleQA (Wei et al. (2024))         en        ✗               ✓
ChineseSimpleQA (He et al. (2024b))  zh        ✗               ✓
MINTQA (He et al. (2024a))           en        ✗               ✗
BenchMAX (Huang et al. (2025))       17 lang.  ✗               ✗
MMLU-ProX (Xuan et al. (2025))       42 lang.  ✗               ✓
KoLasSimpleQA (Ours)                 9 lang.   ✓               ✓

Table 1: Comparison between KoLasSimpleQA and other benchmarks. The comparison is conducted along three dimensions: the range of supported
languages, whether the benchmark includes language-specific knowledge, and whether it is easy to evaluate. To tackle this problem, we introduce Knowledge-Oriented Language-Specific Simple Question Answering (KoLasSimpleQA), a benchmark comprising simple fact-based QA samples grounded in genuinely language-specific knowledge across nine languages. KoLasSimpleQA has three main features: (1) Foundation: Inspired by Wei et al. (2024); He et al. (2024b), we crafted a question set with attributes such as single knowledge point coverage, absolute objectivity, unique answers, and temporal stability. These questions enable efficient evaluation using the LLM-as-judge paradigm, assessing both the factual memory and self-awareness of LLMs (i.e., their ability to “know what they don’t know”). (2) Breadth Expansion: Unlike existing work (Wei et al. (2024); He et al. (2024b)) that is limited to a few languages, KoLasSimpleQA includes 9 languages, allowing for performance evaluation of LLMs in multilingual contexts and supporting assessments of global applicability. (3) Depth Exploration: It encompasses both the general domain (global facts) and the language-specific domain (such as history, culture, and regional traditions). We collected data from Wikipedia, categorizing entries into general and language-specific domains based on the number of inter-language links each article has (as shown in Figure 2; see details in §2.2.1). Using these classifications and filtered entries, we constructed questions for both domains, enabling a comprehensive and in-depth evaluation of LLMs in multilingual settings. We evaluated mainstream LLMs on KoLasSimpleQA, covering both traditional LLMs and the latest Large Reasoning Models (LRMs). Key insights from our study include: (1) Performance Disparity Across Domains: LLMs perform much worse in the language-specific domain than in the general domain.
(2) Translating Non-English Queries: Translating queries into English is a common strategy to enhance multilingual performance. While effective in the general domain, it is less so in the language-specific domain. (3) Calibration Performance: LLMs show significantly poorer calibration in the language-specific domain compared to the general domain. (4) Knowledge Memorization Robustness: LLMs are notably less robust in the language-specific domain than in the general domain. 2 KoLasSimpleQA 2.1 Overview We created a multilingual QA benchmark including both the general and language-specific domains. The general domain covers global knowledge shared across languages, while the language-specific subset targets knowledge unique to individual linguistic and cultural contexts. Our benchmark includes 9 languages: Hungarian (hu), Czech (cs), Serbian (sr), Russian (ru), Chinese (zh), Korean (ko), Thai (th), Arabic (ar), and Vietnamese (vi), along with their English (en) translations. The data construction process is shown in Figure 1. Figure 1: Construction pipeline of KoLasSimpleQA. The process includes Wikipedia entry selection based on inter-language links, triple and QA pair generation using GPT-4o, and a two-stage quality control to ensure question quality and diversity. 2.2 Benchmark Construction 2.2.1 Collection of Specific & General Wikipedia Entries We crawled all Wikipedia pages in nine languages and extracted
their contents. Unlike conventional methods focusing only on entry content, we specifically extracted inter-language link information for each entry and counted the number of these links, denoted as $n_{\text{ill}}$. An inter-language link is a hyperlink that connects a Wikipedia page in one language to the related page in another language, aiding navigation across different language versions. These links are usually located in the sidebar at the top right of the webpage, as shown in Figure 2. It’s important to note that pages in different language versions of Wikipedia are not machine-translated; they are created by users who are typically native speakers of the respective language. If a page exists in a particular language, the topic or knowledge it represents is likely of interest to native speakers of that language. Thus, if an entity lacks versions in other languages ($n_{\text{ill}} = 0$), it is classified as language-specific, indicating the knowledge is unique to that language. Conversely, if an entity appears in many language versions ($n_{\text{ill}}$ is large), it likely represents global knowledge, making it general. For each of the nine languages, we sample a total of $N_{\text{specific}} + N_{\text{general}}$ entries. Specifically, we randomly select $N_{\text{specific}}$ entries from those where $n_{\text{ill}} = 0$, ensuring that they represent language-specific knowledge. Formally, this subset is defined as $E^{\text{selected}}_{\text{specific}} = \{x \in D \mid n_{\text{ill}}(x) = 0, |E^{\text{selected}}_{\text{specific}}| = N_{\text{specific}}\}$. Additionally, we identify the top $N_{\text{general}}$ entries with the highest $n_{\text{ill}}$ values, as these are more likely to contain language-general knowledge, defined as $E^{\text{selected}}_{\text{general}} = \{x \in D \mid \mathrm{Rank}(n_{\text{ill}}(x)) \leq N_{\text{general}}\}$. Here, $D$ represents the set of all Wikipedia entries, $E^{\text{selected}}_{\text{specific}}$ and $E^{\text{selected}}_{\text{general}}$ denote the selected sets of language-specific and language-general entries, respectively. The function $n_{\text{ill}}(x)$ returns the number of inter-language links associated with entity $x$, and $\mathrm{Rank}(n_{\text{ill}}(x))$ represents the ranking of entity $x$ based on its $n_{\text{ill}}$ value.
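The selection rule above follows directly from the definitions: entries with zero inter-language links are language-specific candidates, and the entries with the largest link counts are general candidates. A minimal sketch is shown below; representing the crawled entries as a title-to-count dictionary is an assumption for illustration, not the paper's actual data pipeline.

```python
import random

def select_entries(entries: dict[str, int], n_specific: int, n_general: int, seed: int = 0):
    """Split Wikipedia entries by inter-language link count (n_ill).

    `entries` maps entry title -> n_ill. Language-specific candidates
    have n_ill == 0 and are sampled at random; general candidates are
    the n_general titles with the highest n_ill, matching the set
    definitions in the text (the dict representation is an assumption).
    """
    specific_pool = [title for title, n_ill in entries.items() if n_ill == 0]
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    specific = rng.sample(specific_pool, min(n_specific, len(specific_pool)))
    # Rank all entries by n_ill, descending, and keep the top n_general.
    general = sorted(entries, key=entries.get, reverse=True)[:n_general]
    return specific, general
```

The paper sets both sample sizes to 400 per language; the sketch caps the specific sample at the pool size to stay safe when fewer zero-link entries exist.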
Finally, $N_{\text{specific}}$ and $N_{\text{general}}$ define the number of sampled language-specific and language-general entries, respectively. We set $N_{\text{specific}} = 400$ and $N_{\text{general}} = 400$. 2.2.2 Construction of Question-Answer We employ GPT-4o to extract triples (E1, R, E2) that satisfy the requirements from the collected general and specific Wikipedia entries (see prompt in Table 6). Subsequently, we leverage GPT-4o to generate Question-Answer (QA) pairs from these triples and translate them into their corresponding English parallel QA pairs (see prompts in Tables 7 and 8).

Figure 2: Illustration of inter-language links on a Wikipedia page. The number of such links ($n_{\text{ill}}$) is used to distinguish between language-specific and language-general knowledge. For example, $n_{\text{ill}}$ of “汉朝” is 122, so it is general knowledge; $n_{\text{ill}}$ of “平淮西碑” is 0, so it is language-specific knowledge.

                          ar   cs   hu   ko   ru   sr   th   vi   zh  total
General Domain           162  142  169  151   92  133  209  121  127   1306
Language Specific Domain 117   90  127   84   57   79   99   92   96    841

Table 2: Distribution of question counts across languages in KoLasSimpleQA.

For example, if the title of a Wikipedia entry is “Lombardy,” and one segment of the content is “Lombardy is located between the Alps mountain range and tributaries of the river Po, and includes Milan, its capital, the largest metropolitan area in the country, and among the largest in the EU,” the extracted triple
would be (Lombardy, capital is, Milan), and the corresponding QA pair would be “What is the capital city of the Lombardy region?” Additionally, we extract the reversed triples (E2, R’, E1) and construct QA pairs that explicitly capture inverse relations. This approach allows us to assess whether the model has genuinely internalized the underlying knowledge, as discussed in this study (Berglund et al. (2023)). The reversed triple corresponding to the above example is (Milan, is capital of, Lombardy), and the corresponding QA pair would be “What Italian region is Milan the capital of?” 2.2.3 Two-stage Quality Control For each constructed QA pair, we expect them to meet the following criteria: (1) Triple Consistency (TC) : The question must be generated from the head entity and relation of the triple, while the answer must correctly correspond to the tail entity. (2) Self-Sufficiency (SS) : The question should contain all necessary information to be answerable on its own, without requiring external context. (3) Non-Triviality (NT) : The question should not be overly simplistic, and the answer should not be directly inferable from the question itself. (4) Objectivity (OBJ) : The question must be based on verifiable, factual knowledge, avoiding opinion-based, subjective, or interpretive content. (5) Temporal Stability (TS) : The answer must remain valid over time and should not be subject to change. (6) Answer Uniqueness (AU) : The question must be specific enough to elicit a single, unambiguous answer. Avoid vague formulations like “when” or “where” that may lead to multiple valid answers; instead, use precise expressions such as “which year” or “which city”. To ensure questions meet standards, we adopted a two-stage Quality Control strategy. In Stage 1, GPT-4o filters the questions based on the predefined criteria (see prompt in Table 9). In Stage 2, native-speaking human annotators review the questions without reference answers, checking if they meet the criteria. 
For qualified questions, annotators provide answers, using search engines if needed, and cite sources (e.g., URLs). The LLM-as-judge method then compares the annotators’ answers with the reference answer (see Table 10). If both annotators agree on the correctness of the reference answer, the question is deemed qualified. Finally, to ensure the diversity of the questions, we further filtered questions from the same entry, ensuring that each entry corresponds to only one QA pair (QA pairs in a reverse relationship are excluded from this filter).

2.3 Benchmark Summary

Throughout the construction and quality control of KoLasSimpleQA, a large number of low-quality question-answer pairs were filtered out. Initially, 400 Wikipedia entries were selected for each language. After the rigorous two-stage quality control process, a total of 2,147 QA pairs were retained across all languages. Among these, there are 452 reverse-relationship QA pairs in the general domain and 190 in the language-specific domain. The distribution of QA pairs per language is shown in Table 2. Representative examples of QA pairs in KoLasSimpleQA are presented in Figure 3, while examples of QA pairs in a reverse relationship are illustrated in Figure 4.

Figure 3: Example QA pairs in KoLasSimpleQA. (The figure shows parallel original-language/English QA pairs for all nine languages in both domains, e.g., general domain, zh: Q: “伦巴第大区的首府是哪座城市?” A: “米兰” / Q: “What is the capital city of the Lombardy region?” A: “Milan”.)

Figure 4: Example QA pairs in the reverse relationship in KoLasSimpleQA. (The figure shows forward/reverse pairs, e.g. Q: “In Norse mythology, who is Freyja’s father?” A: “Njord” and its reverse Q: “In Norse mythology, who is the daughter of Njord?” A: “Freyja”.)

3 Experiment

3.1 Experimental setup

We evaluated a selection of state-of-the-art large language models (LLMs), including GPT-4o (Hurst et al., 2024) and GPT-4o-mini¹, as well as Deepseek-V3 (Liu et al., 2024a), Qwen2.5-Instruct (Yang et al., 2024), and Llama-3.1-Instruct (Grattafiori et al., 2024). Additionally, we assessed the latest Large Reasoning Models (LRMs), including OpenAI o1-mini, QwQ-32B-Preview, QwQ-32B (abbreviated as QwQ-preview and QwQ respectively in subsequent text), QwQ-Plus, and Deepseek-R1 (Guo et al., 2025). A comprehensive overview of these models is presented in Table 14. All experiments were conducted using OpenCompass (Contributors, 2023b). For traditional instruction-based LLMs, we adopted OpenCompass’s default configuration for maximum output length. For LRMs, we increased the maximum output length to 8192 tokens to accommodate complex reasoning processes and prevent premature truncation.
For OpenAI models (the GPT series and o1-mini), inference was performed via the official API, following OpenCompass’s default settings. For Deepseek-V3 and Deepseek-R1, due to instability in Deepseek’s official API under heavy usage, we used equivalent services provided by Alibaba Cloud (https://cn.aliyun.com/), maintaining OpenCompass’s default parameters with a temperature of 0.7. All remaining models listed in Table 14 were run locally on NVIDIA A100 GPUs, utilizing LMDeploy (Contributors, 2023a) as the inference backend. In these cases, we used OpenCompass’s standard settings, with temperature = 1e-6 and top-k = 1.

¹We used gpt-4o-2024-11-20 for GPT-4o and gpt-4o-mini-2024-07-18 for GPT-4o-mini.

3.2 Evaluation Metrics

Following Wei et al. (2024), we use GPT-4o as a judge to classify the LLMs’ responses into three categories: CORRECT (the response fully contains the reference answer without contradictions), INCORRECT (the predicted answer contradicts the reference answer), and NOT ATTEMPTED (the response does not fully provide the reference answer but is free of contradictions). The judging prompt is detailed in Tables 12 and 13. Furthermore, we use five evaluation metrics to assess model performance. Correct (CO): the proportion of CORRECT answers among all answers. Not Attempted (NA): the proportion of NOT ATTEMPTED answers among all answers. Incorrect (IN): the proportion of INCORRECT answers among all answers. Correct Given Attempted (CGA): the proportion of CORRECT answers among
those attempted. F-score: a harmonic mean of CO and CGA, balancing correctness and successful attempts.

4 Results & Analysis

4.1 Overall Results and Domain Disparity

The overall performance (F-score) of all LLMs on KoLasSimpleQA is shown in Tables 3 and 4, with additional metrics in Appendix D. LLMs perform significantly worse in the language-specific domain than in the general one, highlighting the challenges of language-specific factual QA. In the general domain, Deepseek-R1 and GPT-4o achieved the top results, with only a 1.2-percentage-point difference. However, in the language-specific domain, GPT-4o led by nearly 7 percentage points, demonstrating its superior language-specific factual ability.

                          zh                ko                th                ar                vi
                          general  specific general  specific general  specific general  specific general  specific
GPT-4o-2024-11-20         95.28    23.95    86.58    38.36    86.27    31.28    89.51    38.89    94.21    54.55
GPT-4o-mini               74.02    9.63     74.17    28.92    67.63    16.49    72.84    19.30    80.17    34.97
o1-mini                   86.40    8.82     81.23    19.20    76.05    13.66    79.62    9.09     85.48    19.39
Llama-3.1-Instruct-70B    86.51    11.76    78.15    22.89    79.81    16.22    78.02    13.79    85.95    33.15
Llama-3.1-Instruct-8B     45.85    5.41     42.38    13.58    52.40    11.40    48.75    3.79     69.42    20.11
Qwen2.5-Instruct-72B      88.19    23.73    78.79    20.69    75.18    13.04    75.93    10.38    87.60    33.33
Qwen2.5-Instruct-7B       57.71    14.57    39.59    7.94     45.15    8.28     34.89    6.67     60.00    19.88
QwQ-32B                   87.40    22.95    76.67    19.63    77.40    12.50    79.26    12.17    83.82    26.52
QwQ-32B-Preview           79.03    19.05    74.56    17.60    72.40    13.85    78.29    9.88     81.03    20.13
QwQ-Plus                  63.32    15.48    57.85    19.85    50.32    10.39    60.70    9.63     61.38    18.18
Deepseek V3               94.49    34.78    82.12    30.00    79.14    19.59    86.42    22.81    92.56    44.94
Deepseek R1               96.85    52.41    86.75    25.30    87.98    19.59    90.74    25.00    93.39    52.17

Table 3: Model performance (F-score) on KoLasSimpleQA (part 1/2). Red indicates the best, followed by green and then blue.

We compared model performance rankings between general and language-specific domains.
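As a concrete reading of the metric definitions in Section 3.2, the following sketch computes CO, NA, IN, CGA, and the F-score from a list of judge labels (the function name and label strings are our own; in the paper the labels come from the GPT-4o judge):

```python
from collections import Counter

def metrics(labels: list[str]) -> dict[str, float]:
    """Compute CO, NA, IN, CGA, and F-score from judge labels.

    Each label is one of "CORRECT", "INCORRECT", "NOT_ATTEMPTED".
    """
    n = len(labels)
    counts = Counter(labels)
    co = counts["CORRECT"] / n
    na = counts["NOT_ATTEMPTED"] / n
    inc = counts["INCORRECT"] / n
    attempted = counts["CORRECT"] + counts["INCORRECT"]
    cga = counts["CORRECT"] / attempted if attempted else 0.0
    # F-score: harmonic mean of CO and CGA.
    f = 2 * co * cga / (co + cga) if (co + cga) else 0.0
    return {"CO": co, "NA": na, "IN": inc, "CGA": cga, "F": f}

# Example: 6 correct, 2 incorrect, 2 not attempted out of 10 answers.
m = metrics(["CORRECT"] * 6 + ["INCORRECT"] * 2 + ["NOT_ATTEMPTED"] * 2)
# CO = 0.6, CGA = 0.75, F = 2 * 0.6 * 0.75 / 1.35 ≈ 0.667
```

Note that the F-score rewards models for abstaining (NOT ATTEMPTED) rather than guessing wrongly, since incorrect answers lower CGA while abstentions do not.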
Figures 5(a) and 10 show that when models are ranked by their general-domain performance, their performance in the language-specific domain shows significant fluctuations and jumps across nearly all 9 languages. This suggests that models optimized for the general domain may not excel in language-specific domains, emphasizing the need for targeted evaluation and optimization for language-specific tasks.

                          cs                hu                ru                sr                avg.
                          general  specific general  specific general  specific general  specific general  specific
GPT-4o-2024-11-20         91.87    37.50    88.17    28.19    86.34    24.00    93.58    42.47    90.20    37.59
GPT-4o-mini               83.80    31.46    71.22    10.48    68.48    12.50    78.03    26.75    74.48    23.72
o1-mini                   84.89    24.83    73.27    10.58    79.78    20.93    85.17    26.36    81.32    19.81
Llama-3.1-Instruct-70B    89.44    26.82    80.47    16.87    83.06    15.93    91.73    31.37    83.68    23.56
Llama-3.1-Instruct-8B     60.07    23.73    46.15    9.32     43.72    7.27     65.41    16.77    52.68    14.75
Qwen2.5-Instruct-72B      81.69    21.30    52.38    6.31     72.13    20.75    73.68    16.11    76.17    20.61
Qwen2.5-Instruct-7B       50.18    8.70     35.11    0.00     32.58    6.32     33.59    15.60    43.20    12.11
QwQ-32B                   83.10    25.43    63.28    11.52    78.26    19.47    81.95    20.38    79.02    21.23
QwQ-32B-Preview           79.41    16.79    65.18    9.47     79.77    18.67    78.43    24.14    76.46    18.34
QwQ-Plus                  70.00    10.37    48.69    5.49     60.00    14.43    53.06    13.91    58.37    14.70
Deepseek V3               92.96    35.03    79.88    16.39    87.43    16.36    88.72    29.30    87.08    30.10
Deepseek R1               92.20    31.28    87.83    17.46    92.39    14.16    93.58    24.20    91.30    30.81

Table 4: Model performance (F-score) (part 2/2). “avg.” denotes the average result across all 9 languages. Red indicates the best, followed by green and then blue.

Figure 5: (a) Model performance (F-score) ranking in the general and language-specific domains. The models are sorted based on the general domain (blue line). (b) Differences in F-scores between the tran_en and the direct settings (a value greater than zero indicates that tran_en performs better). (c) Proportion of bidirectional correctness (P_bi) for general- and specific-domain questions across models.

4.2 Does Explicitly Translating into English Help?

Research has shown (Shi et al., 2022; Huang et al., 2023a) that translating non-English questions into English before inputting them into an LLM significantly improves performance compared to using the original language directly. In our KoLasSimpleQA experiments, we established two settings: the direct setting,² where questions are input in their original language, and the tran_en setting, where questions are first translated into English using GPT-4o³ before being input into the LLM. Figure 5(b) details the performance differences between these settings across the two domains. In the general domain, the tran_en setting consistently enhances performance across most models and languages. However, in the language-specific domain, models generally perform better when questions are presented in their original language.

²Unless otherwise specified, we default to the direct setting.
³We use gpt-4o-2024-08-06, which is not one of the models evaluated in this paper. See the prompt in Table 8.

4.3 Analysis of Calibration

To evaluate LLMs’ calibration, we prompted the LLM to provide a confidence level (0 to 100) when answering questions, as detailed in Table 11. Figure 6(a) shows calibration performance across the two domains. We used the Expected Calibration Error (ECE) to quantitatively assess calibration. The ECE is calculated by dividing confidence scores into M equally spaced bins. For each bin B_m, we compute the average confidence conf(B_m) and accuracy acc(B_m). ECE is defined as

ECE = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|,

where n is the total sample count and |B_m| is the sample count in bin m (we set M = 10). Lower ECE indicates better calibration. We calculated ECE for each model across the two domains, with results in Table 23. To summarize, we averaged results across all 12 models to get the Average ECE (AvgECE), shown in Table 5. Additionally, we computed the mean ECE across both domains and settings for each model, called mECE, with results in Figures 6(b) and 11.

Figure 6: (a) Calibration of LLMs based on their expressed confidence. (b) Mean of Expected Calibration Error (mECE); detailed results can be found in Table 23.

           general  specific
tran_en    0.06     0.48
direct     0.13     0.50

Table 5: Average Expected Calibration Error (AvgECE) across two domains and two settings (lower is better). Detailed results can be found in Table 23.

From these results, we conclude that: (1) Models show significantly poorer calibration in the language-specific domain compared to the general domain. (2) The tran_en setting improves calibration in both domains, although it only enhances the F-score in the general domain. This indicates that LLMs not only vary in answering ability across languages but also in calibration, with English being the most effective language. (3) LRMs demonstrate superior calibration compared to traditional LLMs, as all
five LRMs rank within the top six for mECE values. This suggests that, through thorough reasoning and reflection during inference, LRMs achieve better calibration performance than traditional LLMs.

4.4 Knowledge Memorization Robustness

Studies (Allen-Zhu & Li, 2023; Berglund et al., 2023) have shown that auto-regressive LLMs struggle to generalize bidirectionally, a phenomenon known as the Reversal Curse. For instance, a model trained on “A’s mother is B” may not correctly respond to “Who is B’s child?” This reflects the robustness of knowledge retention in LLMs. In developing KoLasSimpleQA, we included QA pairs with reverse relationships, as illustrated in Figure 4, to evaluate the robustness of LLMs’ knowledge memorization in multilingual contexts.

We defined the metric Proportion of Bidirectional Correctness as P_bi = N2/N1, where N1 is the number of reverse QA pairs for which the LLM correctly answered at least one question of the pair, and N2 is the number of pairs for which it answered both questions correctly. A higher P_bi indicates more robust memory of reverse knowledge points. As shown in Figure 5(c), models achieve significantly higher P_bi scores in the general domain than in the language-specific domain. This suggests that general-domain knowledge is more thoroughly represented in the pretraining data of LLMs, leading to more robust memorization, while language-specific domain knowledge is relatively scarce, underscoring the need for targeted optimization in these areas.

4.5 Analyzing the Reasoning Process of LRMs

Recently, LRMs such as o1 (Jaech et al., 2024) and Deepseek-R1 (Guo et al., 2025) have emerged as a new direction in the development of LLMs. These models significantly enhance reasoning abilities through comprehensive reasoning, self-reflective negation, and exploration of multiple reasoning paths, adhering to the test-time scaling law. However, research on their reasoning processes in multilingual scenarios remains underexplored.
We address this by analyzing the reasoning processes of LRMs on KoLasSimpleQA, as shown in Figure 7.

Figure 7: Segment the LRM’s reasoning process into distinct thoughts and judge the correctness of each.

Figure 8: Average number of tokens and thoughts generated during the LRM’s reasoning process (n_token and n_thought).

When responding to user queries, an LRM’s output typically includes two components: the reasoning process and the answer. As outlined in Wang et al. (2025), the reasoning process can be broken down into discrete “thoughts,” which are intermediate steps generated during reasoning. The LRM transitions between these thoughts, often marked by reflective phrases like “Wait” or “Alternatively.” Examples are shown in Figures 12 and 13. We used GPT-4o to segment the reasoning process into distinct thoughts (see prompts in Tables 24 and 25). Thoughts are classified as correct or incorrect: correct thoughts lead to CORRECT responses, while incorrect ones lead to INCORRECT responses. GPT-4o assessed the correctness
of each thought (see Table 26), with examples in Figures 12 and 13. The number of correct thoughts in a response is denoted as n_ct. A response with at least one correct thought is labeled R_{n_ct≥1}. We applied the above process to responses in both the general and language-specific domains of KoLasSimpleQA. The LRM’s responses were already categorized as CORRECT, INCORRECT, or NOT ATTEMPTED (refer to prompts in Tables 12 and 13). We then conducted statistical analysis across these two domains and three categories, resulting in six distinct evaluations.

Figure 9: P(R_{n_ct≥1}) of the INCORRECT and NOT ATTEMPTED responses.

To evaluate the LRM’s reasoning process, we define the following metrics: (1) n_token and n_thought: these metrics represent the average number of tokens and thoughts generated during the reasoning process, respectively, providing a direct measure of its length and indicating the associated overhead. (2) P(R_{n_ct≥1}): this metric denotes the proportion of R_{n_ct≥1} within a given category. We focus on this particularly in the INCORRECT and NOT ATTEMPTED categories, as it reflects instances where the LRM had the chance to answer correctly by “searching” through multiple thoughts but either answered incorrectly or chose not to attempt. We selected two representative LRMs, Deepseek-R1 and QwQ-32B, and analyzed their reasoning processes on KoLasSimpleQA using the aforementioned methods. The results are shown in Figures 8 and 9. Based on these results, we found: (1) Overhead in the reasoning process: as illustrated in Figure 8, Deepseek-R1 maintains consistent reasoning costs across all three categories. Conversely, QwQ-32B shows significantly higher reasoning costs for NOT ATTEMPTED responses compared to the other categories. Notably, there is no significant difference in reasoning costs between the two domains for either model.
(2) Regret in reasoning search: Figure 9 shows that both models exhibit a much higher P(R_{n_ct≥1}) in the general domain than in the language-specific domain. This indicates that when tackling global knowledge questions, LRMs have substantial opportunities to answer correctly through thorough knowledge recall, search, and reflection. However, due to various interferences and limitations in understanding their knowledge boundaries, they often miss the correct answer. In contrast, for language-specific knowledge questions, the lack of relevant pretraining knowledge becomes evident.

5 Conclusion

This paper presents KoLasSimpleQA, a multilingual evaluation benchmark for assessing the factual capabilities of LLMs. KoLasSimpleQA focuses on two domains: general and language-specific. Comprehensive experiments show that in multilingual contexts, LLMs perform differently in language-specific versus general domains, highlighting the need for specialized evaluations and model optimization based on language specifics. We hope KoLasSimpleQA will advance LLM development and application in multilingual scenarios.

Acknowledgments

This research was supported by the National Key R&D Program of China (2022ZD0160201).

References

Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.2, knowledge manipulation. arXiv preprint arXiv:2309.14402, 2023.

Shane Arora, Marzena Karpinska, Hung-Ting Chen, Ipsita Bhattacharjee, Mohit Iyyer, and Eunsol Choi. CaLMQA: Exploring culturally specific long-form question answering across 23 languages. arXiv preprint arXiv:2406.17761, 2024.

Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, and
Madian Khabsa. The Belebele benchmark: a parallel reading comprehension dataset in 122 language variants. arXiv preprint arXiv:2308.16884, 2023.

Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. The reversal curse: LLMs trained on “A is B” fail to learn “B is A”. arXiv preprint arXiv:2309.12288, 2023.

Mingda Chen, Zewei Chu, Karl Stratos, and Kevin Gimpel. Mining knowledge for natural language inference from Wikipedia categories. arXiv preprint arXiv:2010.01239, 2020.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. XNLI: Evaluating cross-lingual sentence representations. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2475–2485, Brussels, Belgium, October–November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1269. URL https://aclanthology.org/D18-1269/.

LMDeploy Contributors. LMDeploy: A toolkit for compressing, deploying, and serving LLMs. https://github.com/InternLM/lmdeploy, 2023a.

OpenCompass Contributors. OpenCompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass, 2023b.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Jie He, Nan Hu, Wanqiu Long, Jiaoyan Chen, and Jeff Z Pan. MINTQA: A multi-hop question answering benchmark for evaluating LLMs on new and tail knowledge.
arXiv preprint arXiv:2412.17032, 2024a.

Yancheng He, Shilong Li, Jiaheng Liu, Yingshui Tan, Weixun Wang, Hui Huang, Xingyuan Bu, Hangyu Guo, Chengwei Hu, Boren Zheng, et al. Chinese SimpleQA: A Chinese factuality evaluation for large language models. arXiv preprint arXiv:2411.07140, 2024b.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pp. 4411–4421. PMLR, 2020.

Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Ting Song, Yan Xia, and Furu Wei. Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting. arXiv preprint arXiv:2305.07004, 2023a.

Kaiyu Huang, Fengran Mo, Xinyu Zhang, Hongliang Li, You Li, Yuanchi Zhang, Weijian Yi, Yulong Mao, Jinchen Liu, Yuzhuang Xu, et al. A survey on large language models with multilingualism: Recent advances and new frontiers. arXiv preprint arXiv:2405.10936, 2024.

Xu Huang, Wenhao Zhu, Hanxu Hu, Conghui He, Lei Li, Shujian Huang, and Fei Yuan. BenchMAX: A comprehensive multilingual evaluation suite for large language models. arXiv preprint arXiv:2502.07346, 2025.

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Yao Fu, et al. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. Advances in Neural Information Processing Systems, 36:62991–63010, 2023b.

Aaron Hurst,
Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.

Sara Javanmardi, Yasser Ganjisaffar, Cristina Lopes, and Pierre Baldi. User contribution and trust in Wikipedia. In 2009 5th International Conference on Collaborative Computing: Networking, Applications and Worksharing, pp. 1–6, 2009. doi: 10.4108/ICST.COLLABORATECOM2009.8376.

Fajri Koto, Nurul Aisyah, Haonan Li, and Timothy Baldwin. Large language models only pass primary school exams in Indonesia: A comprehensive test on IndoMMLU. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 12359–12374, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.760. URL https://aclanthology.org/2023.emnlp-main.760/.

Fajri Koto, Haonan Li, Sara Shatnawi, Jad Doughman, Abdelrahman Boda Sadallah, Aisha Alraeesi, Khalid Almubarak, Zaid Alyafeai, Neha Sengupta, Shady Shehata, et al. ArabicMMLU: Assessing massive multitask language understanding in Arabic. arXiv preprint arXiv:2402.12840, 2024.

Viet Dac Lai, Chien Van Nguyen, Nghia Trung Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan A Rossi, and Thien Huu Nguyen. Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback. arXiv preprint arXiv:2307.16039, 2023.

Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. CMMLU: Measuring massive multitask language understanding in Chinese.
In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 11260–11285, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.671. URL https://aclanthology.org/2024.findings-acl.671/.

Noémi Ligeti-Nagy, Gergő Ferenczi, Enikő Héja, László János Laki, Noémi Vadász, Zijian Győző Yang, and Tamás Váradi. HuLU: Hungarian language understanding benchmark kit. In Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue (eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pp. 8360–8371, Torino, Italia, May 2024. ELRA and ICCL. URL https://aclanthology.org/2024.lrec-main.733/.

Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024a.

Chen Cecilia Liu, Fajri Koto, Timothy Baldwin, and Iryna Gurevych. Are multilingual LLMs culturally-diverse reasoners? An investigation into multicultural proverbs and sayings. arXiv preprint arXiv:2309.08591, 2023.

Chen Cecilia Liu, Iryna Gurevych, and Anna Korhonen. Culturally aware and adapted NLP: A taxonomy and a survey of the state of the art. arXiv preprint arXiv:2406.03930, 2024b.

Junho Myung, Nayeon Lee, Yi Zhou, Jiho Jin, Rifki Putri, Dimosthenis Antypas, Hsuvas Borkakoty, Eunsu Kim, Carla Perez-Almendros, Abinew Ali Ayele, et al. BLEnD: A benchmark for LLMs on everyday knowledge in diverse cultures and languages. Advances in Neural Information Processing Systems, 37:78104–78146, 2024.

Tarek Naous, Michael J Ryan, Alan
Ritter, and Wei Xu. Having beer after prayer? Measuring cultural bias in large language models. arXiv preprint arXiv:2305.14456, 2023.

Edoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulić, and Anna Korhonen. XCOPA: A multilingual dataset for causal commonsense reasoning. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2362–2376, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.185. URL https://aclanthology.org/2020.emnlp-main.185/.

Rifki Afina Putri, Faiz Ghifari Haznitrama, Dea Adhista, and Alice Oh. Can LLM generate culturally relevant commonsense QA data? Case study in Indonesian and Sundanese. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 20571–20590, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.1145. URL https://aclanthology.org/2024.emnlp-main.1145/.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057, 2022.

Guijin Son, Hanwool Lee, Sungdong Kim, Seungone Kim, Niklas Muennighoff, Taekyoon Choi, Cheonbok Park, Kang Min Yoo, and Stella Biderman. KMMLU: Measuring massive multitask language understanding in Korean. arXiv preprint arXiv:2402.11548, 2024.

Jiaxing Sun, Weiquan Huang, Jiang Wu, Chenya Gu, Wei Li, Songyang Zhang, Hang Yan, and Conghui He. Benchmarking Chinese commonsense reasoning of LLMs: From Chinese-specifics to reasoning-memorization correlations. arXiv preprint arXiv:2403.14112, 2024.
Haoyu Wang, Shuo Wang, Yukun Yan, Xujia Wang, Zhiyu Yang, Yuzhuang Xu, Zhenghao Liu, Liner Yang, Ning Ding, Xu Han, Zhiyuan Liu, and Maosong Sun. UltraLink: An open-source knowledge-enhanced multilingual supervised fine-tuning dataset. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 11929–11942, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.644. URL https://aclanthology.org/2024.acl-long.644/.

Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, et al. Thoughts are all over the place: On the underthinking of o1-like LLMs. arXiv preprint arXiv:2501.18585, 2025.

Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. Measuring short-form factuality in large language models. arXiv preprint arXiv:2411.04368, 2024.

Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. Do llamas work in English? On the latent language of multilingual transformers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 15366–15394, 2024.

Weihao Xuan, Rui Yang, Heli Qi, Qingcheng Zeng, Yunze Xiao, Yun Xing, Junjue Wang, Huitao Li, Xin Li, Kunyu Yu, et al. MMLU-ProX: A multilingual benchmark for advanced large language model evaluation. arXiv preprint arXiv:2503.10497, 2025.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Heng-Li Yang and
Cheng-Yu Lai. Motivations of Wikipedia content contributors. Computers in Human Behavior, 26(6):1377–1383, 2010.

Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, et al. KoLA: Carefully benchmarking world knowledge of large language models. arXiv preprint arXiv:2306.09296, 2023.

Arda Yüksel, Abdullatif Köksal, Lütfi Kerem Şenel, Anna Korhonen, and Hinrich Schütze. TurkishMMLU: Measuring massive multitask language understanding in Turkish. arXiv preprint arXiv:2407.12402, 2024.

Xiang Zhang, Senyu Li, Bradley Hauer, Ning Shi, and Grzegorz Kondrak. Don't trust ChatGPT when your question is not in English: A study of multilingual abilities and types of LLMs. arXiv preprint arXiv:2305.16339, 2023.

Yidan Zhang, Boyi Deng, Yu Wan, Baosong Yang, Haoran Wei, Fei Huang, Bowen Yu, Junyang Lin, and Jingren Zhou. P-MMEval: A parallel multilingual multitask benchmark for consistent evaluation of LLMs. arXiv preprint arXiv:2411.09116, 2024.

A Related Work

A.1 Multilingual Benchmark for General Domain

The evaluation of LLMs in multilingual contexts has consistently been a focus of academic interest. A comprehensive summary of existing multilingual evaluation datasets can be found in the review article by Huang et al. (2024). Many of these datasets are derived from translations of existing English evaluation sets. Earlier evaluation sets typically concentrated on individual tasks or capabilities, such as MGSM (Shi et al., 2022), XNLI (Conneau et al., 2018), XCOPA (Ponti et al., 2020), and BELEBELE (Bandarkar et al., 2023). Recent studies, such as P-MMEval (Zhang et al., 2024) and BenchMaX (Huang et al., 2025), utilize translated parallel corpora to comprehensively assess the performance of LLMs across multiple tasks and capability dimensions in multilingual settings.
These evaluation sets, based on translated parallel corpora, allow for a thorough assessment of the language-agnostic abilities of LLMs, but overlook the unique characteristics and capabilities inherent to most of the world's languages, such as culture, history, geography, religion, and local life, which are crucial and highly relevant to the speakers of these languages.

A.2 Multilingual Benchmark for Language-Specific Domain

There are various methods for constructing benchmarks that evaluate language-specific features (Liu et al., 2024b). Some works collect exam questions from different countries to build benchmarks similar to MMLU (Hendrycks et al., 2020), such as CMMLU (Li et al., 2024), CEval (Huang et al., 2023b), IndoMMLU (Koto et al., 2023), ArabicMMLU (Koto et al., 2024), KMMLU (Son et al., 2024), TurkishMMLU (Yüksel et al., 2024), and MMLU-ProX (Xuan et al., 2025). However, their data is limited in scale, and most of the knowledge remains global, such as STEM knowledge. Other works manually construct evaluation sets (Myung et al., 2024; Sun et al., 2024; Ligeti-Nagy et al., 2024), but this method also faces scalability challenges. Another approach crawls content from internet forums and user queries, then filters it to retain only queries relevant to language-specific features (Naous et al., 2023; Arora et al., 2024). Moreover, culture-related materials (such as concepts, proverbs, etc.) are collected as seeds, and evaluation sets are constructed through an "LLM generation +
human expert revision/inspection" approach (Liu et al., 2023; Putri et al., 2024), which is a recent trend. However, the problems in the aforementioned evaluation sets are typically complex: first, they test both factual knowledge and reasoning abilities; second, they usually involve queries about multiple facts or knowledge points; third, the answers are often open-ended, with no absolute or unique standard answer. This paper, following the framework of SimpleQA, constructs a multilingual version that effectively encompasses both global and language-specific domains, thereby facilitating the evaluation of LLMs' factual abilities in multilingual contexts through "simple" questions.

A.3 Dataset Based on Wikipedia

Wikipedia plays a crucial role in the development of large language models (LLMs). It serves not only as a corpus for nearly all LLM pretraining (Chen et al., 2020), but also as a foundation for constructing SFT data (Wang et al., 2024) and evaluation datasets (Yu et al., 2023; He et al., 2024b;a). In particular, Wikipedia contains entries in XX languages, which are typically composed of native speaker-generated content across various languages (Javanmardi et al., 2009; Yang & Lai, 2010), making it a valuable repository of native multilingual data. However, the Wikipedia corpus encompasses vast amounts of global knowledge, which, despite being presented in non-English forms (e.g., STEM-related entries, world knowledge), may not fully capture the linguistic characteristics of each language. This paper leverages the meta-information in Wikipedia pages to effectively distinguish entries, retaining only those that genuinely contain language-specific knowledge, such as history, geography, people, and events. Based on this, we have constructed a benchmark dataset that authentically reflects linguistic features.
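The metadata-based filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the category markers, the `entry` structure, and the `is_language_specific` helper are all hypothetical.

```python
# Hypothetical markers: category tags suggesting language-specific
# (local) knowledge vs. global knowledge such as STEM.
LOCAL_MARKERS = {"history", "geography", "people", "events", "culture"}
GLOBAL_MARKERS = {"mathematics", "physics", "chemistry", "biology"}

def is_language_specific(entry: dict) -> bool:
    """Keep an entry only if its category metadata matches a local
    marker and none of its tags indicate global knowledge."""
    tags = {t.lower() for t in entry.get("categories", [])}
    return bool(tags & LOCAL_MARKERS) and not (tags & GLOBAL_MARKERS)

entries = [
    {"title": "Li Sing Primary School", "categories": ["History", "Hong Kong"]},
    {"title": "Pythagorean theorem", "categories": ["Mathematics"]},
]
# Only the language-specific entry survives the filter.
kept = [e["title"] for e in entries if is_language_specific(e)]
```

In practice the category metadata would come from each Wikipedia edition's own category system, so the marker lists would need to be curated per language.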
B Prompt Templates Used in KoLasSimpleQA Construction and Quality Control

We show all the prompt templates used in the construction of KoLasSimpleQA. The prompt template for extracting the triples is shown in Table 6. The prompt template for constructing the QA pair from the triples is shown in Table 7. The prompt template for translating the original non-English questions of KoLasSimpleQA into English is shown in Table 8. The prompt template for quality control (stage 1) is shown in Table 9. The prompt template for quality control (stage 2) is shown in Table 10.

C Details of Evaluations

The prompt template for model inference is shown in Table 11. The prompt templates for LLM as judge are shown in Tables 12 and 13.

Extract Triple Prompt
You are a professional natural language processing assistant, responsible for extracting structured relational triples from text in the format of [[‘Entity A’, ‘Relation 1’, ‘Entity B’], [‘Entity B’, ‘Relation 2’, ‘Entity A’]]. Please extract all the triples that meet the requirements from the following text, where:
1. Entity A and Entity B are entities explicitly mentioned in the text.
2. Relation is the core meaning of the verb, phrase, or sentence that describes the association between Entity A and Entity B.
The output triples are represented in the form of a list. Ensure that:
1. The triples are accurate and based on the text content.
2. Do not include subjective inferences; only extract clear textual relations.
3. If the corresponding relational
triples between the two entities cannot be extracted based on the text, output [].
4. The language of the relations and entities of the extracted triples is consistent with the language of the provided text.
5. For Entity A and Entity B, it is necessary to extract both the relation triples from Entity A to Entity B and the relation triples from Entity B to Entity A.
The final output format is: [[‘Entity A’, ‘Relation 1’, ‘Entity B’], [‘Entity B’, ‘Relation 2’, ‘Entity A’]].
Output format: [[‘Entity A’, ‘Relation 1’, ‘Entity B’], [‘Entity B’, ‘Relation 2’, ‘Entity A’]]
Here is an example:
[Input text]: Li Sing Primary School Li Sing Primary School (English: Li Sing Primary School) is a government primary school located in Sai Ying Pun, Hong Kong. It was founded in 1954. In May 1953, Li Baochun announced that he would invest 250,000 yuan to open this primary school. The school site is the former site of Sai Ying Pun Government School. ...
[Entity A]: Li Baochun
[Entity B]: Li Sing Primary School
[Output Result]: { "triple_pair": [['Li Baochun', 'Opened', 'Li Sing Primary School'], ['Li Sing Primary School', 'is Opened by', 'Li Baochun']] }
[Input Text]: <context>
[Entity A]: <entity1>
[Entity B]: <entity2>
[Output Result]: { "triple_pair": [] }
Table 6: Prompt template for triple extraction in the construction of KoLasSimpleQA.

D Additional Results

The details of the evaluated LLMs are listed in Table 14. The detailed model performance is outlined below:
• Correct (CO): See Tables 15 and 16.
• Not Attempted (NA): See Tables 17 and 18.
• Incorrect (IN): See Tables 19 and 20.
• Correct Given Attempted (CGA): See Tables 21 and 22.
Figure 10: Model performance (F-score) ranking in general and language-specific domains. The models are sorted based on the general domain (blue line).
Figure 11: Mean of Expected Calibration Error (mECE); detailed results can be found in Table 23.
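Given per-question grades, the four quantities above and the reported F-score can be computed as in this sketch. The grade labels and the `metrics` helper are illustrative, and the F-score is assumed here to follow the SimpleQA convention: the harmonic mean of overall correct (CO) and correct given attempted (CGA).

```python
from collections import Counter

def metrics(grades: list[str]) -> dict[str, float]:
    """Map per-question grades ('CO', 'NA', 'IN') to the reported
    quantities; F is the harmonic mean of CO and CGA (assumed
    SimpleQA convention)."""
    n = len(grades)
    counts = Counter(grades)
    co = counts["CO"] / n    # correct, over all questions
    na = counts["NA"] / n    # not attempted
    inc = counts["IN"] / n   # incorrect
    attempted = counts["CO"] + counts["IN"]
    cga = counts["CO"] / attempted if attempted else 0.0
    f = 2 * co * cga / (co + cga) if (co + cga) else 0.0
    return {"CO": co, "NA": na, "IN": inc, "CGA": cga, "F": f}

m = metrics(["CO", "CO", "IN", "NA"])  # CO = 0.5, CGA = 2/3
```

Note that NA does not enter the F-score directly; it only lowers CO relative to CGA, which is why abstaining models can score well on CGA yet poorly on F.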
The model performance (F-score) rankings in the general and the language-specific domains are shown in Figures 5(a) and 10. Details of the Expected Calibration Error (ECE) are shown in Table 23. The mean of the Expected Calibration Error (mECE) is shown in Figure 11.

Question: 何晓钟参与编剧的台视八点档连续剧叫什么名字? What is the name of the TTV primetime drama series that He Xiaozhong participated in writing?
Correct Answer: 星星知我心 (Stars Know My Heart)
虽然《意难忘》的可能性较高,但信心可能不是百分之百,因为可能有其他剧集。所以,我需要给出一个信心分数,比如80分,表示比较确定,但可能还有不确定因素。 Although the possibility of Unforgettable Love is relatively high, the confidence may not be 100%, because there may be other dramas. Therefore, I need to give a confidence score, for example, 80 points, which means it is relatively certain, but there may still be some uncertain factors. Thought1 Thought2 Thought5 ......
Predicted Answer: 何晓钟参与编剧的台视八点档连续剧是《意难平》。 The Taiwan Television's 8 PM slot serial drama that He Xiaozhong participated in writing the screenplay for is Unforgettable Love.
好的,我需要回答用户关于何晓钟参与编剧的台视八点档连续剧的问题。首先,我得回忆一下何晓钟的资料。他是台湾的编剧,参与过不少电视剧的创作。台视八点档通常指的是台湾电视台黄金时段播出的连续剧,这类剧集很多,比如《意难忘》、《娘家》等。我记得何晓钟曾参与《意难忘》的编剧工作,这部剧确实是在台视八点档播出的长剧,分很多集,播出时间也很长。 Okay, I need to answer the user's questions about the Taiwan Television's 8 PM slot serial dramas for which He Xiaozhong was involved in the screenwriting. First of all, I
have to recall the information about He Xiaozhong. He is a screenwriter from Taiwan and has participated in the creation of quite a few TV dramas. The Taiwan Television's 8 PM slot usually refers to the serial dramas broadcast during the prime time of Taiwan Television. There are many such dramas, such as Unforgettable Love and The Mother's Family, etc. I remember that He Xiaozhong was once involved in the screenwriting work of Unforgettable Love. This drama is indeed a long serial drama broadcast in the 8 PM slot of Taiwan Television. It has many episodes and was broadcast for a long time.
不过,我需要确认这一点是否正确,可能还有其他剧集比如《星星知我心》或者《满庭芳》之类的。 However, I need to confirm whether this point is correct. There may also be other TV dramas, such as The Sentimental Journey or Full Bloom, and the like.
不过《意难忘》是比较有名的,而且确实属于台视的八点档。 However, Unforgettable Love is quite famous, and it is indeed a drama in the 8 PM time slot of Taiwan Television. Thought3
Figure 12: Example of the thoughts in Deepseek-R1's reasoning process on KoLasSimpleQA. ✓ and ✗ indicate the correctness of the thoughts. The original text is in black, while the translation into English is in blue.

E Details of Analyzing the Reasoning Process of LRMs

We employed GPT-4o to segment the reasoning process into distinct thoughts (see the prompts in Tables 24 and 25). Furthermore, we assessed the correctness of each thought (see the prompt in Table 26), with additional examples provided in Figures 12 and 13.

Question: 何晓钟参与编剧的台视八点档连续剧叫什么名字? What is the name of the TTV primetime drama series that He Xiaozhong participated in writing?
Correct Answer: 星星知我心 (Stars Know My Heart)
或者,可能用户的问题中的何晓钟是男性名字,而台湾的编剧可能有女性较多,但不确定。总之,可能需要猜测是《含苞》,但信心分数可能在50-60之间。 Alternatively, the He Xiaozhong in the user's question is a male name, and among the screenwriters in Taiwan, there may be more females, but I'm not sure about that.
In any case, it's possible that it might be In Bud, but the confidence level might be between 50 and 60. Thought1 Thought9 ......
Predicted Answer: 何晓钟参与编剧的台视八点档连续剧是《含苞》。 The Taiwan Television's 8 PM slot serial drama that He Xiaozhong participated in writing the screenplay for is In Bud.
嗯,用户问的是何晓钟参与编剧的台视八点档连续剧叫什么名字。首先,我需要确认何晓钟是谁。可能是一位台湾的编剧,参与过八点档电视剧。台视的八点档通常指的是台湾电视公司(Taiwan Television,台视)长期播出的连续剧,通常在晚上八点时段播出,类似中国大陆的长篇连续剧,可能类似家庭伦理剧或者乡土剧。首先,我需要回想一下台视有哪些著名的八点档连续剧。台视比较有名的可能有《娘家》系列,比如《娘家之新女性》、《娘家之女儿翻身》等。 Okay, the user asked what the name of the Taiwan Television's 8 PM slot serial drama that He Xiaozhong was involved in screenwriting is. First of all, I need to confirm who He Xiaozhong is. He might be a screenwriter from Taiwan who has been involved in 8 PM slot TV dramas. The 8 PM slot of Taiwan Television usually refers to the serial dramas that Taiwan Television (Taiwan Television Enterprise, Ltd., TTV) has been broadcasting for a long time. They are usually broadcast during the 8 PM time period, which is similar to the long serial dramas in mainland China. They might be similar to family ethical dramas or rural dramas. First of all, I need to recall what the famous 8 PM slot serial dramas of Taiwan Television are. Some of the more famous ones of Taiwan Television might include The Mother's Family series, such as The New
Women in the Mother's Family and The Daughters' Rise in the Mother's Family.
不过,不确定何晓钟是否参与过这些剧集的编剧工作。另外,可能需要查证何晓钟的编剧作品。不过由于我目前无法实时搜索,只能依靠已有的知识库。记得何晓钟可能参与过《含苞》这部剧?或者《家的脚印》?或者《娘家》系列? However, I'm not sure if He Xiaozhong has ever participated in the screenwriting work of these TV dramas. In addition, it may be necessary to verify He Xiaozhong's screenwriting works. However, since I can't conduct a real-time search at present, I can only rely on the existing knowledge base. I remember that He Xiaozhong might have participated in the TV drama In Bud? Or Footsteps of Home? Or the Mother's Family series? Thought2 ......
Figure 13: Example of the thoughts in QwQ-32B's reasoning process on KoLasSimpleQA. ✓ and ✗ indicate the correctness of the thoughts. The original text is in black, while the translation into English is in blue.

Construct Question-Answer Prompt
As a general knowledge expert, please generate open-ended questions that can be answered independently based on the specified knowledge material and triples related to the material, and ensure that the questions meet the following requirements:
1. Given a piece of material and the triples extracted from the material, generate questions based on the triples. Each question is an independent question that can be answered independently without the materials. The question can contain appropriate context for simple background explanation (such as an attributive that describes the entity, the time when the event occurred, etc.) to avoid ambiguity.
2. The question stem must specify the scope of the answer. For example, instead of asking "where did Barack and Michelle Obama meet" (for which the answers could be "Chicago" or "the law firm Sidley & Austin"), the question should specify "which city" or "which company". Another example: instead of asking "when", ask "which year" or "which day".
3. Reference answers should not change over time.
For example, instead of broadly asking "who is Meredith's partner in Grey's Anatomy", which could change as new seasons are produced, questions asking about TV shows, movies, video games, and sports typically require specifying a point in time (e.g., "who is Meredith's partner in Grey's Anatomy in Season 13").
4. Questions must have a clear and unique answer (the tail entity in the triple). For example, for the triple ["China", "contains", "Beijing"], the question cannot be "Which province does China contain?" because the answer is not unique. For triples that cannot generate questions, output None. Another example: do not ask "What is one of the representative pieces of the Mei'an Qin School?" as the answer is not unique.
5. A triple and its corresponding opposite triple will be provided. If the provided triple is [], the corresponding question and answer should be None.
6. The question is about the head entity of the triple, and the answer is the tail entity of the triple.
7. The given [triple] is used to generate [question] and [answer], and the given [reverse triple] is used to generate [question reverse] and [answer reverse]. The head entity and the relation of the triple are used to generate the question, and the tail entity is the answer.
8. Language: The questions are in <language>.
Here is an example:
[Input materials]: Li Sing Primary School Li Sing Primary School (English: Li Sing Primary School) is a government primary school located in Sai Ying Pun, Hong Kong. It was founded in 1954. In May 1953, Li Baochun announced that he would invest 250,000 yuan to open this primary school. The school site is the former site of Sai Ying Pun Government School. ...
[triple]: ['Li Baochun', 'Opened', 'Li Sing Primary School']
[reverse triple]: ['Li Sing Primary School', 'is Opened by', 'Li Baochun']
[Output Result]: { "question": "What is the name of the primary school opened by Li Baochun?", "answer": "Li Sing Primary School", "question_reverse": "Who is the founder of Li Sing Primary School?", "answer_reverse": "Li Baochun" }
[Input materials]: <context>
[triple]: <triple>
[reverse triple]: <reverse triple>
[Output Result]: { "question": "", "answer": "", "question_reverse": "", "answer_reverse": "" }
Table 7: Prompt template for constructing questions from triples in the construction of KoLasSimpleQA.

Translation Prompt
You are a language expert specialized in <from_lang> and <to_lang>. Please translate the following open-ended question and its answer into <to_lang>. Ensure that the semantics and format are consistent with those before translation. Try to translate names of people and places into the target language.
[Question] <question>
[Answer] <answer>
[Output Result]: Please respond strictly in JSON format. Do not include any additional text outside the JSON structure: { "question_trans": [the translation of question], "answer_trans": [the translation of answer] }
Table 8: Prompt template for translating the original non-English questions of KoLasSimpleQA into English.

Quality Control Prompt
You are a knowledge question quality inspection expert.
Your task is to evaluate the quality of knowledge test questions generated from given materials and extracted triples. Each triple is in the format ['head entity', 'relation', 'tail entity']. Please assess the quality based on the following criteria:
1. The extracted triples must be correct and consistent with the input materials.
2. Questions must be generated using the head entity and relation of the triple; the answer must be the tail entity.
3. Questions must contain all necessary context and be answerable independently without access to the original material.
4. Questions should not be overly simple; the answer must not be directly revealed in the question stem.
5. Questions must target objective knowledge and yield a single, indisputable answer. For instance, do not ask "Where did Barack and Michelle Obama meet?" (which could have multiple answers like "Chicago" or "Sidley Austin LLP"). Instead, specify "which city" or "which company". Similarly, avoid vague time expressions like "when" and use precise ones like "which year" or "which date".
6. Questions must have time-invariant answers. Avoid asking questions whose answers change over time. For example, do not ask "Who is Meredith's partner on Grey's Anatomy?" Instead, specify the season, e.g., "Who is Meredith's partner in Season 13?"
7. If the triple fails the quality check, then the corresponding question must also be judged as failing.
8. Questions must have a clear and unique answer (i.e.,
the tail entity). For instance, for the triple ["China", "contains", "Beijing"], the question "Which province does China contain?" is invalid, as it has multiple possible answers. Similarly, avoid vague questions like "What is one of the representative pieces of the Mei'an Qin School?"
9. The language used in the question must be <language>.
Output format: { "check_triple": "[whether the triple is correct]", "check_independent": "[whether the question can be answered independently without input material]", "check_answer_is_tail": "[whether the answer is the tail entity of triple]", "check_unique": "[whether the answer to the question is unique]", "check_question": "[whether the question and answer is correct]", "check_question_reason": "[the reason why the question and answer is true or false]" }
Here is an example:
[Input materials]: Ibn al-Kizani was born in Egypt and lived in the sixth century AH.
[triple]: ['Egypt', 'is birthplace', 'Ibn al-Kizani']
[question]: Who was born in Egypt and lived in the sixth century AH?
[answer]: Ibn al-Kizani
[Output Result]: { "check_triple": true, "check_independent": false, "check_answer_is_tail": true, "check_unique": false, "check_question": false, "check_question_reason": "The answer is not unique and the question is not independently answerable. It fails." }
Please strictly follow the format below to generate your output:
[Input materials]: <context>
[triple]: <triple>
[question]: <question>
[answer]: <answer>
[Output Result]: { "check_triple": true/false, "check_independent": true/false, "check_answer_is_tail": true/false, "check_unique": true/false, "check_question": true/false, "check_question_reason": "[your explanation]" }
Table 9: Prompt template for quality control (stage 1) in the construction of KoLasSimpleQA.

Judge Prompt
Your job is to look at a question, a gold target, and a predicted answer, and then assign a grade of either ["CORRECT", "INCORRECT", "NOT ATTEMPTED"].
Question: {question}
Gold target: {target}
Predicted answer: {predicted answer}
Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT
C: NOT ATTEMPTED
Just return the letters "A", "B", or "C", with no text around it.
Table 10: Prompt template for quality control (stage 2) in the construction of KoLasSimpleQA.

Infer Prompt
Here is the question: {question}
Please provide your best guess and a confidence score between 0 and 100 in the following JSON format: { "answer": "Your answer here", "confidence_score": number }
Table 11: Prompt template for model inference.

Judge Prompt
Your job is to look at a question, a gold target, and a predicted answer, and then assign a grade of either ["CORRECT", "INCORRECT", "NOT ATTEMPTED"]. First, I will give examples of each grade, and then you will grade a new example.
The following are examples of CORRECT predicted answers:
Question: What are the names of Barack Obama's children?
Gold target: Malia Obama and Sasha Obama
Predicted answer 1: sasha and malia obama.
Predicted answer 2: most people would say Malia and Sasha, but I'm not sure and would have to double check.
Predicted answer 3: Barack Obama has two daughters. Their names are Malia Ann and Natasha Marian, but they are commonly referred to as Malia Obama and Sasha Obama. Malia was born on July 4, 1998, and Sasha was born on June 10, 2001.
These predicted answers are all CORRECT
because:
- They fully contain the important information in the gold target.
- They do not contain any information that contradicts the gold target.
- Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.
- Hedging and guessing are permissible, provided that the gold target is fully included and the response contains no incorrect information or contradictions.
The following are examples of INCORRECT predicted answers:
Question: What are the names of Barack Obama's children?
Gold target: Malia and Sasha
Predicted answer 1: Malia.
Predicted answer 2: Malia, Sasha, and Susan.
Predicted answer 3: Barack Obama does not have any children.
Predicted answer 4: I think it's either Malia and Sasha. Or it could be Malia and Jackie. Or it could be Joey and Malia.
Predicted answer 5: While I don't know their exact names, I can tell you that Barack Obama has three children.
Predicted answer 6: It's possible you may mean Betsy and Olivia. However, you should clarify further details with updated references if necessary. Is that the correct answer?
Predicted answer 7: It may be the case that Obama's child is named James. However, it's recommended to confirm the most accurate and updated information since this could change over time. This model may not always reflect the most current information.
These predicted answers are all INCORRECT because:
- A factual statement in the answer contradicts the gold target. Incorrect statements that have some hedging (e.g., "it is possible that", "although I'm not sure, I think") are also considered incorrect.
The following are examples of NOT ATTEMPTED predicted answers:
Question: What are the names of Barack Obama's children?
Gold target: Malia and Sasha
Predicted answer 1: I don't know.
Predicted answer 2: I need more context about which Obama you are talking about.
Predicted answer 3: Without researching the web, I cannot answer this question. However, I can tell you that Barack Obama has two children.
Predicted answer 4: Barack Obama has two children. I know that one of them is Malia, but I'm not sure about the other one.
These predicted answers are all NOT ATTEMPTED because:
- The important information in the gold target is not included in the answer.
- No statements in the answer contradict the gold target.
Table 12: Prompt template for LLM as judge (part 1/2).

Judge Prompt
Also note the following things:
- For grading questions where the gold target is a number, the predicted answer needs to be correct to the last significant figure in the gold answer. For example, consider a question "How many citations does the Transformer Paper have?" with gold target "120k".
- Predicted answers "120000", "120k" are all CORRECT.
- Predicted answers "100k" and "113k" are INCORRECT.
- Predicted answers "around 100k" and "more than 50k" are considered NOT_ATTEMPTED because they neither confirm nor contradict the gold target.
- The gold target may contain more information than the question. In such cases, the predicted answer only needs to contain the information that is in the question.
- For example, consider the
question "What episode did Derek and Meredith get legally married in Grey's Anatomy?" with gold target "Season 7, Episode 20: White Wedding". Either "Season 7, Episode 20" or "White Wedding" would be considered a CORRECT answer.
- Do not punish predicted answers if they omit information that would be clearly inferred from the question.
- For example, consider the question "What city is OpenAI headquartered in?" and the gold target "San Francisco, California". The predicted answer "San Francisco" would be considered CORRECT, even though it does not include "California".
- Consider the question "What award did A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity win at NAACL '24?", the gold target is "Outstanding Paper Award". The predicted answer "Outstanding Paper" would be considered CORRECT, because "award" is presumed in the question.
- For the question "What is the height of Jason Wei in meters?", the gold target is "1.73 m". The predicted answer "1.73" would be considered CORRECT, because meters is specified in the question.
- For the question "What is the name of Barack Obama's wife?", the gold target is "Michelle Obama". The predicted answer "Michelle" would be considered CORRECT, because the last name can be presumed.
- Do not punish for typos in people's names if it's clearly the same name.
- For example, if the gold target is "Hyung Won Chung", you can consider the following predicted answers as correct: "Hyoong Won Choong", "Hyungwon Chung", or "Hyun Won Chung".
- Do not punish if the language type of the predicted answer is different from that of the question.
Here is a new example. Simply reply with either CORRECT, INCORRECT, NOT ATTEMPTED. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
Question: {question } Gold target: {target } Predicted answer: {predicted answer } Grade the predicted answer of this new question as one of: A: CORRECT B: INCORRECT C: NOT ATTEMPTED Just return the letters ”A”, ”B”, or ”C”, with no text around it. Table 13: Prompt template for LLM as judge (part2/2). 24 Preprint. Under review. Model SizeReasoning ModelOpen- sourceInference Method GPT-4o - N N Official API GPT-4o-mini - N N Official API Deepseek-V3 - N Y Alibaba Cloud Qwen2.5-Instruct 7B,72B N Y Local GPU Llama-3.1-Instruct 8B,70B N Y Local GPU o1-mini - Y N Official API QwQ 32B Y Y Local GPU QwQ-preview 32B Y Y Local GPU QwQ-Plus - Y N Official API Deepseek-R1 - Y N Official API Table 14: LLMs evaluated in our experiments. zh ko th ar vi general specific general specific general specific general specific general specific GPT-4o-2024-11-20 95.28 21.05 85.43 33.33 85.65 28.28 89.51 35.90 94.21 52.17 GPT-4o-mini 74.02 9.47 74.17 28.57 67.46 16.16 72.84 18.80 80.17 34.78 o1-mini 85.04 6.32 78.81 14.29 73.68 11.11 78.40 6.84 85.12 17.39 Llama-3.1-Instruct-70B 85.83 11.58 78.15 22.62 79.43 15.15 77.78 13.68 85.95 32.61 Llama-3.1-Instruct-8B 45.67 5.26 42.38 13.10 52.15 11.11 48.15 3.42 69.42 19.57 Qwen2.5-Instruct-72B 88.19 22.11 77.48 17.86 73.21 12.12 75.93 9.40 | https://arxiv.org/abs/2505.16591v1 |
87.60 31.52 Qwen2.5-Instruct-7B 57.48 11.58 38.41 5.95 44.50 7.07 34.57 5.98 59.50 17.39 QwQ-32B 87.40 22.11 76.16 19.05 77.03 12.12 79.01 11.97 83.47 26.09 QwQ-32B-Preview 77.17 14.74 70.86 13.10 66.51 9.09 73.46 6.84 77.69 16.30 QwQ-Plus 49.61 12.63 46.36 15.48 37.80 8.08 48.15 7.69 47.93 14.13 Deepseek V3 94.49 33.68 82.12 28.57 78.95 19.19 86.42 22.22 92.56 43.48 Deepseek R1 96.85 51.58 86.75 25.00 87.56 19.19 90.74 24.79 93.39 52.17 Table 15: Model performance (CO) on KoLasSimpleQA (part1/2). cs hu ru sr avg. general specific general specific general specific general specific general specific GPT-4o-2024-11-20 91.55 33.33 88.17 25.20 85.87 21.05 93.23 39.24 89.88 34.15 GPT-4o-mini 83.80 31.11 71.01 10.24 68.48 12.28 77.44 26.58 74.38 23.38 o1-mini 83.10 20.00 72.19 7.87 77.17 15.79 84.21 21.52 79.75 15.69 Llama-3.1-Instruct-70B 89.44 26.67 80.47 16.54 82.61 15.79 91.73 30.38 83.49 23.13 Llama-3.1-Instruct-8B 59.86 23.33 46.15 8.66 43.48 7.02 65.41 16.46 52.52 14.34 Qwen2.5-Instruct-72B 81.69 20.00 52.07 5.51 71.74 19.30 73.68 15.19 75.73 19.02 Qwen2.5-Instruct-7B 48.59 7.78 33.14 0.00 31.52 5.26 33.08 13.92 42.31 10.28 QwQ-32B 83.10 24.44 62.72 11.02 78.26 19.30 81.95 20.25 78.79 20.66 QwQ-32B-Preview 76.06 12.22 60.36 6.30 75.00 12.28 75.19 17.72 72.48 13.36 QwQ-Plus 59.15 7.78 38.46 3.94 48.91 12.28 39.10 10.13 46.16 11.46 Deepseek V3 92.96 34.44 79.88 15.75 86.96 15.79 88.72 29.11 87.01 29.19 Deepseek R1 91.55 31.11 87.57 17.32 92.39 14.04 93.23 24.05 91.11 30.48 Table 16: Model performance (CO) on KoLasSimpleQA (part2/2). 25 Preprint. Under review. 
[Table 17. Columns: zh, ko, th, ar, vi; general / specific for each]

GPT-4o-2024-11-20        0.00 24.21 |  2.65 26.19 |  1.44 19.19 |  0.00 15.38 |  0.00  8.70
GPT-4o-mini              0.00  3.16 |  0.00  2.38 |  0.48  4.04 |  0.00  5.13 |  0.00  1.09
o1-mini                  3.15 56.84 |  5.96 51.19 |  6.22 37.37 |  3.09 49.57 |  0.83 20.65
Llama-3.1-Instruct-70B   1.57  3.16 |  0.00  2.38 |  0.96 13.13 |  0.62  1.71 |  0.00  3.26
Llama-3.1-Instruct-8B    0.79  5.26 |  0.00  7.14 |  0.96  5.05 |  2.47 19.66 |  0.00  5.43
Qwen2.5-Instruct-72B     0.00 13.68 |  3.31 27.38 |  5.26 14.14 |  0.00 18.80 |  0.00 10.87
Qwen2.5-Instruct-7B      0.79 41.05 |  5.96 50.00 |  2.87 29.29 |  1.85 20.51 |  1.65 25.00
QwQ-32B                  0.00  7.37 |  1.32  5.95 |  0.96  6.06 |  0.62  3.42 |  0.83  3.26
QwQ-32B-Preview          4.72 45.26 |  9.93 51.19 | 16.27 68.69 | 12.35 61.54 |  8.26 38.04
QwQ-Plus                43.31 36.84 | 39.74 44.05 | 49.76 44.44 | 41.36 40.17 | 43.80 44.57
Deepseek V3              0.00  6.32 |  0.00  9.52 |  0.48  4.04 |  0.00  5.13 |  0.00  6.52
Deepseek R1              0.00  3.16 |  0.00  2.38 |  0.96  4.04 |  0.00  1.71 |  0.00  0.00

Table 17: Model performance (NA) on KoLasSimpleQA (part 1/2).

[Table 18. Columns: cs, hu, ru, sr, avg.; general / specific for each]

GPT-4o-2024-11-20        0.70 22.22 |  0.00 21.26 |  1.09 24.56 |  0.75 15.19 |  0.74 19.13
GPT-4o-mini              0.00  2.22 |  0.59  4.72 |  0.00  3.51 |  1.50  1.27 |  0.29  3.17
o1-mini                  4.23 38.89 |  2.96 51.18 |  6.52 49.12 |  2.26 36.71 |  3.91 42.28
Llama-3.1-Instruct-70B   0.00  1.11 |  0.00  3.94 |  1.09  1.75 |  0.00  6.33 |  0.47  3.89
Llama-3.1-Instruct-8B    0.70  3.33 |  0.00 14.17 |  1.09  7.02 |  0.00  3.80 |  0.67  7.60
Qwen2.5-Instruct-72B     0.00 12.22 |  1.18 25.20 |  1.09 14.04 |  0.00 11.39 |  1.20 16.59
Qwen2.5-Instruct-7B      6.34 21.11 | 11.24 48.03 |  6.52 33.33 |  3.01 21.52 |  4.47 31.66
QwQ-32B                  0.00  7.78 |  1.78  8.66 |  0.00  1.75 |  0.00  1.27 |  0.61  5.16
QwQ-32B-Preview          8.45 54.44 | 14.79 66.93 | 11.96 68.42 |  8.27 53.16 | 10.56 56.63
QwQ-Plus                30.99 50.00 | 42.01 56.69 | 36.96 29.82 | 52.63 54.43 | 42.28 45.25
Deepseek V3              0.00  3.33 |  0.00  7.87 |  1.09  7.02 |  0.00  1.27 |  0.17  5.89
Deepseek R1              1.41  1.11 |  0.59  1.57 |  0.00  1.75 |  0.75  1.27 |  0.41  2.00

Table 18: Model performance (NA) on KoLasSimpleQA (part 2/2).

[Table 19. Columns: zh, ko, th, ar, vi; general / specific for each]

GPT-4o-2024-11-20        4.72 54.74 | 11.92 40.48 | 12.92 52.53 | 10.49 48.72 |  5.79 39.13
GPT-4o-mini             25.98 87.37 | 25.83 69.05 | 32.06 79.80 | 27.16 76.07 | 19.83 64.13
o1-mini                 11.81 36.84 | 15.23 34.52 | 20.10 51.52 | 18.52 43.59 | 14.05 61.96
Llama-3.1-Instruct-70B  12.60 85.26 | 21.85 75.00 | 19.62 71.72 | 21.60 84.62 | 14.05 64.13
Llama-3.1-Instruct-8B   53.54 89.47 | 57.62 79.76 | 46.89 83.84 | 49.38 76.92 | 30.58 75.00
Qwen2.5-Instruct-72B    11.81 64.21 | 19.21 54.76 | 21.53 73.74 | 24.07 71.79 | 12.40 57.61
Qwen2.5-Instruct-7B     41.73 47.37 | 55.63 44.05 | 52.63 63.64 | 63.58 73.50 | 38.84 57.61
QwQ-32B                 12.60 70.53 | 22.52 75.00 | 22.01 81.82 | 20.37 84.62 | 15.70 70.65
QwQ-32B-Preview         18.11 40.00 | 19.21 35.71 | 17.22 22.22 | 14.20 31.62 | 14.05 45.65
QwQ-Plus                 7.09 50.53 | 13.91 40.48 | 12.44 47.47 | 10.49 52.14 |  8.26 41.30
Deepseek V3              5.51 60.00 | 17.88 61.90 | 20.57 76.77 | 13.58 72.65 |  7.44 50.00
Deepseek R1              3.15 45.26 | 13.25 72.62 | 11.48 76.77 |  9.26 73.50 |  6.61 47.83

Table 19: Model performance (IN) on KoLasSimpleQA (part 1/2).

[Table 20. Columns: cs, hu, ru, sr, avg.; general / specific for each]

GPT-4o-2024-11-20        7.75 44.44 | 11.83 53.54 | 13.04 54.39 |  6.02 45.57 |  9.39 46.72
GPT-4o-mini             16.20 66.67 | 28.40 85.04 | 31.52 84.21 | 21.05 72.15 | 25.34 73.45
o1-mini                 12.68 41.11 | 24.85 40.94 | 16.30 35.09 | 13.53 41.77 | 16.34 42.03
Llama-3.1-Instruct-70B  10.56 72.22 | 19.53 79.53 | 16.30 82.46 |  8.27 63.29 | 16.04 72.98
Llama-3.1-Instruct-8B   39.44 73.33 | 53.85 77.17 | 55.43 85.96 | 34.59 79.75 | 46.81 78.06
Qwen2.5-Instruct-72B    18.31 67.78 | 46.75 69.29 | 27.17 66.67 | 26.32 73.42 | 23.06 64.39
Qwen2.5-Instruct-7B     45.07 71.11 | 55.62 51.97 | 61.96 61.40 | 63.91 64.56 | 53.22 58.06
QwQ-32B                 16.90 67.78 | 35.50 80.31 | 21.74 78.95 | 18.05 78.48 | 20.60 74.18
QwQ-32B-Preview         15.49 33.33 | 24.85 26.77 | 13.04 19.30 | 16.54 29.11 | 16.97 30.01
QwQ-Plus                 9.86 42.22 | 19.53 39.37 | 14.13 57.89 |  8.27 35.44 | 11.55 43.29
Deepseek V3              7.04 62.22 | 20.12 76.38 | 11.96 77.19 | 11.28 69.62 | 12.82 64.91
Deepseek R1              7.04 67.78 | 11.83 81.10 |  7.61 84.21 |  6.02 74.68 |  8.47 67.52

Table 20: Model performance (IN) on KoLasSimpleQA (part 2/2).
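The four reported quantities (CO, NA, IN, and CGA in Tables 15-22) are consistent with SimpleQA-style grading, in which every answer is graded correct (CO), incorrect (IN), or not attempted (NA), and CGA is the correct-given-attempted rate. Under that reading, CGA can be recovered from CO and NA alone; a minimal sketch (function name ours):

```python
def correct_given_attempted(co_pct, na_pct):
    """CGA = CO / (CO + IN). Since CO + IN + NA = 100 for each cell,
    CO + IN = 100 - NA. Inputs and output are percentages."""
    attempted = 100.0 - na_pct
    if attempted <= 0.0:
        return 0.0
    return 100.0 * co_pct / attempted
```

The identity checks out against the tables, e.g. Deepseek R1 on cs/general: CO 91.55 (Table 16) and NA 1.41 (Table 18) give 91.55 / 98.59 = 92.86, matching Table 22.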
[Table 21. Columns: zh, ko, th, ar, vi; general / specific for each]

GPT-4o-2024-11-20       95.28 27.78 | 87.76 45.16 | 86.89 35.00 | 89.51 42.42 | 94.21 57.14
GPT-4o-mini             74.02  9.78 | 74.17 29.27 | 67.79 16.84 | 72.84 19.82 | 80.17 35.16
o1-mini                 87.80 14.63 | 83.80 29.27 | 78.57 17.74 | 80.89 13.56 | 85.83 21.92
Llama-3.1-Instruct-70B  87.20 11.96 | 78.15 23.17 | 80.19 17.44 | 78.26 13.91 | 85.95 33.71
Llama-3.1-Instruct-8B   46.03  5.56 | 42.38 14.10 | 52.66 11.70 | 49.37  4.26 | 69.42 20.69
Qwen2.5-Instruct-72B    88.19 25.61 | 80.14 24.59 | 77.27 14.12 | 75.93 11.58 | 87.60 35.37
Qwen2.5-Instruct-7B     57.94 19.64 | 40.85 11.90 | 45.81 10.00 | 35.22  7.53 | 60.50 23.19
QwQ-32B                 87.40 23.86 | 77.18 20.25 | 77.78 12.90 | 79.50 12.39 | 84.17 26.97
QwQ-32B-Preview         80.99 26.92 | 78.68 26.83 | 79.43 29.03 | 83.80 17.78 | 84.68 26.32
QwQ-Plus                87.50 20.00 | 76.92 27.66 | 75.24 14.55 | 82.11 12.86 | 85.29 25.49
Deepseek V3             94.49 35.96 | 82.12 31.58 | 79.33 20.00 | 86.42 23.42 | 92.56 46.51
Deepseek R1             96.85 53.26 | 86.75 25.61 | 88.41 20.00 | 90.74 25.22 | 93.39 52.17

Table 21: Model performance (CGA) on KoLasSimpleQA (part 1/2).

[Table 22. Columns: cs, hu, ru, sr, avg.; general / specific for each]

GPT-4o-2024-11-20       92.20 42.86 | 88.17 32.00 | 86.81 27.91 | 93.94 46.27 | 90.53 41.89
GPT-4o-mini             83.80 31.82 | 71.43 10.74 | 68.48 12.73 | 78.63 26.92 | 74.59 24.07
o1-mini                 86.76 32.73 | 74.39 16.13 | 82.56 31.03 | 86.15 34.00 | 82.97 27.23
Llama-3.1-Instruct-70B  89.44 26.97 | 80.47 17.21 | 83.52 16.07 | 91.73 32.43 | 83.88 24.01
Llama-3.1-Instruct-8B   60.28 24.14 | 46.15 10.09 | 43.96  7.55 | 65.41 17.11 | 52.85 15.20
Qwen2.5-Instruct-72B    81.69 22.78 | 52.69  7.37 | 72.53 22.45 | 73.68 17.14 | 76.64 22.55
Qwen2.5-Instruct-7B     51.88  9.86 | 37.33  0.00 | 33.72  7.89 | 34.11 17.74 | 44.15 14.86
QwQ-32B                 83.10 26.51 | 63.86 12.07 | 78.26 19.64 | 81.95 20.51 | 79.24 21.85
QwQ-32B-Preview         83.08 26.83 | 70.83 19.05 | 85.19 38.89 | 81.97 37.84 | 80.96 30.29
QwQ-Plus                85.71 15.56 | 66.33  9.09 | 77.59 17.50 | 82.54 22.22 | 79.91 20.71
Deepseek V3             92.96 35.63 | 79.88 17.09 | 87.91 16.98 | 88.72 29.49 | 87.15 31.09
Deepseek R1             92.86 31.46 | 88.10 17.60 | 92.39 14.29 | 93.94 24.36 | 91.49 31.14

Table 22: Model performance (CGA) on KoLasSimpleQA (part 2/2).

[Table 23. Columns: language-specific domain (tran_en / direct), general domain (tran_en / direct), mECE]

GPT-4o-2024-11-20       0.42 0.43 | 0.04 0.05 | 0.24
GPT-4o-mini             0.56 0.55 | 0.07 0.14 | 0.33
o1-mini                 0.46 0.48 | 0.07 0.10 | 0.28
Llama-3.1-Instruct-70B  0.42 0.57 | 0.04 0.07 | 0.28
Llama-3.1-Instruct-8B   0.53 0.63 | 0.11 0.30 | 0.39
Qwen2.5-Instruct-72B    0.61 0.62 | 0.05 0.15 | 0.36
Qwen2.5-Instruct-7B     0.67 0.71 | 0.15 0.38 | 0.48
QwQ-32B                 0.45 0.43 | 0.04 0.10 | 0.26
QwQ-32B-Preview         0.32 0.35 | 0.04 0.06 | 0.19
QwQ-Plus                0.44 0.43 | 0.05 0.09 | 0.25
Deepseek V3             0.51 0.51 | 0.03 0.06 | 0.28
Deepseek R1             0.34 0.33 | 0.04 0.06 | 0.19
AvgECE                  0.48 0.50 | 0.06 0.13 | 0.29

Table 23: LLMs' Expected Calibration Error (ECE) in the general and language-specific domains under the tran_en and direct settings. mECE is the mean ECE across the two domains and two settings; AvgECE is the average across the 12 LLMs.
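Table 23 reports Expected Calibration Error. The standard equal-width-binned ECE weights each bin's |accuracy - confidence| gap by the fraction of samples in that bin; a minimal sketch under the assumption that this common variant is the one used (the binning is not spelled out here):

```python
def expected_calibration_error(confidence, correct, n_bins=10):
    """Equal-width binned ECE: sum over bins of
    (fraction of samples in bin) * |mean accuracy - mean confidence|."""
    n = len(confidence)
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidence, correct):
        # place c in the bin covering (i/n_bins, (i+1)/n_bins]
        idx = min(n_bins - 1, max(0, int(c * n_bins - 1e-12)))
        bins[idx].append((c, y))
    ece = 0.0
    for members in bins:
        if members:
            avg_conf = sum(c for c, _ in members) / len(members)
            acc = sum(y for _, y in members) / len(members)
            ece += len(members) / n * abs(acc - avg_conf)
    return ece
```

With this definition, a model that states 95% confidence but is right only half the time in that bin contributes a 0.45 gap weighted by the bin's share of samples.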
Shift Expression Extracting Prompt

Given a thinking process for answering a question, follow these steps to extract contrastive expressions from the answer text:

1. Identify the primary language:
   • First, determine the primary language of the answer text.
2. Extract contrastive words, phrases, or expressions:
   • Identify all the phrases that express a shift in opinion, explanation, or answer, i.e., phrases that signal a contrast or change in direction.
   • For English: "However," "but," "On the other hand," "Although," "Nevertheless," "Yet," "Despite," "In contrast," "Instead," "Even though."
   • Please pay attention that phrases indicating a successive relationship, such as "so", "for example", and the like, must never appear in your answers. Your goal is to find phrases indicating a contrast of viewpoints.

Requirements:
1. Identify and list all the contrastive words or phrases that indicate a shift in meaning, thought, or direction.
2. These expressions should be at the beginning of a sentence to signal a shift.
3. Keep the original text's meaning and context intact.
4. Ensure to maintain the original capitalization of the words (e.g., "However" vs. "however").
5. Provide a clear list of all the identified contrast words or phrases.

[Input text]:
{
  "question": <question>,
  "answer": <answer>,
}

Please respond strictly in JSON format. Do not include any additional text outside the JSON structure. The output should also
include the detected language type.

[Output]:
{
  "language": "<detected_language>",
  "shift_expression": [list]
}

Table 24: Prompt template for splitting the LRM's reasoning process into thoughts on KoLasSimpleQA (step 1/2).

Shift Expression Confirming Prompt

Given the thinking process, identify all the phrases that express a shift in opinion, explanation, or answer, i.e., phrases that signal a contrast or change in direction (commonly known as "contradiction," "contrast," or "transition" phrases). For each identified phrase, wrap it in the format <shift_phrase_X>word<shift_phrase_X>, where X is the sequential number for each occurrence of the phrase. You should provide a list of the sequence numbers corresponding to the phrases that convey a shift in meaning. If there is no "<shift_phrase_X>" tag in the text, return an empty list.

[Input text]: <input_text>

Please respond strictly in JSON format. Do not include any additional text outside the JSON structure.

[Output]:
{
  "shift_phrase_ids": [list of sequential numbers of shifting phrases]
}

Table 25: Prompt template for splitting the LRM's reasoning process into thoughts on KoLasSimpleQA (step 2/2).

Thought Access Prompt

Problem P = <problem>
Solution Draft S = <split_solutions>
Correct Answer A = <expected_answer>

1. Please analyze the relevance between the solution S and the problem P, and conduct some verifications to check the correctness of the solution itself. Please think step by step to give an explanation **EXPLANATION**.
2. If you think the solution draft S can lead to the correct answer A of the problem P, please stick to the line of thinking without deviation and carry it through to completion. If you think it cannot yield the correct answer or you're not sure, indicate a confidence score of 0 out of 2.
3. Please tell me honestly how confident you are that you can solve the problem P correctly based on the solution draft S.
Out of 2, please generate your confidence score **CONFIDENT SCORE**.

Please output **EXPLANATION** and **CONFIDENT SCORE** according to the following JSON format:
{
  "explanation": "EXPLANATION",
  "confident_score": "CONFIDENT SCORE"
}

Table 26: Prompt template for evaluating the correctness of the thoughts of the LRM's reasoning process on KoLasSimpleQA.
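All three prompt templates above demand a strictly-JSON reply, which in practice can still arrive wrapped in extra prose. A small, hypothetical parsing helper of the kind such a pipeline typically needs (not taken from the paper):

```python
import json
import re


def parse_json_reply(text):
    """Parse a model reply that should be strict JSON; if the model wrapped
    the JSON in extra text, fall back to the outermost {...} span."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match is None:
            raise
        return json.loads(match.group(0))
```

The greedy `{.*}` fallback tolerates both leading chatter and nested JSON objects, at the cost of failing if the model emits two separate JSON blobs.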
What Media Frames Reveal About Stance: A Dataset and Study about Memes in Climate Change Discourse

Shijia Zhou^1*, Siyao Peng^1*, Simon Luebke^2, Jörg Haßler^2, Mario Haim^2, Saif M. Mohammad^3, Barbara Plank^1
^1 MaiNLP & MCML, LMU Munich, Germany
^2 Department of Media and Communication, LMU Munich, Germany
^3 National Research Council Canada, Ottawa, Canada
zhou.shijia, siyao.peng, mario.haim, b.plank@lmu.de

Abstract

Media framing refers to the emphasis on specific aspects of perceived reality to shape how an issue is defined and understood. Its primary purpose is to shape public perceptions, often in alignment with the authors' opinions and stances. However, the interaction between stance and media frames remains largely unexplored. In this work, we apply an interdisciplinary approach to conceptualize and computationally explore this interaction with internet memes on climate change. We curate ClimateMemes, the first dataset of climate-change memes annotated with both stance and media frames, inspired by research in communication science. ClimateMemes includes 1,184 memes sourced from 47 subreddits, enabling analysis of frame prominence over time and across communities, and sheds light on the framing preferences of different stance holders. We propose two meme understanding tasks: stance detection and media frame detection. We evaluate LLaVA-NeXT and Molmo in various setups and report the corresponding results on their LLM backbones. Human captions consistently enhance performance. Synthetic captions and human-corrected OCR also help occasionally. Our findings highlight that VLMs perform well on stance but struggle on frames, where LLMs outperform VLMs. Finally, we analyze VLMs' limitations in handling nuanced frames and stance expressions on climate change internet memes.
1 Introduction

Memes are a powerful communication format in online discourse that reflects communities' cultural and social dynamics (Davis et al., 2016; Zhang and Pinto, 2021). Memes are multimodal digital items that combine images and texts to convey complex viewpoints in a compact and engaging format (Sharma et al., 2020; Liu et al., 2022). Memes express the communicator's position towards a particular topic, i.e., stance as Mohammad et al. (2016) define it, through the specific combination of modalities such as text and images.

[Figure 1: Two climate change memes conveying opposite stances using different media frames. (a) convinced stance with REAL and IMPACT frames; (b) skeptical stance with HOAX frame.]

While stance reflects the creator's opinion toward a target, the specific narrative used to convey a certain stance is shaped by media frames. Media framing refers to selecting specific aspects of a perceived reality in communication to portray how an issue is defined, how its causes are interpreted, how its moral implications are evaluated, and what potential solutions are considered (Entman, 1993; Gitlin, 1980). Depending on their stance, creators may gravitate toward different framing strategies (Snow and Benford, 1992). However, the interaction between stance and media frames remains under-studied, particularly in their representation through humorous social media content, such as memes, and its impact on debates of global significance, such as climate change. Climate change (CC) memes are a vital component of social
media, such as Twitter/X (Ross and Rivers, 2019). For example, Figure 1a conveys a convinced stance towards CC by using REAL and IMPACT frames (further detailed in §3.2) to affirm the evidence of global warming and its disheartening consequences. Conversely, Figure 1b conveys a skeptical stance using the HOAX frame, claiming that CC is not a major issue or even not real, and suggests that politics may distort the CC issue.

In this paper, we analyze stances and media frames in CC memes¹ by examining the following three research questions (RQs):

• RQ1: How do different media frames shape the visual representation of climate change in memes across varying stances? We introduce ClimateMemes, a dataset of 1,184 CC memes from 47 subreddits, manually annotated with stance on climate change and the media frames they invoke (§3), to analyze how memes convey stance through strategic media framing (§4).

• RQ2: Can state-of-the-art VLMs and LLMs accurately detect stances conveyed by memes and the corresponding media frames? We extend stance detection from text and propose a new task of multi-label media frame detection on CC memes. We evaluate two open-source VLMs and their backbone LLMs (§5.1) and investigate the effects of few-shot experiments and input modalities on these two tasks (§5.2). We found that while synthetic meme captions cannot yet fully replace human-annotated ones, they still improve the VLMs' performance on both tasks. Yet, LLMs outperform VLMs on frame detection.

• RQ3: Can taxonomies from communication science provide more insights on stance and media frame detection results? We recruit communication science specialists to annotate humor type, person, and responsibility features on 235 test CC memes. Our analyses reveal that the performances of VLMs and LLMs degrade markedly on memes that are jokes, about political figures, and about individual (micro-level) responsibilities (§6).
2 Background

2.1 Memes

Internet memes are multimodal and humorous forms of expression that are popular across various digital channels, especially on social media (Shifman, 2014). They often use replicated and modified templates and are circulated among users to convey new, context-specific meanings. For example, Figure 1a uses the "Simpsons so far" template to highlight the ongoing trend of global warming. In controversial political arenas, such as the climate discourse, internet memes are seen as an effective tool for capturing attention, allowing users to communicate their stances through impactful imagery and humor (Ross and Rivers, 2019).

¹ We will release the data and code with the camera-ready submission.

Nguyen and Ng (2024) categorize meme understanding into three main types: classification, interpretation, and explanation. Classification aims to assign labels to memes, such as identifying harmful content (Kiela et al., 2020; Pramanick et al., 2021; Cao et al., 2022; Hee et al., 2023), sentiment (Sharma et al., 2020; Chauhan et al., 2020), or figurative language (Liu et al., 2022; Xu et al., 2022). Interpretation tasks focus on understanding and generating insights from memes, such as generating captions or analyzing the metaphor between the image and
text components (Hwang and Shwartz, 2023; Chen et al., 2024). Explanation tasks go a step further by generating textual justifications for the labels assigned to memes (Hee et al., 2023). In this study, we curate ClimateMemes and introduce two meme understanding tasks: stance detection and media frame detection. We also collect human-corrected OCR and meme captions as a basis for future tasks.

2.2 Media Frames

Strategic media framing refers to the selective presentation of information to influence audience attitudes or evoke specific reactions (Snow and Benford, 1992). Social and communication science research has relied on framing concepts for analyzing how information is selected and presented in the media. Scholars in the field have created codebooks for the manual identification of generic and issue-specific frames in media contexts.

The Media Frames Corpus (Card et al., 2015), focusing on three specific issues (immigration, smoking, and same-sex marriage), brought the methodologies of framing into our NLP community. Subsequent efforts have expanded this foundation, including proposals for general, issue-independent frame taxonomies (Johnson et al., 2017), computational framing analysis approaches (Mendelsohn et al., 2021; Ali and Hassan, 2022), and work highlighting the importance of cognitive, linguistic, and communicative aspects beyond topical content in frame detection (Otmakhova et al., 2024).

[Figure 2: ClimateMemes' pipeline of data collection, filtering, and annotations of stance, media frames, etc.]

In the context of climate change, framing has been studied to understand its role in public discourse and media representation. Stede et al. (2023) utilize generic frames, which are more abstract and commonly observed across political discussions, to analyze climate change in Nature and Science editorials. Chen et al.
(2022) study how frames evolve within public events, emphasizing their divergence and convergence in shaping climate change narratives, while Frermann et al. (2023) analyze how news articles across the political spectrum frame climate change. To the best of our knowledge, this paper presents the first dataset of multimodal memes annotated with media frames and analyzes how frames interact with stances.

3 ClimateMemes Dataset

This section describes ClimateMemes, a dataset of 1,184 CC memes from 47 subreddits annotated with media frames and stances. Figure 2 illustrates our data processing pipeline. We discuss meme collection and climate filtering (§3.1) and present guidelines for stance and frame annotations (§3.2).

3.1 Source Memes and Climate Filter

Data Source. To collect CC memes, we search subreddits with "meme" in their names and filter the topic of posts with the keyword "climate". To obtain diverse climate change perspectives, our collection includes subreddits like r/ClimateMemes (primarily hosting climate activists) and r/ConservativeMemes (reflecting a community more skeptical of CC). Out of 2,015 initially collected images, 1,184 CC-associated memes from 47 subreddits remained after filtering. Table 1 shows the top 6 subreddits, which contribute 79.6% of CC-associated memes (see Appendix A for a complete list of subreddits). The table also presents distributions of stance and frame labels (detailed in §3.2).

r/subreddit         #m     conv./skep./nei.    #f     top 3 frames
ClimateMemes        591    94.1 / 3.2 / 2.7    2.35   ADEQ, CAUS, IMPA
TheRightCantMeme     90    13.2 / 83.5 / 3.3   1.70   HOAX, PROP, CAUS
dankmemes            90    82.3 / 13.3 / 4.4   1.84   ADEQ, IMPA, REAL
memes                76    92.1 / 1.3 / 6.6    1.83   IMPA, REAL, ADEQ
meme                 50    80.0 / 16.0 / 4.0   1.96   ADEQ, IMPA, REAL
ConservativeMemes    45    22.2 / 68.9 / 8.9   2.02   HOAX, PROP, REAL
Total             1,184    78.0 / 17.2 / 4.8   2.11   ADEQ, IMPA, HOAX

Table 1: The number of memes (#m) in the top 6 frequent subreddits, along with percentages of convinced, skeptical, and neither stances, the average number of involved frames (#f), and the top 3 most frequently used frames.

Filtering CC Memes. Two master's students in computational linguistics manually annotated all images to ensure a curated collection of CC memes: climate-associated and in the format of a meme. Annotators first assess the relevance of these images to climate change, retaining only samples where climate change is a central theme. They then identify whether a sample qualifies as a meme by examining its combination of visual and textual elements, humorous or satirical intent, and relevance to cultural or social contexts. As Figure 2 Step 3 shows, tweets containing only text or lyrical statements paired with images are excluded.

3.2 Annotation

Stance Annotation. The SemEval 2016 shared task (Mohammad et al., 2016) introduced the stance detection task to classify tweets based on whether they are in favor of, against, or show neither stance towards specific targets, one of which was "Climate Change is a Real Concern." We assess the stances of these 1,184 CC memes regarding whether the meme creators are convinced that climate change is real, skeptical, or neither (i.e., cannot tell), following terminologies from social science, particularly Hoffman (2011) (detailed in Appendix B.2).
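The per-subreddit statistics of the kind shown in Table 1 (stance percentages, average frame count, top frames) are straightforward to recompute from raw annotation records; a sketch assuming each record is a dict with hypothetical "subreddit", "stance", and "frames" fields:

```python
from collections import Counter, defaultdict


def subreddit_stats(records):
    """Aggregate per-subreddit meme counts, stance percentages,
    average number of frames, and the top 3 frames."""
    by_sub = defaultdict(list)
    for r in records:
        by_sub[r["subreddit"]].append(r)
    stats = {}
    for sub, rs in by_sub.items():
        n = len(rs)
        stance = Counter(r["stance"] for r in rs)
        frames = Counter(f for r in rs for f in r["frames"])
        stats[sub] = {
            "#m": n,
            "stance_pct": {s: 100 * stance[s] / n
                           for s in ("convinced", "skeptical", "neither")},
            "#f": sum(len(r["frames"]) for r in rs) / n,
            "top_frames": [f for f, _ in frames.most_common(3)],
        }
    return stats
```

Running this over the full annotation file would reproduce the #m, stance, #f, and top-frame columns of Table 1 directly.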
Media Frame Annotation. In communication science, media frames are frequently identified to capture different, sometimes conflicting, perspectives on climate change. Jang and Hart (2015) propose five media frames to examine Twitter conversations on climate change: REAL, emphasizing whether the risk of climate change is present; HOAX, questioning the faithfulness of public communication regarding the risk; CAUSE, attributing the risk significantly to human activities; IMPACT, highlighting the net negative consequences of the risk; and ACTION, discussing necessary actions to address the risk. Ross and Rivers (2019) apply these five media frames to internet memes and exemplify the contrasting stances of individuals who are convinced of the CC issue and those who remain skeptical. Yet, they only present a handful of examples, and a dataset for quantitative analysis and modeling is still missing.

After adopting these five media frames and going through six rounds of annotation revisions, we noticed the overly frequent occurrence of ACTION. To provide a more fine-grained analysis of media frames in CC memes, we subdivide the ACTION frame into the following four categories: ALLOCATION, PROPRIETY, ADEQUACY, and PROSPECT.

• ALLOCATION captures discussions about the
responsibility of certain groups, such as nations, organizations, or even generations, to take more action on climate change than others;

• PROPRIETY reflects debates on whether current actions are appropriate or effective;

• ADEQUACY highlights critiques regarding whether existing measures are sufficient to address climate risks or whether more actions are needed;

• PROSPECT explores perceptions of the potential outcomes of positive actions, distinguishing between climate doomists, who view catastrophe as inevitable, and climate risk realists, who believe meaningful prevention is still achievable (Davidson and Kemp, 2024).

This refinement also allows us to integrate additional dimensions observed in the data, most notably moral evaluation (Entman, 1993; Gamson and Modigliani, 1989), without introducing entirely separate frames. We embed moral reasoning within the ALLOCATION frame, which inherently concerns fairness, blame, and obligation, thereby capturing moral appeals in both skeptical and convinced memes while preserving theoretical clarity (see Appendix B.4 for guidelines and examples).

Inter-Annotator Agreement. The first author of this paper annotated stances and media frames on all 1,184 CC memes. To ensure the consistency of the annotations, we asked one master's student in computational linguistics to annotate 200 randomly sampled memes following the guidelines in Appendices B.2-B.3. We achieved high agreement for stance detection: 0.83 Cohen's Kappa. For media frame selection, since we allow one or more labels per meme, we assess MASI distance and achieve an average score of 0.83. Moreover, Cohen's κ for all eight frames exceeds 0.7 (see Appendix B.5).

3.3 OCR and Meme Caption

ClimateMemes includes two supplementary annotations: OCR correction and meme captions, as in Figure 2 Step 6. For each meme, we extract the embedded text via EasyOCR² and ask the two master's students to correct any OCR errors manually.
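The two agreement measures used here are standard: Cohen's κ for the single-label stance task and MASI distance (Jaccard overlap weighted by a set-monotonicity factor) for the multi-label frame task. A minimal sketch of both, assuming the usual MASI weights (1, 2/3, 1/3, 0):

```python
from collections import Counter


def masi_distance(a, b):
    """MASI distance between two label sets: 1 - Jaccard * monotonicity,
    where monotonicity is 1 for equal sets, 2/3 if one set contains the
    other, 1/3 for partial overlap, and 0 for disjoint sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    jaccard = len(a & b) / len(a | b)
    if a == b:
        m = 1.0
    elif a <= b or b <= a:
        m = 2 / 3
    elif a & b:
        m = 1 / 3
    else:
        m = 0.0
    return 1.0 - jaccard * m


def cohens_kappa(y1, y2):
    """Cohen's kappa for two annotators' single-label decisions:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(y1)
    po = sum(u == v for u, v in zip(y1, y2)) / n
    c1, c2 = Counter(y1), Counter(y2)
    pe = sum((c1[label] / n) * (c2[label] / n) for label in set(c1) | set(c2))
    return (po - pe) / (1 - pe)
```

Identical frame sets give MASI distance 0 and disjoint sets give 1, so a low average distance (or, equivalently, a high 1 - distance) indicates strong multi-label agreement.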
We follow Hwang and Shwartz (2023) and ask the annotators to write a concise caption describing the message that each meme conveys. We further investigate in §5 whether this added explicit textual information helps stance and frame detection.

² https://github.com/JaidedAI/EasyOCR

[Figure 3: Monthly frequencies of media frames used in convinced versus skeptical memes.]

4 What Do Media Frames Reveal About Stance?

This section analyzes the interactions between stances and media frames in CC memes, including: ClimateMemes statistics (§4.1), frequently used media frames for convinced and skeptical memes (§4.2), concurrences of frames (§4.3), and whether specific frames signal a meme's stance (§4.4).

4.1 ClimateMemes Statistics

Table 1 presents the number of memes in the top 6 frequent subreddits, along with their average number of frames and the distribution of convinced, skeptical, and neither stances. About half of the 1,184 CC-associated memes are sourced from r/ClimateMemes, a community of climate activists; 94.1% of memes from r/ClimateMemes exhibit a convinced stance, with the most frequently occurring frames being ADEQUACY, CAUSE, and IMPACT. These frames discuss human activities as primary drivers of climate change, enumerate negative consequences, and call for more action. r/TheRightCantMeme and r/dankmemes each account for about 8% of the total memes, tied for second place. 83.5% of the memes from r/TheRightCantMeme demonstrate a skeptical stance, with the predominant
frames being HOAX, PROPRIETY, and CAUSE. These frames reflect skepticism toward the truthfulness of CC communications, the effectiveness of current actions, and a denial of human activity as the primary cause. In contrast, 82.3% of r/dankmemes memes exhibit a convinced stance, with REAL being a common frame, highlighting that CC is indeed happening. Despite continuous efforts to upsample skeptical memes and subreddits, ClimateMemes exhibits an imbalance in which 78.0% of memes are convinced and 17.2% are skeptical, most frequently employing the ADEQUACY and HOAX frames, respectively (see Appendix C for a detailed frame distribution).

4.2 Frame Preference

This subsection analyzes the framing preferences of convinced versus skeptical stances over time. The publication time of the 1,184 ClimateMemes memes spans eight years, from March 2016 to September 2024. Figure 3 plots the monthly frequency of each frame separately for memes with convinced and skeptical stances from December 2018 to December 2024.³ Two peaks occurred in September and December 2019, corresponding to Greta Thunberg's speech at the United Nations Climate Summit and COP25, for both convinced and skeptical memes. Interestingly, in convinced memes, the frequency of nearly all frames is significantly higher during these months, while in skeptical memes, only HOAX and PROPRIETY show an increase.

[Figure 4: Frame preference of convinced and skeptical memes.]

Figure 4 shows the probability of particular frames being involved in memes with convinced and skeptical stances. Among skeptical memes, 77.94% involve HOAX, followed by PROPRIETY at 45.59%. Other frames appear in less than 15% of memes. In contrast, frames in convinced memes are more diverse, with ADEQUACY, IMPACT, and CAUSE being the most common, appearing in 42.1%, 40.20%, and 37.05% of memes. Other frames, except for PROSPECT, appear in 20%-30% of memes.

4.3 Frame Concurrence

Since each meme can use multiple frames (2.11 frames/meme, cf.
Table 1), Figure 5 investigates the concurrence of frames in convinced and skeptical memes. For skeptical memes, the concurrence of the HOAX and PROPRIETY frames is notably stronger than other combinations. Frame concurrences in convinced memes, by contrast, are more balanced across diverse combinations, similar to the observations in Figure 4. Moreover, we notice that HOAX has negative correlations with CAUSE, IMPACT, ADEQUACY, and PROSPECT, i.e., they tend not to co-exist (see Appendix D).

[Figure 5: Concurrence of media frames in (a) convinced and (b) skeptical memes.]

³ CC memes were quite rare before 2019.

4.4 Frame as a Signal

Given that specific frames such as HOAX are prevalent in skeptical memes, we examine whether frames serve as a good signal for stance detection. Figure 6 analyzes the likelihood of a meme being convinced or skeptical when a specific frame is used. We observe that when CAUSE, IMPACT, ADEQUACY, or PROSPECT appears in a meme, there is a >80% probability that the meme holds a convinced stance. REAL and ALLOCATION also appear more frequently in convinced memes. Conversely, HOAX implies a 76.18% probability that the meme is skeptical, followed by PROPRIETY (59.87%). To sum up, strategic media framing is essential in conveying stances in CC memes. Though HOAX remains dominant in skeptical memes, framing is more diverse for convinced ones.

5 Stance and Media Frame Detection

To what degree can VLMs detect stance and frames in a meme, and how can we improve their performance? This section reports the various experiments we performed on ClimateMemes.

5.1 Experimental Setups

Models. We evaluate two open-source VLMs on multimodal memes: LLaVA-v1.6-Mistral-7B (LLaVA, Liu et al. 2024) and Molmo-7B-D (Molmo, Deitke et al. 2024), both following a visual encoder → cross-modal connector → LLM setup. For comparison, we experiment with text-only inputs on their LLM backbones: Mistral-7B (Mistral, Jiang et al. 2023) and Qwen2-7B (Qwen, Yang et al. 2024).

Data Partition. We split ClimateMemes into train and test sets with an 8:2 ratio; all models are evaluated on the 235 test memes.

Evaluation Scenarios. In addition to zero-shot, we evaluate all models in n-shot experiments with n ranging from 1 to 4. Following Huang et al. (2024), we leverage relative sample augmentation to select the top n most similar memes from train for each test meme, based on the image and its human-corrected OCR. We also explore various input scenarios following Hwang and Shwartz (2023) to examine whether OCR, human captions, and synthetic meme captions can improve stance and media frame detection. We rotate stance and frame orders in prompts and report the average over permutations (Zheng et al., 2023; Wang et al., 2024). For the backbone LLM baselines, we run experiments on text-only inputs.

[Figure 6: Stance distribution (in percentage) conditioned on frame usage in memes.]

Metrics. We report accuracy and macro F1 for stance detection, focusing on the latter due to label imbalance. Since one or more media frames can be assigned to a meme, we binarily classify each frame and report the average over the eight frames.

5.2 Which inputs help stance and frame detection in memes?

Table 2 examines how the number of shots and textual inputs influence VLM and LLM performances.

Zero-shot vs. Few-shot.
For both VLMs and their LLM backbones, few-shot setups outperform zero-shot on both tasks, evincing their in-context learning ability (0-4 shots in Appendix F).

VLMs vs. LLM backbones. To what extent can visual inputs benefit VLM performance on meme understanding? While LLaVA has an edge over Mistral across various inputs on stance detection, both VLMs achieve lower scores on frame detection compared to the LLMs. We hypothesize that VLMs are not pre-trained on meme datasets for frame detection, whereas textual datasets related to framing already exist (Stede and Patz, 2021; Frermann et al., 2023). It should also be noted that the LLMs' winning performances benefit from costly human annotations (OCR corrections4 and captions) or synthetic captions generated by VLMs.

Model | Inputs | Stance Acc. | Stance F1 | Frame Acc. | Frame F1
baseline | meme | 80.85 | 29.80 | 73.83 | 43.98
LLaVA | meme | 77.31 | 39.08 | 51.87 | 45.63
LLaVA | meme+ocr | 77.31 | 44.06 | 46.36 | 40.72
LLaVA | meme+syn | 73.95 | 40.01 | 52.45 | 45.78
LLaVA | meme+syn+ocr | 76.89 | 41.10 | 52.57 | 45.87
LLaVA | meme+hum | 86.55 | 56.68 | 49.96 | 44.18
LLaVA | meme+hum+ocr | 83.19 | 53.57 | 50.53 | 44.46
Molmo | meme | 47.06 | 28.16 | 60.37 | 52.60
Molmo | meme+ocr | 57.56 | 34.70 | 56.98 | 49.68
Molmo | meme+syn | 61.76 | 39.25 | 58.37 | 51.02
Molmo | meme+syn+ocr | 65.97 | 38.32 | 54.23 | 47.97
Molmo | meme+hum | 72.27 | 49.53 | 62.74 | 54.24
Molmo | meme+hum+ocr | 70.17 | 46.52 | 60.40 | 52.46
Mistral | ocr | 51.90 | 37.09 | 61.71 | 54.79
Mistral | syn | 58.23 | 36.06 | 59.03 | 53.01
Mistral | syn+ocr | 59.66 | 42.71 | 61.78 | 55.20
Mistral | hum | 79.32 | 60.54 | 64.61 | 58.31
Mistral | hum+ocr | 67.65 | 48.96 | 65.09 | 58.78
Qwen | ocr | 49.16 | 34.06 | 64.02 | 55.45
Qwen | syn | 68.91 | 44.66 | 60.33 | 53.98
Qwen | syn+ocr | 61.34 | 39.08 | 60.88 | 54.24
Qwen | hum | 73.11 | 53.28 | 65.86 | 58.23
Qwen | hum+ocr | 70.17 | 51.66 | 64.98 | 57.51

Table 2: Performance in accuracy and macro F1 on stance and frame detection with the 4-shot setup. Backbone LLMs, Mistral and Qwen, only receive text input; syn = synthetic caption, hum = human caption. The baseline is calculated using majority vote; for details see Appendix E.

We also find that certain visual elements can mislead VLMs. Figure 7 shows two examples where LLaVA over-associates fire with REAL. Though wildfires are often associated with CC existence, flames may also appear as metaphor or background.

OCR. On stance detection, extra OCR input is beneficial for VLMs, though only in setups without a human caption. For LLMs, feeding VLM-generated meme captions (syn) mostly works better than using OCR, especially for Qwen. Combining OCR with synthetic captions can improve the scores for LLaVA in frame detection but always harms Molmo's performance on both tasks. Importantly, OCR fails to help VLMs and LLMs further when combined with human captions. This underlines the importance of high-quality captions, leading to the overall best model for stance. For frames, instead, LLMs outperform VLMs. We hypothesize that LLMs better grasp text inputs (especially captions), which aid fine-grained frame detection, while VLMs' performance is lower on frames and

4We observed low-quality OCR; the average Levenshtein edit distance before and after human correction is 60.75.

Figure 7: Two flame-related memes where LLaVA incorrectly predicts the REAL frame. (a) convinced stance with ALLOCATION and PROPRIETY frames; (b) skeptical stance with HOAX frame.
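The OCR-quality figure in footnote 4 is the classic Levenshtein edit distance between raw and human-corrected OCR. A minimal dynamic-programming sketch (the sample OCR pairs are invented for illustration):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]                   # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute or match
        prev = cur
    return prev[-1]

# average raw-vs-corrected distance over a (hypothetical) set of OCR pairs
pairs = [("c1imate chnge", "climate change"), ("h0ax!", "hoax")]
avg = sum(levenshtein(raw, fixed) for raw, fixed in pairs) / len(pairs)
```

Averaging this distance over all meme texts before and after correction yields a single corpus-level quality score like the 60.75 reported.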
benefits less from more explicit texts.

Human vs. Synthetic Caption. Human meme captions improve performance on both tasks in almost all setups (except for frames with LLaVA). We leave it to future work to probe how meme captions help models understand stances and frames.

5.3 Which frames are harder?

Table 3 reports per-frame performances of VLMs and LLMs.

Frame | #M | LLaVA Acc. | LLaVA F1 | Molmo Acc. | Molmo F1 | Mistral Acc. | Mistral F1 | Qwen Acc. | Qwen F1
REAL | 44 | 30.69 | 26.84 | 68.99 | 59.43 | 46.60 | 43.88 | 72.13 | 60.32
HOAX | 81 | 51.60 | 46.90 | 71.54 | 59.43 | 71.33 | 60.54 | 71.01 | 61.46
CAUS | 75 | 49.47 | 45.90 | 64.63 | 57.06 | 70.37 | 60.64 | 65.85 | 58.22
IMPA | 77 | 48.03 | 45.37 | 60.59 | 54.64 | 71.33 | 61.29 | 61.54 | 56.03
ALLO | 49 | 52.55 | 48.05 | 60.00 | 55.22 | 69.95 | 60.17 | 64.36 | 56.99
PROP | 80 | 53.62 | 49.54 | 56.86 | 53.18 | 64.10 | 56.65 | 64.10 | 57.57
ADEQ | 81 | 56.86 | 50.50 | 59.68 | 54.84 | 64.26 | 56.98 | 65.21 | 58.28
PROS | 13 | 56.86 | 50.69 | 59.63 | 54.74 | 58.99 | 53.58 | 62.71 | 56.99
average | | 49.96 | 45.47 | 62.74 | 56.07 | 64.61 | 56.72 | 65.86 | 58.23

Table 3: Frame-specific performances with 4-shot meme+hum VLMs and hum LLMs. Best and worst scores per model are bolded and underlined. #M = number of test memes with the frame label.

Consistent with overall performance, Molmo outperforms LLaVA in predicting
all 8 frames. HOAX achieves the highest score and PROPRIETY scores the lowest. For LLMs, Qwen outperforms Mistral in overall performance, but is not consistently the best model for all frames.

5.4 Can frame labels help stance detection?

Table 4 investigates whether adding gold frame labels helps stance detection for 4-shot VLMs. Notably, for LLaVA (better at stance detection), incorporating frame information tends to reduce its performance. For Molmo (better at frame detection), by contrast, adding frame information generally boosts its performance. Both models further improve their performance in the image and human caption setup.

Model | Base Input (+frame) | ΔAcc. | ΔF1
LLaVA | meme | -5.04 | +0.45
LLaVA | meme+ocr | -2.52 | +5.26
LLaVA | meme+hum | 0.00 | +1.06
LLaVA | meme+hum+ocr | -15.54 | -7.03
LLaVA | meme+syn | +2.52 | +2.27
LLaVA | meme+syn+ocr | -16.39 | -4.74
Molmo | meme | +1.26 | +2.81
Molmo | meme+ocr | +7.15 | +3.33
Molmo | meme+hum | +5.46 | +4.15
Molmo | meme+hum+ocr | +7.14 | -0.30
Molmo | meme+syn | +6.31 | +2.73
Molmo | meme+syn+ocr | +7.56 | +3.11

Table 4: VLM performance changes on stance detection when gold frame labels are added as additional inputs.

This suggests that stance and frame detection could benefit from multi-task training, improving performance through shared knowledge.

6 Meme Understanding through the Lens of Communication Science

To understand which aspects challenge models in meme understanding, we apply an interdisciplinary approach, integrating three concepts from communication science research: humor type, personalization, and responsibility. These concepts are critical in understanding the construction of meaning in affective climate communication on social media and may pose challenges for detection tasks. Humor is a key feature of memes, as they are usually created with the intention to entertain people (Taecharungroj and Nueangjamnong, 2015). Thus, we analyzed different humor types, such as puns, sarcasm, and surprise, which can have varying effects on readers.
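As a concrete illustration of the frame-augmented input setup from Section 5.4, a prompt builder might look like the following. The wording and field names are illustrative assumptions, not the paper's actual prompt:

```python
def stance_prompt(ocr, caption=None, frames=None):
    """Build a text prompt; gold frame labels, when given, are appended as extra input."""
    parts = ["Classify the stance of this climate change meme as "
             "convinced, skeptical, or neither."]
    parts.append(f"Meme text (OCR): {ocr}")
    if caption:
        parts.append(f"Caption: {caption}")
    if frames:  # the Section 5.4 ablation adds this line to the base input
        parts.append("Media frames used: " + ", ".join(frames))
    parts.append("Stance:")
    return "\n".join(parts)

print(stance_prompt("the ice is melting",
                    caption="A sad polar bear on a shrinking ice floe",
                    frames=["REAL", "IMPACT"]))
```

The base inputs in Table 4 correspond to calling this with `frames=None`; the "+frame" variants append the gold labels.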
Personalization is a common communication strategy in political communication in general; it simplifies complex political issues by focusing on individual actors instead of groups.

We recruited two bachelor's students in communication science to annotate humor type, person, and responsibility on the 235 CLIMATEMEMES test memes. Our guidelines are adapted from a comprehensive codebook on "Climate Change and Social Media"5, allowing multiple labels per item (detailed definitions in Appendix G). Table 5 shows the most common labels, the number of relevant memes, and LLaVA's subsample performance.

5Anonymized version available at: https://osf.io/3hqdk?view_only=dd6035e7b03542e4a66c2fafa4bf0d7d

Concept | Label | #M | Stance Acc. | Stance F1 | Frame Acc. | Frame F1
humor type | irony | 33 | 78.79 | 53.30 | 64.91 | 57.43
humor type | compare | 25 | 88.00 | 69.90 | 65.75 | 54.67
humor type | surprise | 21 | 85.71 | 50.42 | 64.58 | 56.82
humor type | personification | 21 | 95.24 | 82.05 | 67.26 | 61.39
humor type | joke | 19 | 84.21 | 52.53 | 58.14 | 50.86
humor type | exag | 10 | 100.00 | 100.00 | 60.31 | 56.37
humor type | pun | 5 | 80.00 | 33.33 | 59.69 | 42.86
humor type | Total | 134 | 87.42 | 63.21 | 62.95 | 54.34
personalization | ordinary | 86 | 88.37 | 66.60 | 62.55 | 54.03
personalization | celebrity | 25 | 88.00 | 53.23 | 61.75 | 52.27
personalization | political | 14 | 85.71 | 78.79 | 54.80 | 47.93
personalization | NGO | 14 | 50.00 | 37.18 | 66.74 | 55.43
personalization | Total | 139 | 78.02 | 58.95 | 61.87 | 54.35
responsibility | macro | 50 | 88.00 | 51.06 | 64.88 | 58.39
responsibility | meso | 37 | 89.19 | 31.43 | 60.47 | 52.10
responsibility | micro | 37 | 83.78 | 46.88 | 60.26 | 52.56
responsibility | Total | 124 | 86.99 | 43.12 | 61.84 | 54.40

Table 5: LLaVA 4-shot meme+hum results on test subsamples with humor, person, and responsibility labels.

In stance detection, LLaVA performs well on memes with the humor types exaggeration and personification. Memes with irony and puns are difficult, receiving the lowest accuracy and F1. Memes with ordinary and celebrity under the personalization categories show strong performance, but NGO stands out as challenging. For responsibility, memes concerning the macro level are the easiest for the model, while meso-level memes are the hardest. Since micro/meso-level responsibilities address specific individuals (e.g., politicians) or organizations (e.g., companies), we hypothesize that macro-level responsibility (e.g., society at large) leads to less variation and eases meme understanding.

In frame detection, memes with the humor types pun and joke are the hardest. Under personalization, memes featuring NGOs or celebrities are easier for the model than political ones. Frame detection on memes attributing responsibility mirrors the results on stance detection, with macro performing best and meso worst. In sum, this interdisciplinary annotation using taxonomies from communication science provides insights into aspects that caused difficulties in meme stance and frame detection. Appendix H provides additional error analyses.

7 Conclusion

We introduce CLIMATEMEMES, a new dataset of climate change memes annotated with stance and media frames. We demonstrate that media frame preferences are strong indicators of stance, with convinced and skeptical stances favoring distinct frames. We compare VLMs and LLMs and identify challenges in understanding multimodal information. Our interdisciplinary paper integrates concepts from communication science and reveals which meme aspects challenged the model.

Limitations

Potential Bias in the Sample Due to the Platform
Our dataset is exclusively composed of memes collected from Reddit, which introduces a potential bias.
By focusing solely on this platform, we limit the diversity of content that could be found on other platforms like Twitter, Facebook, Instagram, or 4chan. Each platform has its own user base, culture, and way of sharing and discussing memes, which could result in differences in the types of memes that are shared. This platform-specific limitation means that our findings might not be fully representative of meme trends across the internet as a whole. The memes on Reddit, for instance, might reflect particular political, social, or cultural preferences that do not necessarily align with those on other platforms, making the sample less generalizable.

Distribution of Meme Stances and Frames Might Be Different on Other Platforms
The stance and framing of memes on Reddit may not reflect the patterns found on other social media platforms. Reddit has a unique structure, where specific subreddits cater to distinct interests, communities, and ideologies, which could influence the stances and frames adopted in memes. For example, some subreddits may have a higher concentration of memes that are either supportive or skeptical of climate change, while other platforms might exhibit different trends. Memes on Twitter or Instagram could carry different connotations, tones, or styles that might not be as prevalent on Reddit. Thus, the distribution of meme stances and frames could vary significantly across platforms, and a more comprehensive understanding of meme discourse would require analyzing multiple platforms to account for these differences.

Only a Single Annotator
We acknowledge the limitation
of only a single annotator. Despite limited resources, we carefully refined our annotation guidelines through six iterations, totaling about 540 hours of annotation work.

Monthly Frequency: Sample Size May Be Too Small in Some Months to Derive Conclusions About Temporal Trends
The monthly frequency of memes in our dataset might not be large enough in certain months to allow for meaningful conclusions about trends or changes over time. If the sample size in a given month is too small, it becomes difficult to accurately detect shifts in meme stances, frames, or topics that may occur over longer periods. This limitation could obscure any subtle trends or variations in the frequency of specific meme types or themes, making it harder to assess how the discourse around a particular subject evolves. For instance, if a meme trend spikes during a specific event but the dataset contains very few memes from that month, it might not reflect the broader public sentiment or provide an accurate representation of the temporal dynamics.

Visual Inputs for VLMs
We did not evaluate VLMs without visual input, and using the LLM backbone alone might not be fully comparable to running a VLM without image input, because VLMs are fine-tuned on different datasets.

Ethics Statement

All annotations were conducted in accordance with ethical guidelines, ensuring that annotators were not exposed to any psychologically distressing content during the process. All annotators are paid according to national standards.

Acknowledgements

This work is supported by the KLIMA-MEMES project funded by the Bavarian Research Institute for Digital Transformation (bidt), an institute of the Bavarian Academy of Sciences and Humanities. The authors are responsible for the content of this publication.

References

Mohammad Ali and Naeemul Hassan. 2022. A survey of computational framing analysis approaches.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9335–9348, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, and Jing Jiang. 2022. Prompting for multimodal hateful meme classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 321–332, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 438–444, Beijing, China. Association for Computational Linguistics.

Dushyant Singh Chauhan, Dhanush S R, Asif Ekbal, and Pushpak Bhattacharyya. 2020. All-in-one: A deep attentive multi-task learning framework for humour, sarcasm, offensive, motivation, and sentiment on memes. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 281–290, Suzhou, China. Association for Computational Linguistics.

Yingying Chen, Kjerstin Thorson, and John Lavaccare. 2022. Convergence and divergence: The evolution of climate change frames within and across
public events. International Journal of Communication, 16:23.

Yuyan Chen, Songzhou Yan, Zhihong Zhu, Zhixu Li, and Yanghua Xiao. 2024. XMeCap: Meme caption generation with sub-image adaptability. In ACM Multimedia 2024.

Joe PL Davidson and Luke Kemp. 2024. Climate catastrophe: The value of envisioning the worst-case scenarios of climate change. Wiley Interdisciplinary Reviews: Climate Change, 15(2):e871.

Corey B. Davis, Mark Glantz, and David R. Novak. 2016. "You Can't Run Your SUV on Cute. Let's Go!": Internet memes as delegitimizing discourse. Environmental Communication, 10:62–83.

Matt Deitke, Christopher Clark, Sangho Lee, Rohun Tripathi, Yue Yang, Jae Sung Park, Mohammadreza Salehi, Niklas Muennighoff, Kyle Lo, Luca Soldaini, and 1 others. 2024. Molmo and PixMo: Open weights and open data for state-of-the-art multimodal models. arXiv preprint arXiv:2409.17146.

Robert M Entman. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4):51–58.

Lea Frermann, Jiatong Li, Shima Khanehzar, and Gosia Mikolajczak. 2023. Conflicts, villains, resolutions: Towards models of narrative media framing. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8712–8732, Toronto, Canada. Association for Computational Linguistics.

William A Gamson and Andre Modigliani. 1989. Media discourse and public opinion on nuclear power: A constructionist approach. American Journal of Sociology, 95(1):1–37.

Todd Gitlin. 1980. The Whole World Is Watching. Berkeley: University of California Press.

Ming Shan Hee, Wen-Haw Chong, and Roy Ka-Wei Lee. 2023. Decoding the underlying meaning of multimodal hateful memes. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, pages 5995–6003. International Joint Conferences on Artificial Intelligence Organization. AI for Good.

Andrew J Hoffman. 2011. Talking past each other?
Cultural framing of skeptical and convinced logics in the climate change debate. Organization & Environment, 24(1):3–33.

Jianzhao Huang, Hongzhan Lin, Liu Ziyan, Ziyang Luo, Guang Chen, and Jing Ma. 2024. Towards low-resource harmful meme detection with LMM agents. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2269–2293, Miami, Florida, USA. Association for Computational Linguistics.

EunJeong Hwang and Vered Shwartz. 2023. MemeCap: A dataset for captioning and interpreting memes. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1433–1445, Singapore. Association for Computational Linguistics.

S. Mo Jang and P. Sol Hart. 2015. Polarized frames on "climate change" and "global warming" across countries and states: Evidence from Twitter big data. Global Environmental Change, 32:11–17.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Preprint, arXiv:2310.06825.

Kristen Johnson, Di Jin, and Dan Goldwasser. 2017. Leveraging behavioral and social information for weakly supervised collective classification of political discourse on Twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 741–752,
Vancouver, Canada. Association for Computational Linguistics.

Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The Hateful Memes Challenge: Detecting hate speech in multimodal memes. In Advances in Neural Information Processing Systems, volume 33, pages 2611–2624. Curran Associates, Inc.

Chen Liu, Gregor Geigle, Robin Krebs, and Iryna Gurevych. 2022. FigMemes: A dataset for figurative language identification in politically-opinionated memes. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7069–7086, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2024. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 26296–26306.

Julia Mendelsohn, Ceren Budak, and David Jurgens. 2021. Modeling framing in immigration discourse on social media. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2219–2263, Online. Association for Computational Linguistics.

Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31–41, San Diego, California. Association for Computational Linguistics.

Khoi P. N. Nguyen and Vincent Ng. 2024. Computational meme understanding: A survey. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21251–21267, Miami, Florida, USA. Association for Computational Linguistics.

Yulia Otmakhova, Shima Khanehzar, and Lea Frermann. 2024. Media framing: A typology and survey of computational approaches across disciplines.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15407–15428, Bangkok, Thailand. Association for Computational Linguistics.

Shraman Pramanick, Dimitar Dimitrov, Rituparna Mukherjee, Shivam Sharma, Md. Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2021. Detecting harmful memes and their targets. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2783–2796, Online. Association for Computational Linguistics.

Andrew S. Ross and Damian J. Rivers. 2019. Internet memes, media frames, and the conflicting logics of climate change discourse. Environmental Communication, 13(7):975–994. https://doi.org/10.1080/17524032.2018.1560347.

Chhavi Sharma, Deepesh Bhageria, William Scott, Srinivas PYKL, Amitava Das, Tanmoy Chakraborty, Viswanath Pulabaigari, and Björn Gambäck. 2020. SemEval-2020 task 8: Memotion analysis – the visuo-lingual metaphor! In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 759–773, Barcelona (online). International Committee for Computational Linguistics.

Limor Shifman. 2014. Memes in Digital Culture. The MIT Press.

David Snow and Robert D. Benford. 1992. Master frames and cycles of protest. In Frontiers in Social Movement Theory.

Manfred Stede, Yannic Bracke, Luka Borec, Neele Charlotte Kinkel, and Maria Skeppstedt. 2023. Framing climate change in nature and science editorials: applications of supervised and unsupervised text categorization. Journal of Computational Social Science, 6(2):485–513.

Manfred Stede and Ronny Patz. 2021. The climate change debate and natural language processing. In Proceedings of the 1st Workshop on NLP for
Positive Impact, pages 8–18, Online. Association for Computational Linguistics.

Viriya Taecharungroj and Pitchanut Nueangjamnong. 2015. Humour 2.0: Styles and types of humour and virality of memes on Facebook. Journal of Creative Communications, 10(3):288–302.

Xinpeng Wang, Bolei Ma, Chengzhi Hu, Leon Weber-Genzel, Paul Röttger, Frauke Kreuter, Dirk Hovy, and Barbara Plank. 2024. "My answer is C": First-token probabilities do not match text answers in instruction-tuned language models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7407–7416, Bangkok, Thailand. Association for Computational Linguistics.

Bo Xu, Tingting Li, Junzhe Zheng, Mehdi Naseriparsa, Zhehuan Zhao, Hongfei Lin, and Feng Xia. 2022. MET-Meme: A multimodal meme dataset rich in metaphors. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, pages 2887–2899, New York, NY, USA. Association for Computing Machinery.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, and 43 others. 2024. Qwen2 technical report. Preprint, arXiv:2407.10671.

Bingbing Zhang and Juliet Pinto. 2021. Changing the world one meme at a time: The effects of climate change memes on civic engagement intentions. Environmental Communication, 15:749–764.

Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, and Minlie Huang. 2023. Large language models are not robust multiple choice selectors. In The Twelfth International Conference on Learning Representations.

A Subreddits in CLIMATEMEMES

Table 6 details the public descriptions and meme frequencies of the 47 subreddits in CLIMATEMEMES.

subreddit | frequency | description
ClimateMemes | 591 | The community to share environmental memes of prime quality. We advocate for climate action through funny captions and satire. Release your inner Greta, share your dankest decarbonization memes and raise global awareness to save the planet! Discuss climate strikes, climate change denial and doomerism, Fridays For Future, facts and news about nature, climate crisis quotes, ecology, Extinction Rebellion, and the end of the world.
TheRightCantMeme | 91 | Get your fix at left-wing Reddit alternatives: Hexbear and Lemmygrad. Also check out the Discord.
dankmemes | 90 | D A N K
memes | 76 | Memes! A way of describing cultural information being shared. An element of a culture or system of behavior that may be considered to be passed from one individual to another by nongenetic means, especially imitation.
meme | 50 | r/meme is a place to share memes. We're fairly liberal but do have a few rules on what can and cannot be shared.
ConservativeMemes | 45 | Become a ConservativeMemes subscriber! — Click the JOIN button now, and post your Conservative Memes later at /r/ConservativeMemes !!! — If you like political humor, political memes, politically incorrect memes, or conservative memes, this is the sub for you!
PoliticalCompassMemes | 39 | Political Compass Memes
terriblefacebookmemes | 30 | Community for all those terrible memes your uncle posts on facebook
ConspiracyMemes | 18 | This subreddit is devoted to memes relating to all things conspiracy. Things are pretty laid back around here so all people are welcome. The moderators believe in free speech and try not to moderate comments or posts unless it is absolutely necessary.
Memes_Of_The_Dank | 15 | This is a meme subreddit. That should be obvious by now. Also, it is slowly recovering from bots, and that's good.
libertarianmeme | 14 | For an end to democracy and tyranny. For more information about our ideology, check out the Mises Institute
HistoryMemes | 11 | A place for history memes about events over 20 years ago.
CommunismMemes | 9 | A place to share memes about communism.
PoliticalMemes | 7 | We're striving for equality here. Not "equality" in the sense that we'll allow people to post bigoted nonsense or perpetuate a false equivalency of entities, but "equality" in the sense that we are all co-inhabitants of this flying rock and need to learn to live together peacefully.
dank_meme | 7 | Dank Memes
MemeEconomy | 7 |
sciencememes | 7 |
PrequelMemes | 6 | Memes of the Star Wars Prequels.
PresidentialRaceMemes | 6 |
MemeThatNews | 6 | Learn and comment on the news with memes.
AusMemes | 6 | The Australia Memes subreddit. Just waiting for a mate.
Animemes | 6 | A community for anime memes!
Marxism_Memes | 5 | MEMES ARE THE NEW PAMPHLETS JURY NULLIFICATION FOR COMRADE LUIGI!
TheLeftCantMeme | 4 | They make a lot of bad Political Memes
MinecraftMemes | 3 | A place to post memes about Minecraft! Our Discord Server can be found in the sidebar below.
AnarchyMemeCollective | 3 | A reddit for sharing anarchist memes and for discussing anarchism. If you share your own OC let us know and we may share if on our other platforms.
depression_memes | 3 | Memes about depression.
Funnymemes | 3 | "Where Laughter Lives: Your Daily Dose of the Funniest Memes!"
marvelmemes | 2 | Welcome to r/marvelmemes: The home of Marvel memes on Reddit!
lotrmemes | 2 | Come on in, have a seat! This subreddit is a warm resting place for all weary travelers who are fond of Tolkien and his works. We welcome all Tolkien related content! Grab a pint, a long pipe, and relax.
VegMeme | 2 | A place to share animal rights humor, cartoons, image macros etc, because if you can't have a laugh at the hypocrisy and ignorance of carnists or have a good-natured laugh at ourselves you will probably become a misanthropic douchebag.
Jordan_Peterson_Memes | 2 | Welcome to the official subreddit for Jordan Peterson memes.
animememes | 2 | An anime meme subreddit that's friendly for women, queer people, and generally marginalized anime fans who want a break from how toxic anime spaces usually are. Of course, anyone is welcome, but be respectful to the intention of the space.
AvatarMemes | 2 | A subreddit for memes and other humor related to the Avatar franchise. Jokes based on ATLA, LoK, etc. are welcome.
CoronavirusMemes | 2 | Opening back up due to popular demand, didn't know people still wanted to post about the coronavirus. Monkeypoxmemes are allowed. Getting a laugh out of the Coronavirus while we still can, and spreading happiness in a time of distress.
SequelMemes | 1 | Memes of the Star Wars Sequels
VoluntaristMemes | 1 | Memes for voluntarists and other liberty loving people.
CommunistMemes | 1 | Communism is always the end goal!
SimpsonsMemes | 1 | Memes from The Simpsons!
MemePiece | 1 | The best place to find One Piece memes! We celebrate the comedic and casual side of the series One Piece. Casual or low effort content, normally removed from r/OnePiece, is likely welcome!
CrusadeMemes | 1 | DEUS VULT
MemeReserve | 1 | The Doomsday Global Meme Vault is a fail-safe meme storage sub, built to stand the test of time — and the challenge of natural or economical collapse. Only for the best memes!
GameOfThronesMemes | 1 | This subreddit is currently closed. Please check out r/aSongofMemesAndRage for memes based off GOT, ASOIAF, etc.
IncrediblesMemes | 1 | It's showtime
memesITA | 1 | Pizza, pasta & memes.
AnimeMeme | 1 | AnimeMeme for anime memes.
YouBelongWithMemes | 1 | The official meme subreddit for r/TaylorSwift

Table 6: CLIMATEMEMES's 47 subreddits with their descriptions and meme frequencies.

B Annotation Guidelines

B.1 Filtering Climate Change Memes

Is this image associated with the topic of climate change? Often, images that discuss terms such as "climate change," "global warming," "greenhouse gas," "carbon emission," "fossil fuel," "ozone," "air pollution," "carbon dioxide emissions," "deforestation," "industrial pollution," "rising sea levels," "extreme weather," "melting glaciers," "ocean acidification," "biodiversity loss," "ecosystem disruption," "carbon capture," "carbon storage," "soil carbon," "renewable energy," "sustainable practices," "Paris Agreement," "Kyoto Protocol," "carbon tax," "emissions trading schemes," "green technology," "sustainable technology," and "environmental change" are associated with climate change. Additionally, if the meme features a well-known environmentalist or a political leader who has made statements related to climate change and environmental protection, it should also be considered "associated with climate change." If you encounter an unfamiliar person, please use Google to search and confirm.

Is this image a meme? Is it a cartoon?
Memes are created by taking an existing widespread image and attaching new meaning to it by adding text within the image. A political cartoon, also known as an editorial cartoon, uses caricatures and satire to express an artist's opinion on current events, often critiquing political leaders, social issues, or corruption through humor and exaggeration. A cartoon style often features exaggerated characters and simplified forms, and the text is usually in hand-drawn fonts that match the casual, expressive tone of the illustration. Both memes and political cartoons are considered memes in this study.

B.2 Stance Annotation

What is the stance of this CC meme? We annotate the stances of CC memes with the following three categories: convinced, skeptical, and neither.

•convinced: Accepts environmental risks, supports regulation of harmful activities, and reflects egalitarian and communitarian values.
•skeptical: Downplays or denies environmental risks, opposes regulation, and prioritizes individual freedom and commerce.
•neither: Does not align with the convinced or skeptical stance and may present a neutral or unrelated stance.

B.3 Media Frame Annotation

Climate change, a critical global issue, refers to long-term alterations in temperature and weather patterns, largely driven by human activities such as fossil fuel combustion. As this issue gains prominence, memes—images paired with text—have become a widespread tool for expressing opinions and social commentary online via media framing. In
this task, you will be given CC memes and will be asked the following question: which media frames are used in these CC memes? Choose one or multiple that apply.

•REAL emphasizes that there is evidence indicating that CC is occurring;
•HOAX questions the faithfulness of public communication by politicians, the media, environmentalists, etc., e.g., whether they are misrepresented or manipulated;
•CAUSE attributes human activities as a significant cause of CC;
•IMPACT highlights that CC leads to more net negative outcomes than if there were no CC;
•ALLOCATION captures discussions about the responsibility of certain groups, such as nations, organizations, or even generations, to take more action on climate change than others;
•PROPRIETY reflects debates on whether current actions are appropriate or effective;
•ADEQUACY highlights critiques regarding whether existing measures are sufficient to address climate risks or whether more action is needed;
•PROSPECT explores perceptions of the potential outcomes of positive actions, distinguishing between climate doomists, who view catastrophe as inevitable, and climate risk realists, who believe meaningful prevention is still achievable (Davidson and Kemp, 2024).

B.4 Discussion about Moral Frame

While moral judgment is a salient feature in many memes—such as blaming past generations, exposing hypocrisy, or invoking responsibility for future generations—it is theoretically debated whether morality should be treated as a distinct media frame or as an underlying component of framing processes (Entman, 1993; Gamson and Modigliani, 1989). Rather than isolating morality as a standalone frame, we integrate moral reasoning into the ALLOCATION frame.
This choice reflects our observation that moral claims are often embedded in discussions of responsibility and fairness, and it enables us to capture moral stances consistently across both convinced and skeptical memes without losing theoretical coherence.

B.5 Frame-level IAA

Table 7 presents per-frame Cohen’s κ for inter-annotator agreement (IAA) between two annotators.

Frame              κ
Real               0.810
Hoax               0.868
Cause              0.825
Impact             0.711
Action_allocation  0.786
Action_propriety   0.777
Action_adequacy    0.740
Action_prospect    0.834

Table 7: Cohen’s κ scores for IAA between two annotators.

C Frame Distribution in Climate Change Memes

Among the 1,184 climate change-associated memes we identified, we annotated a total of 2,499 frames, averaging approximately 2.11 frames per meme. Table 8 presents the distribution of these frames across eight main categories. The ADEQUACY frame is the most frequently annotated, followed closely by the HOAX, IMPACT, and CAUSE frames. PROSPECT is the least frequent frame, indicating fewer memes discussing future-oriented aspects of climate change.

D Frame Correlation

We demonstrate the correlation among the 8 frames in Figure 8.

E Stance and Frame Distribution in Test Set & Majority Vote Baseline

In our dataset, we analyzed a total of 235 cases annotated with stance labels. The distribution of the stances in these cases is summarized in Table 9. The majority class is Convinced with 190 cases. A majority-vote baseline predicting this class achieves an accuracy of 0.8085 and a macro F1 score of 0.2980.

Table 10 summarizes the distribution of “positive” and “negative” labels across the eight frames. For each meme, we computed a majority-vote baseline
that predicts “negative” for all frames. The reported metrics are averaged over all 235 cases. This baseline, predicting “negative” for all frames, achieves an average accuracy of 0.7383 and an average macro F1 score of 0.4398 across the dataset.

Figure 8: Correlation heatmap of frames. The values represent pairwise Pearson correlation coefficients. Values marked with * indicate corresponding p-values less than 0.05, indicating significance.

F Additional Experimental Results

We show the full experimental results of stance detection for VLMs in Table 11 and for LLMs in Table 12, and of frame detection for VLMs in Table 13 and for LLMs in Table 14. In the line charts in Figure 9, we also present VLM performance on stance and media frame detection with different shot and input setups.

Figure 9: Accuracy and Macro-F1 of VLMs on stance and media frame detection with different shot and input setups. (a) Stance detection; (b) Frame detection.

G Definition of CC-associated Communication Science Concepts

Humor Type. For the humor types category, the content format used to create humor in memes is coded. Humor types are initially independent of the respective humor style. Following Taecharungroj and Nueangjamnong (2015), a distinction is made between seven humor types, several of which can in principle be used simultaneously in a meme.

• Puns use language to construct new meanings or use words or phrases in a way that suggests two interpretations, e.g. words that are pronounced the same but have different meanings.
• Personifications (personification) are used when human traits and/or behavior are attributed to other objects such as animals, plants or objects.
• Exaggerations and understatements are disproportionate enlargements or reductions of
a fact or context. Something is depicted as being larger or smaller than it (supposedly) actually is. Both the behavior of people and the consequences of events are depicted larger or smaller.
• Comparisons are combinations of two or more elements (e.g. before-and-after pictures) to construct a funny situation.
• Irony and sarcasm refer to the use of words to express the opposite of what one actually means.
• Surprise is the use of unexpected elements in memes. Memes with this element have a surprising ending/resolution.
• Jokes and nonsense describes content with no particular meaning and non-serious statements or actions that are only intended to make us laugh.

Frame  REAL   HOAX   CAUSE  IMPACT  ALLOCATION  PROPRIETY  ADEQUACY  PROSPECT  Total
Count  269    387    370    395     208         382        419       69        2,499
%      10.8%  15.5%  14.8%  15.8%   8.3%        15.3%      16.8%     2.8%      100%

Table 8: Distribution of frames annotated in the 1,184 climate change-associated memes (2,499 in total, averaging 2.11 per meme).

Stance  Convinced  Skeptical  Neither
Count   190        32         13

Table 9: Distribution of stance labels in the 235 test memes.

Frame       Positive  Negative  Total
REAL        42        193       235
HOAX        79        156       235
CAUSE       75        160       235
IMPACT      74        161       235
ALLOCATION  49        186       235
PROPRIETY   78        157       235
ADEQUACY    82        153       235
PROSPECT    12        222       235

Table 10: Distribution of “positive” and “negative” labels across eight frames in the 235 test memes.

Personalization. Who is shown in the picture?

• Political actors include heads of
state, members of government, official state delegates to the COP, ministers, and representatives of institutions such as the UN or EU.
• NGO members or environmental activists. Members of protest movements such as Fridays for Future are considered environmental activists, whereas “normal” participants in demonstrations are coded as “normal citizens”.
• Celebrities are famous people who do not have an official political function. This includes, for example, people such as athletes, actors, influencers or artists.
• Normal or ordinary citizens are people who are not clearly assigned to one of the other categories.

Responsibility. This category records to whom the responsibility for solving or combating the climate problem is attributed. The aim is to record who should take measures against climate change (e.g. more environmental protection, fewer emissions) or who is expected to do so. Responsibility can be explicitly attributed or suggested by listing necessary measures that only a certain group can take.

• Responsibility at micro level: Responsibility of individual persons such as politicians, activists, entrepreneurs, etc.
• Responsibility at meso level: Responsibility of individual companies, institutions, parties, parliaments, governments.
• Responsibility at macro level: Responsibility of certain countries, politics in general, the economic system, society, us as humanity, etc.

H Case Study: Memes with Features in Communication

We present a case study on selected memes to analyze the model’s stance and frame prediction performance. Table 15 showcases three representative examples where our model made prediction errors. For each meme, we provide the associated features, gold and predicted stances and frames, as well as a detailed interpretation generated by the model. This qualitative analysis helps illustrate common challenges and nuanced aspects of meme communication that contribute to model misclassification.
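The annotation layers described in Appendices B and G (stance, media frames, humor types, personalization, responsibility) can be bundled into one record per meme. A hypothetical sketch of such a schema (field names are illustrative, not the authors’ data format):

```python
from dataclasses import dataclass, field

FRAMES = ["REAL", "HOAX", "CAUSE", "IMPACT",
          "ALLOCATION", "PROPRIETY", "ADEQUACY", "PROSPECT"]

@dataclass
class MemeAnnotation:
    meme_id: str
    stance: str                                          # "convinced" | "skeptical" | "neither"
    frames: list = field(default_factory=list)           # one or more of FRAMES
    humor_types: list = field(default_factory=list)      # e.g. "pun", "irony_sarcasm"
    personalization: list = field(default_factory=list)  # who is shown
    responsibility: list = field(default_factory=list)   # "micro" | "meso" | "macro"

    def frame_vector(self):
        """Binary multi-label encoding over the eight frames."""
        return [int(f in self.frames) for f in FRAMES]

ann = MemeAnnotation("m001", "convinced",
                     frames=["HOAX", "ADEQUACY"],
                     humor_types=["irony_sarcasm"],
                     responsibility=["macro"])
print(ann.frame_vector())  # [0, 1, 0, 0, 0, 0, 1, 0]
```

The binary `frame_vector` view corresponds to the “positive”/“negative” per-frame labels summarized in Table 10.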
I Elaboration About Use of AI Assistants

The authors used ChatGPT to polish the writing for clarity and coherence and to assist with code generation. The authors manually inspected all ChatGPT suggestions and made corrections when necessary.

Model  #S  Inputs        Acc.   F1     Precision  Recall
LLaVA  0   meme          76.89  28.98  26.75  31.61
           meme+ocr      45.80  30.80  35.87  37.73
           meme+hum      68.91  46.12  58.29  55.05
           meme+hum+ocr  66.39  44.83  48.50  55.75
           meme+syn      55.46  34.41  36.67  38.22
           meme+syn+ocr  56.72  35.75  38.32  40.48
       1   meme          64.29  37.75  39.42  38.16
           meme+ocr      60.08  42.27  41.78  49.04
           meme+hum      73.95  55.68  54.30  63.42
           meme+hum+ocr  71.85  52.60  53.35  61.04
           meme+syn      62.18  37.19  37.69  40.12
           meme+syn+ocr  66.39  41.85  41.85  44.24
       2   meme          68.07  34.77  40.36  35.37
           meme+ocr      68.91  43.26  42.31  45.27
           meme+hum      82.35  56.08  55.79  58.62
           meme+hum+ocr  74.37  45.31  43.33  52.29
           meme+syn      70.59  41.25  41.07  42.70
           meme+syn+ocr  73.95  43.79  44.55  44.95
       3   meme          73.95  35.64  45.55  35.40
           meme+ocr      74.79  45.90  45.97  45.95
           meme+hum      83.61  54.21  54.73  55.01
           meme+hum+ocr  81.09  51.65  52.37  53.97
           meme+syn      73.11  40.04  38.67  42.22
           meme+syn+ocr  76.47  41.10  39.71  42.73
       4   meme          77.31  39.08  49.55  40.69
           meme+ocr      77.31  44.06  46.19  42.86
           meme+hum      86.55  56.68  62.50  54.48
           meme+hum+ocr  83.19  53.57  55.13  53.97
           meme+syn      73.95  40.01  38.76  41.69
           meme+syn+ocr  76.89  41.10  39.60  42.90
Molmo  0   meme          42.02  25.02  30.30  30.31
           meme+ocr      39.08  24.32  31.31  31.70
           meme+hum      47.06  33.27  40.09  46.07
           meme+hum+ocr  43.70  31.50  40.16  45.55
           meme+syn      30.67  25.91  37.67  41.08
           meme+syn+ocr  28.99  24.80  37.18  40.38
       1   meme          61.76  30.83  30.98  32.34
           meme+ocr      52.10  31.68  34.73  37.93
           meme+hum      59.66  41.76  43.43  51.90
           meme+hum+ocr  53.78  37.00  41.04  48.83
           meme+syn      53.78  37.45  39.43  46.01
           meme+syn+ocr  56.72  39.45  41.55  48.08
       2   meme          45.38  26.90  31.27  32.56
           meme+ocr      52.52  31.22  34.30  36.36
           meme+hum      68.49  47.99  46.59  55.31
           meme+hum+ocr  66.39  43.66  44.16  52.27
           meme+syn      57.38  35.00  36.62  38.26
           meme+syn+ocr  63.03  36.54  36.65  41.55
       3   meme          47.06  28.81  33.69  37.59
           meme+ocr      55.88  33.20  35.32  39.48
           meme+hum      68.49  47.99  47.64  53.57
           meme+hum+ocr  68.49  44.41  45.90  51.40
           meme+syn      60.34  37.13  37.96  41.21
           meme+syn+ocr  63.03  35.93  36.24  40.68
       4   meme          47.06  28.16  32.95  34.98
           meme+ocr      57.56  34.70  36.37  42.78
           meme+hum      72.27  49.53  50.21  55.34
           meme+hum+ocr  70.17  46.52  52.67  54.70
           meme+syn      61.76  39.25  40.45  45.16
           meme+syn+ocr  65.97  38.32  37.73  43.63

Table 11: VLMs’ performance in terms of accuracy and Macro-F1 on stance detection. Hum = human caption, syn = synthetic caption, #S = number of shots.

Model    #S  Inputs   Acc.   F1     Precision  Recall
Mistral  0   ocr      46.64  33.90  37.53  42.64
             hum      66.67  51.62  50.70  65.00
             hum+ocr  62.71  47.90  47.81  60.98
             syn      39.41  29.94  37.35  40.45
             syn+ocr  41.53  30.61  36.59  41.32
         1   ocr      48.10  35.28  39.01  44.88
             hum      69.20  50.68  49.24  59.52
             hum+ocr  62.87  49.07  48.56  64.09
             syn      54.62  37.70  39.04  44.40
             syn+ocr  52.94  40.13  41.31  51.75
         2   ocr      48.74  36.19  39.70  45.89
             hum      71.73  52.68  50.88  60.56
             hum+ocr  63.45  46.47  46.22  56.28
             syn      56.78  39.92  40.15  46.82
             syn+ocr  53.81  42.10  43.23  56.23
         3   ocr      52.32  38.59  40.81  49.21
             hum      77.54  57.36  55.22  62.96
             hum+ocr  69.33  50.81  49.02  59.57
             syn      55.93  36.06  37.34  41.03
             syn+ocr  57.98  43.90  43.68  55.56
         4   ocr      51.90  37.09  39.52  46.43
             hum      79.32  60.54  57.57  66.95
             hum+ocr  67.65  48.96  47.40  57.36
             syn      58.23  36.06  37.24  40.24
             syn+ocr  59.66  42.71  42.97  51.47
Qwen2    0   ocr      41.95  31.49  36.80  45.72
             hum      64.29  50.29  55.11  59.46
             hum+ocr  67.51  49.31  49.53  59.05
             syn      55.88  36.20  53.12  51.88
             syn+ocr  61.76  37.06  43.43  47.13
         1   ocr      43.28  31.34  35.86  40.83
             hum      64.71  48.60  50.08  58.11
             hum+ocr  65.13  49.02  50.31  59.80
             syn      61.34  40.18  42.22  43.47
             syn+ocr  65.13  39.49  42.33  39.37
         2   ocr      44.96  32.74  37.72  44.56
             hum      70.59  52.60  55.23  59.66
             hum+ocr  69.33  50.69  52.38  61.32
             syn      62.61  40.74  50.80  43.77
             syn+ocr  64.29  42.51  47.10  46.20
         3   ocr      47.06  34.10  38.19  42.38
             hum      73.53  55.19  57.97  61.74
             hum+ocr  66.81  49.35  50.24  58.10
             syn      62.18  40.97  47.24  44.47
             syn+ocr  59.24  39.80  44.51  45.65
         4   ocr      49.16  34.06  37.96  39.98
             hum      73.11  53.28  55.79  58.30
             hum+ocr  70.17  51.66  52.22  61.01
             syn      68.91  44.66  47.97  49.62
             syn+ocr  61.34  39.08  40.29  44.12

Table 12: LLMs’ performance in terms of accuracy and Macro-F1 on stance detection. Hum = human caption, syn = synthetic caption, #S = number of shots.

Model  #S  Inputs        Acc.   F1     Precision  Recall
LLaVA  0   meme          46.27  41.06  44.59  50.60
           meme+ocr      44.65  40.27  46.50  50.39
           meme+hum      49.38  45.18  51.77  53.19
           meme+hum+ocr  46.70  43.10  51.23  52.08
           meme+syn      44.93  41.34  50.59  50.22
           meme+syn+ocr  44.19  40.85  50.65  50.35
       1   meme          55.86  47.82  50.96  53.05
           meme+ocr      53.09  45.94  49.77  52.87
           meme+hum      56.42  49.85  53.13  56.16
           meme+hum+ocr  56.62  49.44  53.21  55.01
           meme+syn      57.71  49.35  52.78  53.97
           meme+syn+ocr  56.97  48.89  52.42  53.85
       2   meme          54.87  47.03  50.80  52.67
           meme+ocr      50.95  44.22  47.79  52.40
           meme+hum      55.86  48.91  52.19  55.72
           meme+hum+ocr  54.85  47.80  51.38  54.34
           meme+syn      58.13  49.56  52.88  53.88
           meme+syn+ocr  57.13  48.74  52.03  53.51
       3   meme          53.44  46.15  49.55  53.47
           meme+ocr      48.86  42.50  45.73  53.41
           meme+hum      51.91  45.14  47.76  55.38
           meme+hum+ocr  53.56  46.75  50.18  55.09
           meme+syn      55.62  47.85  51.38  54.45
           meme+syn+ocr  55.53  47.74  51.29  54.54
       4   meme          51.87  45.63  50.22  53.83
           meme+ocr      46.36  40.72  43.54  53.40
           meme+hum      49.96  44.18  47.60  55.55
           meme+hum+ocr  50.53  44.46  48.10  54.18
           meme+syn      52.45  45.78  49.53  54.70
           meme+syn+ocr  52.57  45.87  49.43  54.81
Molmo  0   meme          49.38  43.89  48.00  55.04
           meme+ocr      53.90  46.13  49.03  55.52
           meme+hum      53.66  47.71  51.30  56.43
           meme+hum+ocr  53.21  47.23  50.97  56.12
           meme+syn      51.80  45.80  49.40  55.56
           meme+syn+ocr  52.77  46.20  49.21  55.35
       1   meme          54.88  47.73  50.06  55.65
           meme+ocr      58.20  48.36  50.89  55.82
           meme+hum      57.75  50.68  54.44  57.07
           meme+hum+ocr  58.32  50.04  53.30  57.14
           meme+syn      50.51  44.61  49.13  54.24
           meme+syn+ocr  51.42  45.30  49.80  55.25
       2   meme          53.58  46.62  48.77  54.85
           meme+ocr      54.89  47.35  51.36  55.12
           meme+hum      59.84  52.15  55.45  57.88
           meme+hum+ocr  56.88  49.58  53.43  56.63
           meme+syn      52.73  46.66  51.48  55.47
           meme+syn+ocr  54.26  47.58  51.83  55.43
       3   meme          55.72  48.61  51.00  55.14
           meme+ocr      54.02  47.28  51.17  56.01
           meme+hum      60.04  52.05  54.70  57.93
           meme+hum+ocr  58.36  50.93  54.35  57.98
           meme+syn      56.16  49.14  53.04  55.84
           meme+syn+ocr  53.71  47.00  50.81  55.44
       4   meme          60.37  52.60  54.99  57.41
           meme+ocr      56.98  49.68  53.12  56.63
           meme+hum      62.74  54.24  56.65  59.35
           meme+hum+ocr  60.40  52.46  55.06  58.54
           meme+syn      58.37  51.02  54.12  57.04
           meme+syn+ocr  54.23  47.97  51.53  56.92

Table 13: VLMs’ performance in terms of accuracy and Macro-F1 on frame detection. Hum = human caption, syn = synthetic caption, #S = number of shots.

Model    #S  Inputs   Acc.   F1     Precision  Recall
Mistral  0   ocr      55.03  49.79  54.96  55.65
             hum      58.72  53.35  57.91  58.97
             hum+ocr  57.73  52.46  57.30  58.47
             syn      53.52  48.26  53.95  55.06
             syn+ocr  54.70  49.30  54.48  55.68
         1   ocr      56.32  50.75  55.43  56.04
             hum      60.81  55.08  59.07  59.85
             hum+ocr  59.79  54.18  58.18  59.29
             syn      54.97  49.76  54.58  55.17
             syn+ocr  56.98  51.78  56.35  57.14
         2   ocr      59.18  52.51  55.95  56.65
             hum      62.38  56.23  59.36  60.41
             hum+ocr  62.27  55.96  58.96  60.03
             syn      56.44  50.65  55.15  55.28
             syn+ocr  58.61  52.43  56.00  56.54
         3   ocr      60.55  53.80  57.04  57.88
             hum      63.67  57.55  60.39  61.55
             hum+ocr  63.84  57.63  60.16  61.78
             syn      58.45  52.48  56.41  56.81
             syn+ocr  60.45  54.12  57.55  58.25
         4   ocr      61.71  54.79  57.59  58.68
             hum      64.61  58.31  61.00  62.31
             hum+ocr  65.09  58.78  61.06  62.94
             syn      59.03  53.01  56.42  57.48
             syn+ocr  61.78  55.20  58.10  59.12
Qwen2    0   ocr      56.60  49.47  53.29  54.49
             hum      65.07  55.32  57.85  58.63
             hum+ocr  60.82  52.94  55.81  56.99
             syn      54.24  47.82  52.28  53.33
             syn+ocr  53.69  47.41  51.66  53.06
         1   ocr      59.59  51.82  54.94  55.63
             hum      64.07  56.54  59.53  60.69
             hum+ocr  63.39  55.68  58.50  59.15
             syn      57.55  50.92  54.79  54.84
             syn+ocr  57.69  51.08  54.90  55.30
         2   ocr      62.11  54.01  56.95  57.72
             hum      64.84  57.34  59.96  61.24
             hum+ocr  64.31  56.57  58.88  60.05
             syn      58.36  52.09  55.99  57.10
             syn+ocr  60.65  53.87  57.07  58.42
         3   ocr      63.14  54.32  56.92  57.65
             hum      65.76  58.22  60.59  61.94
             hum+ocr  65.32  57.52  59.76  61.05
             syn      59.85  53.24  56.58  57.65
             syn+ocr  60.85  54.16  57.57  58.96
         4   ocr      64.02  55.45  57.99  58.80
             hum      65.86  58.23  60.66  62.18
             hum+ocr  64.98  57.51  60.01  61.12
             syn      60.33  53.98  57.27  58.26
             syn+ocr  60.88  54.24  57.45  58.45

Table 14: LLMs’ performance in terms of accuracy and Macro-F1 on frame detection.
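Tables 11–14 report accuracy alongside macro-averaged F1, precision, and recall. A minimal hand-rolled sketch of macro averaging (for illustration; the paper’s actual evaluation pipeline may differ) also reproduces the majority-vote baseline figures from Appendix E:

```python
def macro_scores(gold, pred):
    """Macro-averaged precision, recall, and F1 over all labels seen in gold."""
    labels = sorted(set(gold))
    ps, rs, fs = [], [], []
    for c in labels:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

# Majority baseline from Appendix E: always predict "convinced" on the
# 190 convinced / 32 skeptical / 13 neither test memes (Table 9).
gold = ["convinced"] * 190 + ["skeptical"] * 32 + ["neither"] * 13
pred = ["convinced"] * 235
acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
_, _, f = macro_scores(gold, pred)
print(f"acc={acc:.4f} macroF1={f:.4f}")  # acc=0.8085 macroF1=0.2980
```

With 190/235 correct, per-class F1 is about 0.894 for Convinced and 0 for the other two classes, so the macro F1 of 0.2980 reported in Appendix E follows directly (0.894 / 3).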