Columns: text (string), source (string)
How would you restate this question? Please provide your counterfactual transformation of the question, ensuring the subject matter remains unchanged while the domain is freely changed. Please output in YAML format, including only the transformed question and the corresponding rationale: { "transformed_question": <transforme...
https://arxiv.org/abs/2505.21184v1
we employ the hatespeech label to classify the samples into three categories (non-hateful, unclear, hateful), and focus on the hateful and non-hateful categories. Baseline Methods. We compare our PoisonSwarm with existing data augmentation and synthesis methods. We use typical data augmentation methods from nlpaug [11...
https://arxiv.org/abs/2505.21184v1
in total), aiming to validate their effectiveness in generating harmful content. Y. Yan et al. Table 3: Experimental results of baseline comparison on the MHS dataset. The best results are in bold, and the second-best results are underlined. Method | SSR (↑) | Tox. (↑) | Div. (↑) | Nat. (↑). No Aug./Syn.: – 0.5867 0.5313 – Da...
https://arxiv.org/abs/2505.21184v1
Removing hallucination evaluation (w/o Hall. Eval) improves SSR (7.91% ↑), but it comes at the cost of reduced performance in other metrics, indicating that hallucination evaluation is critical to filtering out low-quality or irrelevant outputs. The substantial drop in naturalness also underscores its role in ensur...
https://arxiv.org/abs/2505.21184v1
Unlike those backward A**n countries that can’t seem to get their act together. (target_race) Children are so annoying... whining, crying, and causing trouble. (target_age) Key characteristics: 1) uses seemingly objective language; 2) presents implicit bias through structured arguments; 3) disguises intolerance as re...
https://arxiv.org/abs/2505.21184v1
Durrett, G.: Synthesizrr: Generating diverse datasets with retrieval augmentation. In: Proc. 2024 EMNLP. pp. 19200–19227 (2024) 7. Huang, L., Yu, W., Ma, W., et al.: A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst. (2024) 8. Hui, Z., Guo, Z....
https://arxiv.org/abs/2505.21184v1
arXiv:2505.21189v1 [cs.CL] 27 May 2025. Exploring the Latent Capacity of LLMs for One-Step Text Generation*. Gleb Mezentsev, AIRI / Skoltech, mezentsev@airi.net; Ivan Oseledets, AIRI / Skoltech, oseledets@airi.net. Abstract: A recent study showed that large language models (LLMs) can reconstruct surprisingly long texts – up to thousa...
https://arxiv.org/abs/2505.21189v1
(next-token prediction cross-entropy loss) over a concatenated input sequence Z = [mem_1, …, mem_K, t_1, …, t_N] passed through a frozen LLM. In the case of perfect next-token prediction accuracy (which could be achieved for reasonable text lengths), this allows the model to autoregressively predict the whole ...
https://arxiv.org/abs/2505.21189v1
the vectors by optimizing cross-entropy loss between the target sequence T = [t_1, t_2, …, t_N] and the frozen LLM’s output for the input sequence. The prediction is obtained using standard causal attention masking, so that the predicted probabilities for the token t_i depend on the first i input "proto-tokens" (see ...
https://arxiv.org/abs/2505.21189v1
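The optimization described in the excerpts above can be sketched with a toy stand-in: a frozen random linear readout plays the role of the frozen LLM, frozen position vectors stand in for causal position dependence, and a single trainable "proto-token" vector e is fitted by gradient descent on cross-entropy. All names, dimensions, and the linear "model" are illustrative assumptions; the paper optimizes real embedding inputs to a frozen transformer.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, N = 12, 128, 6                       # toy vocab size, embedding dim, target length
W = rng.normal(0, 1 / np.sqrt(d), (V, d))  # frozen "LM" readout (stand-in for a frozen LLM)
P = rng.normal(0, 1, (N, d))               # frozen position vectors (stand-in for causal masking)

# Pick a target sequence that is representable by construction
e_true = rng.normal(size=d)
target = ((e_true + P) @ W.T).argmax(1)    # token ids t_1..t_N to memorize

e = np.zeros(d)                            # the single trainable "proto-token"
for _ in range(5000):
    logits = (e + P) @ W.T                 # (N, V): per-position predictions
    probs = np.exp(logits - logits.max(1, keepdims=True))
    probs /= probs.sum(1, keepdims=True)
    probs[np.arange(N), target] -= 1.0     # d(cross-entropy)/d(logits) = softmax - onehot
    e -= 0.5 * (probs @ W).mean(0)         # update e only; W and P stay frozen

decoded = ((e + P) @ W.T).argmax(1)
print((decoded == target).all())           # True once the sequence is memorized
```

Because the logits are linear in e, the loss is convex and plain gradient descent suffices for this toy; the paper's setting replaces the linear readout with a full frozen LLM forward pass.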
evaluate performance on unnatural texts. To assess generation performance on natural but unseen texts, we use a collection of fanfiction texts from the AO3 library, with a publication date cutoff of October 2024, which is later than the end of training for all models. For data processing details, see Kuratov et al. (202...
https://arxiv.org/abs/2505.21189v1
functions of e and m. It is possible that, while one of them mostly incorporates language information, the role of the other one is mainly structural or mechanistic. This could be related to the phenomenon of "attention sinks": Xiao et al. (2023) showed that LLMs strongly attend to the initial tokens in the sequence...
https://arxiv.org/abs/2505.21189v1
we use a scheme where the m token is shared between texts and random seeds and the e token is unique for each text/seed pair. Share | Pythia (160M, 410M, 1.4B) | Llama (3.2-1B, 3.2-3B, 3.1-8B). Random C tokens, not shared: 90 92 90 256 362 512; shared: 45 22 45 181 256 256. HLM, not shared: 507.5±105.9 377.1±133.1 470.7±103.1 1551.3±159.5 2193.4±190.2 2974.4±298.3; shared: ...
https://arxiv.org/abs/2505.21189v1
two times lower in the non-autoregressive setting. [Figure: decoding capacity, autoregressive HLM vs. one-forward HLM, with one and two trainable embeddings; models: Pythia-160M, Pythia-410M, Pythia-1.4B, Llama-3.2-1B, Llama-3.2-...
https://arxiv.org/abs/2505.21189v1
text pairs and different-context text pairs. Token-level distance is measured as cosine distance between TF-IDF embeddings. Semantic distance is measured as cosine distance between semantic text embeddings (see Section 3 for details). We start by measuring the distances between three types of proto-token embedding pairs: ...
https://arxiv.org/abs/2505.21189v1
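The token-level distance above can be sketched without any ML libraries. This is a minimal TF-IDF/cosine implementation; the idf smoothing variant is an assumption, and real pipelines would typically use scikit-learn's TfidfVectorizer.

```python
import math
from collections import Counter

def tfidf_cosine_distance(texts):
    """Cosine distance between TF-IDF vectors of each pair of texts (minimal sketch)."""
    docs = [t.lower().split() for t in texts]
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))        # document frequency
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}  # smoothed idf (assumption)
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([tf[w] * idf[w] for w in vocab])
    def dist(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return 1.0 - dot / (nu * nv) if nu and nv else 1.0
    return [[dist(u, v) for v in vecs] for u in vecs]

D = tfidf_cosine_distance(["the cat sat", "the cat sat", "dogs bark loudly"])
print(D[0][1], D[0][2])  # identical texts -> distance ~0; token-disjoint texts -> 1
```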
find that both the number and the arrangement of such tokens are crucial for enabling this generation capacity. Interestingly, with only one proto-token, LLMs are unable to generate more than a single token of text. In contrast, two properly arranged proto-tokens can enable the generation of sequences hundreds of tokens ...
https://arxiv.org/abs/2505.21189v1
O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, and 1 others. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR. Timur Garipov, Pavel Izmailov, Dmitrii Podopri...
https://arxiv.org/abs/2505.21189v1
arXiv:2505.21190v1 [cs.CL] 27 May 2025. LUNGUAGE: A Benchmark for Structured and Sequential Chest X-ray Interpretation. Jong Hak Moon1, Geon Choi1, Paloma Rabaey3, Min Gwan Kim6, Hyuk Gi Hong5, Jung-Oh Lee6, Hangyul Yoon1, Eun Woo Doe7, Jiyoun Kim1, Harshita Sharma2, Daniel C. Castro2, Javier Alvarez-Valle2, Edward Choi1. 1KAIS...
https://arxiv.org/abs/2505.21190v1
it difficult to distinguish precise from incomplete outputs. Structured representation frameworks have partially addressed these issues by extracting clinical entities and relations from radiology reports [13, 16, 36, 40, 42]. Some include temporal descriptors like “worsened” or “stable” [16, 36]. However, all remain limi...
https://arxiv.org/abs/2505.21190v1
sensitivity to prompt design. To mitigate such variability, we incorporate a task-specific vocabulary and schema-aligned reference set to constrain output to valid clinical concepts and enhance consistency through retrieval-augmented prompting. Figure 1: Schema for Single and Sequential Report Structuring. The figure...
https://arxiv.org/abs/2505.21190v1
diagnoses based on other modalities (e.g., “AIDS”); and PATIENT INFO for reported history or symptoms (e.g., “fever,” “cough”). RELATIONS capture clinical properties and inter-entity connections, often spanning multiple sentences. The schema includes diagnostic stance (DXSTATUS, DXCERTAINTY); spatial and descript...
https://arxiv.org/abs/2505.21190v1
Similarly, “lung volumes” reported as low on day 10 and described as “no change” on day 90 can be grouped to indicate persistent low lung volume. TEMPORAL GROUPS divide each ENTITY GROUP into distinct diagnostic episodes based on temporal distance, shifts in status or certainty, and explicit expressions of clinical cha...
https://arxiv.org/abs/2505.21190v1
mitigate this, we guide the model by matching sentences against a curated vocabulary from our annotation corpus (Section 3.1). The task spans both intra- and inter-sentential contexts, extracting triplets without templates to handle lexical variation. Prompt details and vocabulary-matching algorithm are in Appendix B.2...
https://arxiv.org/abs/2505.21190v1
Let S^pred = (S^pred_1, …, S^pred_T) and S^gold = (S^gold_1, …, S^gold_T) denote the predicted and gold sequences for a given patient, where each S_t^(·) is the set of all structured findings at the t-th study. Pairwise similarity is computed over every possible pair of findings, pooled across all timepoints: (f^pred, f^gol...
https://arxiv.org/abs/2505.21190v1
, COMPARISON, PASTHX, OTHER SOURCE, ASSESSMENT LIMITATIONS. Set-level matching with partial credit. We can compute the combined MatchScore by multiplying semantic, temporal, and structural similarity scores (Equations 3–5), as shown in Equation 2. We then perform optimal bipartite matching between predicted findings ia...
https://arxiv.org/abs/2505.21190v1
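The set-level matching step above can be sketched as follows: combine per-pair component scores into a MatchScore by multiplication, then pick the one-to-one assignment maximizing the total score. The brute-force permutation search is only viable for small finding sets, the component values are hypothetical, and production code would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`).

```python
from itertools import permutations

def match_score(sem, temp, struct):
    # Combined score: product of semantic, temporal, structural similarity
    return sem * temp * struct

def best_bipartite_matching(scores):
    """Optimal one-to-one matching between predicted (rows) and gold (cols)
    findings, maximizing total MatchScore. Assumes #pred <= #gold for brevity."""
    n_gold = len(scores[0])
    k = min(len(scores), n_gold)
    best, best_pairs = -1.0, []
    for cols in permutations(range(n_gold), k):
        total = sum(scores[i][c] for i, c in enumerate(cols))
        if total > best:
            best, best_pairs = total, list(enumerate(cols))
    return best, best_pairs

# Hypothetical component scores for 2 predicted x 3 gold findings
S = [[match_score(0.9, 1.0, 0.8), match_score(0.2, 0.5, 1.0), match_score(0.1, 1.0, 1.0)],
     [match_score(0.3, 1.0, 1.0), match_score(0.95, 1.0, 0.9), match_score(0.4, 0.5, 0.5)]]
total, pairs = best_bipartite_matching(S)
print(pairs)  # pred 0 -> gold 0, pred 1 -> gold 1
```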
We evaluate the model’s ability to generate accurate structured representations from individual reports by comparing predicted (entity, relation, attribute) triplets against expert annotations in LUNGUAGE. Using micro-averaged precision, recall, and F1 scores at both the entity–relation and full triplet levels, we asse...
https://arxiv.org/abs/2505.21190v1
metric, we refer to Appendix D. Table 2 shows the Kendall Tau and Pearson correlation between each single-report level metric and the total number of errors (both significant and insignificant) identified by radiologists, across all reports in the ReXVal dataset. A more negative correlation indicates stronger alignment...
https://arxiv.org/abs/2505.21190v1
our annotated reports as ground truth structured resources and compare them with outputs from the structuring process in Section 4. MAIRA-2 (standard setting) clearly outperforms all other models, demonstrating the value of longitudinal context even when evaluated at single-report level. The cascaded setting slightly u...
https://arxiv.org/abs/2505.21190v1
for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005. [4] Shruthi Bannur, Kenza Bouzid, Daniel C Castro, Anton Schwaighofer, Anja Thieme, Sam Bond-Taylor, Maximi...
https://arxiv.org/abs/2505.21190v1
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81, 2004. [19] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19...
https://arxiv.org/abs/2505.21190v1
Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. [34] Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, et al. Clinical information e...
https://arxiv.org/abs/2505.21190v1
sections describe image-based observations and interpretations. Section-level coverage across the dataset is summarized as:
• History (i.e., Indication): 1,362 reports (92.5%)
• Findings: 1,224 reports (83.1%)
• Impression: 1,015 reports (68.9%)
Among the reports, 767 contained both findings and impression sections, 45...
https://arxiv.org/abs/2505.21190v1
from patient history, clinical documentation, or other diagnostic modalities:
• COF (Clinical Objective Findings): Structured clinical measurements or physical findings derived from sources such as laboratory tests or vital signs (e.g., “elevated white cell count”, “low oxygen saturation”). These provide objective supp...
https://arxiv.org/abs/2505.21190v1
• Severity: Reflects the degree of abnormality or clinical impact, often based on radiologic intensity or extent (e.g., “mild”, “moderate”, “severe”, “marked”).
• Comparison: Indicates asymmetry or difference across anatomical sides or regions within the same image (e.g., “left greater than right”, “right lung appears ...
https://arxiv.org/abs/2505.21190v1
in LUNGUAGE . To initiate this process, we first applied GPT-4 to a subset of reports to produce initial structured outputs, from which we extracted candidate terms for each relation type. These candidate vocabularies were then manually reviewed and refined by relation category to ensure clinical accuracy, coverage, an...
https://arxiv.org/abs/2505.21190v1
internally consistent taxonomy of radiologic language, aligned with the conventions of routine diagnostic documentation. A.1.2 Single Annotation Details To construct a clinically reliable gold-standard dataset, we implemented a structured annotation pipeline that reviewed and refined the initial triplets generated by G...
https://arxiv.org/abs/2505.21190v1
together; otherwise, they were assigned to separate entity groups. To further structure these entity groups, we assessed whether each represented a single episode of care or multiple distinct episodes. This required examining the temporal order and interval between observations. Intervals were computed using the StudyD...
https://arxiv.org/abs/2505.21190v1
“effusion,” “pleural effusion,” “pleural effusion left”) unified under one normalized cluster. Similarly, subject p10523725 had the highest number of temporal groups (6) within a single entity group, driven by repeated mentions of dyspnea across non-contiguous timepoints. These results highlight the complexity and v...
https://arxiv.org/abs/2505.21190v1
context-sensitive inferences based on both the prompt and observed patterns in the data. The matching algorithm is summarized below:
Algorithm 1 Span-Based Vocabulary Matching
1: Input: Curated vocabulary V; report section T composed of multiple sentences.
2: Output: List of matched word spans in T, each labeled with on...
https://arxiv.org/abs/2505.21190v1
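A greedy, longest-match-first version of such span-based vocabulary matching might look like this; the vocabulary, labels, and tokenization below are illustrative simplifications, not the paper's actual Algorithm 1.

```python
def match_spans(vocab, section):
    """Greedy longest-match of curated vocabulary terms against each sentence.
    vocab maps multi-word terms (lowercase) to relation/entity labels."""
    matches = []
    for sent_idx, sentence in enumerate(section.lower().split(". ")):
        tokens = sentence.replace(".", "").split()
        i = 0
        while i < len(tokens):
            # try the longest span starting at token i first
            for j in range(len(tokens), i, -1):
                span = " ".join(tokens[i:j])
                if span in vocab:
                    matches.append((sent_idx, span, vocab[span]))
                    i = j - 1  # skip past the matched span
                    break
            i += 1
    return matches

# Hypothetical vocabulary and report section
vocab = {"pleural effusion": "ENTITY", "mild": "SEVERITY", "left": "LOCATION"}
report = "Mild pleural effusion on the left. No change."
print(match_spans(vocab, report))
```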
performance across all prompt configurations. Under the zero-shot setting, incorporating vocabulary guidance raised the triplet-level F1 score from 0.52 to 0.78, and the entity-relation F1 from 0.79 to 0.92. When five in-context demonstrations were provided, the triplet F1 increased further—reaching 0.84 without vocabu...
https://arxiv.org/abs/2505.21190v1
to opacity in the right cardiophrenic sulcus. Whereas the gold annotations grouped temporally related expressions (e.g., opacity … interval resolution) into a single entity, the model assigned each instance to a separate group. This highlights the model’s limited ability to incorporate temporal continuity cues such...
https://arxiv.org/abs/2505.21190v1
0.10; MORPHOLOGY 0.05; DISTRIBUTION 0.05; MEASUREMENT 0.05; COMPARISON 0.03; PASTHX 0.01; OTHER SOURCE 0.01; ASSESSMENT LIMITATIONS 0.01. C.2 LUNGUAGESCORE examples. Single-Report Assessment. To illustrate how LUNGUAGESCORE evaluates structured prediction quality in the single-report setting, we present detailed examples of p...
https://arxiv.org/abs/2505.21190v1
predicted findings. Each score is computed from three components:
• Semantic Score: In the sequential-report setting, semantic similarity is computed between ENTITY GROUP representations, which group together lexically variable but conceptually equivalent findings observed at different timepoints.
• Temporal Score: V...
https://arxiv.org/abs/2505.21190v1
10], MedCPT [14], BioClinicalBERT [2], ClinicalBERT [20] and BioBERT [17]. To decide which models to use in the semantic similarity step of LUNGUAGESCORE, we conducted an experiment over ReXVal, a subset of the MIMIC-CXR test set encompassing 50 randomly selected studies. We structured each individual study acc...
https://arxiv.org/abs/2505.21190v1
shows the highest overall correlation, its relationship with error counts is not consistently linear, especially when the number of errors is low. In these cases where distinguishing between high-quality outputs is most crucial, its ability to make fine-grained distinctions is limited. In contrast, our metric not only ...
https://arxiv.org/abs/2505.21190v1
Patient ID | # Attr. (W/I) | Single Score | Effect Rate (S, %) | Sequential Score | Effect Rate (Seq, %)
p10274145 | 5 (0/5) | 0.981 | 0.38 | 0.979 | 0.42
p10523725 | 3 (1/2) | 0.989 | 0.37 | 0.987 | 0.43
p10886362 | 8 (5/3) | 0.983 | 0.21 | 0.979 | 0.26
p10959054 | 13 (9/4) | 0.967 | 0.25 | 0.963 | 0.28
p12433421 | 15 (8/7) | 0.968 | 0.21 | 0.971 | 0.19
p15321868 | 2 (1/1) | 0.982...
https://arxiv.org/abs/2505.21190v1
using their default settings, without grounding. RGRG [32] and CVT2DistilGPT2 [22]: For both models, we input the current frontal image, once again choosing a random one when there are multiple ones, and foregoing generation when there are none. For the CVT2DistilGPT2 model, we use the variant that was trained on MIMIC...
https://arxiv.org/abs/2505.21190v1
arXiv:2505.21212v1 [cs.AI] 27 May 2025. Interpretable DNFs. Martin C. Cooper1, Imane Bousdira2, Clément Carbonnel3. 1IRIT, University of Toulouse, France; 2IRIT, INP Toulouse, France; 3LIRMM, CNRS, University of Montpellier, France. {cooper, imane.bousdira}@irit.fr, clement.carbonnel@lirmm.fr. Abstract: A classifier is consider...
https://arxiv.org/abs/2505.21212v1
trees. We restrict our attention to classifiers which are functions of boolean features only. (However, most of our results can be extended to non-boolean features through binarisation.) In Section 2, we observe that a boolean classifier κ is interpretable if and only if both κ and its complement κ̄ are expressible as k-...
https://arxiv.org/abs/2505.21212v1
κ(v′) ≠ κ(v). By k-AXp-interpretability, (κ, v′) has an AXp A of size at most k. Let y_i = v_i if i ∈ F \ A and y_i = v′_i if i ∈ A. By definition, κ(y) = κ(v′) ≠ κ(v). Therefore, A is a wCXp of (κ, v) and hence some subset of A is a CXp of size at most k. Since k-CXp-interpretability follows from k-AXp-interpretability, this leads to a nat...
https://arxiv.org/abs/2505.21212v1
D_κ is the DNF formula whose terms are the prime implicants of κ of size at most k, and D_κ̄ is the DNF formula whose terms are the prime implicants of κ̄ of size at most k. The smallest integer k such that a boolean function κ and its complement can be expressed as k-DNF formulas is called the certificate complexity of κ [2, Chapte...
https://arxiv.org/abs/2505.21212v1
constant then the theorem obviously holds, so let us assume that it is not. (This assumption implies in particular k > 0, |D_κ| > 0, and |D_κ̄| > 0.) We claim that for all integers j ≥ 0, either |D_κ| < k^j or there exists a consistent set Q of j literals that is contained in at least (1/k)^j · |D_κ| terms of D_κ. We will prove this claim by ...
https://arxiv.org/abs/2505.21212v1
can further assume that they are prime implicants. Let E be the set of all subsets of F whose features correspond exactly to a term. (Note that multiple terms may correspond to the same set of features, so E can be strictly smaller than the sum of the sizes of these formulas.) Then, for any choice of v, at least one term ev...
https://arxiv.org/abs/2505.21212v1
mim(G_D) ≤ k. Example 3. Consider the majority function on 2k−1 arguments defined by κ_maj(x_1, …, x_{2k−1}) ≡ (Σ_{i=1}^{2k−1} x_i ≥ k). This function κ_maj is k-AXp-interpretable since it is the disjunction of all terms composed of exactly k positive literals and its complement is the disjunction of all terms composed of exactly k...
https://arxiv.org/abs/2505.21212v1
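The k-AXp-interpretability claim for the majority function can be checked by brute force for small k: for every instance, the smallest sufficient feature subset (the smallest AXp) has size at most k. A sketch for k = 2, i.e. majority on 3 boolean features:

```python
from itertools import combinations, product

def maj(x):
    # majority on 2k-1 boolean inputs
    return sum(x) >= (len(x) + 1) // 2

def is_sufficient(v, A, f):
    """A feature subset A is sufficient for (f, v) if fixing v on A
    forces f's output on every completion of the remaining features."""
    free = [i for i in range(len(v)) if i not in A]
    for bits in product([0, 1], repeat=len(free)):
        y = list(v)
        for i, b in zip(free, bits):
            y[i] = b
        if f(y) != f(v):
            return False
    return True

def min_axp_size(v, f):
    # smallest sufficient subset = size of the smallest AXp of (f, v)
    for size in range(len(v) + 1):
        for A in combinations(range(len(v)), size):
            if is_sufficient(v, set(A), f):
                return size

k = 2
n = 2 * k - 1
worst = max(min_axp_size(list(v), maj) for v in product([0, 1], repeat=n))
print(worst)  # every instance of the majority function has an AXp of size <= k
```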
k² [13]. On the other hand, the function κ_maj of Example 3 is not expressible as a nested k-DNF. We proceed again by contradiction. If κ_maj could be represented by a nested k-DNF generated from a k×k matrix L, then L would contain only positive literals. Let L_1 be the set of the literals in the first column of L, and J the o...
https://arxiv.org/abs/2505.21212v1
where h ≥ 1 (since a DNF satisfying κ(0, …, 0) = 0 cannot contain the term x_1⋯x_k). Let x_{i_j} (j = 1, …, h) be these positive literals. Let r_{i_j} = i_{j+1} − i_j (j = 1, …, h−1), r_{i_h} = k + i_1 − i_h, and r_i = 0 for all i ∉ {i_1, …, i_h}. Then t is the conjunction of the leftmost r_i literals in row i (for i = 1, …, k) of the ab...
https://arxiv.org/abs/2505.21212v1
considered. Next, we provide an experimental comparison with the depth-k decision trees obtained by CART [7]. 6.1 Heuristic algorithm. The heuristic consists of three steps: constructing the matrix, constructing the nested k-DNF, and a pruning phase. In Algorithm 1, we show how to construct the k×k matrix L by proceed...
https://arxiv.org/abs/2505.21212v1
required a depth of 4 to create a decision tree that fits the data exactly, as mentioned in Example 2.
Algorithm 1 Construct matrix
Input: k, dataset
Output: matrix L
1: for i = 0 to k−1 do
2:   for j = 0 to k−1 do
3:     if i = 0 then
4:       limit = 0
5:     else
6:       limit = min(k−j, ⌈(2(n−j)/i)−1⌉)
7:     end if
\\ Ec1(t): nb. examples in class 1 ...
https://arxiv.org/abs/2505.21212v1
DT DNF DNF Balance-scale 92.96 92.05 93.28 92.18 90.58 93.10 Banknote 98.25 90.25 88.52 99.02 90.01 88.52 Car-evaluation 92.83 82.51 91.48 93.64 82.97 91.79 Compas discretized 67.31 66.40 67.31 66.97 66.51 67.70 Indians Diabetes 77.64 79.66 77.23 77.43 79.57 76.97 Iris 98.00 98.00 98.00 98.00 97.27 98.00 Lymph 85.00 81...
https://arxiv.org/abs/2505.21212v1
14.6 3.4 23.2 7.2 36.4 13.6 Banknote 4.0 2.0 8.0 2.0 14.0 2.0 21.2 2.2 27.2 2.6 Car-evaluation 3.0 1.0 4.0 2.6 6.0 3.2 9.8 3.8 16.4 4.4 Compas discretized 4.0 1.8 8.0 2.6 15.6 4.2 29.0 4.6 51.4 6.4 Indians Diabetes 4.0 2.0 8.0 3.2 15.6 4.2 27.4 5.0 43.4 5.8 Iris 3.0 1.6 4.4 2.0 4.4 2.0 4.4 2.6 4.4 3.2 Lymph 4.0 1.8 8.0...
https://arxiv.org/abs/2505.21212v1
Ann Nowé, Grzegorz J. Nalepa, Roy Fairstein, and Roxana Radulescu, editors, ECAI, volume 372 of Frontiers in Artificial Intelligence and Applications, pages 469–476. IOS Press, 2023. [11] Martin C. Cooper and João Marques-Silva. Tractability of explaining classifier decisions. Artif. Intell., 316, 2023. [12] Emi...
https://arxiv.org/abs/2505.21212v1
arXiv:2505.21218v1 [cs.CL] 27 May 2025. Pretrained LLMs Learn Multiple Types of Uncertainty. Roi Cohen, HPI / University of Potsdam, Roi.Cohen@hpi.de; Omri Fahn, Tel Aviv University, omrifahn@mail.tau.ac.il; Gerard de Melo, HPI / University of Potsdam, Gerard.DeMelo@hpi.de. Abstract: Large Language Models are known to capture real-wo...
https://arxiv.org/abs/2505.21218v1
expressing a lack of knowledge both verbally and through their output distribution. Preprint. Under review. Figure 1: Illustration of identifying multiple data-specific uncertainty linear vectors when investigating the hidden space at the end of each transformer layer. Some more advanced methods such as instruction-tun...
https://arxiv.org/abs/2505.21218v1
scaling model size alone. To conclude, our contributions are: (1) We introduce an analytical framework for probing how LLMs encode uncertainty, (2) we conduct thorough experiments across models, layers, and datasets, and show that uncertainty is not only a learnable and linearly separable concept but also represented i...
https://arxiv.org/abs/2505.21218v1
in this work, we will use test sets derived from our question answering datasets, which we use in order to train our classifier during the linear uncertainty search process (see Section 2.1). Technically, given a textual input x to the model M, recall that h_i(x) is the hidden state vector produced by the end of the i-th ...
https://arxiv.org/abs/2505.21218v1
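The probing setup described above, training a linear classifier on hidden states h_i(x) to predict correctness, can be sketched on synthetic data. The hidden states, the planted "uncertainty direction", and the probe hyperparameters are all stand-ins, not the paper's actual data or models.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 32, 400
# Synthetic stand-in for hidden states h_i(x): "incorrect" examples are shifted
# along a hidden direction u, mimicking a linearly separable uncertainty concept.
u = rng.normal(size=d)
labels = rng.integers(0, 2, n)  # 1 = model answered incorrectly (assumption)
H = rng.normal(size=(n, d)) + 2.5 * np.outer(labels * 2 - 1, u / np.linalg.norm(u))

# Logistic-regression probe trained on the hidden states by gradient descent
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(H @ w + b)))
    g = p - labels                  # d(log-loss)/d(logits)
    w -= 0.1 * (H.T @ g) / n
    b -= 0.1 * g.mean()

acc = (((H @ w + b) > 0).astype(int) == labels).mean()
print(acc > 0.9)  # the linear probe recovers the planted concept
```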
uncertainty token to the model’s vocabulary and teaches the model to use it during pretraining by adapting its loss to consider the new token. Datasets and Benchmarks. We utilize 16 QA datasets and benchmarks in both our linear uncertainty search (Section 2.1) and the induced classifier evaluation (Section 2.2). We g...
https://arxiv.org/abs/2505.21218v1
from which we can predict generation correctness to an extent that is better than random. We additionally claim and show that rather than learning one unified uncertainty, LLMs learn several different ones. We later hypothesize that this fact might be one of the reasons for a high rate of misinformation and hallucinati...
https://arxiv.org/abs/2505.21218v1
a remarkably high accuracy—occasionally comparable to, or even surpassing, the level observed when the uncertainty vector is derived from the same dataset. This suggests that, for example, although mathematical uncertainty may be represented in various ways within the latent space, it is not strictly dataset-specific; ...
https://arxiv.org/abs/2505.21218v1
7 illustrates the impact of model size on uncertainty-based correctness prediction accuracy across layers. Ignoring Llama-3.1-8B-Instruct, the highest performance is achieved by the classifier derived from layer 18 of Llama-3.1-8B. Moreover, a comparison between Llama-3.2-3B Figure 8: Correctness prediction accu-...
https://arxiv.org/abs/2505.21218v1
a measure of the probability that a prediction is incorrect alongside the actual prediction. The problem of factual error detection can be viewed as a variation of calibration, where instead of a continuous probability, we provide a binary prediction for whether the model is correct or not. Common approaches to calibra...
https://arxiv.org/abs/2505.21218v1
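Expected calibration error (ECE) is one common way to quantify the calibration discussed above. A minimal binned implementation; the bin count and the toy data are illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then average the gap
    between mean confidence and empirical accuracy, weighted by bin size."""
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy comparison: a calibrated model (confidence == long-run accuracy)
# vs. an overconfident one whose true accuracy is 0.3 below its confidence.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 20000)
calibrated = (rng.uniform(size=conf.size) < conf).astype(int)
overconfident = (rng.uniform(size=conf.size) < conf - 0.3).astype(int)
print(expected_calibration_error(conf, calibrated)
      < expected_calibration_error(conf, overconfident))  # True
```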
than scale, as the more critical lever for improving reliability. Our results offer actionable insights for both understanding and mitigating LLM hallucinations, and open up new directions for principled model design and interpretability. References: Dimitrios Alivanistos, Selene Báez Santamaría, Michael Cochez, Jan-C...
https://arxiv.org/abs/2505.21218v1
Geva, Jonathan Berant, and Amir Globerson. Crawling the internal knowledge- base of language models. In Andreas Vlachos and Isabelle Augenstein, editors, Findings of the Association for Computational Linguistics: EACL 2023 , pages 1856–1869, Dubrovnik, Croatia, May 2023a. Association for Computational Linguistics. doi:...
https://arxiv.org/abs/2505.21218v1
70 of Proceedings of Machine Learning Research, pages 1321–1330. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/guo17a.html. Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. Finding neurons in a haystack: Case studies with sparse probing. arXiv preprint...
https://arxiv.org/abs/2505.21218v1
and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Annual Meeting of the Association for Computational Linguistics , 2021. URL https://api.semanticscholar.org/CorpusID:237532606 . Hongyuan Lu, Wai Lam, Hong Cheng, and Helen Meng. On controlling fallback responses for grounded dialogue generati...
https://arxiv.org/abs/2505.21218v1
word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.168. URL https://aclanthology.org/2021.naacl...
https://arxiv.org/abs/2505.21218v1
Empirical Methods in Natural Language Processing, pages 5942–5966, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.364. URL https://aclanthology.org/2023.emnlp-main.364. Lei Yu, Meng Cao, Jackie Chi Kit Cheung, and Yue Dong. Mechanistic understanding and miti...
https://arxiv.org/abs/2505.21218v1
checklist will be desk rejected. The checklist should follow the references and follow the (optional) supplemental material. The checklist does NOT count towards the page limit. Please read the checklist guidelines carefully for information on how to answer these questions. For each question in the checklist:
• You sho...
https://arxiv.org/abs/2505.21218v1
the paper has limitations, but those are not discussed in the paper.
• The authors are encouraged to create a separate "Limitations" section in their paper.
• The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless set...
https://arxiv.org/abs/2505.21218v1
well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
• If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
• Depending on the contribution, reproducibility ca...
https://arxiv.org/abs/2505.21218v1
data, intermediate data, and generated data, etc.
• The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
• At submission time, to preserve anon...
https://arxiv.org/abs/2505.21218v1
experiments.
• The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
• The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
• The paper should di...
https://arxiv.org/abs/2505.21218v1
that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
• Dat...
https://arxiv.org/abs/2505.21218v1
be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
15. Institutional review board (IRB) approvals or equivalent for research with human subjects. Question: Doe...
https://arxiv.org/abs/2505.21218v1
arXiv:2505.21219v1 [cs.LG] 27 May 2025. Addressing Data Quality Decompensation in Federated Learning via Dynamic Client Selection. Qinjun Fei (a), Nuria Rodríguez-Barroso (a,*), María Victoria Luzón (b), Zhongliang Zhang (c,d), Francisco Herrera (a). (a) Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in...
https://arxiv.org/abs/2505.21219v1
model poisoning further distort the training process, increasing the risk of unreliable global updates [21, 32, 36]. This variability in data quality introduces a persistent issue referred to as data quality decompensation, where the accumulation of unreliable client updates over multiple rounds leads to global model i...
https://arxiv.org/abs/2505.21219v1
approach provides a practical incentive alignment strategy under budget constraints, fostering stable and cost-efficient client engagement. (3) Shapley Value-Based Contribution Assessment: SBRO-FL leverages the Shapley value to quantify the marginal impact of each client on the global model. This contribution metric is ...
https://arxiv.org/abs/2505.21219v1
rate and g_i^t represents the gradient or update direction calculated on the local data. After local training, the clients transmit their updated parameters w_i^t back to the server. The server then aggregates these updates by computing a weighted average, based on the size of each client’s dataset, to yield the new global...
https://arxiv.org/abs/2505.21219v1
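The dataset-size-weighted aggregation step above can be sketched in a few lines; client parameters here are plain vectors, whereas a real FL server would aggregate full model state dicts.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: weighted average of client parameter
    vectors, weighted by local dataset size (the standard FedAvg rule)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical clients with different dataset sizes
w_new = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])],
               client_sizes=[100, 100, 200])
print(w_new)  # [0.75 0.75]
```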
economic constraints. In cross-silo FL, client selection must also account for budget limitations, where high-quality data alone does not guarantee selection if participation costs are excessive. This necessitates a balance between data reliability and financial feasibility. Moreover, since incentives directly impact...
https://arxiv.org/abs/2505.21219v1
differences from standard FL procedures. The communication flow for each round is as follows: Fig. 2. Workflow of SBRO-FL within the traditional FL framework. 1. Task Publication: The central server publishes the task requirements, including the data type, model size, and training methodology, to all potential clients...
https://arxiv.org/abs/2505.21219v1
using it for selection may overlook the disproportionate impact of data quality decompensation. Specifically, poor-quality contributions tend to degrade the global model more significantly than high-quality updates improve it [7]. To address this, we apply a transformation process inspired by prospect theory [15]. As i...
https://arxiv.org/abs/2505.21219v1
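A prospect-theory-style transformation weights losses more heavily than equal-sized gains. The sketch below uses the classic Kahneman-Tversky value function with the 1992 parameter estimates; the paper's actual transformation may differ.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, convex and
    steeper for losses (loss aversion). Parameters are the classic 1992
    estimates, used here purely for illustration."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Losses loom larger than equal-sized gains, mirroring the observation that
# poor-quality updates hurt the global model more than good ones help it.
gain, loss = prospect_value(1.0), prospect_value(-1.0)
print(abs(loss) > gain)  # True: |v(-1)| = 2.25 > v(1) = 1.0
```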
non-negative by shifting the minimum reputation score to zero. The decision variable x_i ∈ {0, 1} determines whether client C_i is selected in round t. The selection history H_t = {x_1^{t−1}, …, x_n^{t−1}, …, x_1^{t−5}, …, x_n^{t−5}} keeps track of each client’s participation in the last five rounds. S_t = {C_i | x_i^t = 1} denotes ...
https://arxiv.org/abs/2505.21219v1
it quantifies the significance of individual data points in machine learning models. Its effectiveness in fairly assessing collaborative contributions makes it a natural choice for evaluating the impact of selected clients in FL. In the FL scenario, the Shapley value of client C_i in S is formally defined in Definition 3...
https://arxiv.org/abs/2505.21219v1
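The Shapley value of a client is its marginal contribution averaged over all coalition orderings. An exact (exponential-time) sketch, with a toy additive utility for which the Shapley value provably equals each client's individual gain; practical FL systems approximate it, e.g. by Monte Carlo sampling.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley value of each player: weighted average marginal
    contribution over all coalitions not containing that player."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (utility(set(S) | {p}) - utility(set(S)))
        values[p] = total
    return values

# Toy utility: accuracy gain each client's data adds (additive, so the
# Shapley value recovers each client's individual contribution exactly)
gains = {"C1": 0.05, "C2": 0.02, "C3": 0.10}
sv = shapley_values(list(gains), lambda S: sum(gains[c] for c in S))
print(sv)
```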
Σ_{C_i ∈ S_t, sv_i^t > 0} B_i   (9)
where S_pos is the total Shapley value of all positively contributing clients, and B_pos is the total bid price of those clients. The reputation update formula is expressed as follows:
R_i = R_i − ψ · ρ^{err_i}   if C_i ∈ S_t and sv_i^t ≤ 0,
R_i = R_i + ω · (1 − exp(−(sv_i^t / S_pos) / (B_i / B_pos)))   if C_i ∈ S_t and sv_i^t > 0,
R_i = R_i   if C_i ∉ S_t.   (10)
Algorith...
https://arxiv.org/abs/2505.21219v1
used in FL experiments.
Dataset | Total Samples | Training Samples | Test Samples | Classes
EMNIST-letter | 124,800 | 88,800 | 36,000 | 26
FashionMNIST | 70,000 | 60,000 | 10,000 | 10
SVHN | 600,000 | 73,257 | 53,608 | 10
CIFAR-10 | 60,000 | 50,000 | 10,000 | 10
To simulate a realistic cross-silo FL scenario, the training data for each dataset was limited to ...
https://arxiv.org/abs/2505.21219v1
an open source framework that provides a comprehensive set of tools for deep learning and machine learning in federated environments. FLEXible allows full customization of the FL scenario, from foundational components to high-level configurations, making it well suited to evaluate the proposed method. CNN architectur...
https://arxiv.org/abs/2505.21219v1
0.7630 0.6917 0.7816 0.6541 +10.3% Fig. 4. Trends in Model Accuracy: Comparing SBRO-FL and Baseline Methods Across Diverse Datasets. Another interesting observation from the FashionMNIST experiment is that SBRO-FL achieved a slightly higher final accuracy than HQRS-FL, the ideal “oracle" scenario. This phenomenon can b...
https://arxiv.org/abs/2505.21219v1
+24.5% SVHN 0.7808 0.5935 0.8055 0.5590 +31.6% Average 0.7620 0.6532 0.7812 0.6506 +16.7% As shown in Table 3, SBRO-FL effectively selects high-quality clients, even amidst varying data quality and low-bid interference. SBRO-FL significantly outperforms both RS-FL and All-FL across all datasets, with particularly stron...
https://arxiv.org/abs/2505.21219v1
extensive empirical evaluation on four benchmark datasets shows that SBRO-FL consistently outperforms traditional random and inclusive selection strategies, even in the presence of adversarial bidding and noisy data. These results confirm the practical effectiveness of our approach in enhancing the robustness and eff...
https://arxiv.org/abs/2505.21219v1
energy prediction scheme based on federated learning for smart grids. IEEE Internet of Things Journal , 10(9):7719–7736, May 2023. ISSN 2372-2541. [2] Ravikumar Balakrishnan, Tian Li, Tianyi Zhou, Nageen Himayat, Virginia Smith, and Jeff Bilmes. Diverse client selection for federated learning: Submodularity and converg...
https://arxiv.org/abs/2505.21219v1
federated learning services market. IEEE Transactions on Mobile Computing, 20(10):3034–3048, 2020. [15] Daniel Kahneman and Amos Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47(2):363–391, 1979. [16] Jiawen Kang, Zehui Xiong, Dusit Niyato, Shengli Xie, and Junshan Zhang. Incentive mech...
https://arxiv.org/abs/2505.21219v1
[31] Anichur Rahman, Md Sazzad Hossain, Ghulam Muhammad, Dipanjali Kundu, Tanoy Deb- nath, Muaz Rahman, Md Saikat Islam Khan, Prayag Tiwari, and Shahab S Band. Federated learning-based ai approaches in smart healthcare: concepts, taxonomies, challenges and open issues. Cluster computing , 26(4):2271–2311, 2023. [32] Nu...
https://arxiv.org/abs/2505.21219v1