text,label "Prompting has emerged as a practical way to adapt frozen vision-language models (VLMs) for video anomaly detection (VAD). Yet, existing prompts are often overly abstract, overlooking the fine-grained human-object interactions or action semantics that define complex anomalies in surveillance videos. We propose ASK-Hint, a structured prompting framework that leverages action-centric knowledge to elicit more accurate and interpretable reasoning from frozen VLMs. Our approach organizes prompts into semantically coherent groups (e.g. violence, property crimes, public safety) and formulates fine-grained guiding questions that align model predictions with discriminative visual cues. Extensive experiments on UCF-Crime and XD-Violence show that ASK-Hint consistently improves AUC over prior baselines, achieving state-of-the-art performance compared to both fine-tuned and training-free methods. Beyond accuracy, our framework provides interpretable reasoning traces towards anomaly and demonstrates strong generalization across datasets and VLM backbones. These results highlight the critical role of prompt granularity and establish ASK-Hint as a new training-free and generalizable solution for explainable video anomaly detection.",0 "OpenAI's o3-preview reasoning model exceeded human accuracy on the ARC-AGI benchmark, but does that mean state-of-the-art models recognize and reason with the abstractions that the task creators intended? We investigate models' abstraction abilities on ConceptARC. We evaluate models under settings that vary the input modality (textual vs. visual), whether the model is permitted to use external Python tools, and, for reasoning models, the amount of reasoning effort. In addition to measuring output accuracy, we perform fine-grained evaluation of the natural-language rules that models generate to explain their solutions. This dual evaluation lets us assess whether models solve tasks using the abstractions ConceptARC was designed to elicit, rather than relying on surface-level patterns. Our results show that, while some models using text-based representations match human output accuracy, the best models' rules are often based on surface-level ``shortcuts'' and capture intended abstractions far less often than humans. Thus their capabilities for general abstract reasoning may be overestimated by evaluations based on accuracy alone. In the visual modality, AI models' output accuracy drops sharply, yet our rule-level analysis reveals that models might be underestimated, as they still exhibit a substantial share of rules that capture intended abstractions, but are often unable to correctly apply these rules. In short, our results show that models still lag humans in abstract reasoning, and that using accuracy alone to evaluate abstract reasoning on ARC-like tasks may overestimate abstract-reasoning capabilities in textual modalities and underestimate it in visual modalities. We believe that our evaluation framework offers a more faithful picture of multimodal models' abstract reasoning abilities and a more principled way to track progress toward human-like, abstraction-centered intelligence.",0 "Pose estimation refers to tracking a human's full body posture, including their head, torso, arms, and legs. The problem is challenging in practical settings where the number of body sensors are limited. Past work has shown promising results using conditional diffusion models, where the pose prediction is conditioned on both measurements from the sensors. 
Unfortunately, nearly all these approaches generalize poorly across users, primarily because location measurements are highly influenced by the body size of the user. In this paper, we formulate pose estimation as an inverse problem and design an algorithm capable of zero-shot generalization. Our idea utilizes a pre-trained diffusion model and conditions it on rotational measurements alone; the priors from this model are then guided by a likelihood term, derived from the measured locations. Thus, given any user, our proposed InPose method generatively estimates the highly likely sequence of poses that best explains the sparse on-body measurements.",0 "Public funding processes demand fairness, learning, and outcomes that participants can understand. We introduce Komitee Equal Shares, a priceable virtual-budget allocation framework that integrates two signals: in voter mode, participants cast point votes; in evaluator mode, small groups assess proposals against collectively defined impact fields. The framework extends the Method of Equal Shares by translating both signals into virtual spending power and producing voting receipts. We deployed the framework in the 2025 Kultur Komitee in Winterthur, Switzerland. Our contributions are: (1) a clear separation of decision modes, addressing a gap in social choice that typically treats participatory budgeting as preference aggregation while citizens also see themselves as evaluators, and (2) the design of voting receipts that operationalise priceability into participant-facing explanations, making proportional allocations legible and traceable. The framework generalises to participatory grant-making and budgeting, offering a model where citizens act as voters and evaluators within one proportional, explainable allocation.",2 "Our goal is to enable social robots to interact autonomously with humans in a realistic, engaging, and expressive manner. The 12 Principles of Animation are a well-established framework animators use to create movements that make characters appear convincing, dynamic, and emotionally expressive. This paper proposes a novel approach that leverages Dynamic Movement Primitives (DMPs) to implement key animation principles, providing a learnable, explainable, modulable, online adaptable and composable model for automatic expressive motion generation. DMPs, originally developed for general imitation learning in robotics and grounded in a spring-damper system design, offer mathematical properties that make them particularly suitable for this task. Specifically, they enable modulation of the intensities of individual principles and facilitate the decomposition of complex, expressive motion sequences into learnable and parametrizable primitives. We present the mathematical formulation of the parameterized animation principles and demonstrate the effectiveness of our framework through experiments and application on three robotic platforms with different kinematic configurations, in simulation, on actual robots, and in a user study with 12 participants. Our results show that the approach allows for creating diverse and nuanced expressions using a single base model.",1 "Deep learning has achieved remarkable success in medical image analysis; however, its adoption in clinical practice is limited by a lack of interpretability. These models often make correct predictions without explaining their reasoning. They may also rely on image regions unrelated to the disease or visual cues, such as annotations, that are not present in real-world conditions. 
This can reduce trust and increase the risk of misleading diagnoses. We introduce the Guided Focus via Segment-Wise Relevance Network (GFSR-Net), an approach designed to improve interpretability and reliability in medical imaging. GFSR-Net uses a small number of human annotations to approximate where a person would focus within an image intuitively, without requiring precise boundaries or exhaustive markings, making the process fast and practical. During training, the model learns to align its focus with these areas, progressively emphasizing features that carry diagnostic meaning. This guidance works across different types of natural and medical images, including chest X-rays, retinal scans, and dermatological images. Our experiments demonstrate that GFSR achieves comparable or superior accuracy while producing saliency maps that better reflect human expectations. This reduces the reliance on irrelevant patterns and increases confidence in automated diagnostic tools.",0 "Understanding how subjective experience arises from information processing remains a central challenge in neuroscience, cognitive science, and AI research. The Modular Consciousness Theory (MCT) proposes a biologically grounded and computationally explicit framework in which consciousness is a discrete sequence of Integrated Informational States (IISs). Each IIS is a packet of integrated information tagged with a multidimensional density vector that quantifies informational richness. Its magnitude correlates with subjective intensity, shaping memory, behavior, and continuity of experience. Inputs from body and environment are adaptively filtered, processed by modules (abstraction, narration, evaluation, self-evaluation), and integrated into an IIS. The resulting packet, tagged with its density vector, is transmitted to behavioral readiness, memory, and decision-making modules, closing the loop. This explains why strongly tagged states exert greater influence on long-term memory and action. Unlike Global Workspace Theory, Integrated Information Theory, or Higher-Order Thought, MCT specifies a full computational pipeline producing discrete informational units with quantifiable internal structure. Subjectivity is reframed as a correlate of the density-tagging signal with functional consequences. MCT generates testable predictions, such as stress enhancing memory encoding, and provides a naturalistic blueprint for both biological and artificial architectures. Consciousness, in this view, is not an irreducible essence but an evolvable, quantifiable, and constructible feature of complex information processing.",0 "Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) 'understanding' on the part of the explainee. However, what it means to 'understand' is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding for XAI-explanations and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, philosophy and psychology, a definition of understanding and its forms, assessment, and dynamics during the process of giving everyday explanations are explored. 
Two types of understanding are considered as possible outcomes of explanations, namely enabledness, 'knowing how' to do or decide something, and comprehension, 'knowing that' -- both in different degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, the increase of comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.",0 "We present BiasLab, a dataset of 300 political news articles annotated for perceived ideological bias. These articles were selected from a curated 900-document pool covering diverse political events and source biases. Each article is labeled by crowdworkers along two independent scales, assessing sentiment toward the Democratic and Republican parties, and enriched with rationale indicators. The annotation pipeline incorporates targeted worker qualification and was refined through pilot-phase analysis. We quantify inter-annotator agreement, analyze misalignment with source-level outlet bias, and organize the resulting labels into interpretable subsets. Additionally, we simulate annotation using schema-constrained GPT-4o, enabling direct comparison to human labels and revealing mirrored asymmetries, especially in misclassifying subtly right-leaning content. We define two modeling tasks: perception drift prediction and rationale type classification, and report baseline performance to illustrate the challenge of explainable bias detection. BiasLab's rich rationale annotations provide actionable interpretations that facilitate explainable modeling of political bias, supporting the development of transparent, socially aware NLP systems. We release the dataset, annotation schema, and modeling code to encourage research on human-in-the-loop interpretability and the evaluation of explanation effectiveness in real-world settings.",0 "Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Explainability in CRSs is crucial as it enables users to understand the reasoning behind recommendations, increasing system transparency and trustworthiness. However, current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. Large language models (LLMs) offer powerful reasoning capabilities that can bridge this gap by generating human-understandable preference summaries. However, effectively reasoning over user preferences in CRSs remains challenging as LLMs pre-trained on large-scale corpora may not be well-suited for analyzing user preferences. While KGs provide rich domain knowledge, integrating them with LLMs encounters a significant modality gap between structured KG information and unstructured conversations. In this paper, we propose COMPASS, a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs. COMPASS employs a two-stage training approach: first, it bridges the gap between the structured KG and natural language through novel graph entity captioning pre-training. 
Next, COMPASS optimizes user preference reasoning via knowledge-aware instruction fine-tuning, where the LLM learns to reason and summarize user preferences from dialogue histories and KG-augmented context. This enables COMPASS to perform knowledge-aware reasoning and generate interpretable user preferences that can seamlessly integrate with existing CRS models for improving recommendation performance and explainability. Our experiments on benchmark datasets demonstrate the effectiveness of COMPASS in improving various CRS models.",0 "The safe use of pharmaceuticals in food-producing animals is vital to protect animal welfare and human food safety. Adverse events (AEs) may signal unexpected pharmacokinetic or toxicokinetic effects, increasing the risk of violative residues in the food chain. This study introduces a predictive framework for classifying outcomes (Death vs. Recovery) using ~1.28 million reports (1987-2025 Q1) from the U.S. FDA's OpenFDA Center for Veterinary Medicine. A preprocessing pipeline merged relational tables and standardized AEs through VeDDRA ontologies. Data were normalized, missing values imputed, and high-cardinality features reduced; physicochemical drug properties were integrated to capture chemical-residue links. We evaluated supervised models, including Random Forest, CatBoost, XGBoost, ExcelFormer, and large language models (Gemma 3-27B, Phi 3-12B). Class imbalance was addressed with techniques such as undersampling and oversampling, with a focus on prioritizing recall for fatal outcomes. Ensemble methods (Voting, Stacking) and CatBoost performed best, achieving precision, recall, and F1-scores of 0.95. Incorporating Average Uncertainty Margin (AUM)-based pseudo-labeling of uncertain cases improved minority-class detection, particularly in ExcelFormer and XGBoost. Interpretability via SHAP identified biologically plausible predictors, including lung, heart, and bronchial disorders, animal demographics, and drug physicochemical properties. These features were strongly linked to fatal outcomes. Overall, the framework shows that combining rigorous data engineering, advanced machine learning, and explainable AI enables accurate, interpretable predictions of veterinary safety outcomes. The approach supports FARAD's mission by enabling early detection of high-risk drug-event profiles, strengthening residue risk assessment, and informing regulatory and clinical decision-making.",0 "Humans intuitively perceive complex social signals in visual scenes, yet it remains unclear whether state-of-the-art AI models encode the same similarity structure. We study (Q1) whether modern video and language models capture human-perceived similarity in social videos, and (Q2) how to instill this structure into models using human behavioral data. To address this, we introduce a new benchmark of over 49,000 odd-one-out similarity judgments on 250 three-second video clips of social interactions, and discover a modality gap: despite the task being visual, caption-based language embeddings align better with human similarity than any pretrained video model. We close this gap by fine-tuning a TimeSformer video model on these human judgments with our novel hybrid triplet-RSA objective using low-rank adaptation (LoRA), aligning pairwise distances to human similarity. This fine-tuning protocol yields significantly improved alignment with human perceptions on held-out videos in terms of both explained variance and odd-one-out triplet accuracy. 
Variance partitioning shows that the fine-tuned video model increases shared variance with language embeddings and explains additional unique variance not captured by the language model. Finally, we test transfer via linear probes and find that human-similarity fine-tuning strengthens the encoding of social-affective attributes (intimacy, valence, dominance, communication) relative to the pretrained baseline. Overall, our findings highlight a gap in pretrained video models' social recognition and demonstrate that behavior-guided fine-tuning shapes video representations toward human social perception.",0 "Deep neural networks (DNNs) have achieved remarkable success across domains but remain difficult to interpret, limiting their trustworthiness in high-stakes applications. This paper focuses on deep vision models, for which a dominant line of explainability methods is Class Activation Mapping (CAM) and its variants, which work by highlighting spatial regions that drive predictions. We observe that CAM provides little semantic insight into what attributes underlie these activations. To address this limitation, we propose TextCAM, a novel explanation framework that enriches CAM with natural language. TextCAM combines the precise spatial localization of CAM with the semantic alignment of vision-language models (VLMs). Specifically, we derive channel-level semantic representations using CLIP embeddings and linear discriminant analysis, and aggregate them with CAM weights to produce textual descriptions of salient visual evidence. This yields explanations that jointly specify where the model attends and what visual attributes likely support its decision. We further extend TextCAM to group feature channels into semantically coherent groups, enabling more fine-grained visual-textual explanations. Experiments on ImageNet, CLEVR, and CUB demonstrate that TextCAM produces faithful and interpretable rationales that improve human understanding, detect spurious correlations, and preserve model fidelity.",0 "The US Decennial Census provides valuable data for both research and policy purposes. Census data are subject to a variety of disclosure avoidance techniques prior to release in order to preserve respondent confidentiality. While many are interested in studying the impacts of disclosure avoidance methods on downstream analyses, particularly with the introduction of differential privacy in the 2020 Decennial Census, these efforts are limited by a critical lack of data: The underlying ""microdata,"" which serve as necessary input to disclosure avoidance methods, are kept confidential. In this work, we aim to address this limitation by providing tools to generate synthetic microdata solely from published Census statistics, which can then be used as input to any number of disclosure avoidance algorithms for the sake of evaluation and carrying out comparisons. We define a principled distribution over microdata given published Census statistics and design algorithms to sample from this distribution. We formulate synthetic data generation in this context as a knapsack-style combinatorial optimization problem and develop novel algorithms for this setting. While the problem we study is provably hard, we show empirically that our methods work well in practice, and we offer theoretical arguments to explain our performance. 
Finally, we verify that the data we produce are ""close"" to the desired ground truth.",0 "At HVEI-2012, I presented a neurobiologically-based model for trichromatic color sensations in humans, mapping the neural substrate for color sensations to V1-L4: the thalamic recipient layer of the primary visual cortex. In this paper, I propose that V1-L4 itself consists of three distinct sub-layers that directly correspond to the three primary color sensations: blue, red, and green. Furthermore, I apply this model to three aspects of color vision: the three-dimensional (3D) color solid, dichromatism, and ocular agnosticism. Regarding these aspects further: (1) 3D color solid: V1-L4 is known to exhibit a gradient of cell densities from its outermost layer (i.e., its pia side) to its innermost layer (i.e., its white matter side). Taken together with the proposition that the population size of a cell assembly directly corresponds with the magnitude of a color sensation, it can be inferred that the neurobiologically-based color solid is a tilted cuboid. (2) Chromatic color blindness: Using deuteranopia as an example, at the retinal level, M-cones are lost and replaced by L-cones. However, at the cortical level, deuteranopia manifests as a fusion of the two bottom layers of V1-L4. (3) Ocular agnosticism: Although color sensation is monocular, we normally are not aware of which eye we are seeing with. This visual phenomenon can be explained by the nature of ocular integration within V1-L4. A neurobiologically-based model for human color sensations could significantly contribute to future engineering efforts aimed at enhancing human color experiences.",0 "We investigate whether large language models (LLMs) can generate effective, user-facing explanations from a mathematically interpretable recommendation model. The model is based on constrained matrix factorization, where user types are explicitly represented and predicted item scores share the same scale as observed ratings, making the model's internal representations and predicted scores directly interpretable. This structure is translated into natural language explanations using carefully designed LLM prompts. Many works in explainable AI rely on automatic evaluation metrics, which often fail to capture users' actual needs and perceptions. In contrast, we adopt a user-centered approach: we conduct a study with 326 participants who assessed the quality of the explanations across five key dimensions-transparency, effectiveness, persuasion, trust, and satisfaction-as well as the recommendations themselves. To evaluate how different explanation strategies are perceived, we generate multiple explanation types from the same underlying model, varying the input information provided to the LLM. Our analysis reveals that all explanation types are generally well received, with moderate statistical differences between strategies. User comments further underscore how participants react to each type of explanation, offering complementary insights beyond the quantitative results.",1 "Automated driving functions increasingly rely on machine learning for tasks like perception and trajectory planning, requiring large, relevant datasets. The performance of these algorithms depends on how closely the training data matches the task. To ensure reliable functioning, it is crucial to know what is included in the dataset to assess the trained model's operational risk. 
We aim to enhance the safe use of machine learning in automated driving by developing a method to recognize situations that an automated vehicle has not been sufficiently trained on. This method also improves explainability by describing the dataset at a human-understandable level. We propose modeling driving data as knowledge graphs, representing driving scenes with entities and their relationships. These graphs are queried for specific sub-scene configurations to check their occurrence in the dataset. We estimate a vehicle's competence in a driving scene by considering the coverage and complexity of sub-scene configurations in the training set. Higher complexity scenes require greater coverage for high competence. We apply this method to the NuPlan dataset, modeling it with knowledge graphs and analyzing the coverage of specific driving scenes. This approach helps monitor the competence of machine learning models trained on the dataset, which is essential for trustworthy AI to be deployed in automated driving.",0 "Classical planners are powerful systems, but modeling tasks in input formats such as PDDL is tedious and error-prone. In contrast, planning with Large Language Models (LLMs) allows for almost any input text, but offers no guarantees on plan quality or even soundness. In an attempt to merge the best of these two approaches, some work has begun to use LLMs to automate parts of the PDDL creation process. However, these methods still require various degrees of expert input or domain-specific adaptations. We present NL2Plan, the first fully automatic system for generating complete PDDL tasks from minimal natural language descriptions. NL2Plan uses an LLM to incrementally extract the necessary information from the short text input before creating a complete PDDL description of both the domain and the problem which is finally solved by a classical planner. We evaluate NL2Plan on seven planning domains, five of which are novel and thus not in the LLM training data, and find that NL2Plan outperforms directly generating the files with an LLM+validator combination. As such, NL2Plan is a powerful tool for assistive PDDL modeling and a step towards solving natural language planning task with interpretability and guarantees.",0 "Large language models (LLMs) are trained on vast amounts of text from the Internet, but do they truly understand the viral content that rapidly spreads online -- commonly known as memes? In this paper, we introduce CHIME, a dataset for CHinese Internet Meme Explanation. The dataset comprises popular phrase-based memes from the Chinese Internet, annotated with detailed information on their meaning, origin, example sentences, types, etc. To evaluate whether LLMs understand these memes, we designed two tasks. In the first task, we assessed the models' ability to explain a given meme, identify its origin, and generate appropriate example sentences. The results show that while LLMs can explain the meanings of some memes, their performance declines significantly for culturally and linguistically nuanced meme types. Additionally, they consistently struggle to provide accurate origins for the memes. In the second task, we created a set of multiple-choice questions (MCQs) requiring LLMs to select the most appropriate meme to fill in a blank within a contextual sentence. While the evaluated models were able to provide correct answers, their performance remains noticeably below human levels. 
We have made CHIME public and hope it will facilitate future research on computational meme understanding.",0 "Interpreting the internal behavior of large language models trained on code remains a critical challenge, particularly for applications demanding trust, transparency, and semantic robustness. We propose Code Concept Analysis (CoCoA): a global post-hoc interpretability framework that uncovers emergent lexical, syntactic, and semantic structures in a code language model's representation space by clustering contextualized token embeddings into human-interpretable concept groups. We propose a hybrid annotation pipeline that combines static analysis tool-based syntactic alignment with prompt-engineered large language models (LLMs), enabling scalable labeling of latent concepts across abstraction levels. We analyse the distribution of concepts across layers and across three finetuning tasks. Emergent concept clusters can help identify unexpected latent interactions and be used to identify trends and biases within the model's learned representations. We further integrate CoCoA with local attribution methods to produce concept-grounded explanations, improving the coherence and interpretability of token-level saliency. Empirical evaluations across multiple models and tasks show that CoCoA discovers concepts that remain stable under semantic-preserving perturbations (average Cluster Sensitivity Index, CSI = 0.288) and evolve predictably with fine-tuning. In a user study on the programming-language classification task, concept-augmented explanations disambiguated token roles and improved human-centric explainability by 37 percentage points compared with token-level attributions using Integrated Gradients.",1 "Identifying long COVID symptoms is a challenging task, primarily due to the reliance on patient reports and the lack of disease specific biomarkers. The objective of this study is to identify individual long COVID symptoms, post COVID 19 conditions (PCC) participants, and participants' sex, and to identify the associated brain regions by developing an explainable machine learning algorithm using brain MRI features. This study implements secondary analysis using an anonymized, publicly accessible dataset that categorizes 10902 participants into three groups: the PCC group, the Unimpaired Post COVID 19 group (UPC), and the Healthy Non COVID group (HNC), each with corresponding symptoms, demographics, and brain structural MRI features. The aim is to develop and cross-validate a support vector classifier (SVC) algorithm to identify the occurrence of various target labels from the dataset. The SVC classifier identified the occurrence of long-COVID symptoms with varying performance across target labels. The model performance and influential areas are identified and discussed in light of previous research. The demonstrated approach offers an alternative modality for determining the occurrence of long COVID symptoms based on neuroimaging biomarkers.",2 "Large language models (LLMs) are emerging as everyday assistants, but their role as longitudinal virtual coaches is underexplored. This two-month single-subject case study documents LLM-guided half marathon preparation (July-September 2025). Using text-based interactions and consumer app logs, the LLM acted as planner, explainer, and occasional motivator. 
Performance improved from sustaining 2 km at 7min 54sec per km to completing 21.1 km at 6min 30sec per km, with gains in cadence, pace-HR coupling, and efficiency index trends. While causal attribution is limited without a control, outcomes demonstrate safe, measurable progress. At the same time, gaps were evident: no real-time sensor integration, text-only feedback, motivation support that was user-initiated, and limited personalization or safety guardrails. We propose design requirements for next-generation systems: persistent athlete models with explicit guardrails; multimodal on-device sensing; audio, haptic, and visual feedback; proactive motivation scaffolds; and privacy-preserving personalization. This study offers grounded evidence and a design agenda for evolving LLMs from retrospective advisors to closed-loop coaching companions.",0 "Despite extensive observational and theoretical efforts, the physical processes responsible for shaping the diversity of accelerated electron spectra observed in solar flares remain poorly understood. We use 2D particle-in-cell (PIC) simulations of magnetized plasmas subject to continuous shear-driven magnetic amplification to investigate whether electron temperature anisotropy instabilities in above-the-loop-top (ALT) regions can account for this diversity. We explore how the resulting spectra depend on key plasma parameters: the initial electron temperature $T_e$ and the initial ratio of electron cyclotron to plasma frequencies, $f_e = \omega_{ce}/\omega_{pe}$. In our simulations, the adiabatic evolution of the plasma generates electron temperature anisotropy with the electron temperature perpendicular to the magnetic field being larger than the parallel temperature. This eventually drives electromagnetic instabilities capable of scattering and accelerating electrons. The simulations consistently produce nonthermal tails in the electron spectra whose hardness increases with the initial value of $f_e$, while depending only weakly on $T_e$. For runs in which $f_e \lesssim 1.2$, the spectra exhibit double power-law shapes with downward (knee-like) breaks, and the electron scattering is dominated by OQES modes. In runs with $f_e\gtrsim 1.5$, PEMZ modes dominate and produce harder double power-law spectra with upward (elbow-like) breaks. Cases that include the $f_e\sim 1.2-1.5$ transition yield nearly single power-laws that end with bump-like breaks. Our results support the role of temperature anisotropy instabilities in accelerating electrons in ALT regions, offering a promising framework to help explain the wide range of nonthermal electron spectra reported in solar flare observations.",0 "Interpretability is essential for deploying deep learning models in symbolic music analysis, yet most research emphasizes model performance over explanation. To address this, we introduce MUSE-Explainer, a new method that helps reveal how music Graph Neural Network models make decisions by providing clear, human-friendly explanations. Our approach generates counterfactual explanations by making small, meaningful changes to musical score graphs that alter a model's prediction while ensuring the results remain musically coherent. Unlike existing methods, MUSE-Explainer tailors its explanations to the structure of musical data and avoids unrealistic or confusing outputs. 
We evaluate our method on a music analysis task and show it offers intuitive insights that can be visualized with standard music tools such as Verovio.",0 "The relationship between computing systems and the brain has served as motivation for pioneering theoreticians since John von Neumann and Alan Turing. Uniform, scale-free biological networks, such as the brain, have powerful properties, including generalizing over time, which is the main barrier for Machine Learning on the path to Universal Reasoning Models. We introduce `Dragon Hatchling' (BDH), a new Large Language Model architecture based on a scale-free biologically inspired network of \$n\$ locally-interacting neuron particles. BDH couples strong theoretical foundations and inherent interpretability without sacrificing Transformer-like performance. BDH is a practical, performant state-of-the-art attention-based state space sequence learning architecture. In addition to being a graph model, BDH admits a GPU-friendly formulation. It exhibits Transformer-like scaling laws: empirically BDH rivals GPT2 performance on language and translation tasks, at the same number of parameters (10M to 1B), for the same training data. BDH can be represented as a brain model. The working memory of BDH during inference entirely relies on synaptic plasticity with Hebbian learning using spiking neurons. We confirm empirically that specific, individual synapses strengthen connection whenever BDH hears or reasons about a specific concept while processing language inputs. The neuron interaction network of BDH is a graph of high modularity with heavy-tailed degree distribution. The BDH model is biologically plausible, explaining one possible mechanism which human neurons could use to achieve speech. BDH is designed for interpretability. Activation vectors of BDH are sparse and positive. We demonstrate monosemanticity in BDH on language tasks. Interpretability of state, which goes beyond interpretability of neurons and model parameters, is an inherent feature of the BDH architecture.",0 "The gastrointestinal (GI) tract of humans can have a wide variety of aberrant mucosal abnormality findings, ranging from mild irritations to extremely fatal illnesses. Prompt identification of gastrointestinal disorders greatly contributes to arresting the progression of the illness and improving therapeutic outcomes. This paper presents an ensemble of pre-trained vision transformers (ViTs) for accurately classifying endoscopic images of the GI tract to categorize gastrointestinal problems and illnesses. ViTs, attention-based neural networks, have revolutionized image recognition by leveraging the transformative power of the transformer architecture, achieving state-of-the-art (SOTA) performance across various visual tasks. The proposed model was evaluated on the publicly available HyperKvasir dataset with 10,662 images of 23 different GI diseases for the purpose of identifying GI tract diseases. An ensemble method is proposed utilizing the predictions of two pre-trained models, MobileViT_XS and MobileViT_V2_200, which achieved accuracies of 90.57% and 90.48%, respectively. All the individual models are outperformed by the ensemble model, GastroViT, with an average precision, recall, F1 score, and accuracy of 69%, 63%, 64%, and 91.98%, respectively, in the first testing that involves 23 classes. The model comprises only 20 million (M) parameters, even without data augmentation and despite the highly imbalanced dataset. 
For the second testing with 16 classes, the scores are even higher, with average precision, recall, F1 score, and accuracy of 87%, 86%, 87%, and 92.70%, respectively. Additionally, the incorporation of explainable AI (XAI) methods such as Grad-CAM (Gradient Weighted Class Activation Mapping) and SHAP (Shapley Additive Explanations) enhances model interpretability, providing valuable insights for reliable GI diagnosis in real-world settings.",0 "Indoor scene classification is a critical task in computer vision, with wide-ranging applications that go from robotics to sensitive content analysis, such as child sexual abuse imagery (CSAI) classification. The problem is particularly challenging due to the intricate relationships between objects and complex spatial layouts. In this work, we propose the Attention over Scene Graphs for Sensitive Content Analysis (ASGRA), a novel framework that operates on structured graph representations instead of raw pixels. By first converting images into Scene Graphs and then employing a Graph Attention Network for inference, ASGRA directly models the interactions between a scene's components. This approach offers two key benefits: (i) inherent explainability via object and relationship identification, and (ii) privacy preservation, enabling model training without direct access to sensitive images. On Places8, we achieve 81.27% balanced accuracy, surpassing image-based methods. Real-world CSAI evaluation with law enforcement yields 74.27% balanced accuracy. Our results establish structured scene representations as a robust paradigm for indoor scene classification and CSAI classification. Code is publicly available at https://github.com/tutuzeraa/ASGRA.",0 "In this paper, we study a simple linear model of the cochlea as a set of vibrating strings. We make hypothesis that the information sent to the auditory cortex is the energy stored in the strings and consider all oscillation modes of the strings. We show the emergence of the sub-harmonic series whose existence was hypothesized in the XVI century to explain the consonance of the minor chord. We additionally show how the nonlinearity of the energy can be used to study the emergence of the combination tone (Tartini's third sound) shedding new light on this long debated subject.",0 "Modelling human variation in rating tasks is crucial for personalization, pluralistic model alignment, and computational social science. We propose representing 709 individuals using natural language value profiles -- descriptions of underlying values compressed from in-context demonstrations -- along with a steerable decoder model that estimates individual ratings from a rater representation. To measure the predictive information in a rater representation, we introduce an information-theoretic methodology and find that demonstrations contain the most information, followed by value profiles, then demographics. However, value profiles effectively compress the useful information from demonstrations (>70% information preservation) and offer advantages in terms of scrutability, interpretability, and steerability. Furthermore, clustering value profiles to identify similarly behaving individuals better explains rater variation than the most predictive demographic groupings. Going beyond test set performance, we show that the decoder predictions change in line with semantic profile differences, are well-calibrated, and can help explain instance-level disagreement by simulating an annotator population. 
These results demonstrate that value profiles offer novel, predictive ways to describe individual variation beyond demographics or group information.",2 "In this paper, we present our experimental study on generating plausible textual explanations for the outcomes of video summarization. For the needs of this study, we extend an existing framework for multigranular explanation of video summarization by integrating a SOTA Large Multimodal Model (LLaVA-OneVision) and prompting it to produce natural language descriptions of the obtained visual explanations. In what follows, we focus on one of the most desired characteristics for explainable AI, the plausibility of the obtained explanations, which relates to their alignment with humans' reasoning and expectations. Using the extended framework, we propose an approach for evaluating the plausibility of visual explanations by quantifying the semantic overlap between their textual descriptions and the textual descriptions of the corresponding video summaries, with the help of two methods for creating sentence embeddings (SBERT, SimCSE). Based on the extended framework and the proposed plausibility evaluation approach, we conduct an experimental study using a SOTA method (CA-SUM) and two datasets (SumMe, TVSum) for video summarization, to examine whether the more faithful explanations are also the more plausible ones, and identify the most appropriate approach for generating plausible textual explanations for video summarization.",0 "A large language model (LLM) can map a feedback causal fuzzy cognitive map (FCM) into text and then reconstruct the FCM from the text. This explainable AI system approximates an identity map from the FCM to itself and resembles the operation of an autoencoder (AE). Both the encoder and the decoder explain their decisions in contrast to black-box AEs. Humans can read and interpret the encoded text in contrast to the hidden variables and synaptic webs in AEs. The LLM agent approximates the identity map through a sequence of system instructions that does not compare the output to the input. The reconstruction is lossy because it removes weak causal edges or rules while it preserves strong causal edges. The encoder preserves the strong causal edges even when it trades off some details about the FCM to make the text sound more natural.",0 "Recent failures such as Google Gemini generating people of color in Nazi-era uniforms illustrate how AI outputs can be factually plausible yet socially harmful. AI models are increasingly evaluated for ""fairness,"" yet existing benchmarks often conflate two fundamentally different dimensions: factual correctness and normative fairness. A model may generate responses that are factually accurate but socially unfair, or conversely, appear fair while distorting factual reality. We argue that identifying the boundary between fact and fair is essential for meaningful fairness evaluation. We introduce Fact-or-Fair, a benchmark with (i) objective queries aligned with descriptive, fact-based judgments, and (ii) subjective queries aligned with normative, fairness-based judgments. Our queries are constructed from 19 statistics and are grounded in cognitive psychology, drawing on representativeness bias, attribution bias, and ingroup-outgroup bias to explain why models often misalign fact and fairness. Experiments across ten frontier models reveal different levels of fact-fair trade-offs. 
By reframing fairness evaluation, we provide both a new theoretical lens and a practical benchmark to advance the responsible model assessments. Our test suite is publicly available at https://github.com/uclanlp/Fact-or-Fair.",0 "The way residents perceive safety plays an important role in how they use public spaces. Studies have combined large-scale street view images and advanced computer vision techniques to measure the perception of safety of urban environments. Despite their success, such studies have often overlooked the specific environmental visual factors that draw human attention and trigger people's feelings of safety perceptions. In this study, we introduce a computational framework that enriches the existing body of literature on place perception by using eye-tracking systems with street view images and deep learning approaches. Eye-tracking systems quantify not only what users are looking at but also how long they engage with specific environmental elements. This allows us to explore the nuance of which visual environmental factors influence human safety perceptions. We conducted our research in Helsingborg, Sweden, where we recruited 12 volunteers outfitted with eye-tracking systems. They were asked to indicate which of the two street view images appeared safer. By examining participants' focus on specific features using Mean Object Ratio in Highlighted Regions (MoRH) and Mean Object Hue (MoH), we identified key visual elements that attract human attention when perceiving safe environments. For instance, certain urban infrastructure and public space features draw more human attention while the sky is less relevant in influencing safety perceptions. These insights offer a more human-centered understanding of which urban features influence human safety perceptions. Furthermore, we compared the real human attention from eye-tracking systems with attention maps obtained from eXplainable Artificial Intelligence (XAI) results. Several XAI models were tested, and we observed that XGradCAM and EigenCAM most closely align with human safety perceptual patterns.",1 "This paper bridges internal and external analysis approaches to large language models (LLMs) by demonstrating that geometric properties of internal model representations serve as reliable proxies for evaluating generated text quality. We validate a set of metrics including Maximum Explainable Variance, Effective Rank, Intrinsic Dimensionality, MAUVE score, and Schatten Norms measured across different layers of LLMs, demonstrating that Intrinsic Dimensionality and Effective Rank can serve as universal assessments of text naturalness and quality. Our key finding reveals that different models consistently rank text from various sources in the same order based on these geometric properties, indicating that these metrics reflect inherent text characteristics rather than model-specific artifacts. This allows a reference-free text quality evaluation that does not require human-annotated datasets, offering practical advantages for automated evaluation pipelines.",0 "Why do Vision Language Models (VLMs), despite success on standard benchmarks, often fail to match human performance on surprisingly simple visual reasoning tasks? While the underlying computational principles are still debated, we hypothesize that a crucial factor is a deficit in visually-grounded serial processing. 
To test this hypothesis, we compared human and VLM performance across tasks designed to vary serial processing demands in three distinct domains: geometric reasoning, perceptual enumeration, and mental rotation. Tasks within each domain varied serial processing load by manipulating factors such as geometric concept complexity, perceptual individuation load, and transformation difficulty. Across all domains, our results revealed a consistent pattern: decreased VLM accuracy was strongly correlated with increased human reaction time (used as a proxy for serial processing load). As tasks require more demanding serial processing -- whether composing concepts, enumerating items, or performing mental transformations -- the VLM-human performance gap widens reliably. These findings support our hypothesis, indicating that limitations in serial, visually grounded reasoning represent a fundamental bottleneck that distinguishes current VLMs from humans.",0 "Current wireless networks are designed to optimize spectral efficiency for human users, who typically require sustained connections for high-data-rate applications like file transfers and video streaming. However, these networks are increasingly inadequate for the emerging era of machine-type communications (MTC). With a vast number of devices exhibiting sporadic traffic patterns consisting of short packets, the grant-based multiple access procedures utilized by existing networks lead to significant delays and inefficiencies. To address this issue the unsourced random access (URA) paradigm has been proposed. This paradigm assumes the devices to share a common encoder thus simplifying the reception process by eliminating the identification procedure. The URA paradigm not only addresses the computational challenges but it also considers the random access (RA) as a coding problem, i.e., takes into account both medium access protocols and physical layer effects. In this monograph we provide a comprehensive overview of the URA problem in noisy channels, with the main task being to explain the major ideas rather than to list all existing solutions.",0 "Post-hoc explanation methods for black-box models often struggle with faithfulness and human interpretability due to the lack of explainability in current neural architectures. Meanwhile, B-cos networks have been introduced to improve model explainability by proposing an architecture that removes bias terms and promotes input-weight alignment. Although B-cos networks have shown success in building explainable systems, their application has so far been limited to computer vision models and their associated training pipelines. In this work, we introduce B-cos LMs, i.e., B-cos language models (LMs) empowered for natural language processing (NLP) tasks. Our approach directly transforms pre-trained language models into B-cos LMs by combining B-cos conversion and task fine-tuning, improving efficiency compared to previous methods. Our automatic and human evaluation results demonstrate that B-cos LMs produce more faithful and human interpretable explanations than post-hoc methods, while maintaining task performance comparable to conventional fine-tuning. Our in-depth analysis explores how B-cos LMs differ from conventionally fine-tuned models in their learning processes and explanation patterns. 
Finally, we present a first exploration of transforming decoder-only models to B-cos LMs for generation tasks.",2 "Multifarious assembly models consider multiple structures assembled from a shared set of components, reflecting the efficient usage of components in biological self-assembly. These models are subject to a high-dimensional parameter space, with only a finite region of parameter space giving reliable self-assembly. Here we use a continuous-time Gillespie simulation method to study multifarious self-assembly and find that the region of parameter space in which reliable self-assembly can be achieved is smaller than what was obtained previously using a discrete-time Monte Carlo simulation method. We explain this discrepancy through a detailed analysis of the stability of assembled structures against chimera formation. We find that our continuous-time simulations of multifarious self-assembly can expose this instability in large systems even at moderate simulation times. In contrast, discrete-time simulations are slow to show this instability, particularly for large system sizes. For the remaining state space we find good agreement between the predictions of continuous- and discrete-time simulations. We present physical arguments that can help us predict the state boundaries in the parameter space, and gain a deeper understanding of multifarious self-assembly.",0 "This study introduces ""Survey and Questionnaire Item Embeddings Differentials"" (SQuID), a novel methodological approach that enables neural network embeddings to effectively recover latent dimensions from psychometric survey items. We demonstrate that embeddings derived from large language models, when processed with SQuID, can recover the structure of human values obtained from human rater judgments on the Revised Portrait Value Questionnaire (PVQ-RR). Our experimental validation on 1097 human respondents compares multiple embedding models across a number of evaluation metrics. Unlike previous approaches, SQuID successfully addresses the challenge of obtaining negative correlations between dimensions without requiring domain-specific fine-tuning. Quantitative analysis reveals that our embedding-based approach explains 55% of variance in dimension-dimension similarities compared to human data. Multidimensional scaling configurations from both types of data show fair factor congruence coefficients and largely follow the underlying theory. These results demonstrate that semantic embeddings can effectively replicate psychometric structures previously established through extensive human surveys. The approach offers substantial advantages in cost, scalability and flexibility while maintaining comparable quality to traditional methods. Our findings have significant implications for psychometrics and social science research, providing a complementary methodology that could expand the scope of human behavior and experience represented in measurement tools.",2 "Artificial intelligence (AI) systems, and Large Language Models (LLMs) in particular, are increasingly employed for creative tasks like scientific idea generation, constituting a form of generalization from training data unaddressed by existing conceptual frameworks. Despite its similarities to compositional generalization (CG), combinatorial creativity (CC) is an open-ended ability. 
Instead of evaluating for accuracy or correctness against fixed targets, which would contradict the open-ended nature of CC, we propose a theoretical framework and algorithmic task for evaluating outputs by their degrees of novelty and utility. From here, we make several important empirical contributions: (1) We obtain the first insights into the scaling behavior of creativity for LLMs. (2) We discover that, for fixed compute budgets, there exist optimal model depths and widths for creative ability. (3) We find that the ideation-execution gap, whereby LLMs excel at generating novel scientific ideas but struggle to ensure their practical feasibility, may be explained by a more fundamental novelty-utility tradeoff characteristic of creativity algorithms in general. Importantly, this tradeoff remains persistent even at scale, casting doubt on the long-term creative potential of LLMs in their current form. Together, our conceptual framework and empirical findings provide a foundation for understanding and improving creativity in modern AI models, bridging the gap between human and machine intelligence.",0 "Plantar pressure mapping is essential in clinical diagnostics and sports science, yet large heterogeneous datasets often contain outliers from technical errors or procedural inconsistencies. Statistical Parametric Mapping (SPM) provides interpretable analyses but is sensitive to alignment, and its capacity for robust outlier detection remains unclear. This study compares an SPM approach with an explainable machine learning (ML) approach to establish transparent quality-control pipelines for plantar pressure datasets. Data from multiple centers were annotated by expert consensus and enriched with synthetic anomalies, resulting in 798 valid samples and 2000 outliers. We evaluated (i) a non-parametric, registration-dependent SPM approach and (ii) a convolutional neural network (CNN), explained using SHapley Additive exPlanations (SHAP). Performance was assessed via nested cross-validation; explanation quality via a semantic differential survey with domain experts. The ML model reached high accuracy and outperformed SPM, which misclassified clinically meaningful variations and missed true outliers. Experts perceived both SPM and SHAP explanations as clear, useful, and trustworthy, though SPM was assessed as less complex. These findings highlight the complementary potential of SPM and explainable ML as approaches for automated outlier detection in plantar pressure data, and underscore the importance of explainability in translating complex model outputs into interpretable insights that can effectively inform decision-making.",1 "Time-series anomaly detection (TSAD) increasingly demands explanations that articulate not only if an anomaly occurred, but also what pattern it exhibits and why it is anomalous. Leveraging the impressive explanatory capabilities of Large Language Models (LLMs), recent works have attempted to treat time series as text for explainable TSAD. However, this approach faces a fundamental challenge: LLMs operate on discrete tokens and struggle to directly process long, continuous signals. Consequently, naive time-to-text serialization suffers from a lack of contextual grounding and representation alignment between the two modalities. To address this gap, we introduce AXIS, a framework that conditions a frozen LLM for nuanced time-series understanding. 
Instead of direct serialization, AXIS enriches the LLM's input with three complementary hints derived from the series: (i) a symbolic numeric hint for numerical grounding, (ii) a context-integrated, step-aligned hint distilled from a pretrained time-series encoder to capture fine-grained dynamics, and (iii) a task-prior hint that encodes global anomaly characteristics. Furthermore, to facilitate robust evaluation of explainability, we introduce a new benchmark featuring multi-format questions and rationales that supervise contextual grounding and pattern-level semantics. Extensive experiments, including both LLM-based and human evaluations, demonstrate that AXIS yields explanations of significantly higher quality and achieves competitive detection accuracy compared to general-purpose LLMs, specialized time-series LLMs, and time-series Vision Language Models.",0 "Speech Emotion Recognition (SER) is typically trained and evaluated on majority-voted labels, which simplifies benchmarking but masks subjectivity and provides little transparency into why predictions are made. This neglects valid minority annotations and limits interpretability. We propose an explainable Speech Language Model (SpeechLM) framework that frames SER as a generative reasoning task. Given an utterance, the model first produces a transcript, then outputs both an emotion label and a concise natural-language rationale grounded in lexical and acoustic cues. Rationales are generated by a reasoning-capable teacher LLM and used as intermediate supervision, combined with majority labels during fine-tuning. Unlike prior work primarily focused on boosting classification accuracy, we aim to enhance explainability while preserving competitive performance. To this end, we complement majority-label metrics with annotator-aware scoring that credits matches with any annotator label. On MSP-Podcast v1.12, our model maintains improvements over zero-shot SpeechLM baselines, and produces rationales that 7 human evaluators find plausible and well grounded. This demonstrates that incorporating rationale supervision offers a practical path toward interpretable SER without sacrificing predictive quality.",1 "Retinal disease diagnosis is critical in preventing vision loss and reducing socioeconomic burdens. Globally, over 2.2 billion people are affected by some form of vision impairment, resulting in annual productivity losses estimated at $411 billion. Traditional manual grading of retinal fundus images by ophthalmologists is time-consuming and subjective. In contrast, deep learning has revolutionized medical diagnostics by automating retinal image analysis and achieving expert-level performance. In this study, we present EYE-DEX, an automated framework for classifying 10 retinal conditions using the large-scale Retinal Disease Dataset comprising 21,577 eye fundus images. We benchmark three pre-trained Convolutional Neural Network (CNN) models--VGG16, VGG19, and ResNet50--with our finetuned VGG16 achieving a state-of-the-art global benchmark test accuracy of 92.36%. To enhance transparency and explainability, we integrate the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to generate visual explanations highlighting disease-specific regions, thereby fostering clinician trust and reliability in AI-assisted diagnostics.",0 "This article presents a modular, component-based architecture for developing and evaluating AI agents that bridge the gap between natural language interfaces and complex enterprise data warehouses. 
The system directly addresses core challenges in data accessibility by enabling non-technical users to interact with complex data warehouses through a conversational interface, translating ambiguous user intent into precise, executable database queries to overcome semantic gaps. A cornerstone of the design is its commitment to transparent decision-making, achieved through a multi-layered reasoning framework that explains the ""why"" behind every decision, allowing for full interpretability by tracing conclusions through specific, activated business rules and data points. The architecture integrates a robust quality assurance mechanism via an automated evaluation framework that serves multiple functions: it enables performance benchmarking by objectively measuring agent performance against golden standards, and it ensures system reliability by automating the detection of performance regressions during updates. The agent's analytical depth is enhanced by a statistical context module, which quantifies deviations from normative behavior, ensuring all conclusions are supported by quantitative evidence including concrete data, percentages, and statistical comparisons. We demonstrate the efficacy of this integrated agent-development-with-evaluation framework through a case study on an insurance claims processing system. The agent, built on a modular architecture, leverages the BigQuery ecosystem to perform secure data retrieval, apply domain-specific business rules, and generate human-auditable justifications. The results confirm that this approach creates a robust, evaluable, and trustworthy system for deploying LLM-powered agents in data-sensitive, high-stakes domains.",0 "Since the advent of large language models (LLMs), research has focused on instruction following and deductive reasoning. A central question remains: can these models discover new knowledge, and how can we evaluate this ability? We address this by studying abductive reasoning-the generation of plausible hypotheses to explain observations-and introduce GEAR (General Evaluation for Abductive Reasoning), a general-purpose, fully automated, transparent, and label-free evaluation paradigm. GEAR scores hypothesis sets by three metrics: consistency (each hypothesis explains the observations), generalizability (consistent hypotheses make meaningful predictions on unseen inputs), and diversity (the set covers distinct predictions and patterns). Built this way, GEAR is scalable (no human gold answers), reliable (deterministic scoring aligned with classical abduction), and open-ended (scores improve only when models produce new plausible hypotheses, unlike static benchmarks that saturate once accuracy is high). Using GEAR, we conduct a fine-grained study of nine LLMs on four abduction benchmarks with 1,500 problems, generating over 50,000 candidate hypotheses and revealing model differences obscured by gold-answer or purely human evaluations. We further propose a momentum-based curriculum that adjusts GEAR-derived training data by learning velocity: it starts with what the model learns quickly and shifts toward harder objectives such as generating diverse hypotheses once the model is confident on foundational objectives. Without gold-label supervision, this strategy improves all GEAR objectives and these gains transfer to established abductive reasoning benchmarks. 
Taken together, GEAR provides a principled framework that evaluates abduction and supplies label-free, scalable training signals that help LLMs produce more diverse and reliable hypotheses.",0 "Facial Beauty Prediction (FBP) has made significant strides with the application of deep learning, yet state-of-the-art models often exhibit critical limitations, including architectural constraints, inherent demographic biases, and a lack of transparency. Existing methods, primarily based on Convolutional Neural Networks (CNNs), excel at capturing local texture but struggle with global facial harmony, while Vision Transformers (ViTs) effectively model long-range dependencies but can miss fine-grained details. Furthermore, models trained on benchmark datasets can inadvertently learn and perpetuate societal biases related to protected attributes like ethnicity. To address these interconnected challenges, we propose \textbf{FairViT-GAN}, a novel hybrid framework that synergistically integrates a CNN branch for local feature extraction and a ViT branch for global context modeling. More significantly, we introduce an adversarial debiasing mechanism where the feature extractor is explicitly trained to produce representations that are invariant to protected attributes, thereby actively mitigating algorithmic bias. Our framework's transparency is enhanced by visualizing the distinct focus of each architectural branch. Extensive experiments on the SCUT-FBP5500 benchmark demonstrate that FairViT-GAN not only sets a new state-of-the-art in predictive accuracy, achieving a Pearson Correlation of \textbf{0.9230} and reducing RMSE to \textbf{0.2650}, but also excels in fairness. Our analysis reveals a remarkable \textbf{82.9\% reduction in the performance gap} between ethnic subgroups, with the adversary's classification accuracy dropping to near-random chance (52.1\%). We believe FairViT-GAN provides a robust, transparent, and significantly fairer blueprint for developing responsible AI systems for subjective visual assessment.",0 "The chemical reaction recommendation is to select proper reaction condition parameters for chemical reactions, which is pivotal to accelerating chemical science. With the rapid development of large language models (LLMs), there is growing interest in leveraging their reasoning and planning capabilities for reaction condition recommendation. Despite their success, existing methods rarely explain the rationale behind the recommended reaction conditions, limiting their utility in high-stakes scientific workflows. In this work, we propose ChemMAS, a multi-agent system that reframes condition prediction as an evidence-based reasoning task. ChemMAS decomposes the task into mechanistic grounding, multi-channel recall, constraint-aware agentic debate, and rationale aggregation. Each decision is backed by interpretable justifications grounded in chemical knowledge and retrieved precedents. Experiments show that ChemMAS achieves 20-35% gains over domain-specific baselines and outperforms general-purpose LLMs by 10-15% in Top-1 accuracy, while offering falsifiable, human-trustable rationales, which establishes a new paradigm for explainable AI in scientific discovery.",0 "A central challenge in explainable AI, particularly in the visual domain, is producing explanations grounded in human-understandable concepts. 
To tackle this, we introduce OCEAN (Object-Centric Explananda via Agent Negotiation), a novel, inherently interpretable framework built on object-centric representations and a transparent multi-agent reasoning process. The game-theoretic reasoning process drives agents to agree on coherent and discriminative evidence, resulting in a faithful and interpretable decision-making process. We train OCEAN end-to-end and benchmark it against standard visual classifiers and popular posthoc explanation tools like GradCAM and LIME across two diagnostic multi-object datasets. Our results demonstrate competitive performance with respect to state-of-the-art black-box models with a faithful reasoning process, which was reflected by our user study, where 927 participants consistently rated OCEAN's explanations as more intuitive and trustworthy.",2 "Electrospun yarns often fall short of the strength and stiffness of their constituent nanofibers because of loose packing and inter-fiber slip. We report a simple, twist-free route to close this gap by liquid-assisted rolling: yarns are briefly wetted (water or ethanol) and subjected to gentle rolling action (mechanical strokes perpendicular and parallel to the yarn axis), then dried under controlled conditions so that meniscus forces compact the assembly into tightly bound bundles. The treatment yields large gains in tensile strength and modulus, and as yarn diameter decreases the properties of liquid-treated yarns approach single-fiber limits, indicating more efficient load transfer. Dry-rolling controls produce negligible changes compared to as-spun yarns, confirming that capillarity-driven consolidation, rather than mechanical pressing, dominates the improvement. Water consistently outperforms ethanol, reflecting its larger elastocapillary driving term gamma*(1 + cos theta) on PAN and thus stronger capillary compaction; a short post-treatment anneal near Tg further increases stiffness with a corresponding reduction in ductility. To rationalize these trends, we quantify microstructure via SEM-derived alignment and packing density and show that these complementary descriptors jointly explain variability in mechanical response. A compact constitutive framework, grounded in distributed fiber recruitment and adhesion/frictional contact, captures the observed strengthening-ductility trade-off across processing routes. The results establish capillarity-driven consolidation as a scalable pathway to engineer processing-structure-property relationships in hierarchical polymer fiber assemblies and provide practical guidance f",0 "Brain-Computer Interfaces (BCIs) suffer from high inter-subject variability and limited labeled data, often requiring lengthy calibration phases. In this work, we present an end-to-end approach that explicitly models the subject dependency using lightweight convolutional neural networks (CNNs) conditioned on the subject's identity. Our method integrates hyperparameter optimization strategies that prioritize class imbalance and evaluates two conditioning mechanisms to adapt pre-trained models to unseen subjects with minimal calibration data. We benchmark three lightweight architectures on a time-modulated Event-Related Potentials (ERP) classification task, providing interpretable evaluation metrics and explainable visualizations of the learned representations.
Results demonstrate improved generalization and data-efficient calibration, highlighting the scalability and practicality of subject-adaptive BCIs.",0 "Information-processing systems coordinating across multiple agents and objectives face fundamental thermodynamic constraints. We show that solutions with maximum utility to act as coordination focal points have much higher selection pressure for being findable across agents rather than accuracy. We derive that the information-theoretic minimum description length of coordination protocols to precision $\varepsilon$ scales as $L(P)\geq NK\log_2 K+N^2d^2\log (1/\varepsilon)$ for $N$ agents with $d$ potentially conflicting objectives and internal model complexity $K$. This scaling forces progressive simplification, with coordination dynamics changing the environment itself and shifting optimization across hierarchical levels. Moving from established focal points requires re-coordination, creating persistent metastable states and hysteresis until significant environmental shifts trigger phase transitions through spontaneous symmetry breaking. We operationally define coordination temperature to predict critical phenomena and estimate coordination work costs, identifying measurable signatures across systems from neural networks to restaurant bills to bureaucracies. Extending the topological version of Arrow's theorem on the impossibility of consistent preference aggregation, we find it recursively binds whenever preferences are combined. This potentially explains the indefinite cycling in multi-objective gradient descent and alignment faking in Large Language Models trained with reinforcement learning with human feedback. We term this framework Thermodynamic Coordination Theory (TCT), which demonstrates that coordination requires radical information loss.",0 "Large Language Model (LLM)-based systems present new opportunities for autonomous health monitoring in sensor-rich industrial environments. This study explores the potential of LLMs to detect and classify faults directly from sensor data, while producing inherently explainable outputs through natural language reasoning. We systematically evaluate how LLM-system architecture (single-LLM vs. multi-LLM), input representations (raw vs. descriptive statistics), and context window size affect diagnostic performance. Our findings show that LLM systems perform most effectively when provided with summarized statistical inputs, and that systems with multiple LLMs using specialized prompts offer improved sensitivity for fault classification compared to single-LLM systems. While LLMs can produce detailed and human-readable justifications for their decisions, we observe limitations in their ability to adapt over time in continual learning settings, often struggling to calibrate predictions during repeated fault cycles. These insights point to both the promise and the current boundaries of LLM-based systems as transparent, adaptive diagnostic tools in complex environments.",0 "Recent studies highlight various machine learning (ML)-based techniques for code clone detection, which can be integrated into developer tools such as static code analysis. With the advancements brought by ML in code understanding, ML-based code clone detectors could accurately identify and classify cloned pairs, especially semantic clones, but often operate as black boxes, providing little insight into the decision-making process. 
Post hoc explainers, on the other hand, aim to interpret and explain the predictions of these ML models after they are made, offering a way to understand the underlying mechanisms driving the model's decisions. However, current post hoc techniques require white-box access to the ML model or are computationally expensive, indicating a need for advanced post hoc explainers. In this paper, we propose a novel approach that leverages the in-context learning capabilities of large language models to elucidate the predictions made by the ML-based code clone detectors. We perform a study using ChatGPT-4 to explain the code clone results inferred by GraphCodeBERT. We found that our approach is promising as a post hoc explainer by giving the correct explanations up to 98% and offering good explanations 95% of the time. However, the explanations and the code line examples given by the LLM are useful in some cases. We also found that lowering the temperature to zero helps increase the accuracy of the explanation. Lastly, we list the insights that can lead to further improvements in future work. This study paves the way for future studies in using LLMs as a post hoc explainer for various software engineering tasks.",0 "Standard LLM evaluation practices compress diverse abilities into single scores, obscuring their inherently multidimensional nature. We present JE-IRT, a geometric item-response framework that embeds both LLMs and questions in a shared space. For question embeddings, the direction encodes semantics and the norm encodes difficulty, while correctness on each question is determined by the geometric interaction between the model and question embeddings. This geometry replaces a global ranking of LLMs with topical specialization and enables smooth variation across related questions. Building on this framework, our experimental results reveal that out-of-distribution behavior can be explained through directional alignment, and that larger norms consistently indicate harder questions. Moreover, JE-IRT naturally supports generalization: once the space is learned, new LLMs are added by fitting a single embedding. The learned space further reveals an LLM-internal taxonomy that only partially aligns with human-defined subject categories. JE-IRT thus establishes a unified and interpretable geometric lens that connects LLM abilities with the structure of questions, offering a distinctive perspective on model evaluation and generalization.",0 "The pursuit of general-purpose artificial intelligence depends on large language models (LLMs) that can handle both structured reasoning and open-ended generation. We present Omni-Thinker, a unified reinforcement learning (RL) framework that scales LLMs across diverse tasks by combining hybrid rewards with backward-transfer-guided scheduling. Hybrid rewards integrate rule-based verifiable signals with preference-based evaluations from an LLM-as-a-Judge, enabling learning in both deterministic and subjective domains. Our scheduler orders tasks according to accuracy backward transfer (BWT), reducing forgetting and improving multi-task performance. Experiments across four domains show gains of 6.2% over joint training and 12.4% over model merging. Moreover, we demonstrate that simple assumptions on accuracy transfer yield accurate predictions of curriculum outcomes, with entropy dynamics explaining deviations due to generative tasks. 
These findings underscore the importance of BWT-aware scheduling and hybrid supervision for scaling RL-based post-training toward general-purpose LLMs.",0 "The ""black box"" nature of Large Reasoning Models (LRMs) presents critical limitations in reliability and transparency, fueling the debate around the ""illusion of thinking"" and the challenge of state hallucinations in agentic systems. In response, we introduce The STAR-XAI Protocol (Socratic, Transparent, Agentic, Reasoning - for eXplainable Artificial Intelligence), a novel operational methodology for training and operating verifiably reliable AI agents. Our method reframes the human-AI interaction as a structured Socratic dialogue governed by an explicit, evolving symbolic rulebook (the Consciousness Transfer Package - CTP) and a suite of integrity protocols, including a state-locking Checksum that eradicates internal state corruption. Through an exhaustive case study in the complex strategic game ""Caps i Caps,"" we demonstrate that this ""Clear Box"" framework transforms an opaque LRM into a disciplined strategist. The agent not only exhibits the emergence of complex tactics, such as long-term planning, but also achieves ante-hoc transparency by justifying its intentions before acting. Crucially, it demonstrates Second-Order Agency by identifying and correcting flaws in its own supervisor-approved plans, leading to empirically-proven, 100% reliable state tracking and achieving ""zero hallucinations by design."" The STAR-XAI Protocol thus offers a practical pathway toward building AI agents that are not just high-performing but intrinsically auditable, trustworthy, and reliable.",0 "Mutual understanding of artificial agents' decisions is key to ensuring a trustworthy and successful human-robot interaction. Hence, robots are expected to make reasonable decisions and communicate them to humans when needed. In this article, the focus is on an approach to modeling and reasoning about the comparison of two competing plans, so that robots can later explain the divergent result. First, a novel ontological model is proposed to formalize and reason about the differences between competing plans, enabling the classification of the most appropriate one (e.g., the shortest, the safest, the closest to human preferences, etc.). This work also investigates the limitations of a baseline algorithm for ontology-based explanatory narration. To address these limitations, a novel algorithm is presented, leveraging divergent knowledge between plans and facilitating the construction of contrastive narratives. Through empirical evaluation, it is observed that the explanations excel beyond the baseline method.",1 "Concept Activation Vectors (CAVs) are a tool from explainable AI, offering a promising approach for understanding how human-understandable concepts are encoded in a model's latent spaces. They are computed from hidden-layer activations of inputs belonging either to a concept class or to non-concept examples. Adopting a probabilistic perspective, the distribution of the (non-)concept inputs induces a distribution over the CAV, making it a random vector in the latent space. This enables us to derive mean and covariance for different types of CAVs, leading to a unified theoretical view. This probabilistic perspective also reveals a potential vulnerability: CAVs can strongly depend on the rather arbitrary non-concept distribution, a factor largely overlooked in prior work. 
We illustrate this with a simple yet effective adversarial attack, underscoring the need for a more systematic study.",0 "People with Multiple Sclerosis (MS) complain of problems with hand dexterity and cognitive fatigue. However, in many cases, impairments are subtle and difficult to detect. Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging technique that measures brain hemodynamic responses during cognitive or motor tasks. We aimed to detect brain activity biomarkers that could explain subjective reports of cognitive fatigue while completing dexterous tasks and provide targets for future brain stimulation treatments. We recruited 15 people with MS who did not have a hand (Nine Hole Peg Test [NHPT]), mobility, or cognitive impairment, and 12 age- and sex-matched controls. Participants completed two types of hand dexterity tasks with their dominant hand, single task and dual task (NHPT while holding a ball between the fifth finger and hypothenar eminence of the same hand). We analyzed fNIRS data (oxygenated and deoxygenated hemoglobin levels) using a machine learning framework to classify MS patients from controls based on their brain activation patterns in bilateral prefrontal and sensorimotor cortices. The K-Nearest Neighbor classifier achieved an accuracy of 75.0% for single manual dexterity tasks and 66.7% for the more complex dual manual dexterity tasks. Using XAI, we found that the most important brain regions contributing to the machine learning model were the supramarginal/angular gyri and the precentral gyrus (sensory integration and motor regions) of the ipsilateral hemisphere, with suppressed activity and slower neurovascular response in the MS group. During both tasks, deoxygenated hemoglobin levels were better predictors than the conventional measure of oxygenated hemoglobin. This nonconventional method of fNIRS data analysis revealed novel brain activity biomarkers that can help develop personalized brain stimulation targets.",1 "Cross-View Geo-Localization (CVGL) focuses on identifying correspondences between images captured from distinct perspectives of the same geographical location. However, existing CVGL approaches are typically restricted to a single view or modality, and their direct visual matching strategy lacks interpretability: they only determine whether two images correspond, without explaining the rationale behind the match. In this paper, we present GLEAM-C, a foundational CVGL model that unifies multiple views and modalities-including UAV imagery, street maps, panoramic views, and ground photographs-by aligning them exclusively with satellite imagery. Our framework enhances training efficiency through optimized implementation while achieving accuracy comparable to prior modality-specific CVGL models through a two-phase training strategy. Moreover, to address the lack of interpretability in traditional CVGL methods, we leverage the reasoning capabilities of multimodal large language models (MLLMs) to propose a new task, GLEAM-X, which combines cross-view correspondence prediction with explainable reasoning. To support this task, we construct a bilingual benchmark using GPT-4o and Doubao-1.5-Thinking-Vision-Pro to generate training and testing data. The test set is further refined through detailed human revision, enabling systematic evaluation of explainable cross-view reasoning and advancing transparency and scalability in geo-localization. 
Together, GLEAM-C and GLEAM-X form a comprehensive CVGL pipeline that integrates multi-modal, multi-view alignment with interpretable correspondence analysis, unifying accurate cross-view matching with explainable reasoning and advancing Geo-Localization by enabling models to better Explain And Match. Code and datasets used in this work will be made publicly accessible at https://github.com/Lucky-Lance/GLEAM.",0 "We use the notion of oracle machines and reductions from computability theory to formalise different Human-in-the-loop (HITL) setups for AI systems, distinguishing between trivial human monitoring (i.e., total functions), single endpoint human action (i.e., many-one reductions), and highly involved human-AI interaction (i.e., Turing reductions). We then proceed to show that the legal status and safety of different setups vary greatly. We present a taxonomy to categorise HITL failure modes, highlighting the practical limitations of HITL setups. We then identify omissions in UK and EU legal frameworks, which focus on HITL setups that may not always achieve the desired ethical, legal, and sociotechnical outcomes. We suggest areas where the law should recognise the effectiveness of different HITL setups and assign responsibility in these contexts, avoiding human ""scapegoating"". Our work shows an unavoidable trade-off between attribution of legal responsibility, and technical explainability. Overall, we show how HITL setups involve many technical design decisions, and can be prone to failures out of the humans' control. Our formalisation and taxonomy opens up a new analytic perspective on the challenges in creating HITL setups, helping inform AI developers and lawmakers on designing HITL setups to better achieve their desired outcomes.",0 "Ontologies are a standard tool for creating semantic schemata in many knowledge intensive domains of human interest. They are becoming increasingly important also in the areas that have been until very recently dominated by subsymbolic knowledge representation and machine-learning (ML) based data processing. One such area is information security, and specifically, malware detection. We thus propose PE Malware Ontology that offers a reusable semantic schema for Portable Executable (PE - the Windows binary format) malware files. This ontology is inspired by the structure of the EMBER dataset, which focuses on the static malware analysis of PE files. With this proposal, we hope to provide a unified semantic representation for the existing and future PE-malware datasets and facilitate the application of symbolic, neuro-symbolic, or otherwise explainable approaches in the PE-malware-detection domain, which may produce interpretable results described by the terms defined in our ontology. In addition, we also publish semantically treated EMBER data, including fractional datasets, to support the reproducibility of experiments on EMBER. We supplement our work with a preliminary case study, conducted using concept learning, to show the general feasibility of our approach. While we were not able to match the precision of the state-of-the-art ML tools, the learned malware discriminators were interesting and highly interpretable.",0 "The creation and perception of humour is a fundamental human trait, positioning its computational understanding as one of the most challenging tasks in natural language processing (NLP). 
As an abstract, creative, and frequently context-dependent construct, humour requires extensive reasoning to understand and create, making it a pertinent task for assessing the common-sense knowledge and reasoning abilities of modern large language models (LLMs). In this work, we survey the landscape of computational humour as it pertains to the generative tasks of creation and explanation. We observe that, despite the task of understanding humour bearing all the hallmarks of a foundational NLP task, work on generating and explaining humour beyond puns remains sparse, while state-of-the-art models continue to fall short of human capabilities. We bookend our literature survey by motivating the importance of computational humour processing as a subdiscipline of NLP and presenting an extensive discussion of future directions for research in the area that takes into account the subjective and ethically ambiguous nature of humour.",0 "Large Language Models (LLMs) exhibit a notable performance ceiling on complex, multi-faceted tasks, as they often fail to integrate diverse information or adhere to multiple constraints. We posit that such limitation arises when the demands of a task exceed the LLM's effective cognitive load capacity. This interpretation draws a strong analogy to Cognitive Load Theory (CLT) in cognitive science, which explains similar performance boundaries in the human mind, and is further supported by emerging evidence that reveals LLMs have bounded working memory characteristics. Building upon this CLT-grounded understanding, we introduce CoThinker, a novel LLM-based multi-agent framework designed to mitigate cognitive overload and enhance collaborative problem-solving abilities. CoThinker operationalizes CLT principles by distributing intrinsic cognitive load through agent specialization and managing transactional load via structured communication and a collective working memory. We empirically validate CoThinker on complex problem-solving tasks and fabricated high cognitive load scenarios, demonstrating improvements over existing multi-agent baselines in solution quality and efficiency. Our analysis reveals characteristic interaction patterns, providing insights into the emergence of collective cognition and effective load management, thus offering a principled approach to overcoming LLM performance ceilings.",0 "Understanding what deep learning (DL) models learn is essential for the safe deployment of artificial intelligence (AI) in clinical settings. While previous work has focused on pixel-based explainability methods, less attention has been paid to the textual concepts learned by these models, which may better reflect the reasoning used by clinicians. We introduce Mammo-CLIP Dissect, the first concept-based explainability framework for systematically dissecting DL vision models trained for mammography. Leveraging a mammography-specific vision-language model (Mammo-CLIP) as a ""dissector,"" our approach labels neurons at specified layers with human-interpretable textual concepts and quantifies their alignment to domain knowledge. Using Mammo-CLIP Dissect, we investigate three key questions: (1) how concept learning differs between DL vision models trained on general image datasets versus mammography-specific datasets; (2) how fine-tuning for downstream mammography tasks affects concept specialisation; and (3) which mammography-relevant concepts remain underrepresented. 
We show that models trained on mammography data capture more clinically relevant concepts and align more closely with radiologists' workflows than models not trained on mammography data. Fine-tuning for task-specific classification enhances the capture of certain concept categories (e.g., benign calcifications) but can reduce coverage of others (e.g., density-related features), indicating a trade-off between specialisation and generalisation. Our findings show that Mammo-CLIP Dissect provides insights into how convolutional neural networks (CNNs) capture mammography-specific knowledge. By comparing models across training data and fine-tuning regimes, we reveal how domain-specific training and task-specific adaptation shape concept learning. Code and concept set are available: https://github.com/Suaiba/Mammo-CLIP-Dissect.",0 "Quantum Software Engineering (QSE) is a research area practiced by tech firms. Quantum developers face challenges in optimizing quantum computing and QSE concepts. They use Stack Overflow (SO) to discuss challenges and label posts with specialized quantum tags, which often refer to technical aspects rather than developer posts. Categorizing questions based on quantum concepts can help identify frequent QSE challenges. We conducted studies to classify questions into various challenges. We extracted 2829 questions from Q&A platforms using quantum-related tags. Posts were analyzed to identify frequent challenges and develop a novel grounded theory. Challenges include Tooling, Theoretical, Learning, Conceptual, Errors, and API Usage. Through content analysis and grounded theory, discussions were annotated with common challenges to develop a ground truth dataset. ChatGPT validated human annotations and resolved disagreements. Fine-tuned transformer algorithms, including BERT, DistilBERT, and RoBERTa, classified discussions into common challenges. We achieved an average accuracy of 95% with BERT and DistilBERT, compared to fine-tuned Deep and Machine Learning (D&ML) classifiers, including Feedforward Neural Networks (FNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM), which achieved accuracies of 89%, 86%, and 84%, respectively. The Transformer-based approach outperforms the D&ML-based approach with a 6\% increase in accuracy by processing actual discussions, i.e., without data augmentation. We applied SHAP (SHapley Additive exPlanations) for model interpretability, revealing how linguistic features drive predictions and enhancing transparency in classification. These findings can help quantum vendors and forums better organize discussions for improved access and readability. However, empirical evaluation studies with actual developers and vendors are needed.",0 "Flight trajectory prediction for multiple aircraft is essential and provides critical insights into how aircraft navigate within current air traffic flows. However, predicting multi-agent flight trajectories is inherently challenging. One of the major difficulties is modeling both the individual aircraft behaviors over time and the complex interactions between flights. Generating explainable prediction outcomes is also a challenge. Therefore, we propose a Multi-Agent Inverted Transformer, MAIFormer, as a novel neural architecture that predicts multi-agent flight trajectories.
The proposed framework features two key attention modules: (i) masked multivariate attention, which captures spatio-temporal patterns of individual aircraft, and (ii) agent attention, which models the social patterns among multiple agents in complex air traffic scenes. We evaluated MAIFormer using a real-world automatic dependent surveillance-broadcast flight trajectory dataset from the terminal airspace of Incheon International Airport in South Korea. The experimental results show that MAIFormer achieves the best performance across multiple metrics and outperforms other methods. In addition, MAIFormer produces prediction outcomes that are interpretable from a human perspective, which improves both the transparency of the model and its practical utility in air traffic control.",0 "Modern machine learning produces models that are impossible for users or developers to fully understand -- raising concerns about trust, oversight, safety, and human dignity when they are integrated into software products. Transparency and explainability methods aim to provide some help in understanding models, but it remains challenging for developers to design explanations that are understandable to target users and effective for their purpose. Emerging guidelines and regulations set goals but may not provide effective actionable guidance to developers. In a large-scale experiment with 124 participants, we explored how developers approach providing end-user explanations, including what challenges they face, and to what extent specific policies can guide their actions. We investigated whether and how specific forms of policy guidance help developers design explanations and provide evidence for policy compliance for an ML-powered screening tool for diabetic retinopathy. Participants across the board struggled to produce quality explanations and comply with the provided policies. Contrary to our expectations, we found that the nature and specificity of policy guidance had little effect. We posit that participant noncompliance is in part due to a failure to imagine and anticipate the needs of non-technical stakeholders. Drawing on cognitive process theory and the sociological imagination to contextualize participants' failure, we recommend educational interventions.",2 "Feature attribution methods, such as SHAP and LIME, explain machine learning model predictions by quantifying the influence of each input component. When applying feature attributions to explain language models, a basic question is defining the interpretable components. Traditional feature attribution methods, commonly treat individual words as atomic units. This is highly computationally inefficient for long-form text and fails to capture semantic information that spans multiple words. To address this, we present CafGa, an interactive tool for generating and evaluating feature attribution explanations at customizable granularities. CafGa supports customized segmentation with user interaction and visualizes the deletion and insertion curves for explanation assessments. Through a user study involving 90 participants of various expertise, we confirm CafGa's usefulness, particularly among LLM practitioners. 
Explanations created using CafGa were also perceived as more useful compared to those generated by two fully automatic baseline methods: PartitionSHAP and MExGen, suggesting the effectiveness of the system.",2 "Conceptual models such as Concept Bottleneck Models (CBMs) have driven substantial progress in improving interpretability for image classification by leveraging human-interpretable concepts. However, extending these models from static images to sequences of images, such as video data, introduces a significant challenge due to the temporal dependencies inherent in videos, which are essential for capturing actions and events. In this work, we introduce MoTIF (Moving Temporal Interpretable Framework), an architectural design inspired by a transformer that adapts the concept bottleneck framework for video classification and handles sequences of arbitrary length. Within the video domain, concepts refer to semantic entities such as objects, attributes, or higher-level components (e.g., 'bow', 'mount', 'shoot') that reoccur across time - forming motifs collectively describing and explaining actions. Our design explicitly enables three complementary perspectives: global concept importance across the entire video, local concept relevance within specific windows, and temporal dependencies of a concept over time. Our results demonstrate that the concept-based modeling paradigm can be effectively transferred to video data, enabling a better understanding of concept contributions in temporal contexts while maintaining competitive performance. Code available at github.com/patrick-knab/MoTIF.",0 "As designers become familiar with Generative AI, a new concept is emerging: Agentic AI. While generative AI produces output in response to prompts, agentic AI systems promise to perform mundane tasks autonomously, potentially freeing designers to focus on what they love: being creative. But how do designers feel about integrating agentic AI systems into their workflows? Through design fiction, we investigated how designers want to interact with a collaborative agentic AI platform. Ten professional designers imagined and discussed collaborating with an AI agent to organise inspiration sources and ideate. Our findings highlight the roles AI agents can play in supporting designers, the division of authority between humans and AI, and how designers' intent can be explained to AI agents beyond prompts. We synthesise our findings into a conceptual framework that identifies authority distribution among humans and AI agents and discuss directions for utilising AI agents in future design workflows.",0 "Humans are social creatures who readily recognize various social interactions from simple display of moving shapes. While previous research has often focused on visual features, we examine what semantic representations that humans employ to complement visual features. In Study 1, we directly asked 76 human participants to label the animations based on their impression of moving shapes. We found that human responses were distributed. In Study 2, we measured the representational geometry of 27 social interactions through human similarity judgments and compared it with model predictions based on visual features, labels, and semantic embeddings from animation descriptions. We found that semantic models provided complementary information to visual features in explaining human judgments. Among the semantic models, verb-based embeddings extracted from descriptions account for human similarity judgments the best. 
These results suggest that social perception in simple displays reflects the semantic structure of social interactions, bridging visual and abstract representations.",2 "Audio Large Language Models (Audio LLMs) enable human-like conversation about music, yet it is unclear if they are truly listening to the audio or just using textual reasoning, as recent benchmarks suggest. This paper investigates this issue by quantifying the contribution of each modality to a model's output. We adapt the MM-SHAP framework, a performance-agnostic score based on Shapley values that quantifies the relative contribution of each modality to a model's prediction. We evaluate two models on the MuChoMusic benchmark and find that the model with higher accuracy relies more on text to answer questions, but further inspection shows that even if the overall audio contribution is low, models can successfully localize key sound events, suggesting that audio is not entirely ignored. Our study is the first application of MM-SHAP to Audio LLMs and we hope it will serve as a foundational step for future research in explainable AI and audio.",0 "Speech foundation models (SFMs) are increasingly hailed as powerful computational models of human speech perception. However, since their representations are inherently black-box, it remains unclear what drives their alignment with brain responses. To remedy this, we built linear encoding models from six interpretable feature families: mel-spectrogram, Gabor filter bank features, speech presence, phonetic, syntactic, and semantic features, and contextualized embeddings from three state-of-the-art SFMs (Whisper, HuBERT, WavLM), quantifying electrocorticography (ECoG) response variance shared between feature classes. Variance-partitioning analyses revealed several key insights: First, the SFMs' alignment with the brain can be mostly explained by their ability to learn and encode simple interpretable speech features. Second, SFMs exhibit a systematic trade-off between encoding of brain-relevant low-level and high-level features across layers. Finally, our results show that SFMs learn brain-relevant semantics which cannot be explained by lower-level speech features, with this capacity increasing with model size and context length. Together, our findings suggest a principled approach to build more interpretable, accurate, and efficient encoding models of the brain by augmenting SFM embeddings with interpretable features.",0 "The BRIDGES meeting in gauge theory, extremal structures, and stability was held June 2024 at l'Institut d'\'Etudes Scientifiques de Carg\`ese in Corsica, organized by Daniele Faenzi, Eveline Legendre, Eric Loubeau, and Henrique S\'a Earp. The first week was a summer school consisting of four independent but related lecture series by Oscar Garc\'ia Prada, Spiro Karigiannis, Laurent Manivel, and Ruxandra Moraru. The present document consists of notes for the lecture series by Spiro Karigiannis on ""Flows of geometric structures, especially $\mathrm{G}_2$-structures"". Some assistance in the preparation of these notes by the author was provided by several participants of the summer school. See the Comments field for more information. The main theme is short time existence (STE) and uniqueness for geometric flows. We first introduce geometric structures on manifolds and geometric flows of such structures. We discuss some qualitative features of geometric flows, and consider the notions of strong and weak parabolicity. 
We focus on the Ricci flow, explaining carefully the DeTurck trick to establish short-time existence and uniqueness, an argument which we then extend to a general class of geometric flows of Riemannian metrics, previewing similar ideas for flows of $\mathrm{G}_2$-structures. Finally, we consider geometric flows of $\mathrm{G}_2$-structures. We review the basics of $\mathrm{G}_2$-geometry and survey several different geometric flows of $\mathrm{G}_2$-structures. In particular, we clarify in what sense STE results for the $\mathrm{G}_2$ Laplacian flow differ from STE results for other geometric flows. We conclude with a summary of some recent results by the author with Dwivedi and Gianniotis, including a classification of all possible heat-type flows of $\mathrm{G}_2$-structures, and a sufficient condition for such a flow to admit STE and uniqueness by a modified DeTurck trick.",0 "In this paper, we address the point cloud registration problem, where well-known methods like ICP fail under uncertainty arising from sensor noise, pose-estimation errors, and partial overlap due to occlusion. We develop a novel approach, Gaussian Process Concept Attribution (GP-CA), which not only quantifies registration uncertainty but also explains it by attributing uncertainty to well-known sources of errors in registration problems. Our approach leverages active learning to discover new uncertainty sources in the wild by querying informative instances. We validate GP-CA on three publicly available datasets and in our real-world robot experiment. Extensive ablations substantiate our design choices. Our approach outperforms other state-of-the-art methods in terms of runtime, high sample-efficiency with active learning, and high accuracy. Our real-world experiment clearly demonstrates its applicability. Our video also demonstrates that GP-CA enables effective failure-recovery behaviors, yielding more robust robotic perception.",0 "The recent rise of reasoning-tuned Large Language Models (LLMs)--which generate chains of thought (CoTs) before giving the final answer--has attracted significant attention and offers new opportunities for gaining insights into human label variation, which refers to plausible differences in how multiple annotators label the same data instance. Prior work has shown that LLM-generated explanations can help align model predictions with human label distributions, but typically adopt a reverse paradigm: producing explanations based on given answers. In contrast, CoTs provide a forward reasoning path that may implicitly embed rationales for each answer option, before generating the answers. We thus propose a novel LLM-based pipeline enriched with linguistically-grounded discourse segmenters to extract supporting and opposing statements for each answer option from CoTs with improved accuracy. We also propose a rank-based HLV evaluation framework that prioritizes the ranking of answers over exact scores, which instead favor direct comparison of label distributions. Our method outperforms a direct generation method as well as baselines on three datasets, and shows better alignment of ranking methods with humans, highlighting the effectiveness of our approach.",1 "Recent advances in deep learning have led to increasingly complex models with deeper layers and more parameters, reducing interpretability and making their decisions harder to understand. While many methods explain black-box reasoning, most lack effective interventions or only operate at sample-level without modifying the model itself. 
To address this, we propose the Concept Bottleneck Model for Enhancing Human-Neural Network Mutual Understanding (CBM-HNMU). CBM-HNMU leverages the Concept Bottleneck Model (CBM) as an interpretable framework to approximate black-box reasoning and communicate conceptual understanding. Detrimental concepts are automatically identified and refined (removed/replaced) based on global gradient contributions. The modified CBM then distills corrected knowledge back into the black-box model, enhancing both interpretability and accuracy. We evaluate CBM-HNMU on various CNN and transformer-based models across Flower-102, CIFAR-10, CIFAR-100, FGVC-Aircraft, and CUB-200, achieving a maximum accuracy improvement of 2.64% and a maximum increase in average accuracy of 1.03%. Source code is available at: https://github.com/XiGuaBo/CBM-HNMU.",0 "We study a one-dimensional quasiperiodic tight-binding model with simultaneous off-diagonal (hopping) and diagonal (onsite) modulations. Using the inverse participation ratio and the wave-packet centroid, we construct localization-delocalization phase diagrams for both equilibrium and nonequilibrium steady states. We analyze the robustness of initial-state properties under dissipation and characterize dissipation-induced localization-delocalization transitions (and their reversals) in detail. Trace-distance dynamics provide evidence for a quantum Mpemba effect: states prepared farther from the steady state can relax faster than states initialized closer to it. We propose a starting-line hypothesis that explains the presence or absence of this effect across parameter regimes. In addition, we examine thermodynamic functionality and find that the localized phase favors the realization of quantum heaters. These results advance the understanding of steady-state phase transitions and relaxation dynamics in dissipatively driven quasiperiodic systems, and broaden the thermodynamic landscape of quasiperiodic platforms.",0 "The ability to explain complex information from chart images is vital for effective data-driven decision-making. In this work, we address the challenge of generating detailed explanations alongside answering questions about charts. We present ChartQA-X, a comprehensive dataset comprising 30,299 chart samples across four chart types, each paired with contextually relevant questions, answers, and explanations. Explanations are generated and selected based on metrics such as faithfulness, informativeness, coherence, and perplexity. Our human evaluation with 245 participants shows that model-generated explanations in ChartQA-X surpass human-written explanations in accuracy and logic and are comparable in terms of clarity and overall quality. Moreover, models fine-tuned on ChartQA-X show substantial improvements across various metrics, including absolute gains of up to 24.57 points in explanation quality, 18.96 percentage points in question-answering accuracy, and 14.75 percentage points on unseen benchmarks for the same task. By integrating explanatory narratives with answers, our approach enables agents to convey complex visual information more effectively, improving comprehension and fostering greater trust in the generated responses.",2 "Increasing deployment of large language models (LLMs) in real-world applications raises significant safety concerns. Most existing safety research focuses on evaluating LLM outputs or specific safety tasks, limiting their ability to address broader, undefined risks.
Sparse Autoencoders (SAEs) facilitate interpretability research to clarify model behavior by explaining single-meaning atomic features decomposed from entangled signals. However, prior applications of SAEs do not interpret features with fine-grained safety-related concepts, thus inadequately addressing safety-critical behaviors, such as generating toxic responses and violating safety regulations. For rigorous safety analysis, we must extract a rich and diverse set of safety-relevant features that effectively capture these high-risk behaviors, yet face two challenges: identifying SAEs with the greatest potential for generating safety concept-specific neurons, and the prohibitively high cost of detailed feature explanation. In this paper, we propose Safe-SAIL, a framework for interpreting SAE features within LLMs to advance mechanistic understanding in safety domains. Our approach systematically identifies the SAE with the best concept-specific interpretability, explains safety-related neurons, and introduces efficient strategies to scale up the interpretation process. We will release a comprehensive toolkit including SAE checkpoints and human-readable neuron explanations, which supports empirical analysis of safety risks to promote research on LLM safety.",0 "Vision Language Models (VLMs) have recently been adopted in robotics for their capability in common sense reasoning and generalizability. Existing work has applied VLMs to generate task and motion planning from natural language instructions and simulate training data for robot learning. In this work, we explore using VLM to interpret human demonstration videos and generate robot task planning. Our method integrates keyframe selection, visual perception, and VLM reasoning into a pipeline. We named it SeeDo because it enables the VLM to ''see'' human demonstrations and explain the corresponding plans to the robot for it to ''do''. To validate our approach, we collected a set of long-horizon human videos demonstrating pick-and-place tasks in three diverse categories and designed a set of metrics to comprehensively benchmark SeeDo against several baselines, including state-of-the-art video-input VLMs. The experiments demonstrate SeeDo's superior performance. We further deployed the generated task plans in both a simulation environment and on a real robot arm.",0 "Alignment between human brain networks and artificial models has become an active research area in vision science and machine learning. A widely adopted approach is identifying ""metamers,"" stimuli physically different yet perceptually equivalent within a system. However, conventional methods lack a direct approach to searching for the human metameric space. Instead, researchers first develop biologically inspired models and then infer about human metamers indirectly by testing whether model metamers also appear as metamers to humans. Here, we propose the Multidimensional Adaptive Metamer Exploration (MAME) framework, enabling direct, high-dimensional exploration of human metameric spaces through online image generation guided by human perceptual feedback. MAME modulates reference images across multiple dimensions based on hierarchical neural network responses, adaptively updating generation parameters according to participants' perceptual discriminability. Using MAME, we successfully measured multidimensional human metameric spaces within a single psychophysical experiment.
Experimental results using a biologically plausible CNN model showed that human discrimination sensitivity was lower for metameric images based on low-level features compared to high-level features, which image contrast metrics could not explain. The finding suggests a relatively worse alignment between the metameric spaces of 103 humans and the CNN model for low-level processing compared to high-level processing. Counterintuitively, given recent discussions on alignment at higher representational levels, our results highlight the importance of early visual computations in shaping biologically plausible models. Our MAME framework can serve as a future scientific tool for directly investigating the functional organization of human vision.",2 "Assertion messages significantly enhance unit tests by clearly explaining the reasons behind test failures, yet they are frequently omitted by developers and automated test-generation tools. Despite recent advancements, Large Language Models (LLMs) have not been systematically evaluated for their ability to generate informative assertion messages. In this paper, we introduce an evaluation of four state-of-the-art Fill-in-the-Middle (FIM) LLMs - Qwen2.5-Coder-32B, Codestral-22B, CodeLlama-13B, and StarCoder - on a dataset of 216 Java test methods containing developer-written assertion messages. We find that Codestral-22B achieves the highest quality score of 2.76 out of 5 using a human-like evaluation approach, compared to 3.24 for manually written messages. Our ablation study shows that including descriptive test comments further improves Codestral's performance to 2.97, highlighting the critical role of context in generating clear assertion messages. Structural analysis demonstrates that all models frequently replicate developers' preferred linguistic patterns. We discuss the limitations of the selected models and conventional text evaluation metrics in capturing diverse assertion message structures. Our benchmark, evaluation results, and discussions provide an essential foundation for advancing automated, context-aware generation of assertion messages in test code. A replication package is available at https://doi.org/10.5281/zenodo.15293133",0 "The rapid growth of large language models (LLMs) with diverse capabilities, latency and computational costs presents a critical deployment challenge: selecting the most suitable model for each prompt to optimize the trade-off between performance and efficiency. We introduce LLMRank, a prompt-aware routing framework that leverages rich, human-readable features extracted from prompts, including task type, reasoning patterns, complexity indicators, syntactic cues, and signals from a lightweight proxy solver. Unlike prior one-shot routers that rely solely on latent embeddings, LLMRank predicts per-model utility using a neural ranking model trained on RouterBench, comprising 36,497 prompts spanning 11 benchmarks and 11 state-of-the-art LLMs, from small efficient models to large frontier systems. Our approach achieves up to 89.2% of oracle utility, while providing interpretable feature attributions that explain routing decisions. Extensive studies demonstrate the importance of multifaceted feature extraction and the hybrid ranking objective, highlighting the potential of feature-driven routing for efficient and transparent LLM deployment.",0 "The field of AI-generated text detection has evolved from supervised classification to zero-shot statistical analysis. 
However, current approaches share a fundamental limitation: they aggregate token-level measurements into scalar scores, discarding positional information about where anomalies occur. Our empirical analysis reveals that AI-generated text exhibits significant non-stationarity: statistical properties vary by 73.8\% more between text segments than in human writing. This discovery explains why existing detectors fail against localized adversarial perturbations that exploit this overlooked characteristic. We introduce Temporal Discrepancy Tomography (TDT), a novel detection paradigm that preserves positional information by reformulating detection as a signal processing task. TDT treats token-level discrepancies as a time-series signal and applies the Continuous Wavelet Transform to generate a two-dimensional time-scale representation, capturing both the location and linguistic scale of statistical anomalies. On the RAID benchmark, TDT achieves 0.855 AUROC (7.1\% improvement over the best baseline). More importantly, TDT demonstrates robust performance on adversarial tasks, with a 14.1\% AUROC improvement on HART Level 2 paraphrasing attacks. Despite its sophisticated analysis, TDT maintains practical efficiency with only 13\% computational overhead. Our work establishes non-stationarity as a fundamental characteristic of AI-generated text and demonstrates that preserving temporal dynamics is essential for robust detection.",0 "Recent advances in Artificial Intelligence Generated Content have led to highly realistic synthetic videos, particularly in human-centric scenarios involving speech, gestures, and full-body motion, posing serious threats to information authenticity and public trust. Unlike DeepFake techniques that focus on localized facial manipulation, human-centric video generation methods can synthesize entire human bodies with controllable movements, enabling complex interactions with environments, objects, and even other people. However, existing detection methods largely overlook the growing risks posed by such full-body synthetic content. Meanwhile, a growing body of research has explored leveraging LLMs for interpretable fake detection, aiming to explain decisions in natural language. Yet these approaches heavily depend on supervised fine-tuning, which introduces limitations such as annotation bias, hallucinated supervision, and weakened generalization. To address these challenges, we propose AvatarShield, a novel multimodal human-centric synthetic video detection framework that eliminates the need for dense textual supervision by adopting Group Relative Policy Optimization, enabling LLMs to develop reasoning capabilities from simple binary labels. Our architecture combines a discrete vision tower for high-level semantic inconsistencies and a residual extractor for fine-grained artifact analysis. We further introduce FakeHumanVid, a large-scale benchmark containing 15K real and synthetic videos across nine state-of-the-art human generation methods driven by text, pose, or audio. Extensive experiments demonstrate that AvatarShield outperforms existing methods in both in-domain and cross-domain settings.",0 "Deep learning has become the de facto standard and dominant paradigm in image analysis tasks, achieving state-of-the-art performance. However, this approach often results in ""black-box"" models, whose decision-making processes are difficult to interpret, raising concerns about reliability in critical applications. 
To address this challenge and provide humans with a means to understand how AI models process information and make decisions, the field of xAI has emerged. This paper surveys four representative approaches in xAI for visual perception tasks: (i) Saliency Maps, (ii) Concept Bottleneck Models (CBM), (iii) Prototype-based methods, and (iv) Hybrid approaches. We analyze their underlying mechanisms, strengths and limitations, as well as evaluation metrics, thereby providing a comprehensive overview to guide future research and applications.",0 "Understanding irradiation-induced strain in silicon carbide (SiC) is essential for designing radiation-tolerant ceramic materials. However, conventional methods often fail to resolve nanoscale strain gradients, especially in polycrystalline forms. In this study, we employ nano-beam precession electron diffraction (N-PED) to perform high-resolution, multi-directional strain mapping in both single-crystal 4H-SiC and polycrystalline {\alpha}-SiC subjected to helium and hydrogen ion irradiation. The high-resolution X-ray diffraction (HR-XRD) simulations of He + H irradiated single-crystal 4H-SiC closely match the strain profiles obtained from N-PED, demonstrating the reliability and accuracy of the N-PED method. In He-irradiated polycrystalline {\alpha}-SiC at high temperatures, a bubble-depleted zone (BDZ) near the grain boundary (GB) reveals that GBs act as active sinks for irradiation-induced defects. N-PED further shows strain amplification localized at the GBs, reaching up to 2.5%, along with strain relief within the BDZ. To explain this behavior, density functional theory (DFT) calculations of binding and migration energies indicate a strong tendency for Si, C, and He atoms to segregate toward the GB core. This segregation reduces the availability of vacancies to accommodate He atoms and leads to local strain relaxation near the GB. Furthermore, first-principles tensile simulations reveal that Si and C interstitials mitigate He-induced GB embrittlement. Charge density and DOS analyses link this effect to the bonding characteristics between point defects and neighboring atoms at GB. These insights underscore the importance of grain boundary engineering in enhancing radiation tolerance of SiC for nuclear and space applications.",0 "Real-time threat monitoring identifies threatening behaviors in video streams and provides reasoning and assessment of threat events through explanatory text. However, prevailing methodologies, whether based on supervised learning or generative models, struggle to concurrently satisfy the demanding requirements of real-time performance and decision explainability. To bridge this gap, we introduce Live-E2T, a novel framework that unifies these two objectives through three synergistic mechanisms. First, we deconstruct video frames into structured Human-Object-Interaction-Place semantic tuples. This approach creates a compact, semantically focused representation, circumventing the information degradation common in conventional feature compression. Second, an efficient online event deduplication and updating mechanism is proposed to filter spatio-temporal redundancies, ensuring the system's real-time responsiveness. Finally, we fine-tune a Large Language Model using a Chain-of-Thought strategy, endowing it with the capability for transparent and logical reasoning over event sequences to produce coherent threat assessment reports. 
Extensive experiments on benchmark datasets, including XD-Violence and UCF-Crime, demonstrate that Live-E2T significantly outperforms state-of-the-art methods in terms of threat detection accuracy, real-time efficiency, and the crucial dimension of explainability.",0 "Superconducting radio-frequency (SRF) cavities are the leading technology for highly efficient particle acceleration, and their performance can be significantly enhanced through the controlled introduction of interstitial impurities into bulk niobium. Nitrogen doping has demonstrated a substantial reduction in surface resistance losses, which improves the quality factor of the cavities. More recently, oxygen doping has emerged as a promising alternative, demonstrating comparable reductions in surface resistance. In this study, we combine cavity measurements on \SI{1.3}{GHz} niobium SRF cavities subjected to a range of nitrogen- and oxygen-based treatments with material characterizations performed on cavity cutouts processed under identical conditions. This approach allows us to quantitatively assess the contribution of each impurity to the reduction of surface resistance. We find that nitrogen is ten times more effective than oxygen in reducing surface resistance at \SI{16}{MV/m}. We propose a model to explain this variation, suggesting that nitrogen more effectively traps hydrogen, thus suppressing the formation of niobium hydrides within the RF penetration layer and enabling an improved superconducting gap.",0 "The performance of machine learning models relies heavily on the quality of input data, yet real-world applications often face significant data-related challenges. A common issue arises when curating training data or deploying models: two datasets from the same domain may exhibit differing distributions. While many techniques exist for detecting such distribution shifts, there is a lack of comprehensive methods to explain these differences in a human-understandable way beyond opaque quantitative metrics. To bridge this gap, we propose a versatile framework of interpretable methods for comparing datasets. Using a variety of case studies, we demonstrate the effectiveness of our approach across diverse data modalities-including tabular data, text data, images, time-series signals -- in both low and high-dimensional settings. These methods complement existing techniques by providing actionable and interpretable insights to better understand and address distribution shifts.",0 "A novel hybrid method based on Mie theory and the Discrete Dipole Approximation (DDA) was developed to study the microscopic parameters governing the optical response of tunable photonic crystals (PC). The method is based on a two-step process. An effective polarizability derived from Mie theory is determined by equating the extinction efficiency of an isolated nanoparticle (NP) to the extinction efficiency of an equivalent particle considering the dipolar limit. Then, this effective polarizability is used in the DDA framework to compute the optical response of an interacting particle array constituting the PC structure. As a particular example, the method was applied to a linear array of core-shell magnetite@silica NPs to study the dependence of extinction and absorption on system parameters such as core radius, shell thickness, total radius, interparticle separation, and size distribution. The results indicate that an increase in these parameters leads to a redshift of the extinction peak as well as an increase in its $FWHM$. 
Finally, the method is applied to fitting experimental results on reflection/transmission measurements of magnetite@silica NPs colloids subjected to different magnetic field strengths with very good agreement. The presented method reduces the computational cost and time for the NPs sizes considered, and can be applied to PCs responsive to different stimuli such as mechanical stress, electric field and temperature, \textit{inter alia}.",0 "Graph Neural Networks (GNNs) are widely used for node classification, yet their opaque decision-making limits trust and adoption. While local explanations offer insights into individual predictions, global explanation methods, those that characterize an entire class, remain underdeveloped. Existing global explainers rely on motif discovery in small graphs, an approach that breaks down in large, real-world settings where subgraph repetition is rare, node attributes are high-dimensional, and predictions arise from complex structure-attribute interactions. We propose GnnXemplar, a novel global explainer inspired from Exemplar Theory from cognitive science. GnnXemplar identifies representative nodes in the GNN embedding space, exemplars, and explains predictions using natural language rules derived from their neighborhoods. Exemplar selection is framed as a coverage maximization problem over reverse k-nearest neighbors, for which we provide an efficient greedy approximation. To derive interpretable rules, we employ a self-refining prompt strategy using large language models (LLMs). Experiments across diverse benchmarks show that GnnXemplar significantly outperforms existing methods in fidelity, scalability, and human interpretability, as validated by a user study with 60 participants.",2 "Variable importance measures (VIMs) aim to quantify the contribution of each input covariate to the predictability of a given output. With the growing interest in explainable AI, numerous VIMs have been proposed, many of which are heuristic in nature. This is often justified by the inherent subjectivity of the notion of importance. This raises important questions regarding usage: What makes a good VIM? How can we compare different VIMs? In this paper, we address these questions by: (1) proposing an axiomatic framework that bridges the gap between variable importance and variable selection. This framework formalizes the intuitive principle that features providing no additional information should not be assigned importance. It helps avoid false positives due to spurious correlations, which can arise with popular methods such as Shapley values; and (2) introducing a general pipeline for constructing VIMs, which clarifies the objective of various VIMs and thus facilitates meaningful comparisons. This approach is natural in statistics, but the literature has diverged from it. Finally, we provide an extensive set of examples to guide practitioners in selecting and estimating appropriate indices aligned with their specific goals and data.",0 "Advanced cyber threats (e.g., Fileless Malware and Advanced Persistent Threat (APT)) have driven the adoption of provenance-based security solutions. These solutions employ Machine Learning (ML) models for behavioral modeling and critical security tasks such as malware and anomaly detection. However, the opacity of ML-based security models limits their broader adoption, as the lack of transparency in their decision-making processes restricts explainability and verifiability. 
We tailored our solution towards Graph Neural Network (GNN)-based security solutions since recent studies employ GNNs to comprehensively digest system provenance graphs for security-critical tasks. To enhance the explainability of GNN-based security models, we introduce PROVEXPLAINER, a framework offering instance-level security-aware explanations using an interpretable surrogate model. PROVEXPLAINER's interpretable feature space consists of discriminant subgraph patterns and graph structural features, which can be directly mapped to the system provenance problem space, making the explanations human interpretable. We show how PROVEXPLAINER synergizes with current state-of-the-art (SOTA) GNN explainers to deliver domain- and instance-specific explanations. We measure explanation quality using the Fidelity+/Fidelity- metrics standard in the GNN explanation literature; we incorporate precision/recall metrics, which assess the accuracy of the explanation against the ground truth; and we design a human actionability metric based on graph traversal distance. On real-world Fileless and APT datasets, PROVEXPLAINER achieves up to 29%/27%/25%/1.4x higher Fidelity+, precision, recall, and actionability (where higher values are better), and 12% lower Fidelity- (where lower values are better) when compared against SOTA GNN explainers.",0 "Emotions, which influence how convincing an argument is, are developed in the context of the self and sender, and therefore require modeling the cognitive evaluation process. While binary emotionality has been studied in argument mining, and the cognitive appraisal has been modeled in general emotion analysis, these fields have not been brought together yet. We therefore propose the Contextualized Argument Appraisal Framework that contextualizes the interplay between the sender, receiver, and argument. It includes emotion labels, appraisals, such as argument familiarity, response urgency, and expected effort, as well as convincingness variables. To evaluate the framework and pave the way to computational modeling, we perform a study in a role-playing scenario, mimicking real-world exposure to arguments, asking participants to disclose their emotion, explain the main cause, the argument appraisal, and the perceived convincingness. To consider the subjective nature of such annotations, we also collect demographic data and personality traits of both the participants and the perceived sender of the argument. The analysis of the resulting corpus of 800 arguments, each annotated by 5 participants, reveals that convincingness is positively correlated with positive emotions (e.g., trust) and negatively correlated with negative emotions (e.g., anger). The appraisal variables disclose the importance of the argument familiarity. For most participants, the content of the argument itself is the primary driver of the emotional response.",1 "Contract review is a complex and time-intensive task that typically demands specialized legal expertise, rendering it largely inaccessible to non-experts. Moreover, legal interpretation is rarely straightforward: ambiguity is pervasive, and judgments often hinge on subjective assessments. Compounding these challenges, contracts are usually confidential, restricting their use with proprietary models and necessitating reliance on open-source alternatives. To address these challenges, we introduce PAKTON: a fully open-source, end-to-end, multi-agent framework with plug-and-play capabilities. 
PAKTON is designed to handle the complexities of contract analysis through collaborative agent workflows and a novel retrieval-augmented generation (RAG) component, enabling automated legal document review that is more accessible, adaptable, and privacy-preserving. Experiments demonstrate that PAKTON outperforms both general-purpose and pretrained models in predictive accuracy, retrieval performance, explainability, completeness, and grounded justifications as evaluated through a human study and validated with automated metrics.",0 "Wearable systems can recognize activities from IMU data but often fail to explain their underlying causes or contextual significance. To address this limitation, we introduce two large-scale resources: SensorCap, comprising 35,960 IMU--caption pairs, and OpenSQA, with 199,701 question--answer pairs designed for causal and explanatory reasoning. OpenSQA includes a curated tuning split (Tune-OpenSQA) optimized for scientific accuracy, narrative clarity, and diagnostic insight. Leveraging these datasets, we develop LLaSA (Large Language and Sensor Assistant), a family of compact sensor-aware language models (7B and 13B) that generate interpretable, context-rich responses to open-ended questions grounded in raw IMU data. LLaSA outperforms commercial LLMs, including GPT-3.5 and GPT-4o-mini, on benchmark and real-world tasks, demonstrating the effectiveness of domain supervision and model alignment for sensor reasoning. Our code repository and datasets can be found at https://github.com/BASHLab/LLaSA.",0 "Modern deep neural networks have now reached human-level performance across a variety of tasks. However, unlike humans they lack the ability to explain their decisions by showing where and telling what concepts guided them. In this work, we present a unified framework for transforming any vision neural network into a spatially and conceptually interpretable model. We introduce a spatially-aware concept bottleneck layer that projects ""black-box"" features of pre-trained backbone models into interpretable concept maps, without requiring human labels. By training a classification layer over this bottleneck, we obtain a self-explaining model that articulates which concepts most influenced its prediction, along with heatmaps that ground them in the input image. Accordingly, we name this method ""Spatially-Aware and Label-Free Concept Bottleneck Model"" (SALF-CBM). Our results show that the proposed SALF-CBM: (1) Outperforms non-spatial CBM methods, as well as the original backbone, on a variety of classification tasks; (2) Produces high-quality spatial explanations, outperforming widely used heatmap-based methods on a zero-shot segmentation task; (3) Facilitates model exploration and debugging, enabling users to query specific image regions and refine the model's decisions by locally editing its concept maps.",0 "As machine learning systems increasingly inform critical decisions, the need for human-understandable explanations grows. Current evaluations of Explainable AI (XAI) often prioritize technical fidelity over cognitive accessibility, which critically affects users, in particular those with visual impairments. We propose CUE, a model for Cognitive Understanding of Explanations, linking explanation properties to cognitive sub-processes: legibility (perception), readability (comprehension), and interpretability (interpretation). 
In a study (N=455) testing heatmaps with varying colormaps (BWR, Cividis, Coolwarm), we found comparable task performance but lower confidence/effort for visually impaired users. Contrary to expectations, these gaps were not mitigated, and were sometimes worsened, by accessibility-focused colormaps like Cividis. These results challenge assumptions about perceptual optimization and support the need for adaptive XAI interfaces. They also validate CUE by demonstrating that altering explanation legibility affects understandability. We contribute: (1) a formalized cognitive model for explanation understanding, (2) an integrated definition of human-centered explanation properties, and (3) empirical evidence motivating accessible, user-tailored XAI.",0 "An important line of research attempts to explain CNN image classifier predictions and intermediate layer representations in terms of human-understandable concepts. Previous work supports that deep representations are linearly separable with respect to their concept label, implying that the feature space has directions onto which intermediate representations may be projected to become more understandable. These directions are called interpretable, and when considered as a set, they may form an interpretable feature space basis. Compared to previous top-down probing approaches which use concept annotations to identify the interpretable directions one at a time, in this work, we take a bottom-up approach, identifying the directions from the structure of the feature space, collectively, without relying on supervision from concept labels. Instead, we learn the directions by optimizing for a sparsity property that holds for any interpretable basis. We experiment with existing popular CNNs and demonstrate the effectiveness of our method in extracting an interpretable basis across network architectures and training datasets. We make extensions to existing basis interpretability metrics and show that intermediate layer representations become more interpretable when transformed with the extracted bases. Finally, we compare the bases extracted with our method with the bases derived with supervision and find that, in one aspect, unsupervised basis extraction has a strength that constitutes a limitation of learning the basis with supervision, and we provide potential directions for future research.",0 "In Nigeria, electoral behavior is often interpreted through ethno-religious views and regional allegiances, without empirically assessing the influence of socioeconomic indicators such as health, income, education, and other deprivations on voter behavior. This study investigates the voting pattern in the 2023 Nigerian presidential election and previous cycles using spatio-temporal and multivariate analysis. It examines whether support for some candidates was more substantial in states with higher Human Development Index (HDI) and whether this alignment impacts governance quality and macroeconomic performance post-election. Socioeconomic data were obtained from the Global Data Lab and the Nigerian Bureau of Statistics (NBS) for the year preceding each election, while presidential vote results were sourced from the Independent National Electoral Commission (INEC). The results show that the Labour Party (LP) dominated states with high socioeconomic indices, accounting for about 30-53% of the variance in voting patterns of LP and People's Democratic Party (PDP), while the All Progressive Congress (APC) variance is less explained by these factors. 
A multinomial logit model based on HDI was used to predict party win probabilities; the model predicted about 60% of wins accurately. Comparative analysis of four presidential cycles revealed that in 2011, the winner had an HDI-vote correlation of 0.44, with improved macroeconomic indices post-election. In contrast, 2015, 2019, and 2023 saw negative correlations of -0.38, -0.43, and -0.34, respectively, alongside macroeconomic decline. The findings suggest that socioeconomic development shapes political preferences, promotes issue-based politics, and supports quality leadership; therefore, strengthening education, healthcare, and poverty reduction should be prioritized to enhance citizens' well-being and build an informed electorate.",1 "Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes in a manner that is comprehensible to human beings at an acceptable level, has been deemed essential for critical sectors, such as healthcare. Is it really the case? In this perspective, we consider two extreme cases, ``Oracle'' (without explainability) versus ``AI Colleague'' (with explainability) for a thorough analysis. We discuss how the level of automation and explainability of AIS can affect the determination of liability among the medical practitioner/facility and manufacturer of AIS. We argue that explainability plays a crucial role in setting a responsibility framework in healthcare, from a legal standpoint, to shape the behavior of all involved parties and mitigate the risk of potential defensive medicine practices.",0 "Consistent high-quality nursing care is essential for patient safety, yet current nursing education depends on subjective, time-intensive instructor feedback to train future nurses, which limits the scalability and efficiency of their training and thus hampers nursing competency when they enter the workforce. In this paper, we introduce a video-language model (VLM) based framework to develop the AI capability of automated procedural assessment and feedback for nursing skills training, with the potential of being integrated into existing training programs. Mimicking human skill acquisition, the framework follows a curriculum-inspired progression, advancing from high-level action recognition, through fine-grained subaction decomposition, and ultimately to procedural reasoning. This design supports scalable evaluation by reducing instructor workload while preserving assessment quality. The system provides three core capabilities: 1) diagnosing errors by identifying missing or incorrect subactions in nursing skill instruction videos, 2) generating explainable feedback by clarifying why a step is out of order or omitted, and 3) enabling objective, consistent formative evaluation of procedures. Validation on synthesized videos demonstrates reliable error detection and temporal localization, confirming its potential to handle real-world training variability. By addressing workflow bottlenecks and supporting large-scale, standardized evaluation, this work advances AI applications in nursing education, contributing to stronger workforce development and ultimately safer patient care.",2 "Visual Question Answering (VQA) is increasingly used in diverse applications ranging from general visual reasoning to safety-critical domains such as medical imaging and autonomous systems, where models must provide not only accurate answers but also explanations that humans can easily understand and verify. 
Prototype-based modeling has shown promise for interpretability by grounding predictions in semantically meaningful regions for purely visual reasoning tasks, yet remains underexplored in the context of VQA. We present ProtoVQA, a unified prototypical framework that (i) learns question-aware prototypes that serve as reasoning anchors, connecting answers to discriminative image regions, (ii) applies spatially constrained matching to ensure that the selected evidence is coherent and semantically relevant, and (iii) supports both answering and grounding tasks through a shared prototype backbone. To assess explanation quality, we propose the Visual-Linguistic Alignment Score (VLAS), which measures how well the model's attended regions align with ground-truth evidence. Experiments on Visual7W show that ProtoVQA yields faithful, fine-grained explanations while maintaining competitive accuracy, advancing the development of transparent and trustworthy VQA systems.",0 "Large language models (LLMs) have demonstrated promising performance on medical benchmarks; however, their ability to perform medical calculations, a crucial aspect of clinical decision-making, remains underexplored and poorly evaluated. Existing benchmarks often assess only the final answer with a wide numerical tolerance, overlooking systematic reasoning failures and potentially causing serious clinical misjudgments. In this work, we revisit medical calculation evaluation with a stronger focus on clinical trustworthiness. First, we clean and restructure the MedCalc-Bench dataset and propose a new step-by-step evaluation pipeline that independently assesses formula selection, entity extraction, and arithmetic computation. Under this granular framework, the accuracy of GPT-4o drops from 62.7% to 43.6%, revealing errors masked by prior evaluations. Second, we introduce an automatic error analysis framework that generates structured attribution for each failure mode. Human evaluation confirms its alignment with expert judgment, enabling scalable and explainable diagnostics. Finally, we propose a modular agentic pipeline, MedRaC, that combines retrieval-augmented generation and Python-based code execution. Without any fine-tuning, MedRaC improves the accuracy of different LLMs from 16.35% up to 53.19%. Our work highlights the limitations of current benchmark practices and proposes a more clinically faithful methodology. By enabling transparent and transferable reasoning evaluation, we move closer to making LLM-based systems trustworthy for real-world medical applications.",1 "Recent black-box counterfactual generation frameworks fail to take into account the semantic content of the proposed edits, while relying heavily on training to guide the generation process. We propose a novel, plug-and-play black-box counterfactual generation framework, which suggests step-by-step edits based on theoretical guarantees of optimal edits to produce human-level counterfactual explanations with zero training. Our framework utilizes a pre-trained image editing diffusion model, and operates without access to the internals of the classifier, leading to an explainable counterfactual generation process. 
Throughout our experimentation, we showcase the explanatory gap between human reasoning and neural model behavior by utilizing Convolutional Neural Network (CNN), Vision Transformer (ViT), and Large Vision Language Model (LVLM) classifiers, substantiated through a comprehensive human evaluation.",0 "Trust is central to human social interactions, manifesting in actions that make one vulnerable to another. We argue that trust will thus depend on the decision-making processes that arise in neural systems. Building on advances in the cognitive neuroscience of decision making, we propose a mechanistic model of trust arising from multiple parallel systems that perform distinct, complementary information processing. Because each system learns via different mechanisms, trust can be created (or destroyed) in multiple ways. This systems-level taxonomy of information representations provides a principled basis for differentiating forms of trust, linking them to specific learning processes, and generating testable predictions about their expression in behavior. By situating trust within a broader theory of neural decision systems, our account unifies diverse findings across psychology, neuroscience, and the social sciences, and offers a foundation for explaining how humans develop, maintain, and repair trust in a complex social world.",0 "Autonomous navigation in maritime domains is accelerating alongside advances in artificial intelligence, sensing, and connectivity. Opaque decision-making and poorly calibrated human-automation interaction remain key barriers to safe adoption. This article synthesizes 100 studies on automation transparency for Maritime Autonomous Surface Ships (MASS) spanning situation awareness (SA), human factors, interface design, and regulation. We (i) map the Guidance-Navigation-Control stack to shore-based operational modes -- remote supervision (RSM) and remote control (RCM) -- and identify where human unsafe control actions (Human-UCAs) concentrate in handover and emergency loops; (ii) summarize evidence that transparency features (decision rationales, alternatives, confidence/uncertainty, and rule-compliance indicators) improve understanding and support trust calibration, though reliability and predictability often dominate trust; (iii) distill design strategies for transparency at three layers: sensor/SA acquisition and fusion, HMI/eHMI presentation (textual/graphical overlays, color coding, conversational and immersive UIs), and engineer-facing processes (resilient interaction design, validation, and standardization). We integrate methods for Human-UCA identification (STPA-Cog + IDAC), quantitative trust/SA assessment, and operator workload monitoring, and outline regulatory and rule-based implications including COLREGs formalization and route exchange. We conclude with an adaptive transparency framework that couples operator state estimation with explainable decision support to reduce cognitive overload and improve takeover timeliness. The review highlights actionable figure-of-merit displays (e.g., CPA/TCPA risk bars, robustness heatmaps), transparent model outputs (rule traceability, confidence), and training pipelines (HIL/MIL, simulation) as near-term levers for safer MASS operations.",0 "Understanding how built environments shape human experience is central to designing sustainable cities. Cycling provides a critical case: it delivers health and environmental benefits, yet its uptake depends strongly on the experience of cycling rather than infrastructure alone. 
Research on this relationship has grown rapidly but remains fragmented across disciplines and scales, and has concentrated on network-level analyses of routes and connectivity. This bias is especially problematic in historical cities, where embedding new infrastructure is difficult, and where cycling experience is shaped not only by spatial form but also by how cyclists perceive, interpret, and physically respond to their environment - through psychological factors such as safety and comfort, physiological demands such as stress and fatigue, and perceptual cues in the streetscape. We systematically reviewed 68 studies across urban planning, transportation, behavioural science, neuroscience, and public health. Two scales of analysis were identified: a macro scale addressing the ability to cycle and a micro scale addressing the propensity to cycle. Methods were classified into objective and subjective approaches, with hybrid approaches beginning to emerge. We find a persistent reliance on objective proxies, limited integration of subjective accounts, and insufficient attention to the streetscape as a lived environment. Addressing these gaps is essential to explain why environments enable or deter cycling, and to inform the design of cities that support cycling as both mobility and lived experience.",0 "The diffusion of micro- and nano-swimmers in a fluid, confined within irregular structures that impose entropic barriers, is often modeled using overdamped active Brownian dynamics, where viscous effects are paramount and inertia is negligible. Here, we numerically investigate the diffusive behavior of chiral self-propelled particles in a two-dimensional asymmetric channel subjected to an external torque arising from a gravitational field. We reveal the emergence of resonant diffusion when the external torque $\omega$ approaches the intrinsic angular velocity $\omega_{0}$ of particles. This resonance manifests as a pronounced accumulation of particles near the upper-left corner of the channel, accompanied by an enhanced peak in the effective diffusion coefficient. In particular, it is observed only for low rotational diffusion rates and does not persist beyond moderate values of $\omega_{0}$. Prominent transport features, such as rectification at low values of $\omega$, a monotonic increase in average velocity with $\omega$, and a nonmonotonic response of transport characteristics (average velocity and effective diffusion coefficient) as a function of the rotational diffusion rate near the resonance point, are explained. Furthermore, we show that the transport characteristics depend strongly on the aspect ratio of the channel. For instance, the enhanced diffusion peak becomes more pronounced with increasing aspect ratio, and the average velocity saturates at higher values for wider bottleneck openings. It is conceivable that these findings have a great potential for developing microfluidic and lab-on-a-chip devices for particle separation, targeted drug delivery, and advanced active materials.",0 "WARNING: This paper contains examples of offensive materials. Toxic content has become pervasive on social media platforms. We introduce SMARTER, a data-efficient two-stage framework for explainable content moderation using Large Language Models (LLMs). In Stage 1, we leverage LLMs' own outputs to generate synthetic explanations for both correct and incorrect labels, enabling alignment via preference optimization with minimal human supervision. 
In Stage 2, we refine explanation quality through cross-model training, allowing weaker models to align stylistically and semantically with stronger ones. Experiments on three benchmark tasks -- HateXplain, Latent Hate, and Implicit Hate -- demonstrate that SMARTER enables LLMs to achieve up to a 13.5% macro-F1 improvement over standard few-shot baselines while using only a fraction of the full training data. Our framework offers a scalable strategy for low-resource settings by harnessing LLMs' self-improving capabilities for both classification and explanation.",0 "As autonomous technologies increasingly shape maritime operations, understanding why an AI system makes a decision becomes as crucial as what it decides. In complex and dynamic maritime environments, trust in AI depends not only on performance but also on transparency and interpretability. This paper highlights the importance of Explainable AI (XAI) as a foundation for effective human-machine teaming in the maritime domain, where informed oversight and shared understanding are essential. To support the user-centered integration of XAI, we propose a domain-specific survey designed to capture maritime professionals' perceptions of trust, usability, and explainability. Our aim is to foster awareness and guide the development of user-centric XAI systems tailored to the needs of seafarers and maritime teams.",2 "With the increasing prevalence of synthetic images, evaluating image authenticity and locating forgeries accurately while maintaining human interpretability remains a challenging task. Existing detection models primarily focus on simple authenticity classification, ultimately providing only a forgery probability or binary judgment, which offers limited explanatory insights into image authenticity. Moreover, while MLLM-based detection methods can provide more interpretable results, they still lag behind expert models in terms of pure authenticity classification accuracy. To address this, we propose DF-LLaVA, a simple yet effective framework that unlocks the intrinsic discrimination potential of MLLMs. Our approach first extracts latent knowledge from MLLMs and then injects it into training via prompts. This framework allows LLaVA to achieve outstanding detection accuracy exceeding expert models while still maintaining the interpretability offered by MLLMs. Extensive experiments confirm the superiority of our DF-LLaVA, achieving both high accuracy and explainability in synthetic image detection. Code is available online at: https://github.com/Eliot-Shen/DF-LLaVA.",0 "Reasoning is central to purposeful action, yet most robotic foundation models map perception and instructions directly to control, which limits adaptability, generalization, and semantic grounding. We introduce Action Reasoning Models (ARMs), a class of robotic foundation models that integrate perception, planning, and control through a structured three-stage pipeline. Our model, MolmoAct, encodes observations and instructions into depth-aware perception tokens, generates mid-level spatial plans as editable trajectory traces, and predicts precise low-level actions, enabling explainable and steerable behavior. 
MolmoAct-7B-D achieves strong performance across simulation and real-world settings: 70.5% zero-shot accuracy on SimplerEnv Visual Matching tasks, surpassing closed-source Pi-0 and GR00T N1.5; 86.6% average success on LIBERO, including an additional 6.3% gain over ThinkAct on long-horizon tasks; and in real-world fine-tuning, an additional 10% (single-arm) and an additional 22.7% (bimanual) task progression over Pi-0-FAST. It also outperforms baselines by an additional 23.3% on out-of-distribution generalization and achieves top human-preference scores for open-ended instruction following and trajectory steering. Furthermore, we release, for the first time, the MolmoAct Dataset -- a mid-training robot dataset comprising over 10,000 high quality robot trajectories across diverse scenarios and tasks. Training with this dataset yields an average 5.5% improvement in general performance over the base model. We release all model weights, training code, our collected dataset, and our action reasoning dataset, establishing MolmoAct as both a state-of-the-art robotics foundation model and an open blueprint for building ARMs that transform perception into purposeful action through structured reasoning. Blogpost: https://allenai.org/blog/molmoact",0 "With the rapid development of Internet technologies, web systems have become essential infrastructures for modern information exchange and business operations. However, alongside their expansion, numerous security vulnerabilities have emerged, making web security a critical research focus within the broader field of cybersecurity. These issues are closely related to data protection, privacy preservation, and business continuity, and systematic research on web security is crucial for mitigating malicious attacks and enhancing the reliability and robustness of network systems. This paper first reviews the OWASP Top 10, summarizing the types, causes, and impacts of common web vulnerabilities, and illustrates their exploitation mechanisms through representative cases. Building upon this, the Gruyere platform is adopted as an experimental subject for analyzing known vulnerabilities. The study presents detailed reproduction steps for specific vulnerabilities, proposes comprehensive remediation strategies, and further compares Gruyere's vulnerabilities with contemporary real-world cases. The findings suggest that, although Gruyere's vulnerabilities are relatively outdated, their underlying principles remain highly relevant for explaining a wide range of modern security flaws. Overall, this research demonstrates that web system security analysis based on Gruyere not only deepens the understanding of vulnerability mechanisms but also provides practical support for technological innovation and security defense.",0 "Effort estimation is a crucial activity in agile software development, where teams collaboratively review, discuss, and estimate the effort required to complete user stories in a product backlog. Current practices in agile effort estimation heavily rely on subjective assessments, leading to inaccuracies and inconsistencies in the estimates. While recent machine learning-based methods show promising accuracy, they cannot explain or justify their estimates and lack the capability to interact with human team members. Our paper fills this significant gap by leveraging the powerful capabilities of Large Language Models (LLMs). 
We propose a novel LLM-based multi-agent framework for agile estimation that not only can produce estimates, but also can coordinate, communicate and discuss with human developers and other agents to reach a consensus. Evaluation results on a real-life dataset show that our approach outperforms state-of-the-art techniques across all evaluation metrics in the majority of the cases. Our human study with software development practitioners also demonstrates an overwhelmingly positive experience in collaborating with our agents in agile effort estimation.",0 "Accurate staging of Diabetic Retinopathy (DR) is essential for guiding timely interventions and preventing vision loss. However, current staging models are hardly interpretable, and most public datasets contain no clinical reasoning or interpretation beyond image-level labels. In this paper, we present a novel method that integrates graph representation learning with vision-language models (VLMs) to deliver explainable DR diagnosis. Our approach leverages optical coherence tomography angiography (OCTA) images by constructing biologically informed graphs that encode key retinal vascular features such as vessel morphology and spatial connectivity. A graph neural network (GNN) then performs DR staging while integrated gradients highlight critical nodes and edges and their individual features that drive the classification decisions. We collect this graph-based knowledge which attributes the model's prediction to physiological structures and their characteristics. We then transform it into textual descriptions for VLMs. We perform instruction-tuning with these textual descriptions and the corresponding image to train a student VLM. This final agent can classify the disease and explain its decision in a human interpretable way solely based on a single image input. Experimental evaluations on both proprietary and public datasets demonstrate that our method not only improves classification accuracy but also offers more clinically interpretable results. An expert study further demonstrates that our method provides more accurate diagnostic explanations and paves the way for precise localization of pathologies in OCTA images.",0 "We prove a fundamental impossibility theorem: neural networks cannot simultaneously learn well-calibrated confidence estimates with meaningful diversity when trained using binary correct/incorrect supervision. Through rigorous mathematical analysis and comprehensive empirical evaluation spanning negative reward training, symmetric loss functions, and post-hoc calibration methods, we demonstrate this is an information-theoretic constraint, not a methodological failure. Our experiments reveal universal failure patterns: negative rewards produce extreme underconfidence (ECE greater than 0.8) while destroying confidence diversity (std less than 0.05), symmetric losses fail to escape binary signal averaging, and post-hoc methods achieve calibration (ECE less than 0.02) only by compressing the confidence distribution. We formalize this as an underspecified mapping problem where binary signals cannot distinguish between different confidence levels for correct predictions: a 60 percent confident correct answer receives identical supervision to a 90 percent confident one. 
Crucially, our real-world validation shows 100 percent failure rate for all training methods across MNIST, Fashion-MNIST, and CIFAR-10, while post-hoc calibration's 33 percent success rate paradoxically confirms our theorem by achieving calibration through transformation rather than learning. This impossibility directly explains neural network hallucinations and establishes why post-hoc calibration is mathematically necessary, not merely convenient. We propose novel supervision paradigms using ensemble disagreement and adaptive multi-agent learning that could overcome these fundamental limitations without requiring human confidence annotations.",0 "Large language models (LLMs) can lead to undesired consequences when misaligned with human values, especially in scenarios involving complex and sensitive social biases. Previous studies have revealed the misalignment of LLMs with human values using expert-designed or agent-based emulated bias scenarios. However, it remains unclear whether the alignment of LLMs with human values differs across different types of scenarios (e.g., scenarios containing negative vs. non-negative questions). In this study, we investigate the alignment of LLMs with human values regarding social biases (HVSB) in different types of bias scenarios. Through extensive analysis of 12 LLMs from four model families and four datasets, we demonstrate that LLMs with large model parameter scales do not necessarily have lower misalignment rate and attack success rate. Moreover, LLMs show a certain degree of alignment preference for specific types of scenarios and the LLMs from the same model family tend to have higher judgment consistency. In addition, we study the understanding capacity of LLMs with their explanations of HVSB. We find no significant differences in the understanding of HVSB across LLMs. We also find LLMs prefer their own generated explanations. Additionally, we endow smaller language models (LMs) with the ability to explain HVSB. The generation results show that the explanations generated by the fine-tuned smaller LMs are more readable, but have a relatively lower model agreeability.",0 "DNN-based language models excel across various NLP tasks but remain highly vulnerable to textual adversarial attacks. While adversarial text generation is crucial for NLP security, explainability, evaluation, and data augmentation, related work remains overwhelmingly English-centric, leaving the problem of constructing high-quality and sustainable adversarial robustness benchmarks for lower-resourced languages both difficult and understudied. First, method customization for lower-resourced languages is complicated due to linguistic differences and limited resources. Second, automated attacks are prone to generating invalid or ambiguous adversarial texts. Last but not least, language models continuously evolve and may be immune to parts of previously generated adversarial texts. To address these challenges, we introduce HITL-GAT, an interactive system based on a general approach to human-in-the-loop generation of adversarial texts. Additionally, we demonstrate the utility of HITL-GAT through a case study on Tibetan script, employing three customized adversarial text generation methods and establishing its first adversarial robustness benchmark, providing a valuable reference for other lower-resourced languages.",0 "Comparing information structures in between deep neural networks (DNNs) and the human brain has become a key method for exploring their similarities and differences. 
Recent research has shown better alignment of vision-language DNN models, such as CLIP, with the activity of the human ventral occipitotemporal cortex (VOTC) than earlier vision models, supporting the idea that language modulates human visual perception. However, interpreting the results from such comparisons is inherently limited due to the ""black box"" nature of DNNs. To address this, we combined model-brain fitness analyses with human brain lesion data to examine how disrupting the communication pathway between the visual and language systems causally affects the ability of vision-language DNNs to explain the activity of the VOTC. Across four diverse datasets, CLIP consistently captured unique variance in VOTC neural representations, relative to both label-supervised (ResNet) and unsupervised (MoCo) models. This advantage tended to be left-lateralized at the group level, aligning with the human language network. Analyses of 33 stroke patients revealed that reduced white matter integrity between the VOTC and the language region in the left angular gyrus was correlated with decreased CLIP-brain correspondence and increased MoCo-brain correspondence, indicating a dynamic influence of language processing on the activity of the VOTC. These findings support the integration of language modulation in neurocognitive models of human vision, reinforcing concepts from vision-language DNN models. The sensitivity of model-brain similarity to specific brain lesions demonstrates that leveraging manipulation of the human brain is a promising framework for evaluating and developing brain-like computer models.",2 "Generating regulatorily compliant Suspicious Activity Report (SAR) remains a high-cost, low-scalability bottleneck in Anti-Money Laundering (AML) workflows. While large language models (LLMs) offer promising fluency, they suffer from factual hallucination, limited crime typology alignment, and poor explainability -- posing unacceptable risks in compliance-critical domains. This paper introduces Co-Investigator AI, an agentic framework optimized to produce Suspicious Activity Reports (SARs) significantly faster and with greater accuracy than traditional methods. Drawing inspiration from recent advances in autonomous agent architectures, such as the AI Co-Scientist, our approach integrates specialized agents for planning, crime type detection, external intelligence gathering, and compliance validation. The system features dynamic memory management, an AI-Privacy Guard layer for sensitive data handling, and a real-time validation agent employing the Agent-as-a-Judge paradigm to ensure continuous narrative quality assurance. Human investigators remain firmly in the loop, empowered to review and refine drafts in a collaborative workflow that blends AI efficiency with domain expertise. We demonstrate the versatility of Co-Investigator AI across a range of complex financial crime scenarios, highlighting its ability to streamline SAR drafting, align narratives with regulatory expectations, and enable compliance teams to focus on higher-order analytical work. This approach marks the beginning of a new era in compliance reporting -- bringing the transformative benefits of AI agents to the core of regulatory processes and paving the way for scalable, reliable, and transparent SAR generation.",0 "Scientific claim verification against tables typically requires predicting whether a claim is supported or refuted given a table. 
However, we argue that predicting the final label alone is insufficient: it reveals little about the model's reasoning and offers limited interpretability. To address this, we reframe table-text alignment as an explanation task, requiring models to identify the table cells essential for claim verification. We build a new dataset by extending the SciTab benchmark with human-annotated cell-level rationales. 20 Annotators verify the claim label and highlight the minimal set of cells needed to support their decision. After the annotation process, we utilize the collected information and propose a taxonomy for handling ambiguous cases. Our experiments show that (i) incorporating table alignment information improves claim verification performance, and (ii) most LLMs, while often predicting correct labels, fail to recover human-aligned rationales, suggesting that their predictions do not stem from faithful reasoning.",1 "Chemists in search of structure-property relationships face great challenges due to limited high quality, concordant datasets. Machine learning (ML) has significantly advanced predictive capabilities in chemical sciences, but these modern data-driven approaches have increased the demand for data. In response to the growing demand for explainable AI (XAI) and to bridge the gap between predictive accuracy and human comprehensibility, we introduce LAMeL - a Linear Algorithm for Meta-Learning that preserves interpretability while improving the prediction accuracy across multiple properties. While most approaches treat each chemical prediction task in isolation, LAMeL leverages a meta-learning framework to identify shared model parameters across related tasks, even if those tasks do not share data, allowing it to learn a common functional manifold that serves as a more informed starting point for new unseen tasks. Our method delivers performance improvements ranging from 1.1- to 25-fold over standard ridge regression, depending on the domain of the dataset. While the degree of performance enhancement varies across tasks, LAMeL consistently outperforms or matches traditional linear methods, making it a reliable tool for chemical property prediction where both accuracy and interpretability are critical.",0 "AI-readiness describes the degree to which data may be optimally and ethically used for subsequent AI and Machine Learning (AI/ML) methods, where those methods may involve some combination of model training, data classification, and ethical, explainable prediction. The Bridge2AI consortium has defined the particular criteria a biomedical dataset may possess to render it AI-ready: in brief, a dataset's readiness is related to its FAIRness, provenance, degree of characterization, explainability, sustainability, and computability, in addition to its accompaniment with documentation about ethical data practices. To ensure AI-readiness and to clarify data structure and relationships within Bridge2AI's Grand Challenges (GCs), particular types of metadata are necessary. The GCs within the Bridge2AI initiative include four data-generating projects focusing on generating AI/ML-ready datasets to tackle complex biomedical and behavioral research problems. These projects develop standardized, multimodal data, tools, and training resources to support AI integration, while addressing ethical data practices. 
Examples include using voice as a biomarker, building interpretable genomic tools, modeling disease trajectories with diverse multimodal data, and mapping cellular and molecular health indicators across the human body. This report assesses the state of metadata creation and standardization in the Bridge2AI GCs, provides guidelines where required, and identifies gaps and areas for improvement across the program. New projects, including those outside the Bridge2AI consortium, would benefit from what we have learned about creating metadata as part of efforts to promote AI readiness.",0 "Large Language Models (LLMs) have demonstrated impressive performance in complex text generation tasks. However, the contribution of the input prompt to the generated content still remains obscure to humans, underscoring the necessity of understanding the causality between input and output pairs. Existing works on prompt-specific explanation often confine the model output to classification or next-word prediction. The few initial attempts to explain the entire language generation often treat input prompt texts independently, ignoring their combinatorial effects on the follow-up generation. In this study, we introduce a counterfactual explanation framework based on Joint Prompt Attribution, JoPA, which aims to explain how a few prompt texts collaboratively influence the LLM's complete generation. In particular, we formulate the task of prompt attribution for generation interpretation as a combinatorial optimization problem, and introduce a probabilistic algorithm to search for the causal input combination in the discrete space. We define and utilize multiple metrics to evaluate the produced explanations, demonstrating both the faithfulness and efficiency of our framework.",0 "Shapes of cognition is a new conceptual paradigm for the computational cognitive modeling of Language-Endowed Intelligent Agents (LEIAs). Shapes are remembered constellations of sensory, linguistic, conceptual, episodic, and procedural knowledge that allow agents to cut through the complexity of real life the same way as people do: by expecting things to be typical, recognizing patterns, acting by habit, reasoning by analogy, satisficing, and generally minimizing cognitive load to the degree situations permit. Atypical outcomes are treated using shapes-based recovery methods, such as learning on the fly, asking a human partner for help, or seeking an actionable, even if imperfect, situational understanding. Although shapes is an umbrella term, it is not vague: shapes-based modeling involves particular objectives, hypotheses, modeling strategies, knowledge bases, and actual models of wide-ranging phenomena, all implemented within a particular cognitive architecture. Such specificity is needed both to vet our hypotheses and to achieve our practical aims of building useful agent systems that are explainable, extensible, and worthy of our trust, even in critical domains. However, although the LEIA example of shapes-based modeling is specific, the principles can be applied more broadly, giving new life to knowledge-based and hybrid AI.",0 "This paper introduces HARMONIC, a cognitive-robotic architecture designed for robots in human-robotic teams. HARMONIC supports semantic perception interpretation, human-like decision-making, and intentional language communication. It addresses the issues of safety and quality of results; aims to solve problems of data scarcity, explainability, and safety; and promotes transparency and trust. 
Two proof-of-concept HARMONIC-based robotic systems are demonstrated, each implemented in both a high-fidelity simulation environment and on physical robotic platforms.",0 "Agentic workflows promise efficiency, but adoption hinges on whether people actually trust systems that act on their behalf. We present DoubleAgents, an agentic planning tool that embeds transparency and control through user intervention, value-reflecting policies, rich state visualizations, and uncertainty flagging for human coordination tasks. A built-in respondent simulation generates realistic scenarios, allowing users to rehearse, refine policies, and calibrate their reliance before live use. We evaluate DoubleAgents in a two-day lab study (n=10), two deployments (n=2), and a technical evaluation. Results show that participants initially hesitated to delegate but grew more reliant as they experienced transparency, control, and adaptive learning during simulated cases. Deployment results demonstrate DoubleAgents' real-world relevance and usefulness, showing that the effort required scaled appropriately with task complexity and contextual data. We contribute trust-by-design patterns and mechanisms for proactive AI -- consistency, controllability, and explainability -- along with simulation as a safe path to build and calibrate trust over time.",1 "Concept-based interpretability for Convolutional Neural Networks (CNNs) aims to align internal model representations with high-level semantic concepts, but existing approaches largely overlook the semantic roles of individual filters and the dynamic propagation of concepts across layers. To address these limitations, we propose ConceptFlow, a concept-based interpretability framework that simulates the internal ""thinking path"" of a model by tracing how concepts emerge and evolve across layers. ConceptFlow comprises two key components: (i) concept attentions, which associate each filter with relevant high-level concepts to enable localized semantic interpretation, and (ii) conceptual pathways, derived from a concept transition matrix that quantifies how concepts propagate and transform between filters. Together, these components offer a unified and structured view of internal model reasoning. Experimental results demonstrate that ConceptFlow yields semantically meaningful insights into model reasoning, validating the effectiveness of concept attentions and conceptual pathways in explaining decision behavior. By modeling hierarchical conceptual pathways, ConceptFlow provides deeper insight into the internal logic of CNNs and supports the generation of more faithful and human-aligned explanations.",0 "Since the earliest proposals for artificial neural network (ANN) models of the mind and brain, critics have pointed out key weaknesses in these models compared to human cognitive abilities. Here we review recent work that uses metalearning to overcome several classic challenges, which we characterize as addressing the Problem of Incentive and Practice -- that is, providing machines with both incentives to improve specific skills and opportunities to practice those skills. This explicit optimization contrasts with more conventional approaches that hope the desired behaviour will emerge through optimizing related but different objectives. We review applications of this principle to addressing four classic challenges for ANNs: systematic generalization, catastrophic forgetting, few-shot learning and multi-step reasoning. 
We also discuss how large language models incorporate key aspects of this metalearning framework (namely, sequence prediction with feedback trained on diverse data), which helps to explain some of their successes on these classic challenges. Finally, we discuss the prospects for understanding aspects of human development through this framework, and whether natural environments provide the right incentives and practice for learning how to make challenging generalizations.",0 "Enormous attention and resources are being devoted to the quest for artificial general intelligence and, even more ambitiously, artificial superintelligence. We wonder about the implications for our methodological research, which aims to help decision makers cope with what econometricians call identification problems, inferential problems in empirical research that do not diminish as sample size grows. Of particular concern are missing data problems in prediction and treatment choice. Essentially all data collection intended to inform decision making is subject to missing data, which gives rise to identification problems. Thus far, we see no indication that the current dominant architecture of machine learning (ML)-based artificial intelligence (AI) systems will outperform humans in this context. In this paper, we explain why we have reached this conclusion and why we see the missing data problem as a cautionary case study in the quest for superintelligence more generally. We first discuss the concept of intelligence, before presenting a decision-theoretic perspective that formalizes the connection between intelligence and identification problems. We next apply this perspective to two leading cases of missing data problems. Then we explain why we are skeptical that AI research is currently on a path toward machines doing better than humans at solving these identification problems.",0 "This research is part of a study of a real-time, cloud-based on-street parking service using crowd-sourced in-vehicle fleet data. The service provides real-time information about available parking spots by classifying crowd-sourced detections observed via ultrasonic sensors. The goal of this research is to optimize the current parking service quality by analyzing the automation of the existing test process for ground truth tests. Therefore, methods from the field of machine learning, especially image pattern recognition, are applied to enrich the database and substitute human engineering work in major areas of the analysis process. After an introduction into the related areas of machine learning, this paper explains the methods and implementations made to achieve a high level of automation, applying convolutional neural networks. Finally, predefined metrics present the performance level achieved, showing a time reduction of human resources up to 99.58 %. The overall improvements are discussed, summarized, and followed by an outlook for future development and potential application of the analysis automation tool.",0 "In high-stakes disaster scenarios, timely and informed decision-making is critical yet often challenged by uncertainty, dynamic environments, and limited resources. This paper presents a systematic review of Human-AI collaboration patterns that support decision-making across all disaster management phases. Drawing from 51 peer-reviewed studies, we identify four major categories: Human-AI Decision Support Systems, Task and Resource Coordination, Trust and Transparency, and Simulation and Training. 
Within these, we analyze sub-patterns such as cognitive-augmented intelligence, multi-agent coordination, explainable AI, and virtual training environments. Our review highlights how AI systems may enhance situational awareness, improve response efficiency, and support complex decision-making, while also surfacing critical limitations in scalability, interpretability, and system interoperability. We conclude by outlining key challenges and future research directions, emphasizing the need for adaptive, trustworthy, and context-aware Human-AI systems to improve disaster resilience and equitable recovery outcomes.",0 "Subjective teacher evaluations play a key role in shaping students' educational trajectories. Previous studies have shown that students of low socioeconomic status (SES) receive worse subjective evaluations than their high SES peers, even when they score similarly on objective standardized tests. This is often interpreted as evidence of teacher bias. Measurement error in test scores challenges this interpretation. We discuss how both classical and non-classical measurement error in test scores generate a biased coefficient of the conditional SES gap, and consider three empirical strategies to address this bias. Using administrative data from the Netherlands, where secondary school track recommendations are pivotal teacher judgments, we find that measurement error explains 35 to 43% of the conditional SES gap in track recommendations.",0 "This study investigates the oscillation behavior of a sessile drop placed on a hydrophobic substrate subjected to vertical vibrations with varying frequencies and amplitudes. We examined the responses of both Newtonian and viscoelastic drops. For viscoelastic samples, image analysis techniques were employed to correlate the drop dynamics with the rheological properties of the material. Overall, we demonstrate that this drop-based method allows for oscillatory shear experiments at frequencies that are difficult to access using conventional rheometers. The results reveal that the essential features of the drop response can be explained by the ratio of two characteristic time scales: the internal polymer relaxation time ($t_{p}$) and the external forcing time scale ($1/f$). This ratio defines the Deborah number ($De$). When the two time scales are comparable ($De \approx 1$), viscous dissipation dominates, which is observed in Lissajous curves and the drop's profile. At very low Deborah numbers ($De \ll 1$), the drop behaves like a Newtonian fluid (having a peak around the natural frequency of the drop), while at high Deborah numbers ($De \gg 1$), it exhibits an elastic response. Furthermore, we show that increasing the applied deformation drives the system into the nonlinear viscoelastic regime. In this regime, unlike traditional rheology measurements, we observe the presence of $even$ and $odd$ harmonics in the drop response. This is attributed to the inherent geometric asymmetry of the drop setup, which breaks the symmetry assumptions typically present in standard rheological techniques.",0 "Developing professional, structured reasoning on par with human financial analysts and traders remains a central challenge in AI for finance, where markets demand interpretability and trust. Traditional time-series models lack explainability, while LLMs face challenges in turning natural-language analysis into disciplined, executable trades. 
Although reasoning LLMs have advanced in step-by-step planning and verification, their application to risk-sensitive financial decisions is underexplored. We present Trading-R1, a financially-aware model that incorporates strategic thinking and planning for comprehensive thesis composition, facts-grounded analysis, and volatility-adjusted decision making. Trading-R1 aligns reasoning with trading principles through supervised fine-tuning and reinforcement learning with a three-stage easy-to-hard curriculum. Training uses Tauric-TR1-DB, a 100k-sample corpus spanning 18 months, 14 equities, and five heterogeneous financial data sources. Evaluated on six major equities and ETFs, Trading-R1 demonstrates improved risk-adjusted returns and lower drawdowns compared to both open-source and proprietary instruction-following models as well as reasoning models. The system generates structured, evidence-based investment theses that support disciplined and interpretable trading decisions. Trading-R1 Terminal will be released at https://github.com/TauricResearch/Trading-R1.",0 "Automating the generation of scientific videos is a crucial yet challenging task for effective knowledge dissemination. However, existing works on document automation primarily focus on static media such as posters and slides, lacking mechanisms for personalized dynamic orchestration and multimodal content synchronization. To address these challenges, we introduce VideoAgent, a novel multi-agent framework that synthesizes personalized scientific videos through a conversational interface. VideoAgent parses a source paper into a fine-grained asset library and, guided by user requirements, orchestrates a narrative flow that synthesizes both static slides and dynamic animations to explain complex concepts. To enable rigorous evaluation, we also propose SciVidEval, the first comprehensive suite for this task, which combines automated metrics for multimodal content quality and synchronization with a Video-Quiz-based human evaluation to measure knowledge transfer. Extensive experiments demonstrate that our method significantly outperforms existing commercial scientific video generation services and approaches human-level quality in scientific communication.",0 "Large language models (LLMs) often generate natural language rationales -- free-form explanations that help improve performance on complex reasoning tasks and enhance interpretability for human users. However, evaluating these rationales remains challenging. While recent work has relied on binary preference judgments from humans or LLM judges, such evaluations are often opaque and coarse-grained, offering limited insight into what makes one rationale better than another. In this work, we rethink preference evaluation for LLM-generated rationales by asking: (1) What attributes define good rationales? (2) Can human preferences be explained by these attributes? (3) Can attribute-based evaluation overcome the limitations of binary comparisons? We identify a set of key rationale attributes from prior literature and assess them using automatic metrics, LLM judgments, and human annotations. We then analyze two standard human preference datasets MT Bench and Chatbot Arena using SHAP to identify which attributes best explain human preference outcomes. Finally, we re-evaluate model-generated rationales using attribute-specific ELO scores, revealing more nuanced model comparisons and insights. 
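As a concrete illustration of attribute-specific ELO scoring, the sketch below updates a per-attribute rating from pairwise judgments between two models' rationales; the attribute names, K-factor, and judgment format are assumptions for illustration, not the paper's released evaluation code.

```python
# Hypothetical sketch of attribute-specific ELO scoring for model rationales.
# Attribute names, K-factor, and the judgment format are illustrative assumptions.
from collections import defaultdict

ATTRIBUTES = ["factuality", "coherence", "conciseness"]  # assumed attribute set
K = 16  # standard ELO update step

ratings = defaultdict(lambda: 1000.0)  # (model, attribute) -> rating

def expected(r_a, r_b):
    """Expected score of A against B under the logistic ELO model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(model_a, model_b, attribute, outcome):
    """outcome: 1.0 if A's rationale wins on this attribute, 0.0 if it loses, 0.5 for a tie."""
    key_a, key_b = (model_a, attribute), (model_b, attribute)
    e_a = expected(ratings[key_a], ratings[key_b])
    ratings[key_a] += K * (outcome - e_a)
    ratings[key_b] += K * ((1.0 - outcome) - (1.0 - e_a))

# Toy example: model_x beats model_y on factuality but ties on conciseness.
update("model_x", "model_y", "factuality", 1.0)
update("model_x", "model_y", "conciseness", 0.5)
print({k: round(v, 1) for k, v in ratings.items()})
```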
Our findings suggest that fine-grained attribute evaluations can better characterize rationale quality and guide future research toward more interpretable and reliable evaluation practices.",0 "The impressive capabilities of deep learning models are often counterbalanced by their inherent opacity, commonly termed the ""black box"" problem, which impedes their widespread acceptance in high-trust domains. In response, the intersecting disciplines of interpretability and explainability, collectively falling under the Explainable AI (XAI) umbrella, have become focal points of research. Although these terms are frequently used as synonyms, they carry distinct conceptual weights. This document offers a comparative exploration of interpretability and explainability within the deep learning paradigm, carefully outlining their respective definitions, objectives, prevalent methodologies, and inherent difficulties. Through illustrative examinations of the MNIST digit classification task and IMDB sentiment analysis, we substantiate a key argument: interpretability generally pertains to a model's inherent capacity for human comprehension of its operational mechanisms (global understanding), whereas explainability is more commonly associated with post-hoc techniques designed to illuminate the basis for a model's individual predictions or behaviors (local explanations). For example, feature attribution methods can reveal why a specific MNIST image is recognized as a '7', and word-level importance can clarify an IMDB sentiment outcome. However, these local insights do not render the complex underlying model globally transparent. A clear grasp of this differentiation, as demonstrated by these standard datasets, is vital for fostering dependable and sound artificial intelligence.",0 "This study investigates how the U.S. Centers for Disease Control and Prevention (CDC) communicated COVID-19 guidance on Twitter and how publics responded over two years of the pandemic. Drawing on 275,124 tweets mentioning or addressing @CDCgov, I combine BERTopic modeling, sentiment analysis (VADER), credibility checks (Iffy Index), change point detection (PELT), and survival analysis to trace three phases of discourse: (1) early hoax claims and testing debates, (2) lockdown and mask controversies, and (3) post-vaccine variant concerns. I introduce the concept of crisis messaging journeys to explain how archived ""receipts"" of prior CDC statements fueled epistemic struggles, political polarization, and sustained engagement. Findings show that skeptical, cognitively complex discourse particularly questioning institutional trust prolonged participation, while positive affirmation predicted faster disengagement. I conclude with design recommendations for annotated, cautious, and flashpoint-responsive communication strategies to bolster public trust and resilience during protracted health crises.",0 "Recovering and distinguishing between the strict-preference, indifference and/or indecisiveness parts of a decision maker's preferences is a challenging task but also important for testing theory and conducting welfare analysis. This paper contributes towards this goal by reporting on data from a lab experiment on riskless choice that were analyzed with novel theory-guided computational methods. The experiment included both Forced- and Free-Choice treatments. Its main novelty consisted of allowing subjects to select multiple alternatives at each menu. 
Based on a new non-parametric goodness-of-fit criterion that we introduce, which generalizes a widely used pre-existing method to environments of multi-valued choices, each subject's decisions were tested against three structured general choice models that feature maximization of stable but potentially weak and/or incomplete preferences. Nearly 60% of all subjects are well explained by one of these models, typically with a unique model-optimal preference relation per subject. Importantly, revealed preferences typically have a non-trivial indifference part that, on average, accounts for up to 19% of all possible comparisons. In addition, 22% of all subjects are best explained by models of incomplete-preference maximization and reveal preferences that typically exhibit the distinctions between indifference and indecisiveness that these models afford or predict. These distinctions are documented empirically for the first time.",0 "Artificial intelligence (AI) is advancing at a pace that raises urgent questions about how to align machine decision-making with human moral values. This working paper investigates how leading AI systems prioritize moral outcomes and what this reveals about the prospects for human-AI symbiosis. We address two central questions: (1) What moral values do state-of-the-art large language models (LLMs) implicitly favour when confronted with dilemmas? (2) How do differences in model architecture, cultural origin, and explainability affect these moral preferences? To explore these questions, we conduct a quantitative experiment with six LLMs, ranking and scoring outcomes across 18 dilemmas representing five moral frameworks. Our findings uncover strikingly consistent value biases. Across all models, outcomes aligned with Care and Virtue values were rated most moral, while libertarian choices were consistently penalized. Reasoning-enabled models exhibited greater sensitivity to context and provided richer explanations, whereas non-reasoning models produced more uniform but opaque judgments. This research makes three contributions: (i) Empirically, it delivers a large-scale comparison of moral reasoning across culturally distinct LLMs; (ii) Theoretically, it links probabilistic model behaviour with underlying value encodings; (iii) Practically, it highlights the need for explainability and cultural awareness as critical design principles to guide AI toward a transparent, aligned, and symbiotic future.",0 "We are surrounded by spatio-temporal patterns resulting from the interaction of the numerous basic units constituting natural or human-made systems. In the presence of diffusive-like coupling, Turing theory has been widely applied to explain the formation of such self-organized motifs both on continuous domains and on networked systems, where reactions occur in the nodes and the available links are used for species to diffuse. In many relevant applications, those links are not static, as very often assumed, but evolve in time and, more importantly, adapt their weights to the states of the nodes. In this work, we take one step forward and provide a general theory to prove the validity of the Turing idea in the case of adaptive symmetric networks with positive weights. The conditions for the emergence of Turing instability rely on the spectral properties of the Laplacian matrix and the model parameters, thus strengthening the interplay between dynamics and network topology. 
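To make the spectral condition concrete, the sketch below runs a standard dispersion-relation check for a reaction-diffusion system on a static (non-adaptive) network: a mode with Laplacian eigenvalue Lambda is unstable when J - Lambda*D has an eigenvalue with positive real part. The Brusselator parameters, diffusion constants, and random graph are assumptions for illustration; the adaptive-network theory generalizes this kind of test.

```python
# Minimal dispersion-relation check for a network Turing instability (static case).
# Brusselator parameters, diffusion constants, and the random graph are assumed.
import numpy as np
import networkx as nx

a, b = 1.0, 1.8          # Brusselator parameters (assumed)
Du, Dv = 1.0, 10.0       # diffusion coefficients (assumed)
D = np.diag([Du, Dv])

# Jacobian of the Brusselator at its homogeneous fixed point (u*, v*) = (a, b/a)
J = np.array([[b - 1.0, a**2],
              [-b,      -a**2]])

G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)
lams = np.linalg.eigvalsh(L)     # Laplacian eigenvalues, Lambda_alpha >= 0

# Growth rate of each mode: largest real part of the eigenvalues of J - Lambda * D
growth = [np.max(np.real(np.linalg.eigvals(J - lam * D))) for lam in lams]
unstable_mode = int(np.argmax(growth))
print("max growth rate over modes:", max(growth))
print("Turing-unstable:", max(growth) > 1e-9 and lams[unstable_mode] > 1e-9)
```

Whether the instability is actually triggered depends on whether some nonzero Laplacian eigenvalue falls inside the unstable band defined by J and D.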
A rich variety of patterns are presented by using two prototype models of nonlinear dynamical systems, the Brusselator and the FitzHugh-Nagumo model. Because many empirical networks adapt to changes in the system states, our results pave the way for a thorough understanding of self-organization in real-world systems.",0 "Evaluating explainable AI (XAI) approaches is a challenging task in general, due to the subjectivity of explanations. In this paper, we focus on tabular data and the specific use case of AI models predicting the values of Boolean functions. We extend the previous work in this domain by proposing a formal and precise measure of importance of variables based on actual causality, and we evaluate state-of-the-art XAI tools against this measure. We also present a novel XAI tool B-ReX, based on the existing tool ReX, and demonstrate that it is superior to other black-box XAI tools on a large-scale benchmark. Specifically, B-ReX achieves a Jensen-Shannon divergence of 0.072 $\pm$ 0.012 on random 10-valued Boolean formulae",0 "Smart contracts automate the management of high-value assets, where vulnerabilities can lead to catastrophic financial losses. This challenge is amplified in Large Language Models (LLMs) by two interconnected failures: they operate as unauditable ""black boxes"" lacking a transparent reasoning process, and consequently, generate code riddled with critical security vulnerabilities. To address both issues, we propose SmartCoder-R1 (based on Qwen2.5-Coder-7B), a novel framework for secure and explainable smart contract generation. It begins with Continual Pre-training (CPT) to specialize the model. We then apply Long Chain-of-Thought Supervised Fine-Tuning (L-CoT SFT) on 7,998 expert-validated reasoning-and-code samples to train the model to emulate human security analysis. Finally, to directly mitigate vulnerabilities, we employ Security-Aware Group Relative Policy Optimization (S-GRPO), a reinforcement learning phase that refines the generation policy by optimizing a weighted reward signal for compilation success, security compliance, and format correctness. Evaluated against 17 baselines on a benchmark of 756 real-world functions, SmartCoder-R1 establishes a new state of the art, achieving top performance across five key metrics: a ComPass of 87.70%, a VulRate of 8.60%, a SafeAval of 80.16%, a FuncRate of 53.84%, and a FullRate of 50.53%. This FullRate marks a 45.79% relative improvement over the strongest baseline, DeepSeek-R1. Crucially, its generated reasoning also excels in human evaluations, achieving high-quality ratings for Functionality (82.7%), Security (85.3%), and Clarity (90.7%).",0 "Despite extensive investment in artificial intelligence, 95% of enterprises report no measurable profit impact from AI deployments (MIT, 2025). In this theoretical paper, we argue that this gap reflects paradigmatic lock-in that channels AI into incremental optimization rather than structural transformation. Using a cross-case analysis, we propose a 2x2 framework that reconceptualizes AI strategy along two independent dimensions: the degree of transformation achieved (incremental to transformational) and the treatment of human contribution (reduced to amplified). The framework surfaces four patterns now dominant in practice: individual augmentation, process automation, workforce substitution, and a less deployed frontier of collaborative intelligence. 
Evidence shows that the first three patterns reinforce legacy work models and yield localized gains without durable value capture. Realizing collaborative intelligence requires three mechanisms: complementarity (pairing distinct human and machine strengths), co-evolution (mutual adaptation through interaction), and boundary-setting (human determination of ethical and strategic parameters). Complementarity and boundary-setting are observable in regulated and high-stakes domains; co-evolution is largely absent, which helps explain limited system-level impact. Our findings from a case study analysis illustrate that advancing toward collaborative intelligence requires material restructuring of roles, governance, and data architecture rather than additional tools. The framework reframes AI transformation as an organizational design challenge: moving from optimizing the division of labor between humans and machines to architecting their convergence, with implications for operating models, workforce development, and the future of work.",0 "Reader curiosity, the drive to seek information, is crucial for textual engagement, yet remains relatively underexplored in NLP. Building on Loewenstein's Information Gap Theory, we introduce a framework that models reader curiosity by quantifying semantic information gaps within a text's semantic structure. Our approach leverages BERTopic-inspired topic modeling and persistent homology to analyze the evolving topology (connected components, cycles, voids) of a dynamic semantic network derived from text segments, treating these features as proxies for information gaps. To empirically evaluate this pipeline, we collect reader curiosity ratings from participants (n = 49) as they read S. Collins's ''The Hunger Games'' novel. We then use the topological features from our pipeline as independent variables to predict these ratings, and experimentally show that they significantly improve curiosity prediction compared to a baseline model (73% vs. 30% explained deviance), validating our approach. This pipeline offers a new computational method for analyzing text structure and its relation to reader engagement.",2 "Semantic Textual Relatedness (STR) captures nuanced relationships between texts that extend beyond superficial lexical similarity. In this study, we investigate STR in the context of job title matching - a key challenge in resume recommendation systems, where overlapping terms are often limited or misleading. We introduce a self-supervised hybrid architecture that combines dense sentence embeddings with domain-specific Knowledge Graphs (KGs) to improve both semantic alignment and explainability. Unlike previous work that evaluated models on aggregate performance, our approach emphasizes data stratification by partitioning the STR score continuum into distinct regions: low, medium, and high semantic relatedness. This stratified evaluation enables a fine-grained analysis of model performance across semantically meaningful subspaces. We evaluate several embedding models, both with and without KG integration via graph neural networks. The results show that fine-tuned SBERT models augmented with KGs produce consistent improvements in the high-STR region, where the RMSE is reduced by 25% over strong baselines. Our findings highlight not only the benefits of combining KGs with text embeddings, but also the importance of regional performance analysis in understanding model behavior. 
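A minimal sketch of this kind of stratified evaluation is shown below; the bin edges, region labels, and toy scores are assumptions for illustration rather than the paper's dataset.

```python
# Hypothetical sketch of region-stratified RMSE evaluation for STR predictions.
# Bin edges, labels, and the example scores are assumptions for illustration only.
import numpy as np

def stratified_rmse(y_true, y_pred, edges=(0.0, 0.33, 0.66, 1.0),
                    labels=("low", "medium", "high")):
    """Compute RMSE separately for each region of the STR score continuum."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    report = {}
    for lo, hi, name in zip(edges[:-1], edges[1:], labels):
        # The last bin is closed on the right so a score of 1.0 is included.
        mask = (y_true >= lo) & (y_true < hi) if hi < edges[-1] else (y_true >= lo)
        if mask.any():
            report[name] = float(np.sqrt(np.mean((y_true[mask] - y_pred[mask]) ** 2)))
    return report

# Toy example: gold STR scores vs. model predictions.
print(stratified_rmse([0.1, 0.2, 0.5, 0.8, 0.9], [0.15, 0.1, 0.55, 0.7, 0.95]))
```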
This granular approach reveals strengths and weaknesses hidden by global metrics, and supports more targeted model selection for use in Human Resources (HR) systems and applications where fairness, explainability, and contextual matching are essential.",0 "Chromosomal crossovers play a crucial role in meiotic cell division, as they ensure proper chromosome segregation and increase genetic variability. Experiments have consistently revealed two key observations across species: (i) the number of crossovers per chromosome is typically small, but at least one, and (ii) crossovers on the same chromosome are subject to interference, i.e., they are more separated than expected by chance. These observations can be explained by a recently proposed coarsening model, where the dynamics of droplets associated with chromosomes designate crossovers. We provide a comprehensive analysis of the coarsening model, which we also extend by including material exchanges between droplets, the synaptonemal complex, and the nucleoplasm. We derive scaling laws for the crossover count, which allows us to analyze data across species. Moreover, our model provides a coherent explanation of experimental data across mutants, including the wild-type and zyp1-mutant of A. thaliana. Consequently, the extended coarsening model provides a solid framework for investigating the underlying mechanisms of crossover placement.",0 "In recent years, there has been significant progress in the development of deep learning models over relational databases, including architectures based on heterogeneous graph neural networks (hetero-GNNs) and heterogeneous graph transformers. In effect, such architectures state how the database records and links (e.g., foreign-key references) translate into a large, complex numerical expression, involving numerous learnable parameters. This complexity makes it hard to explain, in human-understandable terms, how a model uses the available data to arrive at a given prediction. We present a novel framework for explaining machine-learning models over relational databases, where explanations are view definitions that highlight focused parts of the database that mostly contribute to the model's prediction. We establish such global abductive explanations by adapting the classic notion of determinacy by Nash, Segoufin, and Vianu (2010). In addition to tuning the tradeoff between determinacy and conciseness, the framework allows controlling the level of granularity by adopting different fragments of view definitions, such as ones highlighting whole columns, foreign keys between tables, relevant groups of tuples, and so on. We investigate the realization of the framework in the case of hetero-GNNs. We develop heuristic algorithms that avoid the exhaustive search over the space of all databases. We propose techniques that are model-agnostic, and others that are tailored to hetero-GNNs via the notion of learnable masking. Our approach is evaluated through an extensive empirical study on the RelBench collection, covering a variety of domains and different record-level tasks. The results demonstrate the usefulness of the proposed explanations, as well as the efficiency of their generation.",0 "To collaborate effectively with humans, language models must be able to explain their decisions in natural language. We study a specific type of self-explanation: self-generated counterfactual explanations (SCEs), where a model explains its prediction by modifying the input such that it would have predicted a different outcome. 
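Validity and minimality can be operationalized with simple checks; the sketch below uses an assumed classifier interface, a prediction-flip test for validity, and a token-level edit fraction as a crude proxy for minimality, rather than the paper's exact protocol.

```python
# Illustrative validity / minimality checks for self-generated counterfactuals (SCEs).
# The classifier interface and the edit-distance notion of minimality are assumptions.
import difflib

def is_valid(predict, original_text, counterfactual_text):
    """Valid if the model's prediction actually changes on the edited input."""
    return predict(original_text) != predict(counterfactual_text)

def edit_fraction(original_text, counterfactual_text):
    """Fraction of tokens changed, as a crude proxy for (non-)minimality."""
    a, b = original_text.split(), counterfactual_text.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    unchanged = sum(block.size for block in matcher.get_matching_blocks())
    return 1.0 - unchanged / max(len(a), len(b), 1)

# Toy usage with a dummy length-based "classifier".
toy_predict = lambda text: "long" if len(text.split()) > 6 else "short"
orig = "the service was quick and the staff were friendly"
cf = "the service was slow"
print(is_valid(toy_predict, orig, cf), round(edit_fraction(orig, cf), 2))
```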
We evaluate whether LLMs can produce SCEs that are valid, achieving the intended outcome, and minimal, modifying the input no more than necessary. When asked to generate counterfactuals, we find that LLMs typically produce SCEs that are valid, but far from minimal, offering little insight into their decision-making behaviour. Worryingly, when asked to generate minimal counterfactuals, LLMs typically make excessively small edits that fail to change predictions. The observed validity-minimality trade-off is consistent across several LLMs, datasets, and evaluation settings. Our findings suggest that SCEs are, at best, an ineffective explainability tool and, at worst, can provide misleading insights into model behaviour. Proposals to deploy LLMs in high-stakes settings must consider the impact of unreliable self-explanations on downstream decision-making. Our code is available at https://github.com/HarryMayne/SCEs.",0 "Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, reducing willingness to accept assistance, likelihood of future use, and performance expectancy. We report two vignette-based experiments (Study~1: $N=761$; Study~2: $N=571$, preregistered). Study~1 compared anticipatory and reactive help provided by an AI vs. a human, while Study~2 distinguished between \emph{offering} (suggesting help) and \emph{providing} (acting automatically). In Study 1, AI help was more threatening than human help. Across both studies, anticipatory help increased perceived threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism explaining why proactive AI features may backfire and suggest design implications for AI initiative.",1 "Explainable AI has become a common term in the literature, scrutinized by computer scientists and statisticians and highlighted by psychological or philosophical researchers. One major effort many researchers tackle is constructing general guidelines for XAI schemes, which we derived from our study. While some areas of XAI are well studied, we focus on uncertainty explanations and consider global explanations, which are often left out. We chose an algorithm that covers various concepts simultaneously, such as uncertainty, robustness, and global XAI, and tested its ability to calibrate trust. We then checked whether an algorithm that aims to provide more of an intuitive visual understanding, despite being complicated to understand, can provide higher user satisfaction and human interpretability.",0 "Powerful artificial intelligence (AI) tools that have emerged in recent years -- including large language models, automated coding assistants, and advanced image and speech generation technologies -- are the result of monumental human achievements. These breakthroughs reflect mastery across multiple technical disciplines and the resolution of significant technological challenges. However, some of the most profound challenges may still lie ahead. These challenges are not purely technical but pertain to the fair and responsible use of AI in ways that genuinely improve the global human condition. 
This article explores one promising application aligned with that vision: the use of AI tools to facilitate and enhance education, with a specific focus on signal processing (SP). It presents two interrelated perspectives: identifying and addressing technical limitations, and applying AI tools in practice to improve educational experiences. Primers are provided on several core technical issues that arise when using AI in educational settings, including how to ensure fairness and inclusivity, handle hallucinated outputs, and achieve efficient use of resources. These and other considerations -- such as transparency, explainability, and trustworthiness -- are illustrated through the development of an immersive, structured, and reliable ""smart textbook."" The article serves as a resource for researchers and educators seeking to advance AI's role in engineering education.",0 "Attention Deficit Hyperactivity Disorder (ADHD) is a common brain disorder in children that can persist into adulthood, affecting social, academic, and career life. Early diagnosis is crucial for managing these impacts on patients and the healthcare system but is often labor-intensive and time-consuming. This paper presents a novel method to improve ADHD diagnosis precision and timeliness by leveraging Deep Learning (DL) approaches and electroencephalogram (EEG) signals. We introduce ADHDeepNet, a DL model that utilizes comprehensive temporal-spatial characterization, attention modules, and explainability techniques optimized for EEG signals. ADHDeepNet integrates feature extraction and refinement processes to enhance ADHD diagnosis. The model was trained and validated on a dataset of 121 participants (61 ADHD, 60 Healthy Controls), employing nested cross-validation for robust performance. The proposed two-stage methodology uses a 10-fold cross-subject validation strategy. Initially, each iteration optimizes the model's hyper-parameters with inner 2-fold cross-validation. Then, Additive Gaussian Noise (AGN) with various standard deviations and magnification levels is applied for data augmentation. ADHDeepNet achieved 100% sensitivity and 99.17% accuracy in classifying ADHD/HC subjects. To clarify model explainability and identify key brain regions and frequency bands for ADHD diagnosis, we analyzed the learned weights and activation patterns of the model's primary layers. Additionally, t-distributed Stochastic Neighbor Embedding (t-SNE) visualized high-dimensional data, aiding in interpreting the model's decisions. This study highlights the potential of DL and EEG in enhancing ADHD diagnosis accuracy and efficiency.",2 "Subtrait (latent-trait components) assessment presents a promising path toward enhancing transparency of automated writing scores. We prototype explainability and subtrait scoring with generative language models and show modest correlation between human subtrait and trait scores, and between automated and human subtrait scores. Our approach provides details to demystify scores for educators and students.",0 "Sophisticated evasion tactics in malicious Android applications, combined with their intricate behavioral semantics, enable attackers to conceal malicious logic within legitimate functions, underscoring the critical need for robust and in-depth analysis frameworks. However, traditional analysis techniques often fail to recover deeply hidden behaviors or provide human-readable justifications for their decisions. 
Inspired by advances in large language models (LLMs), we introduce TraceRAG, a retrieval-augmented generation (RAG) framework that bridges natural language queries and Java code to deliver explainable malware detection and analysis. First, TraceRAG generates summaries of method-level code snippets, which are indexed in a vector database. At query time, behavior-focused questions retrieve the most semantically relevant snippets for deeper inspection. Finally, based on the multi-turn analysis results, TraceRAG produces human-readable reports that present the identified malicious behaviors and their corresponding code implementations. Experimental results demonstrate that our method achieves 96\% malware detection accuracy and 83.81\% behavior identification accuracy based on updated VirusTotal (VT) scans and manual verification. Furthermore, expert evaluation confirms the practical utility of the reports generated by TraceRAG.",0 "Deep learning offers a promising avenue for automating many recognition tasks in fields such as medicine and forensics. However, the black-box nature of these models hinders their adoption in high-stakes applications where trust and accountability are required. For 3D shape recognition tasks in particular, this paper introduces the Class Node Graph Attention Network (CGAT) architecture to address this need. Applied to 3D meshes of third molars derived from CBCT images, for Demirjian stage allocation, CGAT utilizes graph attention convolutions and an inherent attention mechanism, visualized via attention rollout, to explain its decision-making process. We evaluated the local mean curvature and distance to centroid node features, both individually and in combination, as well as model depth, finding that models incorporating directed edges to a global CLS node produced more intuitive attention maps, while also yielding desirable classification performance. We analyzed the attention-based explanations of the models, and their predictive performances to propose optimal settings for the CGAT. The combination of local mean curvature and distance to centroid as node features yielded a slight performance increase with 0.76 weighted F1 score, and more comprehensive attention visualizations. The CGAT architecture's ability to generate human-understandable attention maps can enhance trust and facilitate expert validation of model decisions. While demonstrated on dental data, CGAT is broadly applicable to graph-based classification and regression tasks, promoting wider adoption of transparent and competitive deep learning models in high-stakes environments.",0 "Explaining why the species lives at a particular location is important for understanding ecological systems and conserving biodiversity. However, existing ecological workflows are fragmented and often inaccessible to non-specialists. We propose an end-to-end visual-to-causal framework that transforms a species image into interpretable causal insights about its habitat preference. The system integrates species recognition, global occurrence retrieval, pseudo-absence sampling, and climate data extraction. We then discover causal structures among environmental features and estimate their influence on species occurrence using modern causal inference methods. Finally, we generate statistically grounded, human-readable causal explanations from structured templates and large language models. 
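As a rough stand-in for the occurrence-modeling step, the sketch below fits a plain logistic model of presence versus pseudo-absence on assumed climate covariates; the actual framework additionally discovers causal structure among the environmental features before estimating their influence.

```python
# Simplified stand-in for the occurrence-modeling step: a logistic model of
# presence/pseudo-absence on climate features. Feature names and data are assumed;
# the framework itself uses causal structure discovery and causal effect estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Assumed climate covariates at presence and pseudo-absence points.
temperature = rng.normal(15, 5, n)
precipitation = rng.normal(800, 200, n)
X = np.column_stack([temperature, precipitation])
# Synthetic occurrence: warmer, wetter sites are more likely occupied (toy assumption).
logits = 0.3 * (temperature - 15) + 0.005 * (precipitation - 800)
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(["temperature", "precipitation"], model.coef_[0]):
    print(f"{name}: estimated effect on occurrence log-odds = {coef:.3f}")
```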
We demonstrate the framework on a bee and a flower species and report early results as part of an ongoing project, showing the potential of the multimodal AI assistant backed up by a recommended ecological modeling practice for describing species habitat in human-understandable language. Our code is available at: https://github.com/Yutong-Zhou-cv/BioX.",0 "This study provides a carefully controlled examination of the universality of the von Karman and additive constants associated with the classical logarithmic scaling of the mean streamwise velocity profile in high-friction Reynolds number (Re_tau) turbulent boundary layers (TBLs) subjected to weak-to-moderate adverse pressure gradients (APGs). The analysis leverages a recently developed method for imposing APGs with minimal pressure gradient (PG) history effects in Melbourne's high-Re_tau TBL facility (Deshpande et al., Phys. Rev. Fluids, vol. 8, 2023), in combination with direct measurements of local friction velocity via oil-film interferometry. The von Karman constant is found to remain invariant within experimental uncertainty, while the additive coefficient decreases with both the local APG and PG history, potentially explaining reported variability in logarithmic scalings across the APG TBL literature. The facility enables manual prescription of APGs along the full test section, allowing weak PG history perturbations to be followed by extended recovery regions, while maintaining matched local PG and Re_tau at downstream measurement locations. This experimental configuration allows for systematic decoupling of the effects of Re_tau, local PGs, and PG history, enabling assessment of their individual contributions to single-point turbulence statistics and energy spectra across different TBL regions. Present results at high Re_tau show that PG history influences both small-scale and large-scale motions in the overlap and outer regions, whereas local PGs primarily affect the large-scales. The strongest effects of the local PG occur in the outer region (around 0.4delta, where delta is the boundary layer thickness), while PG history effects extend down to approximately 0.25delta, just above the logarithmic region.",0 "This innovative practice article reports on the piloting of vibe coding (using natural language to create software applications with AI) for English as a Foreign Language (EFL) education. We developed a human-AI meta-languaging framework with three dimensions: talking to AI (prompt engineering), talking through AI (negotiating authorship), and talking about AI (mental models of AI). Using backward design principles, we created a four-hour workshop where two students designed applications addressing authentic EFL writing challenges. We adopted a case study methodology, collecting data from worksheets and video recordings, think-aloud protocols, screen recordings, and AI-generated images. Contrasting cases showed one student successfully vibe coding a functional application cohering to her intended design, while another encountered technical difficulties with major gaps between intended design and actual functionality. Analysis reveals differences in students' prompt engineering approaches, suggesting different AI mental models and tensions in attributing authorship. We argue that AI functions as a beneficial languaging machine, and that differences in how students talk to, through, and about AI explain vibe coding outcome variations. 
Findings indicate that effective vibe coding instruction requires explicit meta-languaging scaffolding, teaching structured prompt engineering, facilitating critical authorship discussions, and developing vocabulary for articulating AI mental models.",0 "Motor dysfunction is a common sign of neurodegenerative diseases (NDs) such as Parkinson's disease (PD) and Alzheimer's disease (AD), but may be difficult to detect, especially in the early stages. In this work, we examine the behavior of a wide array of explainable metrics extracted from the handwriting signals of 113 subjects performing multiple tasks on a digital tablet, as part of the Neurological Signals dataset. The aim is to measure their effectiveness in characterizing NDs, including AD and PD. To this end, task-agnostic and task-specific metrics are extracted from 14 distinct tasks. Subsequently, through statistical analysis and a series of classification experiments, we investigate which metrics provide greater discriminative power between NDs and healthy controls and amongst different NDs. Preliminary results indicate that the tasks at hand can all be effectively leveraged to distinguish between the considered set of NDs, specifically by measuring the stability, the speed of writing, the time spent not writing, and the pressure variations between groups from our handcrafted explainable metrics, which shows p-values lower than 0.0001 for multiple tasks. Using various binary classification algorithms on the computed metrics, we obtain up to 87 % accuracy for the discrimination between AD and healthy controls (CTL), and up to 69 % for the discrimination between PD and CTL.",2 "This chapter focuses on the intersection of user experience (UX) and wellbeing in the context of content moderation. Human content moderators play a key role in protecting end users from harm by detecting, evaluating, and addressing content that may violate laws or product policies. They face numerous challenges, including exposure to sensitive content, monotonous tasks, and complex decisions, which are often exacerbated by inadequate tools. This chapter explains the importance of incorporating wellbeing considerations throughout the product development lifecycle, offering a framework and practical strategies for implementation across key UX disciplines: research, writing, and design. By examining these considerations, this chapter provides a roadmap for creating user experiences that support content moderators, benefiting both the user and the business.",0 "As deep learning (DL) technologies advance, their application in automated visual inspection for Class III medical devices offers significant potential to enhance quality assurance and reduce human error. However, the adoption of such AI-based systems introduces new regulatory complexities-particularly under the EU Artificial Intelligence (AI) Act, which imposes high-risk system obligations that differ in scope and depth from established regulatory frameworks such as the Medical Device Regulation (MDR) and the U.S. FDA Quality System Regulation (QSR). This paper presents a high-level technical assessment of the foreseeable challenges that manufacturers are likely to encounter when qualifying DL-based automated inspections -- specifically static models -- within the existing medical device compliance landscape. It examines divergences in risk management principles, dataset governance, model validation, explainability requirements, and post-deployment monitoring obligations. 
The discussion also explores potential implementation strategies and highlights areas of uncertainty, including data retention burdens, global compliance implications, and the practical difficulties of achieving statistical significance in validation with limited defect data. Disclaimer: This paper presents a technical perspective and does not constitute legal or regulatory advice.",0 "The Internet of Electric Vehicles (IoEV) envisions a tightly coupled ecosystem of electric vehicles (EVs), charging infrastructure, and grid services, yet it remains vulnerable to cyberattacks, unreliable battery-state predictions, and opaque decision processes that erode trust and performance. To address these challenges, we introduce a novel Agentic Artificial Intelligence (AAI) framework tailored for IoEV, where specialized agents collaborate to deliver autonomous threat mitigation, robust analytics, and interpretable decision support. Specifically, we design an AAI architecture comprising dedicated agents for cyber-threat detection and response at charging stations, real-time State of Charge (SoC) estimation, and State of Health (SoH) anomaly detection, all coordinated through a shared, explainable reasoning layer; develop interpretable threat-mitigation mechanisms that proactively identify and neutralize attacks on both physical charging points and learning components; propose resilient SoC and SoH models that leverage continuous and adversarial-aware learning to produce accurate, uncertainty-aware forecasts with human-readable explanations; and implement a three-agent pipeline, where each agent uses LLM-driven reasoning and dynamic tool invocation to interpret intent, contextualize tasks, and execute formal optimizations for user-centric assistance. Finally, we validate our framework through comprehensive experiments across diverse IoEV scenarios, demonstrating significant improvements in security and prediction accuracy. All datasets, models, and code will be released publicly.",0 "The first voice timbre attribute detection challenge is featured in a special session at NCMMSC 2025. It focuses on the explainability of voice timbre and compares the intensity of two speech utterances in a specified timbre descriptor dimension. The evaluation was conducted on the VCTK-RVA dataset. 153 Participants developed their systems and submitted their outputs to the organizer, who evaluated the performance and sent feedback to them. Six teams submitted their outputs, with five providing descriptions of their methodologies.",2 "AI-based recommender systems increasingly influence recruitment decisions. Thus, transparency and responsible adoption in Human Resource Management (HRM) are critical. This study examines how HR managers' AI literacy influences their subjective perception and objective understanding of explainable AI (XAI) elements in recruiting recommender dashboards. In an online experiment, 410 German-based HR managers compared baseline dashboards to versions enriched with three XAI styles: important features, counterfactuals, and model criteria. Our results show that the dashboards used in practice do not explain AI results and even keep AI elements opaque. However, while adding XAI features improves subjective perceptions of helpfulness and trust among users with moderate or high AI literacy, it does not increase their objective understanding. It may even reduce accurate understanding, especially with complex explanations. 
Only overlays of important features significantly aided the interpretations of high-literacy users. Our findings highlight that the benefits of XAI in recruitment depend on users' AI literacy, emphasizing the need for tailored explanation strategies and targeted literacy training in HRM to ensure fair, transparent, and effective adoption of AI.",0 "Explainable Reinforcement Learning (XRL) has emerged as a promising approach in improving the transparency of Reinforcement Learning (RL) agents. However, there remains a gap between complex RL policies and domain experts, due to the limited comprehensibility of XRL results and isolated coverage of current XRL approaches that leave users uncertain about which tools to employ. To address these challenges, we introduce TalkToAgent, a multi-agent Large Language Model (LLM) framework that delivers interactive, natural language explanations for RL policies. The architecture with five specialized LLM agents (Coordinator, Explainer, Coder, Evaluator, and Debugger) enables TalkToAgent to automatically map user queries to relevant XRL tools and clarify an agent's actions in terms of either key state variables, expected outcomes, or counterfactual explanations. Moreover, our approach extends previous counterfactual explanations by deriving alternative scenarios from qualitative behavioral descriptions, or even new rule-based policies. We validated TalkToAgent on a quadruple-tank process control problem, a well-known nonlinear control benchmark. Results demonstrated that TalkToAgent successfully mapped user queries into XRL tasks with high accuracy, and coder-debugger interactions minimized failures in counterfactual generation. Furthermore, qualitative evaluation confirmed that TalkToAgent effectively interpreted the agent's actions and contextualized their meaning within the problem domain.",0 "Hallucinated outputs from large language models (LLMs) pose risks in the medical domain, especially for lay audiences making health-related decisions. Existing automatic factual consistency evaluation methods, such as entailment- and question-answering (QA)-based methods, struggle with plain language summarization (PLS) due to the elaborative explanation phenomenon, which introduces external content (e.g., definitions, background, examples) absent from the scientific abstract to enhance comprehension. To address this, we introduce PlainQAFact, an automatic factual consistency evaluation metric trained on a fine-grained, human-annotated dataset, PlainFact, for evaluating factual consistency of both source-simplified and elaborately explained sentences. PlainQAFact first classifies sentence type, then applies a retrieval-augmented QA scoring method. Empirical results show that existing evaluation metrics fail to evaluate the factual consistency in PLS, especially for elaborative explanations, whereas PlainQAFact consistently outperforms them across all evaluation settings. We further analyze PlainQAFact's effectiveness across external knowledge sources, answer extraction strategies, answer overlap measures, and document granularity levels, refining its overall factual consistency assessment. Taken together, our work presents the first evaluation metric designed for PLS factual consistency evaluation, providing the community with both a robust benchmark and a practical tool to advance reliable and safe plain language communication in the medical domain. 
PlainQAFact and PlainFact are available at: https://github.com/zhiwenyou103/PlainQAFact",0 "Counterfactual explanations (CFs) offer human-centric insights into machine learning predictions by highlighting minimal changes required to alter an outcome. Therefore, CFs can be used as (i) interventions for abnormality prevention and (ii) augmented data for training robust models. In this work, we explore large language models (LLMs), specifically GPT-4o-mini, for generating CFs in a zero-shot and three-shot setting. We evaluate our approach on two datasets: the AI-Readi flagship dataset for stress prediction and a public dataset for heart disease detection. Compared to traditional methods such as DiCE, CFNOW, and NICE, our few-shot LLM-based approach achieves high plausibility (up to 99%), strong validity (up to 0.99), and competitive sparsity. Moreover, using LLM-generated CFs as augmented samples improves downstream classifier performance (an average accuracy gain of 5%), especially in low-data regimes. This demonstrates the potential of prompt-based generative techniques to enhance explainability and robustness in clinical and physiological prediction tasks. Code base: github.com/shovito66/SenseCF.",0 "The origins of radio astronomy and the discovery of the first radio galaxies are described, which showed that the radio emission of active galaxies is very diverse in shape and can reach a size of many times their optical extent. In 1974 the first ""giant"" radio galaxy (GRG) was discovered, several times larger than any previously known one. Since 2012, when about 100 such GRGs larger than 1 Megaparsec (3.3 million light years) had been reported in the literature, the author has been performing his own search for GRGs and maintains a list of currently nearly 7000 GRGs, with more than half of these found by him or his students at the Departamento de Astronom\'ia of Universidad de Guanajuato. An analysis of the very largest GRGs does not reveal any single property of these that would explain why they could grow to such large sizes. Recent advances in radio telescopes have led to vast amounts of images rich in GRGs, but due to the complexity of identifying their host galaxies only a fraction of these images can be searched with visual inspection by humans. Currently available machine algorithms and citizen science projects are prone to erroneous identifications and also leave unnoticed a substantial fraction of GRGs, such that supervision of the results by experts is essential to produce reliable results.",0 "The integration of Large Language Models (LLMs) with computer vision is profoundly transforming perception tasks like image segmentation. For intelligent transportation systems (ITS), where accurate scene understanding is critical for safety and efficiency, this new paradigm offers unprecedented capabilities. This survey systematically reviews the emerging field of LLM-augmented image segmentation, focusing on its applications, challenges, and future directions within ITS. We provide a taxonomy of current approaches based on their prompting mechanisms and core architectures, and we highlight how these innovations can enhance road scene understanding for autonomous driving, traffic monitoring, and infrastructure maintenance. 
Finally, we identify key challenges, including real-time performance and safety-critical reliability, and outline a perspective centered on explainable, human-centric AI as a prerequisite for the successful deployment of this technology in next-generation transportation systems.",1 "Hilbert spaces in theories of gravity are notoriously subtle due to the Hamiltonian constraints, particularly regarding the inner product. To demystify this subject, we review and extend a collection of ideas in canonical gravity, and connect to the sum-over-histories approach by clarifying the Hilbert space interpretation of various gravitational path integrals. We use one-dimensional (or mini-superspace) models as the simplest context to exemplify the conceptual ideas. We emphasise that a physical Hilbert space can be defined either by requiring states to be annihilated by constraint operators (e.g., the Wheeler-DeWitt equation) or by equivalence relations between wavefunctions, and explain that these two approaches are related by an inner product. We advocate that the group averaging procedure constructs the correct physical inner product. The Klein-Gordon inner product is not positive-definite, which we explain as arising from a bad gauge choice; nonetheless, it agrees with group averaging when such a problem is absent. These concepts are all embedded in the BRST/BFV formalism, which provides a systematic way to construct these and other physically equivalent inner products (e.g., from maximal-volume gauge and Gaussian averaged gauges). Finally we discuss the application of these ideas in the semi-classical approximation, including non-perturbative gravitational effects.",0 "As generative foundation models improve, they also tend to become more persuasive, raising concerns that AI automation will enable governments, firms, and other actors to manipulate beliefs with unprecedented scale and effectiveness at virtually no cost. The full economic and social ramifications of this trend have been difficult to foresee, however, given that we currently lack a complete theoretical understanding of why persuasion is costly for human labor to produce in the first place. This paper places human and AI agents on a common conceptual footing by formalizing informational persuasion as a mathematical decision problem and characterizing its computational complexity. A novel proof establishes that persuasive messages are challenging to discover (NP-Hard) but easy to adopt if supplied by others (NP). This asymmetry helps explain why people are susceptible to persuasion, even in contexts where all relevant information is publicly available. The result also illuminates why litigation, strategic communication, and other persuasion-oriented activities have historically been so human capital intensive, and it provides a new theoretical basis for studying how AI will impact various industries.",0 "Advances in computer vision have opened new avenues for clinical applications, particularly in computerized exposure therapy where visual stimuli can be dynamically adjusted based on patient responses. As a critical step toward such adaptive systems, we investigated whether pretrained computer vision models can accurately predict fear levels from spider-related images. We adapted three diverse models using transfer learning to predict human fear ratings (on a 0-100 scale) from a standardized dataset of 313 images. The models were evaluated using cross-validation, achieving an average mean absolute error (MAE) between 10.1 and 11.0. 
Our learning curve analysis revealed that reducing the dataset size significantly harmed performance, though increasing it further yielded no substantial gains. Explainability assessments showed the models' predictions were based on spider-related features. A category-wise error analysis further identified visual conditions associated with higher errors (e.g., distant views and artificial/painted spiders). These findings demonstrate the potential of explainable computer vision models in predicting fear ratings, highlighting the importance of both model explainability and a sufficient dataset size for developing effective emotion-aware therapeutic technologies.",1 "We investigate the salience of extinction risk as a source of impatience. Our framework distinguishes between human extinction risk and individual mortality risk while allowing for various degrees of intergenerational altruism. Additionally, we consider the evolutionarily motivated ""selfish gene"" perspective. We find that the risk of human extinction is an indispensable component of the discount rate, whereas individual mortality risk can be hedged against -- partially or fully, depending on the setup -- through human reproduction. Overall, we show that in the face of extinction risk, people become more impatient rather than more farsighted. Thus, the greater the threat of extinction, the less incentive there is to invest in avoiding it. Our framework can help explain why humanity consistently underinvests in mitigation of catastrophic risks, ranging from climate change mitigation, via pandemic prevention, to addressing the emerging risks of transformative artificial intelligence.",0 "This paper introduces the Comprehensive Applicant Profile Score (CAPS), a novel multi-modal framework designed to quantitatively model and interpret holistic college admissions evaluations. CAPS decomposes applicant profiles into three interpretable components: academic performance (Standardized Academic Score, SAS), essay quality (Essay Quality Index, EQI), and extracurricular engagement (Extracurricular Impact Score, EIS). Leveraging transformer-based semantic embeddings, LLM scoring, and XGBoost regression, CAPS provides transparent and explainable evaluations aligned with human judgment. Experiments on a synthetic but realistic dataset demonstrate strong performance, achieving an EQI prediction R^2 of 0.80, classification accuracy over 75%, a macro F1 score of 0.69, and a weighted F1 score of 0.74. CAPS addresses key limitations in traditional holistic review -- particularly the opacity, inconsistency, and anxiety faced by applicants -- thus paving the way for more equitable and data-informed admissions practices.",0 "This systematic literature review analyzes the current state of compliance with Regulation (EU) 2024/1689 in autonomous robotic systems, focusing on cybersecurity frameworks and methodologies. Using the PRISMA protocol, 22 studies were selected from 243 initial records across IEEE Xplore, ACM DL, Scopus, and Web of Science. Findings reveal partial regulatory alignment: while progress has been made in risk management and encrypted communications, significant gaps persist in explainability modules, real-time human oversight, and knowledge base traceability. Only 40% of reviewed solutions explicitly address transparency requirements, and 30% implement failure intervention mechanisms. 
The study concludes that modular approaches integrating risk, supervision, and continuous auditing are essential to meet the AI Act mandates in autonomous robotics.",0 "Despite the continued anthropomorphization of AI systems, the potential impact of racialization during human-AI interaction is understudied. This study explores how human-AI cooperation may be impacted by the belief that data used to train an AI system is racialized, that is, it was trained on data from a specific group of people. During this study, participants completed a human-AI cooperation task using the Pig Chase game. A total of 1069 participants of different self-identified demographics interacted with AI agents whose perceived racial identities were manipulated, allowing us to assess how sociocultural perspectives influence the decision-making of participants in the game. After the game, participants completed a survey questionnaire to explain the strategies they used while playing the game and to understand the perceived intelligence of their AI teammates. Statistical analysis of task behavior data revealed a statistically significant effect of the participant's demographic, as well as the interaction between this self-identified demographic and the treatment condition (i.e., the perceived demographic of the agent). The results indicated that Non-White participants viewed AI agents racialized as White in a positive way compared to AI agents racialized as Black. Both Black and White participants viewed the AI agent in the control treatment in a negative way. A baseline cognitive model of the task using the ACT-R cognitive architecture was used to provide a cognitive-level, process-based explanation of the participants' perspectives based on results found from the study. This model helps us better understand the factors affecting the decision-making strategies of the game participants. Results from analysis of these data, as well as cognitive modeling, indicate a need to expand understanding of the ways racialization (whether implicit or explicit) impacts interaction with AI systems.",2 "Human learning embodies a striking duality: sometimes, we appear capable of following logical, compositional rules and benefit from structured curricula (e.g., in formal education), while other times, we rely on an incremental approach or trial-and-error, learning better from curricula that are randomly interleaved. Influential psychological theories explain this seemingly disparate behavioral evidence by positing two qualitatively different learning systems -- one for rapid, rule-based inferences and another for slow, incremental adaptation. It remains unclear how to reconcile such theories with neural networks, which learn via incremental weight updates and are thus a natural model for the latter type of learning, but are not obviously compatible with the former. However, recent evidence suggests that metalearning neural networks and large language models are capable of ""in-context learning"" (ICL) -- the ability to flexibly grasp the structure of a new task from a few examples. Here, we show that the dynamic interplay between ICL and default in-weight learning (IWL) naturally captures a broad range of learning phenomena observed in humans, reproducing curriculum effects on category-learning and compositional tasks, and recapitulating a tradeoff between flexibility and retention. 
Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties that can coexist with their native IWL, thus offering a novel perspective on dual-process theories and human cognitive flexibility.",0 "Compositionality has long been considered a key explanatory property underlying human intelligence: arbitrary concepts can be composed into novel complex combinations, permitting the acquisition of an open ended, potentially infinite expressive capacity from finite learning experiences. Influential arguments have held that neural networks fail to explain this aspect of behavior, leading many to dismiss them as viable models of human cognition. Over the last decade, however, modern deep neural networks (DNNs), which share the same fundamental design principles as their predecessors, have come to dominate artificial intelligence, exhibiting the most advanced cognitive behaviors ever demonstrated in machines. In particular, large language models (LLMs), DNNs trained to predict the next word on a large corpus of text, have proven capable of sophisticated behaviors such as writing syntactically complex sentences without grammatical errors, producing cogent chains of reasoning, and even writing original computer programs -- all behaviors thought to require compositional processing. In this chapter, we survey recent empirical work from machine learning for a broad audience in philosophy, cognitive science, and neuroscience, situating recent breakthroughs within the broader context of philosophical arguments about compositionality. In particular, our review emphasizes two approaches to endowing neural networks with compositional generalization capabilities: (1) architectural inductive biases, and (2) metalearning, or learning to learn. We also present findings suggesting that LLM pretraining can be understood as a kind of metalearning, and can thereby equip DNNs with compositional generalization abilities in a similar way. We conclude by discussing the implications that these findings may have for the study of compositionality in human cognition and by suggesting avenues for future research.",0 "Multimodal misinformation, encompassing textual, visual, and cross-modal distortions, poses an increasing societal threat that is amplified by generative AI. Existing methods typically focus on a single type of distortion and struggle to generalize to unseen scenarios. In this work, we observe that different distortion types share common reasoning capabilities while also requiring task-specific skills. We hypothesize that joint training across distortion types facilitates knowledge sharing and enhances the model's ability to generalize. To this end, we introduce TRUST-VL, a unified and explainable vision-language model for general multimodal misinformation detection. TRUST-VL incorporates a novel Question-Aware Visual Amplifier module, designed to extract task-specific visual features. To support training, we also construct TRUST-Instruct, a large-scale instruction dataset containing 198K samples featuring structured reasoning chains aligned with human fact-checking workflows. Extensive experiments on both in-domain and zero-shot benchmarks demonstrate that TRUST-VL achieves state-of-the-art performance, while also offering strong generalization and interpretability.",0 "As software systems grow increasingly complex, explainability has become a crucial non-functional requirement for transparency, user trust, and regulatory compliance. 
Eliciting explainability requirements is challenging, as different methods capture varying levels of detail and structure. This study examines the efficiency and effectiveness of three commonly used elicitation methods -- focus groups, interviews, and online surveys -- while also assessing the role of taxonomy usage in structuring and improving the elicitation process. We conducted a case study at a large German IT consulting company, utilizing web-based personnel management software. A total of two focus groups, 18 interviews, and an online survey with 188 participants were analyzed. The results show that interviews were the most efficient, capturing the highest number of distinct needs per participant per time spent. Surveys collected the most explanation needs overall but had high redundancy. Delayed taxonomy introduction resulted in a greater number and diversity of needs, suggesting that a two-phase approach is beneficial. Based on our findings, we recommend a hybrid approach combining surveys and interviews to balance efficiency and coverage. Future research should explore how automation can support elicitation and how taxonomies can be better integrated into different methods.",2 "Generative AI, such as Large Language Models (LLMs), has achieved impressive progress but still produces hallucinations and unverifiable claims, limiting reliability in sensitive domains. Retrieval-Augmented Generation (RAG) improves accuracy by grounding outputs in external knowledge, especially in domains like healthcare, where precision is vital. However, RAG remains opaque and essentially a black box, heavily dependent on data quality. We developed a method-agnostic, perturbation-based framework that provides token- and component-level interpretability for Graph RAG using SMILE, and named it Knowledge-Graph (KG)-SMILE. By applying controlled perturbations, computing similarities, and training weighted linear surrogates, KG-SMILE identifies the graph entities and relations most influential to generated outputs, thereby making RAG more transparent. We evaluate KG-SMILE using comprehensive attribution metrics, including fidelity, faithfulness, consistency, stability, and accuracy. Our findings show that KG-SMILE produces stable, human-aligned explanations, demonstrating its capacity to balance model effectiveness with interpretability and thereby fostering greater transparency and trust in machine learning technologies.",0 "Capturing the similarities between human language units is crucial for explaining how humans associate different objects, and therefore its computation has received extensive attention, research, and applications. With the ever-increasing amount of information around us, calculating similarity becomes increasingly complex; in many cases, such as legal or medical affairs, measuring similarity requires extra care and precision, as small acts within a language unit can have significant real-world effects. My research goal in this thesis is to develop regression models that account for similarities between language units in a more refined way. Computation of similarity has come a long way, but approaches to debugging the measures are often based on continually fitting human judgment values. To this end, my goal is to develop an algorithm that precisely catches loopholes in a similarity calculation. Furthermore, most methods have vague definitions of the similarities they compute and are often difficult to interpret. The proposed framework addresses both shortcomings. 
It continually improves the model by catching different loopholes. In addition, every refinement of the model provides a reasonable explanation. The regression model introduced in this thesis is called progressively refined similarity computation, which combines attack testing with adversarial training. The similarity regression model of this thesis achieves state-of-the-art performance in handling edge cases.",0 "Human cognition is profoundly shaped by the environments in which it unfolds. Yet, it remains an open question whether learning and decision making can be explained as a principled adaptation to the statistical structure of real-world tasks. We introduce ecologically rational analysis, a computational framework that unifies the normative foundations of rational analysis with ecological grounding. Leveraging large language models to generate ecologically valid cognitive tasks at scale, and using meta-learning to derive rational models optimized for these environments, we develop a new class of learning algorithms: Ecologically Rational Meta-learned Inference (ERMI). ERMI internalizes the statistical regularities of naturalistic problem spaces and adapts flexibly to novel situations, without requiring hand-crafted heuristics or explicit parameter updates. We show that ERMI captures human behavior across 15 experiments spanning function learning, category learning, and decision making, outperforming several established cognitive models in trial-by-trial prediction. Our results suggest that much of human cognition may reflect adaptive alignment to the ecological structure of the problems we encounter in everyday life.",0 "Detecting elephants through seismic signals is an emerging research topic aimed at developing solutions for Human-Elephant Conflict (HEC). Despite the promising results, such solutions heavily rely on manual classification of elephant footfalls, which limits their applicability for real-time classification in natural settings. To address this limitation and build on our previous work, this study introduces a classification framework targeting resource-constrained implementations, prioritizing both accuracy and computational efficiency. As part of this framework, a novel event detection technique named Contextually Customized Windowing (CCW), tailored specifically for detecting elephant footfalls, was introduced, and evaluations were conducted by comparing it with the Short-Term Average/Long-Term Average (STA/LTA) method. The results show that the maximum validated detection range was 155.6 m in controlled conditions and 140 m in natural environments. Elephant footfall classification using a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel demonstrated superior performance across multiple settings, achieving an accuracy of 99% in controlled environments, 73% in natural elephant habitats, and 70% in HEC-prone human habitats, the most challenging scenario. Furthermore, feature impact analysis using explainable AI identified the number of Zero Crossings and Dynamic Time Warping (DTW) Alignment Cost as the most influential factors in all experiments, while Predominant Frequency exhibited significant influence in controlled settings.",0 "Building a conversational embodied agent to execute real-life tasks has been a long-standing yet quite challenging research goal, as it requires effective human-agent communication, multi-modal understanding, long-range sequential decision making, etc. 
Traditional symbolic methods have scaling and generalization issues, while end-to-end deep learning models suffer from data scarcity and high task complexity, and are often hard to explain. To benefit from both worlds, we propose JARVIS, a neuro-symbolic commonsense reasoning framework for modular, generalizable, and interpretable conversational embodied agents. First, it acquires symbolic representations by prompting large language models (LLMs) for language understanding and sub-goal planning, and by constructing semantic maps from visual observations. Then the symbolic module reasons for sub-goal planning and action generation based on task- and action-level common sense. Extensive experiments on the TEACh dataset validate the efficacy and efficiency of our JARVIS framework, which achieves state-of-the-art (SOTA) results on all three dialog-based embodied tasks, including Execution from Dialog History (EDH), Trajectory from Dialog (TfD), and Two-Agent Task Completion (TATC) (e.g., our method boosts the unseen Success Rate on EDH from 6.1\% to 15.8\%). Moreover, we systematically analyze the essential factors that affect the task performance and also demonstrate the superiority of our method in few-shot settings. Our JARVIS model ranks first in the Alexa Prize SimBot Public Benchmark Challenge.",0 "The Internet of Medical Things transcends traditional medical boundaries, enabling a transition from reactive treatment to proactive prevention. This innovative method revolutionizes healthcare by facilitating early disease detection and tailored care, particularly in chronic disease management, where IoMT automates treatments based on real-time health data collection. Nonetheless, its benefits are countered by significant security challenges that endanger the lives of its users due to the sensitivity and value of the processed data, thereby attracting malicious interests. Moreover, the utilization of wireless communication for data transmission exposes medical data to interception and tampering by cybercriminals. Additionally, anomalies may arise due to human error, network interference, or hardware malfunctions. In this context, anomaly detection based on Machine Learning (ML) is an interesting solution, but it comes up against obstacles in terms of explicability and privacy protection. To address these challenges, a new framework for Intrusion Detection Systems is introduced, leveraging Artificial Neural Networks for intrusion detection while utilizing Federated Learning for privacy preservation. Additionally, eXplainable Artificial Intelligence methods are incorporated to enhance model explanation and interpretation. The efficacy of the proposed framework is evaluated and compared with centralized approaches using multiple datasets containing network and medical data, simulating various attack types impacting the confidentiality, integrity, and availability of medical and physiological data. The results offer compelling evidence that the FL method performs comparably to the centralized method, demonstrating high performance. Additionally, it affords the dual advantage of safeguarding privacy and providing model explanation while adhering to ethical principles.",0 "Explainability in AI and ML models is critical for fostering trust, ensuring accountability, and enabling informed decision making in high stakes domains. Yet this objective is often unmet in practice. 
This paper proposes a general-purpose framework that bridges state-of-the-art explainability techniques with Malle's five-category model of behavior explanation: Knowledge Structures, Simulation/Projection, Covariation, Direct Recall, and Rationalization. The framework is designed to be applicable across AI-assisted decision-making systems, with the goal of enhancing transparency, interpretability, and user trust. We demonstrate its practical relevance through real-world case studies, including credit risk assessment and regulatory analysis powered by large language models (LLMs). By aligning technical explanations with human cognitive mechanisms, the framework lays the groundwork for more comprehensible, responsible, and ethical AI systems.",0 "In recent years, deep learning has achieved unprecedented success in various computer vision tasks, particularly in object detection. However, the black-box nature and high complexity of deep neural networks pose significant challenges for interpretability, especially in critical domains such as autonomous driving, medical imaging, and security systems. Explainable Artificial Intelligence (XAI) aims to address this challenge by providing tools and methods to make model decisions more transparent, interpretable, and trustworthy for humans. This review provides a comprehensive analysis of state-of-the-art explainability methods specifically applied to object detection models. The paper begins by categorizing existing XAI techniques based on their underlying mechanisms -- perturbation-based, gradient-based, backpropagation-based, and graph-based methods. Notable methods such as D-RISE, BODEM, D-CLOSE, and FSOD are discussed in detail. Furthermore, the paper investigates their applicability to various object detection architectures, including YOLO, SSD, Faster R-CNN, and EfficientDet. Statistical analysis of publication trends from 2022 to mid-2025 shows an accelerating interest in explainable object detection, indicating its increasing importance. The study also explores common datasets and evaluation metrics, and highlights the major challenges associated with model interpretability. By providing a structured taxonomy and a critical assessment of existing methods, this review aims to guide researchers and practitioners in selecting suitable explainability techniques for object detection applications and to foster the development of more interpretable AI systems.",0 "Physics provides fundamental laws that describe and predict the natural world. AI systems aspiring toward more general, real-world intelligence must therefore demonstrate strong physics problem-solving abilities: to formulate and apply physical laws for explaining and predicting physical processes. The International Physics Olympiad (IPhO)--the world's most prestigious physics competition--offers a rigorous benchmark for this purpose. We introduce Physics Supernova, an AI agent system with superior physics problem-solving abilities that match elite IPhO gold medalists. In IPhO 2025 theory problems, Physics Supernova attains 23.5/30 points, ranking 14th of 406 contestants and surpassing the median performance of human gold medalists. We extensively analyzed Physics Supernova's capabilities and flexibility across diverse physics tasks. These results show that principled tool integration within agent systems can deliver competitive improvements in solving challenging science problems. 
The code is available at https://github.com/CharlesQ9/Physics-Supernova.",0 "Federated Learning (FL) is a widespread and well-adopted paradigm of decentralised learning that allows training one model from multiple sources without the need to transfer data between participating clients directly. Since its inception in 2015, it has been divided into numerous subfields that deal with application-specific issues, such as data heterogeneity or resource allocation. One such sub-field, Clustered Federated Learning (CFL), deals with the problem of clustering the population of clients into separate cohorts to deliver personalised models. Although a few remarkable works have been published in this domain, the problem remains largely unexplored, as its basic assumptions and settings differ slightly from those of standard FL. In this work, we present One-Shot Clustered Federated Learning (OCFL), a clustering-agnostic algorithm that can automatically detect the earliest suitable moment for clustering. Our algorithm is based on computing the cosine distance between the gradients of the clients and a temperature measure that detects when the federated model starts to converge. We empirically evaluate our methodology by testing various one-shot clustering algorithms for over forty different tasks on five benchmark datasets. Our experiments showcase the good performance of our approach when used to perform CFL in an automated manner without the need to adjust hyperparameters. We also revisit the practical feasibility of CFL algorithms based on the gradients of the clients, providing firm evidence of the high efficiency of density-based clustering methods when used to differentiate between the loss surfaces of neural networks trained on different distributions. Moreover, by inspecting the feasibility of local explanations generated with the help of GradCAM, we can provide more insights into the relationship between personalisation and the explainability of local predictions.",0 "Online harms are a growing problem in digital spaces, putting user safety at risk and reducing trust in social media platforms. One of the most persistent forms of harm is hate speech. To address this, we need tools that combine the speed and scale of automated systems with the judgment and insight of human moderators. These tools should not only find harmful content but also explain their decisions clearly, helping to build trust and understanding. In this paper, we present WATCHED, a chatbot designed to support content moderators in tackling hate speech. The chatbot is built as an Artificial Intelligence Agent system that uses Large Language Models along with several specialised tools. It compares new posts with real examples of hate speech and neutral content, uses a BERT-based classifier to help flag harmful messages, looks up slang and informal language using sources like Urban Dictionary, generates chain-of-thought reasoning, and checks platform guidelines to explain and support its decisions. This combination allows the chatbot not only to detect hate speech but to explain why content is considered harmful, grounded in both precedent and policy. Experimental results show that our proposed method surpasses existing state-of-the-art methods, reaching a macro F1 score of 0.91. 
Designed for moderators, safety teams, and researchers, the tool helps reduce online harms by supporting collaboration between AI and human oversight.",0 "Large Language Models (LLMs) have achieved remarkable performance in general domains and are now extending into the expert domain of law. Several benchmarks have been proposed to evaluate LLMs' legal capabilities. However, these benchmarks fail to evaluate open-ended and provision-grounded Question Answering (QA). To address this, we introduce a Korean Benchmark for Legal EXplainable QA (KoBLEX), designed to evaluate provision-grounded, multi-hop legal reasoning. KoBLEX includes 226 scenario-based QA instances and their supporting provisions, created using a hybrid LLM-human expert pipeline. We also propose a method called Parametric provision-guided Selection Retrieval (ParSeR), which uses LLM-generated parametric provisions to guide legally grounded and reliable answers. ParSeR facilitates multi-hop reasoning on complex legal questions by generating parametric provisions and employing a three-stage sequential retrieval process. Furthermore, to better evaluate the legal fidelity of the generated answers, we propose Legal Fidelity Evaluation (LF-Eval). LF-Eval is an automatic metric that jointly considers the question, answer, and supporting provisions and shows a high correlation with human judgments. Experimental results show that ParSeR consistently outperforms strong baselines, achieving the best results across multiple LLMs. Notably, compared to standard retrieval with GPT-4o, ParSeR achieves +37.91 higher F1 and +30.81 higher LF-Eval. Further analyses reveal that ParSeR efficiently delivers consistent performance across reasoning depths, with ablations confirming its effectiveness.",0 "Forecasting future links is a central task in temporal graph (TG) reasoning, requiring models to leverage historical interactions to predict upcoming ones. Traditional neural approaches, such as temporal graph neural networks, achieve strong performance but lack explainability and cannot be applied to unseen graphs without retraining. Recent studies have begun to explore using large language models (LLMs) for graph reasoning, but most of them are constrained to static graphs or small synthetic TGs and lack the evaluation of the quality of reasoning traces generated by LLMs. In this work, we present Reasoning-Enhanced Learning for Temporal Graphs (ReaL-TG), a reinforcement learning framework that fine-tunes LLMs to perform explainable link forecasting on real-world TGs. ReaL-TG uses an outcome-based reward to encourage models to self-explore reasoning strategies from graph structure and to produce explanations that directly justify their predictions. To enable evaluation on LLM-generated reasoning traces, we propose a new evaluation protocol combining ranking metrics with an LLM-as-a-Judge system that assesses both the quality of reasoning and the impact of hallucinations. Experiments with ReaL-TG-4B, obtained by fine-tuning Qwen3-4B under our framework, show that it outperforms much larger frontier LLMs, including GPT-5 mini, on ranking metrics, while producing high-quality explanations confirmed by both the LLM judge and human evaluation.",1 "Transferring knowledge across generations is fundamental to human civilization, yet the challenge of passing on complex practical skills persists. Methods without a physically present instructor, such as videos, often fail to explain complex manual tasks, where spatial and social factors are critical. 
Technologies such as eXtended Reality and Artificial Intelligence hold the potential to retain expert knowledge and facilitate the creation of tailored, contextualized, and asynchronous explanations regardless of time and place. In contrast to videos, the learner's perspective can be different from the recorded perspective in XR. This paper investigates the impact of asynchronous first- and third-person perspectives and gaze visualizations on efficiency, feeling of embodiment, and connectedness during manual tasks. The empirical results of our study (N=36) show that the first-person perspective is better in quantitative measures and preferred by users. We identify best practices for presenting preserved knowledge and provide guidelines for designing future systems.",0 "This position paper argues that annotation disagreement in Natural Language Inference (NLI) is not mere noise but often reflects meaningful variation, especially when triggered by ambiguity in the premise or hypothesis. While underspecified guidelines and annotator behavior contribute to variation, content-based ambiguity provides a process-independent signal of divergent human perspectives. We call for a shift toward ambiguity-aware NLI that first identifies ambiguous input pairs, classifies their types, and only then proceeds to inference. To support this shift, we present a framework that incorporates ambiguity detection and classification prior to inference. We also introduce a unified taxonomy that synthesizes existing taxonomies, illustrates key subtypes with examples, and motivates targeted detection methods that better align models with human interpretation. Although current resources lack datasets explicitly annotated for ambiguity and subtypes, this gap presents an opportunity: by developing new annotated resources and exploring unsupervised approaches to ambiguity detection, we enable more robust, explainable, and human-aligned NLI systems.",0 "Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. 
The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.",0 "We propose a novel SuperBrain framework for collective intelligence, grounded in the co-evolution of large language models (LLMs) and human users. Unlike static prompt engineering or isolated agent simulations, our approach emphasizes a dynamic pathway from Subclass Brain to Superclass Brain: (1) A Subclass Brain arises from persistent, personalized interaction between a user and an LLM, forming a cognitive dyad with adaptive learning memory. (2) Through GA-assisted forward-backward evolution, these dyads iteratively refine prompts and task performance. (3) Multiple Subclass Brains coordinate via Swarm Intelligence, optimizing across multi-objective fitness landscapes and exchanging distilled heuristics. (4) Their standardized behaviors and cognitive signatures integrate into a Superclass Brain, an emergent meta-intelligence capable of abstraction, generalization and self-improvement. We outline the theoretical constructs, present initial implementations (e.g., UAV scheduling, KU/KI keyword filtering) and propose a registry for cross-dyad knowledge consolidation. This work provides both a conceptual foundation and an architectural roadmap toward scalable, explainable and ethically aligned collective AI.",0 "Interior design involves the careful selection and arrangement of objects to create an aesthetically pleasing, functional, and harmonized space that aligns with the client's design brief. This task is particularly challenging, as a successful design must not only incorporate all the necessary objects in a cohesive style, but also ensure they are arranged in a way that maximizes accessibility, while adhering to a variety of affordability and usage considerations. Data-driven solutions have been proposed, but these are typically room- or domain-specific and lack explainability in the design considerations used to produce the final layout. In this paper, we investigate whether large language models (LLMs) can be directly utilized for interior design. While we find that LLMs are not yet capable of generating complete layouts, they can be effectively leveraged in a structured manner, inspired by the workflow of interior designers. By systematically probing LLMs, we can reliably generate a list of objects along with relevant constraints that guide their placement. We translate this information into a design layout graph, which is then solved using an off-the-shelf constrained optimization setup to generate the final layouts. We benchmark our algorithm in various design configurations against existing LLM-based methods and human designs, and evaluate the results using a variety of quantitative and qualitative metrics along with user studies. In summary, we demonstrate that LLMs, when used in a structured manner, can effectively generate diverse high-quality layouts, making them a viable solution for creating large-scale virtual scenes. Project webpage at https://flairgpt.github.io/",1 "It is known that big data analytics and AI pose a threat to privacy, and that some of this is due to some kind of ""black box problem"" in AI. I explain how this becomes a problem in the context of justification for judgments and actions. 
Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does (""shallow opacity""), 2) the analysts do not know what the system does (""standard black box opacity""), or 3) the analysts cannot possibly know what the system might do (""deep opacity""). If the agents, data subjects as well as analytics experts, operate under opacity, then these agents cannot provide justifications for judgments that are necessary to protect privacy, e.g., they cannot give ""informed consent"", or guarantee ""anonymity"". It follows from these points that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. So I conclude that big data analytics makes the privacy problems worse and the remedies less effective. As a positive note, I provide a brief outlook on technical ways to handle this situation.",0 "The evolution of technology and education is driving the emergence of Intelligent & Autonomous Tutoring Systems (IATS), where objective and domain-agnostic methods for determining question difficulty are essential. Traditional human labeling is subjective, and existing NLP-based approaches fail in symbolic domains like algebra. This study introduces the Approach of Passive Measures among Educands (APME), a reinforcement learning-based Multi-Armed Bandit (MAB) framework that estimates difficulty solely from solver performance data -- marks obtained and time taken -- without requiring linguistic features or expert labels. By leveraging the inverse coefficient of variation as a risk-adjusted metric, the model provides an explainable and scalable mechanism for adaptive assessment. Empirical validation was conducted on three heterogeneous datasets. Across these diverse contexts, the model achieved an average R^2 of 0.9213 and an average RMSE of 0.0584, confirming its robustness, accuracy, and adaptability to different educational levels and assessment formats. Compared with baseline approaches -- such as regression-based, NLP-driven, and IRT models -- the proposed framework consistently outperformed alternatives, particularly in purely symbolic domains. The findings highlight that (i) item heterogeneity strongly influences perceived difficulty, and (ii) variance in solver outcomes is as critical as mean performance for adaptive allocation. Pedagogically, the model aligns with Vygotsky's Zone of Proximal Development by identifying tasks that balance challenge and attainability, supporting motivation while minimizing disengagement. This domain-agnostic, self-supervised approach advances difficulty tagging in IATS and can be extended beyond algebra wherever solver interaction data is available.",0 "This study analyzes the 2018 Chinese Household Income Project survey data to evaluate the income gaps between an ""outsider"" ethnic minority group, the Mongols, an ""insider"" ethnic minority group, the Manchus, and the majority Han group in urban and rural areas of Liaoning province and Inner Mongolia in China. Three statistical methods, a simple first-order OLS linear regression, linear regressions with interaction terms, and the Blinder-Oaxaca Decomposition, are used to investigate the income disparity amongst the three groups. 
The results indicate that Mongols suffer a significant ethnic wage penalty attributable to possible discrimination in the rural areas of these two provinces, while the urban income gaps between the three groups can mostly be explained by participation in public sector occupations or affiliation with the Chinese Communist Party. In rural settings, Mongols also have higher returns to public sector jobs and CCP membership compared to the other two ethnic groups. The findings suggest that China's affirmative action measures in ethnic policy are effective in accelerating the integration of ethnic minorities with the Han in labor market outcomes. This conclusion is consistent with previous studies.",1 "Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed a black box that impedes practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature devoted to shedding light on the inner workings of intelligent agents has emerged, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy where prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote learning efficiency and performance of agents, even though this kind of method is often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open-source code is collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.",2 "Existing Causal-Why Video Question Answering (VideoQA) models often struggle with higher-order reasoning, relying on opaque, monolithic pipelines that entangle video understanding, causal inference, and answer generation. These black-box approaches offer limited interpretability and tend to depend on shallow heuristics. We propose a novel, modular framework that explicitly decouples causal reasoning from answer generation, introducing natural language causal chains as interpretable intermediate representations. Inspired by human cognitive models, these structured cause-effect sequences bridge low-level video content with high-level causal reasoning, enabling transparent and logically coherent inference. Our two-stage architecture comprises a Causal Chain Extractor (CCE) that generates causal chains from video-question pairs, and a Causal Chain-Driven Answerer (CCDA) that produces answers grounded in these chains. To address the lack of annotated reasoning traces, we introduce a scalable method for generating high-quality causal chains from existing datasets using large language models. We also propose CauCo, a new evaluation metric for causality-oriented captioning. 
Experiments on three large-scale benchmarks demonstrate that our approach not only outperforms state-of-the-art models, but also yields substantial gains in explainability, user trust, and generalization -- positioning the CCE as a reusable causal reasoning engine across diverse domains. Project page: https://paritoshparmar.github.io/chainreaction/",0 "The functional computation of the human brain arises from the collective behaviour of the underlying neural network. The emerging technology enables the recording of population activity in neurons, and the theory of neural networks is expected to explain and extract functional computations from the data. Thermodynamically, a large proportion of the whole-body energy is consumed by the brain, and functional computation of the human brain seems to involve high energy consumption. The human brain, however, does not increase its energy consumption with its function, and most of its energy consumption is not involved in specific brain function: how can the human brain perform its wide repertoire of functional computations without drastically changing its energy consumption? Here, we present a mechanism to perform functional computation by subtle modification of the interaction network among the brain regions. We first show that, by analyzing the data of spontaneous and task-induced whole-cerebral-cortex activity, the probability fluxes, which are the microscopic irreversible measure of state transitions, exhibit unique patterns depending on the task being performed, indicating that the human brain function is a distinct sequence of the brain state transitions. We then fit the parameters of Ising spin systems with asymmetric interactions, where we reveal that the symmetric interactions among the brain regions are strong and task-independent, but the antisymmetric interactions are subtle and task-dependent, and the inferred model reproduces most of the observed probability flux patterns. Our results indicate that the human brain performs its functional computation by subtly modifying the antisymmetric interaction among the brain regions, which might be possible with a small amount of energy.",0 "Speech-to-Speech (S2S) Large Language Models (LLMs) are foundational to natural human-computer interaction, enabling end-to-end spoken dialogue systems. However, evaluating these models remains a fundamental challenge. We propose \texttt{SageLM}, an end-to-end, multi-aspect, and explainable speech LLM for comprehensive S2S LLMs evaluation. First, unlike cascaded approaches that disregard acoustic features, SageLM jointly assesses both semantic and acoustic dimensions. Second, it leverages rationale-based supervision to enhance explainability and guide model learning, achieving superior alignment with evaluation outcomes compared to rule-based reinforcement learning methods. Third, we introduce \textit{SpeechFeedback}, a synthetic preference dataset, and employ a two-stage training paradigm to mitigate the scarcity of speech preference data. Trained on both semantic and acoustic dimensions, SageLM achieves an 82.79\% agreement rate with human evaluators, outperforming cascaded and SLM-based baselines by at least 7.42\% and 26.20\%, respectively.",0 "Counterfactual reasoning -- the practice of asking ``what if'' by varying inputs and observing changes in model behavior -- has become central to interpretable and fair AI. This thesis develops frameworks that use counterfactuals to explain, audit, and mitigate bias in vision classifiers and generative models. 
By systematically altering semantically meaningful attributes while holding others fixed, these methods uncover spurious correlations, probe causal dependencies, and help build more robust systems. The first part addresses vision classifiers. CAVLI integrates attribution (LIME) with concept-level analysis (TCAV) to quantify how strongly decisions rely on human-interpretable concepts. With localized heatmaps and a Concept Dependency Score, CAVLI shows when models depend on irrelevant cues like backgrounds. Extending this, ASAC introduces adversarial counterfactuals that perturb protected attributes while preserving semantics. Through curriculum learning, ASAC fine-tunes biased models for improved fairness and accuracy while avoiding stereotype-laden artifacts. The second part targets generative Text-to-Image (TTI) models. TIBET provides a scalable pipeline for evaluating prompt-sensitive biases by varying identity-related terms, enabling causal auditing of how race, gender, and age affect image generation. To capture interactions, BiasConnect builds causal graphs diagnosing intersectional biases. Finally, InterMit offers a modular, training-free algorithm that mitigates intersectional bias via causal sensitivity scores and user-defined fairness goals. Together, these contributions show counterfactuals as a unifying lens for interpretability, fairness, and causality in both discriminative and generative models, establishing principled, scalable methods for socially responsible bias evaluation and mitigation.",0 "We propose a way to organise the subject of ``higher-order homological stability'', in the context of a graded $E_2$-algebra $\mathbf{R}$, along the same lines that the chromatic perspective organises stable homotopy theory. From this point of view proving a (higher-order) homological stability theorem corresponds to producing Smith--Toda complexes in the category of $\mathbf{R}$-modules: using this perspective we prove that whenever $\mathbf{R}$ is defined over a field of positive characteristic and satisfies some standard properties, there is a sequence of higher-order homological stability theorems whose slopes tend to 1. We propose that in a higher-order stable range the ``stable homology'' should be interpreted as certain Bousfield localisations in the category of $\mathbf{R}$-modules, leading to a chromatic tower and monochromatic layers. Given the existence of suitable Smith--Toda complexes we establish several properties of these localisations, in particular explaining how higher-order stabilisation maps yield periodic families in the monochromatic layers. We explain how to associate to such an $\mathbf{R}$ a Hopf algebra which completely governs the kinds of higher-order stability maps that it enjoys, in the sense that the cohomology of this Hopf algebra has precisely the same stability patterns as $\mathbf{R}$. When $\mathbf{R}$ comes from a sequence of groups, this Hopf algebra has a concrete description as the coinvariants of the $E_1$-Steinberg modules.",0 "In Massively Multiplayer Online Role-Playing Games (MMORPGs), auto-leveling bots exploit automated programs to level up characters at scale, undermining gameplay balance and fairness. Detecting such bots is challenging, not only because they mimic human behavior, but also because punitive actions require explainable justification to avoid legal and user experience issues. 
In this paper, we present a novel framework for detecting auto-leveling bots by leveraging contrastive representation learning and clustering techniques in a fully unsupervised manner to identify groups of characters with similar level-up patterns. To ensure reliable decisions, we incorporate a Large Language Model (LLM) as an auxiliary reviewer to validate the clustered groups, effectively mimicking a secondary human judgment. We also introduce a growth curve-based visualization to assist both the LLM and human moderators in assessing leveling behavior. This collaborative approach improves the efficiency of bot detection workflows while maintaining explainability, thereby supporting scalable and accountable bot regulation in MMORPGs.",0 "Deep generative models like VAEs and diffusion models have advanced various generation tasks by leveraging latent variables to learn data distributions and generate high-quality samples. Despite the field of explainable AI making strides in interpreting machine learning models, understanding latent variables in generative models remains challenging. This paper introduces LatentExplainer, a framework for automatically generating semantically meaningful explanations of latent variables in deep generative models. LatentExplainer tackles three main challenges: inferring the meaning of latent variables, aligning explanations with inductive biases, and handling varying degrees of explainability. Our approach perturbs latent variables, interprets changes in generated data, and uses multimodal large language models (MLLMs) to produce human-understandable explanations. We evaluate our proposed method on several real-world and synthetic datasets, and the results demonstrate superior performance in generating high-quality explanations for latent variables. The results highlight the effectiveness of incorporating inductive biases and uncertainty quantification, significantly enhancing model interpretability.",0 "AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated automatically without human raters. Two behavioural findings emerge. First, persona-over-model: persona design explains behaviour more than model identity. Second, engagement as a stressor: when replies must counter arguments, ideological adherence strengthens and the prevalence of extreme content increases. We demonstrate that fully automated influence-content production is within reach of both large and small actors. Consequently, defence should shift from restricting model access towards conversation-centric detection and disruption of campaigns and coordination infrastructure. Paradoxically, the very consistency that enables these operations also provides a detection signature.",0 "The growing adoption of foundation models calls for a paradigm shift from Data Science to Model Science. Unlike data-centric approaches, Model Science places the trained model at the core of analysis, aiming to interact with, verify, explain, and control its behavior across diverse operational contexts. 
This paper introduces a conceptual framework for a new discipline called Model Science, along with a proposal for its four key pillars: Verification, which requires strict, context-aware evaluation protocols; Explanation, which is understood as various approaches for exploring internal model operations; Control, which integrates alignment techniques to steer model behavior; and Interface, which develops interactive and visual explanation tools to improve human calibration and decision-making. The proposed framework aims to guide the development of credible, safe, and human-aligned AI systems.",0 "Compositional visual reasoning has emerged as a key research frontier in multimodal AI, aiming to endow machines with the human-like ability to decompose visual scenes, ground intermediate concepts, and perform multi-step logical inference. While early surveys focus on monolithic vision-language models or general multimodal reasoning, a dedicated synthesis of the rapidly expanding compositional visual reasoning literature is still missing. We fill this gap with a comprehensive survey spanning 2023 to 2025 that systematically reviews 260+ papers from top venues (CVPR, ICCV, NeurIPS, ICML, ACL, etc.). We first formalize core definitions and describe why compositional approaches offer advantages in cognitive alignment, semantic fidelity, robustness, interpretability, and data efficiency. Next, we trace a five-stage paradigm shift: from prompt-enhanced language-centric pipelines, through tool-enhanced LLMs and tool-enhanced VLMs, to recently minted chain-of-thought reasoning and unified agentic VLMs, highlighting their architectural designs, strengths, and limitations. We then catalog 60+ benchmarks and corresponding metrics that probe compositional visual reasoning along dimensions such as grounding accuracy, chain-of-thought faithfulness, and high-resolution perception. Drawing on these analyses, we distill key insights, identify open challenges (e.g., limitations of LLM-based reasoning, hallucination, a bias toward deductive reasoning, scalable supervision, tool integration, and benchmark limitations), and outline future directions, including world-model integration, human-AI collaborative reasoning, and richer evaluation protocols. By offering a unified taxonomy, historical roadmap, and critical outlook, this survey aims to serve as a foundational reference and inspire the next generation of compositional visual reasoning research.",0 "The use of children's drawings to examine their conceptual understanding has been proven to be an effective method, but there are two major problems with previous research: 1. The content of the drawings heavily relies on the task, and the ecological validity of the conclusions is low; 2. The interpretation of drawings relies too much on the subjective feelings of the researchers. To address these issues, this study uses a Large Language Model (LLM) to identify 1420 children's scientific drawings (covering 9 scientific themes/concepts), and uses the word2vec algorithm to calculate their semantic similarity. The study explores whether there are consistent drawing representations for children on the same theme, and attempts to establish a norm for children's scientific drawings, providing a baseline reference for follow-up children's drawing research. The results show that the representation of most drawings has consistency, manifested as most semantic similarity values being >0.8. 
At the same time, it was found that the consistency of the representation is independent of the accuracy (of the LLM's recognition), indicating the existence of consistency bias. In the subsequent exploration of influencing factors, we used the Kendall rank correlation coefficient to investigate the effects of ""sample size"", ""abstract degree"", and ""focus points"" on drawings, and used word frequency statistics to explore whether children represented abstract themes/concepts by reproducing what was taught in class. It was found that accuracy (of the LLM's recognition) is the most sensitive indicator, and data such as sample size and semantic similarity are related to it. The consistency between classroom experiments and the teaching purpose is also an important factor: many students focus more on the experiments themselves than on what they explain.",0 "State-sponsored trolls, malicious actors who deploy sophisticated linguistic manipulation in coordinated information campaigns, pose threats to online discourse integrity. While Large Language Models (LLMs) achieve strong performance on general natural language processing (NLP) tasks, they struggle with subtle propaganda detection and operate as ``black boxes'', providing no interpretable insights into manipulation strategies. This paper introduces X-Troll, a novel framework that bridges this gap by integrating explainable adapter-based LLMs with expert-derived linguistic knowledge to detect state-sponsored trolls and provide human-readable explanations for its decisions. X-Troll incorporates appraisal theory and propaganda analysis through specialized LoRA adapters, using dynamic gating to capture campaign-specific discourse patterns in coordinated information operations. Experiments on real-world data demonstrate that our linguistically-informed approach shows strong performance in accuracy compared with both general LLM baselines and existing troll detection models, while providing enhanced transparency through expert-grounded explanations that reveal the specific linguistic strategies used by state-sponsored actors. X-Troll source code is available at: https://github.com/ltian678/xtroll_source/.",0 "The frequency exponent of 1/f noise in graphene-boron nitride heterostructures is known to have multiple extrema in its dependence on the charge carrier concentration. This behavior is explained in the present paper as a result of the charge carrier trapping by impurities in the boron nitride. A kinetic equation for the charge carriers subject to trapping and interacting with acoustic phonons is derived. This equation is solved numerically, and the equilibrium solutions are used to evaluate the frequency exponent according to the quantum theory of 1/f noise. It is found that the frequency exponent does develop several minima and maxima, provided that the trapping probability is sufficiently wide and has a threshold with respect to the charge carrier energy. A detailed comparison with the experimental data is made, and the results are used to estimate the energy threshold and the trapping cross-section.",0 "Explainable AI (XAI) methods often struggle to generate clear, interpretable outputs for users without domain expertise. We introduce Feature-Guided Neighbor Selection (FGNS), a post hoc method that enhances interpretability by selecting class-representative examples using both local and global feature importance. 
In a user study (N = 98) evaluating Kannada script classifications, FGNS significantly improved non-experts' ability to identify model errors while maintaining appropriate agreement with correct predictions. Participants made faster and more accurate decisions compared to those given traditional k-NN explanations. Quantitative analysis shows that FGNS selects neighbors that better reflect class characteristics rather than merely minimizing feature-space distance, leading to more consistent selection and tighter clustering around class prototypes. These results support FGNS as a step toward more human-aligned model assessment, although further work is needed to address the gap between explanation quality and perceived trust.",2 "Perceived risk in automated vehicles (AVs) can create the very danger that automation is meant to prevent: a frightened rider may hesitate when seconds matter, misjudge hazards, or disengage. However, measuring how perceived risk evolves in real time during driving remains challenging, leaving a gap in decoding such hidden psychological states. Here, we present a novel method to time-continuously measure and decode perceived risk. We conducted a controlled experiment where 2,164 participants viewed high-fidelity videos of common highway driving scenes and provided 141,628 discrete safety ratings. Through continuous-signal reconstruction of the discrete ratings, we obtained 236 hours of time-continuous perceived risk data - the largest perceived risk dataset to date. Leveraging this dataset, we trained deep neural networks that predict moment-by-moment perceived risk from vehicle kinematics with a mean relative error below $3\%$. Explainable AI analysis uncovers which factors determine perceived risk in real time. Our findings demonstrate a new paradigm for quantifying dynamic passenger experience and psychological constructs in real time. These findings can guide the design of AVs and other machines that operate in close proximity to people, adjusting behaviour before trust erodes, and help realise automation's benefits in transport, healthcare, and service robotics.",2 "Decision-makers run the risk of relying too much on machine recommendations, which is associated with lower cognitive engagement. Reflection has been shown to increase cognitive engagement and improve critical thinking and therefore decision-making. Questions are a means to stimulate reflection, but there is a research gap regarding the systematic creation and use of relevant questions for machine-assisted decision-making. We therefore present a taxonomy of questions aimed at promoting reflection and cognitive engagement in order to stimulate a deliberate decision-making process. Our taxonomy builds on the Socratic questioning method and a question bank for explainable AI. As a starting point, we focus on clinical decision-making. Brief discussions with two medical and three educational researchers provide feedback on the relevance and expected benefits of our taxonomy. Our work contributes to research on mitigating overreliance in human-AI interactions and aims to support effective human oversight as required by the European AI Act.",0 "As artificial intelligence rapidly transforms society, developers and policymakers struggle to anticipate which applications will face public moral resistance. We propose that these judgments are not idiosyncratic but systematic and predictable. In a large, preregistered study (N = 587, U.S. 
representative sample), we used a comprehensive taxonomy of 100 AI applications spanning personal and organizational contexts, including both functional uses and the moral treatment of AI itself. In participants' collective judgment, applications ranged from highly unacceptable to fully acceptable. We found this variation was strongly predictable: five core moral qualities (perceived risk, benefit, dishonesty, unnaturalness, and reduced accountability) collectively explained over 90% of the variance in acceptability ratings. The framework demonstrated strong predictive power across all domains and successfully predicted individual-level judgments for held-out applications. These findings reveal that a structured moral psychology underlies public evaluation of new technologies, offering a powerful tool for anticipating public resistance and guiding responsible innovation in AI.",2 "Modern neural network technologies, including large language models, have achieved remarkable success in various applied artificial intelligence applications; however, they face a range of fundamental limitations. Among them are hallucination effects, high computational complexity of training and inference, costly fine-tuning, and catastrophic forgetting issues. These limitations significantly hinder the use of neural networks in critical areas such as medicine, industrial process management, and scientific research. This article proposes an alternative approach based on the nearest neighbors method with hierarchical clustering structures. Employing the k-nearest neighbors algorithm significantly reduces or completely eliminates hallucination effects while simplifying model expansion and fine-tuning without the need for retraining the entire network. To overcome the high computational load of the k-nearest neighbors method, the paper proposes using tree-like data structures based on Kohonen self-organizing maps, thereby greatly accelerating nearest neighbor searches. Tests conducted on handwritten digit recognition and simple subtitle translation tasks confirmed the effectiveness of the proposed approach. With only a slight reduction in accuracy, the nearest neighbor search time was reduced hundreds of times compared to exhaustive search methods. The proposed method features transparency and interpretability, closely aligns with human cognitive mechanisms, and demonstrates potential for extensive use in tasks requiring high reliability and explainable results.",0 "We introduce PASTA (Perceptual Assessment System for explanaTion of Artificial Intelligence), a novel human-centric framework for evaluating eXplainable AI (XAI) techniques in computer vision. Our first contribution is the creation of the PASTA-dataset, the first large-scale benchmark that spans a diverse set of models and both saliency-based and concept-based explanation methods. This dataset enables robust, comparative analysis of XAI techniques based on human judgment. Our second contribution is an automated, data-driven benchmark that predicts human preferences using the PASTA-dataset. This scoring method, called PASTA-score, offers scalable, reliable, and consistent evaluation aligned with human perception. Additionally, our benchmark allows for comparisons between explanations across different modalities, an aspect previously unaddressed. 
We then propose to apply our scoring method to probe the interpretability of existing models and to build more human-interpretable XAI methods.",1 "Estimating emotional states from physiological signals is a central topic in affective computing and psychophysiology. While many emotion estimation systems implicitly assume a stable relationship between physiological features and subjective affect, this assumption has rarely been tested over long timeframes. This study investigates whether such relationships remain consistent across several months within individuals. We developed a custom measurement system and constructed a longitudinal dataset by collecting physiological signals -- including blood volume pulse, electrodermal activity (EDA), skin temperature, and acceleration -- along with self-reported emotional states from 24 participants over two three-month periods. Data were collected in naturalistic working environments, allowing analysis of the relationship between physiological features and subjective arousal in everyday contexts. We examined how physiological-arousal relationships evolve over time by using Explainable Boosting Machines (EBMs) to ensure model interpretability. A model trained on 1st-period data showed a 5\% decrease in accuracy when tested on 2nd-period data, indicating long-term variability in physiological-arousal associations. EBM-based comparisons further revealed that while heart rate remained a relatively stable predictor, minimum EDA exhibited substantial individual-level fluctuations between periods. While the number of participants is limited, these findings highlight the need to account for temporal variability in physiological-arousal relationships and suggest that emotion estimation models should be periodically updated -- e.g., every five months -- based on observed shift trends to maintain robust performance over time.",1 "The rapid adoption of large language models (LLMs) in customer service introduces new risks, as malicious actors can exploit them to conduct large-scale user impersonation through machine-generated text (MGT). Current MGT detection methods often struggle in online conversational settings, reducing the reliability and interpretability essential for trustworthy AI deployment. In customer service scenarios where operators are typically non-expert users, explanations become crucial for trustworthy MGT detection. In this paper, we propose EMMM, an explanation-then-detection framework that balances latency, accuracy, and non-expert-oriented interpretability. Experimental results demonstrate that EMMM provides explanations accessible to non-expert users, with 70\% of human evaluators preferring its outputs, while achieving competitive accuracy compared to state-of-the-art models and maintaining low latency, generating outputs within 1 second. Our code and dataset are open-sourced at https://github.com/AngieYYF/EMMM-explainable-chatbot-detection.",0 "Information asymmetry in financial markets, often amplified by strategically crafted corporate narratives, undermines the effectiveness of conventional textual analysis. We propose a novel multimodal framework for financial risk assessment that integrates textual sentiment with paralinguistic cues derived from executive vocal tract dynamics in earnings calls. Central to this framework is the Physics-Informed Acoustic Model (PIAM), which applies nonlinear acoustics to robustly extract emotional signatures from raw teleconference sound subject to distortions such as signal clipping. 
Both acoustic and textual emotional states are projected onto an interpretable three-dimensional Affective State Label (ASL) space-Tension, Stability, and Arousal. Using a dataset of 1,795 earnings calls (approximately 1,800 hours), we construct features capturing dynamic shifts in executive affect between scripted presentation and spontaneous Q&A exchanges. Our key finding reveals a pronounced divergence in predictive capacity: while multimodal features do not forecast directional stock returns, they explain up to 43.8% of the out-of-sample variance in 30-day realized volatility. Importantly, volatility predictions are strongly driven by emotional dynamics during executive transitions from scripted to spontaneous speech, particularly reduced textual stability and heightened acoustic instability from CFOs, and significant arousal variability from CEOs. An ablation study confirms that our multimodal approach substantially outperforms a financials-only baseline, underscoring the complementary contributions of acoustic and textual modalities. By decoding latent markers of uncertainty from verifiable biometric signals, our methodology provides investors and regulators a powerful tool for enhancing market interpretability and identifying hidden corporate uncertainty.",0 "The increasing digitization of smart grids has improved operational efficiency but also introduced new cybersecurity vulnerabilities, such as False Data Injection Attacks (FDIAs) targeting Automatic Generation Control (AGC) systems. While machine learning (ML) and deep learning (DL) models have shown promise in detecting such attacks, their opaque decision-making limits operator trust and real-world applicability. This paper proposes a hybrid framework that integrates lightweight ML-based attack detection with natural language explanations generated by Large Language Models (LLMs). Classifiers such as LightGBM achieve up to 95.13% attack detection accuracy with only 0.004 s inference latency. Upon detecting a cyberattack, the system invokes LLMs, including GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o mini, to generate human-readable explanation of the event. Evaluated on 100 test samples, GPT-4o mini with 20-shot prompting achieved 93% accuracy in identifying the attack target, a mean absolute error of 0.075 pu in estimating attack magnitude, and 2.19 seconds mean absolute error (MAE) in estimating attack onset. These results demonstrate that the proposed framework effectively balances real-time detection with interpretable, high-fidelity explanations, addressing a critical need for actionable AI in smart grid cybersecurity.",0 "With the improving semantic understanding capability of Large Language Models (LLMs), they exhibit a greater awareness and alignment with human values, but this comes at the cost of transparency. Although promising results are achieved via experimental analysis, an in-depth understanding of the LLM's internal workings is unavoidable to comprehend the reasoning behind the re-ranking, which provides end users with an explanation that enables them to make an informed decision. Moreover, in newly developed systems with limited user engagement and insufficient ranking data, accurately re-ranking content remains a significant challenge. 
While various training methods shape how LLMs are trained and how they generate inferences, our analysis has found that some training methods exhibit better explainability than others, implying that an accurate semantic understanding has not been learned through all training methods; instead, abstract knowledge has been gained to optimize evaluation, which raises questions about the true reliability of LLMs. Therefore, in this work, we analyze how different training methods affect the semantic understanding of the re-ranking task in LLMs and investigate whether these models can generate more informed textual reasoning to overcome the challenges of transparency of LLMs and limited training data. To analyze the LLMs for re-ranking tasks, we utilize a relatively small ranking dataset from the environmental and Earth science domain to re-rank retrieved content. Furthermore, we also analyze the explainable information to see whether the re-ranking can be reasoned about using explainability.",0 "The advent of large (visual) language models (LLM / LVLM) has led to a deluge of automated human-like systems in several domains including social media content generation, search and recommendation, healthcare prognosis, AI assistants for cognitive tasks, etc. Although these systems have been successfully integrated in production, very little focus has been placed on sports, particularly accurate identification and natural language description of the game play. Most existing LLM/LVLMs can explain generic sports activities, but lack sufficient domain-centric sports jargon to create natural (human-like) descriptions. This work highlights the limitations of existing SoTA LLM/LVLMs for generating production-grade sports captions from images in a desired stylized format, and proposes a two-level fine-tuned LVLM pipeline to address them. The proposed pipeline yields an improvement of >8-10% in F1 score and >2-10% in BERT score compared to alternative approaches. In addition, it has a small runtime memory footprint and fast execution time. During Super Bowl LIX, the pipeline proved its practical application for live professional sports journalism, generating highly accurate and stylized captions at the rate of 6 images per 3-5 seconds for over 1000 images during the game play.",0 "Adapting trajectories to dynamic situations and user preferences is crucial for robot operation in unstructured environments with non-expert users. Natural language enables users to express these adjustments in an interactive manner. We introduce OVITA, an interpretable, open-vocabulary, language-driven framework designed for adapting robot trajectories in dynamic and novel situations based on human instructions. OVITA leverages multiple pre-trained Large Language Models (LLMs) to integrate user commands into trajectories generated by motion planners or those learned through demonstrations. OVITA employs code as an adaptation policy generated by an LLM, enabling users to adjust individual waypoints, thus providing flexible control. Another LLM, which acts as a code explainer, removes the need for expert users, enabling intuitive interactions. 
The efficacy and significance of the proposed OVITA framework is demonstrated through extensive simulations and real-world environments with diverse tasks involving spatiotemporal variations on heterogeneous robotic platforms such as a KUKA IIWA robot manipulator, Clearpath Jackal ground robot, and CrazyFlie drone.",0 "Accurately classifying chemical structures is essential for cheminformatics and bioinformatics, including tasks such as identifying bioactive compounds of interest, screening molecules for toxicity to humans, finding non-organic compounds with desirable material properties, or organizing large chemical libraries for drug discovery or environmental monitoring. However, manual classification is labor-intensive and difficult to scale to large chemical databases. Existing automated approaches either rely on manually constructed classification rules, or are deep learning methods that lack explainability. This work presents an approach that uses generative artificial intelligence to automatically write chemical classifier programs for classes in the Chemical Entities of Biological Interest (ChEBI) database. These programs can be used for efficient deterministic run-time classification of SMILES structures, with natural language explanations. The programs themselves constitute an explainable computable ontological model of chemical class nomenclature, which we call the ChEBI Chemical Class Program Ontology (C3PO). We validated our approach against the ChEBI database, and compared our results against deep learning models and a naive SMARTS pattern based classifier. C3PO outperforms the naive classifier, but does not reach the performance of state of the art deep learning methods. However, C3PO has a number of strengths that complement deep learning methods, including explainability and reduced data dependence. C3PO can be used alongside deep learning classifiers to provide an explanation of the classification, where both methods agree. The programs can be used as part of the ontology development process, and iteratively refined by expert human curators.",0 "Background: Existing robust, pervasive device-based systems developed in recent years to detect depression require data collected over a long period and may not be effective in cases where early detection is crucial. Objective: Our main objective was to develop a minimalistic system to identify depression using data retrieved in the fastest possible time. Methods: We developed a fast tool that retrieves the past 7 days' app usage data in 1 second (mean 0.31, SD 1.10 seconds). A total of 100 students from Bangladesh participated in our study, and our tool collected their app usage data. To identify depressed and nondepressed students, we developed a diverse set of ML models. We selected important features using the stable approach, along with 3 main types of feature selection (FS) approaches. Results: Leveraging only the app usage data retrieved in 1 second, our light gradient boosting machine model used the important features selected by the stable FS approach and correctly identified 82.4% (n=42) of depressed students (precision=75%, F1-score=78.5%). Moreover, after comprehensive exploration, we presented a parsimonious stacking model where around 5 features selected by the all-relevant FS approach Boruta were used in each iteration of validation and showed a maximum precision of 77.4% (balanced accuracy=77.9%). A SHAP analysis of our best models presented behavioral markers that were related to depression. 
Conclusions: Due to our system's fast and minimalistic nature, it may make a worthwhile contribution to identifying depression in underdeveloped and developing regions. In addition, our detailed discussion of the implications of our findings can facilitate the development of less resource-intensive systems to better understand students who are depressed.",0 "Automated clinical coding involves mapping unstructured text from Electronic Health Records (EHRs) to standardized code systems such as the International Classification of Diseases (ICD). While recent advances in deep learning have significantly improved the accuracy and efficiency of ICD coding, the lack of explainability in these models remains a major limitation, undermining trust and transparency. Current explorations of explainability largely rely on attention-based techniques and qualitative assessments by physicians, yet lack systematic evaluation using consistent criteria on high-quality rationale datasets, as well as dedicated approaches explicitly trained to generate rationales for further enhancing explanation. In this work, we conduct a comprehensive evaluation of the explainability of the rationales for ICD coding through two key lenses: faithfulness, which evaluates how well explanations reflect the model's actual reasoning, and plausibility, which measures how consistent the explanations are with human expert judgment. To facilitate the evaluation of plausibility, we construct a new rationale-annotated dataset, offering denser annotations with diverse granularity and aligning better with current clinical practice, and conduct evaluation across three types of rationales for ICD coding. Encouraged by the promising plausibility of LLM-generated rationales for ICD coding, we further propose new rationale learning methods to improve the quality of model-generated rationales, where rationales produced by prompting LLMs with/without annotation examples are used as distant supervision signals. We empirically find that LLM-generated rationales align most closely with those of human experts. Moreover, incorporating few-shot human-annotated examples not only further improves rationale generation but also enhances rationale-learning approaches.",0 "Transparency in AI healthcare decision-making is crucial. By incorporating rationales that explain the reason for each predicted label, users can understand the reasoning of Large Language Models (LLMs) and make better decisions. In this work, we introduce a new task, Sentiment Reasoning, for both speech and text modalities, along with our proposed multimodal multitask framework and the world's largest multimodal sentiment analysis dataset. Sentiment Reasoning is an auxiliary task in sentiment analysis where the model both predicts the sentiment label and generates the rationale behind it based on the input transcript. Our study, conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts, shows that Sentiment Reasoning helps improve model transparency by providing rationales for model predictions with quality semantically comparable to humans, while also improving the model's classification performance (+2% increase in both accuracy and macro-F1) via rationale-augmented fine-tuning. Also, we find no significant difference in the semantic quality of generated rationales between human and ASR transcripts. 
All code, data (five languages - Vietnamese, English, Chinese, German, and French) and models are published online: https://github.com/leduckhai/Sentiment-Reasoning",0 "This paper introduces Multi-Output LOcal Narrative Explanation (MOLONE), a novel comparative explanation method designed to enhance preference selection in human-in-the-loop Preference Bayesian optimization (PBO). The preference elicitation in PBO is a non-trivial task because it involves navigating implicit trade-offs between vector-valued outcomes, subjective priorities of decision-makers, and decision-makers' uncertainty in preference selection. Existing explainable AI (XAI) methods for BO primarily focus on input feature importance, neglecting the crucial role of outputs (objectives) in human preference elicitation. MOLONE addresses this gap by providing explanations that highlight both input and output importance, enabling decision-makers to understand the trade-offs between competing objectives and make more informed preference selections. MOLONE focuses on local explanations, comparing the importance of input features and outcomes across candidate samples within a local neighborhood of the search space, thus capturing nuanced differences relevant to preference-based decision-making. We evaluate MOLONE within a PBO framework using benchmark multi-objective optimization functions, demonstrating its effectiveness in improving convergence compared to noisy preference selections. Furthermore, a user study confirms that MOLONE significantly accelerates convergence in human-in-the-loop scenarios by facilitating more efficient identification of preferred options.",1 "Manual parameter tuning of cyber-physical systems is a common practice, but it is labor-intensive. Bayesian Optimization (BO) offers an automated alternative, yet its black-box nature reduces trust and limits human-BO collaborative system tuning. Experts struggle to interpret BO recommendations due to the lack of explanations. This paper addresses the post-hoc BO explainability problem for cyber-physical systems. We introduce TNTRules (Tune-No-Tune Rules), a novel algorithm that provides both global and local explanations for BO recommendations. TNTRules generates actionable rules and visual graphs, identifying optimal solution bounds and ranges, as well as potential alternative solutions. Unlike existing explainable AI (XAI) methods, TNTRules is tailored specifically for BO, by encoding uncertainty via a variance pruning technique and hierarchical agglomerative clustering. A multi-objective optimization approach allows maximizing explanation quality. We evaluate TNTRules using established XAI metrics (Correctness, Completeness, and Compactness) and compare it against adapted baseline methods. The results demonstrate that TNTRules generates high-fidelity, compact, and complete explanations, significantly outperforming three baselines on 5 multi-objective testing functions and 2 hyperparameter tuning problems.",0 "Cooperation on social networks is crucial for understanding human survival and development. Although network structure has been found to significantly influence cooperation, human experiments have observed different cooperation phenomena under similar conditions. While evidence suggests that these differences arise from human exploration, our understanding of its impact mechanisms and characteristics remains limited. 
Here, we seek to formalize human exploration as an individual learning process involving trial and reflection, and integrate social learning to examine how their interdependence shapes cooperation. We find that individual learning can alter neighbor imitation tendencies, and the resulting shifts in the local cooperative environment feed back into the experiential cognition that guides individual learning. This coupled dynamic makes the ability of social networks to promote cooperation largely dependent on whether individuals focus on long-term payoffs, and exhibits a series of characteristics that can explain previously unexplained and seemingly contradictory cooperation phenomena. Surprisingly, individual learning can promote cooperation more than social learning when its probability is negatively correlated with payoffs, a mechanism rooted in the psychological tendency to avoid trial-and-error when individuals are satisfied with their current payoffs. These results explain the contradictory cooperation phenomenon by accounting for decision preferences and cognitive processes underlying exploration, bridging the gap between theoretical research and reality.",0 "Krylov complexity, a quantum complexity measure which uniquely characterizes the spread of a quantum state or an operator, has recently been studied in the context of quantum chaos. However, the definitiveness of this measure as a chaos quantifier is in question in light of its strong dependence on the initial condition. This article clarifies the connection between the Krylov complexity dynamics and the initial operator or state. We find that the Krylov complexity depends monotonically on the inverse participation ratio (IPR) of the initial condition in the eigenbasis of the Hamiltonian. We explain the reversal of the complexity saturation levels observed in \href{https://doi.org/10.1103/PhysRevE.107.024217}{Phys. Rev. E 107, 024217, 2023} using the initial spread of the operator in the Hamiltonian eigenbasis. IPR dependence is present even in the fully chaotic regime, where popular quantifiers of chaos, such as out-of-time-ordered correlators and entanglement generation, show similar behavior regardless of the initial condition. Krylov complexity averaged over many initial conditions still does not characterize chaos.",0 "My findings show, for the first time, what causes loss of awareness, anesthesia, memory replay, opioid-induced respiratory depression (OIRD), and slow wave sleep. Opiates are fast pain relievers and anesthetics that can cause respiratory arrest. I found how mu-opioids and other medial habenula activators slow down respiration during SWS and anesthesia. Using the DTI method, I observed that the human hippocampus is connected to the MHb via the posterior septum, while the amygdala is connected via the anteromedial BNST. The MHb projected to the pineal gland and the contralateral MHb (Vadovi\v{c}ov\'a, 2014). The MHb has dense mu-opioid receptors (Gardon and Faget, 2014) and strong projections to the IPN. Herkenham (1981) found increased glucose intake during anesthesia in the MHb and IPN. The IPN projects to the serotonergic MRN/DRN, and to the pain/interoception/arousal-linked PAG. The question is: What is the MHb-IPN circuit doing? This extended circuit model explains the role of the dentate gyrus >posterior septum >MHb >IPN >MRN >hippocampus + BF + claustrum >cortical slow-wave activity (SWA) pathway in memory replay, loss of awareness, anesthesia and SWS. 
It proposes new neural mechanisms for the effects of the anesthetics ketamine, nitrous oxide, and phencyclidine: activation of the IPN >MRN >claustrum >cortical SWA circuit by the 5-HT2a receptors in the IPN and claustrum. This brain model shows why ketamine and psychedelics are anxiolytic and antidepressant. It shows how, by activating the 5-HT2a receptors in the vACC/infralimbic cortex, they increase safety and well-being signals, socializing, and cognitive flexibility, and attenuate fear, worry, anger, impulsivity, self-defence and wanting. This model suggests that mu-opioids, acetylcholine, nicotine, cannabinoids, adenosine, GLP-1RA, neuropeptide Y, and substance P activate the MHb-IPN-MRN circuit, which promotes rest, recovery, repair, serotonin and BDNF protein production, spine/synapse growth, and an anti-inflammatory state.",0 "Many domains now employ AI-based decision-making aids, and although the potential for AI systems to assist with decision making is much discussed, human-AI collaboration often underperforms due to factors such as (mis)trust in the AI system and beliefs about AI being incapable of completing subjective tasks. One potential tool for influencing human decision making is performance pressure, which has not been studied much in the context of human-AI decision making. In this work, we examine how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Using an inherently low-stakes task (spam review classification), we demonstrate effective and simple methods to apply pressure and influence humans' AI advice-taking behavior by manipulating financial incentives and imposing time limits. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior. We conclude by discussing the implications of these interactions and strategies to use pressure effectively, and we encourage future research to incorporate pressure analysis.",0 "AI-driven tools for healthcare are widely acknowledged as potentially beneficial to health practitioners and patients, e.g., the QCancer regression tool for cancer risk prediction. However, for these tools to be trusted, they need to be supplemented with explanations. We examine how explanations' content and format affect user comprehension and trust when explaining QCancer's predictions. Regarding content, we deploy the SHAP and Occlusion-1 explanation methods. Regarding format, we present SHAP explanations conventionally as charts (SC) and Occlusion-1 explanations as charts (OC) as well as text (OT), to which their simpler nature lends itself. We conduct experiments with two sets of stakeholders: the general public (representing patients) and medical students (representing healthcare practitioners). Our experiments showed higher subjective comprehension and trust for Occlusion-1 over SHAP explanations based on content. However, when controlling for format, only OT outperformed SC, suggesting this trend is driven by preferences for text. Other findings corroborated that explanation format, rather than content, is often the critical factor.",1 "Evaluating automatically generated radiology reports remains a fundamental challenge due to the lack of clinically grounded, interpretable, and fine-grained metrics. Existing methods either produce coarse overall scores or rely on opaque black-box models, limiting their usefulness in real-world clinical workflows. 
We introduce RadReason, a novel evaluation framework for radiology reports that not only outputs fine-grained sub-scores across six clinically defined error types, but also produces human-readable justifications that explain the rationale behind each score. Our method builds on Group Relative Policy Optimization and incorporates two key innovations: (1) Sub-score Dynamic Weighting, which adaptively prioritizes clinically challenging error types based on live F1 statistics; and (2) Majority-Guided Advantage Scaling, which adjusts policy gradient updates based on prompt difficulty derived from sub-score agreement. Together, these components enable more stable optimization and better alignment with expert clinical judgment. Experiments on the ReXVal benchmark show that RadReason surpasses all prior offline metrics and achieves parity with GPT-4-based evaluations, while remaining explainable, cost-efficient, and suitable for clinical deployment. Code will be released upon publication.",0 "During virtual navigation, users exhibit varied interaction and navigation behaviors influenced by several factors. Existing theories and models have been developed to explain and predict these diverse patterns. While users often experience uncomfortable sensations, such as cybersickness, during virtual reality (VR) use, they do not always make optimal decisions to mitigate these effects. Although methods like reinforcement learning have been used to model decision-making processes, they typically rely on random selection to simulate actions, failing to capture the complexities of real navigation behavior. In this study, we propose curiosity as a key factor driving irrational decision-making, suggesting that users continuously balance exploration and cybersickness according to the free energy principle during virtual navigation. Our findings show that VR users generally adopt conservative strategies when navigating, with most participants displaying negative curiosity across trials. However, curiosity levels tend to rise when the virtual environment changes, illustrating the dynamic interplay between exploration and discomfort. This study provides a quantitative approach to decoding curiosity-driven behavior during virtual navigation, offering insights into how users balance exploration and the avoidance of cybersickness. Future research will further refine this model by incorporating additional psychological and environmental factors to improve the accuracy of navigation pattern predictions.",0 "Diabetic Retinopathy (DR) is a major cause of global blindness, necessitating early and accurate diagnosis. While deep learning models have shown promise in DR detection, their black-box nature often hinders clinical adoption due to a lack of transparency and interpretability. To address this, we propose XDR-LVLM (eXplainable Diabetic Retinopathy Diagnosis with LVLM), a novel framework that leverages Vision-Language Large Models (LVLMs) for high-precision DR diagnosis coupled with natural language-based explanations. XDR-LVLM integrates a specialized Medical Vision Encoder, an LVLM Core, and employs Multi-task Prompt Engineering and Multi-stage Fine-tuning to deeply understand pathological features within fundus images and generate comprehensive diagnostic reports. These reports explicitly include DR severity grading, identification of key pathological concepts (e.g., hemorrhages, exudates, microaneurysms), and detailed explanations linking observed features to the diagnosis. 
Extensive experiments on the Diabetic Retinopathy (DDR) dataset demonstrate that XDR-LVLM achieves state-of-the-art performance, with a Balanced Accuracy of 84.55% and an F1 Score of 79.92% for disease diagnosis, and superior results for concept detection (77.95% BACC, 66.88% F1). Furthermore, human evaluations confirm the high fluency, accuracy, and clinical utility of the generated explanations, showcasing XDR-LVLM's ability to bridge the gap between automated diagnosis and clinical needs by providing robust and interpretable insights.",0 "\hspace{2mm} Diffusion-weighted magnetic resonance imaging (dMRI) of the brain offers unique capabilities including noninvasive probing of tissue microstructure and structural connectivity. It is widely used for clinical assessment of disease and injury, and for neuroscience research. Analyzing the dMRI data to extract useful information for medical and scientific purposes can be challenging. The dMRI measurements may suffer from strong noise and artifacts, and may exhibit high inter-session and inter-scanner variability in the data, as well as inter-subject heterogeneity in brain structure. Moreover, the relationship between measurements and the phenomena of interest can be highly complex. Recent years have witnessed increasing use of machine learning methods for dMRI analysis. This manuscript aims to assess these efforts, with a focus on methods that have addressed data preprocessing and harmonization, microstructure mapping, tractography, and white matter tract analysis. We study the main findings, strengths, and weaknesses of the existing methods and suggest topics for future research. We find that machine learning may be exceptionally suited to tackle some of the difficult tasks in dMRI analysis. However, for this to happen, several shortcomings of existing methods and critical unresolved issues need to be addressed. There is a pressing need to improve evaluation practices, to increase the availability of rich training datasets and validation benchmarks, as well as model generalizability, reliability, and explainability concerns.",0 "Explainable Artificial Intelligence (XAI) aims to uncover the inner reasoning of machine learning models. In IoT systems, XAI improves the transparency of models processing sensor data from multiple heterogeneous devices, ensuring end-users understand and trust their outputs. Among the many applications, XAI has also been applied to sensor-based Activities of Daily Living (ADLs) recognition in smart homes. Existing approaches highlight which sensor events are most important for each predicted activity, using simple rules to convert these events into natural language explanations for non-expert users. However, these methods produce rigid explanations lacking natural language flexibility and are not scalable. With the recent rise of Large Language Models (LLMs), it is worth exploring whether they can enhance explanation generation, considering their proven knowledge of human activities. This paper investigates potential approaches to combine XAI and LLMs for sensor-based ADL recognition. We evaluate if LLMs can be used: a) as explainable zero-shot ADL recognition models, avoiding costly labeled data collection, and b) to automate the generation of explanations for existing data-driven XAI approaches when training data is available and the goal is higher recognition rates. 
Our critical evaluation provides insights into the benefits and challenges of using LLMs for explainable ADL recognition.",0 "Graph neural networks have demonstrated remarkable success in predicting molecular properties by leveraging the rich structural information encoded in molecular graphs. However, their black-box nature reduces interpretability, which limits trust in their predictions for important applications such as drug discovery and materials design. Furthermore, existing explanation techniques often fail to reliably quantify the contribution of individual atoms or substructures due to the entangled message-passing dynamics. We introduce SEAL (Substructure Explanation via Attribution Learning), a new interpretable graph neural network that attributes model predictions to meaningful molecular subgraphs. SEAL decomposes input graphs into chemically relevant fragments and estimates their causal influence on the output. The strong alignment between fragment contributions and model predictions is achieved by explicitly reducing inter-fragment message passing in our proposed model architecture. Extensive evaluations on synthetic benchmarks and real-world molecular datasets demonstrate that SEAL outperforms other explainability methods in both quantitative attribution metrics and human-aligned interpretability. A user study further confirms that SEAL provides more intuitive and trustworthy explanations to domain experts. By bridging the gap between predictive performance and interpretability, SEAL offers a promising direction for more transparent and actionable molecular modeling.",0 "Recent advancements in visual generative models have enabled high-quality image and video generation, opening diverse applications. However, evaluating these models often demands sampling hundreds or thousands of images or videos, making the process computationally expensive, especially for diffusion-based models with inherently slow sampling. Moreover, existing evaluation methods rely on rigid pipelines that overlook specific user needs and provide numerical results without clear explanations. In contrast, humans can quickly form impressions of a model's capabilities by observing only a few samples. To mimic this, we propose the Evaluation Agent framework, which employs human-like strategies for efficient, dynamic, multi-round evaluations using only a few samples per round, while offering detailed, user-tailored analyses. It offers four key advantages: 1) efficiency, 2) promptable evaluation tailored to diverse user needs, 3) explainability beyond single numerical scores, and 4) scalability across various models and tools. Experiments show that Evaluation Agent reduces evaluation time to 10% of traditional methods while delivering comparable results. The Evaluation Agent framework is fully open-sourced to advance research in visual generative models and their efficient evaluation.",0 "A growing body of empirical work suggests that the widespread adoption of generative AI produces a significant homogenizing effect on information, creativity, and cultural production. I first develop a novel theoretical framework to explain this phenomenon. I argue that a dynamic of AI-derivative epistemology, in which individuals increasingly defer to AI outputs, allows a centralized AI Prism to function, a technical mechanism whose architecture is designed to reduce variance and converge on the statistical mean. This provides a causal explanation for the generative monocultures observed in recent studies. 
However, I contend this represents only the first stage of a more complex and dialectical process. This paper's central and paradoxical thesis is that the very homogenization that flattens knowledge within specialized domains simultaneously renders that knowledge into consistent modules that can be recombined across them, a process foundational to innovation and creativity. However, this recombinant potential is not automatic, but rather conditional. This paper argues that these opposing forces, homogenizing defaults versus recombinant possibilities, are governed by the nature of human engagement with the technology. The ultimate effect of generative AI is conditional on whether individuals act as passive consumers deferring to the AI's statistical outputs, or as active curators who critically interrogate, re-contextualize, and recombine them. The paper concludes by outlining the cognitive and institutional scaffolds required to resolve this tension, arguing they are the decisive variable that determines whether generative AI becomes an instrument of innovation or homogenization.",0 "Pre-service teachers play a unique dual role as they straddle the roles of students and future teachers. This dual role requires them to adopt both the learner's and the instructor's perspectives while engaging with pedagogical and content knowledge. The current study investigates how pre-service elementary teachers taking a physical science course prompt AI to generate representations that effectively communicate conceptual ideas to two distinct audiences. The context involves 2400 participants interacting with AI to generate appropriate representations that explain the concepts of wave velocity to their elementary students (while casting themselves as teachers) and the Ideal Gas Law to their English teachers (while casting themselves as students). Emergent coding of the AI prompts highlights that, when acting as teachers, participants were more explicit in specifying the target audience, predetermining the type of representation, and producing a broader variety of representations compared to when they acted as students. Implications of the observed 'exploratory' and 'prescriptive' prompting trends across the two roles on pre-service teachers' education and their professional development are discussed.",2 "Continuous Descent Operations (CDO) involve smooth, idle-thrust descents that avoid level-offs, reducing fuel burn, emissions, and noise while improving efficiency and passenger comfort. Despite its operational and environmental benefits, limited research has systematically examined the factors influencing CDO performance. Moreover, many existing methods in related areas, such as trajectory optimization, lack the transparency required in aviation, where explainability is critical for safety and stakeholder trust. This study addresses these gaps by proposing a Fuzzy-Enhanced Explainable AI (FEXAI) framework that integrates fuzzy logic with machine learning and SHapley Additive exPlanations (SHAP) analysis. For this purpose, a comprehensive dataset of 29 features, including 11 operational and 18 weather-related features, was collected from 1,094 flights using Automatic Dependent Surveillance-Broadcast (ADS-B) data. Machine learning models and SHAP were then applied to classify flights' CDO adherence levels and rank features by importance. 
The three most influential features, as identified by SHAP scores, were then used to construct a fuzzy rule-based classifier, enabling the extraction of interpretable fuzzy rules. All models achieved classification accuracies above 90%, with FEXAI providing meaningful, human-readable rules for operational users. Results indicated that the average descent rate within the arrival route, the number of descent segments, and the average change in directional heading during descent were the strongest predictors of CDO performance. The FEXAI method proposed in this study presents a novel pathway for operational decision support and could be integrated into aviation tools to enable real-time advisories that maintain CDO adherence under varying operational conditions.",0 "Over time, software systems have reached a level of complexity that makes it difficult for their developers and users to explain particular decisions made by them. In this paper, we focus on the explainability of component-based systems for Question Answering (QA). These components often conduct processes driven by AI methods, in which behavior and decisions cannot be clearly explained or justified, such that, even for QA experts, interpreting the executed process and its results is hard. To address this challenge, we present an approach that considers the components' input and output data flows as a source for representing the behavior and providing explanations for the components, enabling users to comprehend what happened. In the QA framework used here, the data flows of the components are represented as SPARQL queries (inputs) and RDF triples (outputs). Hence, we are also providing valuable insights on verbalization regarding these data types. In our experiments, the approach generates explanations while following template-based settings (baseline) or via the use of Large Language Models (LLMs) with different configurations (automatic generation). Our evaluation shows that the explanations generated via LLMs achieve high quality and mostly outperform template-based approaches according to the users' ratings. Therefore, it enables us to automatically explain the behavior and decisions of QA components to humans while using RDF and SPARQL as a context for explanations.",0 "Online coordination of multi-robot systems in open and unknown environments faces significant challenges, particularly when semantic features detected during operation dynamically trigger new tasks. Recent large language model (LLM)-based approaches for scene reasoning and planning primarily focus on one-shot, end-to-end solutions in known environments, lacking both dynamic adaptation capabilities for online operation and explainability in the processes of planning. To address these issues, a novel framework (DEXTER-LLM) for dynamic task planning in unknown environments integrates four modules: (i) a mission comprehension module that resolves partial ordering of tasks specified by natural language or linear temporal logic (LTL) formulas; (ii) an online subtask generator based on LLMs that improves the accuracy and explainability of task decomposition via multi-stage reasoning; (iii) an optimal subtask assigner and scheduler that allocates subtasks to robots via search-based optimization; and (iv) a dynamic adaptation and human-in-the-loop verification module that implements multi-rate, event-based updates for both subtasks and their assignments, to cope with new features and tasks detected online. 
The framework effectively combines LLMs' open-world reasoning capabilities with the optimality of model-based assignment methods, simultaneously addressing the critical issues of online adaptability and explainability. Experimental evaluations demonstrate exceptional performance, with 100% success rates across all scenarios, 160 tasks and 480 subtasks completed on average (3 times the baselines), 62% fewer queries to LLMs during adaptation, and superior plan quality (2 times higher) for compound tasks. Project page at https://tcxm.github.io/DEXTER-LLM/",0 "We propose a neurosymbolic approach to the explanation of complex sequences of decisions that combines the strengths of decision procedures and Large Language Models (LLMs). We demonstrate this approach by producing explanations for the solutions of Hitori puzzles. The rules of Hitori include local constraints that are effectively explained by short resolution proofs. However, they also include a connectivity constraint that is more suitable for visual explanations. Hence, Hitori provides an excellent testing ground for a flexible combination of SAT solvers and LLMs. We have implemented a tool that assists humans in solving Hitori puzzles, and we present experimental evidence of its effectiveness.",0 "Objective: This study aims to uncover the opaque decision-making process of an artificial intelligence (AI) agent for automatic treatment planning. Approach: We examined a previously developed AI agent based on the Actor-Critic with Experience Replay (ACER) network, which automatically tunes treatment planning parameters (TPPs) for inverse planning in prostate cancer intensity modulated radiotherapy. We selected multiple checkpoint ACER agents from different stages of training and applied an explainable AI (EXAI) method to analyze the attribution from dose-volume histogram (DVH) inputs to TPP-tuning decisions. We then assessed each agent's planning efficacy and efficiency and evaluated their policy and final TPP tuning spaces. Combining these analyses, we systematically examined how ACER agents generated high-quality treatment plans in response to different DVH inputs. Results: Attribution analysis revealed that ACER agents progressively learned to identify dose-violation regions from DVH inputs and promote appropriate TPP-tuning actions to mitigate them. Organ-wise similarities between DVH attributions and dose-violation reductions ranged from 0.25 to 0.5 across tested agents. Agents with stronger attribution-violation similarity required fewer tuning steps (~12-13 vs. 22), exhibited a more concentrated TPP-tuning space with lower entropy (~0.3 vs. 0.6), converged on adjusting only a few TPPs, and showed smaller discrepancies between practical and theoretical tuning steps. Taken together, these findings indicate that high-performing ACER agents can effectively identify dose violations from DVH inputs and employ a global tuning strategy to achieve high-quality treatment planning, much like skilled human planners. Significance: Better interpretability of the agent's decision-making process may enhance clinician trust and inspire new strategies for automatic treatment planning.",0 "The interaction between humans and AI in safety-critical systems presents a unique set of challenges that remain only partially addressed by existing frameworks. These challenges stem from the complex interplay of requirements for transparency, trust, and explainability, coupled with the necessity for robust and safe decision-making. 
A framework that holistically integrates human and AI capabilities while addressing these concerns is urgently needed, bridging the critical gaps in designing, deploying, and maintaining safe and effective systems. This paper proposes a holistic conceptual framework for critical infrastructures by adopting an interdisciplinary approach. It integrates traditionally distinct fields such as mathematics, decision theory, computer science, philosophy, psychology, and cognitive engineering and draws on specialized engineering domains, particularly energy, mobility, and aeronautics. Its flexibility is further demonstrated through a case study on power grid management.",0 "Eye-movement related artifacts including blinks and saccades are significantly larger in amplitude than cortical activity as recorded by scalp electroencephalography (EEG), but are typically discarded in EEG studies focusing on cognitive mechanisms as explained by cortical source activity. Accumulating evidence, however, indicates that spontaneous eye blinks are not necessarily random, and can be modulated by attention and cognition beyond just physiological necessities. In this exploratory analysis we reanalyze a public EEG dataset of musicians listening to or imagining music (Bach chorales) while simultaneously reading from a sheet of music. We ask whether blink timing in reading music, accompanied by listening or imagery, is sufficient to uniquely identify the music being read from a given score. Intra-subject blink counts and timing are compared across trials using a spike train distance metric (Victor and Purpura, 1997). One-trial-left-out cross-validation is used to identify the music being read with above chance level accuracy (best subject: 56\%, chance: 25\%), where accuracy is seen to vary with subject, condition, and a tunable cost factor for time shifts. Future studies may consider incorporating eye blink contributions to brain decoding, especially in wearables where eye blinks could be easier to record than EEG given their higher amplitudes.",0 "This paper focuses on a key challenge in visual emotion understanding: given an art image, the model pinpoints pixel regions that trigger a specific human emotion, and generates linguistic explanations for it. Despite advances in general segmentation, pixel-level emotion understanding still faces a dual challenge: first, the subjectivity of emotion limits the ability of general segmentation models like SAM to adapt to emotion-oriented segmentation tasks; and second, the abstract nature of art expression makes it hard for captioning models to balance pixel-level semantics and emotion reasoning. To solve the above problems, this paper proposes the Emotion stimuli Segmentation and Explanation Model (EmoSEM) to endow the segmentation framework with emotion comprehension capability. First, to enable the model to perform segmentation well under the guidance of emotional intent, we introduce an emotional prompt with a learnable mask token as the conditional input for segmentation decoding. Then, we design an emotion projector to establish the association between emotion and visual features. Next, and more importantly, to address emotion-visual stimuli alignment, we develop a lightweight prefix adapter, a module that fuses the learned emotional mask with the corresponding emotion into a unified representation compatible with the language model. Finally, we input the joint visual, mask, and emotional tokens into the language model and output the emotional explanations. 
It ensures that the generated interpretations remain semantically and emotionally coherent with the visual stimuli. Our method realizes end-to-end modeling from low-level pixel features to high-level emotion interpretation, delivering the first interpretable fine-grained framework for visual emotion analysis. Extensive experiments validate the effectiveness of our model. Code will be made publicly available.",0 "Understanding what knowledge is implicitly encoded in deep learning models is essential for improving the interpretability of AI systems. This paper examines common methods to explain the knowledge encoded in word embeddings, which are core elements of large language models (LLMs). These methods typically involve mapping embeddings onto collections of human-interpretable semantic features, known as feature norms. Prior work assumes that accurately predicting these semantic features from the word embeddings implies that the embeddings contain the corresponding knowledge. We challenge this assumption by demonstrating that prediction accuracy alone does not reliably indicate genuine feature-based interpretability. We show that these methods can successfully predict even random information, concluding that the results are predominantly determined by an algorithmic upper bound rather than meaningful semantic representation in the word embeddings. Consequently, comparisons between datasets based solely on prediction performance do not reliably indicate which dataset is better captured by the word embeddings. Our analysis illustrates that such mappings primarily reflect geometric similarity within vector spaces rather than indicating the genuine emergence of semantic properties.",0 "The global spread of misinformation and concerns about content trustworthiness have driven the development of automated fact-checking systems. Since false information often exploits social media dynamics such as ""likes"" and user networks to amplify its reach, effective solutions must go beyond content analysis to incorporate these factors. Moreover, simply labelling content as false can be ineffective or even reinforce biases such as automation and confirmation bias. This paper proposes an explainable framework that combines content, social media, and graph-based features to enhance fact-checking. It integrates a misinformation classifier with explainability techniques to deliver complete and interpretable insights supporting classification decisions. Experiments demonstrate that multimodal information improves performance over single modalities, with evaluations conducted on datasets in English, Spanish, and Portuguese. Additionally, the framework's explanations were assessed for interpretability, trustworthiness, and robustness with a novel protocol, showing that it effectively generates human-understandable justifications for its predictions.",0 "The opaqueness of many complex machine learning algorithms is often mentioned as one of the main obstacles to the ethical development of artificial intelligence (AI). But what does it mean for an algorithm to be opaque? Highly complex algorithms such as artificial neural networks process enormous volumes of data in parallel along multiple hidden layers of interconnected nodes, rendering their inner workings epistemically inaccessible to any human being, including their designers and developers; they are ""black boxes"" for all their stakeholders. But opaqueness is not always the inevitable result of technical complexity. 
Sometimes, the way an algorithm works is intentionally hidden from view for proprietary reasons, especially in commercial automated decision systems, creating an entirely different type of opaqueness. In the first part of the chapter, we will examine these two ways of understanding opacity and the ethical implications that stem from each of them. In the second part, we explore the different explanatory methods that have been developed in computer science to overcome an AI system's technical opaqueness. As the analysis shows, explainable AI (XAI) still faces numerous challenges.",0 "This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. We conclude that while there are very significant technical hurdles to real human enhancement through AI, and significant ethical problems, there are also significant benefits that may realistically be achieved in ways that are consonant with a rights-based ethics as well. We also highlight the specific concerns that apply particularly to applications of AI for ""sheer"" IA (more realistic in the near term), and to enhancement applications, respectively.",0 "Hierarchical multi-agent systems (HMAS) organize collections of agents into layered structures that help manage complexity and scale. These hierarchies can simplify coordination, but they also can introduce trade-offs that are not always obvious. This paper proposes a multi-dimensional taxonomy for HMAS along five axes: control hierarchy, information flow, role and task delegation, temporal layering, and communication structure. The intent is not to prescribe a single ""best"" design but to provide a lens for comparing different approaches. Rather than treating these dimensions in isolation, the taxonomy is connected to concrete coordination mechanisms - from the long-standing contract-net protocol for task allocation to more recent work in hierarchical reinforcement learning. Industrial contexts illustrate the framework, including power grids and oilfield operations, where agents at production, maintenance, and supply levels coordinate to diagnose well issues or balance energy demand. These cases suggest that hierarchical structures may achieve global efficiency while preserving local autonomy, though the balance is delicate. The paper closes by identifying open challenges: making hierarchical decisions explainable to human operators, scaling to very large agent populations, and assessing whether learning-based agents such as large language models can be safely integrated into layered frameworks. 
This paper presents what appears to be the first taxonomy that unifies structural, temporal, and communication dimensions of hierarchical MAS into a single design framework, bridging classical coordination mechanisms with modern reinforcement learning and large language model agents.",0 "Recent advances in Multimodal Large Language Models (MLLMs) have introduced a paradigm shift for Image Quality Assessment (IQA) from unexplainable image quality scoring to explainable IQA, demonstrating practical applications like quality control and optimization guidance. However, current explainable IQA methods not only inadequately use the same distortion criteria to evaluate both User-Generated Content (UGC) and AI-Generated Content (AIGC) images, but also lack detailed quality analysis for monitoring image quality and guiding image restoration. In this study, we establish the first large-scale Visual Distortion Assessment Instruction Tuning Dataset for UGC images, termed ViDA-UGC, which comprises 11K images with fine-grained quality grounding, detailed quality perception, and reasoning quality description data. This dataset is constructed through a distortion-oriented pipeline, which involves human subject annotation and a Chain-of-Thought (CoT) assessment framework. This framework guides GPT-4o to generate quality descriptions by identifying and analyzing UGC distortions, which helps capture rich low-level visual features that inherently correlate with distortion patterns. Moreover, we carefully select 476 images with corresponding 6,149 question-answer pairs from ViDA-UGC and invite a professional team to ensure the accuracy and quality of GPT-generated information. The selected and revised data further contribute to the first UGC distortion assessment benchmark, termed ViDA-UGC-Bench. Experimental results demonstrate the effectiveness of the ViDA-UGC and CoT framework for consistently enhancing various image quality analysis abilities across multiple base MLLMs on ViDA-UGC-Bench and Q-Bench, even surpassing GPT-4o.",1 "Accurate assessment of neuromuscular reflexes, such as the H-reflex, plays a critical role in sports science, rehabilitation, and clinical neurology. Traditional analysis of H-reflex EMG waveforms is subject to variability and interpretation bias among clinicians and researchers, limiting reliability and standardization. To address these challenges, we propose a Fine-Tuned Vision-Language Model (VLM) Consortium and a reasoning Large-Language Model (LLM)-enabled Decision Support System for automated H-reflex waveform interpretation and diagnosis. Our approach leverages multiple VLMs, each fine-tuned on curated datasets of H-reflex EMG waveform images annotated with clinical observations, recovery timelines, and athlete metadata. These models are capable of extracting key electrophysiological features and predicting neuromuscular states, including fatigue, injury, and recovery, directly from EMG images and contextual metadata. Diagnostic outputs from the VLM consortium are aggregated using a consensus-based method and refined by a specialized reasoning LLM, which ensures robust, transparent, and explainable decision support for clinicians and sports scientists. The end-to-end platform orchestrates seamless communication between the VLM ensemble and the reasoning LLM, integrating prompt engineering strategies and automated reasoning workflows using LLM Agents. 
Experimental results demonstrate that this hybrid system delivers highly accurate, consistent, and interpretable H-reflex assessments, significantly advancing the automation and standardization of neuromuscular diagnostics. To our knowledge, this work represents the first integration of a fine-tuned VLM consortium with a reasoning LLM for image-based H-reflex analysis, laying the foundation for next-generation AI-assisted neuromuscular assessment and athlete monitoring platforms.",0 "Chest X-ray imaging is crucial for diagnosing pulmonary and cardiac diseases, yet its interpretation demands extensive clinical experience and suffers from inter-observer variability. While deep learning models offer high diagnostic accuracy, their black-box nature hinders clinical adoption in high-stakes medical settings. To address this, we propose X-Ray-CoT (Chest X-Ray Chain-of-Thought), a novel framework leveraging Vision-Language Large Models (LVLMs) for intelligent chest X-ray diagnosis and interpretable report generation. X-Ray-CoT simulates human radiologists' ""chain-of-thought"" by first extracting multi-modal features and visual concepts, then employing an LLM-based component with a structured Chain-of-Thought prompting strategy to reason and produce detailed natural language diagnostic reports. Evaluated on the CORDA dataset, X-Ray-CoT achieves competitive quantitative performance, with a Balanced Accuracy of 80.52% and F1 score of 78.65% for disease diagnosis, slightly surpassing existing black-box models. Crucially, it uniquely generates high-quality, explainable reports, as validated by preliminary human evaluations. Our ablation studies confirm the integral role of each proposed component, highlighting the necessity of multi-modal fusion and CoT reasoning for robust and transparent medical AI. This work represents a significant step towards trustworthy and clinically actionable AI systems in medical imaging.",0 "We introduce fCrit, a dialogue-based AI system designed to critique furniture design with a focus on explainability. Grounded in reflective learning and formal analysis, fCrit employs a multi-agent architecture informed by a structured design knowledge base. We argue that explainability in the arts should not only make AI reasoning transparent but also adapt to the ways users think and talk about their designs. We demonstrate how fCrit supports this process by tailoring explanations to users' design language and cognitive framing. This work contributes to Human-Centered Explainable AI (HCXAI) in creative practice, advancing domain-specific methods for situated, dialogic, and visually grounded AI support.",0 "Large Language Models (LLMs) such as GPT, LLaMA, and Claude achieve remarkable performance in text generation but remain opaque in their decision-making processes, limiting trust and accountability in high-stakes applications. We present gSMILE (generative SMILE), a model-agnostic, perturbation-based framework for token-level interpretability in LLMs. Extending the SMILE methodology, gSMILE uses controlled prompt perturbations, Wasserstein distance metrics, and weighted linear surrogates to identify input tokens with the most significant impact on the output. This process enables the generation of intuitive heatmaps that visually highlight influential tokens and reasoning paths. 
We evaluate gSMILE across leading LLMs (OpenAI's gpt-3.5-turbo-instruct, Meta's LLaMA 3.1 Instruct Turbo, and Anthropic's Claude 2.1) using attribution fidelity, attribution consistency, attribution stability, attribution faithfulness, and attribution accuracy as metrics. Results show that gSMILE delivers reliable human-aligned attributions, with Claude 2.1 excelling in attention fidelity and GPT-3.5 achieving the highest output consistency. These findings demonstrate gSMILE's ability to balance model performance and interpretability, enabling more transparent and trustworthy AI systems.",0 "Large Language Models (LLMs) excel in translation among other things, demonstrating competitive performance for many language pairs in zero- and few-shot settings. But unlike dedicated neural machine translation models, LLMs are not trained on any translation-related objective. What explains their remarkable translation abilities? Are these abilities grounded in ""incidental bilingualism"" (Briakou et al. 2023) in training data? Does instruction tuning contribute to it? Are LLMs capable of aligning and leveraging semantically identical or similar monolingual contents from different corners of the internet that are unlikely to fit in a single context window? I offer some reflections on this topic, informed by recent studies and growing user experience. My working hypothesis is that LLMs' translation abilities originate in two different types of pre-training data that may be internalized by the models in different ways. I discuss the prospects for testing the ""duality"" hypothesis empirically and its implications for reconceptualizing translation, human and machine, in the age of deep learning.",0 "Major Depressive Disorder is one of the leading causes of disability worldwide, yet its diagnosis still depends largely on subjective clinical assessments. Integrating Artificial Intelligence (AI) holds promise for developing objective, scalable, and timely diagnostic tools. In this paper, we present a comprehensive survey of state-of-the-art AI methods for depression detection and diagnosis, based on a systematic review of 55 key studies. We introduce a novel hierarchical taxonomy that structures the field by primary clinical task (diagnosis vs. prediction), data modality (text, speech, neuroimaging, multimodal), and computational model class (e.g., graph neural networks, large language models, hybrid approaches). Our in-depth analysis reveals three major trends: the predominance of graph neural networks for modeling brain connectivity, the rise of large language models for linguistic and conversational data, and an emerging focus on multimodal fusion, explainability, and algorithmic fairness. Alongside methodological insights, we provide an overview of prominent public datasets and standard evaluation metrics as a practical guide for researchers. By synthesizing current advances and highlighting open challenges, this survey offers a comprehensive roadmap for future innovation in computational psychiatry.",0 "LLMs enable qualitative coding at large scale, but assessing reliability remains challenging where human experts seldom agree. We investigate confidence-diversity calibration as a quality assessment framework for accessible coding tasks where LLMs already demonstrate strong performance but exhibit overconfidence. Analysing 5,680 coding decisions from eight state-of-the-art LLMs across ten categories, we find that mean self-confidence tracks inter-model agreement closely (Pearson r=0.82). 
Adding model diversity quantified as normalised Shannon entropy produces a dual signal explaining agreement almost completely (R-squared=0.979), though this high predictive power likely reflects task simplicity for current LLMs. The framework enables a three-tier workflow auto-accepting 35 percent of segments with less than 5 percent error, cutting manual effort by 65 percent. Cross-domain validation confirms transferability (kappa improvements of 0.20 to 0.78). While the framework establishes a methodological foundation for AI judgement calibration, its true potential likely lies in more challenging scenarios where LLMs may demonstrate comparative advantages over human cognitive limitations.",0 "Business interview preparation demands both solid theoretical grounding and refined soft skills, yet conventional classroom methods rarely deliver the individualized, culturally aware practice employers currently expect. This paper introduces SimInterview, a large language model (LLM)-based simulated multilingual interview training system designed for business professionals entering the AI-transformed labor market. Our system leverages an LLM agent and synthetic AI technologies to create realistic virtual recruiters capable of conducting personalized, real-time conversational interviews. The framework dynamically adapts interview scenarios using retrieval-augmented generation (RAG) to match individual resumes with specific job requirements across multiple languages. Built on LLMs (OpenAI o3, Llama 4 Maverick, Gemma 3), integrated with Whisper speech recognition, GPT-SoVITS voice synthesis, the Ditto diffusion-based talking head generation model, and ChromaDB vector databases, our system significantly improves interview readiness across English and Japanese markets. Experiments with university-level candidates show that the system consistently aligns its assessments with job requirements, faithfully preserves resume content, and earns high satisfaction ratings, with the lightweight Gemma 3 model producing the most engaging conversations. Qualitative findings revealed that the standardized Japanese resume format improved document retrieval while diverse English resumes introduced additional variability, and they highlighted how cultural norms shape follow-up questioning strategies. Finally, we also outline a contestable AI design that can explain its outputs, detect bias, and preserve human-in-the-loop oversight to meet emerging regulatory expectations.",1 "The integration of Large Language Models (LLMs) into software engineering has revolutionized code generation, enabling unprecedented productivity through promptware and autonomous AI agents. However, this transformation introduces significant risks, including insecure code generation, hallucinated outputs, irreversible actions, and a lack of transparency and accountability. Incidents like the Replit database deletion underscore the urgent need for robust safety and governance mechanisms. This paper comprehensively analyzes the inherent challenges of LLM-assisted code generation, such as vulnerability inheritance, overtrust, misinterpretation, and the absence of standardized validation and rollback protocols. To address these, we propose the SAFE-AI Framework, a holistic approach emphasizing Safety, Auditability, Feedback, and Explainability. The framework integrates guardrails, sandboxing, runtime verification, risk-aware logging, human-in-the-loop systems, and explainable AI techniques to mitigate risks while fostering trust and compliance. 
We introduce a novel taxonomy of AI behaviors categorizing suggestive, generative, autonomous, and destructive actions to guide risk assessment and oversight. Additionally, we identify open problems, including the lack of standardized benchmarks for code-specific hallucinations and autonomy levels, and propose future research directions for hybrid verification, semantic guardrails, and proactive governance tools. Through detailed comparisons of autonomy control, prompt engineering, explainability, and governance frameworks, this paper provides a roadmap for responsible AI integration in software engineering, aligning with emerging regulations like the EU AI Act and Canada's AIDA to ensure safe, transparent, and accountable AI-driven development.",0 "Test-time inference has emerged as a powerful paradigm for enabling language models to ``think'' longer and more carefully about complex challenges, much like skilled human experts. While reinforcement learning (RL) can drive self-improvement in language models on verifiable tasks, some models exhibit substantial gains while others quickly plateau. For instance, we find that Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game of Countdown. This discrepancy raises a critical question: what intrinsic properties enable effective self-improvement? We introduce a framework to investigate this question by analyzing four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ. Our study reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama initially lacks them. In systematic experimentation with controlled behavioral datasets, we find that priming Llama with examples containing these reasoning behaviors enables substantial improvements during RL, matching or exceeding Qwen's performance. Importantly, the presence of reasoning behaviors, rather than correctness of answers, proves to be the critical factor -- models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions. Finally, leveraging continued pretraining with OpenWebMath data, filtered to amplify reasoning behaviors, enables the Llama model to match Qwen's self-improvement trajectory. Our findings establish a fundamental relationship between initial reasoning behaviors and the capacity for improvement, explaining why some language models effectively utilize additional computation while others plateau.",0 "Artificial intelligence is reshaping science and industry, yet many users still regard its models as opaque ""black boxes"". Conventional explainable artificial-intelligence methods clarify individual predictions but overlook the upstream decisions and downstream quality checks that determine whether insights can be trusted. In this work, we present Holistic Explainable Artificial Intelligence (HXAI), a user-centric framework that embeds explanation into every stage of the data-analysis workflow and tailors those explanations to users. HXAI unifies six components (data, analysis set-up, learning process, model output, model quality, communication channel) into a single taxonomy and aligns each component with the needs of 13 domain experts, data analysts and data scientists. A 112-item question bank covers these needs; our survey of contemporary tools highlights critical coverage gaps. 
Grounded in theories of human explanation, principles from human-computer interaction and findings from empirical user studies, HXAI identifies the characteristics that make explanations clear, actionable and cognitively manageable. A comprehensive taxonomy operationalises these insights, reducing terminological ambiguity and enabling rigorous coverage analysis of existing toolchains. We further demonstrate how AI agents that embed large language models can orchestrate diverse explanation techniques, translating technical artifacts into stakeholder-specific narratives that bridge the gap between AI developers and domain experts. Departing from traditional surveys or perspective articles, this work melds concepts from multiple disciplines, lessons from real-world projects and a critical synthesis of the literature to advance a novel, end-to-end viewpoint on transparency, trustworthiness and responsible AI deployment.",1 "Motor vehicle crashes remain a leading cause of injury and death worldwide, necessitating data-driven approaches to understand and mitigate crash severity. This study introduces a curated dataset of more than 3 million people involved in accidents in Ohio over six years (2017-2022), aggregated to more than 2.3 million vehicle-level records for predictive analysis. The primary contribution is a transparent and reproducible methodology that combines Automated Machine Learning (AutoML) and explainable artificial intelligence (AI) to identify and interpret key risk factors associated with severe crashes. Using the JADBio AutoML platform, predictive models were constructed to distinguish between severe and non-severe crash outcomes. The models underwent rigorous feature selection across stratified training subsets, and their outputs were interpreted using SHapley Additive exPlanations (SHAP) to quantify the contribution of individual features. A final Ridge Logistic Regression model achieved an AUC-ROC of 85.6% on the training set and 84.9% on a hold-out test set, with 17 features consistently identified as the most influential predictors. Key features spanned demographic, environmental, vehicle, human, and operational categories, including location type, posted speed, minimum occupant age, and pre-crash action. Notably, certain traditionally emphasized factors, such as alcohol or drug impairment, were less influential in the final model compared to environmental and contextual variables. Emphasizing methodological rigor and interpretability over mere predictive performance, this study offers a scalable framework to support Vision Zero with aligned interventions and advanced data-informed traffic safety policy.",0 "The rise of LLMs opens up new possibilities in modeling opinion evolution, a long-standing task in simulation, by leveraging advanced reasoning abilities to recreate complex, large-scale human cognitive trends. While most prior works focus on opinion evolution surrounding specific isolated events or the views within a country, ours is the first to model the large-scale attitude evolution of a population representing an entire country towards another -- US citizens' perspectives towards China. To tackle the challenges of this broad scenario, we propose a framework that integrates media data collection, user profile creation, and cognitive architecture for opinion updates to successfully reproduce the real trend of US attitudes towards China over a 20-year period from 2005 to today. 
We also leverage LLMs' capabilities to introduce debiased media exposure, extracting neutral events from typically subjective news content, to uncover the roots of polarized opinion formation, as well as a devil's advocate agent to help explain the rare reversal from negative to positive attitudes towards China, corresponding with changes in the way Americans obtain information about the country. The simulation results, beyond validating our framework architecture, also reveal the impact of biased framing and selection bias in shaping attitudes. Overall, our work contributes to a new paradigm for LLM-based modeling of cognitive behaviors in a large-scale, long-term, cross-border social context, providing insights into the formation of international biases and offering valuable implications for media consumers to better understand the factors shaping their perspectives, and ultimately contributing to the larger social need for bias reduction and cross-cultural tolerance.",0 "Automated feedback generation has the potential to enhance students' learning progress by providing timely and targeted feedback. Moreover, it can assist teachers in optimizing their time, allowing them to focus on more strategic and personalized aspects of teaching. To generate high-quality, information-rich formative feedback, it is essential first to extract relevant indicators, as these serve as the foundation upon which the feedback is constructed. Teachers often employ feedback criteria grids composed of various indicators that they evaluate systematically. This study examines the initial phase of extracting such indicators from students' submissions in a language learning course using the large language model Llama 3.1. Accordingly, the alignment between indicators generated by the LLM and human ratings across various feedback criteria is investigated. The findings demonstrate statistically significant strong correlations, even in cases involving unanticipated combinations of indicators and criteria. The methodology employed in this paper offers a promising foundation for extracting indicators from students' submissions using LLMs. Such indicators can potentially be utilized to auto-generate explainable and transparent formative feedback in future research.",0 "We introduce GANDiff FR, the first synthetic framework that precisely controls demographic and environmental factors to measure, explain, and reduce bias with reproducible rigor. GANDiff FR unifies StyleGAN3-based identity-preserving generation with diffusion-based attribute control, enabling fine-grained manipulation of pose (around 30 degrees), illumination (four directions), and expression (five levels) under ceteris paribus conditions. We synthesize 10,000 demographically balanced faces across five cohorts validated for realism via automated detection (98.2%) and human review (89%) to isolate and quantify bias drivers. Benchmarking ArcFace, CosFace, and AdaFace under matched operating points shows AdaFace reduces inter-group TPR disparity by 60% (2.5% vs. 6.3%), with illumination accounting for 42% of residual bias. Cross-dataset evaluation on RFW, BUPT, and CASIA WebFace confirms strong synthetic-to-real transfer (r 0.85). Despite around 20% computational overhead relative to pure GANs, GANDiff FR yields three times more attribute-conditioned variants, establishing a reproducible, regulation-aligned (EU AI Act) standard for fairness auditing. 
Code and data are released to support transparent, scalable bias evaluation.",0 "Explaining deep learning models is essential for clinical integration of medical image analysis systems. A good explanation highlights whether a model depends on spurious features that undermine generalization and harm a subset of patients or, conversely, may present novel biological insights. Although techniques like GradCAM can identify influential features, they are measurement tools that do not themselves form an explanation. We propose a human-machine-VLM interaction system tailored to explaining classifiers in computational pathology, including multi-instance learning for whole-slide images. Our proof of concept comprises (1) an AI-integrated slide viewer to run sliding-window experiments to test claims of an explanation, and (2) quantification of an explanation's predictiveness using general-purpose vision-language models. The results demonstrate that this allows us to qualitatively test claims of explanations and can quantifiably distinguish competing explanations. This offers a practical path from explainable AI to explained AI in digital pathology and beyond. Code and prompts are available at https://github.com/nki-ai/x2x.",0 "Many important scientific problems involve multivariate optimization coupled with slow and laborious experimental measurements. These complex, high-dimensional searches can be defined by non-convex optimization landscapes that resemble needle-in-a-haystack surfaces, leading to entrapment in local minima. Contextualizing optimizers with human domain knowledge is a powerful approach to guide searches to localized fruitful regions. However, this approach is susceptible to human confirmation bias, and it is also challenging for domain experts to keep track of the rapidly expanding scientific literature. Here, we propose the use of Large Language Models (LLMs) for contextualizing Bayesian optimization (BO) via a hybrid optimization framework that intelligently and economically blends stochastic inference with domain knowledge-based insights from the LLM, which is used to suggest new, better-performing areas of the search space for exploration. Our method fosters user engagement by offering real-time commentary on the optimization progress, explaining the reasoning behind the search strategies. We validate the effectiveness of our approach on synthetic benchmarks with up to 15 independent variables and demonstrate the ability of LLMs to reason in four real-world experimental tasks where context-aware suggestions boost optimization performance substantially.",0 "The Medico 2025 challenge addresses Visual Question Answering (VQA) for Gastrointestinal (GI) imaging, organized as part of the MediaEval task series. The challenge focuses on developing Explainable Artificial Intelligence (XAI) models that answer clinically relevant questions based on GI endoscopy images while providing interpretable justifications aligned with medical reasoning. It introduces two subtasks: (1) answering diverse types of visual questions using the Kvasir-VQA-x1 dataset, and (2) generating multimodal explanations to support clinical decision-making. The Kvasir-VQA-x1 dataset, created from 6,500 images and 159,549 complex question-answer (QA) pairs, serves as the benchmark for the challenge. By combining quantitative performance metrics and expert-reviewed explainability assessments, this task aims to advance trustworthy Artificial Intelligence (AI) in medical image analysis. 
Instructions, data access, and an updated guide for participation are available in the official competition repository: https://github.com/simula/MediaEval-Medico-2025",0 "Recent advancements in machine learning have spurred growing interests in automated interpreting quality assessment. Nevertheless, existing research suffers from insufficient examination of language use quality, unsatisfactory modeling effectiveness due to data scarcity and imbalance, and a lack of efforts to explain model predictions. To address these gaps, we propose a multi-dimensional modeling framework that integrates feature engineering, data augmentation, and explainable machine learning. This approach prioritizes explainability over ``black box'' predictions by utilizing only construct-relevant, transparent features and conducting Shapley Value (SHAP) analysis. Our results demonstrate strong predictive performance on a novel English-Chinese consecutive interpreting dataset, identifying BLEURT and CometKiwi scores to be the strongest predictive features for fidelity, pause-related features for fluency, and Chinese-specific phraseological diversity metrics for language use. Overall, by placing particular emphasis on explainability, we present a scalable, reliable, and transparent alternative to traditional human evaluation, facilitating the provision of detailed diagnostic feedback for learners and supporting self-regulated learning advantages not afforded by automated scores in isolation.",0 "Protoplanetary disk evolution can be deeply influenced by the UV radiation emitted by neighboring massive stars (mainly of spectral type O and B). We show that the process of external photoevaporation, which causes an outside-in depletion of disk material due to environmental UV radiation, can lead to a significant decrease in disk size, and moderate in disk mass and lifetime even at moderate irradiation levels (1-10 G$_{0}$). In this work we investigate the role of external photoevaporation in shaping the masses and sizes of the ten AGE-PRO disks in the Upper Scorpius region, which we estimate to be subject to FUV fluxes ranging between 2 and 12 G$_{0}$, on average. We compare the disk masses and sizes resulting from 1D numerical viscous evolution simulations in which the effect of external photoevaporation is included, to the values retrieved from the AGE-PRO observations. While the pure viscous framework fails in adequately explaining the observed disk properties in Upper Scorpius, with the inclusion of external photoevaporation we can successfully reproduce gas disk sizes for 7 out of 10 sources within a factor <2, when the initial disk mass is 1-10% of the stellar mass. We emphasize the importance of accounting for the environmental irradiation when comparing star-forming regions of different ages, even when moderate FUV irradiation fields are experienced, as in the case of Upper Scorpius.",0 "Policies generated by Reinforcement Learning (RL) algorithms are difficult to explain to users, as they emerge from the interaction of complex reward structures and neural network representations. Consequently, analyzing and predicting agent behavior can be challenging, undermining user trust in real-world applications. To facilitate user understanding, current methods for global policy summarization typically rely on videos that demonstrate agent behavior in a subset of world states. However, users can only watch a limited number of demonstrations, constraining their understanding. 
Moreover, these methods place the burden of interpretation on users by presenting raw behaviors rather than synthesizing them into coherent patterns. To resolve these issues, we introduce SySLLM (Synthesized Summary using Large Language Models), advocating for a new paradigm of abstractive-textual policy explanations. By leveraging Large Language Models (LLMs), which possess extensive world knowledge and pattern synthesis capabilities, SySLLM generates textual summaries that provide structured and comprehensible explanations of agent policies. SySLLM demonstrates that LLMs can interpret spatio-temporally structured descriptions of state-action trajectories from an RL agent and generate valuable policy insights in a zero-shot setting, without any prior knowledge or fine-tuning. Our evaluation shows that SySLLM captures key insights, such as goal preferences and exploration strategies, that were also identified by human experts. Furthermore, in a large-scale user study (with 200 participants), SySLLM summaries were preferred over demonstration-based summaries (HIGHLIGHTS) by a clear majority (75.5%) of participants.",2 "The alignment of language models (LMs) with human preferences is critical for building reliable AI systems. The problem is typically framed as optimizing an LM policy to maximize the expected reward that reflects human preferences. Recently, Direct Preference Optimization (DPO) was proposed as an LM alignment method that directly optimizes the policy from static preference data, and was further improved by incorporating on-policy sampling (i.e., preference candidates generated during the training loop) for better LM alignment. However, we show on-policy data is not always optimal, with systematic effectiveness differences emerging between static and on-policy preference candidates. For example, on-policy data can result in a 3$\times$ effectiveness compared with static data for Llama-3, and a 0.4$\times$ effectiveness for Zephyr. To explain the phenomenon, we propose the alignment stage assumption, which divides the alignment process into two distinct stages: the preference injection stage, which benefits from diverse data, and the preference fine-tuning stage, which favors high-quality data. Through theoretical and empirical analysis, we characterize these stages and propose an effective algorithm to identify the boundaries between them. We perform experiments on 5 models (Llama, Zephyr, Phi-2, Qwen, Pythia) and 2 alignment methods (DPO, SLiC-HF) to show the generalizability of the alignment stage assumption and boundary measurement.",1 "Explainability and its emerging counterpart contestability have become important normative and design principles for trustworthy AI as they enable users and subjects to understand and challenge AI decisions. However, realizing these principles is difficult, as they assume different meanings in technical, legal, and organizational dimensions of AI regulation. To resolve this conceptual polysemy, in this paper, we present the findings of an interview study with 14 experts to examine the intersection and implementation of explainability and contestability, and their understanding in different research communities. We outline differentiations between descriptive and normative explainability, judicial and non-judicial channels of contestation, and individual and collective contestation action. 
We further describe the main points of friction in the realization of both principles, including the alignment between top-down and bottom-up regulation, the assignment of responsibility, and the need for interdisciplinary collaboration. Lastly, we formulate three recommendations for AI policy to implement both principles through a Regulation by Design perspective. We believe our contributions can inform policy-making and regulation of these core principles and enable more effective and equitable design, development, and deployment of trustworthy public AI systems.",2 "While automatic subjective speech quality assessment has witnessed much progress, an open question is whether an automatic quality assessment at frame resolution is possible. This would be highly desirable, as it adds explainability to the assessment of speech synthesis systems. Here, we take first steps towards this goal by identifying issues of existing quality predictors that prevent sensible frame-level prediction. Further, we define criteria that a frame-level predictor should fulfill. We also suggest a chunk-based processing that avoids the impact of a localized distortion on the score of neighboring frames. Finally, in experiments with localized artificial distortions, we measure the localization performance of a set of frame-level quality predictors and show that they can outperform the detection performance of human annotations obtained from a crowd-sourced perception experiment.",0 "Reinforcement learning in large reasoning models enables learning from feedback on their outputs, making it particularly valuable in scenarios where fine-tuning data is limited. However, its application in multi-modal human activity recognition (HAR) domains remains largely underexplored. Our work extends reinforcement learning to the human activity recognition domain with multimodal large language models. By incorporating visual reinforcement learning in the training process, the model's generalization ability on few-shot recognition can be greatly improved. Additionally, visual reinforcement learning can enhance the model's reasoning ability and enable explainable analysis in the inference stage. We name our few-shot human activity recognition method with visual reinforcement learning FAVOR. Specifically, our approach first utilizes a multimodal large language model (MLLM) to generate multiple candidate responses for the human activity image, each containing reasoning traces and final answers. These responses are then evaluated using reward functions, and the MLLM is subsequently optimized using the Group Relative Policy Optimization (GRPO) algorithm. In this way, the MLLM can be adapted to human activity recognition with only a few samples. Extensive experiments on four human activity recognition datasets and five different settings demonstrate the superiority of the proposed method.",0 "As ChatGPT and other Large Language Model (LLM)-based AI chatbots become increasingly integrated into individuals' daily lives, important research questions arise. What concerns and risks do these systems pose for individual users? What potential harms might they cause, and how can these be mitigated? In this work, we review recent literature and reports, and conduct a comprehensive investigation into these questions. We begin by explaining how LLM-based AI chatbots work, providing essential background to help readers understand chatbots' inherent limitations. 
We then identify a range of risks associated with individual use of these chatbots, including hallucinations, intrinsic biases, sycophantic behavior, cognitive decline from overreliance, social isolation, and privacy leakage. Finally, we propose several key mitigation strategies to address these concerns. Our goal is to raise awareness of the potential downsides of AI chatbot use, and to empower users to enhance, rather than diminish, human intelligence, and to enrich, rather than compromise, daily life.",0 "Large language models (LLMs) have created new opportunities to assist teachers and support student learning. While researchers have explored various prompt engineering approaches in educational contexts, the degree to which these approaches generalize across domains--such as science, computing, and engineering--remains underexplored. In this paper, we introduce Chain-of-Thought Prompting + Active Learning (CoTAL), an LLM-based approach to formative assessment scoring that (1) leverages Evidence-Centered Design (ECD) to align assessments and rubrics with curriculum goals, (2) applies human-in-the-loop prompt engineering to automate response scoring, and (3) incorporates chain-of-thought (CoT) prompting and teacher and student feedback to iteratively refine questions, rubrics, and LLM prompts. Our findings demonstrate that CoTAL improves GPT-4's scoring performance across domains, achieving gains of up to 38.9% over a non-prompt-engineered baseline (i.e., without labeled examples, chain-of-thought prompting, or iterative refinement). Teachers and students judge CoTAL to be effective at scoring and explaining responses, and their feedback produces valuable insights that enhance grading accuracy and explanation quality.",0 "Accessing suitable datasets is critical for research and development in recommender systems. However, finding datasets that match specific recommendation tasks or domains remains a challenge due to scattered sources and inconsistent metadata. To address this gap, we propose a community-driven and explainable dataset search engine tailored for recommender system research. Our system supports semantic search across multiple dataset attributes, such as dataset names, descriptions, and recommendation domains, and provides explanations of search relevance to enhance transparency. The system encourages community participation by allowing users to contribute standardized dataset metadata to a public repository. By improving dataset discoverability and search interpretability, the system facilitates more efficient research reproduction. The platform is publicly available at: https://ds4rs.com.",0 "Emotion analysis is an inherently ambiguous task. Previous work studied annotator properties to explain disagreement, but this overlooks the possibility that ambiguity may stem from missing information about the context of events. In this paper, we propose a novel approach that adds reasonable contexts to event descriptions, which may better explain a particular situation. Our goal is to understand whether these enriched contexts enable human annotators to annotate emotions more reliably. We disambiguate a target event description by automatically generating multiple event chains conditioned on differing emotions. By combining techniques from short story generation in various settings, we achieve coherent narratives that result in a specialized dataset for the first comprehensive and systematic examination of contextualized emotion analysis.
Through automatic and human evaluation, we find that contextual narratives enhance the interpretation of specific emotions and support annotators in producing more consistent annotations.",1 "Business communication digitisation has reorganised the process of persuasive discourse, which allows not only greater transparency but also advanced deception. This inquiry synthesises classical rhetoric and communication psychology with linguistic theory and empirical studies in financial reporting, sustainability discourse, and digital marketing to explain how deceptive language can be systematically detected using a persuasive lexicon. In controlled settings, detection accuracies of greater than 99% were achieved by using computational textual analysis as well as personalised transformer models. However, reproducing this performance in multilingual settings remains problematic, largely because sufficient data are hard to obtain and because few multilingual text-processing infrastructures are in place. This evidence points to a growing gap between theoretical representations of communication and their empirical approximations, and therefore to a need for robust automatic text-identification systems as AI-based discourse becomes increasingly realistic in communicating with humans.",0 "Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about and what were the developments that led to its existence? What are the traditions, the concerns, the technological and social developments that pushed digital ethics? How did ethical issues change with the digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: 'pre-modernity' prior to digital computation over data, via the 'modernity' of digital data processing to our present 'post-modernity' when not only the data is digital, but our lives themselves are largely digital. In each section, the situation in technology and society is sketched, and then the developments in digital ethics are explained. Finally, a brief outlook is provided.",0 "Biomechanical features have become important indicators for evaluating athletes' techniques. Traditionally, experts propose significant features and evaluate them using physics equations. However, the complexity of the human body and its movements makes it challenging to explicitly analyze the relationships between some features and athletes' final performance. With advancements in modern machine learning and statistics, data analytics methods have gained increasing importance in sports analytics. In this study, we leverage machine learning models to analyze expert-proposed biomechanical features from the finals of long jump competitions in the World Championships. The objectives of the analysis include identifying the most important features contributing to top-performing jumps and exploring the combined effects of these key features. Using quantile regression, we model the relationship between the biomechanical feature set and the target variable (effective distance), with a particular focus on elite-level jumps. To interpret the model, we apply SHapley Additive exPlanations (SHAP) alongside Partial Dependence Plots (PDPs) and Individual Conditional Expectation (ICE) plots.
The findings reveal that, beyond the well-documented velocity-related features, specific technical aspects also play a pivotal role. For male athletes, the angle of the knee of the supporting leg before take-off is identified as a key factor for achieving top 10% performance in our dataset, with angles greater than 169{\deg} contributing significantly to jump performance. In contrast, for female athletes, the landing pose and approach step technique emerge as the most critical features influencing top 10% performances, alongside velocity. This study establishes a framework for analyzing the impact of various features on athletic performance, with a particular emphasis on top-performing events.",0 "In the rapidly evolving field of Explainable Natural Language Processing (NLP), textual explanations, i.e., human-like rationales, are pivotal for explaining model predictions and enriching datasets with interpretable labels. Traditional approaches rely on human annotation, which is costly, labor-intensive, and impedes scalability. In this work, we present an automated framework that leverages multiple state-of-the-art large language models (LLMs) to generate high-quality textual explanations. We rigorously assess the quality of these LLM-generated explanations using a comprehensive suite of Natural Language Generation (NLG) metrics. Furthermore, we investigate the downstream impact of these explanations on the performance of pre-trained language models (PLMs) and LLMs across natural language inference tasks on two diverse benchmark datasets. Our experiments demonstrate that automated explanations exhibit highly competitive effectiveness compared to human-annotated explanations in improving model performance. Our findings underscore a promising avenue for scalable, automated LLM-based textual explanation generation for extending NLP datasets and enhancing model performance.",0 "Large, publicly available clinical datasets have emerged as a novel resource for understanding disease heterogeneity and exploring personalization of therapy. These datasets are derived from data not originally collected for research purposes and, as a result, are often incomplete and lack critical labels. Many AI tools have been developed to retrospectively label these datasets, such as by performing disease classification; however, they often suffer from limited interpretability. Previous work has attempted to explain predictions using Concept Bottleneck Models (CBMs), which learn interpretable concepts that map to higher-level clinical ideas, facilitating human evaluation. However, these models often experience performance limitations when the concepts fail to adequately explain or characterize the task. We use the identification of Acute Respiratory Distress Syndrome (ARDS) as a challenging test case to demonstrate the value of incorporating contextual information from clinical notes to improve CBM performance. Our approach leverages a Large Language Model (LLM) to process clinical notes and generate additional concepts, resulting in a 10% performance gain over existing methods. Additionally, it facilitates the learning of more comprehensive concepts, thereby reducing the risk of information leakage and reliance on spurious shortcuts, thus improving the characterization of ARDS.",0 "AI-based social media recommendations have great potential to improve the user experience. However, these recommendations often do not match users' interests and create an unpleasant experience for them.
Moreover, the recommendation system being a black box creates comprehensibility and transparency issues. This paper investigates social media recommendations from an end-user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users to conduct a qualitative analysis. We asked participants about the social media content suggestions, their comprehensibility, and explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content and when they want to ensure their online data security. Furthermore, users require concise, non-technical explanations along with the ability to control information flow. In addition, we observed that explanations impact the users' perception of transparency, trust, and understandability. Finally, we have outlined some design implications and presented a synthesized framework based on our data analysis.",1 "Social identity theory (SIT) and social categorization theory (SCT) are two facets of the social identity approach (SIA) to understanding social phenomena. SIT and SCT are models that describe and explain how people interact with one another socially, connecting the individual to the group through an understanding of underlying psychological mechanisms and intergroup behaviour. SIT, originally developed in the 1970s, and SCT, a later, more general offshoot, have been broadly applied to a range of social phenomena among people. The rise of increasingly social machines embedded in daily life has spurred efforts to understand whether and how artificial agents can and do participate in SIA activities. As agents like social robots and chatbots powered by sophisticated large language models (LLMs) advance, understanding the real and potential roles of these technologies as social entities is crucial. Here, I provide a primer on SIA and extrapolate, through case studies and imagined examples, how SIT and SCT can apply to artificial social agents. I emphasize that not all human models and sub-theories will apply. I further argue that, given the emerging competence of these machines and our tendency to be taken in by them, we experts may need to don the hat of the uncanny killjoy, for our own good.",0 "Sparse autoencoders (SAEs) decompose large language model (LLM) activations into latent features that reveal mechanistic structure. Conventional SAEs train on broad data distributions, forcing a fixed latent budget to capture only high-frequency, generic patterns. This often results in significant linear ``dark matter'' in reconstruction error and produces latents that fragment or absorb each other, complicating interpretation. We show that restricting SAE training to a well-defined domain (medical text) reallocates capacity to domain-specific features, improving both reconstruction fidelity and interpretability. Training JumpReLU SAEs on layer-20 activations of Gemma-2 models using 195k clinical QA examples, we find that domain-confined SAEs explain up to 20\% more variance, achieve higher loss recovery, and reduce linear residual error compared to broad-domain SAEs. Automated and human evaluations confirm that learned features align with clinically meaningful concepts (e.g., ``taste sensations'' or ``infectious mononucleosis''), rather than frequent but uninformative tokens. These domain-specific SAEs capture relevant linear structure, leaving a smaller, more purely nonlinear residual.
We conclude that domain-confinement mitigates key limitations of broad-domain SAEs, enabling more complete and interpretable latent decompositions, and suggesting the field may need to question ``foundation-model'' scaling for general-purpose SAEs.",0 "The promise of human-AI teaming lies in humans and AI working together to achieve performance levels neither could accomplish alone. Effective communication between AI and humans is crucial for teamwork, enabling users to efficiently benefit from AI assistance. This paper investigates how AI communication impacts human-AI team performance. We examine AI explanations that convey an awareness of its strengths and limitations. To achieve this, we train a decision tree on the model's mistakes, allowing it to recognize and explain where and why it might err. Through a user study on an income prediction task, we assess the impact of varying levels of information and explanations about AI predictions. Our results show that AI performance insights enhance task performance, and conveying AI awareness of its strengths and weaknesses improves trust calibration. These findings highlight the importance of considering how information delivery influences user trust and reliance in AI-assisted decision-making.",1 "This demonstration paper presents $\mathbf{LayLens}$, a tool aimed to make deepfake understanding easier for users of all educational backgrounds. While prior works often rely on outputs containing technical jargon, LayLens bridges the gap between model reasoning and human understanding through a three-stage pipeline: (1) explainable deepfake detection using a state-of-the-art forgery localization model, (2) natural language simplification of technical explanations using a vision-language model, and (3) visual reconstruction of a plausible original image via guided image editing. The interface presents both technical and layperson-friendly explanations in addition to a side-by-side comparison of the uploaded and reconstructed images. A user study with 15 participants shows that simplified explanations significantly improve clarity and reduce cognitive load, with most users expressing increased confidence in identifying deepfakes. LayLens offers a step toward transparent, trustworthy, and user-centric deepfake forensics.",1 "Ecological research increasingly relies on integrating heterogeneous datasets and knowledge to explain and predict complex phenomena. Yet, differences in data types, terminology, and documentation often hinder interoperability, reuse, and causal understanding. We present the Semantic Units Framework, a novel, domain-agnostic semantic modelling approach applied here to ecological data and knowledge in compliance with the FAIR (Findable, Accessible, Interoperable, Reusable) and CLEAR (Cognitively interoperable, semantically Linked, contextually Explorable, easily Accessible, human-Readable and -interpretable) Principles. The framework models data and knowledge as modular, logic-aware semantic units: single propositions (statement units) or coherent groups of propositions (compound units). Statement units can model measurements, observations, or universal relationships, including causal ones, and link to methods and evidence. Compound units group related statement units into reusable, semantically coherent knowledge objects. Implemented using RDF, OWL, and knowledge graphs, semantic units can be serialized as FAIR Digital Objects with persistent identifiers, provenance, and semantic interoperability. 
We show how universal statement units build ecological causal networks, which can be composed into causal maps and perspective-specific subnetworks. These support causal reasoning, confounder detection (back-door), effect identification with unobserved confounders (front-door), application of do-calculus, and alignment with Bayesian networks, structural equation models, and structural causal models. By linking fine-grained empirical data to high-level causal reasoning, the Semantic Units Framework provides a foundation for ecological knowledge synthesis, evidence annotation, cross-domain integration, reproducible workflows, and AI-ready ecological research.",0 "Explaining machine learning (ML) models for time series (TS) classification remains challenging due to the difficulty of interpreting raw time series and the high dimensionality of the input space. We introduce PHAR-Post-hoc Attribution Rules-a unified framework that transforms numeric feature attributions from post-hoc, instance-wise explainers (e.g., LIME, SHAP) into structured, human-readable rules. These rules define interpretable intervals that indicate where and when key decision boundaries occur, enhancing model transparency. PHAR performs comparably to native rule-based methods, such as Anchor, while scaling more efficiently to long TS sequences and achieving broader instance coverage. A dedicated rule fusion step consolidates rule sets using strategies like weighted selection and lasso-based refinement, balancing key quality metrics: coverage, confidence, and simplicity. This fusion ensures each instance receives a concise and unambiguous rule, improving both explanation fidelity and consistency. We further introduce visualization techniques to illustrate specificity-generalization trade-offs in the derived rules. PHAR resolves conflicting and overlapping explanations-a common effect of the Rashomon phenomenon-into coherent, domain-adaptable insights. Comprehensive experiments on UCR/UEA Time Series Classification Archive demonstrate that PHAR improves interpretability, decision transparency, and practical applicability for TS classification tasks.",0 "Insider threats, which can lead to severe losses, remain a major security concern. While machine learning-based insider threat detection (ITD) methods have shown promising results, their progress is hindered by the scarcity of high-quality data. Enterprise data is sensitive and rarely accessible, while publicly available datasets, when limited in scale due to cost, lack sufficient real-world coverage; and when purely synthetic, they fail to capture rich semantics and realistic user behavior. To address this, we propose Chimera, the first large language model (LLM)-based multi-agent framework that automatically simulates both benign and malicious insider activities and collects diverse logs across diverse enterprise environments. Chimera models each employee with agents that have role-specific behavior and integrates modules for group meetings, pairwise interactions, and autonomous scheduling, capturing realistic organizational dynamics. It incorporates 15 types of insider attacks (e.g., IP theft, system sabotage) and has been deployed to simulate activities in three sensitive domains: technology company, finance corporation, and medical institution, producing a new dataset, ChimeraLog. We assess ChimeraLog via human studies and quantitative analysis, confirming its diversity, realism, and presence of explainable threat patterns. 
Evaluations of existing ITD methods show an average F1-score of 0.83, which is significantly lower than 0.99 on the CERT dataset, demonstrating ChimeraLog's higher difficulty and utility for advancing ITD research.",0 "Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in document understanding. However, their reasoning processes remain largely black-box, making it difficult to ensure reliability and trustworthiness, especially in high-stakes domains such as legal, financial, and medical document analysis. Existing methods use fixed Chain-of-Thought (CoT) reasoning with supervised fine-tuning (SFT) but suffer from catastrophic forgetting, poor adaptability, and limited generalization across domain tasks. In this paper, we propose DocThinker, a rule-based Reinforcement Learning (RL) framework for dynamic inference-time reasoning. Instead of relying on static CoT templates, DocThinker autonomously refines reasoning strategies via policy learning, generating explainable intermediate results, including structured reasoning processes, rephrased questions, regions of interest (RoI) supporting the answer, and the final answer. By integrating multi-objective rule-based rewards and KL-constrained optimization, our method mitigates catastrophic forgetting and enhances both adaptability and transparency. Extensive experiments on multiple benchmarks demonstrate that DocThinker significantly improves generalization while producing more explainable and human-understandable reasoning steps. Our findings highlight RL as a powerful alternative for enhancing explainability and adaptability in MLLM-based document understanding. Code will be available at https://github.com/wenwenyu/DocThinker.",0 "In the context of AI-based decision support systems, explanations can help users to judge when to trust the AI's suggestion, and when to question it. In this way, human oversight can prevent AI errors and biased decision-making. However, this rests on the assumption that users will consider explanations in enough detail to be able to catch such errors. We conducted an online study on trust in explainable DSS, and were surprised to find that in many cases, participants spent little time on the explanation and did not always consider it in detail. We present an exploratory analysis of this data, investigating what factors impact how carefully study participants consider AI explanations, and how this in turn impacts whether they are open to changing their mind based on what the AI suggests.",1 "The Automated Audio Captioning (AAC) task asks models to generate natural language descriptions of an audio input. Evaluating these machine-generated audio captions is a complex task that requires considering diverse factors, among them, auditory scene understanding, sound-object inference, temporal coherence, and the environmental context of the scene. While current methods focus on specific aspects, they often fail to provide an overall score that aligns well with human judgment. In this work, we propose CLAIR-A, a simple and flexible method that leverages the zero-shot capabilities of large language models (LLMs) to evaluate candidate audio captions by directly asking LLMs for a semantic distance score. In our evaluations, CLAIR-A better predicts human judgements of quality compared to traditional metrics, with a 5.8% relative accuracy improvement compared to the domain-specific FENSE metric and up to 11% over the best general-purpose measure on the Clotho-Eval dataset. 
Moreover, CLAIR-A offers more transparency by allowing the language model to explain the reasoning behind its scores, with these explanations rated up to 30% better by human evaluators than those provided by baseline methods. CLAIR-A is made publicly available at https://github.com/DavidMChan/clair-a.",0 "Recent progress in auditory intelligence has yielded high-performing systems for sound event detection (SED), acoustic scene classification (ASC), automated audio captioning (AAC), and audio question answering (AQA). Yet these tasks remain largely constrained to surface-level recognition-capturing what happened but not why, what it implies, or how it unfolds in context. I propose a conceptual reframing of auditory intelligence as a layered, situated process that encompasses perception, reasoning, and interaction. To instantiate this view, I introduce four cognitively inspired task paradigms-ASPIRE, SODA, AUX, and AUGMENT-that structure auditory understanding across time-frequency pattern captioning, hierarchical event/scene description, causal explanation, and goal-driven interpretation, respectively. Together, these paradigms provide a roadmap toward more generalizable, explainable, and human-aligned auditory intelligence, and are intended to catalyze a broader discussion of what it means for machines to understand sound.",0 "Current AI approaches to refugee integration optimize narrow objectives such as employment and fail to capture the cultural, emotional, and ethical dimensions critical for long-term success. We introduce EMPATHIA (Enriched Multimodal Pathways for Agentic Thinking in Humanitarian Immigrant Assistance), a multi-agent framework addressing the central Creative AI question: how do we preserve human dignity when machines participate in life-altering decisions? Grounded in Kegan's Constructive Developmental Theory, EMPATHIA decomposes integration into three modules: SEED (Socio-cultural Entry and Embedding Decision) for initial placement, RISE (Rapid Integration and Self-sufficiency Engine) for early independence, and THRIVE (Transcultural Harmony and Resilience through Integrated Values and Engagement) for sustained outcomes. SEED employs a selector-validator architecture with three specialized agents - emotional, cultural, and ethical - that deliberate transparently to produce interpretable recommendations. Experiments on the UN Kakuma dataset (15,026 individuals, 7,960 eligible adults 15+ per ILO/UNHCR standards) and implementation on 6,359 working-age refugees (15+) with 150+ socioeconomic variables achieved 87.4% validation convergence and explainable assessments across five host countries. EMPATHIA's weighted integration of cultural, emotional, and ethical factors balances competing value systems while supporting practitioner-AI collaboration. By augmenting rather than replacing human expertise, EMPATHIA provides a generalizable framework for AI-driven allocation tasks where multiple values must be reconciled.",0 "The proliferation of deepfake technologies poses urgent challenges and serious risks to digital integrity, particularly within critical sectors such as forensics, journalism, and the legal system. While existing detection systems have made significant progress in classification accuracy, they typically function as black-box models, offering limited transparency and minimal support for human reasoning. This lack of interpretability hinders their usability in real-world decision-making contexts, especially for non-expert users.
In this paper, we present DF-P2E (Deepfake: Prediction to Explanation), a novel multimodal framework that integrates visual, semantic, and narrative layers of explanation to make deepfake detection interpretable and accessible. The framework consists of three modular components: (1) a deepfake classifier with Grad-CAM-based saliency visualisation, (2) a visual captioning module that generates natural language summaries of manipulated regions, and (3) a narrative refinement module that uses a fine-tuned Large Language Model (LLM) to produce context-aware, user-sensitive explanations. We instantiate and evaluate the framework on the DF40 benchmark, the most diverse deepfake dataset to date. Experiments demonstrate that our system achieves competitive detection performance while providing high-quality explanations aligned with Grad-CAM activations. By unifying prediction and explanation in a coherent, human-aligned pipeline, this work offers a scalable approach to interpretable deepfake detection, advancing the broader vision of trustworthy and transparent AI systems in adversarial media environments.",0 "Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based only on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that 300 participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior.",2 "Companion chatbots offer a potential solution to the growing epidemic of loneliness, but their impact on users' psychosocial well-being remains poorly understood, raising critical ethical questions about their deployment and design. This study presents a large-scale survey (n = 404) of regular users of companion chatbots, investigating the relationship between chatbot usage and loneliness. We develop a model explaining approximately 50% of variance in loneliness; while usage does not directly predict loneliness, we identify factors including neuroticism, social network size, and problematic use. Through cluster analysis and mixed-methods thematic analysis combining manual coding with automated theme extraction, we identify seven distinct user profiles demonstrating that companion chatbots can either enhance or potentially harm psychological well-being depending on user characteristics. Different usage patterns can lead to markedly different outcomes, with some users experiencing enhanced social confidence while others risk further isolation. These findings have significant implications for responsible AI development, suggesting that one-size-fits-all approaches to AI companionship may be ethically problematic. 
Our work contributes to the ongoing dialogue about the role of AI in social and emotional support, offering insights for developing more targeted and ethical approaches to AI companionship that complement rather than replace human connections.",2 "With the growing availability of urban data and the increasing complexity of societal challenges, visual analytics has become essential for deriving insights into pressing real-world problems. However, analyzing such data is inherently complex and iterative, requiring expertise across multiple domains. The need to manage diverse datasets, distill intricate workflows, and integrate various analytical methods presents a high barrier to entry, especially for researchers and urban experts who lack proficiency in data management, machine learning, and visualization. Advancements in large language models offer a promising solution to lower the barriers to the construction of analytics systems by enabling users to specify intent rather than define precise computational operations. However, this shift from explicit operations to intent-based interaction introduces challenges in ensuring alignment throughout the design and development process. Without proper mechanisms, gaps can emerge between user intent, system behavior, and analytical outcomes. To address these challenges, we propose Urbanite, a framework for human-AI collaboration in urban visual analytics. Urbanite leverages a dataflow-based model that allows users to specify intent at multiple scopes, enabling interactive alignment across the specification, process, and evaluation stages of urban analytics. Based on findings from a survey to uncover challenges, Urbanite incorporates features to facilitate explainability, multi-resolution definition of tasks across dataflows, nodes, and parameters, while supporting the provenance of interactions. We demonstrate Urbanite's effectiveness through usage scenarios created in collaboration with 11 urban experts. Urbanite is available at https://urbantk.org/urbanite.",1 "Explainable AI (XAI) in creative contexts can go beyond transparency to support artistic engagement, modifiability, and sustained practice. While curated datasets and training human-scale models can offer artists greater agency and control, large-scale generative models like text-to-image diffusion systems often obscure these possibilities. We suggest that even large models can be treated as creative materials if their internal structure is exposed and manipulable. We propose a craft-based approach to explainability rooted in long-term, hands-on engagement akin to Sch\""on's ""reflection-in-action"" and demonstrate its application through a model-bending and inspection plugin integrated into the node-based interface of ComfyUI. We demonstrate that by interactively manipulating different parts of a generative model, artists can develop an intuition about how each component influences the output.",0 "Objective: This study proposes and preliminarily validates a novel ""Functional-Energetic Topology Model"" to uncover neurodynamic mechanisms of Non-Suicidal Self-Injury (NSSI), using Graph Neural Networks (GNNs) to decode brain network patterns from single-channel EEG in real-world settings. Methods: EEG data were collected over ~1 month from three adolescents with NSSI using a smartphone app and a portable Fp1 EEG headband during impulsive and non-impulsive states. A theory-driven GNN with seven functional nodes was built.
Performance was evaluated via intra-subject (80/20 split) and leave-one-subject-out cross-validation (LOSOCV). GNNExplainer was used for interpretability. Results: The model achieved high intra-subject accuracy (>85%) and significantly above-chance cross-subject performance (approximately 73.7%). Explainability analysis revealed a key finding: during NSSI states, a critical feedback loop regulating somatic sensation exhibits dysfunction and directional reversal. Specifically, the brain loses its ability to self-correct via negative bodily feedback, and the regulatory mechanism enters an ""ineffective idling"" state. Conclusion: This work demonstrates the feasibility of applying theory-guided GNNs to sparse, single-channel EEG for decoding complex mental states. The identified ""feedback loop reversal"" offers a novel, dynamic, and computable model of NSSI mechanisms, paving the way for objective biomarkers and next-generation Digital Therapeutics (DTx).",0 "Graph Neural Networks (GNNs) have emerged as powerful tools for learning over structured data, including text-attributed graphs, which are common in domains such as citation networks, social platforms, and knowledge graphs. GNNs are not inherently interpretable, and thus many explanation methods have been proposed. However, existing explanation methods often struggle to generate interpretable, fine-grained rationales, especially when node attributes include rich natural language. In this work, we introduce LOGIC, a lightweight, post-hoc framework that uses large language models (LLMs) to generate faithful and interpretable explanations for GNN predictions. LOGIC projects GNN node embeddings into the LLM embedding space and constructs hybrid prompts that interleave soft prompts with textual inputs from the graph structure. This enables the LLM to reason about GNN internal representations and produce natural language explanations along with concise explanation subgraphs. Our experiments across four real-world TAG datasets demonstrate that LOGIC achieves a favorable trade-off between fidelity and sparsity, while significantly improving human-centric metrics such as insightfulness. LOGIC sets a new direction for LLM-based explainability in graph learning by aligning GNN internals with human reasoning.",0 "As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle with detecting implicit hate, offensive language, and gender biases due to the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs. To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases.
Furthermore, we introduced SafePhi, a QLoRA fine-tuned version of Phi-4, adapting to diverse ethical contexts and outperforming benchmark moderators by achieving a Macro F1 score of 0.89, whereas OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperformed, underscoring the need to incorporate more heterogeneous and representative data with human-in-the-loop oversight for better model robustness and explainability.",0 "This paper considers Ecogame, an innovative art project of 1970, whose creators believed in a positive vision of a technological future; an understanding, posited on cybernetics, of a future that could be participatory via digital means, and therefore more democratised. Using simulation and early machine learning techniques over a live network, Ecogame combined the power of visual art with cybernetic concepts of adaptation, feedback, and control to propose that behaviour had implications for the total system. It provides an historical precedent for contemporary AI-driven art about using AI in a more human-centred way.",0 "Accurate and interpretable classification of brain tumors from magnetic resonance imaging (MRI) is critical for effective diagnosis and treatment planning. This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs) using a soft voting strategy to classify three common brain tumor types: glioma, meningioma, and pituitary adenoma. The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. To enhance transparency and clinical trust, the framework integrates an Explainable AI (XAI) module employing Grad-CAM++ for class-specific saliency visualization, alongside a symbolic Clinical Decision Rule Overlay (CDRO) that maps predictions to established radiological heuristics. The ensemble classifier achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%. Grad-CAM++ visualizations revealed strong spatial alignment between model attention and expert-annotated tumor regions, supported by Dice coefficients up to 0.88 and IoU scores up to 0.78. Clinical rule activation further validated model predictions in cases with distinct morphological features. A human-centered interpretability assessment involving five board-certified radiologists yielded high Likert-scale scores for both explanation usefulness (mean = 4.4) and heatmap-region correspondence (mean = 4.0), reinforcing the framework's clinical relevance. Overall, the proposed approach offers a robust, interpretable, and generalizable solution for automated brain tumor classification, advancing the integration of deep learning into clinical neurodiagnostics.",0 "The mainstream paradigm of remote sensing image interpretation has long been dominated by vision-centered models, which rely on visual features for semantic understanding. However, these models face inherent limitations in handling multi-modal reasoning, semantic abstraction, and interactive decision-making. While recent advances have introduced Large Language Models (LLMs) into remote sensing workflows, existing studies primarily focus on downstream applications, lacking a unified theoretical framework that explains the cognitive role of language.
This review advocates a paradigm shift from vision-centered to language-centered remote sensing interpretation. Drawing inspiration from the Global Workspace Theory (GWT) of human cognition, we propose a language-centered framework for remote sensing interpretation that treats LLMs as the cognitive central hub integrating perceptual, task, knowledge, and action spaces to enable unified understanding, reasoning, and decision-making. We first explore the potential of LLMs as the central cognitive component in remote sensing interpretation, and then summarize core technical challenges, including unified multimodal representation, knowledge association, and reasoning and decision-making. Furthermore, we construct a global workspace-driven interpretation mechanism and review how language-centered solutions address each challenge. Finally, we outline future research directions from four perspectives: adaptive alignment of multimodal data, task understanding under dynamic knowledge constraints, trustworthy reasoning, and autonomous interaction. This work aims to provide a conceptual foundation for the next generation of remote sensing interpretation systems and establish a roadmap toward cognition-driven intelligent geospatial analysis.",0 "Accurate diagnosis of skin diseases remains a significant challenge due to the complex and diverse visual features present in dermatoscopic images, often compounded by a lack of interpretability in existing purely visual diagnostic models. To address these limitations, this study introduces VL-MedGuide (Visual-Linguistic Medical Guide), a novel framework leveraging the powerful multi-modal understanding and reasoning capabilities of Visual-Language Large Models (LVLMs) for intelligent and inherently interpretable auxiliary diagnosis of skin conditions. VL-MedGuide operates in two interconnected stages: a Multi-modal Concept Perception Module, which identifies and linguistically describes dermatologically relevant visual features through sophisticated prompt engineering, and an Explainable Disease Reasoning Module, which integrates these concepts with raw visual information via Chain-of-Thought prompting to provide precise disease diagnoses alongside transparent rationales. Comprehensive experiments on the Derm7pt dataset demonstrate that VL-MedGuide achieves state-of-the-art performance in both disease diagnosis (83.55% BACC, 80.12% F1) and concept detection (76.10% BACC, 67.45% F1), surpassing existing baselines. Furthermore, human evaluations confirm the high clarity, completeness, and trustworthiness of its generated explanations, bridging the gap between AI performance and clinical utility by offering actionable, explainable insights for dermatological practice.",0 "This paper introduces and formalizes Noosem\`ia, a novel cognitive-phenomenological pattern emerging from human interaction with generative AI systems, particularly those enabling dialogic or multimodal exchanges. We propose a multidisciplinary framework to explain how, under certain conditions, users attribute intentionality, agency, and even interiority to these systems - a process grounded not in physical resemblance, but in linguistic performance, epistemic opacity, and emergent technological complexity. By linking an LLM declination of meaning holism to our technical notion of the LLM Contextual Cognitive Field, we clarify how LLMs construct meaning relationally and how coherence and a simulacrum of agency arise at the human-AI interface.
The analysis situates noosemia alongside pareidolia, animism, the intentional stance and the uncanny valley, distinguishing its unique characteristics. We also introduce a-noosemia to describe the phenomenological withdrawal of such projections. The paper concludes with reflections on the broader philosophical, epistemological and social implications of noosemic dynamics and directions for future research.",0 "Current explainable AI (XAI) approaches prioritize algorithmic transparency and present explanations in abstract, non-adaptive formats that often fail to support meaningful end-user understanding. This paper introduces ""Explanatory AI"" as a complementary paradigm that leverages generative AI capabilities to serve as explanatory partners for human understanding rather than providers of algorithmic transparency. While XAI reveals algorithmic decision processes for model validation, Explanatory AI addresses contextual reasoning to support human decision-making in sociotechnical contexts. We develop a definition and systematic eight-dimensional conceptual model distinguishing Explanatory AI through narrative communication, adaptive personalization, and progressive disclosure principles. Empirical validation through Rapid Contextual Design methodology with healthcare professionals demonstrates that users consistently prefer context-sensitive, multimodal explanations over technical transparency. Our findings reveal the practical urgency for AI systems designed for human comprehension rather than algorithmic introspection, establishing a comprehensive research agenda for advancing user-centered AI explanation approaches across diverse domains and cultural contexts.",0 "Kinetics of a balanced network of neurons with a sparse grid of synaptic links is well representable by the stochastic dynamics of a generic neuron subject to an effective shot noise. The rate of delta-pulses of the noise is determined self-consistently from the probability density of the neuron states. Importantly, the most sophisticated (but robust) collective regimes of the network do not allow for the diffusion approximation, which is routinely adopted for a shot noise in mathematical neuroscience. These regimes can be expected to be biologically relevant. For the kinetics equations of the complete mean field theory of a homogeneous inhibitory network of quadratic integrate-and-fire neurons, we introduce circular cumulants of the genuine phase variable and derive a rigorous two-cumulant reduction for both time-independent conditions and modulation of the excitatory current. The low-dimensional model is examined with numerical simulations and found to be accurate for time-independent states and dynamic response to a periodic modulation deep into the parameter domain where the diffusion approximation is not applicable. The accuracy of a low-dimensional model indicates and explains a low embedding dimensionality of the macroscopic collective dynamics of the network. The reduced model can be instrumental for theoretical studies of inhibitory-excitatory balanced neural networks.",0 "Weld defect detection is crucial for ensuring the safety and reliability of piping systems in the oil and gas industry, especially in challenging marine and offshore environments. Traditional non-destructive testing (NDT) methods often fail to detect subtle or internal defects, leading to potential failures and costly downtime.
Furthermore, existing neural network-based approaches for defect classification frequently rely on arbitrarily selected pretrained architectures and lack interpretability, raising safety concerns for deployment. To address these challenges, this paper introduces ``Adapt-WeldNet"", an adaptive framework for welding defect detection that systematically evaluates various pre-trained architectures, transfer learning strategies, and adaptive optimizers to identify the best-performing model and hyperparameters, optimizing defect detection and providing actionable insights. Additionally, a novel Defect Detection Interpretability Analysis (DDIA) framework is proposed to enhance system transparency. DDIA employs Explainable AI (XAI) techniques, such as Grad-CAM and LIME, alongside domain-specific evaluations validated by certified ASNT NDE Level II professionals. Incorporating a Human-in-the-Loop (HITL) approach and aligning with the principles of Trustworthy AI, DDIA ensures the reliability, fairness, and accountability of the defect detection system, fostering confidence in automated decisions through expert validation. By improving both performance and interpretability, this work enhances trust, safety, and reliability in welding defect detection systems, supporting critical operations in offshore and marine environments.",0 "The Vehicle Routing Problem (VRP) is a complex optimization problem with numerous real-world applications, mostly solved using metaheuristic algorithms due to its $\mathcal{NP}$-Hard nature. Traditionally, these metaheuristics rely on human-crafted designs developed through empirical studies. However, recent research shows that machine learning methods can be used to learn the structural characteristics of solutions in combinatorial optimization, thereby aiding in designing more efficient algorithms, particularly for solving the VRP. Building on this advancement, this study extends previous research by conducting a sensitivity analysis using multiple classifier models that are capable of predicting the quality of VRP solutions. Hence, by leveraging explainable AI, this research is able to extend the understanding of how these models make decisions. Finally, our findings indicate that while feature importance varies, certain features consistently emerge as strong predictors. Furthermore, we propose a unified framework capable of ranking feature impact across different scenarios to illustrate this finding. These insights highlight the potential of feature importance analysis as a foundation for developing a guidance mechanism for metaheuristic algorithms for solving the VRP.",0 "Generative AI has made image creation more accessible, yet aligning outputs with nuanced creative intent remains challenging, particularly for non-experts. Existing tools often require users to externalize ideas through prompts or references, limiting fluid exploration. We introduce ThematicPlane, a system that enables users to navigate and manipulate high-level semantic concepts (e.g., mood, style, or narrative tone) within an interactive thematic design plane. This interface bridges the gap between tacit creative intent and system control. In our exploratory study (N=6), participants engaged in divergent and convergent creative modes, often embracing unexpected results as inspiration or iteration cues. While they grounded their exploration in familiar themes, differing expectations of how themes mapped to outputs revealed a need for more explainable controls.
Overall, ThematicPlane fosters expressive, iterative workflows and highlights new directions for intuitive, semantics-driven interaction in generative design tools.",1 "Video quality assessment (VQA) aims to objectively quantify perceptual quality degradation in alignment with human visual perception. Despite recent advances, existing VQA models still suffer from two critical limitations: \textit{poor generalization to out-of-distribution (OOD) videos} and \textit{limited explainability}, which restrict their applicability in real-world scenarios. To address these challenges, we propose \textbf{VQAThinker}, a reasoning-based VQA framework that leverages large multimodal models (LMMs) with reinforcement learning to jointly model video quality understanding and scoring, emulating human perceptual decision-making. Specifically, we adopt group relative policy optimization (GRPO), a rule-guided reinforcement learning algorithm that enables reasoning over video quality under score-level supervision, and introduce three VQA-specific rewards: (1) a \textbf{bell-shaped regression reward} that increases rapidly as the prediction error decreases and becomes progressively less sensitive near the ground truth; (2) a \textbf{pairwise ranking reward} that guides the model to correctly determine the relative quality between video pairs; and (3) a \textbf{temporal consistency reward} that encourages the model to prefer temporally coherent videos over their perturbed counterparts. Extensive experiments demonstrate that VQAThinker achieves state-of-the-art performance on both in-domain and OOD VQA benchmarks, showing strong generalization for video quality scoring. Furthermore, evaluations on video quality understanding tasks validate its superiority in distortion attribution and quality description compared to existing explainable VQA models and LMMs. These findings demonstrate that reinforcement learning offers an effective pathway toward building generalizable and explainable VQA models solely with score-level supervision.",0 "Driver visual attention prediction is a critical task in autonomous driving and human-computer interaction (HCI) research. Most prior studies focus on estimating attention allocation at a single moment in time, typically using static RGB images such as driving scene pictures. In this work, we propose a vision-language framework that models the changing landscape of drivers' gaze through natural language, using few-shot and zero-shot learning on single RGB images. We curate and refine high-quality captions from the BDD-A dataset using human-in-the-loop feedback, then fine-tune LLaVA to align visual perception with attention-centric scene understanding. Our approach integrates both low-level cues and top-down context (e.g., route semantics, risk anticipation), enabling language-based descriptions of gaze behavior. We evaluate performance across training regimes (few shot, and one-shot) and introduce domain-specific metrics for semantic alignment and response diversity. Results show that our fine-tuned model outperforms general-purpose VLMs in attention shift detection and interpretability. To our knowledge, this is among the first attempts to generate driver visual attention allocation and shifting predictions in natural language, offering a new direction for explainable AI in autonomous driving. 
Our approach provides a foundation for downstream tasks such as behavior forecasting, human-AI teaming, and multi-agent coordination.",0 "Capturing human learning behavior based on deep learning methods has become a major research focus in both psychology and intelligent systems. Recent approaches rely on controlled experiments or rule-based models to explore cognitive processes. However, they struggle to capture learning dynamics, track progress over time, or provide explainability. To address these challenges, we introduce LearnerAgent, a novel multi-agent framework based on Large Language Models (LLMs) to simulate a realistic teaching environment. To explore human-like learning dynamics, we construct learners with psychologically grounded profiles-such as Deep, Surface, and Lazy-as well as a persona-free General Learner to inspect the base LLM's default behavior. Through weekly knowledge acquisition, monthly strategic choices, periodic tests, and peer interaction, we can track the dynamic learning progress of individual learners over a full-year journey. Our findings are fourfold: 1) Longitudinal analysis reveals that only Deep Learner achieves sustained cognitive growth. Our specially designed ""trap questions"" effectively diagnose Surface Learner's shallow knowledge. 2) The behavioral and cognitive patterns of distinct learners align closely with their psychological profiles. 3) Learners' self-concept scores evolve realistically, with the General Learner developing surprisingly high self-efficacy despite its cognitive limitations. 4) Critically, the default profile of base LLM is a ""diligent but brittle Surface Learner""-an agent that mimics the behaviors of a good student but lacks true, generalizable understanding. Extensive simulation experiments demonstrate that LearnerAgent aligns well with real scenarios, yielding more insightful findings about LLMs' behavior.",0 "Trajectory prediction is a critical task in modeling human behavior, especially in safety-critical domains such as social robotics and autonomous vehicle navigation. Traditional heuristics based on handcrafted rules often lack accuracy and generalizability. Although deep learning approaches offer improved performance, they typically suffer from high computational cost, limited explainability, and, importantly, poor generalization to out-of-distribution (OOD) scenarios. In this paper, we introduce TrajEvo, a framework that leverages Large Language Models (LLMs) to automatically design trajectory prediction heuristics. TrajEvo employs an evolutionary algorithm to generate and refine prediction heuristics from past trajectory data. We propose two key innovations: Cross-Generation Elite Sampling to encourage population diversity, and a Statistics Feedback Loop that enables the LLM to analyze and improve alternative predictions. Our evaluations demonstrate that TrajEvo outperforms existing heuristic methods across multiple real-world datasets, and notably surpasses both heuristic and deep learning methods in generalizing to an unseen OOD real-world dataset. TrajEvo marks a promising step toward the automated design of fast, explainable, and generalizable trajectory prediction heuristics. We release our source code to facilitate future research at https://github.com/ai4co/trajevo.",0 "Large Language Models (LLMs) have been extensively tuned to mitigate explicit biases, yet they often exhibit subtle implicit biases rooted in their pre-training data. 
Rather than directly probing LLMs with human-crafted questions that may trigger guardrails, we propose studying how models behave when they proactively ask questions themselves. The 20 Questions game, a multi-turn deduction task, serves as an ideal testbed for this purpose. We systematically evaluate geographic performance disparities in entity deduction using a new dataset, Geo20Q+, consisting of both notable people and culturally significant objects (e.g., foods, landmarks, animals) from diverse regions. We test popular LLMs across two gameplay configurations (canonical 20-question and unlimited turns) and in seven languages (English, Hindi, Mandarin, Japanese, French, Spanish, and Turkish). Our results reveal geographic disparities: LLMs are substantially more successful at deducing entities from the Global North than the Global South, and the Global West than the Global East. While Wikipedia pageviews and pre-training corpus frequency correlate mildly with performance, they fail to fully explain these disparities. Notably, the language in which the game is played has minimal impact on performance gaps. These findings demonstrate the value of creative, free-form evaluation frameworks for uncovering subtle biases in LLMs that remain hidden in standard prompting setups. By analyzing how models initiate and pursue reasoning goals over multiple turns, we find geographic and cultural disparities embedded in their reasoning processes. We release the dataset (Geo20Q+) and code at https://sites.google.com/view/llmbias20q/home.",0 "Despite the remarkable capabilities of large language models (LLMs) across a range of tasks, mathematical reasoning remains a challenging frontier. Motivated by the observation that humans learn more effectively when prompted not what to think but how to think, we introduce BloomWise, a cognitively-inspired prompting technique designed to enhance LLMs' performance on mathematical problem solving while making their solutions more explainable. BloomWise encourages LLMs to generate solutions - in the form of explanations - by progressing through a sequence of cognitive operations-from basic (e.g., remembering) to more advanced reasoning skills (e.g., evaluating) - mirroring how humans build understanding. The process iterates through these levels, halting early if a convergence criterion is met: specifically, if two or more consecutive levels yield the same answer, the solution from the earliest such level is output; otherwise, the process continues until all levels are completed. Through extensive experiments across five popular math reasoning datasets, we demonstrate the effectiveness of BloomWise. We also present comprehensive ablation studies to analyze the strengths of each component within our system.",0 "Multi-annotator learning traditionally aggregates diverse annotations to approximate a single ground truth, treating disagreements as noise. However, this paradigm faces fundamental challenges: subjective tasks often lack absolute ground truth, and sparse annotation coverage makes aggregation statistically unreliable. We introduce a paradigm shift from sample-wise aggregation to annotator-wise behavior modeling. By treating annotator disagreements as valuable information rather than noise, modeling annotator-specific behavior patterns can reconstruct unlabeled data to reduce annotation cost, enhance aggregation reliability, and explain annotator decision behavior. 
To this end, we propose QuMAB (Query-based Multi-Annotator Behavior Pattern Learning), which uses lightweight queries to model individual annotators while capturing inter-annotator correlations as implicit regularization. This prevents overfitting to sparse individual data while maintaining individualization and improving generalization, and a visualization of annotator focus regions offers an explainable analysis of annotator behavior. We contribute two large-scale datasets with dense per-annotator labels: STREET (4,300 labels/annotator) and AMER (average 3,118 labels/annotator), the first multimodal multi-annotator dataset. Extensive experiments demonstrate the superiority of our QuMAB in modeling individual annotators' behavior patterns, their utility for consensus prediction, and applicability under sparse annotations.",0 "Understanding the behavior of large language models (LLMs) is crucial for ensuring their safe and reliable use. However, existing explainable AI (XAI) methods for LLMs primarily rely on word-level explanations, which are often computationally inefficient and misaligned with human reasoning processes. Moreover, these methods often treat explanation as a one-time output, overlooking its inherently interactive and iterative nature. In this paper, we present LLM Analyzer, an interactive visualization system that addresses these limitations by enabling intuitive and efficient exploration of LLM behaviors through counterfactual analysis. Our system features a novel algorithm that generates fluent and semantically meaningful counterfactuals via targeted removal and replacement operations at user-defined levels of granularity. These counterfactuals are used to compute feature attribution scores, which are then integrated with concrete examples in a table-based visualization, supporting dynamic analysis of model behavior. A user study with 100 LLM practitioners and interviews with 903 experts demonstrate the system's usability and effectiveness, emphasizing the importance of involving humans in the explanation process as active participants rather than passive recipients.",2 "Large language models (LLMs) are beginning to reshape how chemists plan and run reactions in organic synthesis. Trained on millions of reported transformations, these text-based models can propose synthetic routes, forecast reaction outcomes and even instruct robots that execute experiments without human supervision. Here we survey the milestones that turned LLMs from speculative tools into practical lab partners. We show how coupling LLMs with graph neural networks, quantum calculations and real-time spectroscopy shrinks discovery cycles and supports greener, data-driven chemistry. We discuss limitations, including biased datasets, opaque reasoning and the need for safety gates that prevent unintentional hazards. Finally, we outline community initiatives -- open benchmarks, federated learning and explainable interfaces -- that aim to democratize access while keeping humans firmly in control. These advances chart a path towards rapid, reliable and inclusive molecular innovation powered by artificial intelligence and automation.",0 "In large-scale maintenance organizations, identifying subject matter experts and managing communications across complex entity relationships poses significant challenges -- including information overload and longer response times -- that traditional communication approaches fail to address effectively. 
We propose a novel framework that combines RDF graph databases with LLMs to process natural language queries for precise audience targeting, while providing transparent reasoning through a planning-orchestration architecture. Our solution enables communication owners to formulate intuitive queries combining concepts such as equipment, manufacturers, maintenance engineers, and facilities, delivering explainable results that maintain trust in the system while improving communication efficiency across the organization.",0 "Recent advancements in explainable recommendation have greatly bolstered user experience by elucidating the decision-making rationale. However, existing methods fail to provide effective feedback signals for potentially better or worse generated explanations due to their reliance on traditional supervised learning paradigms on sparse interaction data. To address these issues, we propose a novel human-like feedback-driven optimization framework. This framework employs a dynamic interactive optimization mechanism for meeting human-centered explainability requirements without incurring high labor costs. Specifically, we propose to utilize large language models (LLMs) as human simulators to predict human-like feedback for guiding the learning process. To enable the LLMs to deeply understand the task essence and meet users' diverse personalized requirements, we introduce a human-induced customized reward scoring method, which helps stimulate the language understanding and logical reasoning capabilities of LLMs. Furthermore, considering the potential conflicts between different perspectives of explanation quality, we introduce a principled Pareto optimization that transforms the multi-perspective quality enhancement task into a multi-objective optimization problem for improving explanation performance. Finally, to achieve efficient model training, we design an off-policy optimization pipeline. By incorporating a replay buffer and addressing the data distribution biases, we can effectively improve data utilization and enhance model generality. Extensive experiments on four datasets demonstrate the superiority of our approach.",0 "The goal of translation, be it by human or by machine, is, given some text in a source language, to produce text in a target language that simultaneously 1) preserves the meaning of the source text and 2) achieves natural expression in the target language. However, researchers in the machine translation community usually assess translations using a single score intended to capture semantic accuracy and the naturalness of the output simultaneously. In this paper, we build on recent advances in information theory to mathematically prove and empirically demonstrate that such single-score summaries do not and cannot give the complete picture of a system's true performance. Concretely, we prove that a tradeoff exists between accuracy and naturalness and demonstrate it by evaluating the submissions to the WMT24 shared task. Our findings help explain well-known empirical phenomena, such as the observation that optimizing translation systems for a specific accuracy metric (like BLEU) initially improves the system's naturalness, while ``overfitting'' the system to the metric can significantly degrade its naturalness. 
Thus, we advocate for a change in how translations are evaluated: rather than comparing systems using a single number, they should be compared on an accuracy-naturalness plane.",0 "As the use of AI systems in society grows, addressing potential biases that emerge from data or are learned by models is essential to prevent systematic disadvantages against specific groups. Several notions of (un)fairness have been proposed in the literature, alongside corresponding algorithmic methods for detecting and mitigating unfairness, but, with very few exceptions, these tend to ignore transparency. Instead, interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. In this paper, we contribute a novel interpretable, explainable method for bias detection relying on debates about the presence of bias against individuals, based on the values of protected features for the individuals and others in their neighbourhoods. Our method builds upon techniques from formal and computational argumentation, whereby debates result from arguing about biases within and across neighbourhoods. We provide formal, quantitative, and qualitative evaluations of our method, highlighting its strengths in performance against baselines, as well as its interpretability and explainability.",0 "While the activations of neurons in deep neural networks usually do not have a simple human-understandable interpretation, sparse autoencoders (SAEs) can be used to transform these activations into a higher-dimensional latent space which may be more easily interpretable. However, these SAEs can have millions of distinct latent features, making it infeasible for humans to manually interpret each one. In this work, we build an open-source automated pipeline to generate and evaluate natural language explanations for SAE features using LLMs. We test our framework on SAEs of varying sizes, activation functions, and losses, trained on two different open-weight LLMs. We introduce five new techniques to score the quality of explanations that are cheaper to run than the previous state of the art. One of these techniques, intervention scoring, evaluates the interpretability of the effects of intervening on a feature, which we find explains features that are not recalled by existing methods. We propose guidelines for generating better explanations that remain valid for a broader set of activating contexts, and discuss pitfalls with existing scoring techniques. We use our explanations to measure the semantic similarity of independently trained SAEs, and find that SAEs trained on nearby layers of the residual stream are highly similar. Our large-scale analysis confirms that SAE latents are indeed much more interpretable than neurons, even when neurons are sparsified using top-$k$ postprocessing. Our code is available at https://github.com/EleutherAI/sae-auto-interp, and our explanations are available at https://huggingface.co/datasets/EleutherAI/auto_interp_explanations.",0 "Software defect prediction using code metrics has been extensively researched over the past five decades. However, prediction harnessing non-software metrics is under-researched. Considering that the root cause of software defects is often attributed to human error, human factors theory might offer key forecasting metrics for actionable insights. This paper explores automated software defect prediction at the method level based on the developers' coding habits. 
First, we propose a framework for deciding which metrics to use for prediction. Next, we compare the performance of our metrics to that of the code and commit history metrics shown by research to achieve the highest performance to date. Finally, we analyze the prediction importance of each metric. As a result of our analyses of twenty-one critical infrastructure large-scale open-source software projects, we present: (1) a human error-based framework with metrics useful for defect prediction at the method level; (2) models using our proposed metrics achieve better average prediction performance than the state-of-the-art code metrics and history measures; (3) the prediction importance of all metrics is distributed differently, with each of the novel metrics having better average importance than code and history metrics; (4) the novel metrics dramatically enhance the explainability, practicality, and actionability of software defect prediction models, significantly advancing the field. We present a systematic approach to forecasting defect-prone software methods via a human error framework. This work empowers practitioners to act on predictions, empirically demonstrating how developer coding habits contribute to defects in software systems.",0 "We present SAInT, a Python-based tool for visually exploring and understanding the behavior of Machine Learning (ML) models through integrated local and global sensitivity analysis. Our system supports Human-in-the-Loop (HITL) workflows by enabling users - both AI researchers and domain experts - to configure, train, evaluate, and explain models through an interactive graphical interface without programming. The tool automates model training and selection, provides global feature attribution using variance-based sensitivity analysis, and offers per-instance explanation via LIME and SHAP. We demonstrate the system on a classification task predicting survival on the Titanic dataset and show how sensitivity information can guide feature selection and data refinement.",0 "Deception detection is a critical task in real-world applications such as security screening, fraud prevention, and credibility assessment. While deep learning methods have shown promise in surpassing human-level performance, their effectiveness often depends on the availability of high-quality and diverse deception samples. Existing research predominantly focuses on single-domain scenarios, overlooking the significant performance degradation caused by domain shifts. To address this gap, we present the SVC 2025 Multimodal Deception Detection Challenge, a new benchmark designed to evaluate cross-domain generalization in audio-visual deception detection. Participants are required to develop models that not only perform well within individual domains but also generalize across multiple heterogeneous datasets. By leveraging multimodal data, including audio, video, and text, this challenge encourages the design of models capable of capturing subtle and implicit deceptive cues. Through this benchmark, we aim to foster the development of more adaptable, explainable, and practically deployable deception detection systems, advancing the broader field of multimodal learning. By the conclusion of the workshop competition, a total of 21 teams had submitted their final results. 
See https://sites.google.com/view/svc-mm25 for more information.",2 "Reinforcement learning (RL) has demonstrated remarkable success in solving complex decision-making problems, yet its adoption in critical domains is hindered by the lack of interpretability in its decision-making processes. Existing explainable AI (xAI) approaches often fail to provide meaningful explanations for RL agents, particularly because they overlook the contrastive nature of human reasoning--answering ""why this action instead of that one?"". To address this gap, we propose a novel contrastive learning framework to explain RL-selected actions, named $\textbf{VisionMask}$. VisionMask is trained to generate explanations by explicitly contrasting the agent's chosen action with alternative actions in a given state in a self-supervised manner. We demonstrate the efficacy of our method through experiments across diverse RL environments, evaluating it in terms of faithfulness, robustness, and complexity. Our results show that VisionMask significantly improves human understanding of agent behavior while maintaining accuracy and fidelity. Furthermore, we present examples illustrating how VisionMask can be used for counterfactual analysis. This work bridges the gap between RL and xAI, paving the way for safer and more interpretable RL systems.",0 "Motion sensor time-series are central to human activity recognition (HAR), with applications in health, sports, and smart devices. However, existing methods are trained for fixed activity sets and require costly retraining when new behaviours or sensor setups appear. Recent attempts to use large language models (LLMs) for HAR, typically by converting signals into text or images, suffer from limited accuracy and lack verifiable interpretability. We propose ZARA, the first agent-based framework for zero-shot, explainable HAR directly from raw motion time-series. ZARA integrates an automatically derived pair-wise feature knowledge base that captures discriminative statistics for every activity pair, a multi-sensor retrieval module that surfaces relevant evidence, and a hierarchical agent pipeline that guides the LLM to iteratively select features, draw on this evidence, and produce both activity predictions and natural-language explanations. ZARA enables flexible and interpretable HAR without any fine-tuning or task-specific classifiers. Extensive experiments on 8 HAR benchmarks show that ZARA achieves SOTA zero-shot performance, delivering clear reasoning while exceeding the strongest baselines by 2.53x in macro F1. Ablation studies further confirm the necessity of each module, marking ZARA as a promising step toward trustworthy, plug-and-play motion time-series analysis. Our code is available at https://github.com/zechenli03/ZARA.",0 "Well-being encompasses mental, physical, and social dimensions essential to personal growth and informed life decisions. As individuals increasingly consult Large Language Models (LLMs) to understand well-being, a key challenge emerges: Can LLMs generate explanations that are not only accurate but also tailored to diverse audiences? High-quality explanations require both factual correctness and the ability to meet the expectations of users with varying expertise. In this work, we construct a large-scale dataset comprising 43,880 explanations of 2,194 well-being concepts, generated by ten diverse LLMs. We introduce a principle-guided LLM-as-a-judge evaluation framework, employing dual judges to assess explanation quality. 
Furthermore, we show that fine-tuning an open-source LLM using Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) can significantly enhance the quality of generated explanations. Our results reveal: (1) The proposed LLM judges align well with human evaluations; (2) explanation quality varies significantly across models, audiences, and categories; and (3) DPO- and SFT-finetuned models outperform their larger counterparts, demonstrating the effectiveness of preference-based learning for specialized explanation tasks.",0 "The progression from novice to disciplinary expert is a longstanding area of inquiry in educational research. Studies investigating such progressions have often resorted to participants' self-assessments or other qualitative indicators as a starting point to define experience. But does a participant's estimated experience coincide with metrics derived from their conceptual understanding of a discipline? Using data extracted from over 150 concept maps, we first demonstrate that disciplinary experience is a reliable variable to explain differences in conceptual understanding across a highly diverse learner population. Through a comparison of unsupervised and semi-supervised models, we then motivate clustering 440 participants into three distinct experience levels, and support such a classification performed in other studies of educational research. By analysing cluster composition, we also identify discrepancies between the perceived and predicted experience levels of the study participants. Lastly, for studies processing participants' data through network analysis, we present insights into statistically significant metrics that can characterise each experience level, and advocate for the use of node-level metrics in such studies.",2 "While quantum statistical mechanics triumphs in explaining many equilibrium phenomena, there is an increasing focus on going beyond conventional scenarios of thermalization. Traditionally, examples of non-thermalizing systems are either integrable or disordered. Recently, examples of translationally-invariant physical systems have been discovered whose excited energies avoid thermalization either due to local constraints (whether exact or emergent), or due to higher-form symmetries. In this article, we extend these investigations to the case of 3D $U(1)$ quantum dimer models, which are lattice gauge theories with finite-dimensional local Hilbert spaces (also generically called quantum link models) with staggered charged static matter. Using a combination of analytical and numerical methods, we uncover a class of athermal states that arise in large winding sectors, when the system is subjected to external electric fields. The polarization of the dynamical fluxes in the direction of the applied field traps excitations in 2D planes, while an interplay with the Gauss Law constraint in the perpendicular direction causes exotic athermal behaviour due to the emergence of new conserved quantities. This causes a geometric fragmentation of the system. We provide analytical arguments showing that the scaling of the number of fragments is exponential in the linear system size, leading to weak fragmentation. Further, we identify sectors which host fractonic excitations with severe mobility restrictions. 
The unitary evolution of fragments dominated by fractons is qualitatively different from the one dominated by non-fractonic excitations.",0 "Radar-based human pose estimation (HPE) provides a privacy-preserving, illumination-invariant sensing modality but is challenged by noisy, multipath-affected measurements. We introduce RadProPoser, a probabilistic encoder-decoder architecture that processes complex-valued radar tensors from a compact 3-transmitter, 4-receiver MIMO radar. By incorporating variational inference into keypoint regression, RadProPoser jointly predicts 26 three-dimensional joint locations alongside heteroscedastic aleatoric uncertainties and can be recalibrated to predict total uncertainty. We explore different probabilistic formulations using both Gaussian and Laplace distributions for latent priors and likelihoods. On our newly released dataset with optical motion-capture ground truth, RadProPoser achieves an overall mean per-joint position error (MPJPE) of 6.425 cm, with 5.678 cm at the 45 degree aspect angle. The learned uncertainties exhibit strong alignment with actual pose errors and can be calibrated to produce reliable prediction intervals, with our best configuration achieving an expected calibration error of 0.021. As an additional demonstration, sampling from these latent distributions enables effective data augmentation for downstream activity classification, resulting in an F1 score of 0.870. To our knowledge, this is the first end-to-end radar tensor-based HPE system to explicitly model and quantify per-joint uncertainty from raw radar tensor data, establishing a foundation for explainable and reliable human motion analysis in radar applications.",0 "The superconducting version of a diode effect has been the subject of extensive research in the past few years. So far, the focus has almost exclusively been on charge transport, but a natural question is whether it is possible to obtain nonreciprocal spin transport without dissipation. Here, we demonstrate that it is possible to generate electrically tunable nonreciprocal spin transport carried by a supercurrent using superconductor/ferromagnet multilayers. The nonreciprocal spin supercurrent reaches an ideal efficiency of 100%, meaning that the spin-polarization of the critical current is finite in one flow direction whereas it vanishes in the other direction. We explain the underlying physics generating this phenomenon. This result provides a way to integrate nonreciprocal supercurrents with spin-polarization, offering new functionality in quantum technologies based on Josephson junctions.",0 "Information Pursuit (IP) is an explainable prediction algorithm that greedily selects a sequence of interpretable queries about the data in order of information gain, updating its posterior at each step based on observed query-answer pairs. The standard paradigm uses hand-crafted dictionaries of potential data queries curated by a domain expert or a large language model after a human prompt. However, in practice, hand-crafted dictionaries are limited by the expertise of the curator and the heuristics of prompt engineering. This paper introduces a novel approach: learning a dictionary of interpretable queries directly from the dataset. Our query dictionary learning problem is formulated as an optimization problem by augmenting IP's variational formulation with learnable dictionary parameters. To formulate learnable and interpretable queries, we leverage the latent space of large vision and language models like CLIP. 
To solve the optimization problem, we propose a new query dictionary learning algorithm inspired by classical sparse dictionary learning. Our experiments demonstrate that learned dictionaries significantly outperform hand-crafted dictionaries generated with large language models.",0 "Automatic medical report generation has the potential to support clinical diagnosis, reduce the workload of radiologists, and enhance diagnostic consistency. However, current evaluation metrics often fail to reflect the clinical reliability of generated reports. Early overlap-based methods focus on textual matches between predicted and ground-truth entities but miss fine-grained clinical details (e.g., anatomical location, severity). Some diagnostic metrics are limited by fixed vocabularies or templates, reducing their ability to capture diverse clinical expressions. LLM-based approaches further lack interpretable reasoning steps, making it hard to assess or trust their behavior in safety-critical settings. These limitations hinder the comprehensive assessment of the reliability of generated reports and pose risks in their selection for clinical use. Therefore, we propose a Granular Explainable Multi-Agent Score (GEMA-Score) in this paper, which conducts both objective quantification and subjective evaluation through a large language model-based multi-agent workflow. Our GEMA-Score parses structured reports and employs stable calculations through interactive exchanges of information among agents to assess disease diagnosis, location, severity, and uncertainty. Additionally, an LLM-based scoring agent evaluates completeness, readability, and clinical terminology while providing explanatory feedback. Extensive experiments validate that GEMA-Score achieves the highest correlation with human expert evaluations on a public dataset, demonstrating its effectiveness in clinical scoring (Kendall coefficient = $0.69$ for the ReXVal dataset and Kendall coefficient = $0.45$ for the RadEvalX dataset). The anonymous project demo is available at: https://github.com/Zhenxuan-Zhang/GEMA_score.",0 "The human brain is organized as a complex network, where connections between regions are characterized by both functional connectivity (FC) and structural connectivity (SC). While previous studies have primarily focused on network-level FC-SC correlations (i.e., the correlation between FC and SC across all edges within a predefined network), edge-level correlations (i.e., the correlation between FC and SC across subjects at each edge) have received comparatively little attention. In this study, we systematically analyze both network-level and edge-level FC-SC correlations, demonstrating that they lead to divergent conclusions about the strength of brain function-structure association. To explain these discrepancies, we introduce new random effects models that decompose FC and SC variability into different sources: subject effects, edge effects, and their interactions. Our results reveal that network-level and edge-level FC-SC correlations are influenced by different effects, each contributing differently to the total variability in FC and SC. 
This modeling framework provides the first statistical approach for disentangling and quantitatively assessing different sources of FC and SC variability and yields new insights into the relationship between functional and structural brain networks.",0 "In previous publications, we have argued that a form of panprotopsychism based on quantum states and events offers a solution to the combination problem. This framework explains the emergence of complex phenomenal qualities and conscious subjects. Furthermore, the inherent openness of quantum mechanics allows consciousness and, more generally, phenomenal properties to exert a causal influence. If the view proposed by quantum panprotopsychism is valid, it suggests that we inhabit a consciousness-centered universe, a world whose fundamental nature is phenomenal. This is at odds with the current view about the human condition that was strongly influenced by a science based on classical mechanicism, and led to nihilism and existentialism in late 19th-century Europe, and more recently to the rise of anti-foundationalist perspectives. The centrality of consciousness resulting from the incorporation of a quantum ontology into our worldview leads us to reconsider the nihilistic view and conclude that we live in a world in which a precise physical order leads to people capable of accessing a transcendent phenomenal realm.",0 "Traditional surgical skill acquisition relies heavily on expert feedback, yet direct access is limited by faculty availability and variability in subjective assessments. While trainees can practice independently, the lack of personalized, objective, and quantitative feedback reduces the effectiveness of self-directed learning. Recent advances in computer vision and machine learning have enabled automated surgical skill assessment, demonstrating the feasibility of automatic competency evaluation. However, it is unclear whether such Artificial Intelligence (AI)-driven feedback can contribute to skill acquisition. Here, we examine the effectiveness of explainable AI (XAI)-generated feedback in surgical training through a human-AI study. We create a simulation-based training framework that utilizes XAI to analyze videos and extract surgical skill proxies related to primitive actions. Our intervention provides automated, user-specific feedback by comparing trainee performance to expert benchmarks and highlighting deviations from optimal execution through understandable proxies for actionable guidance. In a prospective user study with 1067 medical students, we compare the impact of XAI-guided feedback against traditional video-based coaching on task outcomes, cognitive load, and trainees' perceptions of AI-assisted learning. Results showed improved cognitive load and confidence post-intervention. While no differences emerged between the two feedback types in reducing performance gaps or practice adjustments, trends in the XAI group revealed desirable effects where participants more closely mimicked expert practice. This work encourages the study of explainable AI in surgical education and the development of data-driven, adaptive feedback mechanisms that could transform learning experiences and competency assessment.",2 "Arabic-language patient feedback remains under-analysed because dialect diversity and scarce aspect-level sentiment labels hinder automated assessment. 
To address this gap, we introduce EHSAN, a data-centric hybrid pipeline that merges ChatGPT pseudo-labelling with targeted human review to build the first explainable Arabic aspect-based sentiment dataset for healthcare. Each sentence is annotated with an aspect and sentiment label (positive, negative, or neutral), forming a pioneering Arabic dataset aligned with healthcare themes, with ChatGPT-generated rationales provided for each label to enhance transparency. To evaluate the impact of annotation quality on model performance, we created three versions of the training data: a fully supervised set with all labels reviewed by 109 humans, a semi-supervised set with 50% human review, and an unsupervised set with only machine-generated labels. We fine-tuned two transformer models on these datasets for both aspect and sentiment classification. Experimental results show that our Arabic-specific model achieved high accuracy even with minimal human supervision, reflecting only a minor performance drop when using ChatGPT-only labels. Reducing the number of aspect classes notably improved classification metrics across the board. These findings demonstrate an effective, scalable approach to Arabic aspect-based sentiment analysis (SA) in healthcare, combining large language model annotation with human expertise to produce a robust and explainable dataset. Future directions include generalisation across hospitals, prompt refinement, and interpretable data-driven modelling.",2 "Trustworthy interpretation of deep learning models is critical for neuroimaging applications, yet commonly used Explainable AI (XAI) methods lack rigorous validation, risking misinterpretation. We performed the first large-scale, systematic comparison of XAI methods on ~45,000 structural brain MRIs using a novel XAI validation framework. This framework establishes verifiable ground truth by constructing prediction tasks with known signal sources - from localized anatomical features to subject-specific clinical lesions - without artificially altering input images. Our analysis reveals systematic failures in two of the most widely used methods: GradCAM consistently failed to localize predictive features, while Layer-wise Relevance Propagation generated extensive, artifactual explanations that suggest incompatibility with neuroimaging data characteristics. Our results indicate that these failures stem from a domain mismatch, where methods with design principles tailored to natural images require substantial adaptation for neuroimaging data. In contrast, the simpler, gradient-based method SmoothGrad, which makes fewer assumptions about data structure, proved consistently accurate, suggesting its conceptual simplicity makes it more robust to this domain shift. These findings highlight the need for domain-specific adaptation and validation of XAI methods, suggest that interpretations from prior neuroimaging studies using standard XAI methodology warrant re-evaluation, and provide urgent guidance for practical application of XAI in neuroimaging.",0 "Traffic signal control (TSC) is vital for mitigating congestion and sustaining urban mobility. In this paper, we introduce Traffic-R1, a foundation model with human-like reasoning for TSC systems. Our model is developed through self-exploration and iteration of reinforced large language models (LLMs) with expert guidance in a simulated traffic environment. Compared to traditional reinforcement learning (RL) and recent LLM-based methods, Traffic-R1 offers three significant advantages. 
First, Traffic-R1 delivers zero-shot generalisation, transferring unchanged to new road networks and out-of-distribution incidents by utilizing its internal traffic control policies and human-like reasoning. Second, its 3B-parameter architecture is lightweight enough for real-time inference on mobile-class chips, enabling large-scale edge deployment. Third, Traffic-R1 provides an explainable TSC process and facilitates multi-intersection communication through its self-iteration and a new synchronous communication network. Extensive benchmarks demonstrate that Traffic-R1 sets a new state of the art, outperforming strong baselines and training-intensive RL controllers. In practice, the model now manages signals for more than 55,000 drivers daily, shortening average queues by over 5% and halving operator workload. Our checkpoint is available at https://huggingface.co/Season998/Traffic-R1.",0 "Ensuring safety and in-domain responses for Retrieval-Augmented Generation (RAG) systems is paramount in safety-critical applications, yet remains a significant challenge. To address this, we evaluate four methodologies for Out-Of-Domain (OOD) query detection: GPT-4o, regression-based, Principal Component Analysis (PCA)-based, and Neural Collapse (NC), to ensure the RAG system only responds to queries confined to the system's knowledge base. Specifically, our evaluation explores two novel dimensionality reduction and feature separation strategies: \textit{PCA}, where top components are selected using explained variance or OOD separability, and an adaptation of \textit{Neural Collapse Feature Separation}. We validate our approach on standard datasets (StackExchange and MSMARCO) and real-world applications (Substance Use and COVID-19), including tests against LLM-simulated and actual attacks on a COVID-19 vaccine chatbot. Through human and LLM-based evaluations of response correctness and relevance, we confirm that an external OOD detector is crucial for maintaining response relevance.",0 "Astronomical research traditionally relies on extensive domain knowledge to interpret observations and narrow down hypotheses. We demonstrate that this process can be emulated using large language model-based agents to accelerate research workflows. We propose mephisto, a multi-agent collaboration framework that mimics human reasoning to interpret multi-band galaxy observations. mephisto interacts with the CIGALE codebase, which includes spectral energy distribution (SED) models to explain observations. In this open-world setting, mephisto learns from its self-play experience, performs tree search, and accumulates knowledge in a dynamically updated base. As a proof of concept, we apply mephisto to the latest data from the James Webb Space Telescope. mephisto attains near-human proficiency in reasoning about galaxies' physical scenarios, even when dealing with a recently discovered population of ""Little Red Dot"" galaxies. This represents the first demonstration of agentic research in astronomy, advancing towards end-to-end research via LLM agents and potentially expediting astronomical discoveries.",0 "The differences between images belonging to fine-grained categories are often subtle and highly localized, and existing explainability techniques for deep learning models are often too diffuse to provide useful and interpretable explanations. 
We propose a new explainability method (PAIR-X) that leverages both intermediate model activations and backpropagated relevance scores to generate fine-grained, highly-localized pairwise visual explanations. We use animal and building re-identification (re-ID) as a primary case study of our method, and we demonstrate qualitatively improved results over a diverse set of explainability baselines on 35 public re-ID datasets. In interviews, animal re-ID experts found PAIR-X to be a meaningful improvement over existing baselines for deep model explainability, and suggested that its visualizations would be directly applicable to their work. We also propose a novel quantitative evaluation metric for our method, and demonstrate that PAIR-X visualizations appear more plausible for correct image matches than incorrect ones even when the model similarity score for the pairs is the same. By improving interpretability, PAIR-X enables humans to better distinguish correct and incorrect matches. Our code is available at: https://github.com/pairx-explains/pairx",1 "Large Language Models (LLMs) are known to overuse certain terms like ""delve"" and ""intricate."" The exact reasons for these lexical choices, however, have been unclear. Using Meta's Llama model, this study investigates the contribution of Learning from Human Feedback (LHF), under which we subsume Reinforcement Learning from Human Feedback and Direct Preference Optimization. We present a straightforward procedure for detecting the lexical preferences of LLMs that are potentially LHF-induced. Next, we more conclusively link LHF to lexical overuse by experimentally emulating the LHF procedure and demonstrating that 1083 participants systematically prefer text variants that include certain words. This lexical overuse can be seen as a sort of misalignment, though our study highlights the potential divergence between the lexical expectations of different populations -- namely LHF workers versus LLM users. Our work contributes to the growing body of research on explainable artificial intelligence and emphasizes the importance of both data and procedural transparency in alignment research.",2 "Synthetic images, audio, and video can now be generated and edited by Artificial Intelligence (AI). In particular, the malicious use of synthetic data has raised concerns about potential harms to cybersecurity, personal privacy, and public trust. Although AI-based detection tools exist to help identify synthetic content, their limitations often lead to user mistrust and confusion between real and fake content. This study examines the role of AI performance in influencing human trust and decision making in synthetic data identification. Through an online human subject experiment involving 400 participants, we examined how varying AI performance impacts human trust and dependence on AI in deepfake detection. Our findings indicate how participants calibrate their dependence on AI based on their perceived risk and the prediction results provided by AI. These insights contribute to the development of transparent and explainable AI systems that better support everyday users in mitigating the harms of synthetic media.",2 "The security of software builds has attracted increased attention in recent years in response to incidents like solarwinds and xz. Now, several companies including Oracle and Google rebuild open source projects in a secure environment and publish the resulting binaries through dedicated repositories. 
This practice enables direct comparison between these rebuilt binaries and the original ones produced by developers and published in repositories such as Maven Central. These binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process. While there are some tools that can be used for this purpose, they all fall short of providing provenance, i.e. a readable explanation of why two binaries are or are not equivalent. To address this issue, we present daleq, a tool that disassembles Java bytecode into a relational database, and can normalise this database by applying datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied by datalog proofs recording the normalisation process. We demonstrate the impact of daleq in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, daleq is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that daleq outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.",0 "In the Indian subcontinent, Telugu, one of India's six classical languages, is the most widely spoken Dravidian language. Despite its 96 million speaker base worldwide, Telugu remains underrepresented in the global NLP and Machine Learning landscape, mainly due to a lack of high-quality annotated resources. This work introduces TeSent, a comprehensive benchmark dataset for sentiment classification, a key text classification problem, in Telugu. TeSent not only provides ground truth labels for the sentences, but also includes provisions for evaluating explainability and fairness, two critical requirements in modern-day machine learning tasks. We scraped Telugu texts covering multiple domains from various social media platforms, news websites and web-blogs to preprocess and generate 26,150 sentences, and developed a custom-built annotation platform and a carefully crafted annotation protocol for collecting the ground truth labels along with their human-annotated rationales. We then fine-tuned several SOTA pre-trained models in two ways: with rationales, and without rationales. Further, we provide a detailed plausibility and faithfulness evaluation suite, which exploits the rationales, for six widely used post-hoc explainers applied on the trained models. Lastly, we curate TeEEC, Equity Evaluation Corpus in Telugu, a corpus to evaluate fairness of Telugu sentiment- and emotion-related NLP tasks, and provide a fairness evaluation suite for the trained classifier models. Our experimental results suggest that training with rationales may improve model accuracy, reduce bias in models, and make the explainers' output more aligned with human reasoning.",0 "Advances in generative models have led to AI-generated images visually indistinguishable from authentic ones. Despite numerous studies on detecting AI-generated images with classifiers, a gap persists between such methods and human cognitive forensic analysis. 
We present ForenX, a novel method that not only identifies the authenticity of images but also provides explanations that resonate with human thoughts. ForenX employs powerful multimodal large language models (MLLMs) to analyze and interpret forensic cues. Furthermore, we overcome the limitations of standard MLLMs in detecting forgeries by incorporating a specialized forensic prompt that directs the MLLMs' attention to forgery-indicative attributes. This approach not only enhances the generalization of forgery detection but also empowers the MLLMs to provide explanations that are accurate, relevant, and comprehensive. Additionally, we introduce ForgReason, a dataset dedicated to descriptions of forgery evidence in AI-generated images. Curated through collaboration between an LLM-based agent and a team of human annotators, ForgReason provides refined data that further enhances our model's performance. We demonstrate that even limited manual annotations significantly improve explanation quality. We evaluate the effectiveness of ForenX on two major benchmarks. The model's explainability is verified by comprehensive subjective evaluations.",1 "Explainability remains a critical challenge in artificial intelligence (AI) systems, particularly in high-stakes domains such as healthcare, finance, and decision support, where users must understand and trust automated reasoning. Traditional explainability methods such as feature importance and post-hoc justifications often fail to capture the cognitive processes that underlie human decision making, leading to explanations that are either too technical or insufficiently meaningful. To address this gap, we propose a novel appraisal-based framework for explainability inspired by the Component Process Model (CPM). While CPM has traditionally been applied to emotion research, we use its appraisal component as a cognitive model for generating human-aligned explanations. By structuring explanations around key appraisal dimensions such as relevance, implications, coping potential, and normative significance, our framework provides context-sensitive, cognitively meaningful justifications for AI decisions. This work introduces a new paradigm for generating intuitive, human-centred explanations in AI-driven systems by bridging cognitive science and explainable AI.",0 "Non-intrusive load monitoring (NILM) aims to disaggregate total electricity consumption into individual appliance usage, thus enabling more effective energy management. While deep learning has advanced NILM, it remains limited by its dependence on labeled data, restricted generalization, and lack of explainability. This paper introduces the first prompt-based NILM framework that leverages large language models (LLMs) with in-context learning. We design and evaluate prompt strategies that integrate appliance features, contextual information, and representative time-series examples through extensive case studies. Extensive experiments on the REDD and UK-DALE datasets show that LLMs guided solely by prompts deliver only basic NILM capabilities, with performance that lags behind traditional deep-learning models in complex scenarios. However, the experiments also demonstrate strong generalization across different houses and even regions by simply adapting the injected appliance features. It also provides clear, human-readable explanations for the inferred appliance states. Our findings define the capability boundaries of using prompt-only LLMs for NILM tasks. 
Their strengths in generalization and explainability present a promising new direction for the field.",0 "The growing integration of Artificial Intelligence (AI) into education has intensified the need for transparency and interpretability. While hackathons have long served as agile environments for rapid AI prototyping, few have directly addressed eXplainable AI (XAI) in real-world educational contexts. This paper presents a comprehensive analysis of the XAI Challenge 2025, a hackathon-style competition jointly organized by Ho Chi Minh City University of Technology (HCMUT) and the International Workshop on Trustworthiness and Reliability in Neurosymbolic AI (TRNS-AI), held as part of the International Joint Conference on Neural Networks (IJCNN 2025). The challenge tasked participants with building Question-Answering (QA) systems capable of answering student queries about university policies while generating clear, logic-based natural language explanations. To promote transparency and trustworthiness, solutions were required to use lightweight Large Language Models (LLMs) or hybrid LLM-symbolic systems. A high-quality dataset was provided, constructed via logic-based templates with Z3 validation and refined through expert student review to ensure alignment with real-world academic scenarios. We describe the challenge's motivation, structure, dataset construction, and evaluation protocol. Situating the competition within the broader evolution of AI hackathons, we argue that it represents a novel effort to bridge LLMs and symbolic reasoning in service of explainability. Our findings offer actionable insights for future XAI-centered educational systems and competitive research initiatives.",2 "This work-in-progress paper presents SPARC (Systematic Problem Solving and Algorithmic Reasoning for Children), a gamified learning platform designed to enhance engagement and knowledge retention in K-12 STEM education. Traditional approaches often struggle to motivate students or facilitate deep understanding, especially for complex scientific concepts. SPARC addresses these challenges by integrating interactive, narrative-driven gameplay with an artificial intelligence peer agent built on large language models. Rather than simply providing answers, the agent engages students in dialogue and inquiry, prompting them to explain concepts and solve problems collaboratively. The platform's design is grounded in educational theory and closely aligned with state learning standards. Initial classroom pilots utilized a multi-method assessment framework combining pre- and post-tests, in-game analytics, and qualitative feedback from students and teachers. Preliminary findings indicate that SPARC significantly increases student engagement, with most participants reporting greater interest in STEM subjects and moderate gains in conceptual understanding observed in post-test results. Ongoing development focuses on refining the AI agent, expanding curriculum integration, and improving accessibility. These early results demonstrate the potential of combining AI-driven peer support with game-based learning to create inclusive, effective, and engaging educational experiences for K-12 learners.",1 "We analyze anonymous interaction data of minors in classrooms spanning several months, schools, and subjects, employing a novel, simple topic modeling approach. 
Specifically, we categorize more than 17,000 messages generated by students, teachers, and ChatGPT in two dimensions: content (such as nature and people) and tasks (such as writing and explaining). Our hierarchical categorization done separately for each dimension includes exemplary prompts, and provides both a high-level overview as well as tangible insights. Prior works mostly lack a content or thematic categorization. While task categorizations are more prevalent in education, most have not been supported by real-world data for K-12. In turn, it is not surprising that our analysis yielded a number of novel applications. In deriving these insights, we found that many of the well-established classical and emerging computational methods, i.e., topic modeling, for analysis of large amounts of texts underperform, leading us to directly apply state-of-the-art LLMs with adequate pre-processing to achieve hierarchical topic structures with better human alignment through explicit instructions than prior approaches. Our findings support fellow researchers, teachers and students in enriching the usage of GenAI, while our discussion also highlights a number of concerns and open questions for future research.",1 "Academic performance depends on a multivariable nexus of socio-academic and financial factors. This study investigates these influences to develop effective strategies for optimizing students' CGPA. To achieve this, we reviewed various literature to identify key influencing factors and constructed an initial hypothetical causal graph based on the findings. Additionally, an online survey was conducted, where 1,050 students participated, providing comprehensive data for analysis. Rigorous data preprocessing techniques, including cleaning and visualization, ensured data quality before analysis. Causal analysis validated the relationships among variables, offering deeper insights into their direct and indirect effects on CGPA. Regression models were implemented for CGPA prediction, while classification models categorized students based on performance levels. Ridge Regression demonstrated strong predictive accuracy, achieving a Mean Absolute Error of 0.12 and a Mean Squared Error of 0.023. Random Forest outperformed in classification, attaining an F1-score near perfection and an accuracy of 98.68%. Explainable AI techniques such as SHAP, LIME, and Interpret enhanced model interpretability, highlighting critical factors such as study hours, scholarships, parental education, and prior academic performance. The study culminated in the development of a web-based application that provides students with personalized insights, allowing them to predict academic performance, identify areas for improvement, and make informed decisions to enhance their outcomes.",2 "Growing excitement around deploying AI across various domains calls for a careful assessment of how human decision-makers interact with AI-powered systems. In particular, it is essential to understand when decision-makers voluntarily choose to consult AI tools, which we term decision-maker adoption. We interviewed 9 experts across four domains -- medicine, law, journalism, and the public sector -- to explore current AI use cases and perceptions of adoption. From these interviews, we identify key factors that shape decision-maker adoption of AI tools: the decision-maker's background, perceptions of the AI, consequences for the decision-maker, and perceived implications for other stakeholders. 
We translate these factors into an AI adoption sheet to analyze how decision-makers approach adoption choices through comparative, cross-domain case studies, highlighting how our factors help explain inter-domain differences in adoption. Our findings offer practical guidance for supporting the responsible and context-aware deployment of AI by better accounting for the decision-maker's perspective.",1 "Classification models that provide human-interpretable explanations enhance clinicians' trust and usability in medical image diagnosis. One research focus is the integration and prediction of pathology-related visual attributes used by radiologists alongside the diagnosis, aligning AI decision-making with clinical reasoning. Radiologists use attributes like shape and texture as established diagnostic criteria, and mirroring these in AI decision-making both enhances transparency and enables explicit validation of model outputs. However, the adoption of such models is limited by the scarcity of large-scale medical image datasets annotated with these attributes. To address this challenge, we propose synthesizing attribute-annotated data using a generative model. We enhance the Diffusion Model with attribute conditioning and train it using only 20 attribute-labeled lung nodule samples from the LIDC-IDRI dataset. Incorporating its generated images into the training of an explainable model boosts performance, increasing attribute prediction accuracy by 13.4% and target prediction accuracy by 1.8% compared to training with only the small real attribute-annotated dataset. This work highlights the potential of synthetic data to overcome dataset limitations, enhancing the applicability of explainable models in medical image analysis.",0 "With the development of generative artificial intelligence (GenAI) tools to create art, stakeholders cannot come to an agreement on the value of these works. In this study we uncovered the mixed opinions surrounding art made by AI. We developed two versions of a dance performance augmented by technology, either with or without GenAI. For each version we informed audiences of the performance's development either before or after a survey on their perceptions of the performance. There were thirty-nine participants (13 male, 26 female) divided between the four performances. Results demonstrated that individuals were more inclined to attribute artistic merit to works made by GenAI when they were unaware of its use. We present this case study as a call to address the importance of utilizing the social context and the users' interpretations of GenAI in shaping a technical explanation, leading to a greater discussion that can bridge gaps in understanding.",2 "Many existing AI music generation tools rely on text prompts, complex interfaces, or instrument-like controls, which may require musical or technical knowledge that non-musicians do not possess. This paper introduces DeformTune, a prototype system that combines a tactile deformable interface with the MeasureVAE model to explore more intuitive, embodied, and explainable AI interaction. We conducted a preliminary study with 11 adult participants without formal musical training to investigate their experience with AI-assisted music creation. Thematic analysis of their feedback revealed recurring challenges--including unclear control mappings, limited expressive range, and the need for guidance throughout use.
We discuss several design opportunities for enhancing explainability of AI, including multimodal feedback and progressive interaction support. These findings contribute early insights toward making AI music systems more explainable and empowering for novice users.",1 "Reinforcement learning agents can achieve super-human performance in complex decision-making tasks, but their behaviour is often difficult to understand and explain. This lack of explanation limits deployment, especially in safety-critical settings where understanding and trust are essential. We identify three core explanatory targets that together provide a comprehensive view of reinforcement learning agents: behaviour, outcomes, and predictions. We develop a unified theoretical framework for explaining these three elements of reinforcement learning agents through the influence of individual features that the agent observes in its environment. We derive feature influences by using Shapley values, which collectively and uniquely satisfy a set of well-motivated axioms for fair and consistent credit assignment. The proposed approach, Shapley Values for Explaining Reinforcement Learning (SVERL), provides a single theoretical framework to comprehensively and meaningfully explain reinforcement learning agents. It yields explanations with precise semantics that are not only interpretable but also mathematically justified, enabling us to identify and correct conceptual issues in prior explanations. Through illustrative examples, we show how SVERL produces useful, intuitive explanations of agent behaviour, outcomes, and predictions, which are not apparent from observing agent behaviour alone.",0 "Code review (CR) is essential to software development, helping ensure that new code is properly integrated. However, the CR process often involves significant effort, including code adjustments, responses to reviewers, and continued implementation. While past studies have examined CR delays and iteration counts, few have investigated the effort based on the volume of code changes required, especially in the context of GitLab Merge Requests (MRs), which remains underexplored. In this paper, we define and measure CR effort as the amount of code modified after submission, using a dataset of over 23,600 MRs from four GitLab projects. We find that up to 71% of MRs require adjustments after submission, and 28% of these involve changes to more than 200 lines of code. Surprisingly, this effort is not correlated with review time or the number of participants. To better understand and predict CR effort, we train an interpretable machine learning model using metrics across multiple dimensions: text features, code complexity, developer experience, review history, and branching. Our model achieves strong performance (AUC 0.84-0.88) and reveals that complexity, experience, and text features are key predictors. Historical project characteristics also influence current review effort. Our findings highlight the feasibility of using machine learning to explain and anticipate the effort needed to integrate code changes during review.",1 "Autonomous driving systems face significant challenges in achieving human-like adaptability, robustness, and interpretability in complex, open-world environments. These challenges stem from fragmented architectures, limited generalization to novel scenarios, and insufficient semantic extraction from perception. 
To address these limitations, we propose a unified Perception-Language-Action (PLA) framework that integrates multi-sensor fusion (cameras, LiDAR, radar) with a large language model (LLM)-augmented Vision-Language-Action (VLA) architecture, specifically a GPT-4.1-powered reasoning core. This framework unifies low-level sensory processing with high-level contextual reasoning, tightly coupling perception with natural language-based semantic understanding and decision-making to enable context-aware, explainable, and safety-bounded autonomous driving. Evaluations on an urban intersection scenario with a construction zone demonstrate superior performance in trajectory tracking, speed prediction, and adaptive planning. The results highlight the potential of language-augmented cognitive frameworks for advancing the safety, interpretability, and scalability of autonomous driving systems.",0 "Current large language models (LLMs) have demonstrated emerging capabilities in social intelligence tasks, including implicature resolution and theory-of-mind reasoning, both of which require substantial pragmatic understanding. However, how LLMs acquire this pragmatic competence throughout the training process remains poorly understood. In this work, we introduce ALTPRAG, a dataset grounded in the pragmatic concept of alternatives, to evaluate whether LLMs at different training stages can accurately infer nuanced speaker intentions. Each instance pairs two equally plausible yet pragmatically divergent continuations and requires the model to (i) infer the speaker's intended meaning and (ii) explain when and why a speaker would choose one utterance over its alternative, thus directly probing pragmatic competence through contrastive reasoning. We systematically evaluate 22 LLMs across 3 key training stages: after pre-training, supervised fine-tuning (SFT), and preference optimization, to examine the development of pragmatic competence. Our results show that even base models exhibit notable sensitivity to pragmatic cues, which improves consistently with increases in model and data scale. Additionally, SFT and RLHF contribute further gains, particularly in cognitive-pragmatic scenarios. These findings highlight pragmatic competence as an emergent and compositional property of LLM training and offer new insights for aligning models with human communicative norms.",0 "Explainability in artificial intelligence (XAI) remains a crucial aspect for fostering trust and understanding in machine learning models. Current visual explanation techniques, such as gradient-based or class-activation-based methods, often exhibit a strong dependence on specific model architectures. Conversely, perturbation-based methods, despite being model-agnostic, are computationally expensive as they require evaluating models on a large number of forward passes. In this work, we introduce Foveation-based Explanations (FovEx), a novel XAI method inspired by human vision. FovEx seamlessly integrates biologically inspired perturbations by iteratively creating foveated renderings of the image and combines them with gradient-based visual explorations to determine locations of interest efficiently. These locations are selected to maximize the performance of the model to be explained with respect to the downstream task and then combined to generate an attribution map. We provide a thorough evaluation with qualitative and quantitative assessments on established benchmarks. 
Our method achieves state-of-the-art performance on both transformers (on 4 out of 5 metrics) and convolutional models (on 3 out of 5 metrics), demonstrating its versatility among various architectures. Furthermore, we show the alignment between the explanation map produced by FovEx and human gaze patterns (+14\% in NSS compared to RISE, +203\% in NSS compared to GradCAM). This comparison enhances our confidence in FovEx's ability to close the interpretation gap between humans and machines.",0 "Autonomous business processes (ABPs), i.e., self-executing workflows leveraging AI/ML, have the potential to improve operational efficiency, reduce errors, lower costs, improve response times, and free human workers for more strategic and creative work. However, ABPs may raise specific concerns including decreased stakeholder trust, difficulties in debugging, hindered accountability, risk of bias, and issues with regulatory compliance. We argue for eXplainable ABPs (XABPs) to address these concerns by enabling systems to articulate their rationale. The paper outlines a systematic approach to XABPs, characterizing their forms, structuring explainability, and identifying key BPM research challenges towards XABPs.",0 "The deterministic and time-reversal symmetric dynamics of isolated quantum systems is at odds with irreversible equilibration observed in generic thermodynamic systems. Standard approaches at a reconciliation employ subjective restrictions on the space of observables or states and do not explain how a single macroscopic quantum system achieves equilibrium dynamically. We instead argue that quantum theory is an effective theory and requires corrections to accurately describe systems approaching the thermodynamic limit. We construct a stochastic extension of quantum theory which is practically identical to quantum mechanics for microscopic systems, yet allows single, isolated macroscopic systems to objectively thermalize, generically. A fluctuation-dissipation relation guarantees physical consistency including norm preservation, energy conservation, no superluminal signalling and the emergence of microcanonical equilibrium. We further discuss the inclusion of objective collapse, thereby realizing a falsifiable theory of spontaneous universal irreversibility which describes the quantum-to-classical crossover dynamics of macroscopic quantum systems. This model admits spontaneous symmetry breaking, quantum state reduction and objective quantum thermalization for individual systems while realizing an emergent hybrid, Born-Maxwell-Boltzmann-Gibbs-microcanonical distribution for ensembles.",0 "Blockchain technology is widely used in various fields due to its ability to provide decentralization and trustless security. This is a fundamental understanding held by many advocates, but it is misunderstood, leading participants to fail to recognize the limitations of the security that blockchain can provide. Among all current network attacks, Denial of Service (DoS) attacks pose significant threats due to their ease of execution and destructive potential. This paper, based on the blockchain architecture hierarchy, categorizes and organizes existing DoS attacks, with a focus on explaining the principles and methods of contract layer and consensus layer DoS attacks. 
Furthermore, this paper comprehensively analyzes and compares commonly used detection methods and defense technologies, which will contribute to strengthening the security and stability of blockchain systems and promoting their further innovation and application.",0 "Automated Program Repair (APR) seeks to automatically correct software bugs without requiring human intervention. However, existing tools tend to generate patches that satisfy test cases without fixing the underlying bug; these are known as overfitting patches. To address this issue, Automated Patch Correctness Assessment (APCA) attempts to identify overfitting patches generated by APR tools. It can be solved as a static approach, meaning that no additional information is needed beyond the original and fixed code snippets. Current static techniques often struggle with reliability, flexibility and transparency. To address these issues, we introduce RePaCA, a novel static APCA technique that leverages Large Language Models (LLMs) specialized in thinking tasks. Our model is prompted with both buggy and fixed code snippets and guided to generate a Chain of Thought that analyses code differences, reasons about how the patch addresses the root cause, and ultimately provides a binary classification: correct or overfitting. To enhance these reasoning capabilities for the APCA task specifically, the LLM is finetuned using Reinforcement Learning with the Group Relative Policy Optimization algorithm. When evaluated on a standard Defects4J-derived test, our approach achieves state-of-the-art performance, with 83.1% accuracy and an 84.8% F1-score. Furthermore, our model demonstrates superior generalization capabilities when trained on different datasets, outperforming the leading technique. This reasoning capability also provides enhanced explainability for the patch assessment. These findings underscore the considerable promise of finetuned, reasoning LLMs to advance static APCA by enhancing accuracy, generalization, and explainability.",0 "This work demonstrates a methodology for using deep learning to discover simple, practical criteria for classifying matrices based on abstract algebraic properties. By combining a high-performance neural network with explainable AI (XAI) techniques, we can distill a model's learned strategy into human-interpretable rules. We apply this approach to the challenging case of monotone matrices, defined by the condition that their inverses are entrywise nonnegative. Despite their simple definition, an easy characterization in terms of the matrix elements or the derived parameters is not known. Here, we present, to the best of our knowledge, the first systematic machine-learning approach for deriving a practical criterion that distinguishes monotone from non-monotone matrices. After establishing a labelled dataset of monotone and non-monotone matrices generated randomly and uniformly on $(-1,1)$, we employ deep neural network algorithms for classifying the matrices as monotone or non-monotone, using both their entries and a comprehensive set of matrix features. By saliency methods, such as integrated gradients, we identify, among all features, two matrix parameters which alone provide sufficient information for the matrix classification, with $95\%$ accuracy, namely the absolute values of the two lowest-order coefficients, $c_0$ and $c_1$, of the matrix's characteristic polynomial.
A data-driven study of 18,000 random $7\times7$ matrices shows that the monotone class obeys $\lvert c_{0}/c_{1}\rvert\le0.18$ with probability $>99.98\%$; because $\lvert c_{0}/c_{1}\rvert = 1/\mathrm{tr}(A^{-1})$ for monotone $A$, this is equivalent to the simple bound $\mathrm{tr}(A^{-1})\ge5.7$.",0 "Sewer pipe faults, such as leaks and blockages, can lead to severe consequences including groundwater contamination, property damage, and service disruption. Traditional inspection methods rely heavily on the manual review of CCTV footage collected by mobile robots, which is inefficient and susceptible to human error. To automate this process, we propose a novel system incorporating explainable deep learning anomaly detection combined with sequential probability ratio testing (SPRT). The anomaly detector processes single image frames, providing interpretable spatial localisation of anomalies, whilst the SPRT introduces temporal evidence aggregation, enhancing robustness against noise over sequences of image frames. Experimental results demonstrate improved anomaly detection performance, highlighting the benefits of the combined spatiotemporal analysis system for reliable and robust sewer inspection.",0 "With the rapid growth of Artificial Intelligence, Large Language Models (LLMs) have become essential for Question Answering (QA) systems, improving efficiency and reducing human workload in customer service. The emergence of Vietnamese LLMs (ViLLMs) highlights lightweight open-source models as a practical choice for their accuracy, efficiency, and privacy benefits. However, domain-specific evaluations remain limited, and the absence of benchmark datasets reflecting real customer interactions makes it difficult for enterprises to select suitable models for support applications. To address this gap, we introduce the Customer Support Conversations Dataset (CSConDa), a curated benchmark of over 9,000 QA pairs drawn from real interactions with human advisors at a large Vietnamese software company. Covering diverse topics such as pricing, product availability, and technical troubleshooting, CSConDa provides a representative basis for evaluating ViLLMs in practical scenarios. We further present a comprehensive evaluation framework, benchmarking 11 lightweight open-source ViLLMs on CSConDa with both automatic metrics and syntactic analysis to reveal model strengths, weaknesses, and linguistic patterns. This study offers insights into model behavior, explains performance differences, and identifies key areas for improvement, supporting the development of next-generation ViLLMs. By establishing a robust benchmark and systematic evaluation, our work enables informed model selection for customer service QA and advances research on Vietnamese LLMs. The dataset is publicly available at https://huggingface.co/datasets/ura-hcmut/Vietnamese-Customer-Support-QA.",0 "How thousands of microtubules and molecular motors self-organize into spindles remains poorly understood. By combining static, nanometer-resolution, large-scale electron tomography reconstructions and dynamic, optical-resolution, polarized light microscopy, we test an active liquid crystal continuum model of mitotic spindles in human tissue culture cells. The predictions of this coarse-grained theory quantitatively agree with the experimentally measured spindle morphology and fluctuation spectra. 
These findings argue that local interactions and polymerization produce collective alignment, diffusive-like motion, and polar transport which govern the behaviors of the spindle's microtubule network, and provide a means to measure the spindle's material properties. This work demonstrates that a coarse-grained theory featuring measurable, physically-interpretable parameters can quantitatively describe the mechanical behavior and self-organization of human mitotic spindles.",0 "Accurate load forecasting is essential to the operation of modern electric power systems. Given the sensitivity of electricity demand to weather variability and temporal dynamics, capturing non-linear patterns is essential for long-term planning. This paper presents a comparative analysis of machine learning models, Linear Regression, XGBoost, LightGBM, and Long Short-Term Memory (LSTM), for forecasting system-wide electricity load up to one year in advance. Midterm forecasting has shown to be crucial for maintenance scheduling, resource allocation, financial forecasting, and market participation. The paper places a focus on the use of a method called ""Shapley Additive Explanations"" (SHAP) to improve model explainability. SHAP enables the quantification of feature contributions, guiding informed feature engineering and improving both model transparency and forecasting accuracy.",0 "Recent large vision-language models (LVLMs) have advanced capabilities in visual question answering (VQA). However, interpreting where LVLMs direct their visual attention remains a significant challenge, yet is essential for understanding model behavior. We introduce GLIMPSE (Gradient-Layer Importance Mapping for Prompted Visual Saliency Explanation), a lightweight, model-agnostic framework that jointly attributes LVLM outputs to the most relevant visual evidence and textual signals that support open-ended generation. GLIMPSE fuses gradient-weighted attention, adaptive layer propagation, and relevance-weighted token aggregation to produce holistic response-level heat maps for interpreting cross-modal reasoning, outperforming prior methods in faithfulness and pushing the state-of-the-art in human-attention alignment. We demonstrate an analytic approach to uncover fine-grained insights into LVLM cross-modal attribution, trace reasoning dynamics, analyze systematic misalignment, diagnose hallucination and bias, and ensure transparency.",0 "We propose a novel segmentation-based explainable artificial intelligence (XAI) method for neural networks working on point cloud classification. As one building block of this method, we propose a novel point-shifting mechanism to introduce perturbations in point cloud data. Recently, AI has seen an exponential growth. Hence, it is important to understand the decision-making process of AI algorithms when they are applied in critical areas. Our work focuses on explaining AI algorithms that classify point cloud data. An important aspect of the methods used for explaining AI algorithms is their ability to produce explanations that are easy for humans to understand. This allows them to analyze the AI algorithms better and make appropriate decisions based on that analysis. Therefore, in this work, we intend to generate meaningful explanations that can be easily interpreted by humans. The point cloud data we consider represents 3D objects such as cars, guitars, and laptops. We make use of point cloud segmentation models to generate explanations for the working of classification models. 
The segments are used to introduce perturbations into the input point cloud data and generate saliency maps. The perturbations are introduced using the novel point-shifting mechanism proposed in this work, which ensures that the shifted points no longer influence the output of the classification algorithm. In contrast to previous methods, the segments used by our method are meaningful, i.e., humans can easily interpret the meaning of the segments. Thus, the benefit of our method over other methods is its ability to produce more meaningful saliency maps. We compare our method with the use of classical clustering algorithms to generate explanations. We also analyze the saliency maps generated for example inputs using our method to demonstrate the usefulness of the method in generating meaningful explanations.",0 "Ensuring transparency and trust in AI-driven public health and biomedical sciences systems requires more than accurate predictions-it demands explanations that are clear, contextual, and socially accountable. While explainable AI (XAI) has advanced in areas like feature attribution and model interpretability, most methods still lack the structure and adaptability needed for diverse health stakeholders, including clinicians, policymakers, and the general public. We introduce PHAX-a Public Health Argumentation and eXplainability framework-that leverages structured argumentation to generate human-centered explanations for AI outputs. PHAX is a multi-layer architecture combining defeasible reasoning, adaptive natural language techniques, and user modeling to produce context-aware, audience-specific justifications. More specifically, we show how argumentation enhances explainability by supporting AI-driven decision-making, justifying recommendations, and enabling interactive dialogues across user types. We demonstrate the applicability of PHAX through use cases such as medical term simplification, patient-clinician communication, and policy justification. In particular, we show how simplification decisions can be modeled as argument chains and personalized based on users' expertise, enhancing both interpretability and trust. By aligning formal reasoning methods with communicative demands, PHAX contributes to a broader vision of transparent, human-centered AI in public health.",1 "Humans can effortlessly describe what they see, yet establishing a shared representational format between vision and language remains a significant challenge. Emerging evidence suggests that human brain representations in both vision and language are well predicted by semantic feature spaces obtained from large language models (LLMs). This raises the possibility that sensory systems converge in their inherent ability to transform their inputs onto a shared, embedding-like representational space. However, it remains unclear how such a space manifests in human behaviour. To investigate this, sixty-three participants performed behavioural similarity judgements separately on 100 natural scene images and 100 corresponding sentence captions from the Natural Scenes Dataset. We found that visual and linguistic similarity judgements not only converge at the behavioural level but also predict a remarkably similar network of fMRI brain responses evoked by viewing the natural scene images. Furthermore, computational models trained to map images onto LLM-embeddings outperformed both category-trained and AlexNet controls in explaining the behavioural similarity structure.
These findings demonstrate that human visual and linguistic similarity judgements are grounded in a shared, modality-agnostic representational structure that mirrors how the visual system encodes experience. The convergence between sensory and artificial systems suggests a common capacity of how conceptual representations are formed-not as arbitrary products of first order, modality-specific input, but as structured representations that reflect the stable, relational properties of the external world.",2 "The need for explanations in AI has, by and large, been driven by the desire to increase the transparency of black-box machine learning models. However, such explanations, which focus on the internal mechanisms that lead to a specific output, are often unsuitable for non-experts. To facilitate a human-centered perspective on AI explanations, agents need to focus on individuals and their preferences as well as the context in which the explanations are given. This paper proposes a personalized approach to explanation, where the agent tailors the information provided to the user based on what is most likely pertinent to them. We propose a model of the agent's worldview that also serves as a personal and dynamic memory of its previous interactions with the same user, based on which the artificial agent can estimate what part of its knowledge is most likely new information to the user.",0 "Supply Chain Management requires addressing a variety of complex decision-making challenges, from sourcing strategies to planning and execution. Over the last few decades, advances in computation and information technologies have enabled the transition from manual, intuition and experience-based decision-making, into more automated and data-driven decisions using a variety of tools that apply optimization techniques. These techniques use mathematical methods to improve decision-making. Unfortunately, business planners and executives still need to spend considerable time and effort to (i) understand and explain the recommendations coming out of these technologies; (ii) analyze various scenarios and answer what-if questions; and (iii) update the mathematical models used in these tools to reflect current business environments. Addressing these challenges requires involving data science teams and/or the technology providers to explain results or make the necessary changes in the technology and hence significantly slows down decision making. Motivated by the recent advances in Large Language Models (LLMs), we report how this disruptive technology can democratize supply chain technology - namely, facilitate the understanding of tools' outcomes, as well as the interaction with supply chain tools without human-in-the-loop. Specifically, we report how we apply LLMs to address the three challenges described above, thus substantially reducing the time to decision from days and weeks to minutes and hours as well as dramatically increasing planners' and executives' productivity and impact.",0 "Judgment of risk is key to decision-making under uncertainty. As Daniel Kahneman and Amos Tversky famously discovered, humans do so in a distinctive way that departs from mathematical rationalism. Specifically, they demonstrated experimentally that humans accept more risk when they feel themselves at risk of losing something than when they might gain. I report the first tests of Kahneman and Tversky's landmark 'prospect theory' with Large Language Models, including today's state of the art chain-of-thought 'reasoners'. 
In common with humans, I find that prospect theory often anticipates how these models approach risky decisions across a range of scenarios. I also demonstrate that context is key to explaining much of the variance in risk appetite. The 'frame' through which risk is apprehended appears to be embedded within the language of the scenarios tackled by the models. Specifically, I find that military scenarios generate far larger 'framing effects' than do civilian settings, ceteris paribus. My research suggests, therefore, that language models the world, capturing our human heuristics and biases. But also that these biases are uneven - the idea of a 'frame' is richer than simple gains and losses. Wittgenstein's notion of 'language games' explains the contingent, localised biases activated by these scenarios. Finally, I use my findings to reframe the ongoing debate about reasoning and memorisation in LLMs.",0 "AI-driven clinical text classification is vital for explainable automated retrieval of population-level health information. This work investigates whether human-based clinical rationales can serve as additional supervision to improve both performance and explainability of transformer-based models that automatically encode clinical documents. We analyzed 99,125 human-based clinical rationales that provide plausible explanations for primary cancer site diagnoses, using them as additional training samples alongside 128,649 electronic pathology reports to evaluate transformer-based models for extracting primary cancer sites. We also investigated sufficiency as a way to measure rationale quality for pre-selecting rationales. Our results showed that clinical rationales as additional training data can improve model performance in high-resource scenarios but produce inconsistent behavior when resources are limited. Using sufficiency as an automatic metric to preselect rationales also leads to inconsistent results. Importantly, models trained on rationales were consistently outperformed by models trained on additional reports instead. This suggests that clinical rationales don't consistently improve model performance and are outperformed by simply using more reports. Therefore, if the goal is optimizing accuracy, annotation efforts should focus on labeling more reports rather than creating rationales. However, if explainability is the priority, training models on rationale-supplemented data may help them better identify rationale-like features. We conclude that using clinical rationales as additional training data results in smaller performance improvements and only slightly better explainability (measured as average token-level rationale coverage) compared to training on additional reports.",0 "Deep Neural Networks (DNNs) have made tremendous progress in multimodal tasks such as image captioning. However, explaining/interpreting how these models integrate visual information, language information and knowledge representation to generate meaningful captions remains a challenging problem. Standard metrics to measure performance typically rely on comparing generated captions with human-written ones that may not provide a user with a deep insights into this integration. In this work, we develop a novel explanation framework that is easily interpretable based on Hybrid Markov Logic Networks (HMLNs) - a language that can combine symbolic rules with real-valued functions - where we hypothesize how relevant examples from the training data could have influenced the generation of the observed caption. 
To do this, we learn a HMLN distribution over the training instances and infer the shift in distributions over these instances when we condition on the generated sample which allows us to quantify which examples may have been a source of richer information to generate the observed caption. Our experiments on captions generated for several state-of-the-art captioning models using Amazon Mechanical Turk illustrate the interpretability of our explanations, and allow us to compare these models along the dimension of explainability.",0 "Increasingly large parameter spaces, used to more accurately model precision observables in physics, can paradoxically lead to large deviations in the inferred parameters of interest -- a bias known as volume projection effects -- when marginalising over many nuisance parameters. For posterior distributions that admit a Laplace expansion, we show that this artefact of Bayesian inference can be mitigated by defining expectation values with respect to a non-flat volume measure, such that the posterior mean becomes unbiased on average. We begin by finding a measure that ensures the mean is an unbiased estimator of the mode. Although the mode itself, as we rediscover, is biased under sample averaging, this choice yields the least biased estimator due to a cancellation we clarify. We further explain why bias in marginal posteriors can appear relatively large, yet remains correctable, when the number of nuisances is large. To demonstrate our approach, we present mock analyses in large-scale structure (LSS) wherein cosmological parameters are subject to large projection effects (at the 1-2$\sigma$ level) under a flat measure, that are however recovered at high fidelity ($<0.1\sigma$) when estimated using non-flat counterparts. Our cosmological analyses are enabled by $\texttt{PyBird-JAX}$, a fast, differentiable pipeline for LSS developed in our companion paper [1].",0 "Deep clustering uncovers hidden patterns and groups in complex time series data, yet its opaque decision-making limits use in safety-critical settings. This survey offers a structured overview of explainable deep clustering for time series, collecting current methods and their real-world applications. We thoroughly discuss and compare peer-reviewed and preprint papers through application domains across healthcare, finance, IoT, and climate science. Our analysis reveals that most work relies on autoencoder and attention architectures, with limited support for streaming, irregularly sampled, or privacy-preserved series, and interpretability is still primarily treated as an add-on. To push the field forward, we outline six research opportunities: (1) combining complex networks with built-in interpretability; (2) setting up clear, faithfulness-focused evaluation metrics for unsupervised explanations; (3) building explainers that adapt to live data streams; (4) crafting explanations tailored to specific domains; (5) adding human-in-the-loop methods that refine clusters and explanations together; and (6) improving our understanding of how time series clustering models work internally. By making interpretability a primary design goal rather than an afterthought, we propose the groundwork for the next generation of trustworthy deep clustering time series analytics.",0 "We introduce a novel visual tokenization framework that embeds a provable PCA-like structure into the latent token space. 
While existing visual tokenizers primarily optimize for reconstruction fidelity, they often neglect the structural properties of the latent space--a critical factor for both interpretability and downstream tasks. Our method generates a 1D causal token sequence for images, where each successive token contributes non-overlapping information with mathematically guaranteed decreasing explained variance, analogous to principal component analysis. This structural constraint ensures the tokenizer extracts the most salient visual features first, with each subsequent token adding diminishing yet complementary information. Additionally, we identified and resolved a semantic-spectrum coupling effect that causes the unwanted entanglement of high-level semantic content and low-level spectral details in the tokens by leveraging a diffusion decoder. Experiments demonstrate that our approach achieves state-of-the-art reconstruction performance and enables better interpretability to align with the human vision system. Moreover, autoregressive models trained on our token sequences achieve performance comparable to current state-of-the-art methods while requiring fewer tokens for training and inference.",0 "Recent advances in diffusion models have enabled the creation of deceptively real images, posing significant security risks when misused. In this study, we empirically show that different timesteps of DDIM inversion reveal varying subtle distinctions between synthetic and real images that are extractable for detection, in the forms of such as Fourier power spectrum high-frequency discrepancies and inter-pixel variance distributions. Based on these observations, we propose a novel synthetic image detection method that directly utilizes features of intermediately noised images by training an ensemble on multiple noised timesteps, circumventing conventional reconstruction-based strategies. To enhance human comprehension, we introduce a metric-grounded explanation generation and refinement module to identify and explain AI-generated flaws. Additionally, we construct the GenHard and GenExplain benchmarks to provide detection samples of greater difficulty and high-quality rationales for fake images. Extensive experiments show that our method achieves state-of-the-art performance with 98.91% and 95.89% detection accuracy on regular and challenging samples respectively, and demonstrates generalizability and robustness. Our code and datasets are available at https://github.com/Shadowlized/ESIDE.",0 "Metal additive manufacturing (AM) involves complex interdependencies among processes, materials, feedstock, and post-processing steps. However, the underlying relationships and domain knowledge remain fragmented across literature and static databases that often require expert-level queries, limiting their applicability in design and planning. To address these limitations, we develop a novel and structured knowledge graph (KG), representing 53 distinct metals and alloys across seven material categories, nine AM processes, four feedstock types, and corresponding post-processing requirements. A large language model (LLM) interface, guided by a few-shot prompting strategy, enables natural language querying without the need for formal query syntax. The system supports a range of tasks, including compatibility evaluation, constraint-based filtering, and design for AM (DfAM) guidance. User queries in natural language are normalized, translated into Cypher, and executed on the KG, with results returned in a structured format. 
This work introduces the first interactive system that connects a domain-specific metal AM KG with an LLM interface, delivering accessible and explainable decision support for engineers and promoting human-centered tools in manufacturing knowledge systems.",0 "Despite increasing AI chatbot deployment in public discourse, empirical evidence on their capacity to foster intercultural empathy remains limited. Through a randomized experiment, we assessed how different AI deliberation approaches--cross-cultural deliberation (presenting other-culture perspectives), own-culture deliberation (representing participants' own culture), and non-deliberative control--affect intercultural empathy across American and Latin American participants. Cross-cultural deliberation increased intercultural empathy among American participants through positive emotional engagement, but produced no such effects for Latin American participants, who perceived AI responses as culturally inauthentic despite explicit prompting to represent their cultural perspectives. Our analysis of participant-driven feedback, where users directly flagged and explained culturally inappropriate AI responses, revealed systematic gaps in AI's representation of Latin American contexts that persist despite sophisticated prompt engineering. These findings demonstrate that current approaches to AI cultural alignment--including linguistic adaptation and explicit cultural prompting--cannot fully address deeper representational asymmetries in AI systems. Our work advances both deliberation theory and AI alignment research by revealing how the same AI system can simultaneously promote intercultural understanding for one cultural group while failing for another, with critical implications for designing equitable AI systems for cross-cultural democratic discourse.",2 "Construction throughout history typically assumes that its blueprints and building blocks are pre-determined. However, recent work suggests that alternative approaches can enable new paradigms for structure formation. Aleatory architectures, or those which rely on the properties of their granular building blocks rather than pre-planned design or computation, have thus far relied on human intervention for their creation. We imagine that robotic swarms could be valuable to create such aleatory structures by manipulating and forming structures from entangled granular materials. To discover principles by which robotic systems can effectively manipulate soft matter, we develop a robophysical model for interaction with geometrically cohesive granular media composed of u-shape particles. This robotic platform uses environmental signals to autonomously coordinate excavation, transport, and deposition of material. We test the effect of substrate initial conditions by characterizing robot performance in two different material compaction states and observe as much as a 75% change in transported mass depending on initial substrate compressive loading. These discrepancies suggest the functional role that material properties such as packing and cohesion/entanglement play in excavation and construction. To better understand these material properties, we develop an apparatus for tensile testing of the geometrically cohesive substrates, which reveals how entangled material strength responds strongly to initial compressive loading. 
These results explain the variation observed in robotic performance and point to future directions for better understanding robotic interaction mechanics with entangled materials.",0 "AI technologies, including deep learning, large-language models have gone from one breakthrough to the other. As a result, we are witnessing growing excitement in robotics at the prospect of leveraging the potential of AI to tackle some of the outstanding barriers to the full deployment of robots in our daily lives. However, action and sensing in the physical world pose greater and different challenges than analysing data in isolation. As the development and application of AI in robotic products advances, it is important to reflect on which technologies, among the vast array of network architectures and learning models now available in the AI field, are most likely to be successfully applied to robots; how they can be adapted to specific robot designs, tasks, environments; which challenges must be overcome. This article offers an assessment of what AI for robotics has achieved since the 1990s and proposes a short- and medium-term research roadmap listing challenges and promises. These range from keeping up-to-date large datasets, representatives of a diversity of tasks robots may have to perform, and of environments they may encounter, to designing AI algorithms tailored specifically to robotics problems but generic enough to apply to a wide range of applications and transfer easily to a variety of robotic platforms. For robots to collaborate effectively with humans, they must predict human behavior without relying on bias-based profiling. Explainability and transparency in AI-driven robot control are not optional but essential for building trust, preventing misuse, and attributing responsibility in accidents. We close on what we view as the primary long-term challenges, that is, to design robots capable of lifelong learning, while guaranteeing safe deployment and usage, and sustainable computational costs.",0 "Human perceptual estimates exhibit a striking reversal in bias depending on uncertainty: they shift toward prior expectations under high sensory uncertainty, but away from them when internal noise is dominant. While Bayesian inference combined with efficient coding can explain this dual bias, existing models rely on handcrafted priors or fixed encoders, offering no account of how such representations and inferences could emerge through learning. We introduce a Generative Adversarial Inference (GAI) network that simultaneously learns sensory representations and inference strategies directly from data, without assuming explicit likelihoods or priors. Through joint reconstruction and adversarial training, the model learns a representation that approximates an efficient code consistent with information-theoretic predictions. Trained on Gabor stimuli with varying signal-to-noise ratios, GAI spontaneously reproduces the full transition from prior attraction to repulsion, and recovers the Fisher information profile predicted by efficient coding theory. It also captures the characteristic bias reversal observed in human perception more robustly than supervised or variational alternatives. 
These results show that a single adversarially trained network can jointly acquire an efficient sensory code and support Bayesian-consistent behavior, providing a neurally plausible, end-to-end account of perceptual bias that unifies normative theory and deep learning.",0 "Understanding the sources of variability in annotations is crucial for developing fair NLP systems, especially for tasks like sexism detection where demographic bias is a concern. This study investigates the extent to which annotator demographic features influence labeling decisions compared to text content. Using a Generalized Linear Mixed Model, we quantify this influence, finding that while statistically present, demographic factors account for a minor fraction (8%) of the observed variance, with tweet content being the dominant factor. We then assess the reliability of Generative AI (GenAI) models as annotators, specifically evaluating whether guiding them with demographic personas improves alignment with human judgments. Our results indicate that simplistic persona prompting often fails to enhance, and sometimes degrades, performance compared to baseline models. Furthermore, explainable AI (XAI) techniques reveal that model predictions rely heavily on content-specific tokens related to sexism, rather than correlates of demographic characteristics. We argue that focusing on content-driven explanations and robust annotation protocols offers a more reliable path towards fairness than persona simulation.",1 "The theoretical code-switching (CS) literature provides numerous pointwise investigations that aim to explain patterns in CS, i.e. why bilinguals switch language in certain positions in a sentence more often than in others. A resulting consensus is that CS can be explained by the syntax of the contributing languages. There is, however, no large-scale, multi-language, cross-phenomena experiment that tests this claim. When designing such an experiment, we need to make sure that the system that is predicting where bilinguals tend to switch has access only to syntactic information. We provide such an experiment here. Results show that syntax alone is sufficient for an automatic system to distinguish between sentences in minimal pairs of CS, to the same degree as bilingual humans. Furthermore, the learnt syntactic patterns generalise well to unseen language pairs.",0 "Imitation Learning (IL) is a widely adopted approach which enables agents to learn from human expert demonstrations by framing the task as a supervised learning problem. However, IL often suffers from causal confusion, where agents misinterpret spurious correlations as causal relationships, leading to poor performance in testing environments with distribution shift. To address this issue, we introduce GAze-Based Regularization in Imitation Learning (GABRIL), a novel method that leverages the human gaze data gathered during the data collection phase to guide the representation learning in IL. GABRIL utilizes a regularization loss which encourages the model to focus on causally relevant features identified through expert gaze and consequently mitigates the effects of confounding variables. We validate our approach in Atari environments and the Bench2Drive benchmark in CARLA by collecting human gaze datasets and applying our method in both domains. Experimental results show that the improvement of GABRIL over behavior cloning is around 179% more than the same number for other baselines in the Atari setup and 76% in the CARLA setup.
Finally, we show that our method provides extra explainability when compared to regular IL agents.",0 "This article redefines arbitrariness not as a normative flaw or a symptom of domination, but as a foundational functional mechanism structuring human systems and interactions. Diverging from critical traditions that conflate arbitrariness with injustice, it posits arbitrariness as a semiotic trait: a property enabling systems - linguistic, legal, or social - to operate effectively while withholding their internal rationale. Building on Ferdinand de Saussure's concept of l'arbitraire du signe, the analysis extends this principle beyond language to demonstrate its cross-domain applicability, particularly in law and social dynamics. The paper introduces the ""Motivation -> Constatability -> Contestability"" chain, arguing that motivation functions as a crucial interface rendering an act's logic vulnerable to intersubjective contestation. When this chain is broken through mechanisms like ""immotivization"" or ""Conflict Lateralization"" (exemplified by ""the blur of the wolf drowned in the fish""), acts produce binding effects without exposing their rationale, thus precluding justiciability. This structural opacity, while appearing illogical, is a deliberate design protecting authority from accountability. Drawing on Shannon's entropy model, the paper formalizes arbitrariness as A = H(L|M) (conditional entropy). It thereby proposes a modern theory of arbitrariness as a neutral operator central to control as well as care, an overlooked dimension of interpersonal relations. While primarily developed through human social systems, this framework also illuminates a new pathway for analyzing explainability in advanced artificial intelligence systems.",0 "As machine learning models are increasingly deployed in sensitive application areas, the demand for interpretable and trustworthy decision-making has increased. Random Forests (RF), despite their widespread use and strong performance on tabular data, remain difficult to interpret due to their ensemble nature. We present Forest-Guided Clustering (FGC), a model-specific explainability method that reveals both local and global structure in RFs by grouping instances according to shared decision paths. FGC produces human-interpretable clusters aligned with the model's internal logic and computes cluster-specific and global feature importance scores to derive decision rules underlying RF predictions. FGC accurately recovered latent subclass structure on a benchmark dataset and outperformed classical clustering and post-hoc explanation methods. Applied to an AML transcriptomic dataset, FGC uncovered biologically coherent subpopulations, disentangled disease-relevant signals from confounders, and recovered known and novel gene expression patterns. FGC bridges the gap between performance and interpretability by providing structure-aware insights that go beyond feature-level attribution.",0 "Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding, reasoning, and problem-solving across various domains. However, their ability to perform complex, multi-step reasoning tasks-essential for applications in science, medicine, and law-remains an area of active investigation. This paper examines the reasoning capabilities of contemporary LLMs, analyzing their strengths, limitations, and potential for improvement.
The study uses prompt engineering techniques on the Graduate-Level Google-Proof Q&A (GPQA) dataset to assess the scientific reasoning of GPT-4o. Five popular prompt engineering techniques and two tailored prompting strategies were tested: baseline direct answer (zero-shot), chain-of-thought (CoT), zero-shot CoT, self-ask, self-consistency, decomposition, and multipath prompting. Our findings indicate that while LLMs exhibit emergent reasoning abilities, they often rely on pattern recognition rather than true logical inference, leading to inconsistencies in complex problem-solving. The results indicated that self-consistency outperformed the other prompt engineering techniques with an accuracy of 52.99%, followed by direct answer (52.23%). Zero-shot CoT (50%) outperformed multipath (48.44%), decomposition (47.77%), self-ask (46.88%), and CoT (43.75%). Self-consistency performed the second worst in explaining the answers. Simple techniques such as direct answer, CoT, and zero-shot CoT provided the best scientific reasoning. We propose a research agenda aimed at bridging these gaps by integrating structured reasoning frameworks, hybrid AI approaches, and human-in-the-loop methodologies. By critically evaluating the reasoning mechanisms of LLMs, this paper contributes to the ongoing discourse on the future of artificial general intelligence and the development of more robust, trustworthy AI systems.",0 "Forecasting in financial markets remains a significant challenge due to their nonlinear and regime-dependent dynamics. Traditional deep learning models, such as long short-term memory networks and multilayer perceptrons, often struggle to generalize across shifting market conditions, highlighting the need for a more adaptive and interpretable approach. To address this, we introduce Kolmogorov-Arnold networks for stock prediction and explainable regimes (KASPER), a novel framework that integrates regime detection, sparse spline-based function modeling, and symbolic rule extraction. The framework identifies hidden market conditions using a Gumbel-Softmax-based mechanism, enabling regime-specific forecasting. For each regime, it employs Kolmogorov-Arnold networks with sparse spline activations to capture intricate price behaviors while maintaining robustness. Interpretability is achieved through symbolic learning based on Monte Carlo Shapley values, which extracts human-readable rules tailored to each regime. Applied to real-world financial time series from Yahoo Finance, the model achieves an $R^2$ score of 0.89, a Sharpe Ratio of 12.02, and a mean squared error as low as 0.0001, outperforming existing methods. This research establishes a new direction for regime-aware, transparent, and robust forecasting in financial markets.",0 "Effective human-AI teaming heavily depends on swift trust, particularly in high-stakes scenarios such as emergency response, where timely and accurate decision-making is critical. In these time-sensitive and cognitively demanding settings, adaptive explainability is essential for fostering trust between human operators and AI systems. However, existing explainable AI (XAI) approaches typically offer uniform explanations and rely heavily on explicit feedback mechanisms, which are often impractical in such high-pressure scenarios. To address this gap, we propose a conceptual framework for adaptive XAI that operates non-intrusively by responding to users' real-time cognitive and emotional states through implicit feedback, thereby enhancing swift trust in high-stakes environments.
The proposed adaptive explainability trust framework (AXTF) leverages physiological and behavioral signals, such as EEG, ECG, and eye tracking, to infer user states and support explanation adaptation. At its core is a multi-objective, personalized trust estimation model that maps workload, stress, and emotion to dynamic trust estimates. These estimates guide the modulation of explanation features, enabling responsive and personalized support that promotes swift trust in human-AI collaboration. This conceptual framework establishes a foundation for developing adaptive, non-intrusive XAI systems tailored to the rigorous demands of high-pressure, time-sensitive environments.",1 "Fair and dynamic energy allocation in community microgrids remains a critical challenge, particularly when serving socio-economically diverse participants. Static optimization and cost-sharing methods often fail to adapt to evolving inequities, leading to participant dissatisfaction and unsustainable cooperation. This paper proposes a novel framework that integrates multi-objective mixed-integer linear programming (MILP), cooperative game theory, and a dynamic equity-adjustment mechanism driven by reinforcement learning (RL). At its core, the framework utilizes a bi-level optimization model grounded in Equity-regarding Welfare Maximization (EqWM) principles, which incorporate Rawlsian fairness to prioritize the welfare of the least advantaged participants. We introduce a Proximal Policy Optimization (PPO) agent that dynamically adjusts socio-economic weights in the optimization objective based on observed inequities in cost and renewable energy access. This RL-powered feedback loop enables the system to learn and adapt, continuously striving for a more equitable state. To ensure transparency, Explainable AI (XAI) is used to interpret the benefit allocations derived from a weighted Shapley value. Validated across six realistic scenarios, the framework demonstrates peak demand reductions of up to 72.6% and significant cooperative gains. The adaptive RL mechanism further reduces the Gini coefficient over time, showcasing a pathway to truly sustainable and fair energy communities.",0 "We present updated non-adiabatic and inhomogeneous evolution models for Uranus and Neptune, employing an interior composition of methane, ammonia, water, and rocks. Following formation trends of the gas giants, Uranus and Neptune formation models are applied, where both planets begin with layers stable to convection. Both planets are subject to convective mixing throughout their evolution. Consistent with past work on this subject, the interior heat in Uranus evolution models is preserved by the stability of an outer composition gradient at lower initial entropy, where convective mixing is inhibited over evolutionary timescales. In contrast, if Neptune's initial entropy is high enough to convectively mix its envelope, it undergoes homogenization and adiabatic cooling of the outer 40% of its envelope. The subsequent release of internal energy during Neptune's evolution, driven by the convective instability of its primordial outer compositional gradient, accounts for its higher luminosity relative to Uranus. This work proposes that the observed luminosity differences between Uranus and Neptune could be explained by the convective stability of their outer envelopes. 
The extensive convective mixing in Neptune can lead to a higher metallicity in its outer region compared to Uranus, a feature seen in atmospheric measurements and shown in past interior models of Neptune. Due to Neptune's more pronounced cooling, our models predict favorable conditions for hydrogen-water immiscibility in its envelope.",0 "Large language models (LLMs) produce high-dimensional embeddings that capture rich semantic and syntactic relationships between words, sentences, and concepts. Investigating LLM embedding spaces via mapper graphs enables us to understand their underlying topological structures. Specifically, a mapper graph summarizes the topological structure of the embedding space, where each node represents a topological neighborhood (containing a cluster of embeddings), and an edge connects two nodes if their corresponding neighborhoods overlap. However, manually exploring these embedding spaces to uncover encoded linguistic properties requires considerable human effort. To address this challenge, we introduce a framework for semi-automatic annotation of these embedding properties. To organize the exploration process, we first define a taxonomy of explorable elements within a mapper graph such as nodes, edges, paths, components, and trajectories. The annotation of these elements is executed through two types of customizable LLM-based agents that employ perturbation techniques for scalable and automated analysis. These agents help to explore and explain the characteristics of mapper elements and verify the robustness of the generated explanations. We instantiate the framework within a visual analytics workspace and demonstrate its effectiveness through case studies. In particular, we replicate findings from prior research on BERT's embedding properties across various layers of its architecture and provide further observations on the linguistic properties of topological neighborhoods.",0 "Sparse autoencoders (SAEs) have emerged as a powerful technique for extracting human-interpretable features from neural network activations. Previous works compared different models based on SAE-derived features, but those comparisons have been restricted to models within the same modality. We propose a novel indicator that allows quantitative comparison of models across SAE features, and use it to conduct a comparative study of visual, textual, and multimodal encoders. We also propose to quantify the Comparative Sharedness of individual features between different classes of models. With these two new tools, we conduct several studies on 21 encoders of the three types, with two significantly different sizes, and considering generalist and domain-specific datasets. The results allow us to revisit previous studies in the light of encoders trained in a multimodal context and to quantify the extent to which all these models share some representations or features. They also suggest that visual features that are specific to VLMs among vision encoders are shared with text encoders, highlighting the impact of text pretraining. The code is available at https://github.com/CEA-LIST/SAEshareConcepts",0 "Accurate remote sensing geographic mapping requires timely and representative samples. However, rapid land surface changes often render static samples obsolete within months, making manual sample updates labor-intensive and unsustainable. 
To address this challenge, we propose TasGen, a two-stage Temporal spectral-aware Automatic Sample Generation method for generating dynamic training samples from single-date static labels without human intervention. Land surface dynamics often manifest as anomalies in temporal-spectral sequences. These anomalies are multivariate yet unified: temporal, spectral, or joint anomalies stem from different mechanisms and cannot be naively coupled, as this may obscure the nature of changes. Yet, any land surface state corresponds to a coherent temporal-spectral signature, which would be lost if the two dimensions are modeled separately. To effectively capture these dynamics, TasGen first disentangles temporal and spectral features to isolate their individual contributions, and then couples them to model their synergistic interactions. In the first stage, we introduce a hierarchical temporal-spectral variational autoencoder (HTS-VAE) with a dual-dimension embedding to learn low-dimensional latent patterns of normal samples by first disentangling and then jointly embedding temporal and spectral information. This temporal-spectral embedding enables robust anomaly detection by identifying deviations from learned joint patterns. In the second stage, a classifier trained on stable samples relabels change points across time to generate dynamic samples. To not only detect but also explain surface dynamics, we further propose an anomaly interpretation method based on Gibbs sampling, which attributes changes to specific spectral-temporal dimensions.",0 "Effectively explaining decisions of black-box machine learning models is critical to responsible deployment of AI systems that rely on them. Recognizing their importance, the field of explainable AI (XAI) provides several techniques to generate these explanations. Yet, there is relatively little emphasis on the user (the explainee) in this growing body of work, and most XAI techniques generate ""one-size-fits-all"" explanations. To bridge this gap and take a step closer towards human-centered XAI, we present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise. Informed by existing work, I-CEE explains the decisions of image classification models by providing the user with an informative subset of training data (i.e., example images), corresponding local explanations, and model decisions. However, unlike prior work, I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users. We posit that by tailoring the example set to user expertise, I-CEE can better facilitate users' understanding and simulatability of the model. To evaluate our approach, we conduct detailed experiments in both simulation and with human participants (N = 100) on multiple datasets. Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions (simulatability) compared to baselines, providing promising preliminary results. Experiments with human participants demonstrate that our method significantly improves user simulatability accuracy, highlighting the importance of human-centered XAI.",2 "Autonomic Dysreflexia (AD) is a potentially life-threatening condition characterized by sudden, severe blood pressure (BP) spikes in individuals with spinal cord injury (SCI). 
Early, accurate detection is essential to prevent cardiovascular complications, yet current monitoring methods are either invasive or rely on subjective symptom reporting, limiting applicability in daily life. This study presents a non-invasive, explainable machine learning framework for detecting AD using multimodal wearable sensors. Data were collected from 27 individuals with chronic SCI during urodynamic studies, including electrocardiography (ECG), photoplethysmography (PPG), bioimpedance (BioZ), temperature, respiratory rate (RR), and heart rate (HR), across three commercial devices. Objective AD labels were derived from synchronized cuff-based BP measurements. Following signal preprocessing and feature extraction, BorutaSHAP was used for robust feature selection, and SHAP values for explainability. We trained modality- and device-specific weak learners and aggregated them using a stacked ensemble meta-model. Cross-validation was stratified by participants to ensure generalizability. HR- and ECG-derived features were identified as the most informative, particularly those capturing rhythm morphology and variability. The Nearest Centroid ensemble yielded the highest performance (Macro F1 = 0.77+/-0.03), significantly outperforming baseline models. Among modalities, HR achieved the highest area under the curve (AUC = 0.93), followed by ECG (0.88) and PPG (0.86). RR and temperature features contributed less to overall accuracy, consistent with missing data and low specificity. The model proved robust to sensor dropout and aligned well with clinical AD events. These results represent an important step toward personalized, real-time monitoring for 900 individuals with SCI.",2 "As surgery embraces digital transformation--integrating sophisticated imaging, advanced algorithms, and robotics to support and automate complex sub-tasks--human judgment of system correctness remains a vital safeguard for patient safety. This shift introduces new ""operator-type"" roles tasked with verifying complex algorithmic outputs, particularly at critical junctures of the procedure, such as the intermediary check before drilling or implant placement. A prime example is 2D/3D registration, a key enabler of image-based surgical navigation that aligns intraoperative 2D images with preoperative 3D data. Although registration algorithms have advanced significantly, they occasionally yield inaccurate results. Because even small misalignments can lead to revision surgery or irreversible surgical errors, there is a critical need for robust quality assurance. Current visualization-based strategies alone have been found insufficient to enable humans to reliably detect 2D/3D registration misalignments. In response, we propose the first artificial intelligence (AI) framework trained specifically for 2D/3D registration quality verification, augmented by explainability features that clarify the model's decision-making. Our explainable AI (XAI) approach aims to enhance informed decision-making for human operators by providing a second opinion together with a rationale behind it. Through algorithm-centric and human-centered evaluations, we systematically compare four conditions: AI-only, human-only, human-AI, and human-XAI. Our findings reveal that while explainability features modestly improve user trust and willingness to override AI errors, they do not exceed the standalone AI in aggregate performance. 
Nevertheless, future work extending both the algorithmic design and the human-XAI collaboration elements holds promise for more robust quality assurance of 2D/3D registration.",1 "Computer vision applications are omnipresent nowadays. The current paper explores the use of fuzzy logic in computer vision, stressing its role in handling uncertainty, noise, and imprecision in image data. Fuzzy logic is able to model gradual transitions and human-like reasoning and provides a promising approach to computer vision. Fuzzy approaches offer a way to improve object recognition, image segmentation, and feature extraction by providing more adaptable and interpretable solutions compared to traditional methods. We discuss key fuzzy techniques, including fuzzy clustering, fuzzy inference systems, type-2 fuzzy sets, and fuzzy rule-based decision-making. The paper also discusses various applications, including medical imaging, autonomous systems, and industrial inspection. Additionally, we explore the integration of fuzzy logic with deep learning models such as convolutional neural networks (CNNs) to enhance performance in complex vision tasks. Finally, we examine emerging trends such as hybrid fuzzy-deep learning models and explainable AI.",0 "Federated Learning (FL), a privacy-aware approach in distributed deep learning environments, enables many clients to collaboratively train a model without sharing sensitive data, thereby reducing privacy risks. However, enabling human trust and control over FL systems requires understanding the evolving behaviour of clients, whether beneficial or detrimental for the training, which still represents a key challenge in the current literature. To address this challenge, we introduce Federated Behavioural Planes (FBPs), a novel method to analyse, visualise, and explain the dynamics of FL systems, showing how clients behave under two different lenses: predictive performance (error behavioural space) and decision-making processes (counterfactual behavioural space). Our experiments demonstrate that FBPs provide informative trajectories describing the evolving states of clients and their contributions to the global model, thereby enabling the identification of clusters of clients with similar behaviours. Leveraging the patterns identified by FBPs, we propose a robust aggregation technique named Federated Behavioural Shields to detect malicious or noisy client models, thereby enhancing security and surpassing the efficacy of existing state-of-the-art FL defense mechanisms. Our code is publicly available on GitHub.",0 "Retrosynthesis planning, essential in organic synthesis and drug discovery, has greatly benefited from recent AI-driven advancements. Nevertheless, existing methods frequently face limitations in both applicability and explainability. Traditional graph-based and sequence-to-sequence models often lack generalized chemical knowledge, leading to predictions that are neither consistently accurate nor easily explainable. To address these challenges, we introduce RetroDFM-R, a reasoning-based large language model (LLM) designed specifically for chemical retrosynthesis. Leveraging large-scale reinforcement learning guided by chemically verifiable rewards, RetroDFM-R significantly enhances prediction accuracy and explainability. Comprehensive evaluations demonstrate that RetroDFM-R significantly outperforms state-of-the-art methods, achieving a top-1 accuracy of 65.0% on the USPTO-50K benchmark. 
Double-blind human assessments further validate the chemical plausibility and practical utility of RetroDFM-R's predictions. RetroDFM-R also accurately predicts multistep retrosynthetic routes reported in the literature for both real-world drug molecules and perovskite materials. Crucially, the model's explicit reasoning process provides human-interpretable insights, thereby enhancing trust and practical value in real-world retrosynthesis applications.",0 "AI-driven video generation techniques have made significant progress in recent years. However, AI-generated videos (AGVs) involving human activities often exhibit substantial visual and semantic distortions, hindering the practical application of video generation technologies in real-world scenarios. To address this challenge, we conduct a pioneering study on human activity AGV quality assessment, focusing on visual quality evaluation and the identification of semantic distortions. First, we construct the AI-Generated Human activity Video Quality Assessment (Human-AGVQA) dataset, consisting of 6,000 AGVs derived from 15 popular text-to-video (T2V) models using 400 text prompts that describe diverse human activities. We conduct a subjective study to evaluate the human appearance quality, action continuity quality, and overall video quality of AGVs, and identify semantic issues of human body parts. Based on Human-AGVQA, we benchmark the performance of T2V models and analyze their strengths and weaknesses in generating different categories of human activities. Second, we develop an objective evaluation metric, named AI-Generated Human activity Video Quality metric (GHVQ), to automatically analyze the quality of human activity AGVs. GHVQ systematically extracts human-focused quality features, AI-generated content-aware quality features, and temporal continuity features, making it a comprehensive and explainable quality metric for human activity AGVs. The extensive experimental results show that GHVQ outperforms existing quality metrics on the Human-AGVQA dataset by a large margin, demonstrating its efficacy in assessing the quality of human activity AGVs. The Human-AGVQA dataset and GHVQ metric will be released at https://github.com/zczhang-sjtu/GHVQ.git.",0 "Images taken in low light often show color shift, low contrast, noise, and other artifacts that hurt computer-vision accuracy. Retinex theory addresses this by viewing an image S as the pixel-wise product of reflectance R and illumination I, mirroring the way people perceive stable object colors under changing light. The decomposition is ill-posed, and classic Retinex models have four key flaws: (i) they treat the red, green, and blue channels independently; (ii) they lack a neuroscientific model of color vision; (iii) they cannot perfectly rebuild the input image; and (iv) they do not explain human color constancy. We introduce the first Quaternion Retinex formulation, in which the scene is written as the Hamilton product of quaternion-valued reflectance and illumination. To gauge how well reflectance stays invariant, we propose the Reflectance Consistency Index. Tests on low-light crack inspection, face detection under varied lighting, and infrared-visible fusion show gains of 2-11 percent over leading methods, with better color fidelity, lower noise, and higher reflectance stability.",0 "With the rapid advancement of generative AI, synthetic content across images, videos, and audio has become increasingly realistic, amplifying the risk of misinformation. 
Existing detection approaches predominantly focus on binary classification while lacking detailed and interpretable explanations of forgeries, which limits their applicability in safety-critical scenarios. Moreover, current methods often treat each modality separately, without a unified benchmark for cross-modal forgery detection and interpretation. To address these challenges, we introduce METER, a unified, multi-modal benchmark for interpretable forgery detection spanning images, videos, audio, and audio-visual content. Our dataset comprises four tracks, each requiring not only real-vs-fake classification but also evidence-chain-based explanations, including spatio-temporal localization, textual rationales, and forgery type tracing. Compared to prior benchmarks, METER offers broader modality coverage and richer interpretability metrics such as spatial/temporal IoU, multi-class tracing, and evidence consistency. We further propose a human-aligned, three-stage Chain-of-Thought (CoT) training strategy combining SFT, DPO, and a novel GRPO stage that integrates a human-aligned evaluator with CoT reasoning. We hope METER will serve as a standardized foundation for advancing generalizable and interpretable forgery detection in the era of generative media.",0 "People with disabilities (PwD) experience disproportionately high levels of discrimination and hate online, particularly in India, where entrenched stigma and limited resources intensify these challenges. Large language models (LLMs) are increasingly used to identify and mitigate online hate, yet most research on online ableism focuses on Western audiences with Western AI models. Are these models adequately equipped to recognize ableist harm in non-Western places like India? Do localized, Indic language models perform better? To investigate, we adopted and translated a publicly available ableist speech dataset to Hindi, and prompted eight LLMs--four developed in the U.S. (GPT-4, Gemini, Claude, Llama) and four in India (Krutrim, Nanda, Gajendra, Airavata)--to score and explain ableism. In parallel, we recruited 175 PwD from both the U.S. and India to perform the same task, revealing stark differences between groups. Western LLMs consistently overestimated ableist harm, while Indic LLMs underestimated it. Even more concerning, all LLMs were more tolerant of ableism when it was expressed in Hindi and asserted Western framings of ableist harm. In contrast, Indian PwD interpreted harm through intention, relationality, and resilience--emphasizing a desire to inform and educate perpetrators. This work provides groundwork for global, inclusive standards of ableism, demonstrating the need to center local disability experiences in the design and evaluation of AI systems.",0 "Many calls for explainable AI (XAI) systems in medicine are tied to a desire for AI accountability--accounting for, mitigating, and ultimately preventing harms from AI systems. Because XAI systems provide human-understandable explanations for their output, they are often viewed as a primary path to prevent harms to patients. However, when harm occurs, laws, policies, and regulations also shape AI accountability by impacting how harmed individuals can obtain recourse. Current approaches to XAI explore physicians' medical and relational needs to counter harms to patients, but there is a need to understand how XAI systems should account for the legal considerations of those impacted. 
We conduct an analysis of 31 legal cases and reported harms to identify patterns around how AI systems impact patient care. Our findings reflect how patients' medical care relies on a complex web of stakeholders--physicians, state health departments, health insurers, care facilities, among others--and how many AI systems deployed across their healthcare delivery negatively impact their care. In response, patients have had no option but to seek legal recourse for harms. We shift the frame from physician-centered to patient-centered accountability approaches by describing how lawyers and technologists need to recognize and address where AI harms happen. We present paths for preventing or countering harm (1) by changing liability structures to reflect the role of many stakeholders in shaping how AI systems impact patient care and (2) by designing XAI systems that can help advocates, such as legal representatives, who provide critical legal expertise and practically support recourse for patients.",2 "Gradient-based methods are a prototypical family of explainability techniques, especially for image-based models. Nonetheless, they have several shortcomings in that they (1) require white-box access to models, (2) are vulnerable to adversarial attacks, and (3) produce attributions that lie off the image manifold, leading to explanations that are not actually faithful to the model and do not align well with human perception. To overcome these challenges, we introduce Derivative-Free Diffusion Manifold-Constrained Gradients (FreeMCG), a novel method that serves as a better basis for explainability of a given neural network than the traditional gradient. Specifically, by leveraging ensemble Kalman filters and diffusion models, we derive a derivative-free approximation of the model's gradient projected onto the data manifold, requiring access only to the model's outputs. We demonstrate the effectiveness of FreeMCG by applying it to both counterfactual generation and feature attribution, which have traditionally been treated as distinct tasks. Through comprehensive evaluation on both tasks, counterfactual explanation and feature attribution, we show that our method yields state-of-the-art results while preserving the essential properties expected of XAI tools.",0 "This study replicates and adapts the experiment of Hoelzl and Rustichini (2005), which examined overplacement, i.e., overconfidence in relative self-assessments, by analyzing individuals' voting preferences between a performance-based and a lottery-based bonus payment mechanism. The original study found underplacement - the majority of their sample apparently expected to perform worse than others - in difficult tasks with monetary incentives, contradicting the widely held assumption of a general human tendency toward overconfidence. This paper challenges the comparability of the two payment schemes, arguing that differences in outcome structures and non-monetary motives may have influenced 84 participants' choices beyond misconfidence. In an online replication, a fixed-outcome distribution lottery mechanism with interdependent success probabilities and no variance in the number of winners - designed to better align with the performance-based payment scheme - is compared against the probabilistic-outcome distribution lottery used in the original study, which features an independent success probability and a variable number of winners. 
The results align more closely with traditional overplacement patterns than with underplacement, as nearly three-fourths of participants prefer the performance-based option regardless of lottery design. Key predictors of voting behavior include expected performance, group performance estimations, and sample question outcomes, while factors such as social comparison tendencies and risk attitudes play no significant role. Self-reported voting rationales highlight the influence of normative beliefs, control preferences, and feedback signals beyond confidence. These results contribute to methodological discussions in overconfidence research by reassessing choice-based overconfidence measures and exploring alternative explanations for observed misplacement effects.",2 "Phishing emails are a critical component of the cybercrime kill chain due to their wide reach and low cost. Their ever-evolving nature renders traditional rule-based and feature-engineered detectors ineffective in the ongoing arms race between attackers and defenders. The rise of large language models (LLMs) further exacerbates the threat, enabling attackers to craft highly convincing phishing emails at minimal cost. This work demonstrates that LLMs can generate psychologically persuasive phishing emails tailored to victim profiles, successfully bypassing nearly all commercial and academic detectors. To defend against such threats, we propose PiMRef, the first reference-based phishing email detector that leverages knowledge-based invariants. Our core insight is that persuasive phishing emails often contain disprovable identity claims, which contradict real-world facts. PiMRef reframes phishing detection as an identity fact-checking task. Given an email, PiMRef (i) extracts the sender's claimed identity, (ii) verifies the legitimacy of the sender's domain against a predefined knowledge base, and (iii) detects call-to-action prompts that push user engagement. Contradictory claims are flagged as phishing indicators and serve as human-understandable explanations. Compared to existing methods such as D-Fence, HelpHed, and ChatSpamDetector, PiMRef boosts precision by 8.8% with no loss in recall on standard benchmarks like Nazario and PhishPot. In a real-world evaluation of 10,183 emails across five university accounts over three years, PiMRef achieved 92.1% precision, 87.9% recall, and a median runtime of 0.05s, outperforming the state-of-the-art in both effectiveness and efficiency.",0 "The implementation of Artificial Intelligence (AI) in household environments, especially in the form of proactive autonomous agents, brings about possibilities of comfort and attention, but it also comes with intra- and extramural ethical challenges. This article analyzes agentic AI and its applications, focusing on its move from reactive to proactive autonomy, privacy, fairness, and user control. We review responsible innovation frameworks, human-centered design principles, and governance practices to distill practical guidance for ethical smart home systems. Vulnerable 1060 user groups such as 300 elderly individuals, 407 children, and 353 neurodivergent individuals, who face higher risks of surveillance, bias, and privacy violations, were studied in detail in the context of agentic AI. Design imperatives such as tailored explainability, granular consent mechanisms, and robust override controls are highlighted, supported by participatory and inclusive methodologies. 
We also explore how data-driven insights, including social media analysis via Natural Language Processing (NLP), can inform specific user needs and ethical concerns. This survey aims to provide both a conceptual foundation and suggestions for developing transparent, inclusive, and trustworthy agentic AI in household automation.",2 "Machine learning-based decision models are increasingly being used to make decisions that significantly impact people's lives, but their opaque nature leaves end users without a clear understanding of why a decision was made. Counterfactual Explanations (CFEs) have grown in popularity as a means of offering actionable guidance by identifying the minimum changes in feature values required to flip a model's prediction to something more desirable. Unfortunately, most prior research in CFEs relies on artificial evaluation metrics, such as proximity, which may overlook end-user preferences and constraints, e.g., the user's perception of effort needed to make certain feature changes may differ from that of the model designer. To address this research gap, this paper makes three novel contributions. First, we conduct a pilot study with 20 crowd-workers on Amazon MTurk to experimentally validate the alignment of existing CF evaluation metrics with real-world user preferences. Results show that user-preferred CFEs matched those based on proximity in only 63.81% of cases, highlighting the limited applicability of these metrics in real-world settings. Second, inspired by the need to design a user-informed evaluation metric for CFEs, we conduct a more detailed two-day user study with 41 participants facing realistic credit application scenarios to find experimental support for or against three intuitive hypotheses that may explain how end users evaluate CFEs. Third, based on the findings of this second study, we propose the AWP model, a novel user-centric, two-stage model that describes one possible mechanism by which users evaluate and select CFEs. Our results show that AWP predicts user-preferred CFEs with 84.37% accuracy. Our study provides the first human-centered validation for personalized cost models in CFE generation and highlights the need for adaptive, user-centered evaluation metrics.",2 "Explainable AI (XAI) holds significant promise for enhancing the transparency and trustworthiness of AI-driven threat detection in Security Operations Centers (SOCs). However, identifying the appropriate level and format of explanation, particularly in environments that demand rapid decision-making under high-stakes conditions, remains a complex and underexplored challenge. To address this gap, we conducted a three-month mixed-methods study combining an online survey (N1=248) with in-depth interviews (N2=24) to examine (1) how SOC analysts conceptualize AI-generated explanations and (2) which types of explanations are perceived as actionable and trustworthy across different analyst roles. Our findings reveal that participants were consistently willing to accept XAI outputs, even in cases of lower predictive accuracy, when explanations were perceived as relevant and evidence-backed. Analysts repeatedly emphasized the importance of understanding the rationale behind AI decisions, expressing a strong preference for contextual depth over a mere presentation of outcomes on dashboards. 
Building on these insights, this study re-evaluates current explanation methods within security contexts and demonstrates that role-aware, context-rich XAI designs aligned with SOC workflows can substantially improve practical utility. Such tailored explainability enhances analyst comprehension, increases triage efficiency, and supports more confident responses to evolving threats.",2 "Large Language Models (LLMs) have demonstrated great potential as evaluators of NLG systems, allowing for high-quality, reference-free, and multi-aspect assessments. However, existing LLM-based metrics suffer from two major drawbacks: reliance on proprietary models to generate training data or perform evaluations, and a lack of fine-grained, explanatory feedback. In this paper, we introduce OpeNLGauge, a fully open-source, reference-free NLG evaluation metric that provides accurate explanations based on error spans. OpeNLGauge is available as a two-stage ensemble of larger open-weight LLMs, or as a small fine-tuned evaluation model, with confirmed generalizability to unseen tasks, domains, and aspects. Our extensive meta-evaluation shows that OpeNLGauge achieves competitive correlation with human judgments, outperforming state-of-the-art models on certain tasks while maintaining full reproducibility and providing explanations more than twice as accurate.",0 "Automated machine learning systems efficiently streamline model selection but often focus on a single best-performing model, overlooking explanation uncertainty, an essential concern in human-centered explainable AI. To address this, we propose a novel framework that incorporates model multiplicity into explanation generation by aggregating partial dependence profiles (PDP) from a set of near-optimal models, known as the Rashomon set. The resulting Rashomon PDP captures interpretive variability and highlights areas of disagreement, providing users with a richer, uncertainty-aware view of feature effects. To evaluate its usefulness, we introduce two quantitative metrics, the coverage rate and the mean width of confidence intervals, to assess the consistency between the standard PDP and the proposed Rashomon PDP. Experiments on 35 regression datasets from the OpenML CTR23 benchmark suite show that in most cases, the Rashomon PDP covers less than 70% of the best model's PDP, underscoring the limitations of single-model explanations. Our findings suggest that the Rashomon PDP improves the reliability and trustworthiness of model interpretations by adding information that would otherwise be neglected. This is particularly useful in high-stakes domains where transparency and confidence are critical.",0 "The use of the Bidirectional Encoder Representations from Transformers (BERT) model and its variants for classifying collaborative problem solving (CPS) has been extensively explored within the AI in Education community. However, limited attention has been given to understanding how individual tokenised words in the dataset contribute to the model's classification decisions. Enhancing the explainability of BERT-based CPS diagnostics is essential to better inform end users such as teachers, thereby fostering greater trust and facilitating wider adoption in education. This study undertook a preliminary step towards model transparency and explainability by using SHapley Additive exPlanations (SHAP) to examine how different tokenised words in transcription data contributed to a BERT model's classification of CPS processes. 
The findings suggested that well-performing classifications did not necessarily equate to a reasonable explanation for the classification decisions. Particular tokenised words were frequently used to drive classifications. The analysis also identified a spurious word, which contributed positively to the classification but was not semantically meaningful to the class. While such model transparency is unlikely to be useful to an end user to improve their practice, it can help them not to over-rely on LLM diagnostics and ignore their own human expertise. We conclude the workshop paper by noting that the extent to which the model appropriately uses the tokens for its classification is associated with the number of classes involved. It calls for an investigation of ensemble model architectures and the involvement of human-AI complementarity for CPS diagnosis, since considerable human reasoning is still required for fine-grained discrimination of CPS subskills.",0 "Detecting collaborative problem solving (CPS) indicators from dialogue using machine learning techniques is a significant challenge for the field of AI in Education. Recent studies have explored the use of Bidirectional Encoder Representations from Transformers (BERT) models on transcription data to reliably detect meaningful CPS indicators. A notable advancement involved the multimodal BERT variant, AudiBERT, which integrates speech and acoustic-prosodic audio features to enhance CPS diagnosis. Although initial results demonstrated multimodal improvements, the statistical significance of these enhancements remained unclear, and there was insufficient guidance on leveraging human-AI complementarity for CPS diagnosis tasks. This workshop paper extends the previous research by highlighting that the AudiBERT model not only improved the classification of classes that were sparse in the dataset, but it also had statistically significant class-wise improvements over the BERT model for classifications in the social-cognitive dimension. However, similar significant class-wise improvements over the BERT model were not observed for classifications in the affective dimension. A correlation analysis highlighted that larger training data was significantly associated with higher recall performance for both the AudiBERT and BERT models. Additionally, the precision of the BERT model was significantly associated with high inter-rater agreement among human coders. When employing the BERT model to diagnose indicators within these subskills that were well-detected by the AudiBERT model, the performance across all indicators was inconsistent. We conclude the paper by outlining a structured approach towards achieving human-AI complementarity for CPS diagnosis, highlighting the crucial inclusion of model explainability to support human agency and engagement in the reflective coding process.",1 "Machine Learning predictors are increasingly being employed in high-stakes applications such as credit scoring. Explanations help users unpack the reasons behind their predictions, but are not always ""high quality"". That is, end-users may have difficulty interpreting or believing them, which can complicate trust assessment and downstream decision-making. We argue that classifiers should have the option to refuse handling inputs whose predictions cannot be explained properly, and we introduce a framework for learning to reject low-quality explanations (LtX) in which predictors are equipped with a rejector that evaluates the quality of explanations. 
In this problem setting, the key challenges are how to properly define and assess explanation quality and how to design a suitable rejector. Focusing on popular attribution techniques, we introduce ULER (User-centric Low-quality Explanation Rejector), which learns a simple rejector from human ratings and per-feature relevance judgments to mirror human judgments of explanation quality. Our experiments show that ULER outperforms both state-of-the-art and explanation-aware learning-to-reject strategies at LtX on eight classification and regression benchmarks and on a new human-annotated dataset, which we will publicly release to support future research.",0 "This systematic literature review examines the role of large language models (LLMs) in UI/UX design, synthesizing findings from 38 peer-reviewed studies published between 2022 and 2025. We identify key LLMs in use, including GPT-4, Gemini, and PaLM, and map their integration across the design lifecycle, from ideation to evaluation. Common practices include prompt engineering, human-in-the-loop workflows, and multimodal input. While LLMs are reshaping design processes, challenges such as hallucination, prompt instability, and limited explainability persist. Our findings highlight LLMs as emerging collaborators in design, and we propose directions for the ethical, inclusive, and effective integration of these technologies.",0 "Team modeling remains a fundamental challenge at the intersection of Artificial Intelligence and the Social Sciences. Social Science research emphasizes the need to jointly model dynamics and relations, while practical applications demand unified models capable of inferring multiple team constructs simultaneously, providing interpretable insights and actionable recommendations to enhance team performance. However, existing works do not meet these practical demands. To bridge this gap, we present TRENN, a novel tempo-relational architecture that integrates: (i) an automatic temporal graph extractor, (ii) a tempo-relational encoder, (iii) a decoder for team construct prediction, and (iv) two complementary explainability modules. TRENN jointly captures relational and temporal team dynamics, providing a solid foundation for MT-TRENN, which extends TRENN by replacing the decoder with a multi-task head, enabling the model to learn shared Social Embeddings and simultaneously predict multiple team constructs, including Emergent Leadership, Leadership Style, and Teamwork components. Experimental results demonstrate that our approach significantly outperforms approaches that rely exclusively on temporal or relational information. Additionally, experimental evaluation has shown that the explainability modules integrated in MT-TRENN yield interpretable insights and actionable suggestions to support team improvement. These capabilities make our approach particularly well-suited for Human-Centered AI applications, such as intelligent decision-support systems in high-stakes collaborative environments.",0 "We study a disordered network of bistable bonds subjected to periodic strain. The model is inspired by experiments on crumpled sheets and it features behaviors associated with glasses, including a complex energy landscape, memories, and large avalanches. At small strain amplitudes, the system quickly converges to a limit cycle in which it repeatedly cycles through a set of states. At large amplitudes, motion is erratic and does not converge to a limit cycle. The transition appears to be continuous, with diverging time scales. 
The nature of instabilities is different on both sides of the transition. At small strain amplitudes, instabilities are correlated only over a finite distance. Above the transition, instabilities are localized along diagonal bands. The distance between bands grows near the transition and appears to diverge. We propose a simple model that explains these observations. Below the transition, we propose a new ``order parameter'' -- the polarization of the instabilities along the driving direction.",0 "This paper introduces a dataset that is the result of a user study on the comprehensibility of explainable artificial intelligence (XAI) algorithms. The study participants were recruited from 149 candidates to form three groups representing experts in the domain of mycology (DE), students with a data science and visualization background (IT), and students from social sciences and humanities (SSH). The main part of the dataset contains 39 transcripts of interviews during which participants were asked to complete a series of tasks and questions related to the interpretation of explanations of decisions of a machine learning model trained to distinguish between edible and inedible mushrooms. The transcripts were complemented with additional data that includes visualizations of explanations presented to the user, results from thematic analysis, recommendations from the participants on how to improve the explanations, and the initial survey results that allow us to determine each participant's domain knowledge and data analysis literacy. The transcripts were manually tagged to allow for automatic matching between the text and other data related to particular fragments. In this era of rapid development of XAI techniques, the need for a multidisciplinary qualitative evaluation of explainability is one of the emerging topics in the community. Our dataset not only allows the study we conducted to be reproduced, but also opens a wide range of possibilities for the analysis of the material we gathered.",2 "We report on the optical second harmonic generation (SHG) on the 1S exciton-polariton resonance in bulk ZnSe that is subject to an external magnetic field applied perpendicular to the light wave vector $\mathbf k$ (Voigt geometry). For the symmetry-allowed geometry with the $\mathbf{k}\parallel[111]$ crystal axes, the nonreciprocal dependence of the SHG intensity on the magnetic field direction is found. It is explained by an interference of the crystallographic and magnetic-field-induced SHG signals. Relative phases of these signals are evaluated from the rotational anisotropy diagrams. Phenomenological and microscopic models of the effect are developed. To the best of our knowledge, this is the first experimental observation of nonreciprocal SHG in semiconductor crystals, and the first one for exciton-polaritons.",0 "Monopoles have been a subject of much theoretical and experimental research since they were proposed to symmetrize Maxwell's equations. However, no experimental signature of their existence has been detected. Many mechanisms have been proposed to explain this lack of success. Here we generalize QED to a two-photon theory where the magnetic photon and the monopole belong to dark matter. A naive mixing interaction produces duality and generates experimental consequences for indirect monopole detection via the magnetic charge of the electron.",0 "Driven by access to large volumes of movement data, the study of human mobility has grown rapidly over the past decades. 
The field has shown that human mobility is scale-free, proposed models to generate scale-free moving distance distributions, and explained how the scale-free distribution arises. It has not, however, explicitly addressed how mobility is structured by geographical constraints: how mobility relates to the outlines of landmasses, lakes, and rivers, or to the placement of buildings, roadways, and cities. Based on millions of moves, we show how separating the effect of geography from mobility choices reveals a power law spanning five orders of magnitude. To do so, we incorporate geography via the `pair distribution function' that encapsulates the structure of locations on which mobility occurs. Showing how the spatial distribution of human settlements shapes human mobility, our approach bridges the gap between distance- and opportunity-based models of human mobility.",0 "Soft fracture in highly deformable solids involves both geometric and constitutive nonlinearities, necessitating advanced theoretical and computational frameworks for its accurate understanding. Tensile fractures subjected to mixed-mode loading deviate from their original planar shape, resulting in echelon crack patterns. When out-of-plane shear is superimposed, a crack front segments into an array of tilted facets. The physical interpretation of echelon cracks is only marginally understood, and it is customarily based on rather limited approaches rooted in Linear Elastic Fracture Mechanics. Here we investigate mixed-mode I + III fracture within the framework of configurational mechanics. Using the Configurational Force Method, implemented as a post-processing algorithm in a finite-element-based simulation, we compute the configurational forces acting at the crack tip of model fracture geometries prior to propagation. Configurational forces characterize both the magnitude and direction of propagation for maximal energy release rate. Our results reveal the complex interactions between tilted facets and their critical role in shaping the fracture morphology. We also examine the effects of facet coalescence, driven by the growth of the parent crack, where neighboring facets merge into a unified crack front. These findings provide new insights into fracture processes in soft, quasi-brittle materials under mixed-mode loading.",0 "This paper presents an intervention study on the effects of the combined methods of (1) the Socratic method, (2) Chain of Thought (CoT) reasoning, (3) simplified gamification, and (4) formative feedback on university students' Maths learning driven by large language models (LLMs). We call our approach Mathematics Explanations through Games by AI LLMs (MEGA). Some students struggle with Maths and as a result avoid Math-related disciplines or subjects despite the importance of Maths across many fields, including signal processing. Oftentimes, students' Maths difficulties stem from suboptimal pedagogy. We compared the MEGA method to the traditional step-by-step (CoT) method to ascertain which is better by using a within-group design after randomly assigning questions to the participants, who are university students. Samples (n=60) were randomly drawn from each of the two test sets of the Grade School Math 8K (GSM8K) and Mathematics Aptitude Test of Heuristics (MATH) datasets, based on an error margin of 11%, a confidence level of 90%, and a manageable number of samples for the student evaluators. 
These samples were used to evaluate at length two capable LLMs (Generative Pretrained Transformer 4o (GPT-4o) and Claude 3.5 Sonnet) out of the initial six that were tested for capability. The results showed that, for both datasets, students more often agreed that the MEGA method provided a better learning experience. It is even much better than CoT (47.5% compared to 26.67%) on the more difficult MATH dataset, indicating that MEGA is better at explaining difficult Maths problems.",2 "This paper introduces UrbanScore - a real-time web platform that computes a personalised liveability score for any urban address. The system fuses five data streams: (i) address geocoding via Nominatim, (ii) facility extraction from OpenStreetMap through Overpass QL, (iii) segment-level traffic metrics from TomTom Flow v10, (iv) hourly air-quality readings from OpenWeatherMap, and (v) user-declared preference profiles, all persisted in an Oracle 19c relational store. Six sub-scores (air, traffic, lifestyle, education, metro access, surface transport) are derived, adaptively weighted, and combined; an OpenAI large language model then converts the numeric results into concise, user-friendly explanations. A pilot deployment covering the 226 km2 metropolitan area of Bucharest evaluated 3,450 unique addresses over four weeks. Median end-to-end latency was 2.1 s (p95 = 2.9 s), meeting the <3 s non-functional requirement. Aggregate scores ranged from 34 to 92 (mean 68, SD 11), with high-scoring clusters along metro corridors that pair abundant green space with PM2.5 levels below 35 ug m-3. A detailed case study of the Tineretului district produced an overall score of 91/100 and demonstrated how the narrative layer guides users toward comparable neighbourhoods. Limitations include dependence on third-party API uptime, spatial bias toward well-mapped OSM regions, and the absence of noise and crime layers, cited by 18% of survey participants as a desired enhancement. Overall, the results show that open geodata, commercial mobility feeds, and conversational AI can be integrated into a performant, explainable decision-support tool that places ""liveability analytics"" in the hands of every house-hunter, commuter, and city planner.",2 "Large Language Models (LLMs) have demonstrated a remarkable ability to capture extensive world knowledge, yet how this is achieved without direct sensorimotor experience remains a fundamental puzzle. This study proposes a novel theoretical solution by introducing the Collective World Model hypothesis. We argue that an LLM does not learn a world model from scratch; instead, it learns a statistical approximation of a collective world model that is already implicitly encoded in human language through a society-wide process of embodied, interactive sense-making. To formalize this process, we introduce generative emergent communication (Generative EmCom), a framework built on Collective Predictive Coding (CPC). This framework models the emergence of language as a process of decentralized Bayesian inference over the internal states of multiple agents. We argue that this process effectively creates an encoder-decoder structure at a societal scale: human society collectively encodes its grounded, internal representations into language, and an LLM subsequently decodes these symbols to reconstruct a latent space that mirrors the structure of the original collective representations. This perspective provides a principled, mathematical explanation for how LLMs acquire their capabilities. 
The main contributions of this paper are: 1) the formalization of the Generative EmCom framework, clarifying its connection to world models and multi-agent reinforcement learning, and 2) its application to interpret LLMs, explaining phenomena such as distributional semantics as a natural consequence of representation reconstruction. This work provides a unified theory that bridges individual cognitive development, collective language evolution, and the foundations of large-scale AI.",0 "This paper introduces the System 0/1/2/3 framework as an extension of dual-process theory, employing a quad-process model of cognition. Expanding upon System 1 (fast, intuitive thinking) and System 2 (slow, deliberative thinking), we incorporate System 0, which represents pre-cognitive embodied processes, and System 3, which encompasses collective intelligence and symbol emergence. We contextualize this model within Bergson's philosophy by adopting multi-scale time theory to unify the diverse temporal dynamics of cognition. System 0 emphasizes morphological computation and passive dynamics, illustrating how physical embodiment enables adaptive behavior without explicit neural processing. Systems 1 and 2 are explained from a constructive perspective, incorporating neurodynamical and AI viewpoints. In System 3, we introduce collective predictive coding to explain how societal-level adaptation and symbol emergence operate over extended timescales. This comprehensive framework ranges from rapid embodied reactions to slow-evolving collective intelligence, offering a unified perspective on cognition across multiple timescales, levels of abstraction, and forms of human intelligence. The System 0/1/2/3 model provides a novel theoretical foundation for understanding the interplay between adaptive and cognitive processes, thereby opening new avenues for research in cognitive science, AI, robotics, and collective intelligence.",0 "This article addresses the challenge of modeling the amplitude of spatially indexed low-frequency fluctuations (ALFF) in resting-state functional MRI as a function of cortical structural features and a multi-task coactivation network in the Adolescent Brain Cognitive Development (ABCD) Study. It proposes a generative model that integrates effects of spatially-varying inputs and a network-valued input using deep neural networks to capture complex non-linear and spatial associations with the output. The method models spatial smoothness, accounts for subject heterogeneity and complex associations between network and spatial images at different scales, enables accurate inference of each image's effect on the output image, and allows prediction with uncertainty quantification via Monte Carlo dropout, contributing to one of the first Explainable AI (XAI) frameworks for heterogeneous imaging data. The model is highly scalable to high-resolution data without the heavy pre-processing or summarization often required by Bayesian methods. Empirical results demonstrate its strong performance compared to existing statistical and deep learning methods. We applied the XAI model to the ABCD data, which revealed associations between cortical features and ALFF throughout the entire brain. Our model performed comparably to existing methods in predictive accuracy but provided superior uncertainty quantification and faster computation, demonstrating its effectiveness for large-scale neuroimaging analysis. 
Open-source software in Python for XAI is available.",0 "Enhancing simulation environments to replicate real-world driver behavior, i.e., more humanlike sim agents, is essential for developing autonomous vehicle technology. In the context of highway merging, previous works have studied the operational-level yielding dynamics of lag vehicles in response to a merging car at highway on-ramps. Other works focusing on tactical decision modeling generally consider limited action sets or utilize payoff functions with large parameter sets and limited payoff bounds. In this work, we aim to improve the simulation of the highway merge scenario by targeting a game theoretic model for tactical decision-making with improved payoff functions and lag actions. We couple this with an underlying dynamics model to have a unified decision and dynamics model that can capture merging interactions and simulate more realistic interactions in an explainable and interpretable fashion. The proposed model demonstrated good reproducibility of complex interactions when validated on a real-world dataset. The model was finally integrated into a high fidelity simulation environment and confirmed to have adequate computation time efficiency for use in large-scale simulations to support autonomous vehicle development.",0 "The deployment of autonomous agents in environments involving human interaction has increasingly raised security concerns. Consequently, understanding the circumstances behind an event becomes critical, requiring the development of capabilities to justify their behaviors to non-expert users. Such explanations are essential in enhancing trustworthiness and safety, acting as a preventive measure against failures, errors, and misunderstandings. Additionally, they contribute to improving communication, bridging the gap between the agent and the user, thereby improving the effectiveness of their interactions. This work presents an accountability and explainability architecture implemented for ROS-based mobile robots. The proposed solution consists of two main components. Firstly, a black box-like element to provide accountability, featuring anti-tampering properties achieved through blockchain technology. Secondly, a component in charge of generating natural language explanations by harnessing the capabilities of Large Language Models (LLMs) over the data contained within the previously mentioned black box. The study evaluates the performance of our solution in three different scenarios, each involving autonomous agent navigation functionalities. This evaluation includes a thorough examination of accountability and explainability metrics, demonstrating the effectiveness of our approach in using accountable data from robot actions to obtain coherent, accurate and understandable explanations, even when facing challenges inherent in the use of autonomous agents in real-world scenarios.",0 "Neuroaesthetics is an interdisciplinary field that brings together neuroscience, psychology, and the arts to explore how the human brain perceives and responds to visual beauty. This paper examines the neural mechanisms behind aesthetic experiences, aiming to explain why certain designs or artworks feel emotionally or cognitively ""right."" By analyzing the interaction between perception, emotion, and cognition, neuroaesthetics reveals how beauty is constructed in the brain and how this understanding can inform fields such as graphic and interface design. 
This paper offers a clear and accessible overview of core neuroaesthetic principles, making the subject approachable to a wide audience. The findings suggest that impactful design is more than surface-level appeal: well-crafted visual experiences can engage, support, and connect people in meaningful ways.",0 "Witness reports of Unidentified Aerial Phenomena (UAP) occasionally associate UAP sightings with local electromagnetic interferences, such as spinning magnetic compasses onboard aircraft or sudden malfunctions of mechanical vehicles. These reports have motivated the incorporation of a magnetometer into the instrumentation suite of the Galileo Project (GP), a Harvard-led scientific collaboration whose aim is to collect and analyze multi-sensor data that collectively could help elucidate the nature of UAP. The goal of the GP magnetometry investigation is to identify magnetic anomalies that cannot be readily explained in terms of a natural or human-made origin, and analyze these jointly with the data collected from the other modalities. These include an ensemble of visible and infrared cameras, a broadband acoustic system and a weather-monitoring system. Here, we present GP's first geomagnetic variometer station, deployed at the GP observatory in Colorado, USA. We describe the calibration and deployment of the instrumentation, which consists of a vector magnetometer and its data acquisition system, and the collection and processing of the data. Moreover, we present and discuss examples of the magnetic field data obtained over a period of 6 months, including data recorded during the May 2024 G5 extreme geomagnetic storm. We find that the data meet and even surpass the requirements laid out in GP's Science Traceability Matrix. Key to the evaluation of our data is the proximity of the variometer station to the USGS magnetic observatory in Boulder, Colorado. By comparing the two sets of data, we find that they are of similar quality. Having established the proper functioning of the first GP variometer station, we will use it as the model for variometer stations at future GP observatories.",0 "Large Language Models (LLMs) can memorize and reveal personal information, raising concerns regarding compliance with the EU's GDPR, particularly the Right to Be Forgotten (RTBF). Existing machine unlearning methods assume the data to forget is already known but do not address how to identify which individual-fact associations are stored in the model. Privacy auditing techniques typically operate at the population level or target a small set of identifiers, limiting applicability to individual-level data inquiries. We introduce WikiMem, a dataset of over 5,000 natural language canaries covering 243 human-related properties from Wikidata, and a model-agnostic metric to quantify human-fact associations in LLMs. Our approach ranks ground-truth values against counterfactuals using calibrated negative log-likelihood across paraphrased prompts. We evaluate 200 individuals across 15 LLMs (410M-70B parameters), showing that memorization correlates with subject web presence and model scale. We provide a foundation for identifying memorized personal data in LLMs at the individual level, enabling the dynamic construction of forget sets for machine unlearning and RTBF requests.",0 "Recent advancements in Large Language Models (LLMs) have brought them closer to matching human cognition across a variety of tasks. How well do these models align with human performance in detecting and mapping analogies? 
Prior research has shown that LLMs can extract similarities from analogy problems but lack robust human-like reasoning. Building on Webb, Holyoak, and Lu (2023), the current study focused on a story-based analogical mapping task and conducted a fine-grained evaluation of LLM reasoning abilities compared to human performance. First, it explored the semantic representation of analogies in LLMs, using sentence embeddings to assess whether they capture the similarity between the source and target texts of an analogy, and the dissimilarity between the source and distractor texts. Second, it investigated the effectiveness of explicitly prompting LLMs to explain analogies. Throughout, we examine whether LLMs exhibit similar performance profiles to those observed in humans by evaluating their reasoning at the level of individual analogies, and not just at the level of overall accuracy (as prior studies have done). Our experiments include evaluating the impact of model size (8B vs. 70B parameters) and performance variation across state-of-the-art model architectures such as GPT-4 and LLaMA3. This work advances our understanding of the analogical reasoning abilities of LLMs and their potential as models of human reasoning.",0 "Commit messages are essential in software development as they serve to document and explain code changes. Yet, their quality often falls short in practice, with studies showing significant proportions of empty or inadequate messages. While automated commit message generation has advanced significantly, particularly with Large Language Models (LLMs), the evaluation of generated messages remains challenging. Traditional reference-based automatic metrics like BLEU, ROUGE-L, and METEOR have notable limitations in assessing commit message quality, as they assume a one-to-one mapping between code changes and commit messages, leading researchers to rely on resource-intensive human evaluation. This study investigates the potential of LLMs as automated evaluators for commit message quality. Through systematic experimentation with various prompt strategies and state-of-the-art LLMs, we demonstrate that LLMs combining Chain-of-Thought reasoning with few-shot demonstrations achieve near human-level evaluation proficiency. Our LLM-based evaluator significantly outperforms traditional metrics while maintaining acceptable reproducibility, robustness, and fairness levels despite some inherent variability. This work conducts a comprehensive preliminary study on using LLMs for commit message evaluation, offering a scalable alternative to human assessment while maintaining high-quality evaluation.",0 "In this work, we address the often-overlooked issue of Timescale Dependent Label Inconsistency (TsDLI) in training neural network models for EEG-based human emotion recognition. To mitigate TsDLI and enhance model generalization and explainability, we propose two novel regularization strategies: Local Variation Loss (LVL) and Local-Global Consistency Loss (LGCL). Both methods incorporate classical mathematical principles--specifically, functions of bounded variation and commute-time distances--within a graph theoretic framework. Complementing our regularizers, we introduce a suite of new evaluation metrics that better capture the alignment between temporally local predictions and their associated global emotion labels. 
We validate our approach through comprehensive experiments on two widely used EEG emotion datasets, DREAMER and DEAP, across a range of neural architectures including LSTM and transformer-based models. Performance is assessed using five distinct metrics encompassing both quantitative accuracy and qualitative consistency. Results consistently show that our proposed methods outperform state-of-the-art baselines, delivering superior aggregate performance and offering a principled trade-off between interpretability and predictive power under label inconsistency. Notably, LVL achieves the best aggregate rank across all benchmarked backbones and metrics, while LGCL frequently ranks second, highlighting the effectiveness of our framework.",0 "Since the public release of ChatGPT in November 2022, the AI landscape has been undergoing a rapid transformation. Currently, the use of AI chatbots by consumers has largely been limited to image generation or question-answering language models. The next generation of AI systems, AI agents that can plan and execute complex tasks with only limited human involvement, will be capable of a much broader range of actions. In particular, consumers could soon be able to delegate purchasing decisions to AI agents acting as Custobots. Against this background, the Article explores whether EU consumer law, as it currently stands, is ready for the rise of the Custobot Economy. In doing so, the Article makes three contributions. First, it outlines how the advent of AI agents could change the existing e-commerce landscape. Second, it explains how AI agents challenge the premises of a human-centric consumer law, which is based on the assumption that consumption decisions are made by humans. Third, the Article presents some initial considerations on what a future consumer law that works for both humans and machines could look like.",0 "Large Language Models have shown impressive capabilities in coding tasks like code generation and code completion, as they have been trained on a large amount of code data. Also, since one of the core pretraining objectives is Next Token Prediction, these models tend to learn surface-level syntactic patterns in code. However, this does not guarantee code comprehension ability, i.e. the ability to capture the semantics of the code. In our opinion, this is the reason why these models often underperform on tasks that require deeper semantic understanding, such as code debugging and code optimization. To address this, we propose fine-tuning these models specifically for code comprehension tasks using large-scale datasets, enabling them to develop a more robust understanding of code semantics. We evaluate three code models of varying sizes on a suite of code comprehension tasks designed to assess semantic understanding beyond surface-level syntactic pattern matching. In particular, we analyze performance on the Subjectivity Grading Task and observe that model performance improves after fine-tuning on relevant downstream tasks. The most significant improvement is seen in the QWQ-32B model, where accuracy increases from 70% to 83.47%. A similar or explainable trend is observed across other models, clearly indicating an enhancement in code comprehension ability. Among the models studied, the DPO-fine-tuned Codestral-22B achieves the highest micro-accuracy of 87.66% on the Subjectivity Grading Task.",0 "PG 1159 stars are thought to be progenitors of the majority of H-deficient white dwarfs. 
Their unusual He-, C-, and O-dominated surface composition is typically believed to result from a late thermal pulse experienced by a single (pre-)white dwarf. Yet, other formation channels - involving close binary evolution - have recently been proposed and could lead to similar surface compositions. Here we present a non-local thermodynamic equilibrium spectral analysis based on new UV and archival optical spectra of one of the hottest PG 1159 stars, $\text{RX J}0122.9\text{ -}7521$. We find $T_\text{eff} = 175$ kK and a surface gravity of log $g = 7.7$, and an astonishingly low O/C ratio of $7.3 \times 10^{-3}$ by mass. By combining the spectroscopic surface gravity and Gaia parallax with a spectral energy distribution fit, we derive a mass of $M_\text{spec} = 1.8^{+1.1}_{-0.7}$ $M_\odot$. Although this spectroscopic mass is higher than predicted by evolutionary models, it is subject to substantial uncertainty. Furthermore, we find that $\text{RX J}0122.9\text{ -}7521$ shows strongly rotationally broadened lines, suggesting that the previously reported photometric period of $41$ min indeed corresponds to the rotational period of this star. Our kinematic analysis shows that $\text{RX J}0122.9\text{ -}7521$ belongs to the Galactic halo, which - assuming single-star evolution - is in stark contrast to its relatively high mass. The rapid rotation, high mass, and halo kinematics, as well as the lack of evidence for a close companion, lead us to believe that $\text{RX J}0122.9\text{ -}7521$ formed through the merger of two white dwarfs. Yet, none of the current models can explain the surface abundances of $\text{RX J}0122.9\text{ -}7521$.",0 "Our society increasingly depends on intelligent systems to solve complex problems, ranging from recommender systems suggesting the next movie to watch to AI models assisting in medical diagnoses for hospitalized patients. With the iterative improvement of diagnostic accuracy and efficiency, AI holds significant potential to mitigate medical misdiagnoses by preventing numerous deaths and reducing an economic burden of approximately 450 EUR billion annually. However, a key obstacle to AI adoption lies in the lack of transparency: many automated systems function as ""black boxes,"" providing predictions without revealing the underlying processes. This opacity can hinder experts' ability to trust and rely on AI systems. Visual analytics (VA) provides a compelling solution by combining AI models with interactive visualizations. These specialized charts and graphs empower users to incorporate their domain expertise to refine and improve the models, bridging the gap between AI and human understanding. In this work, we define, categorize, and explore how VA solutions can foster trust across the stages of a typical AI pipeline. We propose a design space for innovative visualizations and present an overview of our previously developed VA dashboards, which support critical tasks within the various pipeline stages, including data processing, feature engineering, hyperparameter tuning, understanding, debugging, refining, and comparing models.",0 "Natural language-based assessment (NLA) is an approach to second language assessment that uses instructions - expressed in the form of can-do descriptors - originally intended for human examiners, aiming to determine whether large language models (LLMs) can interpret and apply them in ways comparable to human assessment. 
In this work, we explore the use of such descriptors with an open-source LLM, Qwen 2.5 72B, to assess responses from the publicly available S&I Corpus in a zero-shot setting. Our results show that this approach - relying solely on textual information - achieves competitive performance: while it does not outperform state-of-the-art speech LLMs fine-tuned for the task, it surpasses a BERT-based model trained specifically for this purpose. NLA proves particularly effective in mismatched task settings, is generalisable to other data types and languages, and offers greater interpretability, as it is grounded in clearly explainable, widely applicable language descriptors.",0 "Background and Objective: Differentiating wide complex tachycardia (WCT) is clinically critical yet challenging due to morphological similarities in electrocardiogram (ECG) signals between life-threatening ventricular tachycardia (VT) and supraventricular tachycardia with aberrancy (SVT-A). Misdiagnosis carries fatal risks. We propose a computationally efficient deep learning solution to improve diagnostic accuracy and provide model interpretability for clinical deployment. Methods: A novel lightweight parallel deep architecture is introduced. Each pipeline processes individual ECG leads using two 1D-CNN blocks to extract local features. Feature maps are concatenated across leads, followed by LSTM layers to capture temporal dependencies. Final classification employs fully connected layers. Explainability is achieved via Shapley Additive Explanations (SHAP) for local/global interpretation. The model was evaluated on a 35-subject ECG database using standard performance metrics. Results: The model achieved $95.63\%$ accuracy ($95\%$ CI: $93.07-98.19\%$), with sensitivity=$95.10\%$, specificity=$96.06\%$, and F1-score=$95.12\%$. It outperformed state-of-the-art methods in both accuracy and computational efficiency, requiring minimal CNN blocks per pipeline. SHAP analysis demonstrated clinically interpretable feature contributions. Conclusions: Our end-to-end framework delivers high-precision WCT classification with minimal computational overhead. The integration of SHAP enhances clinical trust by elucidating decision logic, supporting rapid, informed diagnosis. This approach shows significant promise for real-world ECG analysis tools.",0 "This study uses double/debiased machine learning to evaluate the impact of transitioning from lecture-based blended teaching to a flipped classroom concept in a cohort comparison of a large compulsory introductory statistics course at a German tuition-free university. Our findings indicate positive changes in students' self-conception and a reduction in procrastination behaviors. However, we also observe a decline in the enjoyment of classroom sessions. Contrary to theoretical expectations, we do not find significant positive effects on exam scores, passing rates, or knowledge retention. Unlike most studies, however, we can leverage detailed usage data from the flipped cohort, including the timeliness and completeness of pre-class video watching, as well as quiz participation patterns, to check how well students implemented each part of the curriculum. Our findings suggest that, on average, students in the flipped cohort implemented the instructional approach insufficiently, explaining the mechanism of our null results in exam performance and knowledge retention. 
This highlights the need for additional strategies to ensure that students actually benefit from a flipped curriculum.",0 "Large Language Models (LLMs) have attained human-level fluency in text generation, which complicates the distinction between human-written and LLM-generated texts. This increases the risk of misuse and highlights the need for reliable detectors. Yet, existing detectors exhibit poor robustness on out-of-distribution (OOD) data and attacked data, which is critical for real-world scenarios. Also, they struggle to provide interpretable evidence to support their decisions, thus undermining the reliability. In light of these challenges, we propose IPAD (Inverse Prompt for AI Detection), a novel framework consisting of a Prompt Inverter that identifies predicted prompts that could have generated the input text, and two Distinguishers that examine the probability that the input texts align with the predicted prompts. Empirical evaluations demonstrate that IPAD outperforms the strongest baselines by 9.05% (Average Recall) on in-distribution data, 12.93% (AUROC) on out-of-distribution (OOD) data, and 5.48% (AUROC) on attacked data. IPAD also performs robustly on structured datasets. Furthermore, an interpretability assessment is conducted to illustrate that IPAD enhances the AI detection trustworthiness by allowing users to directly examine the decision-making evidence, which provides interpretable support for its state-of-the-art detection results.",0 "Objective: This study aims to support early diagnosis of Alzheimer's disease and detection of amyloid accumulation by leveraging the microstructural information available in multi-shell diffusion MRI (dMRI) data, using a vision transformer-based deep learning framework. Methods: We present a classification pipeline that employs the Swin Transformer, a hierarchical vision transformer model, on multi-shell dMRI data for the classification of Alzheimer's disease and amyloid presence. Key metrics from DTI and NODDI were extracted and projected onto 2D planes to enable transfer learning with ImageNet-pretrained models. To efficiently adapt the transformer to limited labeled neuroimaging data, we integrated Low-Rank Adaptation. We assessed the framework on diagnostic group prediction (cognitively normal, mild cognitive impairment, Alzheimer's disease dementia) and amyloid status classification. Results: The framework achieved competitive classification results within the scope of multi-shell dMRI-based features, with the best balanced accuracy of 95.2% for distinguishing cognitively normal individuals from those with Alzheimer's disease dementia using NODDI metrics. For amyloid detection, it reached 77.2% balanced accuracy in distinguishing amyloid-positive mild cognitive impairment/Alzheimer's disease dementia subjects from amyloid-negative cognitively normal subjects, and 67.9% for identifying amyloid-positive individuals among cognitively normal subjects. Grad-CAM-based explainability analysis identified clinically relevant brain regions, including the parahippocampal gyrus and hippocampus, as key contributors to model predictions. Conclusion: This study demonstrates the promise of diffusion MRI and transformer-based architectures for early detection of Alzheimer's disease and amyloid pathology, supporting biomarker-driven diagnostics in data-limited biomedical settings.",0 "Supervised Fine-Tuning (SFT) and Preference Optimization (PO) are key processes for aligning Language Models (LMs) with human preferences post pre-training. 
While SFT excels in efficiency and PO in effectiveness, they are often combined sequentially without integrating their optimization objectives. This approach overlooks the opportunity to bridge their paradigm gap and draw on the strengths of both. In this paper, we interpret SFT and PO with two sub-processes -- Preference Estimation and Transition Optimization -- defined at the token level within the Markov Decision Process (MDP). This modeling shows that SFT is only a special case of PO with inferior estimation and optimization. PO estimates the model's preference by its entire generation, while SFT only scores the model's subsequent predicted tokens based on prior tokens from the ground-truth answer. These priors deviate from the model's distribution, hindering preference estimation and transition optimization. Building on this view, we introduce Intuitive Fine-Tuning (IFT) to integrate SFT and PO into a single process. Through a temporal residual connection, IFT brings better estimation and optimization by capturing LMs' intuitive sense of their entire answers. Yet it relies solely on a single policy and the same volume of non-preference-labeled data as SFT. Our experiments show that IFT performs comparably to or even better than SFT and some typical PO methods across several tasks, particularly those requiring generation, reasoning, and fact-following abilities. An explainable Frozen Lake game further validates the effectiveness of IFT for obtaining a competitive policy.",0 "Understanding character relationships is essential for interpreting complex narratives and conducting socially grounded AI research. However, manual annotation is time-consuming and low in coverage, while large language models (LLMs) often produce hallucinated or logically inconsistent outputs. We present SymbolicThought, a human-in-the-loop framework that combines LLM-based extraction with symbolic reasoning. The system constructs editable character relationship graphs, refines them using seven types of logical constraints, and enables real-time validation and conflict resolution through an interactive interface. To support logical supervision and explainable social analysis, we release a dataset of 160 interpersonal relationships with corresponding logical structures. Experiments show that SymbolicThought improves annotation accuracy and consistency while significantly reducing time cost, offering a practical tool for narrative understanding, explainable AI, and LLM evaluation.",0 "Aligning language models with human preferences through reinforcement learning from human feedback is crucial for their safe and effective deployment. Human preference is typically represented through comparisons, where one response is chosen over another for a given prompt. However, standard preference datasets often lack explicit information on why a particular choice was made, presenting an ambiguity that can hinder efficient learning and robust alignment, especially given the high cost of acquiring extensive human annotations. While many studies focus on algorithmic improvements, this work adopts a data-centric perspective, exploring how to enhance learning from existing preference data. We propose augmenting standard preference pairs with rationales that explain the reasoning behind the human preference. Specifically, we introduce a simple and principled framework that leverages machine-generated rationales to enrich preference data for preference optimization algorithms. 
Our comprehensive analysis demonstrates that incorporating rationales improves learning efficiency. Extensive experiments reveal some advantages: rationale-augmented learning accelerates convergence and can achieve higher final model performance. Furthermore, this approach is versatile and compatible with various direct preference optimization algorithms. Our findings showcase the potential of thoughtful data design in preference learning, demonstrating that enriching existing datasets with explanatory rationales can help unlock improvements in model alignment and annotation efficiency.",1 "In recent years, numerous researchers have begun investigating how virtual reality (VR) tracking and interaction data can be used for a variety of machine learning purposes, including user identification, predicting cybersickness, and estimating learning gains. One constraint for this research area is the dearth of open datasets. In this paper, we present a new open dataset captured with our VR-based Full-scale Assembly Simulation Testbed (FAST). This dataset consists of data collected from 108 participants (50 females, 56 males, 2 non-binary) learning how to assemble two distinct full-scale structures in VR. In addition to explaining how the dataset was collected and describing the data included, we discuss how the dataset may be used by future researchers.",2 "Understanding and mitigating hallucinations in Large Language Models (LLMs) is crucial for ensuring reliable content generation. While previous research has primarily focused on ""when"" LLMs hallucinate, our work explains ""why"" and directly links model behaviour to the pre-training data that forms their prior knowledge. Specifically, we demonstrate that an asymmetry exists in the recognition of logically equivalent facts, which can be attributed to frequency discrepancies of entities appearing as subjects versus objects. Given that most pre-training datasets are inaccessible, we leverage the fully open-source OLMo series by indexing its Dolma dataset to estimate entity frequencies. Using relational facts (represented as triples) from Wikidata5M, we construct probing datasets to isolate this effect. Our experiments reveal that facts with a high-frequency subject and a low-frequency object are better recognised than their inverse, despite their logical equivalence. The pattern reverses in low-to-high frequency settings, and no statistically significant asymmetry emerges when both entities are high-frequency. These findings highlight the influential role of pre-training data in shaping model predictions and provide insights for inferring the characteristics of pre-training data in closed or partially closed LLMs.",0 "Explainable Recommender System (ExRec) provides transparency to the recommendation process, increasing users' trust and boosting the operation of online services. With the rise of large language models (LLMs), whose extensive world knowledge and nuanced language understanding enable the generation of human-like, contextually grounded explanations, LLM-powered ExRec has gained great momentum. However, existing LLM-based ExRec models suffer from profile deviation and high retrieval overhead, hindering their deployment. To address these issues, we propose Retrieval-Augmented Recommendation Explanation Generation with Hierarchical Aggregation (REXHA). 
Specifically, we design a hierarchical aggregation based profiling module that comprehensively considers user and item review information, hierarchically summarizing and constructing holistic profiles. Furthermore, we introduce an efficient retrieval module using two types of pseudo-document queries to retrieve relevant reviews to enhance the generation of recommendation explanations, effectively reducing retrieval latency and improving the recall of relevant reviews. Extensive experiments demonstrate that our method outperforms existing approaches by up to 12.6% w.r.t. the explanation quality while achieving high retrieval efficiency.",0 "Introduction: Chest CT scans are increasingly used in dyspneic patients where acute heart failure (AHF) is a key differential diagnosis. Interpretation remains challenging and radiology reports are frequently delayed due to a radiologist shortage, although flagging such information for emergency physicians would have therapeutic implication. Artificial intelligence (AI) can be a complementary tool to enhance the diagnostic precision. We aim to develop an explainable AI model to detect radiological signs of AHF in chest CT with an accuracy comparable to thoracic radiologists. Methods: A single-center, retrospective study during 2016-2021 at Copenhagen University Hospital - Bispebjerg and Frederiksberg, Denmark. A Boosted Trees model was trained to predict AHF based on measurements of segmented cardiac and pulmonary structures from acute thoracic CT scans. Diagnostic labels for training and testing were extracted from radiology reports. Structures were segmented with TotalSegmentator. Shapley Additive explanations values were used to explain the impact of each measurement on the final prediction. Results: Of the 4,672 subjects, 49% were female. The final model incorporated twelve key features of AHF and achieved an area under the ROC of 0.87 on the independent test set. Expert radiologist review of model misclassifications found that 24 out of 64 (38%) false positives and 24 out of 61 (39%) false negatives were actually correct model predictions, with the errors originating from inaccuracies in the initial radiology reports. Conclusion: We developed an explainable AI model with strong discriminatory performance, comparable to thoracic radiologists. The AI model's stepwise, transparent predictions may support decision-making.",2 "How do we learn when to persist, when to let go, and when to shift gears? Gearshift Fellowship (GF) is the prototype of a new Supertask paradigm designed to model how humans and artificial agents adapt to shifting environment demands. Grounded in cognitive neuroscience, computational psychiatry, economics, and artificial intelligence, Supertasks combine computational neurocognitive modeling with serious gaming. This creates a dynamic, multi-mission environment engineered to assess mechanisms of adaptive behavior across cognitive and social contexts. Computational parameters explain behavior and probe mechanisms by controlling the game environment. Unlike traditional tasks, GF enables neurocognitive modeling of individual differences across perceptual decisions, learning, and meta-cognitive levels. This positions GF as a flexible testbed for understanding how cognitive-affective control processes, learning styles, strategy use, and motivational shifts adapt across contexts and over time. 
It serves as an experimental platform for scientists, a phenotype-to-mechanism intervention for clinicians, and a training tool for players aiming to strengthen self-regulated learning, mood, and stress resilience. Results from an ongoing online study (n = 60) show that GF recovers effects from traditional neuropsychological tasks (construct validity) and uncovers novel patterns in how learning differs across contexts and how clinical features map onto distinct adaptations. These findings pave the way for developing in-game interventions that foster self-efficacy and agency to cope with real-world stress and uncertainty. GF builds a new adaptive ecosystem designed to accelerate science, transform clinical care, and foster individual growth. It offers a mirror and training ground where humans and machines co-develop deeper flexibility and awareness.",0 "Shapes Constraint Language (SHACL) is a powerful language for validating RDF data. Given the recent industry attention to Knowledge Graphs (KGs), more users need to validate linked data properly. However, traditional SHACL validation engines often provide terse reports in English that are difficult for non-technical users to interpret and act upon. This paper presents xpSHACL, an explainable SHACL validation system that addresses this issue by combining rule-based justification trees with retrieval-augmented generation (RAG) and large language models (LLMs) to produce detailed, multilanguage, human-readable explanations for constraint violations. A key feature of xpSHACL is its usage of a Violation KG to cache and reuse explanations, improving efficiency and consistency.",0 "The advent of language models (LMs) has the potential to dramatically accelerate tasks that may be cast as text processing; however, real-world adoption is hindered by concerns regarding safety, explainability, and bias. How can we responsibly leverage LMs in a transparent, auditable manner -- minimizing risk and allowing human experts to focus on informed decision-making rather than data-processing or prompt engineering? In this work, we propose a framework for declaring statically typed, LM-powered subroutines (i.e., callable, function-like procedures) for use within conventional asynchronous code -- such that sparse feedback from human experts is used to improve the performance of each subroutine online (i.e., during use). In our implementation, all LM-produced artifacts (i.e., prompts, inputs, outputs, and data-dependencies) are recorded and exposed to audit on demand. We package this framework as a library to support its adoption and continued development. While this framework may be applicable across several real-world decision workflows (e.g., in healthcare and legal fields), we evaluate it in the context of public comment processing as mandated by the 1969 National Environmental Policy Act (NEPA): Specifically, we use this framework to develop ""CommentNEPA,"" an application that compiles, organizes, and summarizes a corpus of public commentary submitted in response to a project requiring environmental review. We quantitatively evaluate the application by comparing its outputs (when operating without human feedback) to historical ``ground-truth'' data as labelled by 5 human annotators during the preparation of official environmental impact statements.",1 "Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, just like human ""thinking with images"". 
However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs. Even the most advanced models struggle with this benchmark: none of them reach 60% accuracy, e.g., OpenAI-o3 scores only 54.87%. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to supervise localization and reasoning jointly with reinforcement learning, enabling accurate localizations and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving traceability is key to advancing vision-grounded reasoning. The code is available at https://github.com/Haochen-Wang409/TreeVGR.",0 "Autonomous vehicles (AVs) are poised to redefine transportation by enhancing road safety, minimizing human error, and optimizing traffic efficiency. The success of AVs depends on their ability to interpret complex, dynamic environments through diverse data sources, including video streams, sensor measurements, and contextual textual information. However, seamlessly integrating these multimodal inputs and ensuring transparency in AI-driven decisions remain formidable challenges. This study introduces a novel multimodal framework that synergistically combines video, sensor, and textual data to predict driving actions while generating human-readable explanations, fostering trust and regulatory compliance. By leveraging VideoMAE for spatiotemporal video analysis, a custom sensor fusion module for real-time data processing, and BERT for textual comprehension, our approach achieves robust decision-making and interpretable outputs. Evaluated on the BDD-X (21113 samples) and nuScenes (1000 scenes) datasets, our model reduces training loss from 5.7231 to 0.0187 over five epochs, attaining an action prediction accuracy of 92.5% and a BLEU-4 score of 0.75 for explanation quality, outperforming state-of-the-art methods. Ablation studies confirm the critical role of each modality, while qualitative analyses and human evaluations highlight the model's ability to produce contextually rich, user-friendly explanations. These advancements underscore the transformative potential of multimodal integration and explainability in building safe, transparent, and trustworthy AV systems, paving the way for broader societal adoption of autonomous driving technologies.",0 "This paper presents a comprehensive five-stage evolutionary framework for understanding the development of artificial intelligence, arguing that its trajectory mirrors the historical progression of human cognitive technologies. 
We posit that AI is advancing through distinct epochs, each defined by a revolutionary shift in its capacity for representation and reasoning, analogous to the inventions of cuneiform, the alphabet, grammar and logic, mathematical calculus, and formal logical systems. This ""Geometry of Cognition"" framework moves beyond mere metaphor to provide a systematic, cross-disciplinary model that not only explains AI's past architectural shifts, from expert systems to Transformers, but also charts a concrete and prescriptive path forward. Crucially, we demonstrate that this evolution is not merely linear but reflexive: as AI advances through these stages, the tools and insights it develops create a feedback loop that fundamentally reshapes its own underlying architecture. We are currently transitioning into a ""Metalinguistic Moment,"" characterized by the emergence of self-reflective capabilities like Chain-of-Thought prompting and Constitutional AI. The subsequent stages, the ""Mathematical Symbolism Moment"" and the ""Formal Logic System Moment,"" will be defined by the development of a computable calculus of thought, likely through neuro-symbolic architectures and program synthesis, culminating in provably aligned and reliable AI that reconstructs its own foundational representations. This work serves as the methodological capstone to our trilogy, which previously explored the economic drivers (""why"") and cognitive nature (""what"") of AI. Here, we address the ""how,"" providing a theoretical foundation for future research and offering concrete, actionable strategies for startups and developers aiming to build the next generation of intelligent systems.",0 "Most protoplanetary disks experience a phase in which they are subjected to strong ultraviolet radiation from nearby massive stars. This UV radiation can substantially alter their chemistry by producing numerous radicals and molecular ions. In this Letter we present a detailed analysis of the JWST-NIRSpec spectrum of d203-506, obtained as part of the PDRs4All Early Release Science program. Using state-of-the-art spectroscopic data, we searched for species using a multi-molecule fitting tool, PAHTATmol, that we developed for this purpose. Based on this analysis, we report the clear detection of ro-vibrational emission of the CH radical and likely detection of the H$_3^+$ molecular ion, with estimated abundances of a few times 10$^{-7}$ and approximately 10$^{-8}$, respectively. The presence of CH is predicted by gas-phase models and well explained by hydrocarbon photochemistry. H$_3^+$ is usually formed through reactions of H$_2$ with H$_2^+$ originating from cosmic ray ionization of H$_2$. However, recent theoretical studies suggest that H$_3^+$ also forms through UV-driven chemistry in strongly irradiated ($G_0>$10$^3$), dense ($n_{\rm H} >10^{6}$ cm$^{-3}$) gas. The latter is favored as an explanation for the presence of ``hot'' H$_3^+$ ($T_{\rm ex}\gtrsim$1000 K) in the outer disk layers of d203-506, coinciding with the emission of FUV-pumped H$_2$ and other ``PDR species'', such as CH$^+$, CH$_3^+$, and OH. Our detection of infrared emission from vibrationally excited H$_3^+$ and CH raises questions about their excitation mechanisms and underscores that UV radiation can have a profound impact on the chemistry of planet-forming disks. 
They also demonstrate the power of JWST in pushing the limits of detection of elusive species in protoplanetary disks.",0 "The accelerating growth of photographic collections has outpaced manual cataloguing, motivating the use of vision language models (VLMs) to automate metadata generation. This study examines whether AI-generated catalogue descriptions can approximate human-written quality and how generative AI might integrate into cataloguing workflows in archival and museum collections. A VLM (InternVL2) generated catalogue descriptions for photographic prints on labelled cardboard mounts with archaeological content, evaluated by archive and archaeology experts and non-experts in a human-centered, experimental framework. A total of 906 participants classified descriptions as AI-generated or expert-written, rated quality, and reported willingness to use and trust in AI tools. Classification performance was above chance level, with both groups underestimating their ability to detect AI-generated descriptions. OCR errors and hallucinations limited perceived quality, yet descriptions rated higher in accuracy and usefulness were harder to classify, suggesting that human review is necessary to ensure the accuracy and quality of catalogue descriptions generated by the out-of-the-box model, particularly in specialized domains like archaeological cataloguing. Experts showed lower willingness to adopt AI tools, emphasizing concerns about preservation responsibility over technical performance. These findings advocate for a collaborative approach where AI supports draft generation but remains subordinate to human verification, ensuring alignment with curatorial values (e.g., provenance, transparency). The successful integration of this approach depends not only on technical advancements, such as domain-specific fine-tuning, but even more on establishing trust among professionals, which could both be fostered through a transparent and explainable AI pipeline.",2 "Explainability has become a crucial non-functional requirement to enhance transparency, build user trust, and ensure regulatory compliance. However, translating explanation needs expressed in user feedback into structured requirements and corresponding explanations remains challenging. While existing methods can identify explanation-related concerns in user reviews, there is no established approach for systematically deriving requirements and generating aligned explanations. To contribute toward addressing this gap, we introduce a tool-supported approach that automates this process. To evaluate its effectiveness, we collaborated with an industrial automation manufacturer to create a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations. Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style. Nonetheless, correctness remains an issue, highlighting the importance of human validation. 
This work contributes to the advancement of explainability requirements in software systems by (1) introducing an automated approach to derive requirements from user reviews and generate corresponding explanations, (2) providing empirical insights into the strengths and limitations of automatically generated artifacts, and (3) releasing a curated dataset to support future research on the automatic generation of explainability requirements.",0 "This paper describes HARMONIC, a cognitive-robotic architecture that integrates the OntoAgent cognitive framework with general-purpose robot control systems applied to human-robot teaming (HRT). HARMONIC incorporates metacognition, meaningful natural language communication, and explainability capabilities required for developing mutual trust in HRT. Through simulation experiments involving a joint search task performed by a heterogeneous team of two HARMONIC-based robots and a human operator, we demonstrate heterogeneous robots that coordinate their actions, adapt to complex scenarios, and engage in natural human-robot communication. Evaluation results show that HARMONIC-based robots can reason about plans, goals, and team member attitudes while providing clear explanations for their decisions, which are essential requirements for realistic human-robot teaming.",0 "Modeling car-following behavior is fundamental to microscopic traffic simulation, yet traditional deterministic models often fail to capture the full extent of variability and unpredictability in human driving. While many modern approaches incorporate context-aware inputs (e.g., spacing, speed, relative speed), they frequently overlook structured stochasticity that arises from latent driver intentions, perception errors, and memory effects -- factors that are not directly observable from context alone. To fill this gap, this study introduces an interpretable stochastic modeling framework that captures not only context-dependent dynamics but also residual variability beyond what context can explain. Leveraging deep neural networks integrated with nonstationary Gaussian processes (GPs), our model employs a scenario-adaptive Gibbs kernel to learn dynamic temporal correlations in acceleration decisions, where the strength and duration of correlations between acceleration decisions evolve with the driving context. This formulation enables a principled, data-driven quantification of uncertainty in acceleration, speed, and spacing, grounded in both observable context and latent behavioral variability. Comprehensive experiments on a naturalistic vehicle trajectory dataset collected from German highways, i.e., the HighD dataset, demonstrate that the proposed stochastic simulation method within this framework surpasses conventional methods in both predictive performance and interpretable uncertainty quantification. The integration of interpretability and accuracy makes this framework a promising tool for traffic analysis and safety-critical applications.",0 "This position paper looks at differences between the current understandings of human-centered explainability and explainable AI. We discuss current ideas in both fields, as well as the differences and opportunities we discovered. As an example of combining both, we will present preliminary work on a new algebraic machine learning approach. 
We are excited to continue discussing design opportunities for human-centered explainability (HCx) and xAI with the broader HCxAI community.",0 "Ethical hacking today relies on highly skilled practitioners executing complex sequences of commands, which is inherently time-consuming, difficult to scale, and prone to human error. To help mitigate these limitations, we previously introduced 'PenTest++', an AI-augmented system combining automation with generative AI supporting ethical hacking workflows. However, a key limitation of PenTest++ was its lack of support for privilege escalation, a crucial element of ethical hacking. In this paper we present 'PenTest2.0', a substantial evolution of PenTest++ supporting automated privilege escalation driven entirely by Large Language Model reasoning. It also incorporates several significant enhancements: 'Retrieval-Augmented Generation', including both online and offline modes; 'Chain-of-Thought' prompting for intermediate reasoning; persistent 'PenTest Task Trees' to track goal progression across turns; and the optional integration of human-authored hints. We describe how it operates, present a proof-of-concept prototype, and discuss its benefits and limitations. We also describe the application of the system to a controlled Linux target, showing it can carry out multi-turn, adaptive privilege escalation. We explain the rationale behind its core design choices, and provide comprehensive testing results and cost analysis. Our findings indicate that 'PenTest2.0' represents a meaningful step toward practical, scalable, AI-automated penetration testing, whilst highlighting the shortcomings of generative AI systems, particularly their sensitivity to prompt structure, execution context, and semantic drift, reinforcing the need for further research and refinement in this emerging space. Keywords: AI, Ethical Hacking, Privilege Escalation, GenAI, ChatGPT, LLM (Large Language Model), HITL (Human-in-the-Loop)",0 "Passive tracking methods, such as phone and wearable sensing, have become dominant in monitoring human behaviors in modern ubiquitous computing studies. While there have been significant advances in machine-learning approaches to translate periods of raw sensor data to model momentary behaviors (e.g., physical activity recognition), there still remains a significant gap in the translation of these sensing streams into meaningful, high-level, context-aware insights that are required for various applications (e.g., summarizing an individual's daily routine). To bridge this gap, experts often need to employ a context-driven sensemaking process in real-world studies to derive insights. This process often requires manual effort and can be challenging even for experienced researchers due to the complexity of human behaviors. We conducted three rounds of user studies with 21 experts to explore solutions to address challenges with sensemaking. We follow a human-centered design process to identify needs and design, iterate, build, and evaluate Vital Insight (VI), a novel, LLM-assisted, prototype system to enable human-in-the-loop inference (sensemaking) and visualizations of multi-modal passive sensing data from smartphones and wearables. Using the prototype as a technology probe, we observe experts' interactions with it and develop an expert sensemaking model that explains how experts move between direct data representations and AI-supported inferences to explore, question, and validate insights. 
Through this iterative process, we also synthesize and discuss a list of design implications for future AI-augmented visualization systems to better assist experts' sensemaking processes in multi-modal health sensing data.",1 "The Circle of Willis (CoW) is an important network of arteries connecting major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neurovascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two non-invasive angiographic imaging modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but there exist limited datasets with annotations on CoW anatomy, especially for CTA. Therefore, we organized the TopCoW challenge with the release of an annotated CoW dataset. The TopCoW dataset is the first public dataset with voxel-level annotations for 13 CoW vessel components, enabled by virtual reality technology. It is also the first large dataset using 200 pairs of MRA and CTA from the same patients. As part of the benchmark, we invited submissions worldwide and attracted over 250 registered participants from six continents. The submissions were evaluated on both internal and external test datasets of 226 scans from over five centers. The top performing teams achieved over 90% Dice scores at segmenting the CoW components, over 80% F1 scores at detecting key CoW components, and over 70% balanced accuracy at classifying CoW variants for nearly all test sets. The best algorithms also showed clinical potential in classifying fetal-type posterior cerebral artery and locating aneurysms with CoW anatomy. TopCoW demonstrated the utility and versatility of CoW segmentation algorithms for a wide range of downstream clinical applications with explainability. The annotated datasets and best performing algorithms have been released as public Zenodo records to foster further methodological development and clinical tool building.",2 "Background: Evaluating AI-generated treatment plans is a key challenge as AI expands beyond diagnostics, especially with new reasoning models. This study compares plans from human experts and two AI models (a generalist and a reasoner), assessed by both human peers and a superior AI judge. Methods: Ten dermatologists, a generalist AI (GPT-4o), and a reasoning AI (o3) generated treatment plans for five complex dermatology cases. The anonymized, normalized plans were scored in two phases: 1) by the ten human experts, and 2) by a superior AI judge (Gemini 2.5 Pro) using an identical rubric. Results: A profound 'evaluator effect' was observed. Human experts scored peer-generated plans significantly higher than AI plans (mean 7.62 vs. 7.16; p=0.0313), ranking GPT-4o 6th (mean 7.38) and the reasoning model, o3, 11th (mean 6.97). Conversely, the AI judge produced a complete inversion, scoring AI plans significantly higher than human plans (mean 7.75 vs. 6.79; p=0.0313). It ranked o3 1st (mean 8.20) and GPT-4o 2nd, placing all human experts lower. Conclusions: The perceived quality of a clinical plan is fundamentally dependent on the evaluator's nature. An advanced reasoning AI, ranked poorly by human experts, was judged as superior by a sophisticated AI, revealing a deep gap between experience-based clinical heuristics and data-driven algorithmic logic. 
This paradox presents a critical challenge for AI integration, suggesting the future requires synergistic, explainable human-AI systems that bridge this reasoning gap to augment clinical care.",0 "Ergodicity, the property that all allowed configurations are explored over time, plays a pivotal role in explaining the equilibrium behavior of classical dynamical systems. Yet, such a property is typically precluded in quantum systems owing to the presence of energy eigenstates, which are stationary states in dynamics. However, recent theoretical works have argued that ergodic explorations of the Hilbert space, occurring at varying levels as measured by statistical pseudorandomness of the time-evolved quantum states, may be exhibited for quantum systems driven by Hamiltonians with aperiodic time dependencies, which do not face such obstacles. Here, we experimentally investigate the hierarchy of Hilbert-space ergodicities (HSE) achievable in the dynamics of a single quantum spin realized by a solid-state defect in diamond, upon subjecting it to various time-dependent modulations. Through continuous monitoring of spin trajectories with full state tomography, different degrees of HSE were observed, ranging from no HSE in a time-periodic (Floquet) drive, to partial HSE in a smoothly kicked time-quasiperiodic drive, to complete HSE in a drive composed of a sequence of kicks generated by the Fibonacci word. We formulate a theoretical understanding of the increasing levels of HSE observed by attributing them to increasing levels of complexities associated with the drive sequences, whose notions we elucidate. Our work constitutes the first unambiguous experimental evidence of Hilbert space ergodicity and promotes deeper investigations into the mechanisms and fine-grained levels with which closed quantum systems reach equilibrium.",0 "Automated depression diagnosis aims to analyze multimodal information from interview videos to predict participants' depression scores. Previous studies often lack clear explanations of how these scores were determined, limiting their adoption in clinical practice. While the advent of LLMs provides a possible pathway for explainable depression diagnosis, current LLMs capable of processing multimodal data lack training on interview data, resulting in poor diagnostic performance when used directly. In this paper, we propose a novel multimodal large language model (MLlm-DR) that can understand multimodal information inputs and support explainable depression diagnosis. MLlm-DR integrates a smaller LLM and a lightweight query module (LQ-former). Specifically, the smaller LLM is designed to generate depression scores and corresponding evaluation rationales. To enhance its logical reasoning for domain-specific tasks while maintaining practicality, we constructed a robust training dataset to fine-tune it. Meanwhile, the LQ-former captures depression-related features from speech and visual data, aiding the model's ability to process multimodal information and achieve comprehensive depression diagnosis. Our approach achieves state-of-the-art results on two interview-based benchmark datasets, CMDC and E-DAIC-WOZ, demonstrating its effectiveness and superiority.",2 "Content-aware layout aims to arrange design elements appropriately on a given canvas to convey information effectively. Recently, the trend for this task has been to leverage large language models (LLMs) to generate layouts automatically, achieving remarkable performance. 
However, existing LLM-based methods fail to adequately interpret spatial relationships among visual themes and design elements, leading to problems with structure and diversity in layout generation. To address this issue, we introduce ReLayout, a novel method that leverages relation-CoT, grounded in fundamental design concepts, to generate more reasonable and aesthetically coherent layouts. Specifically, we enhance layout annotations by introducing explicit relation definitions, such as region, salient, and margin between elements, with the goal of decomposing the layout into smaller, structured, and recursive layouts, thereby enabling the generation of more structured layouts. Furthermore, based on these defined relationships, we introduce a layout prototype rebalance sampler, which defines layout prototype features across three dimensions and quantifies distinct layout styles. This sampler addresses uniformity issues in generation that arise from data bias in the prototype distribution balance process. Extensive experimental results verify that ReLayout outperforms baselines and can generate structured and diverse layouts that are more aligned with human aesthetics and more explainable.",0 "We conducted an International AI Negotiation Competition in which thousands of participants designed and refined prompts for AI negotiation agents. We then facilitated over 180,000 negotiations between these agents across multiple scenarios with diverse characteristics and objectives. Our findings revealed that principles from human negotiation theory remain crucial even in AI-AI contexts. Surprisingly, warmth--a traditionally human relationship-building trait--was consistently associated with superior outcomes across all key performance metrics. Dominant agents, meanwhile, were especially effective at claiming value. Our analysis also revealed unique dynamics in AI-AI negotiations not fully explained by existing theory, including AI-specific technical strategies like chain-of-thought reasoning, prompt injection, and strategic concealment. When we applied natural language processing (NLP) methods to the full transcripts of all negotiations, we found positivity, gratitude and question-asking (associated with warmth) were strongly associated with reaching deals as well as objective and subjective value, whereas conversation lengths (associated with dominance) were strongly associated with impasses. The results suggest the need to establish a new theory of AI negotiation, which integrates classic negotiation theory with AI-specific negotiation theories to better understand autonomous negotiations and optimize agent performance.",2 "The widespread and rapid adoption of AI-generated content, created by models such as Generative Adversarial Networks (GANs) and Diffusion Models, has revolutionized the digital media landscape by allowing efficient and creative content generation. However, these models also blur the difference between real images and AI-generated synthetic images, raising concerns regarding content authenticity and integrity. While many existing solutions to detect fake images focus solely on classification and higher-resolution images, they often lack transparency in their decision-making, making it difficult for users to understand why an image is classified as fake. 
In this paper, we present VERITAS, a comprehensive framework that not only accurately detects whether a small (32x32) image is AI-generated but also explains why it was classified that way through artifact localization and semantic reasoning. VERITAS produces human-readable explanations that describe key artifacts in synthetic images. We show that this architecture offers clear explanations of the basis for its decisions in zero-shot synthetic image detection tasks. Code and relevant prompts can be found at https://github.com/V-i-g-n-e-s-h-N/VERITAS .",0 "Prosocial behaviours have been extensively studied across multiple disciplines. Cooperation, requiring a personal cost for collective benefits, is widespread in nature and human society, having been explained through mechanisms such as kin selection, direct and indirect reciprocity, and network reciprocity. Institutional incentives, which reward cooperation and punish anti-social behaviour, offer a promising approach to fostering cooperation in groups of self-interested individuals. Focusing on general $2\times2$ games and the collective risk game (which is a fundamental model for climate action), we analyse the associated cost of providing incentives under evolutionary dynamics governed by Fermi's rule, exploring the asymptotic behaviour of the incentive cost functions in the limits of neutral drift and strong selection. We also implement numerical simulations to study how parameters such as the intensity of selection affect the behaviour of the aforementioned cost functions.",0 "Knowledge tracing models have enabled a range of intelligent tutoring systems to provide feedback to students. However, existing methods for knowledge tracing in learning sciences are predominantly reliant on statistical data and instructor-defined knowledge components, making it challenging to integrate AI-generated educational content with traditional established methods. We propose a method for automatically extracting knowledge components from educational content using instruction-tuned large multimodal models. We validate this approach by comprehensively evaluating it against knowledge tracing benchmarks in five domains. Our results indicate that the automatically extracted knowledge components can effectively replace human-tagged labels, offering a promising direction for enhancing intelligent tutoring systems in limited-data scenarios, achieving more explainable assessments in educational settings, and laying the groundwork for automated assessment.",0 "The volume and diversity of digital information have led to a growing reliance on Machine Learning techniques, such as Natural Language Processing, for interpreting and accessing appropriate data. While vector and graph embeddings represent data for similarity tasks, current state-of-the-art pipelines lack guaranteed explainability, failing to determine similarity for given full texts accurately. These considerations can also be applied to classifiers exploiting generative language models with logical prompts, which fail to correctly distinguish between logical implication, indifference, and inconsistency, despite being explicitly trained to recognise the first two classes. We present a novel pipeline designed for hybrid explainability to address this. Our methodology combines graphs and logic to produce First-Order Logic representations, creating machine- and human-readable representations through Montague Grammar. Preliminary results indicate the effectiveness of this approach in accurately capturing full text similarity. 
To the best of our knowledge, this is the first approach to differentiate between implication, inconsistency, and indifference for text classification tasks. To address the limitations of existing approaches, we use three self-contained datasets annotated for the former classification task to determine the suitability of these approaches in capturing sentence structure equivalence, logical connectives, and spatiotemporal reasoning. We also use these data to compare the proposed method with language models pre-trained for detecting sentence entailment. The results show that the proposed method outperforms state-of-the-art models, indicating that natural language understanding cannot be easily generalised by training over extensive document corpora. This work offers a step toward more transparent and reliable Information Retrieval from extensive textual data.",0 "Involving only the measurements of commuting observables - the problem-setting and the corresponding solution - quantum algorithms should be subject to classical logic. This would allow flanking their customary quantum description with a classical logic description, with surprising consequences. In the classical logic description of the quantum algorithm, very simply, it is as if the problem-solver knew in advance, before beginning her problem-solving action, one of the possible halves of the information that specifies the solution of the problem she will produce and measure in the future and could use this knowledge to produce the solution with fewer computation steps. This is a causal loop whose retrocausal character turns out to be implicit in the very notion of quantum state superposition, both an essential ingredient of the quantum computational speedup and one of the pillars of quantum mechanics. Indeed, the key point of the work is that the classical logic description of a quantum state superposition must resort to a logical form of retrocausality that in turn must be physically implicit in the superposition itself. The existence of retrocausality in ordinary quantum physics implies a different way of viewing physical reality. It explains in a unified way all quantum speedups and quantum nonlocality. It highlights the teleological character of quantum algorithms, that is, their being evolutions toward a goal (the solution of the problem) with an attractor in the solution they will produce in the future (the solution again). Under the quantum cosmological assumption, it provides a plausible physical basis for the teleological character of natural evolutions.",0 "The task of describing video content in natural language is commonly referred to as video captioning. Unlike conventional video captions, which are typically brief and widely available, long-form paragraph descriptions in natural language are scarce. This limitation of current datasets is due to the expensive human manual annotation required and to the highly challenging task of explaining the language formation process from the perspective of the underlying story, as a complex system of interconnected events in space and time. Through a thorough analysis of recently published methods and available datasets, we identify a general lack of published resources dedicated to the problem of describing videos in complex language, beyond the level of descriptions in the form of enumerations of simple captions. 
Furthermore, while state-of-the-art methods produce impressive results on the task of generating shorter captions from videos by direct end-to-end learning between the videos and text, the problem of explaining the relationship between vision and language is still beyond our reach. In this work, we propose a shared representation between vision and language, based on graphs of events in space and time, which can be obtained in an explainable and analytical way, to integrate and connect multiple vision tasks to produce the final natural language description. Moreover, we also demonstrate how our automated and explainable video description generation process can function as a fully automatic teacher to effectively train direct, end-to-end neural student pathways, within a self-supervised neuro-analytical system. We validate that our explainable neuro-analytical approach generates coherent, rich and relevant textual descriptions on videos collected from multiple varied datasets, using standard evaluation metrics, human annotations, and consensus from ensembles of state-of-the-art VLMs.",0 "Large Language Models (LLMs) have revolutionized various fields with their exceptional capabilities in understanding, processing, and generating human-like text. This paper investigates the potential of LLMs in advancing Network Intrusion Detection Systems (NIDS), analyzing current challenges, methodologies, and future opportunities. It begins by establishing a foundational understanding of NIDS and LLMs, exploring the enabling technologies that bridge the gap between intelligent and cognitive systems in AI-driven NIDS. While Intelligent NIDS leverage machine learning and deep learning to detect threats based on learned patterns, they often lack contextual awareness and explainability. In contrast, Cognitive NIDS integrate LLMs to process both structured and unstructured security data, enabling deeper contextual reasoning, explainable decision-making, and automated response for intrusion behaviors. Practical implementations are then detailed, highlighting LLMs as processors, detectors, and explainers within a comprehensive AI-driven NIDS pipeline. Furthermore, the concept of an LLM-centered Controller is proposed, emphasizing its potential to coordinate intrusion detection workflows, optimizing tool collaboration and system performance. Finally, this paper identifies critical challenges and opportunities, aiming to foster innovation in developing reliable, adaptive, and explainable NIDS. By presenting the transformative potential of LLMs, this paper seeks to inspire advancement in next-generation network security systems.",0 "The rapid development of AI-generated content (AIGC) technology has led to the misuse of highly realistic AI-generated images (AIGI) in spreading misinformation, posing a threat to public information security. Although existing AIGI detection techniques are generally effective, they face two issues: 1) a lack of human-verifiable explanations, and 2) a lack of generalization in the latest generation technology. To address these issues, we introduce a large-scale and comprehensive dataset, Holmes-Set, which includes the Holmes-SFTSet, an instruction-tuning dataset with explanations on whether images are AI-generated, and the Holmes-DPOSet, a human-aligned preference dataset. 
Our work introduces an efficient data annotation method called the Multi-Expert Jury, enhancing data generation through structured MLLM explanations and quality control via cross-model evaluation, expert defect filtering, and human preference modification. In addition, we propose Holmes Pipeline, a meticulously designed three-stage training framework comprising visual expert pre-training, supervised fine-tuning, and direct preference optimization. Holmes Pipeline adapts multimodal large language models (MLLMs) for AIGI detection while generating human-verifiable and human-aligned explanations, ultimately yielding our model AIGI-Holmes. During the inference stage, we introduce a collaborative decoding strategy that integrates the model perception of the visual expert with the semantic reasoning of MLLMs, further enhancing the generalization capabilities. Extensive experiments on three benchmarks validate the effectiveness of our AIGI-Holmes.",1 "Product recommendations inherently involve comparisons, yet traditional opinion summarization often fails to provide holistic comparative insights. We propose the novel task of generating Query-Focused Comparative Explainable Summaries (QF-CES) using Multi-Source Opinion Summarization (M-OS). To address the lack of query-focused recommendation datasets, we introduce MS-Q2P, comprising 7,500 queries mapped to 22,500 recommended products with metadata. We leverage Large Language Models (LLMs) to generate tabular comparative summaries with query-specific explanations. Our approach is personalized, privacy-preserving, recommendation engine-agnostic, and category-agnostic. M-OS as an intermediate step reduces inference latency approximately by 40% compared to the direct input approach (DIA), which processes raw data directly. We evaluate open-source and proprietary LLMs for generating and assessing QF-CES. Extensive evaluations using QF-CES-PROMPT across 5 dimensions (clarity, faithfulness, informativeness, format adherence, and query relevance) showed an average Spearman correlation of 0.74 with human judgments, indicating its potential for QF-CES evaluation.",0 "With the rapid advancement of mathematical reasoning capabilities in Large Language Models (LLMs), AI systems are increasingly being adopted in educational settings to support students' comprehension of problem-solving processes. However, a critical component remains underexplored in current LLM-generated explanations: multimodal explanation. In real-world instructional contexts, human tutors routinely employ visual aids, such as diagrams, markings, and highlights, to enhance conceptual clarity. To bridge this gap, we introduce the multimodal solution explanation task, designed to evaluate whether models can identify visual keypoints, such as auxiliary lines, points, angles, and generate explanations that incorporate these key elements essential for understanding. To evaluate model performance on this task, we propose ME2, a multimodal benchmark consisting of 1,000 math problems annotated with visual keypoints and corresponding explanatory text that references those elements. Our empirical results show that, aside from recent large-scale open-source and closed-source models, most generalist open-source models, and even math-specialist models, struggle with the multimodal solution explanation task. This highlights a significant gap in current LLMs' ability to reason and explain with visual grounding in educational contexts. 
We expect that the multimodal solution explanation task and the ME2 dataset will catalyze further research on LLMs in education and promote their use as effective, explanation-oriented AI tutors.",0 "The conditional density characterizes the distribution of a response variable $y$ given a predictor $x$, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external predictor $x$ is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional $y$ and a latent $z$ that comprises two components \([z_P,z_N]\). The $z_P$ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for $x$, such as logistic/linear regression. The $z_N$ component is a high-dimensional independent Gaussian vector, which explains the variations in $y$ that are unrelated or only weakly related to $x$. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since $z_P$ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of $x$-related variations due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of $z$, such as Gaussian mixture, fails to generate interpretable results.",0 "Human Activity Recognition (HAR), which uses data from Inertial Measurement Unit (IMU) sensors, has many practical applications in healthcare and assisted living environments. However, its use in real-world scenarios has been limited by the lack of comprehensive IMU-based HAR datasets that cover a wide range of activities and the lack of transparency in existing HAR models. Zero-shot HAR (ZS-HAR) overcomes the data limitations, but current models struggle to explain their decisions, making them less transparent. This paper introduces a novel IMU-based ZS-HAR model called the Self-Explainable Zero-shot Human Activity Recognition Network (SEZ-HARN). It can recognize activities not encountered during training and provide skeleton videos to explain its decision-making process. We evaluate the effectiveness of the proposed SEZ-HARN on four benchmark datasets (PAMAP2, DaLiAc, HTD-MHAD, and MHealth) and compare its performance against three state-of-the-art black-box ZS-HAR models. The experimental results demonstrate that SEZ-HARN produces realistic and understandable explanations while achieving competitive Zero-shot recognition accuracy. SEZ-HARN achieves a Zero-shot prediction accuracy within 3\% of the best-performing black-box model on PAMAP2 while maintaining comparable performance on the other three datasets.",0 "Most modern theoretical considerations of the physical world suggest that nature is: (1) field-theoretic, (2) smooth, (3) local, (4) gauged, (5) containing fermions, and (6) non-perturbative. 
Tautologous as this may sound to experts, it is remarkable that the mathematical notion of geometry which reflects all of these aspects - namely, ``supergeometric homotopy theory'' - has received little attention. Elaborate algebraic machinery is known for perturbative field theories both at the classical and quantum level, but to tackle the deep open questions of the subject, these will need to be lifted to a global geometry of physics. Our aim in this series is to introduce inclined physicists to this theory, to fill mathematical gaps in the existing literature, and to rigorously develop the full power of supergeometric homotopy theory and apply it to the analysis of fermionic (not necessarily super-symmetric) field theories. Secondarily, this will also lead to a streamlined and rigorous perspective that we hope will also be desirable to mathematicians. In this first part, we explain how classical bosonic Lagrangian field theory (variational Euler-Lagrange theory) finds a natural home in the ``topos of smooth sets'', thereby neatly setting the scene for the higher supergeometry discussed in later parts of the series. This introductory material will be largely known to a few experts but has never been comprehensively laid out before. A key technical point we make is to regard jet bundle geometry systematically in smooth sets instead of just its subcategories of diffeological spaces or even Fr{\'e}chet manifolds -- or worse simply as a formal object. Besides being more transparent and powerful, it is only on this backdrop that a reasonable supergeometric jet geometry exists, needed for satisfactory discussion of any field theory with fermions.",0 "This paper investigates the utilization of Quantum Computing and Neuromorphic Computing for Safe, Reliable, and Explainable Multi-Agent Reinforcement Learning (MARL) in the context of optimal control in autonomous robotics. The objective was to address the challenges of optimizing the behavior of autonomous agents while ensuring safety, reliability, and explainability. Quantum Computing techniques, including the Quantum Approximate Optimization Algorithm (QAOA), were employed to efficiently explore large solution spaces and find approximate solutions to complex MARL problems. Neuromorphic Computing, inspired by the architecture of the human brain, provided parallel and distributed processing capabilities, which were leveraged to develop intelligent and adaptive systems. The combination of these technologies held the potential to enhance the safety, reliability, and explainability of MARL in autonomous robotics. This research contributed to the advancement of autonomous robotics by exploring cutting-edge technologies and their applications in multi-agent systems. Codes and data are available.",0 "A central goal of cognitive science is to provide a computationally explicit account of both the structure of the mind and its development: what are the primitive representational building blocks of cognition, what are the rules via which those primitives combine, and where do these primitives and rules come from in the first place? A long-standing debate concerns the adequacy of artificial neural networks as computational models that can answer these questions, in particular in domains related to abstract cognitive function, such as language and logic. This paper argues that recent advances in neural networks -- specifically, the advent of large language models (LLMs) -- represent an important shift in this debate. 
We test a variety of LLMs on an existing experimental paradigm used for studying the induction of rules formulated over logical concepts. Across four experiments, we find converging empirical evidence that LLMs provide at least as good a fit to human behavior as models that implement a Bayesian probabilistic language of thought (pLoT), which have been the best computational models of human behavior on the same task. Moreover, we show that the LLMs make qualitatively different predictions about the nature of the rules that are inferred and deployed in order to complete the task, indicating that the LLM is unlikely to be a mere implementation of the pLoT solution. Based on these results, we argue that LLMs may instantiate a novel theoretical account of the primitive representations and computations necessary to explain human logical concepts, with which future work in cognitive science should engage.",0 "In AI-facilitated teaching, leveraging various query styles to interpret abstract educational content is crucial for delivering effective and accessible learning experiences. However, existing retrieval systems predominantly focus on natural text-image matching and lack the capacity to address the diversity and ambiguity inherent in real-world educational scenarios. To address this limitation, we develop a lightweight and efficient multi-modal retrieval module, named Uni-Retrieval, which extracts query-style prototypes and dynamically matches them with tokens from a continually updated Prompt Bank. This Prompt Bank encodes and stores domain-specific knowledge by leveraging a Mixture-of-Expert Low-Rank Adaptation (MoE-LoRA) module and can be adapted to enhance Uni-Retrieval's capability to accommodate unseen query types at test time. To enable natural language educational content generation, we integrate the original Uni-Retrieval with a compact instruction-tuned language model, forming a complete retrieval-augmented generation pipeline named Uni-RAG. Given a style-conditioned query, Uni-RAG first retrieves relevant educational materials and then generates human-readable explanations, feedback, or instructional content aligned with the learning objective. Experimental results on SER and other multi-modal benchmarks show that Uni-RAG outperforms baseline retrieval and RAG systems in both retrieval accuracy and generation quality, while maintaining low computational cost. Our framework provides a scalable, pedagogically grounded solution for intelligent educational systems, bridging retrieval and generation to support personalized, explainable, and efficient learning assistance across diverse STEM scenarios.",0 "Many of us now treat LLMs as modern-day oracles, asking them almost any kind of question. However, consulting an LLM does not have to be a single-turn activity. Yet long multi-turn interactions can get tedious if they serve simply to clarify contextual information that can be arrived at through reasoning. In this paper, we examine the use of an agent-based architecture to bolster LLM-based Question-Answering systems with additional reasoning capabilities. We examine the automatic resolution of potential incompleteness or ambiguities in questions by transducers implemented using LLM-based agents. We focus on several benchmark datasets that are known to contain questions with these deficiencies to varying degrees. We equip different LLMs (GPT-3.5-Turbo and Llama-4-Scout) with agents that act as specialists in detecting and resolving deficiencies of incompleteness and ambiguity. 
The agents are implemented as zero-shot ReAct agents. Rather than producing an answer in a single step, the model now decides between three actions: a) classify, b) resolve, or c) answer. Action a) decides if the question is incomplete, ambiguous, or normal. Action b) determines if any deficiencies identified can be resolved. Action c) answers the resolved form of the question. We compare the use of LLMs with and without the use of agents with these components. Our results show the benefits of agents with transducers: 1) a shortening of the length of interactions with the human, 2) an improvement in answer quality, and 3) explainable resolution of deficiencies in the question. On the negative side, we find that the approach may result in additional LLM invocations and, in some cases, increased latency. On the tested datasets, however, the benefits outweigh the costs except when questions already have sufficient context, suggesting the agent-based approach could be a useful mechanism to harness the power of LLMs to develop more robust QA systems.",0 "Understanding how individuals perceive and react to information is fundamental for advancing social and behavioral sciences and developing human-centered AI systems. Current approaches often lack the granular data needed to model these personalized responses, relying instead on aggregated labels that obscure the rich variability driven by individual differences. We introduce iNews, a novel large-scale dataset specifically designed to facilitate the modeling of personalized affective responses to news content. Our dataset comprises annotations from 291 demographically diverse UK participants across 2,899 multimodal Facebook news posts from major UK outlets, with an average of 5.18 annotators per sample. For each post, annotators provide multifaceted labels including valence, arousal, dominance, discrete emotions, content relevance judgments, sharing likelihood, and modality importance ratings. Crucially, we collect comprehensive annotator persona information covering demographics, personality, media trust, and consumption patterns, which explain 15.2% of annotation variance - substantially higher than existing NLP datasets. Incorporating this information yields a 7% accuracy gain in zero-shot prediction and remains beneficial even with 32-shot in-context learning. iNews opens new possibilities for research in LLM personalization, subjectivity, affective computing, and human behavior simulation.",2 "Students disengaging from their tasks can have serious long-term consequences, including academic drop-out. This is particularly relevant for students in distance education. One way to measure the level of disengagement in distance education is to observe participation in non-mandatory exercises in different online courses. In this paper, we detect student disengagement in the non-mandatory quizzes of 42 courses in four semesters from a distance-based university. We carefully identified the most informative student log data that could be extracted and processed from Moodle. Then, eight machine learning algorithms were trained and compared to obtain the highest possible prediction accuracy. Using the SHAP method, we developed an explainable machine learning framework that allows practitioners to better understand the decisions of the trained algorithm. The experimental results show a balanced accuracy of 91\%, where about 85\% of disengaged students were correctly detected. 
On top of the highly predictive performance and explainable framework, we provide a discussion on how to design a timely intervention to minimise disengagement from voluntary tasks in online learning.",1 "CD8+ ""killer"" T cells and CD4+ ""helper"" T cells play a central role in the adaptive immune system by recognizing antigens presented by Major Histocompatibility Complex (pMHC) molecules via T Cell Receptors (TCRs). Modeling binding between T cells and the pMHC complex is fundamental to understanding basic mechanisms of human immune response as well as in developing therapies. While transformer-based models such as TULIP have achieved impressive performance in this domain, their black-box nature precludes interpretability and thus limits a deeper mechanistic understanding of T cell response. Most existing post-hoc explainable AI (XAI) methods are confined to encoder-only, co-attention, or model-specific architectures and cannot handle encoder-decoder transformers used in TCR-pMHC modeling. To address this gap, we propose Quantifying Cross-Attention Interaction (QCAI), a new post-hoc method designed to interpret the cross-attention mechanisms in transformer decoders. Quantitative evaluation is a challenge for XAI methods; we have compiled TCR-XAI, a benchmark consisting of 274 experimentally determined TCR-pMHC structures to serve as ground truth for binding. Using these structures we compute physical distances between relevant amino acid residues in the TCR-pMHC interaction region and evaluate how well our method and others estimate the importance of residues in this region across the dataset. We show that QCAI achieves state-of-the-art performance on both interpretability and prediction accuracy under the TCR-XAI benchmark.",0 "Large language models (LLMs) exhibit strikingly conflicting behaviors: they can appear steadfastly overconfident in their initial answers whilst at the same time being prone to excessive doubt when challenged. To investigate this apparent paradox, we developed a novel experimental paradigm, exploiting the unique ability to obtain confidence estimates from LLMs without creating memory of their initial judgments -- something impossible in human participants. We show that LLMs -- Gemma 3, GPT4o and o1-preview -- exhibit a pronounced choice-supportive bias that reinforces and boosts their estimate of confidence in their answer, resulting in a marked resistance to change their mind. We further demonstrate that LLMs markedly overweight inconsistent compared to consistent advice, in a fashion that deviates qualitatively from normative Bayesian updating. Finally, we demonstrate that these two mechanisms -- a drive to maintain consistency with prior commitments and hypersensitivity to contradictory feedback -- parsimoniously capture LLM behavior in a different domain. Together, these findings furnish a mechanistic account of LLM confidence that explains both their stubbornness and excessive sensitivity to criticism.",0 "The wave description of geometric phase uses the superposition of light waves to explain the geometric phase's origin. While our previous work focused on a basis of linearly polarized waves, here we show that the same concepts can be applied to circularly polarized waves, and to any case in which a rotator is itself subjected to rotation. 
As with a linear polarization basis, we show that the addition of two vectors (rotators) with different orientations and magnitudes causes the orientation of the resulting vector to shift towards the component vector of greater magnitude, i.e. it introduces a geometric phase. We illustrate this approach with two classic examples of the geometric phase of rotations in space: a system of three fold mirrors, and the helical coiled fiber. In both cases we show that it is possible to derive the phase shift directly from the electromagnetic wave vector without needing to resort to mathematical abstractions such as differential geometry, or calculating solid angles in the space of directions.",0 "In the field of Human-Robot Interaction (HRI), a fundamental challenge is to facilitate human understanding of robots. The emerging domain of eXplainable HRI (XHRI) investigates methods to generate explanations and evaluate their impact on human-robot interactions. Previous works have highlighted the need to personalise the level of detail of these explanations to enhance usability and comprehension. Our paper presents a framework designed to update and retrieve user knowledge-memory models, allowing for adapting the explanations' level of detail while referencing previously acquired concepts. Three architectures based on our proposed framework that use Large Language Models (LLMs) are evaluated in two distinct scenarios: a hospital patrolling robot and a kitchen assistant robot. Experimental results demonstrate that a two-stage architecture, which first generates an explanation and then personalises it, is the framework architecture that effectively reduces the level of detail only when there is related user knowledge.",0 "Human-centered explainability has become a critical foundation for the responsible development of interactive information systems, where users must be able to understand, interpret, and scrutinize AI-driven outputs to make informed decisions. This systematic survey of literature aims to characterize recent progress in user studies on explainability in interactive information systems by reviewing how explainability has been conceptualized, designed, and evaluated in practice. Following PRISMA guidelines, eight academic databases were searched, and 100 relevant articles were identified. A structural encoding approach was then utilized to extract and synthesize insights from these articles. The main contributions include 1) five dimensions that researchers have used to conceptualize explainability and 2) a categorization of explainability measurements into six user-centered dimensions. The review concludes by reflecting on ongoing challenges and providing recommendations for future exploration of related issues. The findings shed light on the theoretical foundations of human-centered explainability, informing the design of interactive information systems that better align with diverse user needs and promoting the development of systems that are transparent, trustworthy, and accountable.",0 "An Explainable Boosting Machine (EBM) is an interpretable machine learning (ML) algorithm that has benefits in high-risk applications but has not yet found much use in atmospheric science. The overall goal of this work is twofold: (1) explore the use of EBMs, in combination with feature engineering, to obtain interpretable, physics-based machine learning algorithms for meteorological applications; (2) illustrate these methods for the detection of overshooting tops (OTs) in satellite imagery. 
Specifically, we seek to simplify the process of OT detection by first using mathematical methods to extract key features, such as cloud texture using Gray-Level Co-occurrence Matrices, followed by applying an EBM. Our EBM focuses on the classification task of predicting OT regions, utilizing Channel 2 (visible imagery) and Channel 13 (infrared imagery) of the Advanced Baseline Imager sensor of the Geostationary Operational Environmental Satellite 16. Multi-Radar/Multi-Sensor system convection flags are used as labels to train the EBM model. Note, however, that detecting convection, while related, is different from detecting OTs. Once trained, the EBM was examined and minimally altered to more closely match strategies used by domain scientists to identify OTs. The result of our efforts is a fully interpretable ML algorithm that was developed in a human-machine collaboration. While the final model does not reach the accuracy of more complex approaches, it performs well and represents a significant step toward building fully interpretable ML algorithms for this and other meteorological applications.",0 "Insomnia affects a vast population of the world and can have a wide range of causes. Existing treatments for insomnia have been linked with many side effects like headaches, dizziness, etc. As such, there is a clear need for improved insomnia treatment. Brain modelling has helped with assessing the effects of brain pathology on brain network dynamics and with supporting clinical decisions in the treatment of Alzheimer's disease, epilepsy, etc. However, such models have not been developed for insomnia. Therefore, this project attempts to understand the characteristics of the brain of individuals experiencing insomnia using continuous long-duration EEG data. Brain networks are derived based on functional connectivity and spatial distance between EEG channels. The power spectral density of the channels is then computed for the major brain wave frequency bands. A graph convolutional neural network (GCNN) model is then trained to capture the functional characteristics associated with insomnia and configured for the classification task to judge performance. Results indicated a 50-second non-overlapping sliding window was the most suitable choice for EEG segmentation. This approach achieved a classification accuracy of 70% at window level and 68% at subject level. Additionally, the omission of EEG channels C4-P4, F4-C4 and C4-A1 caused higher degradation in model performance than the removal of other channels. These channel electrodes are positioned near brain regions known to exhibit atypical levels of functional connectivity in individuals with insomnia, which can explain such results.",0 "Large Language Models (LLMs) have proven immensely beneficial in education by capturing vast amounts of literature-based information, allowing them to generate context without relying on external sources. In this paper, we propose a generative AI-powered GATE question-answering framework (GATE stands for Graduate Aptitude Test in Engineering) that leverages LLMs to explain GATE solutions and support students in their exam preparation. We conducted extensive benchmarking to select the optimal embedding model and LLM, evaluating our framework based on criteria such as latency, faithfulness, and relevance, with additional validation through human evaluation. Our chatbot integrates state-of-the-art embedding models and LLMs to deliver accurate, context-aware responses. 
Through rigorous experimentation, we identified configurations that balance performance and computational efficiency, ensuring a reliable chatbot to serve students' needs. Additionally, we discuss the challenges faced in data processing and modeling and the solutions we implemented. Our work explores the application of Retrieval-Augmented Generation (RAG) for GATE Q/A explanation tasks, and our findings demonstrate significant improvements in retrieval accuracy and response quality. This research offers practical insights for developing effective AI-driven educational tools while highlighting areas for future enhancement in usability and scalability.",0 "Estimation of model uncertainty can help improve the explainability of Graph Convolutional Networks and the accuracy of the models at the same time. Uncertainty can also be used in critical applications to verify the results of the model by an expert or additional models. In this paper, we propose Variational Neural Network versions of spatial and spatio-temporal Graph Convolutional Networks. We estimate uncertainty in both outputs and layer-wise attentions of the models, which has the potential for improving model explainability. We showcase the benefits of these models in the social trading analysis and the skeleton-based human action recognition tasks on the Finnish board membership, NTU-60, NTU-120 and Kinetics datasets, where we show improvement in model accuracy in addition to estimated model uncertainties.",0 "When robots perform complex and context-dependent tasks in our daily lives, deviations from expectations can confuse users. Explanations of the robot's reasoning process can help users to understand the robot's intentions. However, when to provide explanations and what they should contain are important considerations for avoiding user annoyance. We have investigated user preferences for explanation demand and content for a robot that helps with daily cleaning tasks in a kitchen. Our results show that users want explanations in surprising situations and prefer concise explanations that clearly state the intention behind the confusing action and the contextual factors that were relevant to this decision. Based on these findings, we propose two algorithms to identify surprising actions and to construct effective explanations for Belief-Desire-Intention (BDI) robots. Our algorithms can be easily integrated into the BDI reasoning process and pave the way for better human-robot interaction with context- and user-specific explanations.",0 "The recent boom of large language models (LLMs) has re-ignited the hope that artificial intelligence (AI) systems could aid medical diagnosis. Yet despite dazzling benchmark scores, LLM assistants have yet to deliver measurable improvements at the bedside. This scoping review aims to highlight the areas where AI is limited in making practical contributions in the clinical setting, specifically in dementia diagnosis and care. Standalone machine-learning models excel at pattern recognition but seldom provide actionable, interpretable guidance, eroding clinician trust. Adjacent use of LLMs by physicians did not result in better diagnostic accuracy or speed. Key limitations trace to the data-driven paradigm: black-box outputs which lack transparency, vulnerability to hallucinations, and weak causal reasoning. Hybrid approaches that combine statistical learning with expert rule-based knowledge, and that involve clinicians throughout the process, help bring back interpretability. 
They also fit better with existing clinical workflows, as seen in examples like PEIRS and ATHENA-CDS. Future decision-support should prioritise explanatory coherence by linking predictions to clinically meaningful causes. This can be done through neuro-symbolic or hybrid AI that combines the language ability of LLMs with human causal expertise. AI researchers have addressed this direction, with explainable AI and neuro-symbolic AI being the next logical steps in further advancement in AI. However, they are still based on data-driven knowledge integration instead of human-in-the-loop approaches. Future research should measure success not only by accuracy but by improvements in clinician understanding, workflow fit, and patient outcomes. A better understanding of what helps improve human-computer interactions is greatly needed for AI systems to become part of clinical practice.",2 "In this paper, we discuss the capability of large language models to ground their answers and provide proper references when dealing with the legal matters of a non-English and non-Chinese speaking country. We discuss the history of legal information retrieval and the difference between case law and statute law and its impact on legal tasks, and analyze the latest research in this field. Based on that background, we introduce gAIus, the architecture of a cognitive LLM-based agent whose responses are based on knowledge retrieved from a specific legal act, the Polish Civil Code. We propose a retrieval mechanism which is more explainable and human-friendly and achieves better results than embedding-based approaches. To evaluate our method, we create a special dataset based on single-choice questions from entrance exams for law apprenticeships conducted in Poland. The proposed architecture critically leveraged the abilities of the underlying large language models, improving the score of gpt-3.5-turbo-0125 by 419%, allowing it to beat gpt-4o, and lifting the gpt-4o-mini score from 31% to 86%. At the end of our paper, we show the possible future path of research and potential applications of our findings.",0 "There is growing interest in explainable recommender systems that provide recommendations along with explanations for the reasoning behind them. When evaluating recommender systems, most studies focus on overall recommendation performance. Only a few assess the quality of the explanations. Explanation quality is often evaluated through user studies that subjectively gather users' opinions on representative explanatory factors that shape end-users' perspective towards the results, rather than about the explanation content itself. We aim to fill this gap by developing an objective metric to evaluate Veracity: the information quality of explanations. Specifically, we decompose Veracity into two dimensions: Fidelity and Attunement. Fidelity refers to whether the explanation includes accurate information about the recommended item. Attunement evaluates whether the explanation reflects the target user's preferences. By applying signal detection theory, we first determine decision outcomes for each dimension and then combine them to calculate a sensitivity, which serves as the final Veracity value. To assess the effectiveness of the proposed metric, we set up four cases with varying levels of information quality to validate whether our metric can accurately capture differences in quality. 
The results provided meaningful insights into the effectiveness of our proposed metric.",2 "Sound source localization (SSL) adds a spatial dimension to auditory perception, allowing a system to pinpoint the origin of speech, machinery noise, warning tones, or other acoustic events, capabilities that facilitate robot navigation, human-machine dialogue, and condition monitoring. While existing surveys provide valuable historical context, they typically address general audio applications and do not fully account for robotic constraints or the latest advancements in deep learning. This review addresses these gaps by offering a robotics-focused synthesis, emphasizing recent progress in deep learning methodologies. We start by reviewing classical methods such as Time Difference of Arrival (TDOA), beamforming, Steered-Response Power (SRP), and subspace analysis. Subsequently, we delve into modern machine learning (ML) and deep learning (DL) approaches, discussing traditional ML and neural networks (NNs), convolutional neural networks (CNNs), convolutional recurrent neural networks (CRNNs), and emerging attention-based architectures. The data and training strategy that are the two cornerstones of DL-based SSL are explored. Studies are further categorized by robot types and application domains to facilitate researchers in identifying relevant work for their specific contexts. Finally, we highlight the current challenges in SSL works in general, regarding environmental robustness, sound source multiplicity, and specific implementation constraints in robotics, as well as data and learning strategies in DL-based SSL. Also, we sketch promising directions to offer an actionable roadmap toward robust, adaptable, efficient, and explainable DL-based SSL for next-generation robots.",2 "Accelerometers produce enormous amounts of data. Research that incorporates such data often involves a derived summary metric to describe physical activity. Traditional metrics have often ignored the temporal nature of the data. We build on previous work that applies unsupervised machine learning techniques to describe physical activity patterns over time. Specifically, we evaluate a summary measure of accelerometer data derived from unsupervised clustering in a regression framework through comparisons with other traditional measures: duration of time spent in different activity intensity states, Time Active Mean (TAM), Time Active Variability (TAV), Activity Intensity Mean (AIM), and Activity Intensity Variability (AIV) using data from 268 children participating in the Stanford GOALS trial. The proportion of variation explained by the new measure was comparable to that of traditional measures across regressions of three pre-specified clinical outcomes (waist circumference, fasting insulin levels, and fasting triglyceride levels). For example, cluster membership explained 25%, 11%, and 6% of the variation in waist circumference, fasting insulin levels, and fasting triglyceride levels whereas TAM explained 25%, 10%, and 6% for these same outcomes. Importantly, however, there are challenges when regressing an outcome on a variable derived from unsupervised machine learning techniques, particularly regarding replicability. This includes the processing involved in deriving the variable as well as the machine learning approach itself. 
While these remain open topics to resolve, our findings demonstrate the promise of a new summary measure that enables addressing questions involving a temporal component that other traditional summary metrics do not reflect.",0 "Asymmetric partition of fate determinants during cell division is a hallmark of cell differentiation. Recent work suggested that such a mechanism is hijacked by cancer cells to increase both their phenotypic heterogeneity and plasticity and in turn their fitness. To quantify fluctuations in the partitioning of cellular elements, imaging-based approaches are used, whose accuracy is limited by the difficulty of detecting cell divisions. Our work addresses this gap by proposing a general method based on high-throughput flow cytometry measurements coupled with a theoretical framework. We applied our method to a panel of both normal and cancerous human colon cells, showing that different kinds of colon adenocarcinoma cells display very distinct extents of fluctuations in their cytoplasm partition, explained by an asymmetric division of their size. To test the accuracy of our population-level protocol, we directly measure the inherited fractions of cellular elements from extensive time-lapses of live-cell laser scanning microscopy, finding excellent agreement across the cell types. Ultimately, our flow cytometry-based method promises to be accurate and easily applicable to a wide range of biological systems where the quantification of partition fluctuations would help account for the observed phenotypic heterogeneity and plasticity.",0 "Recent advances in large language models and vision-language models have led to growing interest in explainable evaluation metrics for image captioning. However, these metrics generate explanations without standardized criteria, and the overall quality of the generated explanations remains unverified. In this paper, we propose EXPERT, a reference-free evaluation metric that provides structured explanations based on three fundamental criteria: fluency, relevance, and descriptiveness. By constructing large-scale datasets of high-quality structured explanations, we develop a two-stage evaluation template to effectively supervise a vision-language model for both scoring and explanation generation. EXPERT achieves state-of-the-art results on benchmark datasets while providing significantly higher-quality explanations than existing metrics, as validated through comprehensive human evaluation. Our code and datasets are available at https://github.com/hjkim811/EXPERT.",2 "Transparency and interpretability are crucial for enhancing customer confidence and user engagement, especially when dealing with black-box Machine Learning (ML)-based recommendation systems. Modern recommendation systems leverage Graph Neural Networks (GNNs) due to their ability to produce high-quality recommendations in terms of both relevance and diversity. Therefore, the explainability of GNNs is especially important for Link Prediction (LP) tasks since recommending relevant items can be viewed as predicting links between users and items. GNN explainability has been a well-studied field, but existing methods primarily focus on node or graph-level tasks, leaving a gap in LP explanation techniques. This work introduces Z-REx, a GNN explanation framework designed explicitly for heterogeneous link prediction tasks. 
Z-REx utilizes structural and attribute perturbation to identify critical substructures and important features while reducing the search space by leveraging domain-specific knowledge. In our experimentation, we show the efficacy of Z-REx in generating contextually relevant and human-interpretable explanations for ZiGNN, a GNN-based recommendation engine, using a real-world real-estate dataset from Zillow Group, Inc. We compare against State-of-The-Art (SOTA) GNN explainers to show Z-REx outperforms them by 61% in the Fidelity metric by producing superior human-interpretable explanations.",0 "Sparse Autoencoders (SAEs) have been successfully used to probe Large Language Models (LLMs) and extract interpretable concepts from their internal representations. These concepts are linear combinations of neuron activations that correspond to human-interpretable features. In this paper, we investigate the effectiveness of SAE-based explainability approaches for sentence classification, a domain where such methods have not been extensively explored. We present a novel SAE-based architecture tailored for text classification, leveraging a specialized classifier head and incorporating an activation rate sparsity loss. We benchmark this architecture against established methods such as ConceptShap, Independent Component Analysis, and other SAE-based concept extraction techniques. Our evaluation covers two classification benchmarks and four fine-tuned LLMs from the Pythia family. We further enrich our analysis with two novel metrics for measuring the precision of concept-based explanations, using an external sentence encoder. Our empirical results show that our architecture improves both the causality and interpretability of the extracted features.",0 "The proliferation of disinformation challenges traditional, unscalable editorial processes and existing automated systems that prioritize engagement over public service values. To address this, we introduce the Public Service Algorithm (PSA), a novel framework using Large Language Models (LLMs) for scalable, transparent content curation based on Public Service Media (PSM) inspired values. Utilizing a large multilingual news dataset from the 'A European Perspective' project, our experiment directly compared article ratings from a panel of experienced editors from various European PSMs, with those from several LLMs, focusing on four criteria: diversity, in-depth analysis, forward-looking, and cross-border relevance. Utilizing criterion-specific prompts, our results indicate a promising alignment between human editorial judgment and LLM assessments, demonstrating the potential of LLMs to automate value-driven curation at scale without sacrificing transparency. This research constitutes a first step towards a scalable framework for the automatic curation of trustworthy news content.",0 "Large-scale vision-language models (VLMs), such as CLIP, have achieved remarkable success in zero-shot learning (ZSL) by leveraging large-scale visual-text pair datasets. However, these methods often lack interpretability, as they compute the similarity between an entire query image and the embedded category words, making it difficult to explain their predictions. One approach to address this issue is to develop interpretable models by integrating language, where classifiers are built using discrete attributes, similar to human perception. This introduces a new challenge: how to effectively align local visual features with corresponding attributes based on pre-trained VLMs. 
To tackle this, we propose LaZSL, a locally-aligned vision-language model for interpretable ZSL. LaZSL employs local visual-semantic alignment via optimal transport to perform interaction between visual regions and their associated attributes, facilitating effective alignment and providing interpretable similarity without the need for additional training. Extensive experiments demonstrate that our method offers several advantages, including enhanced interpretability, improved accuracy, and strong domain generalization. Codes available at: https://github.com/shiming-chen/LaZSL.",0 "Surprisingly compact substructures in galaxies and galaxy clusters, as well as in field halos, have been observed by gravitational lensing. They could be difficult to explain with collisionless dark matter (DM). To explain those objects, recent studies focused on the gravothermal collapse that halos consisting of self-interacting dark matter (SIDM) can undergo. However, simple models of elastic scattering could face problems explaining those compact objects during the very late stages of the collapse and the post-collapse phase, where a black hole may have formed from DM. We aim to explain compact halos while avoiding the gravothermal catastrophe to which typical SIDM models are subject. Therefore, we investigate the evolution of a DM halo for an SIDM model consisting of two species with unequal masses, which features only interactions between the different species but not within themselves. Employing $N$-body simulations, we study the effect of unequal-mass SIDM models on the evolution of an isolated DM halo. In particular, the late stages of its evolution with high central densities are simulated. We find that our two-species SIDM models can produce density cores with their size depending on the mass ratio of the two species. Moreover, mass segregation caused by the unequal particle masses leads to a finite final density state or at least a slowly growing density, which depends on the mass ratio and the mass fraction of the two DM species. SIDM models consisting of two DM species can simultaneously explain DM halos with density cores, as well as systems that are denser in their centre than expected from collisionless DM, while avoiding the gravothermal catastrophe. They are a compelling alternative to single-species models, offering a rich phenomenology.",0 "Large Language Models (LLMs) have demonstrated remarkable capabilities at solving complex reasoning tasks with Chain-of-Thought (CoT) prompting, but their decision-making processes remain somewhat of a black box. We introduce \textbf{inverse reasoning}, a novel paradigm enabling LLMs to decompose and explain their own reasoning chains post-hoc. Our approach, used in SAGE-nano, a 4-billion-parameter reasoning model, employs a metacognitive structure that reflects back via attention processes to identify major decision points and generate explanations of reasoning choices. While typical CoT approaches are directed towards forward reasoning generation, inverse reasoning provides insight into why specific reasoning chains were selected over others. Through thorough testing on logical reasoning puzzles, math problems, and ethical dilemmas from AQUA-RAT, CommonsenseQA, and customized benchmarks, we demonstrate that SAGE-nano is at the cutting edge for its task on both reasoning accuracy (74.6% on AQUA-RAT) and explanation quality (92.1% human preference score), and offers performance almost on par with models like Claude-3.5 Sonnet or GPT-4o. 
Our contributions are: (i) the first rigorous framework for LLM self-reflection via inverse reasoning, (ii) a novel metalearning framework to reverse the attention flow, (iii) comprehensive evaluation frameworks for reasoning transparency, and (iv) evidence that increasing reasoning using inverse reasoning improves interpretability along with reasoning performance. Our work creates new avenues for transparent AI systems and closes significant gaps in AI safety, education, and scientific discovery.",2 "As artificial intelligence systems increasingly operate in real-world environments, the integration of multi-modal data sources such as vision, language, and audio presents both unprecedented opportunities and critical challenges for achieving trustworthy intelligence. In this paper, we propose a novel framework that unifies federated learning with explainable multi-modal reasoning to ensure trustworthiness in decentralized, dynamic settings. Our approach, called FedMM-X (Federated Multi-Modal Explainable Intelligence), leverages cross-modal consistency checks, client-level interpretability mechanisms, and dynamic trust calibration to address challenges posed by data heterogeneity, modality imbalance, and out-of-distribution generalization. Through rigorous evaluation across federated multi-modal benchmarks involving vision-language tasks, we demonstrate improved performance in both accuracy and interpretability while reducing vulnerabilities to adversarial and spurious correlations. Further, we introduce a novel trust score aggregation method to quantify global model reliability under dynamic client participation. Our findings pave the way toward developing robust, interpretable, and socially responsible AI systems in real-world environments.",0 "Predictive modeling on tabular data is the cornerstone of many real-world applications. Although gradient boosting machines and some recent deep models achieve strong performance on tabular data, they often lack interpretability. On the other hand, large language models (LLMs) have demonstrated powerful capabilities to generate human-like reasoning and explanations, but underperform on tabular data prediction. In this paper, we propose a new approach that leverages reasoning-based LLMs, trained using reinforcement learning, to perform more accurate and explainable predictions on tabular data. Our method introduces custom reward functions that guide the model not only toward better prediction accuracy but also toward human-understandable reasons for its predictions. The proposed method is evaluated on financial benchmark datasets and compared against established LLMs.",0 "The Semantic Theory of Evolution (STE) takes the existence of a number of arbitrary communication codes as a fundamental feature of life, from the genetic code to human cultural communication codes. Their arbitrariness enables, at each level, the selection of one out of several possible correspondences along with the generation of meaning. STE enables more novelties to emerge and suggests a greater variety of potential life forms. With this paper I ground STE on physical theories of meaningful information. Furthermore, I show that key features of the arbitrary communication codes employed by living organisms can be expressed by means of Evidence Theory (ET). 
In particular, I adapt ET to organisms that merely react to sequences of stimuli, explain its basics for organisms that are capable of prediction, and illustrate an unconventional version suitable for the most intricate communication codes employed by humans. Finally, I express the natural trend towards ambiguity reduction in terms of information entropy minimization along with thermodynamic entropy maximization.",0 "Scientific sketches (e.g., models) offer a powerful lens into students' conceptual understanding, yet AI-powered automated assessment of such free-form, visually diverse artifacts remains a critical challenge. Existing solutions often treat sketch evaluation as an image classification task or rely on monolithic vision-language models, which lack interpretability, pedagogical alignment, and adaptability across cognitive levels. To address these limitations, we present SketchMind, a cognitively grounded, multi-agent framework for evaluating and improving student-drawn scientific sketches. SketchMind comprises modular agents responsible for rubric parsing, sketch perception, cognitive alignment, and iterative feedback with sketch modification, enabling personalized and transparent evaluation. We evaluate SketchMind on a curated dataset of 3,575 student-generated sketches across six science assessment items, each targeting a different highest order of Bloom's taxonomy, that require students to draw models to explain phenomena. Compared to baseline GPT-4o performance without SRG (average accuracy: 55.6%), SRG integration achieves 77.1% average accuracy (+21.4% average absolute gain). We also demonstrate that multi-agent orchestration with SRG enhances SketchMind performance; for example, GPT-4.1 gains an average 8.9% increase in sketch prediction accuracy, outperforming single-agent pipelines across all items. Human evaluators rated the feedback and co-created sketches generated by \textsc{SketchMind} with GPT-4.1, which achieved an average of 4.1 out of 5, significantly higher than those of baseline models (e.g., 2.3 for GPT-4o). Experts noted the system's potential to meaningfully support conceptual growth through guided revision. Our code and (pending approval) dataset will be released to support reproducibility and future research in AI-driven education.",0 "Symptom Checkers (SCs) provide medical information tailored to user symptoms. A critical challenge in SC development is preventing unexpected performance degradation for individual diseases, especially rare diseases, when updating algorithms. This risk stems from the lack of practical pre-deployment evaluation methods. For rare diseases, obtaining sufficient evaluation data from user feedback is difficult. To evaluate the impact of algorithm updates on the diagnostic performance for individual rare diseases before deployment, this study proposes and validates a novel Synthetic Vignette Simulation Approach. This approach aims to enable this essential evaluation efficiently and at a low cost. To estimate the impact of algorithm updates, we generated synthetic vignettes from disease-phenotype annotations in the Human Phenotype Ontology (HPO), a publicly available knowledge base for rare diseases curated by experts. Using these vignettes, we simulated SC interviews to predict changes in diagnostic performance. The effectiveness of this approach was validated retrospectively by comparing the predicted changes with actual performance metrics using the R-squared ($R^2$) coefficient. 
Our experiment, covering eight past algorithm updates for rare diseases, showed that the proposed method accurately predicted performance changes for diseases with phenotype frequency information in HPO (n=5). For these updates, we found a strong correlation for both Recall@8 change ($R^2$ = 0.83, $p$ = 0.031) and Precision@8 change ($R^2$ = 0.78, $p$ = 0.047). Our proposed method enables the pre-deployment evaluation of SC algorithm changes for individual rare diseases. This evaluation is based on a publicly available medical knowledge database created by experts, ensuring transparency and explainability for stakeholders. Additionally, SC developers can efficiently improve diagnostic performance at a low cost.",2 "Artificial intelligence is humanity's most promising technology because of the remarkable capabilities offered by foundation models. Yet, the same technology brings confusion and consternation: foundation models are poorly understood and they may precipitate a wide array of harms. This dissertation explains how technology and society coevolve in the age of AI, organized around three themes. First, the conceptual framing: the capabilities, risks, and the supply chain that grounds foundation models in the broader economy. Second, the empirical insights that enrich the conceptual foundations: transparency created via evaluations at the model level and indexes at the organization level. Finally, the transition from understanding to action: superior understanding of the societal impact of foundation models advances evidence-based AI policy. Viewed together, this dissertation makes inroads into achieving better societal outcomes in the age of AI by building the scientific foundations and research-policy interface required for better AI governance.",0 "Modeling task-driven attention in driving is a fundamental challenge for both autonomous vehicles and cognitive science. Existing methods primarily predict where drivers look by generating spatial heatmaps, but fail to capture the cognitive motivations behind attention allocation in specific contexts, which limits deeper understanding of attention mechanisms. To bridge this gap, we introduce Explainable Driver Attention Prediction, a novel task paradigm that jointly predicts spatial attention regions (where), parses attended semantics (what), and provides cognitive reasoning for attention allocation (why). To support this, we present W3DA, the first large-scale explainable driver attention dataset. It enriches existing benchmarks with detailed semantic and causal annotations across diverse driving scenarios, including normal conditions, safety-critical situations, and traffic accidents. We further propose LLada, a Large Language Model-driven framework for driver attention prediction, which unifies pixel modeling, semantic parsing, and cognitive reasoning within an end-to-end architecture. Extensive experiments demonstrate the effectiveness of LLada, exhibiting robust generalization across datasets and driving conditions. This work serves as a key step toward a deeper understanding of driver attention mechanisms, with significant implications for autonomous driving, intelligent driver training, and human-computer interaction.",0 "This paper investigates how collaborative AI systems can enhance user agency in identifying and evaluating misinformation on social media platforms. Traditional methods, such as personal judgment or basic fact-checking, often fall short when faced with emotionally charged or context-deficient content. 
To address this, we designed and evaluated an interactive interface that integrates collaborative AI features, including real-time explanations, source aggregation, and debate-style interaction. These elements aim to support critical thinking by providing contextual cues and argumentative reasoning in a transparent, user-centered format. In a user study with 14 participants, 79% found the debate mode more effective than standard chatbot interfaces, and the multiple-source view received an average usefulness rating of 4.6 out of 5. Our findings highlight the potential of context-rich, dialogic AI systems to improve media literacy and foster trust in digital information environments. We argue that future tools for misinformation mitigation should prioritize ethical design, explainability, and interactive engagement to empower users in a post-truth era.",2 "Can Visual Language Models (VLMs) effectively capture human visual preferences? This work addresses this question by training VLMs to think about preferences at test time, employing reinforcement learning methods inspired by DeepSeek R1 and OpenAI O1. Using datasets such as ImageReward and Human Preference Score v2 (HPSv2), our models achieve accuracies of 64.9% on the ImageReward test set (trained on ImageReward official split) and 65.4% on HPSv2 (trained on approximately 25% of its data). These results match traditional encoder-based models while providing transparent reasoning and enhanced generalization. This approach allows the use of not only rich VLM world knowledge, but also its potential to think, yielding interpretable outcomes that help decision-making processes. By demonstrating that human visual preferences can be reasoned about by current VLMs, we introduce efficient soft-reward strategies for image ranking, outperforming simplistic selection or scoring methods. This reasoning capability enables VLMs to rank arbitrary images-regardless of aspect ratio or complexity-thereby potentially amplifying the effectiveness of visual Preference Optimization. By reducing the need for extensive markup while improving reward generalization and explainability, our findings can be a strong milestone that will enhance text-to-vision models even further.",0 "Alignment of large language models (LLMs) with human values has recently garnered significant attention, with prominent examples including the canonical yet costly Reinforcement Learning from Human Feedback (RLHF) and the simple Direct Preference Optimization (DPO). In this work, we demonstrate that both RLHF and DPO can be interpreted from the perspective of mutual information (MI) maximization, uncovering a profound connection to contrastive learning. Within this framework, both RLHF and DPO can be viewed as methods that perform contrastive learning based on the positive and negative samples derived from the base model, leveraging the Donsker-Varadhan (DV) lower bound on MI (equivalently, the MINE estimator). This paradigm further explains why RLHF may not intrinsically incentivize reasoning capacities in LLMs beyond what is already present in the base model. Building on this perspective, we replace the DV/MINE bound with the Jensen-Shannon MI estimator and propose Mutual Information Optimization (MIO). Comprehensive theoretical analysis and extensive empirical evaluations demonstrate that MIO mitigates the late-stage decline in chosen-likelihood observed in DPO, achieving competitive or superior performance across various challenging reasoning and mathematical benchmarks. 
We will release the model and code upon acceptance.",0 "When using reinforcement learning (RL) to tackle physical control tasks, inductive biases that encode physics priors can help improve sample efficiency during training and enhance generalization in testing. However, the current practice of incorporating these helpful physics-informed inductive biases inevitably runs into significant manual labor and domain expertise, making them prohibitive for general users. This work explores a symbolic approach to distill physics-informed inductive biases into RL agents, where the physics priors are expressed in a domain-specific language (DSL) that is human-readable and naturally explainable. Yet, the DSL priors do not translate directly into an implementable policy due to partial and noisy observations and additional physical constraints in navigation tasks. To address this gap, we develop a physics-informed program-guided RL (PiPRL) framework with applications to indoor navigation. PiPRL adopts a hierarchical and modularized neuro-symbolic integration, where a meta symbolic program receives semantically meaningful features from a neural perception module, which form the bases for symbolic programming that encodes physics priors and guides the RL process of a low-level neural controller. Extensive experiments demonstrate that PiPRL consistently outperforms purely symbolic or neural policies and reduces training time by over 26% with the help of the program-based inductive biases.",0 "In artificial intelligence (AI), the complexity of many models and processes surpasses human understanding, making it challenging to determine why a specific prediction is made. This lack of transparency is particularly problematic in critical fields like healthcare, where trust in a model's predictions is paramount. As a result, the explainability of machine learning (ML) and other complex models has become a key area of focus. Efforts to improve model explainability often involve experimenting with AI systems and approximating their behavior through interpretable surrogate mechanisms. However, these procedures can be resource-intensive. Optimal design of experiments, which seeks to maximize the information obtained from a limited number of observations, offers promising methods for improving the efficiency of these explainability techniques. To demonstrate this potential, we explore Local Interpretable Model-agnostic Explanations (LIME), a widely used method introduced by Ribeiro et al. (2016). LIME provides explanations by generating new data points near the instance of interest and passing them through the model. While effective, this process can be computationally expensive, especially when predictions are costly or require many samples. LIME is highly versatile and can be applied to a wide range of models and datasets. In this work, we focus on models involving tabular data, regression tasks, and linear models as interpretable local approximations. By utilizing techniques from optimal design of experiments, we reduce the number of function evaluations of the complex model, thereby reducing the computational effort of LIME by a significant amount. We consider this modified version of LIME to be energy-efficient or ""green"".",0 "Anomaly detection is the task of identifying rarely occurring (i.e. abnormal or anomalous) samples that differ from almost all other samples in a dataset. As the patterns of abnormal samples are usually not known a priori, this task is highly challenging. 
Consequently, anomaly detection lies between semi- and unsupervised learning. The detection of anomalies in sound data, often called 'ASD' (Anomalous Sound Detection), is a sub-field that deals with the identification of new and yet unknown effects in acoustic recordings. It is of great importance for various applications in Industry 4.0. Here, vibrational or acoustic data are typically obtained from standard sensor signals used for predictive maintenance. Examples cover machine condition monitoring or quality assurance to track the state of components or products. However, the use of intelligent algorithms remains a controversial topic. Management generally aims for cost-reduction and automation, while quality and maintenance experts emphasize the need for human expertise and comprehensible solutions. In this work, we present an anomaly detection approach specifically designed for spectrograms. The approach is based on statistical evaluations and is theoretically motivated. In addition, it features intrinsic explainability, making it particularly suitable for applications in industrial settings. Thus, this algorithm is of relevance for applications in which black-box algorithms are unwanted or unsuitable.",0 "Large Language Models (LLMs) have played a pivotal role in advancing Artificial Intelligence (AI). However, despite their achievements, LLMs often struggle to explain their decision-making processes, making them a 'black box' and presenting a substantial challenge to explainability. This lack of transparency poses a significant obstacle to the adoption of LLMs in high-stakes domain applications, where interpretability is particularly essential. To overcome these limitations, researchers have developed various explainable artificial intelligence (XAI) methods that provide human-interpretable explanations for LLMs. However, a systematic understanding of these methods remains limited. To address this gap, this survey provides a comprehensive review of explainability techniques by categorizing XAI methods based on the underlying transformer architectures of LLMs: encoder-only, decoder-only, and encoder-decoder models. Then these techniques are examined in terms of their evaluation for assessing explainability, and the survey further explores how these explanations are leveraged in practical applications. Finally, it discusses available resources, ongoing research challenges, and future directions, aiming to guide continued efforts toward developing transparent and responsible LLMs.",2 "This article concerns the optimal choice of flat taxes on labor and capital income, and on consumption, in a tractable economic model in which agents are subject to idiosyncratic investment risk. We identify the tax rates which maximize welfare in stationary equilibrium while preserving tax revenue, finding that an increase in welfare equivalent to a permanent increase in consumption of nearly 7% can be achieved by only taxing capital income and consumption. The Domar-Musgrave effect explains cases where it is optimal to tax capital income. We characterize the dynamic response to the substitution of consumption taxation for labor income taxation.",0 "Although several post-hoc methods for explainable AI have been developed, most are static and neglect the user perspective, limiting their effectiveness for the target audience. In response, we developed the interactive explainable intelligent system called IXAII that offers explanations from four explainable AI methods: LIME, SHAP, Anchors, and DiCE. 
Our prototype provides tailored views for five user groups and gives users agency over the explanations' content and their format. We evaluated IXAII through interviews with experts and lay users. Our results indicate that IXAII, which provides different explanations with multiple visualization options, is perceived as helpful to increase transparency. By bridging the gaps between explainable AI methods, interactivity, and practical implementation, we provide a novel perspective on AI explanation practices and human-AI interaction.",2 "Recent progress in multimodal graph neural networks has demonstrated that augmenting atomic XYZ geometries with textual chemical descriptors can enhance predictive accuracy across a range of electronic and thermodynamic properties. However, naively appending large sets of heterogeneous descriptors often degrades performance on tasks sensitive to molecular shape or symmetry, and undermines interpretability. xChemAgents proposes a cooperative agent framework that injects physics-aware reasoning into multimodal property prediction. xChemAgents comprises two language-model-based agents: a Selector, which adaptively identifies a sparse, weighted subset of descriptors relevant to each target, and provides a natural language rationale; and a Validator, which enforces physical constraints such as unit consistency and scaling laws through iterative dialogue. On standard benchmark datasets, xChemAgents achieves up to a 22% reduction in mean absolute error over the state-of-the-art baselines, while producing faithful, human-interpretable explanations. Experiment results highlight the potential of cooperative, self-verifying agents to enhance both accuracy and transparency in foundation-model-driven materials science. The implementation and accompanying dataset are available at https://github.com/KurbanIntelligenceLab/xChemAgents.",0 "Concept-Based Models (CBMs) are a class of deep learning models that provide interpretability by explaining predictions through high-level concepts. These models first predict concepts and then use them to perform a downstream task. However, current CBMs offer interpretability only for the final task prediction, while the concept predictions themselves are typically made via black-box neural networks. To address this limitation, we propose Hierarchical Concept Memory Reasoner (H-CMR), a new CBM that provides interpretability for both concept and task predictions. H-CMR models relationships between concepts using a learned directed acyclic graph, where edges represent logic rules that define concepts in terms of other concepts. During inference, H-CMR employs a neural attention mechanism to select a subset of these rules, which are then applied hierarchically to predict all concepts and the final task. Experimental results demonstrate that H-CMR matches state-of-the-art performance while enabling strong human interaction through concept and model interventions. The former can significantly improve accuracy at inference time, while the latter can enhance data efficiency during training when background knowledge is available.",0 "Chatbots are increasingly integrated into people's lives and are widely used to help people. Recently, there has also been growing interest in the reverse direction-humans help chatbots-due to a wide range of benefits including better chatbot performance, human well-being, and collaborative outcomes. However, little research has explored the factors that motivate people to help chatbots. 
To address this gap, we draw on the Computers Are Social Actors (CASA) framework to examine how chatbot anthropomorphism-including human-like identity, emotional expression, and non-verbal expression-influences human empathy toward chatbots and their subsequent prosocial behaviors and intentions. We also explore people's own interpretations of their prosocial behaviors toward chatbots. We conducted an online experiment (N = 244) in which chatbots made mistakes in a collaborative image labeling task and explained the reasons to participants. We then measured participants' prosocial behaviors and intentions toward the chatbots. Our findings revealed that human identity and emotional expression of chatbots increased participants' prosocial behavior and intention toward chatbots, with empathy mediating these effects. Qualitative analysis further identified two motivations for participants' prosocial behaviors: empathy for the chatbot and perceiving the chatbot as human-like. We discuss the implications of these results for understanding and promoting human prosocial behaviors toward chatbots.",2 "Concept-based explainability methods use human-understandable intermediaries to produce explanations for machine learning models. These methods assume concept predictions can help understand a model's internal reasoning. In this work, we assess the degree to which such an assumption is true by analyzing whether concept predictors leverage ""relevant"" features to make predictions, a term we call locality. Concept-based models that fail to respect localities also fail to be explainable because concept predictions are based on spurious features, making the interpretation of the concept predictions vacuous. To assess whether concept-based models respect localities, we construct and use three metrics to characterize when models respect localities, complementing our analysis with theoretical results. Each of our metrics captures a different notion of perturbation and assesses whether perturbing ""irrelevant"" features impacts the predictions made by a concept predictor. We find that many concept-based models used in practice fail to respect localities because concept predictors cannot always clearly distinguish distinct concepts. Based on these findings, we propose suggestions for alleviating this issue.",0 "Healthcare professionals need effective ways to use, understand, and validate AI-driven clinical decision support systems. Existing systems face two key limitations: complex visualizations and a lack of grounding in scientific evidence. We present an integrated decision support system that combines interactive visualizations with a conversational agent to explain diabetes risk assessments. We propose a hybrid prompt handling approach combining fine-tuned language models for analytical queries with general Large Language Models (LLMs) for broader medical questions, a methodology for grounding AI explanations in scientific evidence, and a feature range analysis technique to support deeper understanding of feature contributions. We conducted a mixed-methods study with 30 healthcare professionals and found that the conversational interactions helped healthcare professionals build a clear understanding of model assessments, while the integration of scientific evidence calibrated trust in the system's decisions. 
Most participants reported that the system supported both patient risk evaluation and recommendation.",2 "Reinforcement learning has emerged as a powerful paradigm for post-training large language models (LLMs) to improve reasoning. Approaches like Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning with Verifiable Rewards (RLVR) have shown strong results, but they require extensive external supervision. We investigate an alternative class of methods, Reinforcement Learning from Internal Feedback (RLIF), which relies solely on intrinsic model-derived signals instead of external rewards. In particular, we leverage unsupervised reward proxies such as token-level entropy, trajectory-level entropy, and self-certainty. Our theoretical analysis shows these internal objectives are partially equivalent, and we empirically evaluate various RLIF strategies on challenging math reasoning benchmarks. Experimental results demonstrate that RLIF can boost the reasoning performance of base LLMs at the beginning phase of the training, matching or surpassing RLVR techniques on these tasks. However, when training progresses, performance degrades even below the model before training. Moreover, we find that RLIF yields little improvement for instruction-tuned models, indicating diminishing returns of intrinsic feedback once an LLM is already instruction-tuned. We further analyze this limitation by mixing model weights and explain the reason of RLIF's training behaviors, providing practical guidelines for integrating internal feedback signals into LLM training. We hope our analysis of internal feedback will inform more principled and effective strategies for LLM post-training.",0 "As artificial intelligence (AI) becomes increasingly central to healthcare, the demand for explainable and trustworthy models is paramount. Current report generation systems for chest X-rays (CXR) often lack mechanisms for validating outputs without expert oversight, raising concerns about reliability and interpretability. To address these challenges, we propose a novel multimodal framework designed to enhance the semantic alignment and localization accuracy of AI-generated medical reports. Our framework integrates two key modules: a Phrase Grounding Model, which identifies and localizes pathologies in CXR images based on textual prompts, and a Text-to-Image Diffusion Module, which generates synthetic CXR images from prompts while preserving anatomical fidelity. By comparing features between the original and generated images, we introduce a dual-scoring system: one score quantifies localization accuracy, while the other evaluates semantic consistency. This approach significantly outperforms existing methods, achieving state-of-the-art results in pathology localization and text-to-image alignment. The integration of phrase grounding with diffusion models, coupled with the dual-scoring evaluation system, provides a robust mechanism for validating report quality, paving the way for more trustworthy and transparent AI in medical imaging.",0 "Wearable biosensors have revolutionized human performance monitoring by enabling real-time assessment of physiological and biomechanical parameters. However, existing solutions lack the ability to simultaneously capture breath-force coordination and muscle activation symmetry in a seamless and non-invasive manner, limiting their applicability in strength training and rehabilitation. 
This work presents a wearable smart sportswear system that integrates screen-printed graphene-based strain sensors with compact electronics for wireless data transfer and a deep learning framework for real-time classification of exercise execution quality. By leveraging 1D ResNet-18 for feature extraction, the system achieves 92.1% classification accuracy across six exercise conditions, distinguishing between breathing irregularities and asymmetric muscle exertion. Additionally, t-SNE analysis and Grad-CAM-based explainability visualization confirm that the network accurately captures biomechanically relevant features, ensuring robust interpretability. The proposed system establishes a foundation for next-generation AI-powered sportswear, with applications in fitness optimization, injury prevention, and adaptive rehabilitation training.",0 "Interpretable models are crucial for supporting clinical decision-making, driving advances in their development and application for medical images. However, the nature of 3D volumetric data makes it inherently challenging to visualize and interpret intricate and complex structures like the cerebral cortex. Cortical surface renderings, on the other hand, provide a more accessible and understandable 3D representation of brain anatomy, facilitating visualization and interactive exploration. Motivated by this advantage and the widespread use of surface data for studying neurological disorders, we present the eXplainable Surface Vision Transformer (X-SiT). This is the first inherently interpretable neural network that offers human-understandable predictions based on interpretable cortical features. As part of X-SiT, we introduce a prototypical surface patch decoder for classifying surface patch embeddings, incorporating case-based reasoning with spatially corresponding cortical prototypes. The results demonstrate state-of-the-art performance in detecting Alzheimer's disease and frontotemporal dementia while additionally providing informative prototypes that align with known disease patterns and reveal classification errors.",0 "In the realm of Natural Language Processing (NLP), common approaches for handling human disagreement consist of aggregating annotators' viewpoints to establish a single ground truth. However, prior studies show that disregarding individual opinions can lead to the side effect of underrepresenting minority perspectives, especially in subjective tasks, where annotators may systematically disagree because of their preferences. Recognizing that labels reflect the diverse backgrounds, life experiences, and values of individuals, this study proposes a new multi-perspective approach using soft labels to encourage the development of the next generation of perspective-aware models that are more inclusive and pluralistic. We conduct an extensive analysis across diverse subjective text classification tasks, including hate speech, irony, abusive language, and stance detection, to highlight the importance of capturing human disagreements, often overlooked by traditional aggregation methods. Results show that the multi-perspective approach not only better approximates human label distributions, as measured by Jensen-Shannon Divergence (JSD), but also achieves superior classification performance (higher F1 scores), outperforming traditional approaches. However, our approach exhibits lower confidence in tasks like irony and stance detection, likely due to the inherent subjectivity present in the texts. 
Lastly, leveraging Explainable AI (XAI), we explore model uncertainty and uncover meaningful insights into model predictions.",2 "Reading is a process that unfolds across space and time, alternating between fixations where a reader focuses on a specific point in space, and saccades where a reader rapidly shifts their focus to a new point. An ansatz of psycholinguistics is that modeling a reader's fixations and saccades yields insight into their online sentence processing. However, standard approaches to such modeling rely on aggregated eye-tracking measurements and models that impose strong assumptions, ignoring much of the spatio-temporal dynamics that occur during reading. In this paper, we propose a more general probabilistic model of reading behavior, based on a marked spatio-temporal point process, that captures not only how long fixations last, but also where they land in space and when they take place in time. The saccades are modeled using a Hawkes process, which captures how each fixation excites the probability of a new fixation occurring near it in time and space. The duration time of fixation events is modeled as a function of fixation-specific predictors convolved across time, thus capturing spillover effects. Empirically, our Hawkes process model exhibits a better fit to human saccades than baselines. With respect to fixation durations, we observe that incorporating contextual surprisal as a predictor results in only a marginal improvement in the model's predictive accuracy. This finding suggests that surprisal theory struggles to explain fine-grained eye movements.",2 "Distinguishing between human- and LLM-generated texts is crucial given the risks associated with misuse of LLMs. This paper investigates detection and explanation capabilities of current LLMs across two settings: binary (human vs. LLM-generated) and ternary classification (including an ``undecided'' class). We evaluate 6 close- and open-source LLMs of varying sizes and find that self-detection (LLMs identifying their own outputs) consistently outperforms cross-detection (identifying outputs from other LLMs), though both remain suboptimal. Introducing a ternary classification framework improves both detection accuracy and explanation quality across all models. Through comprehensive quantitative and qualitative analyses using our human-annotated dataset, we identify key explanation failures, primarily reliance on inaccurate features, hallucinations, and flawed reasoning. Our findings underscore the limitations of current LLMs in self-detection and self-explanation, highlighting the need for further research to address overfitting and enhance generalizability.",0 "As LLMs rapidly advance, increasing concerns arise regarding risks about actual authorship of texts we see online and in real world. The task of distinguishing LLM-authored texts is complicated by the nuanced and overlapping behaviors of both machines and humans. In this paper, we challenge the current practice of considering LLM-generated text detection a binary classification task of differentiating human from AI. Instead, we introduce a novel ternary text classification scheme, adding an ""undecided"" category for texts that could be attributed to either source, and we show that this new category is crucial to understand how to make the detection result more explainable to lay users. 
This research shifts the paradigm from merely classifying to explaining machine-generated texts, emphasizing the need for detectors to provide clear and understandable explanations to users. Our study involves creating four new datasets composed of texts from various LLMs and human authors. Based on these new datasets, we performed binary classification tests to ascertain the most effective SOTA detection methods and identified SOTA LLMs capable of producing harder-to-detect texts. We constructed a new dataset of texts generated by two top-performing LLMs and human authors, and asked three human annotators to produce ternary labels with explanation notes. This dataset was used to investigate how three top-performing SOTA detectors behave in the new ternary classification context. Our results highlight why the ""undecided"" category is much needed from the viewpoint of explainability. Additionally, we conducted an analysis of the explainability of the three best-performing detectors and the explanation notes of the human annotators, revealing insights about the complexity of explainable detection of machine-generated texts. Finally, we propose guidelines for developing future detection systems with improved explanatory power.",2 "Perturbation-based explanations are widely utilized to enhance the transparency of modern machine-learning models. However, their reliability is often compromised by the unknown model behavior under the specific perturbations used. This paper investigates the relationship between uncertainty calibration - the alignment of model confidence with actual accuracy - and perturbation-based explanations. We show that models frequently produce unreliable probability estimates when subjected to explainability-specific perturbations and theoretically prove that this directly undermines explanation quality. To address this, we introduce ReCalX, a novel approach to recalibrate models for improved perturbation-based explanations while preserving their original predictions. Experiments on popular computer vision models demonstrate that our calibration strategy produces explanations that are more aligned with human perception and actual object locations.",0 "We study the spectrum of the Poincar\'e operator in triaxial ellipsoids subject to a constant rotation. As explained in the paper, this mathematical problem is interesting for many physical applications. It is known that the spectrum of this bounded self-adjoint operator is pure point with polynomial eigenvectors [Backus & Rieutord, Phys. Rev. E 95 (2017), 053116]. We give two new proofs of this result. Moreover, we describe the large-degree asymptotics of the restriction of that operator to polynomial vector fields of fixed degrees. The main tool is the microlocal analysis of the partial differential equation satisfied by the orthogonal polynomials in ellipsoids. This work also contains numerical calculations of these spectra, showing a very good agreement with the mathematical results.",0 "Existing end-to-end sign-language animation systems suffer from low naturalness, limited facial/body expressivity, and no user control. 
We propose a human-centered, real-time speech-to-sign animation framework that integrates (1) a streaming Conformer encoder with an autoregressive Transformer-MDN decoder for synchronized upper-body and facial motion generation, (2) a transparent, editable JSON intermediate representation empowering deaf users and experts to inspect and modify each sign segment, and (3) a human-in-the-loop optimization loop that refines the model based on user edits and ratings. Deployed on Unity3D, our system achieves a 13 ms average frame-inference time and a 103 ms end-to-end latency on an RTX 4070. Our key contributions include the design of a JSON-centric editing mechanism for fine-grained sign-level personalization and the first application of an MDN-based feedback loop for continuous model adaptation. This combination establishes a generalizable, explainable AI paradigm for user-adaptive, low-latency multimodal systems. In studies with 20 deaf signers and 5 professional interpreters, we observe a +13 point SUS improvement, 6.7 point reduction in cognitive load, and significant gains in naturalness and trust (p $<$ .001) over baselines. This work establishes a scalable, explainable AI paradigm for accessible sign-language technologies.",0 "The human visual system provides us with a rich and meaningful percept of the world, transforming retinal signals into visuo-semantic representations. For a model of these representations, here we leveraged a combination of two currently dominating approaches: vision deep neural networks (DNNs) and large language models (LLMs). Using large-scale human electroencephalography (EEG) data recorded during object image viewing, we built encoding models to predict EEG responses using representations from a vision DNN, an LLM, and their fusion. We show that the fusion encoding model outperforms encoding models based on either the vision DNN or the LLM alone, as well as previous modelling approaches, in predicting neural responses to visual stimulation. The vision DNN and the LLM complemented each other in explaining stimulus-related signal in the EEG responses. The vision DNN uniquely captured earlier and broadband EEG signals, whereas the LLM uniquely captured later and low frequency signals, as well as detailed visuo-semantic stimulus information. Together, this provides a more accurate model of the time course of visuo-semantic processing in the human brain.",0 "The chain of contagion task (CCT) is a psychological test to measure the degree of contagious beliefs in individuals. Contagious beliefs thereby refer to the perception that certain objects, people, or substances can transmit contamination through mere contact or proximity (Rozin et al., 1986). In the CCT, a neutral object (usually a pen) is rubbed against an inherently disgusting object (e.g. toilet paper with feces) and participants are asked how contaminated this pen is on a scale from 0 (not at all) to 100 (very contaminated). Afterwards, this pen is rubbed against another pen, and again, the experienced degree of contamination is assessed. This is repeated 12 times. The CCT was first experimentally investigated by Tolin et al. (2004) in an in vivo procedure with real disgusting objects. The authors showed that contagious beliefs measured with the CCT show a strong bias for people with contamination-based obsessive-compulsive disorder (C-OCD) compared to anxious individuals and non-anxious controls. Fink-Lamotte et al. 
(2024) replicated these findings with an online version of the CCT using audio-imagery-based and video-based stimuli and instructions. Both studies used 12 pens to assess the degree of contagious beliefs. Within this brief report, we show that after 8 pens, hardly any additional variance between participants is explained, and after the tenth pen, no new information is gained. Thus, we recommend only using 8 pens instead of 12 when using the CCT to assess contagious beliefs.",2 "The discourse around toxicity and LLMs in NLP largely revolves around detection tasks. This work shifts the focus to evaluating LLMs' reasoning about toxicity -- from their explanations that justify a stance -- to enhance their trustworthiness in downstream tasks. Despite extensive research on explainability, it is not straightforward to adopt existing methods to evaluate free-form toxicity explanation due to their over-reliance on input text perturbations, among other challenges. To account for these, we propose a novel, theoretically-grounded multi-dimensional criterion, Human-Aligned Faithfulness (HAF), that measures the extent to which LLMs' free-form toxicity explanations align with those of a rational human under ideal conditions. We develop six metrics, based on uncertainty quantification, to comprehensively evaluate HAF of LLMs' toxicity explanations with no human involvement, and highlight how ""non-ideal"" the explanations are. We conduct several experiments on three Llama models (of size up to 70B) and an 8B Ministral model on five diverse toxicity datasets. Our results show that while LLMs generate plausible explanations for simple prompts, their reasoning about toxicity breaks down when prompted about the nuanced relations between the complete set of reasons, the individual reasons, and their toxicity stances, resulting in inconsistent and nonsensical responses. We open-source our code and LLM-generated explanations at https://github.com/uofthcdslab/HAF.",0 "The gaze of a person tends to reflect their interest. This work explores what happens when this statement is taken literally and applied to robots. Here we present a robot system that employs a moving robot head with a screen-based eye model that can direct the robot's gaze to points in physical space and present a reflection-like mirror image of the attended region on top of each eye. We conducted a user study with 33 participants, who were asked to instruct the robot to perform pick-and-place tasks, monitor the robot's task execution, and interrupt it in case of erroneous actions. Despite a deliberate lack of instructions about the role of the eyes and a very brief system exposure, participants felt more aware of the robot's information processing, detected erroneous actions earlier, and rated the user experience higher when eye-based mirroring was enabled compared to non-reflective eyes. These results suggest a beneficial and intuitive utilization of the introduced method in cooperative human-robot interaction.",2 "Inverse problems are central to a wide range of fields, including healthcare, climate science, and agriculture. They involve the estimation of inputs, typically via iterative optimization, to some known forward model so that it produces a desired outcome. Despite considerable development in the explainability and interpretability of forward models, the iterative optimization of inverse problems remains largely cryptic to domain experts. 
We propose a methodology to produce explanations, from traces produced by an optimizer, that are interpretable by humans at the abstraction level of the domain. The central idea in our approach is to instrument a differentiable simulator so that it emits natural language events during its forward and backward passes. In a post-process, we use a Language Model to create an explanation from the list of events. We demonstrate the effectiveness of our approach with an illustrative optimization problem and an example involving the training of a neural network.",0 "Artificial intelligence is increasingly ubiquitous across multiple domains. Smartphones, social media platforms, search engines, and autonomous vehicles are just a few examples of applications that utilize artificial intelligence technologies to enhance their performance. This study carries out a scoping review of the current state-of-the-art artificial intelligence technologies following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. The goal was to find the most advanced technologies used in different domains of artificial intelligence technology research. Three recognized journals from the artificial intelligence and machine learning domain were used: Journal of Artificial Intelligence Research, Journal of Machine Learning Research, and Machine Learning, and articles published in 2022 were observed. Certain qualifications were set for the technological solutions: the technology must be tested against comparable solutions, commonly approved or otherwise well-justified datasets must be used when applying it, and results must show improvements against comparable solutions. One of the most important parts of the technology development appeared to be how to process and exploit the data gathered from multiple sources. The data can be highly unstructured, and the technological solution should be able to utilize the data with minimum manual work from humans. The results of this review indicate that creating labeled datasets is very laborious, and solutions exploiting unsupervised or semi-supervised learning technologies are more and more researched. The learning algorithms should be able to be updated efficiently, and predictions should be interpretable. When using artificial intelligence technologies in real-world applications, safety and explainable predictions must be considered before mass adoption can occur.",0 "Image explanation has been one of the key research interests in the Deep Learning field. Throughout the years, several approaches have been adopted to explain an input image provided by the user. From detecting an object in a given image, to explaining it in a human-understandable sentence, to having a conversation describing the image, this problem has seen immense change throughout the years. However, existing works have often been found to (a) hallucinate objects that do not exist in the image and/or (b) fail to identify the complete set of objects present in the image. In this paper, we propose a novel approach to mitigate these drawbacks of inconsistency and incompleteness of the objects recognized during the image explanation. To enable this, we propose an interpretable framework that can be plugged atop diverse image explaining frameworks including Image Captioning, Visual Question Answering (VQA) and Prompt-based AI using LLMs, thereby enhancing their explanation capabilities by rectifying the incorrect or missing objects. 
We further measure the efficacy of the rectified explanations generated through our proposed approaches leveraging object-based precision metrics, and showcase the improvements in the inconsistency and completeness of image explanations. Quantitatively, the proposed framework is able to improve the explanations over the baseline architectures of Image Captioning (improving the completeness by 81.81% and inconsistency by 37.10%), Visual Question Answering (an average of 9.6% and 37.10% in completeness and inconsistency respectively) and Prompt-based AI model (0.01% and 5.2% for completeness and inconsistency respectively), surpassing the current state-of-the-art by a substantial margin.",0 "Smart contract vulnerability detection remains a major challenge in blockchain security. Existing vulnerability detection methods face two main issues: (1) Existing datasets lack comprehensive coverage and high-quality explanations for preference learning. (2) Large language models (LLMs) often struggle with accurately interpreting specific concepts in smart contract security. Empirical analysis shows that even after continual pre-training (CPT) and supervised fine-tuning (SFT), LLMs may misinterpret the execution order of state changes, resulting in incorrect explanations despite making correct detection decisions. To address these challenges, we propose Smart-LLaMA-DPO based on LLaMA-3.1-8B. First, we construct a comprehensive dataset covering four major vulnerability types and machine-unauditable vulnerabilities, including precise labels, explanations, and locations for SFT, as well as high-quality and low-quality output pairs for Direct Preference Optimization (DPO). Second, we perform CPT using large-scale smart contract data to enhance the LLM's understanding of specific security practices in smart contracts. Furthermore, we conduct SFT with our comprehensive dataset. Finally, we apply DPO, leveraging human feedback and a specially designed loss function that increases the probability of preferred explanations while reducing the likelihood of non-preferred outputs. We evaluate Smart-LLaMA-DPO on four major vulnerability types: reentrancy, timestamp dependence, integer overflow/underflow, and delegatecall, as well as machine-unauditable vulnerabilities. Our method significantly outperforms state-of-the-art baselines, with average improvements of 10.43% in F1 score and 7.87% in accuracy. Moreover, both LLM evaluation and human evaluation confirm that our method generates more correct, thorough, and clear explanations.",0 "This paper focuses on explaining the timbre conveyed by speech signals and introduces a task termed voice timbre attribute detection (vTAD). In this task, voice timbre is explained with a set of sensory attributes describing its human perception. A pair of speech utterances is processed, and their intensity is compared in a designated timbre descriptor. Moreover, a framework is proposed, which is built upon the speaker embeddings extracted from the speech utterances. The investigation is conducted on the VCTK-RVA dataset. Experimental examinations on the ECAPA-TDNN and FACodec speaker encoders demonstrated that: 1) the ECAPA-TDNN speaker encoder was more capable in the seen scenario, where the testing speakers were included in the training set; 2) the FACodec speaker encoder was superior in the unseen scenario, where the testing speakers were not part of the training set, indicating enhanced generalization capability. 
The VCTK-RVA dataset and open-source code are available on the website https://github.com/vTAD2025-Challenge/vTAD.",0 "Voice timbre refers to the unique quality or character of a person's voice that distinguishes it from others as perceived by human hearing. The Voice Timbre Attribute Detection (VtaD) 2025 challenge focuses on explaining the voice timbre attribute in a comparative manner. In this challenge, the human impression of voice timbre is verbalized with a set of sensory descriptors, including bright, coarse, soft, magnetic, and so on. The timbre is explained from the comparison between two voices in their intensity within a specific descriptor dimension. The VtaD 2025 challenge starts in May and culminates in a special proposal at the NCMMSC2025 conference in October 2025 in Zhenjiang, China.",0 "Concept-based explainable artificial intelligence (C-XAI) can help reveal the inner representations of AI models. Understanding these representations is particularly important in complex tasks like safety evaluation. Such tasks rely on high-level semantic information (e.g., about actions) to make decisions about abstract categories (e.g., whether a situation is dangerous). In this context, it may be desirable for C-XAI concepts to show some variability, suggesting that the AI is capable of generalising beyond the concrete details of a situation. However, it is unclear whether people recognise and appreciate such generalisations and can distinguish them from other, less desirable forms of imprecision. This was investigated in an experimental railway safety scenario. Participants evaluated the performance of a simulated AI that evaluated whether traffic scenes involving people were dangerous. To explain these decisions, the AI provided concepts in the form of similar image snippets. These concepts differed in their match with the classified image, either regarding a highly relevant feature (i.e., relation to tracks) or a less relevant feature (i.e., actions). Contrary to the hypotheses, concepts that generalised over less relevant features led to ratings that were lower than for precisely matching concepts and comparable to concepts that systematically misrepresented these features. Conversely, participants were highly sensitive to imprecisions in relevant features. These findings cast doubts on whether people spontaneously recognise generalisations. Accordingly, they might not be able to infer from C-XAI concepts whether AI models have gained a deeper understanding of complex situations.",2 "2D cameras are often used in interactive systems. Other systems like gaming consoles provide more powerful 3D cameras for short range depth sensing. Overall, these cameras are not reliable in large, complex environments. In this work, we propose a 3D stereo-vision-based pipeline for interactive systems that is able to handle both ordinary and sensitive applications, through robust scene understanding. We explore the fusion of multiple 3D cameras to do full scene reconstruction, which allows for performing a wide range of tasks, like event recognition, subject tracking, and notification. Using possible feedback approaches, the system can receive data from the subjects present in the environment, to learn to make better decisions, or to adapt to completely new environments. Throughout the paper, we introduce the pipeline and explain our preliminary experimentation and results. 
Finally, we draw the roadmap for the next steps that need to be taken, in order to get this pipeline into production",0 "The rapid spread of misinformation in the digital era poses significant challenges to public discourse, necessitating robust and scalable fact-checking solutions. Traditional human-led fact-checking methods, while credible, struggle with the volume and velocity of online content, prompting the integration of automated systems powered by Large Language Models (LLMs). However, existing automated approaches often face limitations, such as handling complex claims, ensuring source credibility, and maintaining transparency. This paper proposes a novel multi-agent system for automated fact-checking that enhances accuracy, efficiency, and explainability. The system comprises four specialized agents: an Input Ingestion Agent for claim decomposition, a Query Generation Agent for formulating targeted subqueries, an Evidence Retrieval Agent for sourcing credible evidence, and a Verdict Prediction Agent for synthesizing veracity judgments with human-interpretable explanations. Evaluated on benchmark datasets (FEVEROUS, HOVER, SciFact), the proposed system achieves a 12.3% improvement in Macro F1-score over baseline methods. The system effectively decomposes complex claims, retrieves reliable evidence from trusted sources, and generates transparent explanations for verification decisions. Our approach contributes to the growing field of automated fact-checking by providing a more accurate, efficient, and transparent verification methodology that aligns with human fact-checking practices while maintaining scalability for real-world applications. Our source code is available at https://github.com/HySonLab/FactAgent",0 "As large language models (LLMs) are used in complex writing workflows, users engage in multi-turn interactions to steer generations to better fit their needs. Rather than passively accepting output, users actively refine, explore, and co-construct text. We conduct a large-scale analysis of this collaborative behavior for users engaged in writing tasks in the wild with two popular AI assistants, Bing Copilot and WildChat. Our analysis goes beyond simple task classification or satisfaction estimation common in prior work and instead characterizes how users interact with LLMs through the course of a session. We identify prototypical behaviors in how users interact with LLMs in prompts following their original request. We refer to these as Prototypical Human-AI Collaboration Behaviors (PATHs) and find that a small group of PATHs explain a majority of the variation seen in user-LLM interaction. These PATHs span users revising intents, exploring texts, posing questions, adjusting style or injecting new content. Next, we find statistically significant correlations between specific writing intents and PATHs, revealing how users' intents shape their collaboration behaviors. We conclude by discussing the implications of our findings on LLM alignment.",0 "Despite its importance for performance and injury prevention, golf swing analysis is limited by isolated metrics, underrepresentation of professional athletes, and a lack of rich, interpretable movement representations. We address these gaps with a holistic, data-driven framework for personalized golf swing analysis from a single wrist-worn sensor. 
We build a large dataset of professional swings from publicly available videos, reconstruct full-body 3D kinematics using biologically accurate human mesh recovery, and generate synthetic inertial data to train neural networks that infer motion and segment swing phases from wrist-based input. We learn a compositional, discrete vocabulary of motion primitives that facilitates the detection and visualization of technical flaws, and is expressive enough to predict player identity, club type, sex, and age. Our system accurately estimates full-body kinematics and swing events from wrist data, delivering lab-grade motion analysis on-course and supporting early detection of anomalous movement patterns. Explainability methods reveal subtle, individualized movement signatures, reinforcing the view that variability is a hallmark of skilled performance. Longitudinal tracking demonstrates practical value: as one player's handicap improved from 50 to 2.2 over 1.5 years, our system captured measurable technical progress and provided targeted, actionable feedback. Our findings challenge common assumptions, such as swing consistency across clubs and the existence of a single ""ideal"" swing, and uncover latent biomarkers shaped by both intrinsic traits and task-specific constraints. This work bridges lab and field-based biomechanics, offering scalable, accessible, high-fidelity motion analysis for research, coaching, and injury prevention, while opening new directions in movement-based phenotyping, personalized equipment design, and motor skill development.",0 "LMs' alignment with human reading behavior (i.e. psychometric predictive power; PPP) is known to improve during pretraining up to a tipping point, beyond which it either plateaus or degrades. Various factors, such as word frequency, recency bias in attention, and context size, have been theorized to affect PPP, yet there is no current account that explains why such a tipping point exists, and how it interacts with LMs' pretraining dynamics more generally. We hypothesize that the underlying factor is a pretraining phase transition, characterized by the rapid emergence of specialized attention heads. We conduct a series of correlational and causal experiments to show that such a phase transition is responsible for the tipping point in PPP. We then show that, rather than producing attention patterns that contribute to the degradation in PPP, phase transitions alter the subsequent learning dynamics of the model, such that further training keeps damaging PPP.",0 "Past work has long recognized the important role of context in guiding how humans search their memory. While context-based memory models can explain many memory phenomena, it remains unclear why humans develop such architectures over possible alternatives in the first place. In this work, we demonstrate that foundational architectures in neural machine translation -- specifically, recurrent neural network (RNN)-based sequence-to-sequence models with attention -- exhibit mechanisms that directly correspond to those specified in the Context Maintenance and Retrieval (CMR) model of human memory. Since neural machine translation models have evolved to optimize task performance, their convergence with human memory models provides a deeper understanding of the functional role of context in human memory, as well as presenting new ways to model human memory. 
Leveraging this convergence, we implement a neural machine translation model as a cognitive model of human memory search that is both interpretable and capable of capturing complex dynamics of learning. We show that our model accounts for both averaged and optimal human behavioral patterns as effectively as context-based memory models. Further, we demonstrate additional strengths of the proposed model by evaluating how memory search performance emerges from the interaction of different model components.",0 "In this workshop paper, we discuss the potential for measures of user-centric benefits (such as emotional well-being) that could be explored when evaluating explainable AI (XAI) systems within the arts. As a background to this, we draw from our recent review of creativity support tool (CST) evaluations, that found a paucity of studies evaluating CSTs for user-centric measures that benefit the user themselves. Specifically, we discuss measures of: (1) developing intrinsic abilities, (2) emotional well-being, (3) self-reflection, and (4) self-perception. By discussing these user-centric measures within the context of XAI and the arts, we wish to provoke discussion regarding the potential of such measures.",0 "Grammatical features such as number and gender serve two central functions in human languages. While they encode salient semantic attributes like numerosity and animacy, they also offload sentence processing cost by predictably linking words together via grammatical agreement. Grammars exhibit consistent organizational patterns across diverse languages, invariably rooted in a semantic foundation-a widely confirmed but still theoretically unexplained phenomenon. To explain the basis of universal grammatical patterns, we unify two fundamental properties of grammar, semantic encoding and agreement-based predictability, into a single information-theoretic objective under cognitive constraints, accounting for variable communicative need. Our analyses reveal that grammatical organization provably inherits from perceptual attributes, and our measurements on a diverse language sample show that grammars prioritize functional goals, promoting efficient language processing over semantic encoding.",0 "In today's digitalized world, where software systems are becoming increasingly ubiquitous and complex, the quality aspect of explainability is gaining relevance. A major challenge in achieving adequate explanations is the elicitation of individual explanation needs, as it may be subject to severe hypothetical or confirmation biases. To address these challenges, we aim to establish user-based indicators concerning user behavior or system events that can be captured at runtime to determine when a need for explanations arises. In this work, we conducted explorative research in form of an online study to collect self-reported indicators that could indicate a need for explanation. We compiled a catalog containing 17 relevant indicators concerning user behavior, 8 indicators concerning system events and 14 indicators concerning emotional states or physical reactions. We also analyze the relationships between these indicators and different types of need for explanation. The established indicators can be used in the elicitation process through prototypes, as well as after publication to gather requirements from already deployed applications using telemetry and usage data. 
Moreover, these indicators can be used to trigger explanations at appropriate moments during the runtime.",0 "Accurately assessing student knowledge is critical for effective education, yet traditional Knowledge Tracing (KT) methods rely on opaque latent embeddings, limiting interpretability. Even LLM-based approaches generate direct predictions or summaries that may hallucinate without any accuracy guarantees. We recast KT as an inverse problem: learning the minimum natural-language summary that makes past answers explainable and future answers predictable. Our Language Bottleneck Model (LBM) consists of an encoder LLM that writes an interpretable knowledge summary and a frozen decoder LLM that must reconstruct and predict student responses using only that summary text. By constraining all predictive information to pass through a short natural-language bottleneck, LBMs ensure that the summary contains accurate information while remaining human-interpretable. Experiments on synthetic arithmetic benchmarks and the large-scale Eedi dataset show that LBMs rival the accuracy of state-of-the-art KT and direct LLM methods while requiring orders-of-magnitude fewer student trajectories. We demonstrate that training the encoder with group-relative policy optimization, using downstream decoding accuracy as a reward signal, effectively improves summary quality.",0 "Minimum Bayes Risk (MBR) decoding optimizes output selection by maximizing the expected utility value of an underlying human distribution. While prior work has shown the effectiveness of MBR decoding through empirical evaluation, few studies have analytically investigated why the method is effective. As a result of our analysis, we show that, given the size $n$ of the reference hypothesis set used in computation, MBR decoding approaches the optimal solution with high probability at a rate of $O\left(n^{-\frac{1}{2}}\right)$, under certain assumptions, even though the language space $Y$ is significantly larger $|Y|\gg n$. This result helps to theoretically explain the strong performance observed in several prior empirical studies on MBR decoding. In addition, we provide the performance gap for maximum-a-posteriori (MAP) decoding and compare it to MBR decoding. The result of this paper indicates that MBR decoding tends to converge to the optimal solution faster than MAP decoding in several cases.",0 "Artificial intelligence (AI) has evolved into an ecosystem of specialized ""species,"" each with unique strengths. We analyze two: DeepSeek-V3, a 671-billion-parameter Mixture of Experts large language model (LLM) exemplifying scale-driven generality, and NetraAI, a dynamical system-based framework engineered for stability and interpretability on small clinical trial datasets. We formalize NetraAI's foundations, combining contraction mappings, information geometry, and evolutionary algorithms to identify predictive patient cohorts. Features are embedded in a metric space and iteratively contracted toward stable attractors that define latent subgroups. A pseudo-temporal embedding and long-range memory enable exploration of higher-order feature interactions, while an internal evolutionary loop selects compact, explainable 2-4-variable bundles (""Personas""). To guide discovery, we introduce an LLM Strategist as a meta-evolutionary layer that observes Persona outputs, prioritizes promising variables, injects domain knowledge, and assesses robustness. 
This two-tier architecture mirrors the human scientific process: NetraAI as experimentalist, the LLM as theorist, forming a self-improving loop. In case studies (schizophrenia, depression, pancreatic cancer), NetraAI uncovered small, high-effect-size subpopulations that transformed weak baseline models (AUC ~0.50-0.68) into near-perfect classifiers using only a few features. We position NetraAI at the intersection of dynamical systems, information geometry, and evolutionary learning, aligned with emerging concept-level reasoning paradigms such as LeCun's Joint Embedding Predictive Architecture (JEPA). By prioritizing reliable, explainable knowledge, NetraAI offers a new generation of adaptive, self-reflective AI to accelerate clinical discovery.",2 "Predictive Process Monitoring (PPM) often uses deep learning models to predict the future behavior of ongoing processes, such as predicting process outcomes. While these models achieve high accuracy, their lack of interpretability undermines user trust and adoption. Explainable AI (XAI) aims to address this challenge by providing the reasoning behind the predictions. However, current evaluations of XAI in PPM focus primarily on functional metrics (such as fidelity), overlooking user-centered aspects such as their effect on task performance and decision-making. This study investigates the effects of explanation styles (feature importance, rule-based, and counterfactual) and perceived AI accuracy (low or high) on decision-making in PPM. We conducted a decision-making experiment, where users were presented with the AI predictions, perceived accuracy levels, and explanations of different styles. Users' decisions were measured both before and after receiving explanations, allowing the assessment of objective metrics (Task Performance and Agreement) and subjective metrics (Decision Confidence). Our findings show that perceived accuracy and explanation style have a significant effect.",0 "Traditional quality assurance (QA) methods face significant challenges in addressing the complexity, scale, and rapid iteration cycles of modern software systems and are strained by the limited resources available, leading to substantial costs associated with poor quality. The object of this research is the Quality Assurance process for modern distributed software applications. The subject of the research is the assessment of the benefits, challenges, and prospects of integrating modern AI-oriented tools into quality assurance processes. We performed a comprehensive analysis of the implications for both verification and validation processes, covering exploratory test analyses, equivalence partitioning and boundary analyses, metamorphic testing, finding inconsistencies in acceptance criteria (AC), static analyses, test case generation, unit test generation, test suite optimization and assessment, and end-to-end scenario execution. End-to-end regression of a sample enterprise application utilizing AI agents over generated test scenarios was implemented as a proof of concept highlighting the practical use of the study. The results, with only 8.3% flaky executions of generated test cases, indicate significant potential for the proposed approaches. 
However, the study also identified substantial challenges for practical adoption concerning generation of semantically identical coverage, ""black box"" nature and lack of explainability from state-of-the-art Large Language Models (LLMs), the tendency to correct mutated test cases to match expected results, underscoring the necessity for thorough verification of both generated artifacts and test execution results. The research demonstrates AI's transformative potential for QA but highlights the importance of a strategic approach to implementing these technologies, considering the identified limitations and the need for developing appropriate verification methodologies.",0 "This paper presents a macroscopic theory, alongside its numerical implementation, aimed at describing, explaining, and predicting the nucleation and propagation of fracture in viscoelastic materials subjected to quasistatic loading conditions. The focus is on polymers, in particular, on elastomers. To this end, the starting point of this work is devoted to summarizing the large body of experimental results on how elastomers deform, nucleate cracks, and propagate cracks when subjected to mechanical loads. When viewed collectively, the experiments make it plain that there are three basic ingredients that any attempt at a complete macroscopic theory of fracture in elastomers ought to account for: i) the viscoelasticity of the elastomer; ii) its strength; and iii) its fracture energy. A theory is then introduced that accounts for all these three basic ingredients by extending the phase-field theory initiated by Kumar, Francfort, and Lopez-Pamies (J. Mech. Phys. Solids 112 (2018), 523--551) for elastic brittle materials to seamlessly incorporate viscous energy dissipation by deformation, a generalized strength surface that is a hypersurface in stress-deformation space (and not just in stress space as for elastic brittle materials), and the pertinent Griffith criticality condition for materials that dissipate energy not just by the creation of surface but also by deformation, in this case, by viscous deformation (Shrimali and Lopez-Pamies (2023) Extreme Mech. Lett. 58, 101944). From an applications point of view, the proposed theory amounts to solving an initial-boundary-value problem comprised of two nonlinear PDEs coupled with a nonlinear ODE for the deformation field, a tensorial internal variable, and the phase field. A robust scheme is presented to generate solutions for these equations.",0 "This paper explores the human-centric operationalization of Automated Essay Scoring (AES) systems, addressing aspects beyond accuracy. We compare various machine learning-based approaches with Large Language Models (LLMs) approaches, identifying their strengths, similarities and differences. The study investigates key dimensions such as bias, robustness, and explainability, considered important for human-aware operationalization of AES systems. Our study shows that ML-based AES models outperform LLMs in accuracy but struggle with explainability, whereas LLMs provide richer explanations. We also found that both approaches struggle with bias and robustness to edge scores. 
By analyzing these dimensions, the paper aims to identify challenges and trade-offs between different methods, contributing to more reliable and trustworthy AES methods.",0 "A well-known, but often ignored issue in Yoneda-style definitions of cohomology objects via collections of $n$-step extensions (i.e., equivalence classes of exact sequences of a given length $n$ between two given objects, usually subject to further criteria, and equipped with some algebraic structure) is, whether such a collection of extensions forms a set. We explain that in the context of a semi-abelian variety of algebras, the answer to this question is, essentially, yes: for the collection of all $n$-step extensions between any two objects, a set of representing extensions can be chosen, so that the collection of extensions is ""small"" in the sense that a bijection to a set exists. We further consider some variations on this result, involving double extensions and crossed extensions (in the context of a semi-abelian variety), and Schreier extensions (in the category of monoids).",0 "This article introduces WebXAII, an open-source web framework designed to facilitate research on human interaction with eXplainable Artificial Intelligence (XAI) systems. The field of XAI is rapidly expanding, driven by the growing societal implications of the widespread adoption of AI (and in particular machine learning) across diverse applications. Researchers who study the interaction between humans and XAI techniques typically develop ad hoc interfaces in order to conduct their studies. These interfaces are usually not shared alongside the results of the studies, which limits their reusability and the reproducibility of experiments. In response, we design and implement WebXAII, a web-based platform that can embody full experimental protocols, meaning that it can present all aspects of the experiment to human participants and record their responses. The experimental protocols are translated into a composite architecture of generic views and modules, which offers a lot of flexibility. The architecture is defined in a structured configuration file, so that protocols can be implemented with minimal programming skills. We demonstrate that WebXAII can effectively embody relevant protocols, by reproducing the protocol of a state-of-the-art study of the literature.",2 "This study investigated the impact of a theory-driven, explainable Learning Analytics Dashboard (LAD) on university students' human-AI collaborative academic abstract writing task. Grounded in Self-Regulated Learning (SRL) theory and incorporating Explainable AI (XAI) principles, our LAD featured a three-layered design (Visual, Explainable, Interactive). In an experimental study, participants were randomly assigned to either an experimental group (using the full explainable LAD) or a control group (using a visual-only LAD) to collaboratively write an academic abstract with a Generative AI. While quantitative analysis revealed no significant difference in the quality of co-authored abstracts between the two groups, a significant and noteworthy difference emerged in conceptual understanding: students in the explainable LAD group demonstrated a superior grasp of abstract writing principles, as evidenced by their higher scores on a knowledge test (p= .026). 
These findings highlight that while basic AI-generated feedback may suffice for immediate task completion, the provision of explainable feedback is crucial for fostering deeper learning, enhancing conceptual understanding, and developing transferable skills fundamental to self-regulated learning in academic writing contexts.",2 "Healthcare 5.0 integrates Artificial Intelligence (AI), the Internet of Things (IoT), real-time monitoring, and human-centered design toward personalized medicine and predictive diagnostics. However, the increasing reliance on interconnected medical technologies exposes them to cyber threats. Meanwhile, current AI-driven cybersecurity models often neglect biomedical data, limiting their effectiveness and interpretability. This study addresses this gap by applying eXplainable AI (XAI) to a Healthcare 5.0 dataset that integrates network traffic and biomedical sensor data. Classification outputs indicate that XGBoost achieved 99% F1-score for benign and data alteration, and 81% for spoofing. Explainability findings reveal that network data play a dominant role in intrusion detection whereas biomedical features contributed to spoofing detection, with temperature reaching a Shapley values magnitude of 0.37.",0 "The revelation of the supreme authority of nucleic acids in the cellular landscape has precipitated the recognition of the versatility of RNAs in cells. The subsequent discovery of non-coding RNAs was a major breakthrough that revealed their extensive involvement in virtually all physiological processes within the cell. Beyond the barriers of the cell, the current perception seems to support the idea of their participation in intercellular regulation and cross-kingdom communication. However, the presence of non-coding RNAs in the extracellular environment remains essentially a mystery, and the understanding of the significance and the processes governing this presence faces several constraints. This has led us to forge an original and predictive idea that seems to allow an emancipation from the various constraints posed in the current perception of the cited phenomena. In this paper, we will attempt to explore the extent of the probable existence of cellular organizations specializing in the production and management of non-coding RNAs. We will try, through the development of this hypothesis, to draw a picture explaining the significance and logistics of extracellular non-coding RNAs, with an emphasis on microRNAs. This exercise will be realized while relying on and confronting purely theoretical points of view, as well as relevant experimental results. In this manuscript, we will address the presumed morphology, intracellular organization, selective export, transport, transfer, distribution, reception and intracellular function of non-coding RNAs, in the perspective of a regulation cycle orchestrated by NAcrins under normal or disturbed physiological contexts.",0 "This book provides a comprehensive exploration of affective computing and human-computer interaction technologies. It begins with the historical development and basic concepts of human-computer interaction, delving into the technical frameworks and practical applications of emotional computing, visual interaction, voice interaction, brain-computer interfaces, physiological electrical signal analysis, and social robotics. 
The book covers a wide range of topics, including the psychological and neuroscience foundations of emotion, multimodal emotion recognition, emotional expression mechanisms, and the principles of brain-computer interfaces. Key technologies such as affective computing based on discrete emotion theory and dimensional models, visual perception principles, speech recognition and synthesis, EEG signal acquisition and processing, and multimodal emotion recognition are explained in detail. This book also addresses the technical challenges in the field, including multimodal data fusion, privacy and security, and ethical considerations in human-machine relationships. It discusses the applications of these technologies across various domains such as education, healthcare, entertainment, and intelligent assistance. Looking to the future, the book anticipates trends such as the deep integration of artificial intelligence with emotion recognition, the advancement of multimodal interaction technologies, and the development of more personalized and adaptive emotion recognition systems. It emphasizes the importance of balancing technological innovation with ethical considerations to ensure the responsible development and application of affective computing technologies.",0 "Modern AI systems frequently rely on opaque black-box models, most notably Deep Neural Networks, whose performance stems from complex architectures with millions of learned parameters. While powerful, their complexity poses a major challenge to trustworthiness, particularly due to a lack of transparency. Explainable AI (XAI) addresses this issue by providing human-understandable explanations of model behavior. However, to ensure their usefulness and trustworthiness, such explanations must be rigorously evaluated. Despite the growing number of XAI methods, the field lacks standardized evaluation protocols and consensus on appropriate metrics. To address this gap, we conduct a systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and introduce a unified framework for the eValuation of XAI (VXAI). We identify 362 relevant publications and aggregate their contributions into 41 functionally similar metric groups. In addition, we propose a three-dimensional categorization scheme spanning explanation type, evaluation contextuality, and explanation quality desiderata. Our framework provides the most comprehensive and structured overview of VXAI to date. It supports systematic metric selection, promotes comparability across methods, and offers a flexible foundation for future extensions.",0 "In this work, we reflect on the data-driven modeling paradigm that is gaining ground in AI-driven automation of patient care. We argue that the repurposing of existing real-world patient datasets for machine learning may not always represent an optimal approach to model development as it could lead to undesirable outcomes in patient care. We reflect on the history of data analysis to explain how the data-driven paradigm rose to popularity, and we envision ways in which systems thinking and clinical domain theory could complement the existing model development approaches in reaching human-centric outcomes. We call for a purpose-driven machine learning paradigm that is grounded in clinical theory and the sociotechnical realities of real-world operational contexts. 
We argue that understanding the utility of existing patient datasets requires looking in two directions: upstream towards the data generation, and downstream towards the automation objectives. This purpose-driven perspective to AI system development opens up new methodological opportunities and holds promise for AI automation of patient care.",2 "State-of-the-art transformer models for Speech Emotion Recognition (SER) rely on temporal feature aggregation, yet advanced pooling methods remain underexplored. We systematically benchmark pooling strategies, including Multi-Query Multi-Head Attentive Statistics Pooling, which achieves a 3.5 percentage point macro F1 gain over average pooling. Attention analysis shows 15 percent of frames capture 80 percent of emotion cues, revealing a localized pattern of emotional information. Analysis of high-attention frames reveals that non-linguistic vocalizations and hyperarticulated phonemes are disproportionately prioritized during pooling, mirroring human perceptual strategies. Our findings position attentive pooling as both a performant SER mechanism and a biologically plausible tool for explainable emotion localization. On Interspeech 2025 Speech Emotion Recognition in Naturalistic Conditions Challenge, our approach obtained a macro F1 score of 0.3649.",0 "We propose a statistical model to estimate population proportions under the survey variable cause model (Groves 2006), the setting in which the characteristic measured by the survey has a direct causal effect on survey participation. For example, we estimate employee satisfaction from a survey in which the decision of an employee to participate depends on their satisfaction. We model the time at which a respondent 'arrives' to take the survey, leveraging results from the counting processes literature that has been developed to analyze similar problems with survival data. Our approach is particularly useful for nonresponse bias analysis because it relies on different assumptions than traditional adjustments such as poststratification, which assumes the common cause model, the setting in which external factors explain the characteristic measured by the survey and participation. Our motivation is the Federal Employee Viewpoint Survey, which asks federal employees whether they are satisfied with their work organization. Our model suggests that the sample proportion overestimates the proportion of federal employees that are not satisfied with their work organization even after adjustment by poststratification. Employees that are not satisfied likely select into the survey, and this selection cannot be explained by personal characteristics like race, gender, and occupation or work-place characteristics like agency, unit, and location.",2 "Sleep is commonly studied through neurochemical, evolutionary, and behavioral frameworks, typically emphasizing circadian rhythms and energy conservation. However, these approaches do not fully explain a deeper biophysical question: why does sleep universally involve physical stillness, a lying posture, and disconnection from conscious control? This paper introduces a new hypothesis that sleep is not merely a biological function, but a state of vibrational synchronization between the human body and natural frequencies generated by the Earth. In this state, the body reduces its autonomous activity and aligns with external environmental rhythms, allowing for energy restoration, internal recalibration, and systemic reorganization. 
This perspective reframes life as a continuous process of internally driven vibration influenced by external physical fields. The proposed model offers new avenues for understanding aging, health, death, and consciousness.",0 "This paper presents a social network analysis of the professional support networks of 100 LGBTQ+ and/or women PhD physicists, comparing the networks based on the career sectors of academia, industry, and government/nonprofit. The methods for constructing and analyzing the ego networks, which are novel in many ways, are explained in greater detail in an earlier publication (Hatcher et. al., 2025). We use statistical tests of independence to explore differences between sectors in terms of whole network metrics, network composition based on alter characteristics, and support types. We find that alters associated with groups (like affinity groups and personal and professional interest groups) are more likely to provide identity-based and community building support, participants in Academia have fewer personal friends in their networks while those in Industry have more, participants in Government report less instrumental support, and those in Academia report less material support. These results and others lead to suggestions for employers in these sectors on how to better support these physicists, including continuing to promote participation in affinity and interest groups, providing more material support and/or personal time in the academic sector, and more instrumental support in the form of professional development or training in the government sector.",2 "The air flows in the proximal and distal portions of the human lungs are interconnected: the lower Reynolds number in the deeper generations causes a progressive flow regularization, while mass conservation requires flow rate oscillations to propagate through the airway bifurcations. To explain how these two competing effects shape the flow state in the deeper generations, we have performed the first high-fidelity numerical simulations of the air flow in a lung model including 23 successive bifurcations of a single planar airway. Turbulence modelling or assumptions on flow regimes are not required. The chosen flow rate is stationary (steady on average), and representative of the peak inspiratory flow reached by adult patients breathing through therapeutical inhalers. As expected, advection becomes progressively less important after each bifurcation, until a time-dependent Stokes regime governed solely by viscous diffusion is established in the smallest generations. However, fluctuations in this regime are relatively fast and large with respect to the mean flow, which is in contrast with the commonly agreed picture that only the breathing frequency is relevant at the scale of the alveoli. We demonstrate that the characteristic frequency and amplitude of these fluctuations are linked to the flow in the upper part of the bronchial tree, as they originate from the time-dependent flow splitting in the upper bifurcations. Even though these fluctuations are observed here in an idealized, rigid lung model, our findings suggest that the assumptions usually adopted in many of the current lung models might need to be revised.",2 "We introduce SafeRL-Lite, an open-source Python library for building reinforcement learning (RL) agents that are both constrained and explainable. Existing RL toolkits often lack native mechanisms for enforcing hard safety constraints or producing human-interpretable rationales for decisions. 
SafeRL-Lite provides modular wrappers around standard Gym environments and deep Q-learning agents to enable: (i) safety-aware training via constraint enforcement, and (ii) real-time post-hoc explanation via SHAP values and saliency maps. The library is lightweight, extensible, and installable via pip, and includes built-in metrics for constraint violations. We demonstrate its effectiveness on constrained variants of CartPole and provide visualizations that reveal both policy logic and safety adherence. The full codebase is available at: https://github.com/satyamcser/saferl-lite.",0 "Captions are crucial for understanding scientific visualizations and documents. Existing captioning methods for scientific figures rely on figure-caption pairs extracted from documents for training, many of which fall short with respect to metrics like helpfulness, explainability, and visual-descriptiveness [15], leading to generated captions being misaligned with reader preferences. To enable the generation of high-quality figure captions, we introduce FigCaps-HF, a new framework for figure-caption generation that can incorporate domain expert feedback in generating captions optimized for reader preferences. Our framework comprises 1) an automatic method for evaluating the quality of figure-caption pairs, and 2) a novel reinforcement learning with human feedback (RLHF) method to optimize a generative figure-to-caption model for reader preferences. We demonstrate the effectiveness of our simple learning framework by improving performance over standard fine-tuning across different types of models. In particular, when using BLIP as the base model, our RLHF framework achieves a mean gain of 35.7%, 16.9%, and 9% in ROUGE, BLEU, and Meteor, respectively. Finally, we release a large-scale benchmark dataset with human feedback on figure-caption pairs to enable further evaluation and development of RLHF techniques for this problem.",0 "Monolayer hexagonal boron nitride is a prototypical planar two-dimensional material and has been the subject of many investigations of its exceptional vibrational, spectroscopic and transport properties. The lattice thermal conductivity remains quite uncertain, with theoretical and experimental reports varying between 218 and 1060 Wm$^{-1}$K$^{-1}$. It has a strong temperature evolution and is sensitive to strain effects and isotope concentrations. While the impact of isotope scattering has been widely studied and is well understood, nuclear quantum effects and 4-phonon scattering have so far been neglected. Monolayer hexagonal boron nitride is composed of light elements, and further has its 3-phonon scattering phase space restricted by mirror plane symmetry, so these effects may be of similar order to isotope scattering, and would lead to a completely different understanding of the fundamental processes limiting the lattice thermal conductivity for this system. In this work, we use both classical and path-integral molecular dynamics, in conjunction with the Temperature Dependent Effective Potential method, to compute temperature-dependent renormalized phonons including isotope scattering, 3-phonon scattering, 4-phonon scattering and nuclear quantum effects. We show the impact of the latter two on the lattice thermal conductivity for a large temperature range, as well as their impact on the phonon lifetimes. 
Overall, our work provides a robust framework for calculations of the lattice thermal conductivity in solids, offering quantitative improvements and physical understanding that help explain the variety of results found in the literature.",0 "Malicious package detection has become a critical task in ensuring the security and stability of PyPI. Existing detection approaches have focused on advancing model selection, evolving from traditional machine learning (ML) models to large language models (LLMs). However, as the complexity of the model increases, the time consumption also increases, which raises the question of whether a lightweight model achieves effective detection. Through empirical research, we demonstrate that collecting a sufficiently comprehensive feature set enables even traditional ML models to achieve outstanding performance. However, with the continuous emergence of new malicious packages, considerable human and material resources are required for feature analysis. Also, traditional ML model-based approaches lack explainability for malicious packages. Therefore, we propose a novel approach, MalGuard, based on graph centrality analysis and the LIME (Local Interpretable Model-agnostic Explanations) algorithm to detect malicious packages. To overcome the above two challenges, we leverage graph centrality analysis to extract sensitive APIs automatically to replace manual analysis. To understand the sensitive APIs, we further refine the feature set using an LLM and integrate the LIME algorithm with ML models to provide explanations for malicious packages. We evaluated MalGuard against six SOTA baselines with the same settings. Experimental results show that our proposed MalGuard improves precision by 0.5%-33.2% and recall by 1.8%-22.1%. With MalGuard, we successfully identified 113 previously unknown malicious packages from a pool of 64,348 newly-uploaded packages over a five-week period, and 109 of them have been removed by the PyPI officials.",0 "People need to internalize the skills of AI agents to improve their own capabilities. Our paper focuses on Mahjong, a multiplayer game involving imperfect information and requiring effective long-term decision-making amidst randomness and hidden information. Through the efforts of AI researchers, several impressive Mahjong AI agents have already achieved performance levels comparable to those of professional human players; however, these agents are often treated as black boxes from which few insights can be gleaned. This paper introduces Mxplainer, a parameterized search algorithm that can be converted into an equivalent neural network to learn the parameters of black-box agents. Experiments conducted on AI and human player data demonstrate that the learned parameters provide human-understandable insights into these agents' characteristics and play styles. In addition to analyzing the learned parameters, we also showcase how our search-based framework can locally explain the decision-making processes of black-box agents for most Mahjong game states.",0 "Why do we give the explanations we do? Recent work has suggested that we should think of explanation as a kind of cooperative social interaction, between a why-question-asker and an explainer. Here, we apply this perspective to consider the role that emotion plays in this social interaction. We develop a computational framework for modeling explainers who consider the emotional impact an explanation might have on a listener. 
We test our framework by using it to model human intuitions about how a doctor might explain to a patient why they have a disease, taking into account the patient's propensity for regret. Our model predicts human intuitions well, better than emotion-agnostic ablations, suggesting that people do indeed reason about emotion when giving explanations.",2 "Building trust in reinforcement learning (RL) agents requires understanding why they make certain decisions, especially in high-stakes applications like robotics, healthcare, and finance. Existing explainability methods often focus on single states or entire trajectories, either providing only local, step-wise insights or attributing decisions to coarse, episode-level summaries. Both approaches miss the recurring strategies and temporally extended patterns that actually drive agent behavior across multiple decisions. We address this gap by proposing a fully offline, reward-free framework for behavior discovery and segmentation, enabling the attribution of actions to meaningful and interpretable behavior segments that capture recurring patterns appearing across multiple trajectories. Our method identifies coherent behavior clusters from state-action sequences and attributes individual actions to these clusters for fine-grained, behavior-centric explanations. Evaluations on four diverse offline RL environments show that our approach discovers meaningful behaviors and outperforms trajectory-level baselines in fidelity, human preference, and cluster coherence. Our code is publicly available.",2 "Despite promising developments in Explainable Artificial Intelligence, the practical value of XAI methods remains under-explored and insufficiently validated in real-world settings. Robust and context-aware evaluation is essential, not only to produce understandable explanations but also to ensure their trustworthiness and usability for intended users; yet it tends to be overlooked because there are no clear guidelines on how to design an evaluation with users. This study addresses this gap with two main goals: (1) to develop a framework of well-defined, atomic properties that characterise the user experience of XAI in healthcare; and (2) to provide clear, context-sensitive guidelines for defining evaluation strategies based on system characteristics. We conducted a systematic review of 82 user studies, sourced from five databases, all situated within healthcare settings and focused on evaluating AI-generated explanations. The analysis was guided by a predefined coding scheme informed by an existing evaluation framework, complemented by inductive codes developed iteratively. The review yields three key contributions: (1) a synthesis of current evaluation practices, highlighting a growing focus on human-centred approaches in healthcare XAI; (2) insights into the interrelations among explanation properties; and (3) an updated framework and a set of actionable guidelines to support interdisciplinary teams in designing and implementing effective evaluation strategies for XAI systems tailored to specific application contexts.",2 "Large Language Models (LLMs) are set to reshape cybersecurity by augmenting red and blue team operations. Red teams can exploit LLMs to plan attacks, craft phishing content, simulate adversaries, and generate exploit code. Conversely, blue teams may deploy them for threat intelligence synthesis, root cause analysis, and streamlined documentation. This dual capability introduces both transformative potential and serious risks. 
This position paper maps LLM applications across cybersecurity frameworks such as MITRE ATT&CK and the NIST Cybersecurity Framework (CSF), offering a structured view of their current utility and limitations. While LLMs demonstrate fluency and versatility across various tasks, they remain fragile in high-stakes, context-heavy environments. Key limitations include hallucinations, limited context retention, poor reasoning, and sensitivity to prompts, which undermine their reliability in operational settings. Moreover, real-world integration raises concerns around dual-use risks, adversarial misuse, and diminished human oversight. Malicious actors could exploit LLMs to automate reconnaissance, obscure attack vectors, and lower the technical threshold for executing sophisticated attacks. To ensure safer adoption, we recommend maintaining human-in-the-loop oversight, enhancing model explainability, integrating privacy-preserving mechanisms, and building systems robust to adversarial exploitation. As organizations increasingly adopt AI driven cybersecurity, a nuanced understanding of LLMs' risks and operational impacts is critical to securing their defensive value while mitigating unintended consequences.",0 "In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used. In this position paper, we argue for a novel and pragmatic perspective: Explainable machine learning needs to recognize its parallels with applied statistics. Concretely, explanations are statistics of high-dimensional functions, and we should think about them analogously to traditional statistical quantities. Among others, this implies that we must think carefully about the matter of interpretation, or how the explanations relate to intuitive questions that humans have about the world. The fact that this is scarcely being discussed in research papers is one of the main drawbacks of the current literature. Moving forward, the analogy between explainable machine learning and applied statistics suggests fruitful ways for how research practices can be improved.",0 "The spreading dynamics of infectious diseases is influenced by individual behaviours, which are in turn affected by the level of awareness about the epidemic. Modelling the co-evolution of disease transmission and behavioural changes within a population enables better understanding, prediction and control of epidemics. Here, our primary goal is to provide an overview of the most popular modelling approaches, ranging from compartmental mean-field to agent-based models, with a particular focus on how behavioural factors are incorporated into epidemic dynamics. We classify modelling approaches based on the fundamental conceptual distinction between models of behaviours and models of behavioural determinants (such as awareness, beliefs, opinions, or trust); in particular, we observe that most studies model and interpret the variables related to individual responses either as behaviours or as determinants, with the implicit assumption that they correlate linearly. Based on preliminary empirical observations, we then challenge this assumption by analysing a recent dataset about time series of social indicators, collected during the COVID-19 pandemic. 
We examine the case study of Italian regions and we discover that behavioural responses are poorly explained by awareness, beliefs or trust, thereby calling for a careful interpretation of the modelling assumptions and for the development of further models, which fully account for the inherent complexity of individual responses and human behaviours.",0 "Concept-based approaches, which aim to identify human-understandable concepts within a model's internal representations, are a promising method for interpreting embeddings from deep neural network models, such as CLIP. While these approaches help explain model behavior, current methods lack statistical rigor, making it challenging to validate identified concepts and compare different techniques. To address this challenge, we introduce a hypothesis testing framework that quantifies rotation-sensitive structures within the CLIP embedding space. Once such structures are identified, we propose a post-hoc concept decomposition method. Unlike existing approaches, it offers theoretical guarantees that discovered concepts represent robust, reproducible patterns (rather than method-specific artifacts) and outperforms other techniques in terms of reconstruction error. Empirically, we demonstrate that our concept-based decomposition algorithm effectively balances reconstruction accuracy with concept interpretability and helps mitigate spurious cues in data. Applied to a popular spurious correlation dataset, our method yields a 22.6% increase in worst-group accuracy after removing spurious background concepts.",0 "As image generators produce increasingly realistic images, concerns about potential misuse continue to grow. Supervised detection relies on large, curated datasets and struggles to generalize across diverse generators. In this work, we investigate the use of pre-trained Vision-Language Models (VLMs) for zero-shot detection of AI-generated images. While off-the-shelf VLMs exhibit some task-specific reasoning and chain-of-thought prompting offers gains, we show that task-aligned prompting elicits more focused reasoning and significantly improves performance without fine-tuning. Specifically, prefixing the model's response with the phrase ""Let's examine the style and the synthesis artifacts"" -- a method we call zero-shot-s$^2$ -- boosts Macro F1 scores by 8%-29%. These gains are consistent for two widely used open-source models and across three recent, diverse datasets spanning human faces, objects, and animals with images generated by 16 different models -- demonstrating strong generalization. We further evaluate the approach across three additional model sizes and observe improvements in most dataset-model combinations -- suggesting robustness to model scale. Surprisingly, self-consistency, a behavior previously observed in language reasoning, where aggregating answers from diverse reasoning paths improves performance, also holds in this setting. Even here, zero-shot-s$^2$ scales better than chain-of-thought in most cases -- indicating that it elicits more useful diversity. Our findings show that task-aligned prompts elicit more focused reasoning and enhance latent capabilities in VLMs, like the detection of AI-generated images -- offering a simple, generalizable, and explainable alternative to supervised methods. Our code is publicly available on github: https://github.com/Zoher15/Zero-shot-s2.",0 "Standard quantum information theory is founded on the assumption that multi-party state space possesses a tensor product structure. 
Anyons, as quasiparticles in two-dimensional systems, exhibit unique entanglement properties that differ from the conventional quantum systems, resulting from the absence of a tensor product structure in their state spaces. This motivates us to investigate the relationship between Bell nonlocality and entanglement in anyonic states. Specifically, we find that certain pure anyonic states with non-zero anyonic entanglement entropy (AEE) are local, yet exhibit nonlocality when subjected to collective measurements on multiple copies-a phenomenon known as superactivation of nonlocality, which is typically observed in conventional mixed states. To analyze this, we decompose the total entanglement of anyonic states into two components: one from the tensor product structure and the other representing residual contributions. By studying their asymptotic behavior, we find that the former gradually increases and approaches the AEE while the latter diminishes with the number of copies. Crucially, the entanglement component associated with the tensor product structure demonstrates a significant correlation with nonlocality, which explains the observed superactivation of nonlocality. Our findings provide new insights into the connection between entanglement and nonlocality in anyonic systems.",0 "Current advances in AI and its applicability have highlighted the need to ensure its trustworthiness for legal, ethical, and even commercial reasons. Sub-symbolic machine learning algorithms, such as the LLMs, simulate reasoning but hallucinate and their decisions cannot be explained or audited (crucial aspects for trustworthiness). On the other hand, rule-based reasoners, such as Cyc, are able to provide the chain of reasoning steps but are complex and use a large number of reasoners. We propose a middle ground using s(CASP), a goal-directed constraint-based answer set programming reasoner that employs a small number of mechanisms to emulate reliable and explainable human-style commonsense reasoning. In this paper, we explain how s(CASP) supports the 16 desiderata for trustworthy AI introduced by Doug Lenat and Gary Marcus (2023), and two additional ones: inconsistency detection and the assumption of alternative worlds. To illustrate the feasibility and synergies of s(CASP), we present a range of diverse applications, including a conversational chatbot and a virtually embodied reasoner.",0 "The concept bottleneck model (CBM), as a technique improving interpretability via linking predictions to human-understandable concepts, makes high-risk and life-critical medical image classification credible. Typically, existing CBM methods associate the final layer of visual encoders with concepts to explain the model's predictions. However, we empirically discover the phenomenon of concept preference variation, that is, the concepts are preferably associated with the features at different layers than those only at the final layer; yet a blind last-layer-based association neglects such a preference variation and thus weakens the accurate correspondences between features and concepts, impairing model interpretability. 
To address this issue, we propose a novel Multi-layer Visual Preference-enhanced Concept Bottleneck Model (MVP-CBM), which comprises two key novel modules: (1) intra-layer concept preference modeling, which captures the preferred association of different concepts with features at various visual layers, and (2) multi-layer concept sparse activation fusion, which sparsely aggregates concept activations from multiple layers to enhance performance. Thus, by explicitly modeling concept preferences, MVP-CBM can comprehensively leverage multi-layer visual information to provide a more nuanced and accurate explanation of model decisions. Extensive experiments on several public medical classification benchmarks demonstrate that MVP-CBM achieves state-of-the-art accuracy and interpretability, verifying its superiority. Code is available at https://github.com/wcj6/MVP-CBM.",0 "In social psychology and cognitive science, there has been much interest in studying category stereotypes. However, we still lack a consensual mathematical definition or framework, which is necessary for a deeper understanding of stereotypes in human cognition. In this paper, we use graph theory to portray category stereotypes in human cognition, based on pairs of labels having special relations. By using methods and conclusions in graph theory (including algebraic graph theory and vertex coloring) as well as strict ratiocination, we give criteria for judging the stability of a given stereotype, some of which are computationally practicable. We also define the chromatic stability index (CSI) to measure the stability of a stereotype in human cognition, as well as to provide its precise range. From the perspective of stereotype graphs and CSI, we may explain why stereotypes can easily persist in human cognition.",0 "The multifaceted nature of subjective experience poses a challenge to the study of consciousness. Traditional neuroscientific approaches often concentrate on isolated facets, such as perceptual awareness or the global state of consciousness, and construct a theory around the relevant empirical paradigms and findings. Theories of consciousness are, therefore, often difficult to compare; indeed, there might be little overlap in the phenomena such theories aim to explain. Here, we take a different approach: starting with active inference, a first-principles framework for modelling behaviour as (approximate) Bayesian inference, and building up to a minimal theory of consciousness, which emerges from the shared features of computational models derived under active inference. We review a body of work applying active inference models to the study of consciousness and argue that there is implicit in all these models a small set of theoretical commitments that point to a minimal (and testable) theory of consciousness.",0 "We explore the role of ontologies in enhancing hybrid modeling and simulation through improved semantic rigor, model reusability, and interoperability across systems, disciplines, and tools. By distinguishing between methodological and referential ontologies, we demonstrate how these complementary approaches address interoperability challenges along three axes: Human-Human, Human-Machine, and Machine-Machine. Techniques such as competency questions, ontology design patterns, and layered strategies are highlighted for promoting shared understanding and formal precision. 
Integrating ontologies with Semantic Web Technologies, we showcase their dual role as descriptive domain representations and prescriptive guides for simulation construction. Four application cases - sea-level rise analysis, Industry 4.0 modeling, artificial societies for policy support, and cyber threat evaluation - illustrate the practical benefits of ontology-driven hybrid simulation workflows. We conclude by discussing challenges and opportunities in ontology-based hybrid M&S, including tool integration, semantic alignment, and support for explainable AI.",0 "Artificial Intelligence (AI) is rapidly embedded in critical decision-making systems, however their foundational ``black-box'' models require eXplainable AI (XAI) solutions to enhance transparency, which are mostly oriented to experts, making no sense to non-experts. Alarming evidence about AI's unprecedented human values risks brings forward the imperative need for transparent human-centered XAI solutions. In this work, we introduce a domain-, model-, explanation-agnostic, generalizable and reproducible framework that ensures both transparency and human-centered explanations tailored to the needs of both experts and non-experts. The framework leverages Large Language Models (LLMs) and employs in-context learning to convey domain- and explainability-relevant contextual knowledge into LLMs. Through its structured prompt and system setting, our framework encapsulates in one response explanations understandable by non-experts and technical information to experts, all grounded in domain and explainability principles. To demonstrate the effectiveness of our framework, we establish a ground-truth contextual ``thesaurus'' through a rigorous benchmarking with over 40 data, model, and XAI combinations for an explainable clustering analysis of a well-being scenario. Through a comprehensive quality and human-friendliness evaluation of our framework's explanations, we prove high content quality through strong correlations with ground-truth explanations (Spearman rank correlation=0.92) and improved interpretability and human-friendliness to non-experts through a user study (N=56). Our overall evaluation confirms trust in LLMs as HCXAI enablers, as our framework bridges the above Gaps by delivering (i) high-quality technical explanations aligned with foundational XAI methods and (ii) clear, efficient, and interpretable human-centered explanations for non-experts.",2 "Interpreting large volumes of high-dimensional, unlabeled data in a manner that is comprehensible to humans remains a significant challenge across various domains. In unsupervised healthcare data analysis, interpreting clustered data can offer meaningful insights into patients' health outcomes, which hold direct implications for healthcare providers. This paper addresses the problem of interpreting clustered sensor data collected from older adult patients recovering from lower-limb fractures in the community. A total of 560 days of multimodal sensor data, including acceleration, step count, ambient motion, GPS location, heart rate, and sleep, alongside clinical scores, were remotely collected from patients at home. Clustering was first carried out separately for each data modality to assess the impact of feature sets extracted from each modality on patients' recovery trajectories. Then, using context-aware prompting, a large language model was employed to infer meaningful cluster labels for the clusters derived from each modality. 
The quality of these clusters and their corresponding labels was validated through rigorous statistical testing and visualization against clinical scores collected alongside the multimodal sensor data. The results demonstrated the statistical significance of most modality-specific cluster labels generated by the large language model with respect to clinical scores, confirming the efficacy of the proposed method for interpreting sensor data in an unsupervised manner. This unsupervised data analysis approach, relying solely on sensor data, enables clinicians to identify at-risk patients and take timely measures to improve health outcomes.",2 "The era of Large Language Models (LLMs) presents a new opportunity for interpretability--agentic interpretability: a multi-turn conversation with an LLM wherein the LLM proactively assists human understanding by developing and leveraging a mental model of the user, which in turn enables humans to develop better mental models of the LLM. Such conversation is a new capability that traditional `inspective' interpretability methods (opening the black-box) do not use. Having a language model that aims to teach and explain--beyond just knowing how to talk--is similar to a teacher whose goal is to teach well, understanding that their success will be measured by the student's comprehension. While agentic interpretability may trade off completeness for interactivity, making it less suitable for high-stakes safety situations with potentially deceptive models, it leverages a cooperative model to discover potentially superhuman concepts that can improve humans' mental model of machines. Agentic interpretability introduces challenges, particularly in evaluation, due to what we call its `human-entangled-in-the-loop' nature (human responses are an integral part of the algorithm), making the design and evaluation difficult. We discuss possible solutions and proxy goals. As LLMs approach human parity in many tasks, agentic interpretability's promise is to help humans learn the potentially superhuman concepts of the LLMs, rather than see us fall increasingly far from understanding them.",0 "The proliferation of Large Language Models (LLMs) in high-stakes applications such as medical (self-)diagnosis and preliminary triage raises significant ethical and practical concerns about the effectiveness, appropriateness, and possible harmfulness of the use of these technologies for health-related concerns and queries. Some prior work has considered the effectiveness of LLMs in answering expert-written health queries/prompts, questions from medical examination banks, or queries based on pre-existing clinical cases. Unfortunately, these existing studies completely ignore an in-the-wild evaluation of the effectiveness of LLMs in answering everyday health concerns and queries typically asked by general users, which corresponds to the more prevalent use case for LLMs. To address this research gap, this paper presents the findings from a university-level competition that leveraged a novel, crowdsourced approach for evaluating the effectiveness of LLMs in answering everyday health queries. Over the course of a week, a total of 34 participants prompted four publicly accessible LLMs with 212 real (or imagined) health concerns, and the LLM-generated responses were evaluated by a team of nine board-certified physicians. At a high level, our findings indicate that on average, 76% of the 212 LLM responses were deemed to be accurate by physicians. 
Further, with the help of medical professionals, we investigated whether RAG versions of these LLMs (powered with a comprehensive medical knowledge base) can improve the quality of responses generated by LLMs. Finally, we also derive qualitative insights to explain our quantitative findings by conducting interviews with seven medical professionals who were shown all the prompts in our competition. This paper aims to provide a more grounded understanding of how LLMs perform in real-world everyday health communication.",2 "Many programs evaluated in observational studies incorporate a sequential structure, where individuals may be assigned to various programs over time. While this complexity is often simplified by analyzing programs at single points in time, this paper reviews, explains, and applies methods for program evaluation within a sequential framework. It outlines the assumptions required for identification under dynamic confounding and demonstrates how extending sequential estimands to dynamic policies enables the construction of more realistic counterfactuals. Furthermore, the paper explores recently developed methods for estimating effects across multiple treatments and time periods, utilizing Double Machine Learning (DML), a flexible estimator that avoids parametric assumptions while preserving desirable statistical properties. Using Swiss administrative data, the methods are demonstrated through an empirical application assessing the participation of unemployed individuals in active labor market policies, where assignment decisions by caseworkers can be reconsidered between two periods. The analysis identifies a temporary wage subsidy as the most effective intervention, on average, even after adjusting for its extended duration compared to other programs. Overall, DML-based analysis of dynamic policies proves to be a useful approach within the program evaluation toolkit.",0 "Recent advances in Large Language Models have demonstrated their capabilities across a variety of tasks. However, automatically extracting implicit knowledge from natural language remains a significant challenge, as machines lack active experience with the physical world. Given this scenario, semantic knowledge graphs can serve as conceptual spaces that guide the automated text generation reasoning process to achieve more efficient and explainable results. In this paper, we apply a logic-augmented generation (LAG) framework that leverages the explicit representation of a text through a semantic knowledge graph and applies it in combination with prompt heuristics to elicit implicit analogical connections. This method generates extended knowledge graph triples representing implicit meaning, enabling systems to reason on unlabeled multimodal data regardless of the domain. We validate our work through three metaphor detection and understanding tasks across four datasets, as they require deep analogical reasoning capabilities. The results show that this integrated approach surpasses current baselines, performs better than humans in understanding visual metaphors, and enables more explainable reasoning processes, though still has inherent limitations in metaphor understanding, especially for domain-specific metaphors. 
Furthermore, we propose a thorough error analysis, discussing issues with metaphorical annotations and current evaluation methods.",0 "This study explores human action recognition using a three-class subset of the COCO image corpus, benchmarking models from simple fully connected networks to transformer architectures. The binary Vision Transformer (ViT) achieved 90% mean test accuracy, significantly exceeding multiclass classifiers such as convolutional networks (approximately 35%) and CLIP-based models (approximately 62-64%). A one-way ANOVA (F = 61.37, p < 0.001) confirmed these differences are statistically significant. Qualitative analysis with SHAP explainer and LeGrad heatmaps indicated that the ViT localizes pose-specific regions (e.g., lower limbs for walking or running), while simpler feed-forward models often focus on background textures, explaining their errors. These findings emphasize the data efficiency of transformer representations and the importance of explainability techniques in diagnosing class-specific failures.",0 "We give further evidence that the matrix-tensor model studied in \cite{belin2023} is dual to AdS$_{3}$ gravity including the sum over topologies. This provides a 3D version of the duality between JT gravity and an ensemble of random Hamiltonians, in which the matrix and tensor provide random CFT$_2$ data subject to a potential that incorporates the bootstrap constraints. We show how the Feynman rules of the ensemble produce a sum over all three-manifolds and how surgery is implemented by the matrix integral. The partition functions of the resulting 3d gravity theory agree with Virasoro TQFT (VTQFT) on a fixed, hyperbolic manifold. However, on non-hyperbolic geometries, our 3d gravity theory differs from VTQFT, leading to a difference in the eigenvalue statistics of the associated ensemble. As explained in \cite{belin2023}, the Schwinger-Dyson (SD) equations of the matrix-tensor integral play a crucial role in understanding how gravity emerges in the limit that the ensemble localizes to exact CFT's. We show how the SD equations can be translated into a combinatorial problem about three-manifolds.",0 "Background and Objective: Prototype-based methods improve interpretability by learning fine-grained part-prototypes; however, their visualization in the input pixel space is not always consistent with human-understandable biomarkers. In addition, well-known prototype-based approaches typically learn extremely granular prototypes that are less interpretable in medical imaging, where both the presence and extent of biomarkers and lesions are critical. Methods: To address these challenges, we propose PiPViT (Patch-based Visual Interpretable Prototypes), an inherently interpretable prototypical model for image recognition. Leveraging a vision transformer (ViT), PiPViT captures long-range dependencies among patches to learn robust, human-interpretable prototypes that approximate lesion extent only using image-level labels. Additionally, PiPViT benefits from contrastive learning and multi-resolution input processing, which enables effective localization of biomarkers across scales. Results: We evaluated PiPViT on retinal OCT image classification across four datasets, where it achieved competitive quantitative performance compared to state-of-the-art methods while delivering more meaningful explanations. Moreover, quantitative evaluation on a hold-out test set confirms that the learned prototypes are semantically and clinically relevant. 
We believe PiPViT can transparently explain its decisions and assist clinicians in understanding diagnostic outcomes. Github page: https://github.com/marziehoghbaie/PiPViT",0 "Indicators of Compromise (IoCs) are critical for threat detection and response, marking malicious activity across networks and systems. Yet, the effectiveness of automated IoC extraction systems is fundamentally limited by one key issue: the lack of high-quality ground truth. Current extraction tools rely either on manually extracted ground truth, which is labor-intensive and costly, or on automated ground truth creation methods that include non-malicious artifacts, leading to inflated false positive (FP) rates and unreliable threat intelligence. In this work, we analyze the shortcomings of existing ground truth creation strategies and address them by introducing the first hybrid human-in-the-loop pipeline for IoC extraction, which combines a large language model-based classifier (LANCE) with expert analyst validation. Our system improves precision through explainable, context-aware labeling and reduces analysts' work factor by 43% compared to manual annotation, as demonstrated in our evaluation with six analysts. Using this approach, we produce PRISM, a high-quality, publicly available benchmark of 1,791 labeled IoCs from 50 real-world threat reports. PRISM supports both fair evaluation and training of IoC extraction methods and enables reproducible research grounded in expert-validated indicators.",0 "The integration of artificial intelligence (AI) into social science research practices raises significant technological, methodological, and ethical issues. We present a community-centric study drawing on 284 survey responses and 15 semi-structured interviews with social scientists, describing their familiarity with, perceptions of the usefulness of, and ethical concerns about the use of AI in their field. A crucial innovation in study design is to split our survey sample in half, providing the same questions to each -- but randomizing whether participants were asked about ""AI"" or ""Machine Learning"" (ML). We find that the use of AI in research settings has increased significantly among social scientists in step with the widespread popularity of generative AI (genAI). These tools have been used for a range of tasks, from summarizing literature reviews to drafting research papers. Some respondents used these tools out of curiosity but were dissatisfied with the results, while others have now integrated them into their typical workflows. Participants, however, also reported concerns with the use of AI in research contexts. This is a departure from more traditional ML algorithms which they view as statistically grounded. Participants express greater trust in ML, citing its relative transparency compared to black-box genAI systems. Ethical concerns, particularly around automation bias, deskilling, research misconduct, complex interpretability, and representational harm, are raised in relation to genAI. To guide this transition, we offer recommendations for AI developers, researchers, educators, and policymakers focusing on explainability, transparency, ethical safeguards, sustainability, and the integration of lived experiences into AI design and evaluation processes.",2 "A central goal for mechanistic interpretability has been to identify the right units of analysis in large language models (LLMs) that causally explain their outputs. 
While early work focused on individual neurons, evidence that neurons often encode multiple concepts has motivated a shift toward analyzing directions in activation space. A key question is how to find directions that capture interpretable features in an unsupervised manner. Current methods rely on dictionary learning with sparse autoencoders (SAEs), commonly trained over residual stream activations to learn directions from scratch. However, SAEs often struggle in causal evaluations and lack intrinsic interpretability, as their learning is not explicitly tied to the computations of the model. Here, we tackle these limitations by directly decomposing MLP activations with semi-nonnegative matrix factorization (SNMF), such that the learned features are (a) sparse linear combinations of co-activated neurons, and (b) mapped to their activating inputs, making them directly interpretable. Experiments on Llama 3.1, Gemma 2 and GPT-2 show that SNMF derived features outperform SAEs and a strong supervised baseline (difference-in-means) on causal steering, while aligning with human-interpretable concepts. Further analysis reveals that specific neuron combinations are reused across semantically-related features, exposing a hierarchical structure in the MLP's activation space. Together, these results position SNMF as a simple and effective tool for identifying interpretable features and dissecting concept representations in LLMs.",0 "Collision avoidance -- involving a rapid threat detection and quick execution of the appropriate evasive maneuver -- is a critical aspect of driving. However, existing models of human collision avoidance behavior are fragmented, focusing on specific scenarios or only describing certain aspects of the avoidance behavior, such as response times. This paper addresses these gaps by proposing a novel computational cognitive model of human collision avoidance behavior based on active inference. Active inference provides a unified approach to modeling human behavior: the minimization of free energy. Building on prior active inference work, our model incorporates established cognitive mechanisms such as evidence accumulation to simulate human responses in two distinct collision avoidance scenarios: front-to-rear lead vehicle braking and lateral incursion by an oncoming vehicle. We demonstrate that our model explains a wide range of previous empirical findings on human collision avoidance behavior. Specifically, the model closely reproduces both aggregate results from meta-analyses previously reported in the literature and detailed, scenario-specific effects observed in a recent driving simulator study, including response timing, maneuver selection, and execution. Our results highlight the potential of active inference as a unified framework for understanding and modeling human behavior in complex real-life driving tasks.",0 "In this work, we provide an extended discussion of a new approach to explainable Reinforcement Learning called Diverse Near-Optimal Alternatives (DNA), first proposed at L4DC 2025. DNA seeks a set of reasonable ""options"" for trajectory-planning agents, optimizing policies to produce qualitatively diverse trajectories in Euclidean space. In the spirit of explainability, these distinct policies are used to ""explain"" an agent's options in terms of available trajectory shapes from which a human user may choose. In particular, DNA applies to value function-based policies on Markov decision processes where agents are limited to continuous trajectories. 
Here, we describe DNA, which uses reward shaping in local, modified Q-learning problems to solve for distinct policies with guaranteed epsilon-optimality. We show that it successfully returns qualitatively different policies that constitute meaningfully different ""options"" in simulation, including a brief comparison to related approaches in the stochastic optimization field of Quality Diversity. Beyond the explanatory motivation, this work opens new possibilities for exploration and adaptive planning in RL.",0 "Platooning or cooperative adaptive cruise control (CACC) has been investigated for decades, but debate about its lasting impact is still ongoing. While the benefits of platooning and the formation of platoons are well understood for trucks, they are less clear for passenger cars, which have a higher heterogeneity in trips and drivers' preferences. Most importantly, it remains unclear how to form platoons of passenger cars in order to optimize the personal benefit for the individual driver. To this end, in this paper, we propose a novel platoon formation algorithm that optimizes the personal benefit for drivers of individual passenger cars. For computing vehicle-to-platoon assignments, the algorithm utilizes a new metric that we propose to evaluate the personal benefits of various driving systems, including platooning. By combining fuel and travel time costs into a single monetary value, drivers can estimate overall trip costs according to a personal monetary value for time spent. This provides an intuitive way for drivers to understand and compare the benefits of driving systems like human driving, adaptive cruise control (ACC), and, of course, platooning. Unlike previous similarity-based methods, our proposed algorithm forms platoons only when beneficial for the driver, rather than solely for platooning. We demonstrate the new metric for the total trip cost in a numerical analysis and explain its interpretation. Results of a large-scale simulation study demonstrate that our proposed platoon formation algorithm outperforms normal ACC as well as previous similarity-based platooning approaches by balancing fuel savings and travel time, independent of traffic and drivers' time cost.",0 "Large Language Models (LLMs) have advanced greatly in the last year, resulting in their increasing adoption across several fields to tackle natural language tasks. The integration of these models in robotics can also help to improve several aspects such as human-robot interaction, navigation, planning and decision-making. Therefore, this paper introduces llama\_ros, a tool designed to integrate quantized Large Language Models (LLMs) into robotic systems using ROS 2. Leveraging llama.cpp, a highly optimized runtime engine, llama\_ros enables the efficient execution of quantized LLMs as edge artificial intelligence (AI) in robotics systems with resource-constrained environments, addressing the challenges of computational efficiency and memory limitations. By deploying quantized LLMs, llama\_ros empowers robots to leverage natural language understanding and generation for enhanced decision-making and interaction, which can be paired with prompt engineering, knowledge graphs, ontologies or other tools to improve the capabilities of autonomous robots. 
Additionally, this paper provides insights into use cases of llama\_ros for planning and explainability in robotics.",0 "Prevailing accounts in both multi-agent AI and the social sciences explain social structure through top-down abstractions -- such as institutions, norms, or trust -- yet lack simulatable models of how such structures emerge from individual behavior. Ethnographic and archaeological evidence suggests that reciprocity served as the foundational mechanism of early human societies, enabling economic circulation, social cohesion, and interpersonal obligation long before the rise of formal institutions. Modern financial systems such as credit and currency can likewise be viewed as scalable extensions of reciprocity, formalizing exchange across time and anonymity. Building on this insight, we argue that reciprocity is not merely a local or primitive exchange heuristic, but the scalable substrate from which large-scale social structures can emerge. We propose a three-stage framework to model this emergence: reciprocal dynamics at the individual level, norm stabilization through shared expectations, and the construction of durable institutional patterns. This approach offers a cognitively minimal, behaviorally grounded foundation for simulating how large-scale social systems can emerge from decentralized reciprocal interaction.",0 "Human planning is efficient -- it frugally deploys limited cognitive resources to accomplish difficult tasks -- and flexible -- adapting to novel problems and environments. Computational approaches suggest that people construct simplified mental representations of their environment, balancing the complexity of a task representation with its utility. These models imply a nested optimisation in which planning shapes perception, and perception shapes planning -- but the perceptual and attentional mechanisms governing how this interaction unfolds remain unknown. Here, we harness virtual maze navigation to characterise how spatial attention controls which aspects of a task representation enter subjective awareness and are available for planning. We find that spatial proximity governs which aspects of a maze are available for planning, and that when task-relevant information follows natural (lateralised) contours of attention, people can more easily construct simplified and useful maze representations. This influence of attention varies considerably across individuals, explaining differences in people's task representations and behaviour. Inspired by the 'spotlight of attention' analogy, we incorporate the effects of visuospatial attention into existing computational accounts of value-guided construal. Together, our work bridges computational perspectives on perception and decision-making to better understand how individuals represent their environments in aid of planning.",0 "The new field of Explainable Planning (XAIP) has produced a variety of approaches to explain and describe the behavior of autonomous agents to human observers. Many summarize agent behavior in terms of the constraints, or ''rules,'' which the agent adheres to during its trajectories. In this work, we narrow the focus from summary to specific moments in individual trajectories, offering a ''pointwise-in-time'' view. Our novel framework, which we define on Linear Temporal Logic (LTL) rules, assigns an intuitive status to any rule in order to describe the trajectory progress at individual time steps; here, a rule is classified as active, satisfied, inactive, or violated. 
Given a trajectory, a user may query for the status of specific LTL rules at individual trajectory time steps. In this paper, we present this novel framework, named Rule Status Assessment (RSA), and provide an example of its implementation. We find that pointwise-in-time status assessment is useful as a post-hoc diagnostic, enabling a user to systematically track the agent's behavior with respect to a set of rules.",0 "Condition monitoring (CM) plays a crucial role in ensuring reliability and efficiency in the process industry. Although computerised maintenance systems effectively detect and classify faults, tasks like fault severity estimation and maintenance decisions still largely depend on human expert analysis. The analysis and decision-making automatically performed by current systems typically exhibit considerable uncertainty and high false alarm rates, leading to increased workload and reduced efficiency. This work integrates large language model (LLM)-based reasoning agents with CM workflows to address analyst and industry needs, namely reducing false alarms, enhancing fault severity estimation, improving decision support, and offering explainable interfaces. We propose MindRAG, a modular framework combining multimodal retrieval-augmented generation (RAG) with novel vector store structures designed specifically for CM data. The framework leverages existing annotations and maintenance work orders as surrogates for labels in a supervised learning protocol, addressing the common challenge of training predictive models on unlabelled and noisy real-world datasets. The primary contributions include: (1) an approach for structuring industry CM data into a semi-structured multimodal vector store compatible with LLM-driven workflows; (2) developing multimodal RAG techniques tailored for CM data; (3) developing practical reasoning agents capable of addressing real-world CM queries; and (4) presenting an experimental framework for integrating and evaluating such agents in realistic industrial scenarios. Preliminary results, evaluated with the help of an experienced analyst, indicate that MindRAG provides meaningful decision support for more efficient management of alarms, thereby improving the interpretability of CM systems.",0 "This paper examines the gap between the promises and real-world performance of emerging AI personal assistants. Drawing on interviews with early adopters of devices like Rabbit R1 and Humane AI Pin, as well as services like Ohai and Docus, we map user experiences through the lens of Uses and Gratifications and Uncertainty Reduction Theory. We identify three core types of user uncertainty -- functional, interactional, and social -- and explore how each disrupts different user gratifications. We show that while marketing hype fuels initial adoption, unmet expectations often result in frustration or abandonment. Our findings highlight the importance of transparency, task-specific design, and user control over contextual memory and personalization. We provide design and policy recommendations, including user-facing explainability tools and calls for regulatory benchmarks such as CI Bench, to guide ethical and interpretable AI integration. Our study offers actionable insights for creating more usable, trustworthy, and socially aligned AI assistants.",2 "The introductory programming course (CS1) at the university level is often perceived as particularly challenging, contributing to high dropout rates among Computer Science students. 
Identifying when and how students encounter difficulties in this course is critical for providing targeted support. This study explores the behavioral patterns of CS1 students at varying dropout risks using self-regulated learning (SRL) as the theoretical framework. Using learning analytics, we analyzed trace logs and task performance data from a virtual learning environment to map resource usage patterns and used student dropout prediction to distinguish between low and high dropout risk behaviors. Data from 47 consenting students were used to carry out the analysis. Additionally, self-report questionnaires from 29 participants enriched the interpretation of observed patterns. The findings reveal distinct weekly learning strategy types and categorize course behavior. Among low dropout risk students, three learning strategies were identified that differed in how students prioritized completing tasks and reading course materials. High dropout risk students exhibited nine different strategies, some representing temporary unsuccessful strategies that can be recovered from, while others indicated behaviors of students on the verge of dropping out. This study highlights the value of combining student behavior profiling with predictive learning analytics to explain dropout predictions and devise targeted interventions. Practical findings of the study can in turn be used to help teachers, teaching assistants, and other practitioners better recognize and address students on the verge of dropping out.",2 "Detecting harmful memes is essential for maintaining the integrity of online environments. However, current approaches often struggle with resource efficiency, flexibility, or explainability, limiting their practical deployment in content moderation systems. To address these challenges, we introduce U-CoT+, a novel framework for harmful meme detection. Instead of relying solely on prompting or fine-tuning multimodal models, we first develop a high-fidelity meme-to-text pipeline that converts visual memes into detail-preserving textual descriptions. This design decouples meme interpretation from meme classification, thus avoiding immediate reasoning over complex raw visual content and enabling resource-efficient harmful meme detection with general large language models (LLMs). Building on these textual descriptions, we further incorporate targeted, interpretable human-crafted guidelines to guide models' reasoning under zero-shot CoT prompting. As such, this framework allows for easy adaptation to different harmfulness detection criteria across platforms, regions, and over time, offering high flexibility and explainability. Extensive experiments on seven benchmark datasets validate the effectiveness of our framework, highlighting its potential for explainable and low-resource harmful meme detection using small-scale LLMs. Codes and data are available at: https://anonymous.4open.science/r/HMC-AF2B/README.md.",0 "Industrial processes must be robust and adaptable, as environments and tasks are often unpredictable, while operational errors remain costly and difficult to detect. AI-based control systems offer a path forward, yet typically depend on supervised learning with extensive labelled datasets, which limits their ability to generalize across variable and data-scarce industrial settings. Foundation models could enable broader reasoning and knowledge integration, but rarely deliver the quantitative precision demanded by engineering applications. 
Here, we introduce Control and Interpretation of Production via Hybrid Expertise and Reasoning (CIPHER): a vision-language-action (VLA) model framework aiming to replicate human-like reasoning for industrial control, instantiated in a commercial-grade 3D printer. It integrates a process expert, a regression model enabling quantitative characterization of system states required for engineering tasks. CIPHER also incorporates retrieval-augmented generation to access external expert knowledge and support physics-informed, chain-of-thought reasoning. This hybrid architecture exhibits strong generalization to out-of-distribution tasks. It interprets visual or textual inputs from process monitoring, explains its decisions, and autonomously generates precise machine instructions, without requiring explicit annotations. CIPHER thus lays the foundations for autonomous systems that act with precision, reason with context, and communicate decisions transparently, supporting safe and trusted deployment in industrial settings.",0 "The $\chi_{c1}(3872)$ state, first observed by the Belle collaboration with its quantum numbers identified as $J^{PC} = 1^{++}$, has been the subject of extensive research due to its intriguing properties. Several theoretical interpretations have been proposed to explain its unique characteristics, including the $\chi_{c1}(2P)$ assignment, a molecular $\bar{D}^{*}D/\bar{D}D^{*}$ configuration, a coupled-channel framework incorporating $c\bar{c}$ and di-meson degrees of freedom, and the compact tetraquark hypothesis. However, challenges remain in reconciling its mass coincidence with the threshold and the observed isospin violation within both the pure $c\bar{c}$ and compact tetraquark models. In this study, we examine the radiative decays of the $\chi_{c1}(1P)$ and $\chi_{c1}(3872)$ states in an effective field theory framework, incorporating triangle loops of $D$ and $D^{*}$ mesons. The model parameters are calibrated based on the observed branching fraction of the radiative decay mode $\chi_{c1}(1P) \to J/\psi \gamma$. Utilizing these fixed parameters, we predict the branching fractions $R_{\chi_{c1}(3872) \to J/\psi \gamma} \sim 10^{-1}$ and $R_{\chi_{c1}(3872) \to \psi(2S) \gamma} \sim 10^{-2}$, and the relative fraction $\mathcal{R}_{\Psi\gamma} \approx 0.109$. The work supports the argument that the $\chi_{c1}(3872)$ is unlikely to be a $c\bar{c}$ state.",0 "Objective: (1) To assess whether ONH biomechanics improves prediction of three progressive visual field loss patterns in glaucoma; (2) to use explainable AI to identify strain-sensitive ONH regions contributing to these predictions. Methods: We recruited 237 glaucoma subjects. The ONH of one eye was imaged under two conditions: (1) primary gaze and (2) primary gaze with IOP elevated to ~35 mmHg via ophthalmo-dynamometry. Glaucoma experts classified the subjects into four categories based on the presence of specific visual field defects: (1) superior nasal step (N=26), (2) superior partial arcuate (N=62), (3) full superior hemifield defect (N=25), and (4) other/non-specific defects (N=124). Automatic ONH tissue segmentation and digital volume correlation were used to compute IOP-induced neural tissue and lamina cribrosa (LC) strains. Biomechanical and structural features were input to a Geometric Deep Learning model. Three classification tasks were performed to detect: (1) superior nasal step, (2) superior partial arcuate, (3) full superior hemifield defect. 
For each task, the data were split into 80% training and 20% testing sets. Area under the curve (AUC) was used to assess performance. Explainable AI techniques were employed to highlight the ONH regions most critical to each classification. Results: Models achieved high AUCs of 0.77-0.88, showing that ONH strain improved VF loss prediction beyond morphology alone. The inferior and inferotemporal rim were identified as key strain-sensitive regions, contributing most to visual field loss prediction and showing progressive expansion with increasing disease severity. Conclusion and Relevance: ONH strain enhances prediction of glaucomatous VF loss patterns. The neuroretinal rim, rather than the LC, was the most critical region contributing to model predictions.",0 "The main goal of representation learning is to acquire meaningful representations from real-world sensory inputs without supervision. Representation learning explains some aspects of human development. Various neural network (NN) models have been proposed that acquire empirically good representations. However, the formulation of a good representation has not been established. We recently proposed a method for categorizing changes between a pair of sensory inputs. A unique feature of this approach is that transformations between two sensory inputs are learned to satisfy algebraic structural constraints. Conventional representation learning often assumes that disentangled independent feature axes are a good representation; however, we found that such a representation cannot account for conditional independence. To overcome this problem, we proposed a new method using group decomposition in Galois algebra theory. Although this method is promising for defining a more general representation, it assumes pixel-to-pixel translation without feature extraction, and can only process low-resolution images with no background, which prevents real-world application. In this study, we provide a simple method to apply our group decomposition theory to a more realistic scenario by combining feature extraction and object segmentation. We replace pixel translation with feature translation and formulate object segmentation as grouping features under the same transformation. We validated the proposed method on a practical dataset containing both real-world objects and backgrounds. We believe that our model will lead to a better understanding of human development of object recognition in the real world.",0 "Large Vision-Language Models (VLMs) now generate highly detailed, paragraph-length image captions, yet evaluating their factual accuracy remains challenging. Current methods often miss fine-grained errors, being designed for shorter texts or lacking datasets with verified inaccuracies. We introduce DOCCI-Critique, a benchmark with 1,400 VLM-generated paragraph captions (100 images, 14 VLMs) featuring over 10,216 sentence-level human annotations of factual correctness and explanatory rationales for errors, all within paragraph context. Building on this, we develop VNLI-Critique, a model for automated sentence-level factuality classification and critique generation. We highlight three key applications: (1) VNLI-Critique demonstrates robust generalization, validated by state-of-the-art performance on the M-HalDetect benchmark and strong results in CHOCOLATE claim verification. (2) The VNLI-Critique-driven AutoRater for DOCCI-Critique provides reliable VLM rankings, showing excellent alignment with human factuality judgments (e.g., 0.98 Spearman). 
(3) An innovative Critic-and-Revise pipeline, where critiques from VNLI-Critique guide LLM-based corrections, achieves substantial improvements in caption factuality (e.g., a 46% gain on DetailCaps-4870). Our work offers a crucial benchmark alongside practical tools, designed to significantly elevate the standards for fine-grained evaluation and foster the improvement of VLM image understanding. Project page: https://google.github.io/unblocking-detail-caption",0 "We revealed the resistive switching, negative differential resistance and charge accumulation effects in Hf0.5Zr0.5O2 nanopowders sintered by the auto-combustion sol-gel method and annealed at temperatures from 500{\deg}C to 800{\deg}C. The fraction of the orthorhombic phase, determined by X-ray diffraction (XRD), decreases from 91 vol.% to 7 vol.% with an increase in the annealing temperature from 600{\deg}C to 800{\deg}C. The electron paramagnetic resonance (EPR) spectra reveal a large amount of oxygen vacancies in the annealed samples; notably, the decrease in the orthorhombic phase fraction (observed with an increase in the annealing temperature) correlates with a decrease in the intensity of EPR spectral lines associated with the oxygen vacancies and impurities. This indicates the participation of oxygen vacancies and other defects in the formation of the orthorhombic phase in the Hf0.5Zr0.5O2 powders. To explain the results of electrophysical measurements, we compare the features of the current-voltage characteristics with the phase composition of the Hf0.5Zr0.5O2 powders and with the peculiarities of their EPR spectra. The analysis allows us to relate the resistive switching and charge accumulation observed in Hf0.5Zr0.5O2 nanopowders with the appearance of ferroelectric-like polar regions in the orthorhombic phase of the nanoparticles, which agrees with the calculations performed in the framework of the Landau-Ginzburg-Devonshire approach and density functional theory.",0 "Over the past 50 years, automated face recognition has evolved from rudimentary, handcrafted systems into sophisticated deep learning models that rival and often surpass human performance. This paper chronicles the history and technological progression of FR, from early geometric and statistical methods to modern deep neural architectures leveraging massive real and AI-generated datasets. We examine key innovations that have shaped the field, including developments in datasets, loss functions, neural network design, and feature fusion. We also analyze how the scale and diversity of training data influence model generalization, drawing connections between dataset growth and benchmark improvements. Recent advances have achieved remarkable milestones: state-of-the-art face verification systems now report False Negative Identification Rates of 0.13% against a 12.4 million gallery in NIST FRVT evaluations for 1:N visa-to-border matching. While recent advances have enabled remarkable accuracy in both high- and low-quality face scenarios, several open research problems remain. We outline critical challenges and promising directions for future face recognition research, including scalability, multi-modal fusion, synthetic identity generation, and explainable systems.",0 "Recent advancements in AI reasoning have driven substantial improvements across diverse tasks. 
A critical open question is whether these improvements also yield better knowledge transfer: the ability of models to communicate reasoning in ways humans can understand, apply, and learn from. To investigate this, we introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities, and conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding. Our findings reveal that although model benchmark performance correlates with collaborative outcomes, this relationship is notably inconsistent, featuring significant outliers, indicating that knowledge transfer requires dedicated optimization. Our analysis identifies behavioral and strategic factors mediating successful knowledge transfer. We release our code, dataset, and evaluation framework to support future work on communicatively aligned models.",0 "Building on existing work with Hyperblocks, which classify data using minimum and maximum bounds for each attribute, we focus on enhancing interpretability, decreasing training time, and reducing model complexity without sacrificing accuracy. This system allows subject matter experts (SMEs) to directly inspect and understand the model's decision logic without requiring extensive machine learning expertise. To reduce Hyperblock complexity while retaining performance, we introduce a suite of algorithms for Hyperblock simplification. These include removing redundant attributes, removing redundant blocks through overlap analysis, and creating disjunctive units. These methods eliminate unnecessary parameters, dramatically reducing model size without harming classification power. We increase robustness by introducing an interpretable fallback mechanism using k-Nearest Neighbor (k-NN) classifiers for points not covered by any block, ensuring complete data coverage while preserving model transparency. Our results demonstrate that interpretable models can scale to high-dimensional, large-volume datasets while maintaining competitive accuracy. On benchmark datasets such as WBC (9-D), we achieve strong predictive performance with significantly reduced complexity. On MNIST (784-D), our method continues to improve through tuning and simplification, showing promise as a transparent alternative to black-box models in domains where trust, clarity, and control are crucial.",0 "Graph Neural Networks (GNNs) achieve outstanding performance across graph-based tasks but remain difficult to interpret. In this paper, we revisit foundational assumptions underlying model-level explanation methods for GNNs, namely: (1) maximizing classification confidence yields representative explanations, (2) a single explanation suffices for an entire class of graphs, and (3) explanations are inherently trustworthy. We identify pitfalls resulting from these assumptions: methods that optimize for classification confidence may overlook partially learned patterns; topological diversity across graph subsets within the same class is often underrepresented; and explanations alone offer limited support for building user trust when applied to new datasets or models. This paper introduces GNNAnatomy, a distillation-based method designed to generate explanations while avoiding these pitfalls. 
GNNAnatomy first characterizes graph topology using graphlets, a set of fundamental substructures. We then train a transparent multilayer perceptron surrogate to directly approximate GNN predictions based on the graphlet representations. By analyzing the weights assigned to each graphlet, we identify the most discriminative topologies, which serve as GNN explanations. To account for structural diversity within a class, GNNAnatomy generates explanations at the required granularity through an interface that supports human-AI teaming. This interface helps users identify subsets of graphs where distinct critical substructures drive class differentiation, enabling multi-grained explanations. Additionally, by enabling exploration and linking explanations back to input graphs, the interface fosters greater transparency and trust. We evaluate GNNAnatomy on both synthetic and real-world datasets through quantitative metrics and qualitative comparisons with state-of-the-art model-level explainable GNN methods.",0 "Inference methods play an important role in eliciting the performance of large language models (LLMs). Currently, LLMs use inference methods that utilize multiple generated samples, which can be derived from Minimum Bayes Risk (MBR) Decoding. Previous studies have conducted empirical analyses to clarify the improvements in generation performance achieved by MBR decoding and have reported various observations. However, the theoretical underpinnings of these findings remain uncertain. To address this, we offer a new theoretical interpretation of MBR decoding from the perspective of bias-diversity decomposition. In this interpretation, the error in the quality estimation of hypotheses by MBR decoding is decomposed into two main factors: bias, which considers the closeness between the utility function and human evaluation, and diversity, which represents the variability in the quality estimation of the utility function. The theoretical analysis reveals the difficulty of simultaneously improving bias and diversity, confirming the validity of enhancing MBR decoding performance by increasing diversity. Furthermore, we reveal that diversity can explain one aspect of inference scaling laws that describe performance improvement by increasing sample size. Moreover, experiments across multiple NLP tasks yielded results consistent with these theoretical characteristics. Our code is available at https://github.com/naist-nlp/mbr-bias-diversity.",0 "Causal reasoning is a core component of intelligence. Large language models (LLMs) have shown impressive capabilities in generating human-like text, raising questions about whether their responses reflect true understanding or statistical patterns. We compared causal reasoning in humans and four LLMs using tasks based on collider graphs, rating the likelihood of a query variable occurring given evidence from other variables. LLMs' causal inferences ranged from often nonsensical (GPT-3.5) to human-like to often more normatively aligned than those of humans (GPT-4o, Gemini-Pro, and Claude). Computational model fitting showed that one reason for GPT-4o, Gemini-Pro, and Claude's superior performance is that they didn't exhibit the ""associative bias"" that plagues human causal reasoning. 
Nevertheless, even these LLMs did not fully capture subtler reasoning patterns associated with collider graphs, such as ""explaining away"".",0 "We introduce STSBench, a scenario-based framework to benchmark the holistic understanding of vision-language models (VLMs) for autonomous driving. The framework automatically mines pre-defined traffic scenarios from any dataset using ground-truth annotations, provides an intuitive user interface for efficient human verification, and generates multiple-choice questions for model evaluation. Applied to the NuScenes dataset, we present STSnu, the first benchmark that evaluates the spatio-temporal reasoning capabilities of VLMs based on comprehensive 3D perception. Existing benchmarks typically target off-the-shelf or fine-tuned VLMs for images or videos from a single viewpoint and focus on semantic tasks such as object recognition, dense captioning, risk assessment, or scene understanding. In contrast, STSnu evaluates driving expert VLMs for end-to-end driving, operating on videos from multi-view cameras or LiDAR. It specifically assesses their ability to reason about both ego-vehicle actions and complex interactions among traffic participants, a crucial capability for autonomous vehicles. The benchmark features 43 diverse scenarios spanning multiple views and frames, resulting in 971 human-verified multiple-choice questions. A thorough evaluation uncovers critical shortcomings in existing models' ability to reason about fundamental traffic dynamics in complex environments. These findings highlight the urgent need for architectural advances that explicitly model spatio-temporal reasoning. By addressing a core gap in spatio-temporal evaluation, STSBench enables the development of more robust and explainable VLMs for autonomous driving.",2 "A central challenge in economics and artificial intelligence is explaining how financial behaviors-such as credit, insurance, and trade-emerge without formal institutions. We argue that these functions are not products of institutional design, but structured extensions of a single behavioral substrate: reciprocity. Far from being a derived strategy, reciprocity served as the foundational logic of early human societies-governing the circulation of goods, regulation of obligation, and maintenance of long-term cooperation well before markets, money, or formal rules. Trade, commonly regarded as the origin of financial systems, is reframed here as the canonical form of reciprocity: simultaneous, symmetric, and partner-contingent. Building on this logic, we reconstruct four core financial functions-credit, insurance, token exchange, and investment-as expressions of the same underlying principle under varying conditions. By grounding financial behavior in minimal, simulateable dynamics of reciprocal interaction, this framework shifts the focus from institutional engineering to behavioral computation-offering a new foundation for modeling decentralized financial behavior in both human and artificial agents.",0 "Topological order is a promising basis for quantum error correction, a key milestone towards large-scale quantum computing. Floquet codes provide a dynamical scheme for this while also exhibiting Floquet-enriched topological order (FET) where anyons periodically undergo a measurement-induced automorphism that acts uniformly in space. We study disordered Floquet codes where automorphisms have a spatiotemporally heterogeneous distribution -- the automorphisms ""compete"". 
We characterize the effect of this competition, showing how key features of the purification dynamics of mixed codestates can be inferred from anyon and automorphism properties for any Abelian topological order. This perspective can explain the protection or measurement of logical information in a dynamic automorphism (DA) code when subjected to a noise model of missing measurements. We demonstrate this using a DA color code with perturbed measurement sequences. The framework of competing automorphisms captures essential features of Floquet codes and robustness to noise, and may elucidate key mechanisms involving topological order, automorphisms, and fault-tolerance.",0 "The growing application of artificial intelligence in sensitive domains has intensified the demand for systems that are not only accurate but also explainable and trustworthy. Although explainable AI (XAI) methods have proliferated, many do not consider the diverse audiences that interact with AI systems: from developers and domain experts to end-users and society. This paper addresses how trust in AI is influenced by the design and delivery of explanations and proposes a multilevel framework that aligns explanations with the epistemic, contextual, and ethical expectations of different stakeholders. The framework consists of three layers: algorithmic and domain-based, human-centered, and social explainability. We highlight the emerging role of Large Language Models (LLMs) in enhancing the social layer by generating accessible, natural language explanations. Through illustrative case studies, we demonstrate how this approach facilitates technical fidelity, user engagement, and societal accountability, reframing XAI as a dynamic, trust-building process.",0 "Transparency is a paramount concern in the medical field, prompting researchers to delve into the realm of explainable AI (XAI). Among these XAI methods, Concept Bottleneck Models (CBMs) aim to restrict the model's latent space to human-understandable high-level concepts by generating a conceptual layer for extracting conceptual features, which has drawn much attention recently. However, existing methods rely solely on concept features to determine the model's predictions, which overlook the intrinsic feature embeddings within medical images. To address this utility gap between the original models and concept-based models, we propose Vision Concept Transformer (VCT). Furthermore, despite their benefits, CBMs have been found to negatively impact model performance and fail to provide stable explanations when faced with input perturbations, which limits their application in the medical field. To address this faithfulness issue, this paper further proposes the Stable Vision Concept Transformer (SVCT) based on VCT, which leverages the vision transformer (ViT) as its backbone and incorporates a conceptual layer. SVCT employs conceptual features to enhance decision-making capabilities by fusing them with image features and ensures model faithfulness through the integration of Denoised Diffusion Smoothing. Comprehensive experiments on four medical datasets demonstrate that our VCT and SVCT maintain accuracy while remaining interpretable compared to baselines. Furthermore, even when subjected to perturbations, our SVCT model consistently provides faithful explanations, thus meeting the needs of the medical field.",0 "Natural Language Processing (NLP) has become a cornerstone in many critical sectors, including healthcare, finance, and customer relationship management. 
This is especially true with the development and use of advanced models such as GPT-based architectures and BERT, which are widely used in decision-making processes. However, the black-box nature of these advanced NLP models has created an urgent need for transparency and explainability. This review explores explainable NLP (XNLP) with a focus on its practical deployment and real-world applications, examining its implementation and the challenges faced in domain-specific contexts. The paper underscores the importance of explainability in NLP and provides a comprehensive perspective on how XNLP can be designed to meet the unique demands of various sectors, from healthcare's need for clear insights to finance's emphasis on fraud detection and risk assessment. Additionally, this review aims to bridge the knowledge gap in XNLP literature by offering a domain-specific exploration and discussing underrepresented areas such as real-world applicability, metric evaluation, and the role of human interaction in model assessment. The paper concludes by suggesting future research directions that could enhance the understanding and broader application of XNLP.",0 "With the increasing prevalence and deployment of Emotion AI-powered facial affect analysis (FAA) tools, concerns about the trustworthiness of these systems have become more prominent. This first workshop on ""Towards Trustworthy Facial Affect Analysis: Advancing Insights of Fairness, Explainability, and Safety (TrustFAA)"" aims to bring together researchers who are investigating different challenges in relation to trustworthiness-such as interpretability, uncertainty, biases, and privacy-across various facial affect analysis tasks, including macro/ micro-expression recognition, facial action unit detection, other corresponding applications such as pain and depression detection, as well as human-robot interaction and collaboration. In alignment with FG2025's emphasis on ethics, as demonstrated by the inclusion of an Ethical Impact Statement requirement for this year's submissions, this workshop supports FG2025's efforts by encouraging research, discussion and dialogue on trustworthy FAA.",0 "The recent development of Agentic AI systems, empowered by autonomous large language models (LLMs) agents with planning and tool-usage capabilities, enables new possibilities for the evolution of industrial automation and reduces the complexity introduced by Industry 4.0. This work proposes a conceptual framework that integrates Agentic AI with the intent-based paradigm, originally developed in network research, to simplify human-machine interaction (HMI) and better align automation systems with the human-centric, sustainable, and resilient principles of Industry 5.0. Based on the intent-based processing, the framework allows human operators to express high-level business or operational goals in natural language, which are decomposed into actionable components. These intents are broken into expectations, conditions, targets, context, and information that guide sub-agents equipped with specialized tools to execute domain-specific tasks. A proof of concept was implemented using the CMAPSS dataset and Google Agent Developer Kit (ADK), demonstrating the feasibility of intent decomposition, agent orchestration, and autonomous decision-making in predictive maintenance scenarios. 
The results confirm the potential of this approach to reduce technical barriers and enable scalable, intent-driven automation, despite data quality and explainability concerns.",0 "The honesty of large language models (LLMs) is a critical alignment challenge, especially as advanced systems with chain-of-thought (CoT) reasoning may strategically deceive humans. Unlike traditional honesty issues in LLMs, which can often be explained as a form of hallucination, those models' explicit thought paths enable us to study strategic deception--goal-driven, intentional misinformation where reasoning contradicts outputs. Using representation engineering, we systematically induce, detect, and control such deception in CoT-enabled LLMs, extracting ""deception vectors"" via Linear Artificial Tomography (LAT) for 89% detection accuracy. Through activation steering, we achieve a 40% success rate in eliciting context-appropriate deception without explicit prompts, unveiling the specific honesty-related issue of reasoning models and providing tools for trustworthy AI alignment.",0 "This paper presents NeutroSENSE, a neutrosophic-enhanced ensemble framework for interpretable intrusion detection in IoT environments. By integrating Random Forest, XGBoost, and Logistic Regression with neutrosophic logic, the system decomposes prediction confidence into truth (T), falsity (F), and indeterminacy (I) components, enabling uncertainty quantification and abstention. Predictions with high indeterminacy are flagged for review using both global and adaptive, class-specific thresholds. Evaluated on the IoT-CAD dataset, NeutroSENSE achieved 97% accuracy, while demonstrating that misclassified samples exhibit significantly higher indeterminacy (I = 0.62) than correct ones (I = 0.24). The use of indeterminacy as a proxy for uncertainty enables informed abstention and targeted review-particularly valuable in edge deployments. Figures and tables validate the correlation between I-scores and error likelihood, supporting more trustworthy, human-in-the-loop AI decisions. This work shows that neutrosophic logic enhances both accuracy and explainability, providing a practical foundation for trust-aware AI in edge and fog-based IoT security systems.",0 "Large language models (LLMs) have shown promise in software engineering, yet their effectiveness for binary analysis remains unexplored. We present the first comprehensive evaluation of commercial LLMs for assembly code deobfuscation. Testing seven state-of-the-art models against four obfuscation scenarios (bogus control flow, instruction substitution, control flow flattening, and their combination), we found striking performance variations--from autonomous deobfuscation to complete failure. We propose a theoretical framework based on four dimensions: Reasoning Depth, Pattern Recognition, Noise Filtering, and Context Integration, explaining these variations. Our analysis identifies five error patterns: predicate misinterpretation, structural mapping errors, control flow misinterpretation, arithmetic transformation errors, and constant propagation errors, revealing fundamental limitations in LLM code processing. We establish a three-tier resistance model: bogus control flow (low resistance), control flow flattening (moderate resistance), and instruction substitution/combined techniques (high resistance). Universal failure against combined techniques demonstrates that sophisticated obfuscation remains effective against advanced LLMs. 
Our findings suggest a human-AI collaboration paradigm where LLMs reduce expertise barriers for certain reverse engineering tasks while requiring human guidance for complex deobfuscation. This work provides a foundation for evaluating emerging capabilities and developing resistant obfuscation techniques.",0 "Graph Neural Networks (GNNs) are increasingly used in critical domains, where reliable explanations are vital for supporting human decision-making. However, the common practice of graph symmetrization discards directional information, leading to significant information loss and misleading explanations. Our analysis demonstrates how this practice compromises explanation fidelity. Through theoretical and empirical studies, we show that preserving directional semantics significantly improves explanation quality, ensuring more faithful insights for human decision-makers. These findings highlight the need for direction-aware GNN explainability in security-critical applications.",0 "The applicability ranges of macroscopic and microscopic electromagnetisms are opposite. While microscopic electromagnetism deals with point sources, singular fields, and discrete atomistic materials, macroscopic electromagnetism concerns smooth average distributions of sources, fields, and homogenized effective metamaterials. The Green's function method (GFM) involves finding the fields of point sources and applying the superposition principle to find the fields of distributed sources. When utilized to solve microscopic problems, GFM is perfectly within its applicability range. Extension of GFM to simple macroscopic problems is convenient, but not fully logically sound, since point sources and singular fields are technically not a subject of macroscopic electromagnetism. This explains the difficulty of both finding the Green's functions and applying the superposition principle in complex isotropy-broken media, which are very different from microscopic environments. In this manuscript, we lay out a path to the solution of the macroscopic Maxwell's equations for distributed sources bypassing GFM, by introducing an inverse approach and a method based on the Om-potential, which we describe here. To researchers of electromagnetism, this provides access to powerful analytical tools and a broad new space of solutions for Maxwell's equations.",0 "Gaze-referential inference--the ability to infer what others are looking at--is a critical component of a theory of mind that underpins natural human-AI interaction. In a controlled study, we evaluated this skill across 111 Vision Language Models (VLMs) using photos taken with manipulated difficulty and variability, comparing performance with that of human participants (N = 65), and analyzed behaviors using mixed-effects models. We found that 94 of the 111 VLMs failed to do better than random guessing, while humans achieved near-ceiling accuracy. VLMs even respond with each choice almost equally frequently. Are they randomly guessing? Although most VLMs struggle, when we zoom in on five of the top-tier VLMs with above-chance performance, we find that their performance declined with increasing task difficulty but varied only slightly across different prompts and scene objects. These behavioral features cannot be explained by considering them as random guessers. 
Instead, they likely use a combination of heuristics and guessing such that their performance is subject to the task difficulty but robust to perceptual variations. This suggests that VLMs, lacking gaze inference capability, have yet to become technologies that can naturally interact with humans, but the potential remains.",2 "The rise of deep learning challenges the longstanding scientific ideal of insight - the human capacity to understand phenomena by uncovering underlying mechanisms. In many modern applications, accurate predictions no longer require interpretable models, prompting debate about whether explainability is a realistic or even meaningful goal. From our perspective in physics, we examine this tension through a concrete case study: a physics-informed neural network (PINN) trained on a rarefied gas dynamics problem governed by the Boltzmann equation. Despite the system's clear structure and well-understood governing laws, the trained network's weights resemble Gaussian-distributed random matrices, with no evident trace of the physical principles involved. This suggests that deep learning and traditional simulation may follow distinct cognitive paths to the same outcome - one grounded in mechanistic insight, the other in statistical interpolation. Our findings raise critical questions about the limits of explainable AI and whether interpretability can - or should-remain a universal standard in artificial reasoning.",0 "Generative models, especially large language models (LLMs), have shown remarkable progress in producing text that appears human-like. However, they often exhibit patterns that make their output easier to detect than text written by humans. In this paper, we investigate how explainable AI (XAI) methods can be used to reduce the detectability of AI-generated text (AIGT) while also introducing a robust ensemble-based detection approach. We begin by training an ensemble classifier to distinguish AIGT from human-written text, then apply SHAP and LIME to identify tokens that most strongly influence its predictions. We propose four explainability-based token replacement strategies to modify these influential tokens. Our findings show that these token replacement approaches can significantly diminish a single classifier's ability to detect AIGT. However, our ensemble classifier maintains strong performance across multiple languages and domains, showing that a multi-model approach can mitigate the impact of token-level manipulations. These results show that XAI methods can make AIGT harder to detect by focusing on the most influential tokens. At the same time, they highlight the need for robust, ensemble-based detection strategies that can adapt to evolving approaches for hiding AIGT.",0 "High-performance computing (HPC) systems expose many interdependent configuration knobs that impact runtime, resource usage, power, and variability. Existing predictive tools model these outcomes, but do not support structured exploration, explanation, or guided reconfiguration. We present WANDER, a decision-support framework that synthesizes alternate configurations using counterfactual analysis aligned with user goals and constraints. We introduce a composite trade-off score that ranks suggestions based on prediction uncertainty, consistency between feature-target relationships using causal models, and similarity between feature distributions against historical data. 
To our knowledge, WANDER is the first such system to unify prediction, exploration, and explanation for HPC tuning under a common query interface. Across multiple datasets WANDER generates interpretable and trustworthy, human-readable alternatives that guide users to achieve their performance objectives.",0 "Hate speech detection is key to online content moderation, but current models struggle to generalise beyond their training data. This has been linked to dataset biases and the use of sentence-level labels, which fail to teach models the underlying structure of hate speech. In this work, we show that even when models are trained with more fine-grained, span-level annotations (e.g., ""artists"" is labeled as target and ""are parasites"" as dehumanising comparison), they struggle to disentangle the meaning of these labels from the surrounding context. As a result, combinations of expressions that deviate from those seen during training remain particularly difficult for models to detect. We investigate whether training on a dataset where expressions occur with equal frequency across all contexts can improve generalisation. To this end, we create U-PLEAD, a dataset of ~364,000 synthetic posts, along with a novel compositional generalisation benchmark of ~8,000 manually validated posts. Training on a combination of U-PLEAD and real data improves compositional generalisation while achieving state-of-the-art performance on the human-sourced PLEAD.",0 "Deep learning models have shown strong performance in classifying Alzheimer's disease (AD) from R2* maps, but their decision-making remains opaque, raising concerns about interpretability. Previous studies suggest biases in model decisions, necessitating further analysis. This study uses Layer-wise Relevance Propagation (LRP) and spectral clustering to explore classifier decision strategies across preprocessing and training configurations using R2* maps. We trained a 3D convolutional neural network on R2* maps, generating relevance heatmaps via LRP and applied spectral clustering to identify dominant patterns. t-Stochastic Neighbor Embedding (t-SNE) visualization was used to assess clustering structure. Spectral clustering revealed distinct decision patterns, with the relevance-guided model showing the clearest separation between AD and normal control (NC) cases. The t-SNE visualization confirmed that this model aligned heatmap groupings with the underlying subject groups. Our findings highlight the significant impact of preprocessing and training choices on deep learning models trained on R2* maps, even with similar performance metrics. Spectral clustering offers a structured method to identify classification strategy differences, emphasizing the importance of explainability in medical AI.",0 "Clinical trials are critical for advancing medical treatments but remain prohibitively expensive and time-consuming. Accurate prediction of clinical trial outcomes can significantly reduce research and development costs and accelerate drug discovery. While recent deep learning models have shown promise by leveraging unstructured data, their black-box nature, lack of interpretability, and vulnerability to label leakage limit their practical use in high-stakes biomedical contexts. In this work, we propose AutoCT, a novel framework that combines the reasoning capabilities of large language models with the explainability of classical machine learning. 
AutoCT autonomously generates, evaluates, and refines tabular features based on public information without human input. Our method uses Monte Carlo Tree Search to iteratively optimize predictive performance. Experimental results show that AutoCT performs on par with or better than SOTA methods on clinical trial prediction tasks within only a limited number of self-refinement iterations, establishing a new paradigm for scalable, interpretable, and cost-efficient clinical trial prediction.",2 "The experience and adoption of conversational search is tied to the accuracy and completeness of users' mental models -- their internal frameworks for understanding and predicting system behaviour. Thus, understanding these models can reveal areas for design interventions. Transparency is one such intervention which can improve system interpretability and enable mental model alignment. While past research has explored mental models of search engines, those of generative conversational search remain underexplored, even while the popularity of these systems soars. To address this, we conducted a study with 16 participants, who performed 4 search tasks using 4 conversational interfaces of varying transparency levels. Our analysis revealed that most user mental models were too abstract to support users in explaining individual search instances. These results suggest that 1) mental models may pose a barrier to appropriate trust in conversational search, and 2) hybrid web-conversational search is a promising novel direction for future search interface design.",2 "Traditional Business Process Management (BPM) struggles with rigidity, opacity, and scalability in dynamic environments while emerging Large Language Models (LLMs) present transformative opportunities alongside risks. This paper explores four real-world use cases that demonstrate how LLMs, augmented with trustworthy process intelligence, redefine process modeling, prediction, and automation. Grounded in early-stage research projects with industrial partners, the work spans manufacturing, modeling, life-science, and design processes, addressing domain-specific challenges through human-AI collaboration. In manufacturing, an LLM-driven framework integrates uncertainty-aware explainable Machine Learning (ML) with interactive dialogues, transforming opaque predictions into auditable workflows. For process modeling, conversational interfaces democratize BPMN design. Pharmacovigilance agents automate drug safety monitoring via knowledge-graph-augmented LLMs. Finally, sustainable textile design employs multi-agent systems to navigate regulatory and environmental trade-offs. We intend to examine tensions between transparency and efficiency, generalization and specialization, and human agency versus automation. By mapping these trade-offs, we advocate for context-sensitive integration prioritizing domain needs, stakeholder values, and iterative human-in-the-loop workflows over universal solutions. This work provides actionable insights for researchers and practitioners aiming to operationalize LLMs in critical BPM environments.",0 "Early-stage startup investment is a high-risk endeavor characterized by scarce data and uncertain outcomes. Traditional machine learning approaches often require large, labeled datasets and extensive fine-tuning, yet remain opaque and difficult for domain experts to interpret or improve. 
In this paper, we propose a transparent and data-efficient investment decision framework powered by memory-augmented large language models (LLMs) using in-context learning (ICL). Central to our method is a natural language policy embedded directly into the LLM prompt, enabling the model to apply explicit reasoning patterns and allowing human experts to easily interpret, audit, and iteratively refine the logic. We introduce a lightweight training process that combines few-shot learning with an in-context learning loop, enabling the LLM to update its decision policy iteratively based on structured feedback. With only minimal supervision and no gradient-based optimization, our system predicts startup success far more accurately than existing benchmarks. It is over 20x more precise than random chance, which succeeds 1.9% of the time. It is also 7.1x more precise than the typical 5.6% success rate of top-tier venture capital (VC) firms.",0 "Concept-based explanations work by mapping complex model computations to human-understandable concepts. Evaluating such explanations is very difficult, as it includes not only the quality of the induced space of possible concepts but also how effectively the chosen concepts are communicated to users. Existing evaluation metrics often focus solely on the former, neglecting the latter. We introduce an evaluation framework for measuring concept explanations via automated simulatability: a simulator's ability to predict the explained model's outputs based on the provided explanations. This approach accounts for both the concept space and its interpretation in an end-to-end evaluation. Human studies for simulatability are notoriously difficult to enact, particularly at the scale of a wide, comprehensive empirical evaluation (which is the subject of this work). We propose using large language models (LLMs) as simulators to approximate the evaluation and report various analyses to make such approximations reliable. Our method allows for scalable and consistent evaluation across various models and datasets. We report a comprehensive empirical evaluation using this framework and show that LLMs provide consistent rankings of explanation methods. Code available at https://github.com/AnonymousConSim/ConSim.",0 "Amid ongoing policy and managerial debates on keeping humans in the loop of AI decision-making, we investigate whether human involvement in AI-based service production benefits downstream consumers. Partnering with a large savings bank in Europe, we produced pure AI and human-AI collaborative investment advice, passed it to customers, and examined their advice-taking in a field experiment. On the production side, contrary to concerns that humans might inefficiently override AI output, we find that giving a human banker the final say over AI-generated financial advice does not compromise its quality. More importantly, on the consumption side, customers are more likely to follow investment advice from the human-AI collaboration compared to pure AI, especially when facing riskier decisions. In our setting, this increased reliance leads to higher material welfare for consumers. Additional analyses from the field experiment and an online experiment show that the persuasive power of human-AI advice cannot be explained by consumers' beliefs about enhanced advice quality due to human-AI complementarities. Instead, the benefit stems from human involvement acting as a peripheral cue that increases the advice's affective appeal. 
Our findings suggest that regulations and guidelines should adopt a consumer-centered approach by fostering service environments in which humans and AI systems can collaborate to improve consumer outcomes. These insights are relevant for managers designing AI-based services and for policymakers advocating for human oversight in AI systems.",0 "The supply of politicians affects the quality of democratic institutions. Yet little is known about the economic motivations that drive individuals into politics. This paper examines how experiencing a job loss affects individuals' decisions to enter political life and explores its implications for political selection. Using highly granular administrative data linking individual records of political participation with comprehensive employer-employee data for all formal workers in Brazil, and leveraging mass layoffs for causal identification, we find that job loss significantly increases the likelihood of joining a political party and running for local office. Layoff-induced candidates are positively selected on various competence measures, suggesting that economic shocks improve the quality of political entrants. Further, we observe that the increase in candidacies is more pronounced among laid-off individuals with greater financial incentives from holding office and higher predicted income losses. Finally, using a regression discontinuity design, we find that eligibility for unemployment benefits increases the likelihood of becoming a party member and running for local office. These results are consistent with the reduction in private-sector opportunity costs and the increased time resources explaining the rise in political entry.",0 "Ensuring the safety of reinforcement learning (RL) policies in high-stakes environments requires not only formal verification but also interpretability and targeted falsification. While model checking provides formal guarantees, its effectiveness is limited by abstraction quality and the completeness of the underlying trajectory dataset. We propose a hybrid framework that integrates (1) explainability, (2) model checking, and (3) risk-guided falsification to achieve both rigor and coverage. Our approach begins by constructing a human-interpretable abstraction of the RL policy using Comprehensible Abstract Policy Summarization (CAPS). This abstract graph, derived from offline trajectories, is verifier-friendly and semantically meaningful, and can be used as input to the Storm probabilistic model checker to verify satisfaction of temporal safety specifications. If the model checker identifies a violation, it returns an interpretable counterexample trace showing how the policy fails the safety requirement. However, if no violation is detected, we cannot conclude satisfaction due to potential limitations in the abstraction and the coverage of the offline dataset. In such cases, we estimate associated risk during model checking to guide a falsification strategy that prioritizes searching in high-risk states and regions underrepresented in the trajectory dataset. We further provide PAC-style guarantees on the likelihood of uncovering undetected violations. 
Finally, we incorporate a lightweight safety shield that switches to a fallback policy at runtime when such a risk exceeds a threshold, facilitating failure mitigation without retraining.",0 "Recent studies comparing AI-generated and human-authored literary texts have produced conflicting results: some suggest AI already surpasses human quality, while others argue it still falls short. We start from the hypothesis that such divergences can be largely explained by genuine differences in how readers interpret and value literature, rather than by an intrinsic quality of the texts evaluated. Using five public datasets (1,471 stories, 101 annotators including critics, students, and lay readers), we (i) extract 17 reference-less textual features (e.g., coherence, emotional variance, average sentence length...); (ii) model individual reader preferences, deriving feature importance vectors that reflect their textual priorities; and (iii) analyze these vectors in a shared ""preference space"". Reader vectors cluster into two profiles: 'surface-focused readers' (mainly non-experts), who prioritize readability and textual richness; and 'holistic readers' (mainly experts), who value thematic development, rhetorical variety, and sentiment dynamics. Our results quantitatively explain how measurements of literary quality are a function of how text features align with each reader's preferences. These findings advocate for reader-sensitive evaluation frameworks in the field of creative text generation.",2 "The mapping from sound to neural activity that underlies hearing is highly non-linear. The first few stages of this mapping in the cochlea have been modelled successfully, with biophysical models built by hand and, more recently, with DNN models trained on datasets simulated by biophysical models. Modelling the auditory brain has been a challenge because central auditory processing is too complex for models to be built by hand, and datasets for training DNN models directly have not been available. Recent work has taken advantage of large-scale high resolution neural recordings from the auditory midbrain to build a DNN model of normal hearing with great success. But this model assumes that auditory processing is the same in all brains, and therefore it cannot capture the widely varying effects of hearing loss. We propose a novel variational-conditional model to learn to encode the space of hearing loss directly from recordings of neural activity in the auditory midbrain of healthy and noise exposed animals. With hearing loss parametrised by only 6 free parameters per animal, our model accurately predicts 62\% of the explainable variance in neural responses from normal hearing animals and 68% for hearing impaired animals, within a few percentage points of state of the art animal specific models. We demonstrate that the model can be used to simulate realistic activity from out of sample animals by fitting only the learned conditioning parameters with Bayesian optimisation, achieving crossentropy loss within 2% of the optimum in 15-30 iterations. Including more animals in the training data slightly improved the performance on unseen animals. This model will enable future development of parametrised hearing loss compensation models trained to directly restore normal neural coding in hearing impaired brains, which can be quickly fitted for a new user by human in the loop optimisation.",0 "Current AI systems rely on opaque reasoning processes that hinder human oversight and collaborative potential. 
Conventional explainable AI approaches offer post-hoc justifications and often fail to establish genuine symbiotic collaboration. In this paper, the Symbiotic Epistemology is presented as a philosophical foundation for human-AI cognitive partnerships. Unlike frameworks that treat AI as a mere tool or replacement, symbiotic epistemology positions AI as a reasoning partner, fostering calibrated trust by aligning human confidence with AI reliability through explicit reasoning patterns and confidence assessments. SynLang (Symbiotic Syntactic Language) is introduced as a formal protocol for transparent human-AI collaboration. The framework is empirically validated through actual human-AI dialogues demonstrating AI's adaptation to structured reasoning protocols and successful metacognitive intervention. The protocol defines two complementary mechanisms: TRACE for high-level reasoning patterns and TRACE_FE for detailed factor explanations. It also integrates confidence quantification, declarative control over AI behavior, and context inheritance for multi-agent coordination. By structuring communication and embedding confidence-calibrated transparency, SynLang, together with symbiotic epistemology, enables AI systems that enhance human intelligence, preserve human agency, and uphold ethical accountability in collaborative decision-making. Through dual-level transparency, beginning with high-level reasoning patterns and progressing to granular explanations, the protocol facilitates rapid comprehension and supports thorough verification of AI decision-making.",0 "Interest in explainable artificial intelligence (XAI) is surging. Prior research has primarily focused on systems' ability to generate explanations, often guided by researchers' intuitions rather than end-users' needs. Unfortunately, such approaches have not yielded favorable outcomes when compared to a black-box baseline (i.e., no explanation). To address this gap, this paper advocates a human-centered approach that shifts focus to air traffic controllers (ATCOs) by asking a fundamental yet overlooked question: Do ATCOs need explanations, and if so, why? Insights from air traffic management (ATM), human-computer interaction, and the social sciences were synthesized to provide a holistic understanding of XAI challenges and opportunities in ATM. Evaluating 11 ATM operational goals revealed a clear need for explanations when ATCOs aim to document decisions and rationales for future reference or report generation. Conversely, ATCOs are less likely to seek them when their conflict resolution approach aligns with the artificial intelligence (AI) advisory. While this is a preliminary study, the findings are expected to inspire broader and deeper inquiries into the design of ATCO-centric XAI systems, paving the way for more effective human-AI interaction in ATM.",0 "When comparing the linguistic capabilities of language models (LMs) with humans using LM probabilities, factors such as the length of the sequence and the unigram frequency of lexical items have a significant effect on LM probabilities in ways that humans are largely robust to. Prior works comparing LM and human acceptability judgments treat these effects uniformly across models, making a strong assumption that models require the same degree of adjustment to control for length and unigram frequency effects. 
We propose MORCELA, a new linking theory between LM scores and acceptability judgments where the optimal level of adjustment for these effects is estimated from data via learned parameters for length and unigram frequency. We first show that MORCELA outperforms a commonly used linking theory for acceptability - SLOR (Pauls and Klein, 2012; Lau et al. 2017) - across two families of transformer LMs (Pythia and OPT). Furthermore, we demonstrate that the assumed degrees of adjustment in SLOR for length and unigram frequency overcorrect for these confounds, and that larger models require a lower relative degree of adjustment for unigram frequency, though a significant amount of adjustment is still necessary for all models. Finally, our subsequent analysis shows that larger LMs' lower susceptibility to frequency effects can be explained by an ability to better predict rarer words in context.",0 "Visualizations, alongside summary tables and participant-level listings, are essential for presenting clinical trial results transparently and comprehensively. When reporting the results of clinical trials, the goal of visualization is to communicate the results of specific pre-planned analyses with visualizations that are tailored to the endpoint and analysis being reported. We consider the visualization of hierarchical composite endpoints (HCEs), which combine multiple time-to-event outcomes, ordered according to a given prioritization and the timing of events, with a single continuous outcome. An illustrative example is the kidney disease progression HCE with a straightforward structure of the composite of clinical events of death and kidney failure and declines in eGFR as surrogates for kidney failure. The HCEs are analyzed by win statistics and visualized using maraca plots. Although maraca plots are very granular and allow for a detailed presentation of the distribution of the HCE, researchers are still tasked with explaining the magnitude of the treatment effect estimated by win odds. In explaining the magnitude of the treatment effect, we propose a comprehensive visualization approach. In the clinical trial design stage, we propose sunset plots to visualize all possible treatment effects that can be observed based on the treatment effects on components. In reporting the results of the trial, we recommend maraca plots as the primary method of visualizing the results, while the 2-d mosaic plot with the ordinal dominance graph directly corresponds to the win odds as the treatment effect measure and can be used as the primary analysis-specific visualization method. Finally, we propose the Dustin plot to visualize the supportive analysis of the components, added cumulatively from the event of highest priority, to assess the consistency of the treatment effect across all outcomes.",2 "While the increased integration of AI technologies into interactive systems enables them to solve an equally increasing number of tasks, the black box problem of AI models continues to spread throughout the interactive system as a whole. Explainable AI (XAI) techniques can make AI models more accessible by employing post-hoc methods or transitioning to inherently interpretable models. While this makes individual AI models clearer, the overarching system architecture remains opaque. To this end, we propose an approach to represent interactive systems as sequences of structural building blocks, such as AI models and control mechanisms grounded in the literature. 
These can then be explained through accompanying visual building blocks, such as XAI techniques. The flow and APIs of the structural building blocks form an explicit overview of the system. This serves as a communication basis for both humans and automated agents like LLMs, aligning human and machine interpretability of AI models. We discuss a selection of building blocks and concretize our flow-based approach in an architecture and accompanying prototype interactive system.",0 "Concept-based models are an emerging paradigm in deep learning that constrains the inference process to operate through human-interpretable variables, facilitating explainability and human interaction. However, these architectures, on par with popular opaque neural models, fail to account for the true causal mechanisms underlying the target phenomena represented in the data. This hampers their ability to support causal reasoning tasks, limits out-of-distribution generalization, and hinders the implementation of fairness constraints. To overcome these issues, we propose Causally reliable Concept Bottleneck Models (C$^2$BMs), a class of concept-based architectures that enforce reasoning through a bottleneck of concepts structured according to a model of the real-world causal mechanisms. We also introduce a pipeline to automatically learn this structure from observational data and unstructured background knowledge (e.g., scientific literature). Experimental evidence suggests that C$^2$BMs are more interpretable, causally reliable, and improve responsiveness to interventions w.r.t. standard opaque and concept-based models, while maintaining their accuracy.",0 "As AI regulations around the world intensify their focus on system safety, contestability has become a mandatory, yet ill-defined, safeguard. In XAI, ""contestability"" remains an empty promise: no formal definition exists, no algorithm guarantees it, and practitioners lack concrete guidance to satisfy regulatory requirements. Grounded in a systematic literature review, this paper presents the first rigorous formal definition of contestability in explainable AI, directly aligned with stakeholder requirements and regulatory mandates. We introduce a modular framework of by-design and post-hoc mechanisms spanning human-centered interfaces, technical architectures, legal processes, and organizational workflows. To operationalize our framework, we propose the Contestability Assessment Scale, a composite metric built on more than twenty quantitative criteria. Through multiple case studies across diverse application domains, we reveal where state-of-the-art systems fall short and show how our framework drives targeted improvements. By converting contestability from regulatory theory into a practical framework, our work equips practitioners with the tools to embed genuine recourse and accountability into AI systems.",0 "Large language models (LLMs) have demonstrated impressive performance on natural language tasks, but their decision-making processes remain largely opaque. Existing explanation methods either suffer from limited faithfulness to the model's reasoning or produce explanations that humans find difficult to understand. To address these challenges, we propose \textbf{ProtoSurE}, a novel prototype-based surrogate framework that provides faithful and human-understandable explanations for LLMs. ProtoSurE trains an interpretable-by-design surrogate model that aligns with the target LLM while utilizing sentence-level prototypes as human-understandable concepts. 
Extensive experiments show that ProtoSurE consistently outperforms SOTA explanation methods across diverse LLMs and datasets. Importantly, ProtoSurE demonstrates strong data efficiency, requiring relatively few training examples to achieve good performance, making it practical for real-world applications.",0 "Additive models enjoy the flexibility of nonlinear models while still being readily understandable to humans. By contrast, other nonlinear models, which involve interactions between features, are not only harder to fit but also substantially more complicated to explain. Guided by the principle of parsimony, a data analyst therefore may naturally be reluctant to move beyond an additive model unless it is truly warranted. To put this principle of interaction reluctance into practice, we formulate the problem as a hypothesis test with a fitted sparse additive model (SPAM) serving as the null. Because our hypotheses on interaction effects are formed after fitting a SPAM to the data, we adopt a selective inference approach to construct p-values that properly account for this data adaptivity. Our approach makes use of external randomization to obtain the distribution of test statistics conditional on the SPAM fit, allowing us to derive valid p-values, corrected for the over-optimism introduced by the data-adaptive process prior to the test. Through experiments on simulated and real data, we illustrate that--even with small amounts of external randomization--this rigorous modeling approach enjoys considerable advantages over naive methods and data splitting.",0 "Sparse dictionary learning (and, in particular, sparse autoencoders) attempts to learn a set of human-understandable concepts that can explain variation on an abstract space. A basic limitation of this approach is that it neither exploits nor represents the semantic relationships between the learned concepts. In this paper, we introduce a modified SAE architecture that explicitly models a semantic hierarchy of concepts. Application of this architecture to the internal representations of large language models shows both that semantic hierarchy can be learned, and that doing so improves both reconstruction and interpretability. Additionally, the architecture leads to significant improvements in computational efficiency.",0 "Evaluating tables qualitatively and quantitatively poses a significant challenge, as standard metrics often overlook subtle structural and content-level discrepancies. To address this, we propose a rubric-based evaluation framework that integrates multi-level structural descriptors with fine-grained contextual signals, enabling more precise and consistent table comparison. Building on this, we introduce TabXEval, an eXhaustive and eXplainable two-phase evaluation framework. TabXEval first aligns reference and predicted tables structurally via TabAlign, then performs semantic and syntactic comparison using TabCompare, offering interpretable and granular feedback. We evaluate TabXEval on TabXBench, a diverse, multi-domain benchmark featuring realistic table perturbations and human annotations. A sensitivity-specificity analysis further demonstrates the robustness and explainability of TabXEval across varied table tasks. Code and data are available at https://coral-lab-asu.github.io/tabxeval/",0 "In the symmetric teleparallel gravity framework, we study the cosmic dynamics of the universe with dark energy equation of state (EoS) parameter having non-linear forms. 
The non-metricity scalar induced by the dark energy EoS parameter evolves with time and explains the physically reasonable transiting universe evolution in a consistent way. A comparative study has been presented to describe the ability of these models to fit the observational data. By using Bayesian methods, we constrain the model parameters with the supernovae Ia (SNe Ia) and expansion rate data. We show that the expansion rate solutions may consistently describe the universe evolution based on cosmological indicators such as the effective EoS parameter, energy density, pressure, current age and the statefinder diagnostic. One may have either the quintom scenario or future deceleration in these models, subject to the observational constraints.",0 "This paper critically re-evaluates LLMs' role in causal discovery and argues against their direct involvement in determining causal relationships. We demonstrate that LLMs' autoregressive, correlation-driven modeling inherently lacks the theoretical grounding for causal reasoning and introduces unreliability when used as priors in causal discovery algorithms. Through empirical studies, we expose the limitations of existing LLM-based methods and reveal that deliberate prompt engineering (e.g., injecting ground-truth knowledge) could overstate their performance, helping to explain the consistently favorable results reported in much of the current literature. Based on these findings, we strictly confine LLMs' role to a non-decisional auxiliary capacity: LLMs should not participate in determining the existence or directionality of causal relationships, but can assist the search process for causal graphs (e.g., LLM-based heuristic search). Experiments across various settings confirm that, by strictly isolating LLMs from causal decision-making, LLM-guided heuristic search can accelerate convergence and outperform both traditional and LLM-based methods in causal structure learning. We conclude with a call for the community to shift focus from naively applying LLMs to developing specialized models and training methods that respect the core principles of causal discovery.",0 "This article presents a structured framework for Human-AI collaboration in Security Operations Centers (SOCs), integrating AI autonomy, trust calibration, and Human-in-the-loop decision making. Existing frameworks in SOCs often focus narrowly on automation, lacking systematic structures to manage human oversight, trust calibration, and scalable autonomy with AI. Many assume static or binary autonomy settings, failing to account for the varied complexity, criticality, and risk across SOC tasks involving human-AI collaboration. To address these limitations, we propose a novel tiered autonomy framework grounded in five levels of AI autonomy from manual to fully autonomous, mapped to Human-in-the-Loop (HITL) roles and task-specific trust thresholds. This enables adaptive and explainable AI integration across core SOC functions, including monitoring, protection, threat detection, alert triage, and incident response. The proposed framework differentiates itself from previous research by creating formal connections between autonomy, trust, and HITL across various SOC levels, which allows for adaptive task distribution according to operational complexity and associated risks. The framework is exemplified through a simulated cyber range that features the cybersecurity AI-Avatar, a fine-tuned LLM-based SOC assistant. 
The AI-Avatar case study illustrates human-AI collaboration for SOC tasks, reducing alert fatigue, enhancing response coordination, and strategically calibrating trust. This research systematically presents both the theoretical and practical aspects and feasibility of designing next-generation cognitive SOCs that leverage AI not to replace but to enhance human decision-making.",0 "Multiple choice question answering (MCQA) is popular for LLM evaluation due to its simplicity and human-like testing, but we argue for its reform. We first reveal flaws in MCQA's format, as it struggles to: 1) test generation/subjectivity; 2) match LLM use cases; and 3) fully test knowledge. We instead advocate for generative formats based on human testing, where LLMs construct and explain answers, better capturing user needs and knowledge while remaining easy to score. We then show even when MCQA is a useful format, its datasets suffer from: leakage; unanswerability; shortcuts; and saturation. In each issue, we give fixes from education, like rubrics to guide MCQ writing; scoring methods to bridle guessing; and Item Response Theory to build harder MCQs. Lastly, we discuss LLM errors in MCQA, robustness, biases, and unfaithful explanations, showing how our prior solutions better measure or address these issues. While we do not need to desert MCQA, we encourage more efforts in refining the task based on educational testing, advancing evaluations.",0 "Cybersecurity organizations are adapting to GenAI integration through modified frameworks and hybrid operational processes, with success influenced by existing security maturity, regulatory requirements, and investments in human capital and infrastructure. This qualitative research employs systematic document analysis and comparative case study methodology to examine how cybersecurity organizations adapt their threat modeling frameworks and operational processes to address generative artificial intelligence integration. Through examination of 25 studies from 2022 to 2025, the research documents substantial transformation in organizational approaches to threat modeling, moving from traditional signature-based systems toward frameworks incorporating artificial intelligence capabilities. The research identifies three primary adaptation patterns: Large Language Model integration for security applications, GenAI frameworks for risk detection and response automation, and AI/ML integration for threat hunting. Organizations with mature security infrastructures, particularly in finance and critical infrastructure sectors, demonstrate higher readiness through structured governance approaches, dedicated AI teams, and robust incident response processes. Organizations achieve successful GenAI integration when they maintain appropriate human oversight of automated systems, address data quality concerns and explainability requirements, and establish governance frameworks tailored to their specific sectors. Organizations encounter ongoing difficulties with privacy protection, bias reduction, personnel training, and defending against adversarial attacks. This work advances understanding of how organizations adopt innovative technologies in high-stakes environments and offers actionable insights for cybersecurity professionals implementing GenAI systems.",0 "Recently, LLM agents have made rapid progress in improving their programming capabilities. 
However, existing benchmarks lack the ability to automatically evaluate code generation from the user's perspective, and they offer little explainability of LLM agents' results. Thus, we introduce ProjectEval, a new benchmark for automated evaluation of LLM agents' project-level code generation through simulated user interaction. ProjectEval is constructed by an LLM with human review. It provides inputs at three levels of detail, ranging from natural language descriptions to code skeletons. ProjectEval evaluates the generated projects both by simulating user interaction during execution and by measuring code similarity with existing objective indicators. Through ProjectEval, we find that systematic engineering of project code, overall understanding of the project, and comprehensive analysis capability are the keys for LLM agents to deliver practical projects. Our findings and benchmark provide valuable insights for developing more effective programming agents that can be deployed in future real-world production.",0 "Dysarthria, a motor speech disorder, affects intelligibility and requires targeted interventions for effective communication. In this work, we investigate automated mispronunciation feedback by collecting a dysarthric speech dataset from six speakers reading two passages, annotated by a speech therapist with temporal markers and mispronunciation descriptions. We design a three-stage framework for explainable mispronunciation evaluation: (1) overall clarity scoring, (2) mispronunciation localization, and (3) mispronunciation type classification. We systematically analyze pretrained Automatic Speech Recognition (ASR) models in each stage, assessing their effectiveness in dysarthric speech evaluation (Code available at: https://github.com/augmented-human-lab/interspeech25_speechtherapy, Supplementary webpage: https://apps.ahlab.org/interspeech25_speechtherapy/). Our findings offer clinically relevant insights for automating actionable feedback for pronunciation assessment, which could enable independent practice for patients and help therapists deliver more effective interventions.",2 "Recent advancements in table-based reasoning have expanded beyond factoid-level QA to address insight-level tasks, where systems should synthesize implicit knowledge in the table to provide explainable analyses. Although effective, existing studies remain confined to scenarios where a single gold table is given alongside the user query, failing to address cases where users seek comprehensive insights from multiple unknown tables. To bridge these gaps, we propose MT-RAIG Bench, designed to evaluate systems on Retrieval-Augmented Insight Generation over Multiple Tables. Additionally, to tackle the suboptimality of existing automatic evaluation methods in the table domain, we further introduce a fine-grained evaluation framework, MT-RAIG Eval, which achieves better alignment with human quality judgments on the generated insights. We conduct extensive experiments and reveal that even frontier LLMs still struggle with complex multi-table reasoning, establishing our MT-RAIG Bench as a challenging testbed for future research.",0 "Evaluating personalized text generated by large language models (LLMs) is challenging, as only the LLM user, i.e., prompt author, can reliably assess the output, but re-engaging the same individuals across studies is infeasible. This paper addresses the challenge of evaluating personalized text generation by introducing ExPerT, an explainable reference-based evaluation framework. 
ExPerT leverages an LLM to extract atomic aspects and their evidence from the generated and reference texts, match the aspects, and evaluate their alignment based on content and writing style -- two key attributes in personalized text generation. Additionally, ExPerT generates detailed, fine-grained explanations for every step of the evaluation process, enhancing transparency and interpretability. Our experiments demonstrate that ExPerT achieves a 7.2% relative improvement in alignment with human judgments compared to state-of-the-art text generation evaluation methods. Furthermore, human evaluators rated the usability of ExPerT's explanations at 4.7 out of 5, highlighting its effectiveness in making evaluation decisions more interpretable.",0 "People with disabilities (PwD) regularly encounter ableist hate and microaggressions online. These spaces are generally moderated by machine learning models, but little is known about how effectively AI models identify ableist speech and how well their judgments align with PwD. To investigate this, we curated a first-of-its-kind dataset of 200 social media comments targeted towards PwD, and prompted state-of-the-art AI models (i.e., Toxicity Classifiers, LLMs) to score toxicity and ableism for each comment, and explain their reasoning. Then, we recruited 190 participants to similarly rate and explain the harm, and evaluate LLM explanations. Our mixed-methods analysis highlighted a major disconnect: AI underestimated toxicity compared to PwD ratings, while its ableism assessments were sporadic and varied. Although LLMs identified some biases, their explanations were flawed--they lacked nuance, made incorrect assumptions, and appeared judgmental instead of educational. Going forward, we discuss challenges and opportunities in designing moderation systems for ableism, and advocate for the involvement of intersectional disabled perspectives in AI.",2 "Large language models (LLMs) have been extensively evaluated on medical question answering tasks based on licensing exams. However, real-world evaluations often depend on costly human annotators, and existing benchmarks tend to focus on isolated tasks that rarely capture the clinical reasoning or full workflow underlying medical decisions. In this paper, we introduce ER-Reason, a benchmark designed to evaluate LLM-based clinical reasoning and decision-making in the emergency room (ER)--a high-stakes setting where clinicians make rapid, consequential decisions across diverse patient presentations and medical specialties under time pressure. ER-Reason includes data from 3,984 patients, encompassing 25,174 de-identified longitudinal clinical notes spanning discharge summaries, progress notes, history and physical exams, consults, echocardiography reports, imaging notes, and ER provider documentation. The benchmark includes evaluation tasks that span key stages of the ER workflow: triage intake, initial assessment, treatment selection, disposition planning, and final diagnosis--each structured to reflect core clinical reasoning processes such as differential diagnosis via rule-out reasoning. We also collected 72 full physician-authored rationales explaining reasoning processes that mimic the teaching process used in residency training, and are typically absent from ER documentation. 
Evaluations of state-of-the-art LLMs on ER-Reason reveal a gap between LLM-generated and clinician-authored clinical reasoning for ER decisions, highlighting the need for future research to bridge this divide.",2 "Explainable artificial intelligence (XAI) is motivated by the problem of making AI predictions understandable, transparent, and responsible, as AI becomes increasingly impactful in society and high-stakes domains. The evaluation and optimization criteria of XAI are gatekeepers for XAI algorithms to achieve their expected goals and should withstand rigorous inspection. To improve the scientific rigor of XAI, we conduct a critical examination of a common XAI criterion: plausibility. Plausibility assesses how convincing the AI explanation is to humans, and is usually quantified by metrics of feature localization or feature correlation. Our examination shows that plausibility is not a valid measure of explainability and that human explanations are not the ground truth for XAI, because treating them as such ignores the necessary assumptions underpinning an explanation. Our examination further reveals the consequences of using plausibility as an XAI criterion, including increasing misleading explanations that manipulate users, deteriorating users' trust in the AI system, undermining human autonomy, being unable to achieve complementary human-AI task performance, and abandoning other possible approaches to enhancing understandability. Due to the invalidity of the measurements and the ethical issues involved, this position paper argues that the community should stop using plausibility as a criterion for the evaluation and optimization of XAI algorithms. We also delineate new research approaches to improve XAI in trustworthiness, understandability, and utility to users, including complementary human-AI task performance.",0 "The rapid rise of Generative AI (GenAI) tools has sparked debate over their role in complementing or replacing human workers across job contexts. We present a mathematical framework that models jobs, workers, and worker-job fit, introducing a novel decomposition of skills into decision-level and action-level subskills to reflect the complementary strengths of humans and GenAI. We analyze how changes in subskill abilities affect job success, identifying conditions for sharp transitions in success probability. We also establish sufficient conditions under which combining workers with complementary subskills significantly outperforms relying on a single worker. This explains phenomena such as productivity compression, where GenAI assistance yields larger gains for lower-skilled workers. We demonstrate the framework's practicality using data from O*NET and Big-Bench Lite, aligning real-world data with our model via subskill-division methods. Our results highlight when and how GenAI complements human skills, rather than replacing them.",0 "The alignment between humans and machines is a critical challenge in artificial intelligence today. Reinforcement learning, which aims to maximize a reward function, is particularly vulnerable to the risks associated with poorly designed reward functions. Recent advances have shown that Large Language Models (LLMs) used for reward generation can outperform humans in this context. We introduce VIRAL, a pipeline for generating and refining reward functions through the use of multi-modal LLMs. VIRAL autonomously creates and interactively improves reward functions based on a given environment and a goal prompt or annotated image. 
The refinement process can incorporate human feedback or be guided by a description generated by a video LLM, which explains the agent's policy in video form. We evaluated VIRAL in five Gymnasium environments, demonstrating that it accelerates the learning of new behaviors while ensuring improved alignment with user intent. The source code and demo video are available at: https://github.com/VIRAL-UCBL1/VIRAL and https://youtu.be/Hqo82CxVT38.",0 "Large Language Models (LLMs) have shown remarkable progress across domains, yet their ability to perform inductive reasoning - inferring latent rules from sparse examples - remains limited. It is often assumed that chain-of-thought (CoT) prompting, as used in Large Reasoning Models (LRMs), enhances such reasoning. We investigate this assumption by creating four controlled, diagnostic game-based tasks - chess, Texas Hold'em, dice games, and blackjack - with hidden human-defined rules. We find that CoT reasoning can degrade inductive performance, with LRMs often underperforming their non-reasoning counterparts. To explain this, we present a theoretical framework that reveals how reasoning steps can amplify error through three failure modes: incorrect sub-task decomposition, incorrect sub-task solving, and incorrect final answer summarization. Based on our theoretical and empirical analysis, we introduce structured interventions that adapt CoT generation according to our identified failure types. These interventions improve inductive accuracy without retraining. Our findings suggest that effective CoT reasoning depends not only on taking more steps but also on ensuring those steps are well-structured.",0 "Determining and ranking the most salient entities in a text is critical for user-facing systems, especially as users increasingly rely on models to interpret long documents they only partially read. Graded entity salience addresses this need by assigning entities scores that reflect their relative importance in a text. Existing approaches fall into two main categories: subjective judgments of salience, which allow for gradient scoring but lack consistency, and summarization-based methods, which define salience as mention-worthiness in a summary, promoting explainability but limiting outputs to binary labels (entities are either summary-worthy or not). In this paper, we introduce a novel approach for graded entity salience that combines the strengths of both. Using an English dataset spanning 12 spoken and written genres, we collect 5 summaries per document and calculate each entity's salience score based on its presence across these summaries. Our approach shows stronger correlation with scores based on human summaries and alignments, and outperforms existing techniques, including LLMs. We release our data and code at https://github.com/jl908069/gum_sum_salience to support further research on graded salient entity extraction.",0 "Neural network-based policies have demonstrated success in many robotic applications, but often lack human explainability, which poses challenges in safety-critical deployments. To address this, we propose a neuro-symbolic explanation framework that generates a weighted signal temporal logic (wSTL) specification to describe a robot policy in an interpretable form. Existing methods typically produce explanations that are verbose and inconsistent, which hinders explainability, and loose, which do not give meaningful insights into the underlying policy. 
We address these issues by introducing a simplification process consisting of predicate filtering, regularization, and iterative pruning. We also introduce three novel explainability evaluation metrics -- conciseness, consistency, and strictness -- to assess explanation quality beyond conventional classification metrics. Our method is validated in three simulated robotic environments, where it outperforms baselines in generating concise, consistent, and strict wSTL explanations without sacrificing classification accuracy. This work bridges policy learning with formal methods, contributing to safer and more transparent decision-making in robotics.",0 "Machine Learning (ML) algorithms impact virtually every aspect of human lives and have found use across diverse sectors including healthcare, finance, and education. Often, ML algorithms have been found to exacerbate societal biases present in datasets, leading to adverse impacts on subsets/groups of individuals, in many cases minority groups. To effectively mitigate these untoward effects, it is crucial that disparities/biases are identified early in an ML pipeline. This proactive approach facilitates timely interventions to prevent bias amplification and reduce complexity at later stages of model development. In this paper, we leverage recent advancements in usable information theory to introduce DispaRisk, a novel framework designed to proactively assess the potential risks of disparities in datasets during the initial stages of the ML pipeline. We evaluate DispaRisk's effectiveness by benchmarking it against commonly used datasets in fairness research. Our findings demonstrate DispaRisk's capabilities to identify datasets with a high risk of discrimination, detect model families prone to biases within an ML pipeline, and enhance the explainability of these bias risks. This work contributes to the development of fairer ML systems by providing a robust tool for early bias detection and mitigation.",0 "Human-aligned deep learning models exhibit behaviors consistent with human values, such as robustness, fairness, and honesty. Transferring these behavioral properties to models trained on different tasks or data distributions remains challenging: aligned behavior is easily forgotten during fine-tuning, and collecting task-specific data that preserves this behavior can be prohibitively costly. We introduce BIRD (Behavior Induction via Representation-structure Distillation), a flexible framework for transferring aligned behavior by matching the internal representation structure of a student model to that of a teacher. Applied to out-of-distribution robustness in image classification, BIRD outperforms fine-tuning, transfer learning, and continual learning methods, improving robust accuracy by up to 16% over the next strongest baseline. It remains effective even when the teacher is trained on a much simpler dataset and is $25 \times$ smaller than the student. In a large-scale study of over 400 teacher-student pairs, we show that three interpretable and computable properties of the teacher's representations (i.e., task relevance, behavioral relevance, and complementary knowledge) explain up to 85% of the variance in transfer success. These insights offer practical guidance for teacher selection and design. 
BIRD turns small, well-aligned models into scalable alignment seeds, removing a key bottleneck in deploying safe AI systems in the wild.",0 "Counterfactual explanations play a pivotal role in explainable artificial intelligence (XAI) by offering intuitive, human-understandable alternatives that elucidate machine learning model decisions. Despite their significance, existing methods for generating counterfactuals often require constant access to the predictive model, involve computationally intensive optimization for each instance and lack the flexibility to adapt to new user-defined constraints without retraining. In this paper, we propose DiCoFlex, a novel model-agnostic, conditional generative framework that produces multiple diverse counterfactuals in a single forward pass. Leveraging conditional normalizing flows trained solely on labeled data, DiCoFlex addresses key limitations by enabling real-time user-driven customization of constraints such as sparsity and actionability at inference time. Extensive experiments on standard benchmark datasets show that DiCoFlex outperforms existing methods in terms of validity, diversity, proximity, and constraint adherence, making it a practical and scalable solution for counterfactual generation in sensitive decision-making domains.",0 "This paper offers a hybrid explainable temporal data processing pipeline, DataFul Explainable MultivariatE coRrelatIonal Temporal Artificial inTElligence (EMeriTAte+DF), bridging numerical-driven temporal data classification with an event-based one through verified artificial intelligence principles, enabling human-explainable results. This was possible through a preliminary a posteriori explainable phase describing the numerical input data in terms of concurrent constituents with numerical payloads. This further required extending the event-based literature to design specification mining algorithms supporting concurrent constituents. Our previous and current solutions outperform state-of-the-art solutions for multivariate time series classifications, thus showcasing the effectiveness of the proposed methodology.",0 "Preference mechanisms, such as human preference, LLM-as-a-Judge (LaaJ), and reward models, are central to aligning and evaluating large language models (LLMs). Yet, the underlying concepts that drive these preferences remain poorly understood. In this work, we propose a fully automated method for generating local and global concept-based explanations of preferences across multiple domains. Our method utilizes an LLM to identify concepts that distinguish between chosen and rejected responses, and to represent them with concept-based vectors. To model the relationships between concepts and preferences, we propose a white-box Hierarchical Multi-Domain Regression model that captures both domain-general and domain-specific effects. To evaluate our method, we curate a dataset spanning eight challenging and diverse domains and explain twelve mechanisms. Our method achieves strong preference prediction performance, outperforming baselines while also being explainable. Additionally, we assess explanations in two application-driven settings. First, guiding LLM outputs with concepts from LaaJ explanations yields responses that those judges consistently prefer. Second, prompting LaaJs with concepts explaining humans improves their preference predictions. Together, our work establishes a new paradigm for explainability in the era of LLMs.",2 "People employ efficient planning strategies. 
But how are these strategies acquired? Previous research suggests that people can discover new planning strategies through learning from reinforcements, a process known as metacognitive reinforcement learning (MCRL). While prior work has shown that MCRL models can learn new planning strategies and explain more participants' experience-driven discovery better than alternative mechanisms, it also revealed significant individual differences in metacognitive learning. Furthermore, when fitted to human data, these models exhibit a slower rate of strategy discovery than humans. In this study, we investigate whether incorporating cognitive mechanisms that might facilitate human strategy discovery can bring models of MCRL closer to human performance. Specifically, we consider intrinsically generated metacognitive pseudo-rewards, subjective effort valuation, and termination deliberation. Analysis of planning task data shows that a larger proportion of participants used at least one of these mechanisms, with significant individual differences in their usage and varying impacts on strategy discovery. Metacognitive pseudo-rewards, subjective effort valuation, and learning the value of acting without further planning were found to facilitate strategy discovery. While these enhancements provided valuable insights into individual differences and the effect of these mechanisms on strategy discovery, they did not fully close the gap between model and human performance, prompting further exploration of additional factors that people might use to discover new planning strategies.",2 "Large language models (LLMs) are widely used for long-form text generation. However, factual errors in the responses would undermine their reliability. Despite growing attention to LLM factuality, the effect of response length on factuality remains underexplored. In this work, we systematically investigate this relationship by first introducing an automatic and bi-level long-form factuality evaluation framework, which achieves high agreement with human annotations while being cost-effective. Using this framework, we conduct controlled experiments and find that longer responses exhibit lower factual precision, confirming the presence of length bias. To explain this phenomenon, we empirically examine three hypotheses: error propagation, long context, and facts exhaustion. Our results reveal that facts exhaustion, where the model gradually exhausts more reliable knowledge, is the primary cause of factual degradation, rather than the other two hypotheses.",0 "This paper proposes X2-DFD, an eXplainable and eXtendable framework based on multimodal large-language models (MLLMs) for deepfake detection, consisting of three key stages. The first stage, Model Feature Assessment, systematically evaluates the detectability of forgery-related features for the MLLM, generating a prioritized ranking of features based on their intrinsic importance to the model. The second stage, Explainable Dataset Construction, consists of two key modules: Strong Feature Strengthening, which is designed to enhance the model's existing detection and explanation capabilities by reinforcing its well-learned features, and Weak Feature Supplementing, which addresses gaps by integrating specific feature detectors (e.g., low-level artifact analyzers) to compensate for the MLLM's limitations. The third stage, Fine-tuning and Inference, involves fine-tuning the MLLM on the constructed dataset and deploying it for final detection and explanation. 
By integrating these three stages, our approach enhances the MLLM's strengths while supplementing its weaknesses, ultimately improving both detectability and explainability. Extensive experiments and ablations, followed by a comprehensive human study, validate the improved performance of our approach compared to the original MLLMs. More encouragingly, our framework is designed to be plug-and-play, allowing it to seamlessly integrate with future, more advanced MLLMs and specific feature detectors, leading to continual improvement and extension to face the challenges of rapidly evolving deepfakes.",0 "This study investigated the fracture of star polymer networks made from prepolymers with various arm molecular weights in the range $2 \leq N_a \leq 0$, for node functionalities $3 \leq f \leq 8$ and conversion ratios $0.6\leq\phi_c\leq0.95$ by phantom chain simulations. The networks were created via end-linking reactions of star polymers dispersed in a simulation box with a fixed monomer density $\rho=8$. The resultant networks were alternately subjected to energy minimization and uniaxial stretch until the break. The stretch at break, $\lambda_b$, depended on the strand molecular weight $N_s=2N_a+1$ in a power-law manner, $\lambda_b \sim N_s^{0.67}$, consistent with the experiment. However, the strand length before stretch is proportional to $N_s^{0.5}$, which does not explain the observed $N_s$-dependence of $\lambda_b$. An analysis based on non-affine deformation theory does not explain the phenomenon either. Instead, the increase of the prepolymer concentration, normalized by the overlap concentration, with increasing $N_s$ explains the result through a rise in the fraction of broken strands.",0 "When a student fails an exam, do we tend to blame their effort or the test's difficulty? Attribution, defined as how reasons are assigned to event outcomes, shapes perceptions, reinforces stereotypes, and influences decisions. Attribution Theory in social psychology explains how humans assign responsibility for events using implicit cognition, attributing causes to internal (e.g., effort, ability) or external (e.g., task difficulty, luck) factors. LLMs' attribution of event outcomes based on demographics carries important fairness implications. Most works exploring social biases in LLMs focus on surface-level associations or isolated stereotypes. This work proposes a cognitively grounded bias evaluation framework to identify how models' reasoning disparities channelize biases toward demographic groups.",0 "A critical challenge remains unresolved as generative AI systems are quickly implemented in various organizational settings. Despite significant advances in memory components such as RAG, vector stores, and LLM agents, these systems still have substantial memory limitations. Gen AI workflows rarely store or reflect on the full context in which decisions are made. This leads to repeated errors and a general lack of clarity. This paper introduces Contextual Memory Intelligence (CMI) as a new foundational paradigm for building intelligent systems. It repositions memory as an adaptive infrastructure necessary for longitudinal coherence, explainability, and responsible decision-making rather than passive data. Drawing on cognitive science, organizational theory, human-computer interaction, and AI governance, CMI formalizes the structured capture, inference, and regeneration of context as a fundamental system capability. 
The Insight Layer is presented in this paper to operationalize this vision. This modular architecture uses human-in-the-loop reflection, drift detection, and rationale preservation to incorporate contextual memory into systems. The paper argues that CMI allows systems to reason with data, history, judgment, and changing context, thereby addressing a foundational blind spot in current AI architectures and governance efforts. A framework for creating intelligent systems that are effective, reflective, auditable, and socially responsible is presented through CMI. This enhances human-AI collaboration, generative AI design, and the resilience of the institutions.",0 "Graph representation learning has garnered significant attention due to its broad applications in various domains, such as recommendation systems and social network analysis. Despite advancements in graph learning methods, challenges still remain in explainability when graphs are associated with semantic features. In this paper, we present GraphNarrator, the first method designed to generate natural language explanations for Graph Neural Networks. GraphNarrator employs a generative language model that maps input-output pairs to explanations reflecting the model's decision-making process. To address the lack of ground truth explanations to train the model, we propose first generating pseudo-labels that capture the model's decisions from saliency-based explanations, then using Expert Iteration to iteratively train the pseudo-label generator based on training objectives on explanation quality. The high-quality pseudo-labels are finally utilized to train an end-to-end explanation generator model. Extensive experiments are conducted to demonstrate the effectiveness of GraphNarrator in producing faithful, concise, and human-preferred natural language explanations.",0 "Opinion summarization plays a key role in deriving meaningful insights from large-scale online reviews. To make the process more explainable and grounded, we propose a domain-agnostic modular approach guided by review aspects (e.g., cleanliness for hotel reviews) which separates the tasks of aspect identification, opinion consolidation, and meta-review synthesis to enable greater transparency and ease of inspection. We conduct extensive experiments across datasets representing scientific research, business, and product domains. Results show that our approach generates more grounded summaries compared to strong baseline models, as verified through automated and human evaluations. Additionally, our modular approach, which incorporates reasoning based on review aspects, produces more informative intermediate outputs than other knowledge-agnostic decomposition approaches. Lastly, we provide empirical results to show that these intermediate outputs can support humans in summarizing opinions from large volumes of reviews.",0 "Humanity is progressing towards automated product development, a trend that promises faster creation of better products and thus the acceleration of technological progress. However, increasing reliance on non-human agents for this process introduces many risks. This perspective aims to initiate a discussion on these risks and appropriate mitigation strategies. To this end, we outline a set of principles for safer AI-driven product development which emphasize human oversight, accountability, and explainable design, among others. 
The risk assessment covers both technical risks which affect product quality and safety, and sociotechnical risks which affect society. While AI-driven product development is still in its early stages, this discussion will help balance its opportunities and risks without delaying essential progress in understanding, norm-setting, and regulation.",0 "Deep neural networks form the backbone of artificial intelligence research, with potential to transform the human experience in areas ranging from autonomous driving to personal assistants, healthcare to education. However, their integration into the daily routines of real-world classrooms remains limited. It is not yet common for a teacher to assign students individualized homework targeting their specific weaknesses, provide students with instant feedback, or simulate student responses to a new exam question. While these models excel in predictive performance, this lack of adoption can be attributed to a significant weakness: the lack of explainability of model decisions, leading to a lack of trust from students, parents, and teachers. This thesis aims to bring human needs to the forefront of eXplainable AI (XAI) research, grounded in the concrete use case of personalized learning and teaching. We frame the contributions along two verticals: technical advances in XAI and their aligned human studies. We investigate explainability in AI for education, revealing systematic disagreements between post-hoc explainers and identifying a need for inherently interpretable model architectures. We propose four novel technical contributions in interpretability with a multimodal modular architecture (MultiModN), an interpretable mixture-of-experts model (InterpretCC), adversarial training for explainer stability, and a theory-driven LLM-XAI framework to present explanations to students (iLLuMinaTE), which we evaluate in diverse settings with professors, teachers, learning scientists, and university students. By combining empirical evaluations of existing explainers with novel architectural designs and human studies, our work lays a foundation for human-centric AI systems that balance state-of-the-art performance with built-in transparency and trust.",0 "In human-centric settings like education or healthcare, model accuracy and model explainability are key factors for user adoption. Towards these two goals, intrinsically interpretable deep learning models have gained popularity, focusing on accurate predictions alongside faithful explanations. However, there exists a gap in the human-centeredness of these approaches, which often produce nuanced and complex explanations that are not easily actionable for downstream users. We present InterpretCC (interpretable conditional computation), a family of intrinsically interpretable neural networks at a unique point in the design space that optimizes for ease of human understanding and explanation faithfulness, while maintaining comparable performance to state-of-the-art models. InterpretCC achieves this through adaptive sparse activation of features before prediction, allowing the model to use a different, minimal set of features for each instance. We extend this idea into an interpretable, global mixture-of-experts (MoE) model that allows users to specify topics of interest, discretely separates the feature space for each data point into topical subnetworks, and adaptively and sparsely activates these topical subnetworks for prediction. 
We apply InterpretCC to text, time series, and tabular data across several real-world datasets, demonstrating comparable performance with non-interpretable baselines and outperforming intrinsically interpretable baselines. Through a user study involving 56 teachers, InterpretCC explanations are found to have higher actionability and usefulness than other intrinsically interpretable approaches.",2 "Due to the increasing presence of networked devices in everyday life, not only cybersecurity specialists but also end users benefit from security applications such as firewalls, vulnerability scanners, and intrusion detection systems. Recent approaches use large language models (LLMs) to rewrite brief, technical security alerts into intuitive language and suggest actionable measures, helping everyday users understand and respond appropriately to security risks. However, it remains an open question how well such alerts are explained to users. LLM outputs can also be hallucinated, inconsistent, or misleading. In this work, we introduce the Human-Centered Security Alert Evaluation Framework (HCSAEF). HCSAEF assesses LLM-generated cybersecurity notifications to support researchers who want to compare notifications generated for everyday users, improve them, or analyze the capabilities of different LLMs in explaining cybersecurity issues. We demonstrate HCSAEF through three use cases, which allow us to quantify the impact of prompt design, model selection, and output consistency. Our findings indicate that HCSAEF effectively differentiates generated notifications along dimensions such as intuitiveness, urgency, and correctness.",0 "Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models operating in open-world scenarios. Current OOD detectors mainly rely on statistical models to identify unusual patterns in the latent representations of a deep neural network. This work proposes to augment existing OOD detectors with probabilistic reasoning, utilizing Markov logic networks (MLNs). MLNs connect first-order logic with probabilistic reasoning to assign probabilities to inputs based on weighted logical constraints defined over human-understandable concepts, which offers improved explainability. Through extensive experiments on multiple datasets, we demonstrate that MLNs can significantly enhance the performance of a wide range of existing OOD detectors while maintaining computational efficiency. Furthermore, we introduce a simple algorithm for learning logical constraints for OOD detection from a dataset and showcase its effectiveness.",0 "Toxicity mitigation consists in rephrasing text in order to remove offensive or harmful meaning. Neural natural language processing (NLP) models have been widely used to target and mitigate textual toxicity. However, existing methods fail to detoxify text while preserving the initial non-toxic meaning at the same time. In this work, we propose to apply counterfactual generation methods from the eXplainable AI (XAI) field to target and mitigate textual toxicity. In particular, we perform text detoxification by applying local feature importance and counterfactual generation methods to a toxicity classifier distinguishing between toxic and non-toxic texts. We carry out text detoxification through counterfactual generation on three datasets and compare our approach to three competitors. 
Automatic and human evaluations show that recently developed NLP counterfactual generators can mitigate toxicity accurately while better preserving the meaning of the initial text as compared to classical detoxification methods. Finally, we take a step back from using automated detoxification tools, and discuss how to manage the polysemous nature of toxicity and the risk of malicious use of detoxification tools. This work is the first to bridge the gap between counterfactual generation and text detoxification and paves the way towards more practical application of XAI methods.",0 "Symbolic regression aims to discover mathematical equations that fit given numerical data. It has been applied in various fields of scientific research, such as producing human-readable expressions that explain physical phenomena. Recently, Neural symbolic regression (NSR) methods that involve Transformers pre-trained on large-scale synthetic datasets have gained attention. While these methods offer advantages such as short inference time, they suffer from low performance, particularly when the number of input variables is large. In this study, we hypothesized that this limitation stems from the memorization bias of Transformers in symbolic regression. We conducted a quantitative evaluation of this bias in Transformers using a synthetic dataset and found that Transformers rarely generate expressions not present in the training data. Additional theoretical analysis reveals that this bias arises from the Transformer's inability to construct expressions compositionally while verifying their numerical validity. We finally examined if tailoring test-time strategies can lead to reduced memorization bias and better performance. We empirically demonstrate that providing additional information to the model at test time can significantly mitigate memorization bias. On the other hand, we also find that reducing memorization bias does not necessarily correlate with improved performance. These findings contribute to a deeper understanding of the limitations of NSR approaches and offer a foundation for designing more robust, generalizable symbolic regression methods. Code is available at https://github.com/Shun-0922/Mem-Bias-NSR .",0 "Automated answer grading is a critical challenge in educational technology, with the potential to streamline assessment processes, ensure grading consistency, and provide timely feedback to students. However, existing approaches are often constrained to specific exam formats, lack interpretability in score assignment, and struggle with real-world applicability across diverse subjects and assessment types. To address these limitations, we introduce RATAS (Rubric Automated Tree-based Answer Scoring), a novel framework that leverages state-of-the-art generative AI models for rubric-based grading of textual responses. RATAS is designed to support a wide range of grading rubrics, enable subject-agnostic evaluation, and generate structured, explainable rationales for assigned scores. We formalize the automatic grading task through a mathematical framework tailored to rubric-based assessment and present an architecture capable of handling complex, real-world exam structures. To rigorously evaluate our approach, we construct a unique, contextualized dataset derived from real-world project-based courses, encompassing diverse response formats and varying levels of complexity. 
Empirical results demonstrate that RATAS achieves high reliability and accuracy in automated grading while providing interpretable feedback that enhances transparency for both students and instructors.",0 "Time series analysis provides essential insights for real-world system dynamics and informs downstream decision-making, yet most existing methods overlook the rich contextual signals present in auxiliary modalities. To bridge this gap, we introduce TimeXL, a multi-modal prediction framework that integrates a prototype-based time series encoder with three collaborating Large Language Models (LLMs) to deliver more accurate predictions and interpretable explanations. First, a multi-modal prototype-based encoder processes both time series and textual inputs to generate preliminary forecasts alongside case-based rationales. These outputs then feed into a prediction LLM, which refines the forecasts by reasoning over the encoder's predictions and explanations. Next, a reflection LLM compares the predicted values against the ground truth, identifying textual inconsistencies or noise. Guided by this feedback, a refinement LLM iteratively enhances text quality and triggers encoder retraining. This closed-loop workflow -- prediction, critique (reflect), and refinement -- continuously boosts the framework's performance and interpretability. Empirical evaluations on four real-world datasets demonstrate that TimeXL achieves up to 8.9\% improvement in AUC and produces human-centric, multi-modal explanations, highlighting the power of LLM-driven reasoning for time series prediction.",0 "Recent multimodal image generators such as GPT-4o, Gemini 2.0 Flash, and Gemini 2.5 Pro excel at following complex instructions, editing images and maintaining concept consistency. However, they are still evaluated by disjoint toolkits: text-to-image (T2I) benchmarks that lack multi-modal conditioning, and customized image generation benchmarks that overlook compositional semantics and common knowledge. We propose MMIG-Bench, a comprehensive Multi-Modal Image Generation Benchmark that unifies these tasks by pairing 4,850 richly annotated text prompts with 1,750 multi-view reference images across 380 subjects, spanning humans, animals, objects, and artistic styles. MMIG-Bench is equipped with a three-level evaluation framework: (1) low-level metrics for visual artifacts and identity preservation of objects; (2) a novel Aspect Matching Score (AMS), a VQA-based mid-level metric that delivers fine-grained prompt-image alignment and shows strong correlation with human judgments; and (3) high-level metrics for aesthetics and human preference. Using MMIG-Bench, we benchmark 17 state-of-the-art models, including Gemini 2.5 Pro, FLUX, DreamBooth, and IP-Adapter, and validate our metrics with 32k human ratings, yielding in-depth insights into architecture and data design.",2 "As machine learning models and autonomous agents are increasingly deployed in high-stakes, real-world domains such as healthcare, security, finance, and robotics, the need for transparent and trustworthy explanations has become critical. To ensure end-to-end transparency of AI decisions, we need models that are not only accurate but also fully explainable and human-tunable. We introduce BACON, a novel framework for automatically training explainable AI models for decision making problems using graded logic. 
BACON achieves high predictive accuracy while offering full structural transparency and precise, logic-based symbolic explanations, enabling effective human-AI collaboration and expert-guided refinement. We evaluate BACON with a diverse set of scenarios: classic Boolean approximation, Iris flower classification, house purchasing decisions, and breast cancer diagnosis. In each case, BACON provides high-performance models while producing compact, human-verifiable decision logic. These results demonstrate BACON's potential as a practical and principled approach for delivering crisp, trustworthy explainable AI.",0 "This paper calls on the research community not only to investigate how human biases are inherited by large language models (LLMs) but also to explore how these biases in LLMs can be leveraged to make society's ""unwritten code"" - such as implicit stereotypes and heuristics - visible and accessible for critique. We introduce a conceptual framework through a case study in science: uncovering hidden rules in peer review - the factors that reviewers care about but rarely state explicitly due to normative scientific expectations. The idea of the framework is to push LLMs to speak out their heuristics by generating self-consistent hypotheses - why one paper appeared stronger in reviewer scoring - among paired papers submitted to 45 computer science conferences, while iteratively searching for deeper hypotheses among the remaining pairs that existing hypotheses cannot explain. We observed that LLMs' normative priors about the internal characteristics of good science extracted from their self-talk, e.g. theoretical rigor, were systematically updated toward posteriors that emphasize storytelling about external connections, such as how the work is positioned and connected within and across literatures. This shift reveals the primacy of scientific myths about intrinsic properties driving scientific excellence rather than extrinsic contextualization and storytelling that influence conceptions of relevance and significance. Human reviewers tend to explicitly reward aspects that moderately align with LLMs' normative priors (correlation = 0.49) but avoid articulating contextualization and storytelling posteriors in their review comments (correlation = -0.14), despite implicitly rewarding them with positive scores. We discuss the broad applicability of the framework, leveraging LLMs as diagnostic tools to surface the tacit codes underlying human society, enabling more precisely targeted responsible AI.",0 "A classifier is considered interpretable if each of its decisions has an explanation which is small enough to be easily understood by a human user. A DNF formula can be seen as a binary classifier $\kappa$ over boolean domains. The size of an explanation of a positive decision taken by a DNF $\kappa$ is bounded by the size of the terms in $\kappa$, since we can explain a positive decision by giving a term of $\kappa$ that evaluates to true. Since both positive and negative decisions must be explained, we consider that interpretable DNFs are those $\kappa$ for which both $\kappa$ and $\overline{\kappa}$ can be expressed as DNFs composed of terms of bounded size. In this paper, we study the family of $k$-DNFs whose complements can also be expressed as $k$-DNFs. We compare two such families, namely depth-$k$ decision trees and nested $k$-DNFs, a novel family of models. 
Experiments indicate that nested $k$-DNFs are an interesting alternative to decision trees in terms of interpretability and accuracy.",0 "Large Language Models (LLMs) excel at text summarization, a task that requires models to select content based on its importance. However, the exact notion of salience that LLMs have internalized remains unclear. To bridge this gap, we introduce an explainable framework to systematically derive and investigate information salience in LLMs through their summarization behavior. Using length-controlled summarization as a behavioral probe into the content selection process, and tracing the answerability of Questions Under Discussion throughout, we derive a proxy for how models prioritize information. Our experiments on 13 models across four datasets reveal that LLMs have a nuanced, hierarchical notion of salience, generally consistent across model families and sizes. While models show highly consistent behavior and hence salience patterns, this notion of salience cannot be accessed through introspection, and only weakly correlates with human perceptions of information salience.",0 "From uncertainty quantification to real-world object detection, we recognize the importance of machine learning algorithms, particularly in safety-critical domains such as autonomous driving or medical diagnostics. Ambiguous data plays an important role across machine learning domains. Optical illusions present a compelling area of study in this context, as they offer insight into the limitations of both human and machine perception. Despite this relevance, optical illusion datasets remain scarce. In this work, we introduce a novel dataset of optical illusions featuring intermingled animal pairs designed to evoke perceptual ambiguity. We identify generalizable visual concepts, particularly gaze direction and eye cues, as subtle yet impactful features that significantly influence model accuracy. By confronting models with perceptual ambiguity, our findings underscore the importance of concepts in visual learning and provide a foundation for studying bias and alignment between human and machine vision. To make this dataset useful for general purposes, we generate optical illusions systematically with different concepts discussed in our bias mitigation section. The dataset is accessible on Kaggle via https://kaggle.com/datasets/693bf7c6dd2cb45c8a863f9177350c8f9849a9508e9d50526e2ffcc5559a8333. Our source code can be found at https://github.com/KDD-OpenSource/Ambivision.git.",0 "In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs). The knowledge acquired during pre-training is crucial for this few-shot capability, providing the model with task priors. However, recent studies have shown that ICL predominantly relies on retrieving task priors rather than ""learning"" to perform tasks. This limitation is particularly evident in complex subjective domains such as emotion and morality, where priors significantly influence posterior predictions. In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt. Moreover, we evaluate the posterior bias towards certain annotators by grounding our study in appropriate, quantitative measures of LLM priors. 
Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead. However, aggregation does not explain the entire gap between ICL and the state of the art, meaning other factors in such tasks also account for the observed phenomena. Finally, by rigorously studying annotator-level labels, we find that it is possible for minority annotators to both better align with LLMs and have their perspectives further amplified.",2 "Language models hold incredible promise for enabling scientific discovery by synthesizing massive research corpora. Many complex scientific research questions have multiple plausible answers, each supported by evidence of varying strength. However, existing language models lack the capability to quantitatively and faithfully compare answer plausibility in terms of supporting evidence. To address this, we introduce Retrieve to Explain (R2E), a retrieval-based model that scores and ranks all possible answers to a research question based on evidence retrieved from a document corpus. The architecture represents each answer only in terms of its supporting evidence, with the answer itself masked. This allows us to extend feature attribution methods, such as Shapley values, to transparently attribute answer scores to supporting evidence at inference time. The architecture also allows incorporation of new evidence without retraining, including non-textual data modalities templated into natural language. We developed R2E for the challenging scientific discovery task of drug target identification, a human-in-the-loop process where failures are extremely costly and explainability is paramount. When predicting whether drug targets will subsequently be confirmed as efficacious in clinical trials, R2E not only matches non-explainable literature-based models but also surpasses a genetics-based target identification approach used throughout the pharmaceutical industry.",0 "Instruction-based image editing models offer increased personalization opportunities in generative tasks. However, properly evaluating their results is challenging, and most of the existing metrics lag in terms of alignment with human judgment and explainability. To tackle these issues, we introduce DICE (DIfference Coherence Estimator), a model designed to detect localized differences between the original and the edited image and to assess their relevance to the given modification request. DICE consists of two key components: a difference detector and a coherence estimator, both built on an autoregressive Multimodal Large Language Model (MLLM) and trained using a strategy that leverages self-supervision, distillation from inpainting networks, and full supervision. Through extensive experiments, we evaluate each stage of our pipeline, comparing different MLLMs within the proposed framework. We demonstrate that DICE effectively identifies coherent edits, evaluating images generated by different editing models with a strong correlation with human judgment. We publicly release our source code, models, and data.",0 "Concept-based explainable approaches have emerged as a promising direction in explainable AI because they can interpret models in a way that aligns with human reasoning. However, their adaptation to the text domain remains limited. 
Most existing methods rely on predefined concept annotations and cannot discover unseen concepts, while other methods that extract concepts without supervision often produce explanations that are not intuitively comprehensible to humans, potentially diminishing user trust. These methods fall short of discovering comprehensible concepts automatically. To address this issue, we propose \textbf{ECO-Concept}, an intrinsically interpretable framework to discover comprehensible concepts with no concept annotations. ECO-Concept first utilizes an object-centric architecture to extract semantic concepts automatically. Then the comprehensibility of the extracted concepts is evaluated by large language models. Finally, the evaluation result guides the subsequent model fine-tuning to obtain more understandable explanations. Experiments show that our method achieves superior performance across diverse tasks. Further concept evaluations validate that the concepts learned by ECO-Concept surpass current counterparts in comprehensibility.",0 "Artificial Intelligence (AI) is one of the major technological advancements of this century, bearing incredible potential for users through AI-powered applications and tools in numerous domains. Because AI is often black-box (i.e., its decision-making process is unintelligible), developers typically resort to eXplainable Artificial Intelligence (XAI) techniques to interpret the behaviour of AI models and produce systems that are transparent, fair, reliable, and trustworthy. However, presenting explanations to the user is not trivial and is often left as a secondary aspect of the system's design process, leading to AI systems that are not useful to end-users. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed in the academic literature to effectively present explanations to users. To improve the contribution and real-world impact of this survey, we also present a framework for Human-cEnteRed developMent of Explainable user interfaceS (HERMES) to guide practitioners and academics in the design and evaluation of XUIs.",2 "Transformer-based language models, though not explicitly trained to mimic brain recordings, have demonstrated surprising alignment with brain activity. Progress in these models, through increased size, instruction-tuning, and multimodality, has led to better representational alignment with neural data. Recently, a new class of instruction-tuned multimodal LLMs (MLLMs) has emerged, showing remarkable zero-shot capabilities in open-ended multimodal vision tasks. However, it is unknown whether MLLMs, when prompted with natural instructions, lead to better brain alignment and effectively capture instruction-specific representations. To address this, we first investigate brain alignment, i.e., measuring the degree of predictivity of neural visual activity using text output response embeddings from MLLMs as participants engage in watching natural scenes. Experiments with 10 different instructions show that MLLMs exhibit significantly better brain alignment than vision-only models and perform comparably to non-instruction-tuned multimodal models like CLIP. We also find that while these MLLMs are effective at generating high-quality responses suitable to the task-specific instructions, not all instructions are relevant for brain alignment. 
Further, by varying instructions, we make the MLLMs encode instruction-specific visual concepts related to the input image. This analysis shows that MLLMs effectively capture count-related and recognition-related concepts, demonstrating strong alignment with brain activity. Notably, the majority of the explained variance of the brain encoding models is shared between MLLM embeddings of image captioning and other instructions. These results suggest that enhancing MLLMs' ability to capture task-specific information could lead to better differentiation between various types of instructions, and thereby improving their precision in predicting brain responses.",2 "Ashery et al. recently argue that large language models (LLMs), when paired to play a classic ""naming game,"" spontaneously develop linguistic conventions reminiscent of human social norms. Here, we show that their results are better explained by data leakage: the models simply reproduce conventions they already encountered during pre-training. Despite the authors' mitigation measures, we provide multiple analyses demonstrating that the LLMs recognize the structure of the coordination game and recall its outcomes, rather than exhibit ""emergent"" conventions. Consequently, the observed behaviors are indistinguishable from memorization of the training corpus. We conclude by pointing to potential alternative strategies and reflecting more generally on the place of LLMs for social science models.",0 "Query performance prediction (QPP) aims to estimate the retrieval quality of a search system for a query without human relevance judgments. Previous QPP methods typically return a single scalar value and do not require the predicted values to approximate a specific information retrieval (IR) evaluation measure, leading to certain drawbacks: (i) a single scalar is insufficient to accurately represent different IR evaluation measures, especially when metrics do not highly correlate, and (ii) a single scalar limits the interpretability of QPP methods because solely using a scalar is insufficient to explain QPP results. To address these issues, we propose a QPP framework using automatically generated relevance judgments (QPP-GenRE), which decomposes QPP into independent subtasks of predicting the relevance of each item in a ranked list to a given query. This allows us to predict any IR evaluation measure using the generated relevance judgments as pseudo-labels. This also allows us to interpret predicted IR evaluation measures, and identify, track and rectify errors in generated relevance judgments to improve QPP quality. We predict an item's relevance by using open-source large language models (LLMs) to ensure scientific reproducibility. We face two main challenges: (i) excessive computational costs of judging an entire corpus for predicting a metric considering recall, and (ii) limited performance in prompting open-source LLMs in a zero-/few-shot manner. To solve the challenges, we devise an approximation strategy to predict an IR measure considering recall and propose to fine-tune open-source LLMs using human-labeled relevance judgments. Experiments on the TREC 2019 to 2022 deep learning tracks and CAsT-19 and 20 datasets show that QPP-GenRE achieves state-of-the-art QPP quality for both lexical and neural rankers.",0 "The integration of Artificial Intelligence (AI) in modern society is transforming how individuals perform tasks. In high-risk domains, ensuring human control over AI systems remains a key design challenge. 
This article presents a novel End-User Development (EUD) approach for black-box AI models, enabling users to edit explanations and influence future predictions through targeted interventions. By combining explainability, user control, and model adaptability, the proposed method advances Human-Centered AI (HCAI), promoting a symbiotic relationship between humans and adaptive, user-tailored AI systems.",0 "This position paper argues that Mean Opinion Score (MOS), while historically foundational, is no longer sufficient as the sole supervisory signal for multimedia quality assessment models. MOS reduces rich, context-sensitive human judgments to a single scalar, obscuring semantic failures, user intent, and the rationale behind quality decisions. We contend that modern quality assessment models must integrate three interdependent capabilities: (1) context-awareness, to adapt evaluations to task-specific goals and viewing conditions; (2) reasoning, to produce interpretable, evidence-grounded justifications for quality judgments; and (3) multimodality, to align perceptual and semantic cues using vision-language models. We critique the limitations of current MOS-centric benchmarks and propose a roadmap for reform: richer datasets with contextual metadata and expert rationales, and new evaluation metrics that assess semantic alignment, reasoning fidelity, and contextual sensitivity. By reframing quality assessment as a contextual, explainable, and multimodal modeling task, we aim to catalyze a shift toward more robust, human-aligned, and trustworthy evaluation systems.",0 "Securing personal identity against deepfake attacks is increasingly critical in the digital age, especially for celebrities and political figures whose faces are easily accessible and frequently targeted. Most existing deepfake detection methods focus on general-purpose scenarios and often ignore the valuable prior knowledge of known facial identities, e.g., ""VIP individuals"" whose authentic facial data are already available. In this paper, we propose \textbf{VIPGuard}, a unified multimodal framework designed to capture fine-grained and comprehensive facial representations of a given identity, compare them against potentially fake or similar-looking faces, and reason over these comparisons to make accurate and explainable predictions. Specifically, our framework consists of three main stages. First, we fine-tune a multimodal large language model (MLLM) to learn detailed and structural facial attributes. Second, we perform identity-level discriminative learning to enable the model to distinguish subtle differences between highly similar faces, including real and fake variations. Finally, we introduce user-specific customization, where we model the unique characteristics of the target face identity and perform semantic reasoning via MLLM to enable personalized and explainable deepfake detection. Our framework shows clear advantages over previous detection works, where traditional detectors mainly rely on low-level visual cues and provide no human-understandable explanations, while other MLLM-based models often lack a detailed understanding of specific face identities. 
To facilitate the evaluation of our method, we built a comprehensive identity-aware benchmark called \textbf{VIPBench} for personalized deepfake detection, involving 7 of the latest face-swapping and 7 entire-face synthesis techniques for generation.",0 "This review presents a comprehensive analysis of two emerging paradigms in AI-assisted software development: vibe coding and agentic coding. While both leverage large language models (LLMs), they differ fundamentally in autonomy, architectural design, and the role of the developer. Vibe coding emphasizes intuitive, human-in-the-loop interaction through prompt-based, conversational workflows that support ideation, experimentation, and creative exploration. In contrast, agentic coding enables autonomous software development through goal-driven agents capable of planning, executing, testing, and iterating tasks with minimal human intervention. We propose a detailed taxonomy spanning conceptual foundations, execution models, feedback loops, safety mechanisms, debugging strategies, and real-world tool ecosystems. Through comparative workflow analysis and 20 detailed use cases, we illustrate how vibe systems thrive in early-stage prototyping and education, while agentic systems excel in enterprise-grade automation, codebase refactoring, and CI/CD integration. We further examine emerging trends in hybrid architectures, where natural language interfaces are coupled with autonomous execution pipelines. Finally, we articulate a future roadmap for agentic AI, outlining the infrastructure needed for trustworthy, explainable, and collaborative systems. Our findings suggest that successful AI software engineering will rely not on choosing one paradigm, but on harmonizing their strengths within a unified, human-centered development lifecycle.",0 "Ensuring that large language models (LLMs) can effectively assess, detect, explain, and remediate software vulnerabilities is critical for building robust and secure software systems. We introduce VADER, a human-evaluated benchmark designed explicitly to assess LLM performance across four key vulnerability-handling dimensions: assessment, detection, explanation, and remediation. VADER comprises 174 real-world software vulnerabilities, each carefully curated from GitHub repositories and annotated by security experts. For each vulnerability case, models are tasked with identifying the flaw, classifying it using Common Weakness Enumeration (CWE), explaining its underlying cause, proposing a patch, and formulating a test plan. Using a one-shot prompting strategy, we benchmark six state-of-the-art LLMs (Claude 3.7 Sonnet, Gemini 2.5 Pro, GPT-4.1, GPT-4.5, Grok 3 Beta, and o3) on VADER, and human security experts evaluated each response according to a rigorous, standardized scoring rubric emphasizing remediation (quality of the code fix, 50%), explanation (20%), and classification and test plan (30%). Our results show that current state-of-the-art LLMs achieve only moderate success on VADER - OpenAI's o3 attained 54.7% accuracy overall, with others in the 49-54% range, indicating ample room for improvement. Notably, remediation quality is strongly correlated (Pearson r > 0.97) with accurate classification and test plans, suggesting that models that effectively categorize vulnerabilities also tend to fix them well. 
VADER's comprehensive dataset, detailed evaluation rubrics, scoring tools, and visualized results with confidence intervals are publicly released, providing the community with an interpretable, reproducible benchmark to advance vulnerability-aware LLMs. All code and data are available at: https://github.com/AfterQuery/vader",0 "A key feature of human theory-of-mind is the ability to attribute beliefs to other agents as mentalistic explanations for their behavior. But given the wide variety of beliefs that agents may hold about the world and the rich language we can use to express them, which specific beliefs are people inclined to attribute to others? In this paper, we investigate the hypothesis that people prefer to attribute beliefs that are good explanations for the behavior they observe. We develop a computational model that quantifies the explanatory strength of a (natural language) statement about an agent's beliefs via three factors: accuracy, informativity, and causal relevance to actions, each of which can be computed from a probabilistic generative model of belief-driven behavior. Using this model, we study the role of each factor in how people selectively attribute beliefs to other agents. We investigate this via an experiment where participants watch an agent collect keys hidden in boxes in order to reach a goal, then rank a set of statements describing the agent's beliefs about the boxes' contents. We find that accuracy and informativity perform reasonably well at predicting these rankings when combined, but that causal relevance is the single factor that best explains participants' responses.",2 "Battles, or side-by-side comparisons in so-called arenas that elicit human preferences, have emerged as a popular approach for assessing the output quality of LLMs. Recently, this idea has been extended to retrieval-augmented generation (RAG) systems. While undoubtedly representing an advance in evaluation, battles have at least two drawbacks, particularly in the context of complex information-seeking queries: they are neither explanatory nor diagnostic. Recently, the nugget evaluation methodology has emerged as a promising approach to evaluate the quality of RAG answers. Nuggets decompose long-form LLM-generated answers into atomic facts, highlighting important pieces of information necessary in a ""good"" response. In this work, we apply our AutoNuggetizer framework to analyze data from roughly 7K Search Arena battles provided by LMArena in a fully automatic manner. Our results show a significant correlation between nugget scores and human preferences, showcasing promise in our approach to explainable and diagnostic system evaluations. All the code necessary to reproduce results in our work is available in https://github.com/castorini/lmsys_nuggetize.",2 "Humans constantly move their eyes, even during visual fixations, where miniature (or fixational) eye movements occur involuntarily. Fixational eye movements comprise slow components (physiological drift and tremor) and fast components (microsaccades). The complex dynamics of physiological drift can be modeled qualitatively as a statistically self-avoiding random walk (SAW model, Engbert, Mergenthaler, Sinn, & Pikovsky, 2011). In this study, we implement a data assimilation approach for the SAW model to explain statistics of fixational eye movements and microsaccades in experimental data obtained from high-resolution eye-tracking. 
We discuss and analyze the likelihood function for the SAW model, which allows us to apply Bayesian parameter estimation at the level of individual human observers. Based on model fitting, we find a relationship between the activation predicted by the SAW model and the occurrence of microsaccades. The model's latent activation relative to microsaccade onsets and offsets using experimental data lends support to the existence of a triggering mechanism for microsaccades. Our findings suggest that the SAW model can capture individual differences and serve as a tool for exploring the relationship between physiological drift and microsaccades as the two most essential components of fixational eye movements. Our results contribute to understanding individual variability in microsaccade behaviors and the role of fixational eye movements in visual information processing.",2 "Concept-based Models are a class of inherently explainable networks that improve upon standard Deep Neural Networks by providing a rationale behind their predictions using human-understandable `concepts'. With these models being highly successful in critical applications like medical diagnosis and financial risk prediction, there is a natural push toward their wider adoption in sensitive domains to instill greater trust among diverse stakeholders. However, recent research has uncovered significant limitations in the structure of such networks, their training procedure, underlying assumptions, and their susceptibility to adversarial vulnerabilities. In particular, issues such as concept leakage, entangled representations, and limited robustness to perturbations pose challenges to their reliability and generalization. Additionally, the effectiveness of human interventions in these models remains an open question, raising concerns about their real-world applicability. In this paper, we provide a comprehensive survey on the risks and limitations associated with Concept-based Models. In particular, we focus on aggregating commonly encountered challenges and the architecture choices mitigating these challenges for Supervised and Unsupervised paradigms. We also examine recent advances in improving their reliability and discuss open problems and promising avenues of future research in this domain.",2 "Machine learning-based Android malware classifiers achieve high accuracy in stationary environments but struggle with concept drift. The rapid evolution of malware, especially with new families, can depress classification accuracy to near-random levels. Previous research has largely centered on detecting drift samples, with expert-led label revisions on these samples to guide model retraining. However, these methods often lack a comprehensive understanding of malware concepts and provide limited guidance for effective drift adaptation, leading to unstable detection performance and high human labeling costs. To combat concept drift, we propose DREAM, a novel system that improves drift detection and establishes an explanatory adaptation process. Our core idea is to integrate classifier and expert knowledge within a unified model. To achieve this, we embed malware explanations (or concepts) within the latent space of a contrastive autoencoder, while constraining sample reconstruction based on classifier predictions. 
This approach enhances classifier retraining in two key ways: 1) capturing the target classifier's characteristics to select more effective samples in drift detection and 2) enabling concept revisions that extend the classifier's semantics to provide stronger guidance for adaptation. Additionally, DREAM eliminates reliance on training data during real-time drift detection and provides a behavior-based drift explainer to support concept revision. Our evaluation shows that DREAM effectively improves the drift detection accuracy and reduces the expert analysis effort in adaptation across different malware datasets and classifiers. Notably, when updating a widely-used Drebin classifier, DREAM achieves the same accuracy with 76.6% fewer newly labeled samples compared to the best existing methods.",0 "Current Large Language Model (LLM) agents demonstrate strong reasoning and tool use capabilities, but often lack self-awareness, failing to balance these approaches effectively. This imbalance leads to Tool Overuse, where models unnecessarily rely on external tools for tasks solvable with parametric knowledge, increasing computational overhead. Inspired by human metacognition, we introduce SMART (Strategic Model-Aware Reasoning with Tools), a paradigm that enhances an agent's self-awareness to optimize task handling and reduce tool overuse. To support this paradigm, we introduce SMART-ER, a dataset spanning three domains, where reasoning alternates between parametric knowledge and tool-dependent steps, with each step enriched by rationales explaining when tools are necessary. Through supervised training, we develop SMARTAgent, a family of models that dynamically balance parametric knowledge and tool use. Evaluations show that SMARTAgent reduces tool use by 24% while improving performance by over 37%, enabling 7B-scale models to match its 70B counterpart and GPT-4o. Additionally, SMARTAgent generalizes to out-of-distribution test data like GSM8K and MINTQA, maintaining accuracy with just one-fifth the tool calls. These highlight the potential of strategic tool use to enhance reasoning, mitigate overuse, and bridge the gap between model size and performance, advancing intelligent and resource-efficient agent designs.",0 "While autonomous driving (AD) stacks struggle with decision making under partial observability and real-world complexity, human drivers are capable of commonsense reasoning to make near-optimal decisions with limited information. Recent work has attempted to leverage finetuned Vision-Language Models (VLMs) for trajectory planning at inference time to emulate human behavior. Despite their success in benchmark evaluations, these methods are often impractical to deploy (a 70B parameter VLM inference at merely 8 tokens per second requires more than 160G of memory), and their monolithic network structure prohibits safety decomposition. To bridge this gap, we propose VLM-Embedded Reasoning for autonomous Driving (VERDI), a training-time framework that distills the reasoning process and commonsense knowledge of VLMs into the AD stack. VERDI augments modular differentiable end-to-end (e2e) AD models by aligning intermediate module outputs at the perception, prediction, and planning stages with text features explaining the driving reasoning process produced by VLMs. By encouraging alignment in latent space, VERDI enables the modular AD stack to internalize structured reasoning, without incurring the inference-time costs of large VLMs. 
We demonstrate the effectiveness of our method on the NuScenes dataset and find that VERDI outperforms existing e2e methods that do not embed reasoning by 10% in $\ell_{2}$ distance, while maintaining high inference speed.",0 "Despite advances in designing personas for Large Language Models (LLM), challenges remain in aligning them with human cognitive processes and representing diverse stakeholder perspectives. We introduce a Social Cognitive Theory (SCT) agent design framework for designing, evaluating, and implementing psychologically grounded LLMs with consistent behavior. Our framework operationalizes SCT through four personal factors (cognitive, motivational, biological, and affective) for designing, six quantifiable constructs for evaluating, and a graph database-backed architecture for implementing stakeholder personas. Experiments tested agents' responses to contradicting information of varying reliability. In the highly polarized renewable energy transition discourse, we design five diverse agents with distinct ideologies, roles, and stakes to examine stakeholder representation. The evaluation of these agents in contradictory scenarios occurs through comprehensive processes that implement the SCT. Results show consistent response patterns ($R^2$ range: $0.58-0.61$) and systematic temporal development of SCT construct effects. Principal component analysis identifies two dimensions explaining $73$% of variance, validating the theoretical structure. Our framework offers improved explainability and reproducibility compared to black-box approaches. This work contributes to ongoing efforts to improve diverse stakeholder representation while maintaining psychological consistency in LLM personas.",0 "Concepts such as objects, patterns, and shapes are how humans understand the world. Building on this intuition, concept-based explainability methods aim to study representations learned by deep neural networks in relation to human-understandable concepts. Here, Concept Activation Vectors (CAVs) are an important tool and can identify whether a model learned a concept or not. However, the computational cost and time requirements of existing CAV computation pose a significant challenge, particularly in large-scale, high-dimensional architectures. To address this limitation, we introduce FastCAV, a novel approach that accelerates the extraction of CAVs by up to 63.6x (on average 46.4x). We provide a theoretical foundation for our approach and give concrete assumptions under which it is equivalent to established SVM-based methods. Our empirical results demonstrate that CAVs calculated with FastCAV maintain similar performance while being more efficient and stable. In downstream applications, i.e., concept-based explanation methods, we show that FastCAV can act as a replacement leading to equivalent insights. Hence, our approach enables previously infeasible investigations of deep models, which we demonstrate by tracking the evolution of concepts during model training.",0 "Understanding sources of a model's uncertainty regarding its predictions is crucial for effective human-AI collaboration. Prior work proposes using numerical uncertainty or hedges (""I'm not sure, but ...""), which do not explain uncertainty that arises from conflicting evidence, leaving users unable to resolve disagreements or rely on the output. 
We introduce CLUE (Conflict-and-Agreement-aware Language-model Uncertainty Explanations), the first framework to generate natural language explanations of model uncertainty by (i) identifying relationships between spans of text that expose claim-evidence or inter-evidence conflicts and agreements that drive the model's predictive uncertainty in an unsupervised way, and (ii) generating explanations via prompting and attention steering that verbalize these critical interactions. Across three language models and two fact-checking datasets, we show that CLUE produces explanations that are more faithful to the model's uncertainty and more consistent with fact-checking decisions than prompting for uncertainty explanations without span-interaction guidance. Human evaluators judge our explanations to be more helpful, more informative, less redundant, and more logically consistent with the input than this baseline. CLUE requires no fine-tuning or architectural changes, making it plug-and-play for any white-box language model. By explicitly linking uncertainty to evidence conflicts, it offers practical support for fact-checking and generalises readily to other tasks that require reasoning over complex information.",0 "Prior studies have shown that distinguishing text generated by large language models (LLMs) from human-written text is highly challenging, and often no better than random guessing. To verify the generalizability of this finding across languages and domains, we perform an extensive case study to identify the upper bound of human detection accuracy. Across 16 datasets covering 9 languages and 9 domains, 19 annotators achieved an average detection accuracy of 87.6\%, thus challenging previous conclusions. We find that major gaps between human and machine text lie in concreteness, cultural nuances, and diversity. Explicitly explaining these distinctions in the prompt can partially bridge the gap in over 50\% of the cases. However, we also find that humans do not always prefer human-written text, particularly when they cannot clearly identify its source.",2 "Autonomous multi-agent systems (MAS) are useful for automating complex tasks but raise trust concerns due to risks like miscoordination and goal misalignment. Explainability is vital for trust calibration, but explainable reinforcement learning for MAS faces challenges in state/action space complexity, stakeholder needs, and evaluation. Using the counterfactual theory of causation and LLMs' summarisation capabilities, we propose Agentic eXplanations via Interrogative Simulation (AXIS). AXIS generates intelligible causal explanations for pre-trained multi-agent policies by having an LLM interrogate an environment simulator using queries like 'whatif' and 'remove' to observe and synthesise counterfactual information over multiple rounds. We evaluate AXIS on autonomous driving across 10 scenarios for 5 LLMs with a novel evaluation methodology combining subjective preference, correctness, and goal/action prediction metrics, and an external LLM as evaluator. Compared to baselines, AXIS improves perceived explanation correctness by at least 7.7% across all models and goal prediction accuracy by 23% for 4 models, with improved or comparable action prediction accuracy, achieving the highest scores overall.",0 "Convolutional neural networks (CNNs) are widely used for high-stakes applications like medicine, often surpassing human performance. 
However, most explanation methods rely on post-hoc attribution, approximating the decision-making process of already trained black-box models. These methods are often sensitive, unreliable, and fail to reflect true model reasoning, limiting their trustworthiness in critical applications. In this work, we introduce SoftCAM, a straightforward yet effective approach that makes standard CNN architectures inherently interpretable. By removing the global average pooling layer and replacing the fully connected classification layer with a convolution-based class evidence layer, SoftCAM preserves spatial information and produces explicit class activation maps that form the basis of the model's predictions. Evaluated on three medical datasets, SoftCAM maintains classification performance while significantly improving both the qualitative and quantitative explanation compared to existing post-hoc methods. Our results demonstrate that CNNs can be inherently interpretable without compromising performance, advancing the development of self-explainable deep learning for high-stakes decision-making.",0 "Deep learning models have shown promise in lung pathology detection from chest X-rays, but widespread clinical adoption remains limited due to opaque model decision-making. In prior work, we introduced ClinicXAI, a human-centric, expert-guided concept bottleneck model (CBM) designed for interpretable lung cancer diagnosis. We now extend that approach and present XpertXAI, a generalizable expert-driven model that preserves human-interpretable clinical concepts while scaling to detect multiple lung pathologies. Using a high-performing InceptionV3-based classifier and a public dataset of chest X-rays with radiology reports, we compare XpertXAI against leading post-hoc explainability methods and an unsupervised CBM, XCBs. We assess explanations through comparison with expert radiologist annotations and medical ground truth. Although XpertXAI is trained for multiple pathologies, our expert validation focuses on lung cancer. We find that existing techniques frequently fail to produce clinically meaningful explanations, omitting key diagnostic features and disagreeing with radiologist judgments. XpertXAI not only outperforms these baselines in predictive accuracy but also delivers concept-level explanations that better align with expert reasoning. While our focus remains on explainability in lung cancer detection, this work illustrates how human-centric model design can be effectively extended to broader diagnostic contexts - offering a scalable path toward clinically meaningful explainable AI in medical diagnostics.",0 "Large-scale Vision Language Models (LVLMs) are increasingly being applied to a wide range of real-world multimodal applications, involving complex visual and linguistic reasoning. As these models become more integrated into practical use, they are expected to handle complex aspects of human interaction. Among these, color perception is a fundamental yet highly variable aspect of visual understanding. It differs across individuals due to biological factors such as Color Vision Deficiencies (CVDs), as well as differences in culture and language. Despite its importance, perceptual diversity has received limited attention. In our study, we evaluate LVLMs' ability to account for individual level perceptual variation using the Ishihara Test, a widely used method for detecting CVDs. 
Our results show that LVLMs can explain CVDs in natural language, but they cannot simulate how people with CVDs perceive color in image based tasks. These findings highlight the need for multimodal systems that can account for color perceptual diversity and support broader discussions on perceptual inclusiveness and fairness in multimodal AI.",0 "Large language models (LLMs) like GPT-4 show potential for scaling motivational interviewing (MI) in addiction care, but require systematic evaluation of therapeutic capabilities. We present a computational framework assessing user-perceived quality (UPQ) through expected and unexpected MI behaviors. Analyzing human therapist and GPT-4 MI sessions via human-AI collaboration, we developed predictive models integrating deep learning and explainable AI to identify 17 MI-consistent (MICO) and MI-inconsistent (MIIN) behavioral metrics. A customized chain-of-thought prompt improved GPT-4's MI performance, reducing inappropriate advice while enhancing reflections and empathy. Although GPT-4 remained marginally inferior to therapists overall, it demonstrated superior advice management capabilities. The model achieved measurable quality improvements through prompt engineering, yet showed limitations in addressing complex emotional nuances. This framework establishes a pathway for optimizing LLM-based therapeutic tools through targeted behavioral metric analysis and human-AI co-evaluation. Findings highlight both the scalability potential and current constraints of LLMs in clinical communication applications.",0 "A major goal of neuroscience is to understand brain computations during visual processing in naturalistic settings. A dominant approach is to use image-computable deep neural networks trained with different task objectives as a basis for linear encoding models. However, in addition to requiring tuning a large number of parameters, the linear encoding approach ignores the structure of the feature maps both in the brain and the models. Recently proposed alternatives have focused on decomposing the linear mapping to spatial and feature components but focus on finding static receptive fields for units that are applicable only in early visual areas. In this work, we employ the attention mechanism used in the transformer architecture to study how retinotopic visual features can be dynamically routed to category-selective areas in high-level visual processing. We show that this computational motif is significantly more powerful than alternative methods in predicting brain activity during natural scene viewing, across different feature basis models and modalities. We also show that this approach is inherently more interpretable, without the need to create importance maps, by interpreting the attention routing signal for different high-level categorical areas. Our approach proposes a mechanistic model of how visual information from retinotopic maps can be routed based on the relevance of the input content to different category-selective regions.",0 "Deep learning's preponderance across scientific domains has reshaped high-stakes decision-making, making it essential to follow rigorous operational frameworks that include both Right-to-Privacy (RTP) and Right-to-Explanation (RTE). This paper examines the complexities of combining these two requirements. For RTP, we focus on `Differential privacy` (DP), which is considered the current gold standard for privacy-preserving machine learning due to its strong quantitative guarantee of privacy. 
For RTE, we focus on post-hoc explainers: they are the go-to option for model auditing as they operate independently of model training. We formally investigate DP models and various commonly-used post-hoc explainers: how to evaluate these explainers subject to RTP, and analyze the intrinsic interactions between DP models and these explainers. Furthermore, our work sheds light on how RTP and RTE can be effectively combined in high-stakes applications. Our study concludes by outlining an industrial software pipeline, with the example of a widely used use case, that respects both RTP and RTE requirements.",0 "During job recruitment, traditional applicant selection methods often lack transparency. Candidates are rarely given sufficient justifications for recruiting decisions, whether they are made manually by human recruiters or through the use of black-box Applicant Tracking Systems (ATS). To address this problem, our work introduces a multi-agent AI system that uses Large Language Models (LLMs) to guide job seekers during the recruitment process. Using an iterative user-centric design approach, we first conducted a two-phased exploratory study with four active job seekers to inform the design and development of the system. Subsequently, we conducted an in-depth, qualitative user study with 20 active job seekers through individual one-to-one interviews to evaluate the developed prototype. The results of our evaluation demonstrate that participants perceived our multi-agent recruitment system as significantly more actionable, trustworthy, and fair compared to traditional methods. Our study further helped us uncover in-depth insights into factors contributing to these perceived user experiences. Drawing from these insights, we offer broader design implications for building user-aligned, multi-agent explainable AI systems across diverse domains.",2 "Achieving full automation in self-driving vehicles remains a challenge, especially in dynamic urban environments where navigation requires real-time adaptability. Existing systems struggle to handle navigation plans when faced with unpredictable changes in road layouts, spontaneous detours, or missing map data, due to their heavy reliance on predefined cartographic information. In this work, we explore the use of Large Language Models (LLMs) to generate Answer Set Programming (ASP) rules by translating informal navigation instructions into structured, logic-based reasoning. ASP provides non-monotonic reasoning, allowing autonomous vehicles to adapt to evolving scenarios without relying on predefined maps. We present an experimental evaluation in which LLMs generate ASP constraints that encode real-world urban driving logic into a formal knowledge representation. By automating the translation of informal navigation instructions into logical rules, our method improves adaptability and explainability in autonomous navigation. Results show that LLM-driven ASP rule generation supports semantic-based decision-making, offering an explainable framework for dynamic navigation planning that aligns closely with how humans communicate navigational intent.",0 "During sudden disaster events, accurately predicting public panic sentiment on social media is crucial for proactive governance and crisis management. Current efforts on this problem face three main challenges: lack of finely annotated data hinders emotion prediction studies, unmodeled risk perception causes prediction inaccuracies, and insufficient interpretability of panic formation mechanisms. 
We address these issues by proposing a Psychology-driven generative Agent framework (PsychoAgent) for explainable panic prediction based on emotion arousal theory. Specifically, we first construct a fine-grained open panic emotion dataset (namely COPE) via collaboration between humans and large language models (LLMs) to mitigate semantic bias. Then, we develop a framework integrating cross-domain heterogeneous data grounded in psychological mechanisms to model risk perception and cognitive differences in emotion generation. To enhance interpretability, we design an LLM-based role-playing agent that simulates individual psychological chains through carefully designed prompts. Experimental results on our annotated dataset show that PsychoAgent improves panic emotion prediction performance by 12.6% to 21.7% compared to baseline models. Furthermore, the explainability and generalization of our approach are validated. Crucially, this represents a paradigm shift from opaque ""data-driven fitting"" to transparent ""role-based simulation with mechanistic interpretation"" for panic emotion prediction during emergencies. Our implementation is publicly available at: https://anonymous.4open.science/r/PsychoAgent-19DD.",0 "Conceptual combination is a cognitive process that merges basic concepts, enabling the creation of complex expressions. During this process, the properties of combination (e.g., the whiteness of a peeled apple) can be inherited from basic concepts, newly emerge, or be canceled. However, previous studies have evaluated a limited set of properties and have not examined the generative process. To address this gap, we introduce the Conceptual Combination with Property Type dataset (CCPT), which consists of 12.3K annotated triplets of noun phrases, properties, and property types. Using CCPT, we establish three types of tasks to evaluate LLMs for conceptual combination thoroughly. Our key findings are threefold: (1) Our automatic metric grading property emergence and cancellation closely corresponds with human judgments. (2) LLMs, including OpenAI's o1, struggle to generate noun phrases which possess given emergent properties. (3) Our proposed method, inspired by a cognitive psychology model that explains how relationships between concepts are formed, improves performance on all generative tasks. The dataset and experimental code are available at https://github.com/seokwon99/CCPT.git.",0 "Explainable AI (XAI) is a promising solution to ensure compliance with the EU AI Act, the first multi-national regulation for AI. XAI aims to enhance transparency and human oversight of AI systems, particularly ``black-box models'', which are criticized as incomprehensible. However, the discourse around the main stakeholders in the AI Act and XAI appears disconnected. While XAI prioritizes the end user's needs as the primary goal, the AI Act focuses on the obligations of the provider and deployer of the AI system. We aim to bridge this divide and provide guidance on how these two worlds are related. By fostering an interdisciplinary discussion in a cross-functional team with XAI, AI Act, legal, and requirements engineering experts, we walk through the steps necessary to analyze an AI-based clinical decision support system to clarify the end-user needs and assess AI Act applicability. By analyzing our justified understanding using an AI system under development as a case, we show that XAI techniques can fill a gap between stakeholder needs and the requirements of the AI Act. 
We look at the similarities and contrasts between the legal requirements and the needs of stakeholders. In doing so, we encourage researchers and practitioners from the XAI community to reflect on their role towards the AI Act by achieving a mutual understanding of the implications of XAI and the AI Act within different disciplines.",0 "The success of Large Language Models (LLMs) in human-AI collaborative decision-making hinges on their ability to provide trustworthy, gradual, and tailored explanations. Solving complex puzzles, such as Sudoku, offers a canonical example of this collaboration, where clear and customized explanations often hold greater importance than the final solution. In this study, we evaluate the performance of five LLMs in solving and explaining 6x6 Sudoku puzzles. While one LLM demonstrates limited success in solving puzzles, none can explain the solution process in a manner that reflects strategic reasoning or intuitive problem-solving. These findings underscore significant challenges that must be addressed before LLMs can become effective partners in human-AI collaborative decision-making.",0 "The explanation of AI results and how they are received by users is an increasingly active research field. However, there is a surprising lack of knowledge about how social factors such as emotions affect the process of explanation by a decision support system (DSS). While previous research has shown effects of emotions on DSS-supported decision-making, it remains unknown to what extent emotions affect cognitive processing during an explanation. In this study, we therefore investigated the influence of prior emotions and task-related arousal on the retention and understanding of explained feature relevance. To investigate the influence of prior emotions, we induced happiness and fear prior to the decision support interaction. Before emotion induction, user characteristics for assessing risk type were collected via a questionnaire. To identify emotional reactions to the explanations of the relevance of different features, we observed heart rate variability (HRV), facial expressions, and self-reported emotions of the explainee while they observed and listened to the explanation, and assessed their retention of the features as well as their influence on the outcome of the decision task. Results indicate that (1) task-unrelated prior emotions did not affect the retention but may affect the understanding of the relevance of certain features in the sense of an emotion-induced confirmation bias, (2) certain features related to personal attitudes yielded arousal in individual participants, and (3) this arousal affected the understanding of these variables.",2 "Leveraging the power of multimodal large language models (LLMs) offers a promising approach to enhancing the accuracy and interpretability of morphing attack detection (MAD), especially in real-world biometric applications. This work introduces the use of LLMs for differential morphing attack detection (D-MAD). To the best of our knowledge, this is the first study to employ multimodal LLMs for D-MAD using real biometric data. To effectively utilize these models, we design Chain-of-Thought (CoT)-based prompts to reduce failure-to-answer rates and enhance the reasoning behind decisions. 
Our contributions include: (1) the first application of multimodal LLMs for D-MAD using real data subjects, (2) CoT-based prompt engineering to improve response reliability and explainability, (3) comprehensive qualitative and quantitative benchmarking of LLM performance using data from 54 individuals captured in passport enrollment scenarios, and (4) comparative analysis of two multimodal LLMs, ChatGPT-4o and Gemini, providing insights into their morphing attack detection accuracy and decision transparency. Experimental results show that ChatGPT-4o outperforms Gemini in detection accuracy, especially against GAN-based morphs, though both models struggle under challenging conditions. While Gemini offers more consistent explanations, ChatGPT-4o is more resilient but prone to a higher failure-to-answer rate.",0 "Small (400 to 4000 km) and short-lived (10 to 200 s) extreme ultraviolet (EUV) brightenings, detected by the High Resolution Imager EUV (HRIEUV), have been found to be ubiquitous in the Quiet Sun (QS). Their contribution to coronal heating as well as their physical origin are currently being investigated. We wish to determine whether models of short loops and impulsive heating are compatible with the results from observations. In particular, we used two models of loops with distinct thermal properties: cool (T below 1E5 K) and hot loops (T above 1E5 K). We simulated the evolution of impulsively heated short loops, using the 1D hydrodynamics (HD) code HYDRAD. We computed the synthetic light curves of HRIEUV, four EUV channels of the Atmospheric Imaging Assembly (AIA), and five emission lines measured by the SPectral Imaging of the Coronal Environment (SPICE). We then compared the results from the synthetic light curves with observations. The aim was to reproduce the short delays observed between the intensity peaks of the light curves. Cool loops subjected to impulsive heating are good candidates to explain the physical origin of the EUV brightenings. On the other hand, hot loops are not consistent with observations, except when they are subjected to especially strong impulsive heating.",0 "Can consumers form especially deep emotional bonds with AI and be vested in AI identities over time? We leverage a natural app-update event at Replika AI, a popular US-based AI companion, to shed light on these questions. We find that the app's removal of its erotic role play (ERP) feature, which prevented previously possible intimate interactions between consumers and chatbots, triggered perceptions in customers that their AI companion's identity had been discontinued. This in turn predicted negative consumer welfare and marketing outcomes related to loss, including mourning the loss, and devaluing the ""new"" AI relative to the ""original"". Experimental evidence confirms these findings. Further experiments find that AI companion users feel closer to their AI companion than even their best human friend, and mourn a loss of their AI companion more than a loss of various other inanimate products. In short, consumers are forming human-level relationships with AI companions; disruptions to these relationships trigger real patterns of mourning as well as devaluation of the offering; and the degree of mourning and devaluation is explained by perceived discontinuity in the AI's identity. 
Our results illustrate that relationships with AI are truly personal, creating unique benefits and risks for consumers and firms alike.",0 "Machine learning models achieve high precision, but their decision-making processes often lack explainability. Furthermore, as model complexity increases, explainability typically decreases. Existing efforts to improve explainability primarily involve developing new eXplainable artificial intelligence (XAI) techniques or incorporating explainability constraints during training. While these approaches yield specific improvements, their applicability remains limited. In this work, we propose the Vision Transformer with artificial Astrocytes (ViTA). This training-free approach is inspired by neuroscience and enhances the reasoning of a pretrained deep neural network to generate more human-aligned explanations. We evaluated our approach employing two well-known XAI techniques, Grad-CAM and Grad-CAM++, and compared it to a standard Vision Transformer (ViT). Using the ClickMe dataset, we quantified the similarity between the heatmaps produced by the XAI techniques and a (human-aligned) ground truth. Our results consistently demonstrate that incorporating artificial astrocytes enhances the alignment of model explanations with human perception, leading to statistically significant improvements across all XAI techniques and metrics utilized.",0 "Large language models (LLMs) have shown great potential in flagging harmful content in online communities. Yet, existing approaches for moderation require a separate model for every community and are opaque in their decision-making, limiting real-world adoption. We introduce Mixture of Moderation Experts (MoMoE), a modular, cross-community framework that adds post-hoc explanations to scalable content moderation. MoMoE orchestrates four operators -- Allocate, Predict, Aggregate, Explain -- and is instantiated as seven community-specialized experts (MoMoE-Community) and five norm-violation experts (MoMoE-NormVio). On 30 unseen subreddits, the best variants obtain Micro-F1 scores of 0.72 and 0.67, respectively, matching or surpassing strong fine-tuned baselines while consistently producing concise and reliable explanations. Although community-specialized experts deliver the highest peak accuracy, norm-violation experts provide steadier performance across domains. These findings show that MoMoE yields scalable, transparent moderation without needing per-community fine-tuning. More broadly, they suggest that lightweight, explainable expert ensembles can guide future NLP and HCI research on trustworthy human-AI governance of online communities.",0 "Autonomous systems that rely on Machine Learning (ML) utilize online fault tolerance mechanisms, such as runtime monitors, to detect ML prediction errors and maintain safety during operation. However, the lack of human-interpretable explanations for these errors can hinder the creation of strong assurances about the system's safety and reliability. This paper introduces a novel fuzzy-based monitor tailored for ML perception components. It provides human-interpretable explanations about how different operating conditions affect the reliability of perception components and also functions as a runtime safety monitor. We evaluated our proposed monitor using naturalistic driving datasets as part of an automated driving case study. The interpretability of the monitor was evaluated and we identified a set of operating conditions in which the perception component performs reliably. 
Additionally, we created an assurance case that links unit-level evidence of \textit{correct} ML operation to system-level \textit{safety}. The benchmarking demonstrated that our monitor achieved a greater increase in safety (i.e., absence of hazardous situations) while maintaining availability (i.e., ability to perform the mission) compared to state-of-the-art runtime ML monitors in the evaluated dataset.",0 "Mental health risk is a critical global public health challenge, necessitating innovative and reliable assessment methods. With the development of large language models (LLMs), these models stand out as a promising tool for explainable mental health care applications. Nevertheless, existing approaches predominantly rely on subjective textual mental records, which can be distorted by inherent mental uncertainties, leading to inconsistent and unreliable predictions. To address these limitations, this paper introduces ProMind-LLM. We investigate an innovative approach integrating objective behavior data as complementary information alongside subjective mental records for robust mental health risk assessment. Specifically, ProMind-LLM incorporates a comprehensive pipeline that includes domain-specific pretraining to tailor the LLM for mental health contexts, a self-refine mechanism to optimize the processing of numerical behavioral data, and causal chain-of-thought reasoning to enhance the reliability and interpretability of its predictions. Evaluations on two real-world datasets, PMData and Globem, demonstrate the effectiveness of our proposed methods, achieving substantial improvements over general LLMs. We anticipate that ProMind-LLM will pave the way for more dependable, interpretable, and scalable mental health care solutions.",0 "Quantization methods are widely used to accelerate inference and streamline the deployment of large language models (LLMs). While prior research has extensively investigated the degradation of various LLM capabilities due to quantization, its effects on model explainability and interpretability, which are crucial for understanding decision-making processes, remain unexplored. To address this gap, we conduct comprehensive experiments using three common quantization techniques at distinct bit widths, in conjunction with two explainability methods, counterfactual examples and natural language explanations, as well as two interpretability approaches, knowledge memorization analysis and latent multi-hop reasoning analysis. We complement our analysis with a thorough user study, evaluating selected explainability methods. Our findings reveal that, depending on the configuration, quantization can significantly impact model explainability and interpretability. Notably, the direction of this effect is not consistent, as it strongly depends on (1) the quantization method, (2) the explainability or interpretability approach, and (3) the evaluation protocol. In some settings, human evaluation shows that quantization degrades explainability, while in others, it even leads to improvements. Our work serves as a cautionary tale, demonstrating that quantization can unpredictably affect model transparency. This insight has important implications for deploying LLMs in applications where transparency is a critical requirement.",2 "While reasoning capabilities typically emerge in large language models (LLMs) with tens of billions of parameters, recent research focuses on improving smaller open-source models through knowledge distillation (KD) from commercial LLMs. 
However, many of these studies rely solely on responses from a single LLM as the gold rationale, unlike the natural human learning process, which involves understanding both the correct answers and the reasons behind mistakes. In this paper, we introduce a novel Fault-Aware DistIllation via Peer-Review (FAIR) approach: 1) instead of merely obtaining rationales from teachers, our method asks teachers to identify and explain the student's mistakes, providing customized instruction learning data; 2) we design a simulated peer-review process between teacher LLMs, and select only the generated rationales above the acceptance threshold, which reduces the chance of teachers guessing correctly with a flawed rationale, improving instructional data quality. Comprehensive experiments and analysis on mathematical, commonsense, and logical reasoning tasks demonstrate the effectiveness of our method. Our code is available at https://github.com/zhuochunli/Learn-from-Committee.",0 "Explainable Artificial Intelligence (XAI) is critical for attaining trust in the operation of AI systems. A key question of an AI system is ``why was this decision made this way''. Formal approaches to XAI use a formal model of the AI system to identify abductive explanations. While abductive explanations may be applicable to a large number of inputs sharing the same concrete values, more general explanations may be preferred for numeric inputs. So-called inflated abductive explanations give intervals for each feature, ensuring that any input whose values fall within these intervals is still guaranteed to make the same prediction. Inflated explanations cover a larger portion of the input space, and hence are deemed more general explanations. But there can be many (inflated) abductive explanations for an instance. Which is the best? In this paper, we show how to find a most general abductive explanation for an AI decision. This explanation covers as much of the input space as possible, while still being a correct formal explanation of the model's behaviour. Given that we only want to give a human one explanation for a decision, the most general explanation gives us the explanation with the broadest applicability, and hence the one most likely to seem sensible. (The paper has been accepted at the IJCAI2025 conference.)",0 "Reward models are widely used as proxies for human preferences when aligning or evaluating LLMs. However, reward models are black boxes, and it is often unclear what, exactly, they are actually rewarding. In this paper we develop the Rewrite-based Attribute Treatment Estimator (RATE) as an effective method for measuring the sensitivity of a reward model to high-level attributes of responses, such as sentiment, helpfulness, or complexity. Importantly, RATE measures the causal effect of an attribute on the reward. RATE uses LLMs to rewrite responses to produce imperfect counterfactual examples that can be used to measure causal effects. A key challenge is that these rewrites are imperfect in a manner that can induce substantial bias in the estimated sensitivity of the reward model to the attribute. The core idea of RATE is to adjust for this imperfect-rewrite effect by rewriting twice. We establish the validity of the RATE procedure and show empirically that it is an effective estimator.",2 "The exponential increase of low-Earth orbit (LEO) satellites in the past 5 years has brought into intense focus the need for reliable monitoring and reentry prediction to safeguard from space collisions and ground debris impacts.
However, LEO satellites fly within the upper atmosphere region that exerts significant drag forces on their orbits, reducing their lifetimes and increasing collision risks during dynamic events such as geomagnetic storms. These effects become even more severe during extreme events. In this work, we use two-line element (TLE) satellite tracking data to investigate geomagnetic activity effects on the reentries of 523 Starlink satellites from 2020 to 2024. This period coincides with the rising phase of solar cycle 25, which has shown itself to be more intense than the previous solar cycle. We derive satellite altitudes and velocities from TLE files and perform a superposed epoch analysis, the first with hundreds of similar satellites. Even with TLE data of limited accuracy, our results indisputably show that satellites reenter faster with higher geomagnetic activity. This is explained by the faster orbital decay rates (in km/day) of the satellites, caused by increased drag forces. We also find that prediction errors, defined as the difference between the epochs of actual reentries and predicted reentries at reference altitudes, increase with geomagnetic activity. As a result, we clearly show that the intense solar activity of the current solar cycle has already had significant impacts on Starlink reentries. This is a very exciting time in satellite orbital drag research, since the number of satellites in LEO and solar activity are the highest ever observed in human history.",0 "Roadway safety and mobility remain critical challenges for modern transportation systems, demanding innovative analytical frameworks capable of addressing complex, dynamic, and heterogeneous environments. While traditional engineering methods have made progress, the complexity and dynamism of real-world traffic necessitate more advanced analytical frameworks. Large Language Models (LLMs), with their unprecedented capabilities in natural language understanding, knowledge integration, and reasoning, represent a promising paradigm shift. This paper comprehensively reviews the application and customization of LLMs for enhancing roadway safety and mobility. A key focus is how LLMs are adapted -- via architectural, training, prompting, and multimodal strategies -- to bridge the ""modality gap"" with transportation's unique spatio-temporal and physical data. The review systematically analyzes diverse LLM applications in mobility (e.g., traffic flow prediction, signal control) and safety (e.g., crash analysis, driver behavior assessment). Enabling technologies such as V2X integration, domain-specific foundation models, explainability frameworks, and edge computing are also examined. Despite significant potential, challenges persist regarding inherent LLM limitations (hallucinations, reasoning deficits), data governance (privacy, bias), deployment complexities (sim-to-real, latency), and rigorous safety assurance. Promising future research directions are highlighted, including advanced multimodal fusion, enhanced spatio-temporal reasoning, human-AI collaboration, continuous learning, and the development of efficient, verifiable systems. This review provides a structured roadmap of current capabilities, limitations, and opportunities, underscoring LLMs' transformative potential while emphasizing the need for responsible innovation to realize safer, more intelligent transportation systems.",0 "Cognitive decline often surfaces in language years before diagnosis.
It is frequently non-experts, such as those closest to the patient, who first sense a change and raise concern. As LLMs become integrated into daily communication and used over prolonged periods, it may even be an LLM that notices something is off. But what exactly do they notice--and should be noticing--when making that judgment? This paper investigates how dementia is perceived through language by non-experts. We presented transcribed picture descriptions to non-expert humans and LLMs, asking them to intuitively judge whether each text was produced by someone healthy or with dementia. We introduce an explainable method that uses LLMs to extract high-level, expert-guided features representing these picture descriptions, and use logistic regression to model human and LLM perceptions and compare them with clinical diagnoses. Our analysis reveals that human perception of dementia is inconsistent and relies on a narrow, and sometimes misleading, set of cues. LLMs, by contrast, draw on a richer, more nuanced feature set that aligns more closely with clinical patterns. Still, both groups show a tendency toward false negatives, frequently overlooking dementia cases. Through our interpretable framework and the insights it provides, we hope to help non-experts better recognize the linguistic signs that matter.",2 "Providing personalized, detailed feedback at scale in large undergraduate STEM courses remains a persistent challenge. We present an empirically evaluated practice exam system that integrates AI-generated feedback with targeted textbook references, deployed in a large introductory biology course. Our system encourages metacognitive behavior by asking students to explain their answers and declare their confidence. It uses OpenAI's GPT-4o to generate personalized feedback based on this information, while directing students to relevant textbook sections. Through interaction logs from consenting participants across three midterms (541, 342, and 413 students respectively), totaling 28,313 question-student interactions across 146 learning objectives, along with 279 surveys and 23 interviews, we examined the system's impact on learning outcomes and engagement. Across all midterms, feedback types showed no statistically significant performance differences, though some trends suggested potential benefits. The most substantial impact came from the required confidence ratings and explanations, which students reported transferring to their actual exam strategies. About 40 percent of students engaged with textbook references when prompted by feedback -- far higher than traditional reading rates. Survey data revealed high satisfaction (mean rating 4.1 of 5), with 82.1 percent reporting increased confidence on practiced midterm topics, and 73.4 percent indicating they could recall and apply specific concepts. Our findings suggest that embedding structured reflection requirements may be more impactful than sophisticated feedback mechanisms.",2 "We address the problem of identifying a system subject to additive faults, while simultaneously reconstructing the fault signal via subspace methods. We do not require nominal data for the identification, nor do we impose any assumption on the class of faults, e.g., sensor or actuator faults. We show that, under mild assumptions on the fault signal, standard PI-MOESP can recover the system matrices associated with the input-output subsystem.
Then we introduce the concept of output behavior equivalence, which characterizes systems with the same output behavior set, and present a method to establish this equivalence from system matrices. Finally, we show how to estimate from data the complete set of fault matrices for which there exists a fault signal of minimal dimension that explains the data.",0 "With the rapid improvement of machine learning (ML) models, cognitive scientists are increasingly asking about their alignment with how humans think. Here, we ask this question for computer vision models and human sensitivity to geometric and topological (GT) concepts. Under the core knowledge account, these concepts are innate and supported by dedicated neural circuitry. In this work, we investigate an alternative explanation, that GT concepts are learned ``for free'' through everyday interaction with the environment. We do so using computer vision models, which are trained on large image datasets. We build on prior studies to investigate the overall performance and human alignment of three classes of models -- convolutional neural networks (CNNs), transformer-based models, and vision-language models -- on an odd-one-out task testing 43 GT concepts spanning seven classes. Transformer-based models achieve the highest overall accuracy, surpassing that of young children. They also show strong alignment with children's performance, finding the same classes of concepts easy vs. difficult. By contrast, vision-language models underperform their vision-only counterparts and deviate further from human profiles, indicating that na\""ive multimodality might compromise abstract geometric sensitivity. These findings support the use of computer vision models to evaluate the sufficiency of the learning account for explaining human sensitivity to GT concepts, while also suggesting that integrating linguistic and visual representations might have unpredicted deleterious consequences.",0 "As large language models (LLMs) become widely deployed, concerns about their safety and alignment grow. An approach to steer LLM behavior, such as mitigating biases or defending against jailbreaks, is to identify which parts of a prompt influence specific aspects of the model's output. Token-level attribution methods offer a promising solution, but still struggle in text generation, explaining the presence of each token in the output separately, rather than the underlying semantics of the entire LLM response. We introduce ConceptX, a model-agnostic, concept-level explainability method that identifies the concepts, i.e., semantically rich tokens in the prompt, and assigns them importance based on the outputs' semantic similarity. Unlike current token-level methods, ConceptX also preserves context integrity through in-place token replacements and supports flexible explanation goals, e.g., gender bias. ConceptX enables both auditing, by uncovering sources of bias, and steering, by modifying prompts to shift the sentiment or reduce the harmfulness of LLM responses, without requiring retraining. Across three LLMs, ConceptX outperforms token-level methods like TokenSHAP in both faithfulness and human alignment. Steering tasks boost sentiment shift by 0.252 versus 0.131 for random edits and lower attack success rates from 0.463 to 0.242, outperforming attribution and paraphrasing baselines.
While prompt engineering and self-explaining methods sometimes yield safer responses, ConceptX offers a transparent and faithful alternative for improving LLM safety and alignment, demonstrating the practical value of attribution-based explainability in guiding LLM behavior.",0 "Media bias detection is a critical task in ensuring fair and balanced information dissemination, yet it remains challenging due to the subjectivity of bias and the scarcity of high-quality annotated data. In this work, we perform sentence-level bias classification by fine-tuning a RoBERTa-based model on the expert-annotated BABE dataset. Using McNemar's test and the 5x2 cross-validation paired t-test, we show statistically significant improvements in performance when comparing our model to a domain-adaptively pre-trained DA-RoBERTa baseline. Furthermore, attention-based analysis shows that our model avoids common pitfalls like oversensitivity to politically charged terms and instead attends more meaningfully to contextually relevant tokens. For a comprehensive examination of media bias, we present a pipeline that combines our model with an already-existing bias-type classifier. Our method exhibits good generalization and interpretability, despite being constrained to sentence-level analysis and a limited dataset size, owing to the lack of larger and more advanced bias corpora. We discuss context-aware modeling, bias neutralization, and advanced bias type classification as potential future directions. Our findings contribute to building more robust, explainable, and socially responsible NLP systems for media bias detection.",0 "This is the Dagstuhl Perspectives Workshop 24452 manifesto on Reframing Technical Debt. The manifesto begins with a one-page summary of Values, Beliefs, and Principles. It then elaborates on each Value, Belief, and Principle to explain their rationale and clarify their meaning. Subsequently, the paper describes the current landscape of Technical Debt Management methods and tools and explains why the current practice is inadequate and where current research falls short. The current landscape is organized into five major topics: Technical Debt as Value-Creation, Tooling, Data Collection, the role of Architecture, and Socio-Technical Aspects. Finally, the paper outlines a roadmap to realize the stated principles, with concrete milestones to be addressed by researchers, software practitioners, and tool vendors. The manifesto is signed by the workshop participants.",2 "The composite geometry and spectral anisotropy of the solar wind turbulence are important topics in investigations of the solar wind. In this work, we use the magnetic field and plasma data from the Wind spacecraft measured from 1995 January to 2023 December, which covers more than two solar cycles, to systematically investigate these subjects in the context of solar-cycle variability. The so-called spectrum ratio test and spectrum anisotropy test are employed to determine the three-dimensional (3D) geometry of the solar wind turbulence. Both tests reveal that the solar wind turbulence is dominated by the two-dimensional (2D) component (~80% by turbulence energy). More interestingly, we find that the fraction of slab turbulence increases with the rising sunspot number, and the correlation coefficient between the slab fraction and the sunspot number is 0.61 (ratio test result) or 0.65 (anisotropy test result).
This phenomenon suggests that the increasing solar activity (signified by sunspot number) causes an increasing slab component in the solar wind turbulence. The relationship between spectral anisotropy and solar activity is discussed and explained. The enhancement of the slab fraction is associated with the intensified interplanetary magnetic field magnitude and the increased Alfven speed during the rise phases of the solar cycles. Our findings will be helpful for achieving a better understanding of the 3D composite geometry and spectral anisotropy of the solar wind turbulence, and especially of their solar-cycle variability.",0 "Temporal Knowledge Graphs (TKGs), which utilize quadruples in the form of (subject, predicate, object, timestamp) to describe temporal facts, have attracted extensive attention. N-tuple TKGs (N-TKGs) further extend traditional TKGs by utilizing n-tuples to incorporate auxiliary elements alongside core elements (i.e., subject, predicate, and object) of facts, so as to represent them in a more fine-grained manner. Reasoning over N-TKGs aims to predict potential future facts based on historical ones. However, existing N-TKG reasoning methods often lack explainability due to their black-box nature. Therefore, we introduce a new Reinforcement Learning-based method, named MT-Path, which leverages the temporal information to traverse historical n-tuples and construct a temporal reasoning path. Specifically, in order to integrate the information encapsulated within n-tuples, i.e., the entity-irrelevant information within the predicate, the information about core elements, and the complete information about the entire n-tuples, MT-Path utilizes a mixture policy-driven action selector, which is based on three low-level policies, namely, the predicate-focused policy, the core-element-focused policy, and the whole-fact-focused policy. Further, MT-Path utilizes an auxiliary element-aware GCN to capture the rich semantic dependencies among facts, thereby enabling the agent to gain a deep understanding of each n-tuple. Experimental results demonstrate the effectiveness and the explainability of MT-Path.",0 "Decentralized autonomous organizations (DAOs) have transformed organizational structures by shifting from traditional hierarchical control to decentralized approaches, leveraging blockchain and cryptoeconomics. Despite managing significant funds and building global networks, DAOs face challenges like declining participation, increasing centralization, and an inability to adapt to changing environments, which stifle innovation. This paper explores DAOs as complex systems and applies complexity science to explain their inefficiencies. In particular, we discuss DAO challenges, their complex nature, and introduce the self-organization mechanisms of collective intelligence, digital democracy, and adaptation. By applying these mechanisms to refine DAO design and construction, a conceptual framework for assessing a DAO's viability is created. This contribution lays the foundation for future research at the intersection of complexity science, digital democracy and DAOs.",0 "Explainable AI (XAI) refers to techniques that provide human-understandable insights into the workings of AI models. Recently, the focus of XAI has been extended toward explaining Large Language Models (LLMs). This extension calls for a significant transformation in XAI methodologies for two reasons. First, many existing XAI methods cannot be directly applied to LLMs due to their complexity and advanced capabilities.
Second, as LLMs are increasingly deployed in diverse applications, the role of XAI shifts from merely opening the ``black box'' to actively enhancing the productivity and applicability of LLMs in real-world settings. Meanwhile, the conversation and generation abilities of LLMs can reciprocally enhance XAI. Therefore, in this paper, we introduce Usable XAI in the context of LLMs by analyzing (1) how XAI can explain and improve LLM-based AI systems and (2) how XAI techniques can be improved by using LLMs. We introduce 10 strategies, describing the key techniques for each and discussing their associated challenges. We also provide case studies to demonstrate how to obtain and leverage explanations. The code used in this paper can be found at: https://github.com/JacksonWuxs/UsableXAI_LLM.",0 "The impressive ability of large language models to generate natural text across various tasks has led to critical challenges in authorship authentication. Although numerous detection methods have been developed to differentiate between machine-generated texts (MGT) and human-generated texts (HGT), the explainability of these methods remains a significant gap. Traditional explainability techniques often fall short in capturing the complex word relationships that distinguish HGT from MGT. To address this limitation, we present LM$^2$otifs, a novel explainable framework for MGT detection. Inspired by probabilistic graphical models, we provide a theoretical rationale for its effectiveness. LM$^2$otifs utilizes eXplainable Graph Neural Networks to achieve both accurate detection and interpretability. The LM$^2$otifs pipeline operates in three key stages: first, it transforms text into graphs based on word co-occurrence to represent lexical dependencies; second, graph neural networks are used for prediction; and third, a post-hoc explainability method extracts interpretable motifs, offering multi-level explanations from individual words to sentence structures. Extensive experiments on multiple benchmark datasets demonstrate the comparable performance of LM$^2$otifs. The empirical evaluation of the extracted explainable motifs confirms their effectiveness in differentiating HGT and MGT. Furthermore, qualitative analysis reveals distinct and visible linguistic fingerprints characteristic of MGT.",0 "As large language models (LLMs) are increasingly deployed in sensitive domains such as healthcare, law, and education, the demand for transparent, interpretable, and accountable AI systems becomes more urgent. Explainable AI (XAI) acts as a crucial interface between the opaque reasoning of LLMs and the diverse stakeholders who rely on their outputs in high-risk decisions. This paper presents a comprehensive reflection and survey of XAI for LLMs, framed around three guiding questions: Why is explainability essential? What technical and ethical dimensions does it entail? And how can it fulfill its role in real-world deployment? We highlight four core dimensions central to explainability in LLMs -- faithfulness, truthfulness, plausibility, and contrastivity -- which together expose key design tensions and guide the development of explanation strategies that are both technically sound and contextually appropriate. The paper discusses how XAI can support epistemic clarity, regulatory compliance, and audience-specific intelligibility across stakeholder roles and decision settings.
We further examine how explainability is evaluated, alongside emerging developments in audience-sensitive XAI, mechanistic interpretability, causal reasoning, and adaptive explanation systems. Emphasizing the shift from surface-level transparency to governance-ready design, we identify critical challenges and future research directions for ensuring the responsible use of LLMs in complex societal contexts. We argue that explainability must evolve into a civic infrastructure fostering trust, enabling contestability, and aligning AI systems with institutional accountability and human-centered decision-making.",2 "Retrieval-Augmented Generation (RAG) systems show promise by coupling large language models with external knowledge, yet traditional RAG evaluation methods primarily report quantitative scores while offering limited actionable guidance for refining these complex pipelines. In this paper, we introduce RAGXplain, an evaluation framework that quantifies RAG performance and translates these assessments into clear insights that clarify the workings of its complex, multi-stage pipeline and offer actionable recommendations. Using LLM reasoning, RAGXplain converts raw scores into coherent narratives identifying performance gaps and suggesting targeted improvements. By providing transparent explanations for AI decision-making, our framework fosters user trust -- a key challenge in AI adoption. Our LLM-based metric assessments show strong alignment with human judgments, and experiments on public question-answering datasets confirm that applying RAGXplain's actionable recommendations measurably improves system performance. RAGXplain thus bridges quantitative evaluation and practical optimization, empowering users to understand, trust, and enhance their AI systems.",0 "Anti-phishing tools typically display generic warnings that offer users limited explanation of why a website is considered malicious, which can prevent end-users from developing the mental models needed to recognize phishing cues on their own. This becomes especially problematic when these tools inevitably fail -- particularly against evasive threats -- and users are found to be ill-equipped to identify and avoid them independently. To address these limitations, we present PhishXplain (PXP), a real-time explainable phishing warning system designed to augment existing detection mechanisms. PXP empowers users by clearly articulating why a site is flagged as malicious, highlighting suspicious elements using a memory-efficient implementation of LLaMA 3.2. It utilizes a structured two-step prompt architecture to identify phishing features, generate contextual explanations, and render annotated screenshots that visually reinforce the warning. Longitudinally implementing PhishXplain over a month on 7,091 live phishing websites, we found that it can generate warnings for 94% of the sites, with a correctness of 96%. We also evaluated PhishXplain through a user study with 150 participants split into two groups: one received conventional, generic warnings, while the other interacted with PXP's explainable alerts. Participants who received the explainable warnings not only demonstrated a significantly better understanding of phishing indicators but also achieved higher accuracy in identifying phishing threats, even without any warning. Moreover, they reported greater satisfaction and trust in the warnings themselves. These improvements were especially pronounced among users with lower initial levels of cybersecurity proficiency and awareness.
To encourage the adoption of this framework, we release PhishXplain as a browser extension.",2 "A central goal of cognitive modeling is to develop models that not only predict human behavior but also provide insight into the underlying cognitive mechanisms. While neural network models trained on large-scale behavioral data often achieve strong predictive performance, they typically fall short in offering interpretable explanations of the cognitive processes they capture. In this work, we explore the potential of pretrained large language models (LLMs) to serve as dual-purpose cognitive models--capable of both accurate prediction and interpretable explanation in natural language. Specifically, we employ reinforcement learning with outcome-based rewards to guide LLMs toward generating explicit reasoning traces for explaining human risky choices. Our findings demonstrate that this approach produces high-quality explanations alongside strong quantitative predictions of human decisions.",0 "Psychiatric disorders affect millions globally, yet their diagnosis faces significant challenges in clinical practice due to subjective assessments and accessibility concerns, leading to potential delays in treatment. To help address this issue, we present Heart2Mind, a human-centered contestable psychiatric disorder diagnosis system using wearable electrocardiogram (ECG) monitors. Our approach leverages cardiac biomarkers, particularly heart rate variability (HRV) and R-R intervals (RRI) time series, as objective indicators of autonomic dysfunction in psychiatric conditions. The system comprises three key components: (1) a Cardiac Monitoring Interface (CMI) for real-time data acquisition from Polar H9/H10 devices; (2) a Multi-Scale Temporal-Frequency Transformer (MSTFT) that processes RRI time series through integrated time-frequency domain analysis; (3) a Contestable Diagnosis Interface (CDI) combining Self-Adversarial Explanations (SAEs) with contestable Large Language Models (LLMs). Our MSTFT achieves 91.7% accuracy on the HRV-ACC dataset using leave-one-out cross-validation, outperforming state-of-the-art methods. SAEs successfully detect inconsistencies in model predictions by comparing attention-based and gradient-based explanations, while LLMs enable clinicians to validate correct predictions and contest erroneous ones. This work demonstrates the feasibility of combining wearable technology with Explainable Artificial Intelligence (XAI) and contestable LLMs to create a transparent, contestable system for psychiatric diagnosis that maintains clinical oversight while leveraging advanced AI capabilities. Our implementation is publicly available at: https://github.com/Analytics-Everywhere-Lab/heart2mind.",0 "Recent advances in Natural Language Processing (NLP) have led to the development of highly sophisticated language models for text generation. In parallel, neuroscience has increasingly employed these models to explore cognitive processes involved in language comprehension. Previous research has shown that models such as N-grams and LSTM networks can partially account for predictability effects in explaining eye movement behaviors, specifically Gaze Duration, during reading. In this study, we extend these findings by evaluating transformer-based models (GPT2, LLaMA-7B, and LLaMA2-7B) to further investigate this relationship. Our results indicate that these architectures outperform earlier models in explaining the variance in Gaze Durations recorded from Rioplatense Spanish readers.
However, similar to previous studies, these models still fail to account for the entirety of the variance captured by human predictability. These findings suggest that, despite their advancements, state-of-the-art language models continue to predict language in ways that differ from human readers.",0 "We often assume that robots which collaborate with humans should behave in ways that are transparent (e.g., legible, explainable). These transparent robots intentionally choose actions that convey their internal state to nearby humans: for instance, a transparent robot might exaggerate its trajectory to indicate its goal. But while transparent behavior seems beneficial for human-robot interaction, is it actually optimal? In this paper we consider collaborative settings where the human and robot have the same objective, and the human is uncertain about the robot's type (i.e., the robot's internal state). We extend a recursive combination of Bayesian Nash equilibrium and the Bellman equation to solve for optimal robot policies. Interestingly, we discover that it is not always optimal for collaborative robots to be transparent; instead, human and robot teams can sometimes achieve higher rewards when the robot is opaque. In contrast to transparent robots, opaque robots select actions that withhold information from the human. Our analysis suggests that opaque behavior becomes optimal when either (a) human-robot interactions have a short time horizon or (b) users are slow to learn from the robot's actions. We extend this theoretical analysis to user studies across 43 total participants in both online and in-person settings. We find that -- during short interactions -- users reach higher rewards when working with opaque partners, and subjectively rate opaque robots as about equal to transparent robots. See videos of our experiments here: https://youtu.be/u8q1Z7WHUuI",2 "Artificial intelligence (AI) is reshaping strategic planning, with Multi-Agent Reinforcement Learning (MARL) enabling coordination among autonomous agents in complex scenarios. However, its practical deployment in sensitive military contexts is constrained by the lack of explainability, which is an essential factor for trust, safety, and alignment with human strategies. This work reviews and assesses current advances in explainability methods for MARL with a focus on simulated air combat scenarios. We proceed by adapting various explainability techniques to different aerial combat scenarios to gain explanatory insights about the model behavior. By linking AI-generated tactics with human-understandable reasoning, we emphasize the need for transparency to ensure reliable deployment and meaningful human-machine interaction. By illuminating the crucial importance of explainability in advancing MARL for operational defense, our work supports not only strategic planning but also the training of military personnel with insightful and comprehensible analyses.",0 "Planning trips is a cognitively intensive task involving conflicting user preferences, dynamic external information, and multi-step temporal-spatial optimization. Traditional platforms often fall short - they provide static results, lack contextual adaptation, and fail to support real-time interaction or intent refinement. Our approach, Vaiage, addresses these challenges through a graph-structured multi-agent framework built around large language models (LLMs) that serve as both goal-conditioned recommenders and sequential planners. 
LLMs infer user intent, suggest personalized destinations and activities, and synthesize itineraries that align with contextual constraints such as budget, timing, group size, and weather. Through natural language interaction, structured tool use, and map-based feedback loops, Vaiage enables adaptive, explainable, and end-to-end travel planning grounded in both symbolic reasoning and conversational understanding. To evaluate Vaiage, we conducted human-in-the-loop experiments using rubric-based GPT-4 assessments and qualitative feedback. The full system achieved an average score of 8.5 out of 10, outperforming the no-strategy (7.2) and no-external-API (6.8) variants, particularly in feasibility. Qualitative analysis indicated that agent coordination - especially the Strategy and Information Agents - significantly improved itinerary quality by optimizing time use and integrating real-time context. These results demonstrate the effectiveness of combining LLM reasoning with symbolic agent coordination in open-ended, real-world planning tasks.",0 "Learning from humans is challenging because people are imperfect teachers. When everyday humans show the robot a new task they want it to perform, humans inevitably make errors (e.g., inputting noisy actions) and provide suboptimal examples (e.g., overshooting the goal). Existing methods learn by mimicking the exact behaviors the human teacher provides -- but this approach is fundamentally limited because the demonstrations themselves are imperfect. In this work we advance offline imitation learning by enabling robots to extrapolate what the human teacher meant, instead of only considering what the human actually showed. We achieve this by hypothesizing that all of the human's demonstrations are trying to convey a single, consistent policy, while the noise and sub-optimality within their behaviors obfuscates the data and introduces unintentional complexity. To recover the underlying policy and learn what the human teacher meant, we introduce Counter-BC, a generalized version of behavior cloning. Counter-BC expands the given dataset to include actions close to behaviors the human demonstrated (i.e., counterfactual actions that the human teacher could have intended, but did not actually show). During training Counter-BC autonomously modifies the human's demonstrations within this expanded region to reach a simple and consistent policy that explains the underlying trends in the human's dataset. Theoretically, we prove that Counter-BC can extract the desired policy from imperfect data, multiple users, and teachers of varying skill levels. Empirically, we compare Counter-BC to state-of-the-art alternatives in simulated and real-world settings with noisy demonstrations, standardized datasets, and real human teachers. See videos of our work here: https://youtu.be/XaeOZWhTt68",0 "Context: The Configuration Management of the development and production environments is an important aspect of IT operations. However, managing the configuration differences between these two environments can be challenging, leading to inconsistent behavior, unexpected errors, and increased downtime. Objective: In this study, we sought to investigate the strategies software companies employ to mitigate the configuration differences between the development and production environments. Our goal is to provide a comprehensive understanding of these strategies used to contribute to reducing the risk of configuration-related issues. 
Method: To achieve this goal, we interviewed 17 participants and leveraged the Thematic Analysis methodology to analyze the interview data. These participants shed some light on the current practices, processes, challenges, or issues they have encountered. Results: Based on the interviews, we systematically formulated and structured a catalog of eight strategies that explain how software-producing companies mitigate these configuration differences. These strategies include 1) creating detailed configuration management plans, 2) using automation tools, and 3) developing processes to test and validate changes through containers and virtualization technologies. Conclusion: By implementing these strategies, companies can improve their ability to respond quickly and effectively to changes in the production environment. In addition, they can also ensure compliance with industry standards and regulations.",2 "Subjective Answer Grading (SAG) plays a crucial role in education, standardized testing, and automated assessment systems, particularly for evaluating short-form responses in Short Answer Scoring (SAS). However, existing approaches often produce coarse-grained scores and lack detailed reasoning. Although large language models (LLMs) have demonstrated potential as zero-shot evaluators, they remain susceptible to bias, inconsistencies with human judgment, and limited transparency in scoring decisions. To overcome these limitations, we introduce SAS-Bench, a benchmark specifically designed for LLM-based SAS tasks. SAS-Bench provides fine-grained, step-wise scoring, expert-annotated error categories, and a diverse range of question types derived from real-world subject-specific exams. This benchmark facilitates detailed evaluation of model reasoning processes and explainability. We also release an open-source dataset containing 1,030 questions and 4,109 student responses, each annotated by domain experts. Furthermore, we conduct comprehensive experiments with various LLMs, identifying major challenges in scoring science-related questions and highlighting the effectiveness of few-shot prompting in improving scoring accuracy. Our work offers valuable insights into the development of more robust, fair, and educationally meaningful LLM-based evaluation systems.",0 "The research examines the challenges surrounding young people's social movements and sustainability activism, the accompanying role of social media, and how social media impacts environmental action. This study focuses on the environmental craze on social media platforms and its impact on young activists aged 16-25. With the advancement of social media, new avenues have opened for participation in sustainability issues, especially for the marginalized, as information moves through transnational networks at lightning speed. Along with specific Formative Visual Storytelling methods, the young leaders of the movement deploy hashtags and other online tools to capture the attention of their peers and decision makers. Challenges persist, including ""clicktivism"", fatigue from the internet, and site limitations. This article contributes to insights on emerging forms of civic activism by explaining how digital natives adapt technology to reframe green activism.
The research suggests that effective digital environmental movements integrate online and offline action, make it simple for individuals to get involved, and promote both resilience to algorithmic changes and climate care among participants.",2 "Across natural and human-made systems, transition points mark sudden changes of order and are thus key to understanding overarching system features. Motivated by recent experimental observations, we here uncover an intriguing class of transitions in coupled oscillators, extreme synchronization transitions, from asynchronous disordered states to synchronous states with almost completely ordered phases. Whereas such a transition resembles discontinuous or explosive phase transitions, it exhibits markedly distinct features. First, the transition occurs already in finite systems of $N$ units and so constitutes an intriguing bifurcation of multi-dimensional systems rather than a genuine phase transition that emerges in the thermodynamic limit $N\rightarrow \infty$ only. Second, the synchronization order parameter jumps from moderate values of the order of $N^{-1/2}$ to values extremely close to $1$, its theoretical maximum, immediately upon crossing a critical coupling strength. We analytically explain the mechanisms underlying such extreme transitions in coupled complexified Kuramoto oscillators. Extreme transitions may similarly occur across other systems of coupled oscillators as well as in certain percolation processes. In applications, their occurrence impacts our ability to ensure or prevent strong forms of ordering, for instance in biological and engineered systems.",0 "Geometric frustration has been a subject of enduring interest in condensed matter physics. While research on geometric frustration traditionally focuses on magnetic systems, little attention has been paid to the ""frustrated superconductivity"" that could arise when the superconducting interaction conflicts with the crystal symmetry. The recently discovered kagome superconductors provide a particular opportunity for studying this, because the frustrated lattice structure and the interference effect between the three sublattices can facilitate the frustrated superconducting interaction. Here, we propose a theory that supports the frustrated superconducting state, derived from the on-site $s$-wave superconducting pairing in conjunction with the nearest-neighbor pairing hopping and the unique geometrically frustrated lattice structure. In this state, whereas the mutual $2\pi/3$ difference of the superconducting pairing phase causes the six-fold modulation of the amplitude and breaks the time-reversal symmetry, with $4\pi$ phase changes of the superconducting pairing as one follows it around the Fermi surface, it is immune to impurities, without impurity-induced in-gap states, and produces the pronounced Hebel-Slichter peak of the nuclear spin-lattice relaxation rate below $T_{c}$. Notably, the theory also reveals a disorder-induced superconducting pairing transition from the frustrated superconducting state to an isotropic $s$-wave superconducting state without traversing the nodal points, recovering and explaining the behavior found in experiment.
This study not only serves as a promising proposal to reconcile the divergent or seemingly contradictory experimental outcomes regarding superconducting pairing symmetry, but may also pave the way for advancing investigations into the frustrated superconducting state.",0 "Periorbital distances are critical markers for diagnosing and monitoring a range of oculoplastic and craniofacial conditions. Manual measurement, however, is subjective and prone to intergrader variability. Automated methods have been developed but remain limited by standardized imaging requirements, small datasets, and a narrow focus on individual measurements. We developed a segmentation pipeline trained on a domain-specific dataset of healthy eyes and compared its performance against the Segment Anything Model (SAM) and the prior benchmark, PeriorbitAI. Segmentation accuracy was evaluated across multiple disease classes and imaging conditions. We further investigated the use of predicted periorbital distances as features for disease classification under in-distribution (ID) and out-of-distribution (OOD) settings, comparing shallow classifiers, CNNs, and fusion models. Our segmentation model achieved state-of-the-art accuracy across all datasets, with error rates within intergrader variability and superior performance relative to SAM and PeriorbitAI. In classification tasks, models trained on periorbital distances matched CNN performance on ID data (77--78\% accuracy) and substantially outperformed CNNs under OOD conditions (63--68\% accuracy vs. 14\%). Fusion models achieved the highest ID accuracy (80\%) but were sensitive to degraded CNN features under OOD shifts. Segmentation-derived periorbital distances provide robust, explainable features for disease classification and generalize better under domain shift than CNN image classifiers. These results establish a new benchmark for periorbital distance prediction and highlight the potential of anatomy-based AI pipelines for real-world deployment in oculoplastic and craniofacial care.",0 "Extracting MITRE ATT\&CK Tactics, Techniques, and Procedures (TTPs) from natural language threat reports is crucial yet challenging. Existing methods primarily focus on performance metrics using data-driven approaches, often neglecting mechanisms to ensure faithful adherence to the official standard. This deficiency compromises the reliability and consistency of TTP assignments, creating intelligence silos and contradictory threat assessments across organizations. To address this, we introduce a novel framework that converts abstract standard definitions into actionable, contextualized knowledge. Our method utilizes a Large Language Model (LLM) to generate, update, and apply this knowledge. This framework populates an evolvable memory with dual-layer situational knowledge instances derived from labeled examples and official definitions. The first layer identifies situational contexts (e.g., ""Communication with C2 using encoded subdomains""), while the second layer captures distinctive features that differentiate similar techniques (e.g., distinguishing T1132 ""Data Encoding"" from T1071 ""Application Layer Protocol"" based on whether the focus is on encoding methods or protocol usage). This structured approach provides a transparent basis for explainable TTP assignments and enhanced human oversight, while also helping to standardize other TTP extraction systems. Experiments show our framework (using Qwen2.5-32B) boosts Technique F1 scores by 11\% over GPT-4o.
Qualitative analysis confirms superior standardization, enhanced transparency, and improved explainability in real-world threat intelligence scenarios. To the best of our knowledge, this is the first work that uses an LLM to generate, update, and apply new knowledge for TTP extraction.",0 "Lane Keeping Assist (LKA) systems, while increasingly prevalent, often suffer from unpredictable real-world failures, largely due to their opaque, black-box nature, which limits driver anticipation and trust. To bridge the gap between automated assistance and effective human oversight, we present LKAlert, a novel supervisory alert system that leverages a Vision-Language Model (VLM) to forecast potential LKA risk 1-3 seconds in advance. LKAlert processes dash-cam video and CAN data, integrating surrogate lane segmentation features from a parallel interpretable model as automated guiding attention. Unlike traditional binary classifiers, LKAlert issues both a predictive alert and a concise natural language explanation, enhancing driver situational awareness and trust. To support the development and evaluation of such systems, we introduce OpenLKA-Alert, the first benchmark dataset designed for predictive and explainable LKA failure warnings. It contains synchronized multimodal inputs and human-authored justifications across annotated temporal windows. We further contribute a generalizable methodological framework for VLM-based black-box behavior prediction, combining surrogate feature guidance with LoRA. This framework enables the VLM to reason over structured visual context without altering its vision backbone, making it broadly applicable to other complex, opaque systems requiring interpretable oversight. Empirically, LKAlert correctly predicts upcoming LKA failures with 69.8\% accuracy and a 58.6\% F1-score. The system also generates high-quality textual explanations for drivers (71.7 ROUGE-L) and operates efficiently at approximately 2 Hz, confirming its suitability for real-time, in-vehicle use. Our findings establish LKAlert as a practical solution for enhancing the safety and usability of current ADAS and offer a scalable paradigm for applying VLMs to human-centered supervision of black-box automation.",0 "I develop Ornithologist, a weakly-supervised textual classification system, and measure the hawkishness and dovishness of central bank text. Ornithologist uses ``taxonomy-guided reasoning'', guiding a large language model with human-authored decision trees. This increases the transparency and explainability of the system and makes it accessible to non-experts. It also reduces hallucination risk. Since it requires less supervision than traditional classification systems, it can more easily be applied to other problems or sources of text (e.g. news) without much modification. Ornithologist's measurements of hawkishness and dovishness of RBA communication carry information about the future of the cash rate path and of market expectations.",0 "Explainable Recommender Systems (XRS) aim to provide users with understandable reasons for the recommendations generated by these systems, representing a crucial research direction in artificial intelligence (AI). Recent research has increasingly focused on the algorithms, display, and evaluation methodologies of XRS. However, current research and reviews primarily emphasize the algorithmic aspects, with fewer studies addressing the Human-Computer Interaction (HCI) layer of XRS.
Additionally, existing reviews lack a unified taxonomy for XRS and there is insufficient attention given to the emerging area of short video recommendations. In this study, we synthesize existing literature and surveys on XRS, presenting a unified framework for its research and development. The main contributions are as follows: 1) We adopt a lifecycle perspective to systematically summarize the technologies and methods used in XRS, addressing challenges posed by the diversity and complexity of algorithmic models and explanation techniques. 2) For the first time, we highlight the application of multimedia, particularly video-based explanations, along with its potential, technical pathways, and challenges in XRS. 3) We provide a structured overview of evaluation methods from both qualitative and quantitative dimensions. These findings provide valuable insights for the systematic design, progress, and testing of XRS.",2 "We explore the use of inner speech as a mechanism to enhance transparency and trust in social robots for dietary advice. In humans, inner speech structures thought processes and decision-making; in robotics, it improves explainability by making reasoning explicit. This is crucial in healthcare scenarios, where trust in robotic assistants depends on both accurate recommendations and human-like dialogue, which make interactions more natural and engaging. Building on this, we developed a social robot that provides dietary advice, and we provided the architecture with inner speech capabilities to validate user input, refine reasoning, and generate clear justifications. The system integrates large language models for natural language understanding and a knowledge graph for structured dietary information. By making decisions more transparent, our approach strengthens trust and improves human-robot interaction in healthcare. We validated this by measuring the computational efficiency of our architecture and conducting a small user study, which assessed the reliability of inner speech in explaining the robot's behavior.",2 "Alfv\'enic waves are considered a key contributor to the energy flux that powers the Sun's corona, with theoretical models demonstrating their potential to explain coronal EUV and X-ray emission and the acceleration of the solar wind. However, confirming underlying assumptions of the models has proved challenging, especially obtaining evidence for the excitation and dissipation of Alfv\'enic waves in the lower solar atmosphere and tracing their propagation into the corona. We present an investigation of the Alfv\'enic wave power spectrum in the Sun's corona, obtained from observations with DKIST Cryo-NIRSP. The data provide unprecedented temporal resolution and signal-to-noise, revealing a detailed power spectrum out to frequencies exceeding 10 mHz. A broad enhancement in power dominates the spectrum and we demonstrate it is accurately reproduced using a physics-based model. The results corroborate the scenario where the corona is dominated by Alfv\'enic waves excited in the photosphere by horizontal convective motions, with low-frequency waves subject to reflection at the transition region and higher frequency waves significantly dissipated by the partially ionized chromosphere. The coronal Alfv\'enic power spectrum also indicates there are contributions from \textit{p}-modes (via mode conversion) and a yet-unknown higher-frequency source. 
These results provide key insight into how the Sun's convective motions imprint themselves on the corona and highlight the critical role of partial ionization, reflection, and damping in regulating upward-propagating Alfv\'enic waves. A further implication of this is that reconnection-driven Alfv\'enic waves likely play a smaller role in powering the corona and solar wind than has been suggested by recent studies.",0 "Drumming belongs to a family of musical instruments whose practice, whether as an amateur or at a high level, is associated with an increased risk of musculoskeletal disorders (MSD), particularly of the upper limbs and lumbar spine. The vast majority of drummers learn to play on acoustic instruments, the sound intensity of which is proportional to the striking force developed. This correlation is disrupted when playing the electronic version of the instrument, which is often purchased by musicians seeking to reduce the sound produced (e.g. playing in apartments). The aim of this study was therefore to analyze whether drumming on electronic equipment would lead to a change in the kinematics and feel of drummers. To this end, several drummers were recruited to perform repeated rhythms at different pitches on acoustic and electric drums under two sound conditions (sound on and sound off with noise-canceling headphones). The sound produced and the kinematics of the upper limbs were measured by video motion capture during the beats. In addition, self-confrontation interviews were conducted after each condition. The drummers, confronted with video recordings of their actions, were asked to describe, explain and comment step by step on their performance. These interviews were also used to assess their ability to maintain a constant strike force. A questionnaire was used to obtain subjective information on how they felt. The results showed a lower sound power of electronic drums, despite a similar striking speed. This gesture-sound decorrelation could explain the increase in MSD among drummers when switching from an acoustic to an electronic instrument.",2 "In previous papers, we demonstrated that an ontology of quantum mechanics, described in terms of states and events with internal phenomenal aspects (a form of panprotopsychism), is well suited to explain consciousness. We showed that the combination problems of qualities, structures and subjects in panpsychism and panprotopsychism stem from implicit hypotheses based on classical physics regarding supervenience, which are not applicable at the quantum level. Within this view, consciousness arises in entangled quantum systems coupled to the neural network of the brain. In entangled systems, the properties of individual parts disappear, giving rise to an exponential number of emergent properties and states. Here, we analyze self-consciousness as the capacity to view oneself as a subject of experience. The causal openness of quantum systems provides self-conscious beings the ability to make independent choices and decisions, reflecting a sense of self-governance and autonomy. In this context, the issue of personal identity takes a new form free from the problems of the simple view or the reductive approaches.",0 "Stance detection is essential for understanding subjective content across various platforms such as social media, news articles, and online reviews. 
Recent advances in Large Language Models (LLMs) have revolutionized stance detection by introducing novel capabilities in contextual understanding, cross-domain generalization, and multimodal analysis. Despite this progress, existing surveys often lack comprehensive coverage of approaches that specifically leverage LLMs for stance detection. To bridge this critical gap, our review article conducts a systematic analysis of stance detection, comprehensively examining how recent advances in LLMs are transforming the field, including foundational concepts, methodologies, datasets, applications, and emerging challenges. We present a novel taxonomy for LLM-based stance detection approaches, structured along three key dimensions: 1) learning methods, including supervised, unsupervised, few-shot, and zero-shot; 2) data modalities, such as unimodal, multimodal, and hybrid; and 3) target relationships, encompassing in-target, cross-target, and multi-target scenarios. Furthermore, we discuss the evaluation techniques and analyze benchmark datasets and performance trends, highlighting the strengths and limitations of different architectures. Key applications in misinformation detection, political analysis, public health monitoring, and social media moderation are discussed. Finally, we identify critical challenges such as implicit stance expression, cultural biases, and computational constraints, while outlining promising future directions, including explainable stance reasoning, low-resource adaptation, and real-time deployment frameworks. Our survey highlights emerging trends, open challenges, and future directions to guide researchers and practitioners in developing next-generation stance detection systems powered by large language models.",2 "The potential to improve road safety, reduce human driving error, and promote environmental sustainability has enabled the field of autonomous driving to progress rapidly over recent decades. The performance of autonomous vehicles has significantly improved thanks to advancements in Artificial Intelligence, particularly Deep Learning. Nevertheless, the opacity of their decision-making, rooted in the use of accurate yet complex AI models, has created barriers to their societal trust and regulatory acceptance, raising the need for explainability. We propose a post-hoc, model-agnostic solution to provide teleological explanations for the behaviour of an autonomous vehicle in urban environments. Building on Intention-aware Policy Graphs, our approach enables the extraction of interpretable and reliable explanations of vehicle behaviour in the nuScenes dataset from global and local perspectives. We demonstrate the potential of these explanations to assess whether the vehicle operates within acceptable legal boundaries and to identify possible vulnerabilities in autonomous driving datasets and models.",0 "With the wide adoption of large language models (LLMs) in information assistance, it is essential to examine their alignment with human communication styles and values. We situate this study within the context of fact-checking health information, given the critical challenge of rectifying misconceptions and building trust. Recent studies have explored the potential of LLMs for health communication, but style differences between LLMs and human experts and associated reader perceptions remain under-explored. 
In this light, our study evaluates the communication styles of LLMs, focusing on how their explanations differ from those of humans in three core components of health communication: information, sender, and receiver. We compiled a dataset of 1498 health misinformation explanations from authoritative fact-checking organizations and generated LLM responses to inaccurate health information. Drawing from health communication theory, we evaluate communication styles across three key dimensions: information linguistic features, sender persuasive strategies, and receiver value alignments. We further assessed human perceptions through a blinded evaluation with 99 participants. Our findings reveal that LLM-generated articles showed significantly lower scores in persuasive strategies, certainty expressions, and alignment with social values and moral foundations. However, human evaluation demonstrated a strong preference for LLM content, with over 60% of responses favoring LLM articles for clarity, completeness, and persuasiveness. Our results suggest that LLMs' structured approach to presenting information may be more effective at engaging readers despite scoring lower on traditional measures of quality in fact-checking and health communication.",2 "LLMs have been celebrated for their potential to help multilingual scientists publish their research. Rather than interpret LLMs as a solution, we hypothesize their adoption can be an indicator of existing linguistic exclusion in scientific writing. Using the case study of ICLR, an influential, international computer science conference, we examine how peer reviewers critique writing clarity. Analyzing almost 80,000 peer reviews, we find significant bias against authors associated with institutions in countries where English is less widely spoken. We see only a muted shift in the expression of this bias after the introduction of ChatGPT in late 2022. To investigate this unexpectedly minor change, we conduct interviews with 14 conference participants from across five continents. Peer reviewers describe associating certain features of writing with people of certain language backgrounds, and such groups in turn with the quality of scientific work. While ChatGPT masks some signs of language background, reviewers explain that they now use ChatGPT ""style"" and non-linguistic features as indicators of author demographics. Authors, aware of this development, described the ongoing need to remove features which could expose their ""non-native"" status to reviewers. Our findings offer insight into the role of ChatGPT in the reproduction of scholarly language ideologies which conflate producers of ""good English"" with producers of ""good science.""",2 "Recent studies claim that human behavior in a two-armed Bernoulli bandit (TABB) task is described by positivity and confirmation biases, implying that humans do not integrate new information objectively. However, we find that even if the agent updates its belief via objective Bayesian inference, fitting the standard Q-learning model with asymmetric learning rates still recovers both biases. Bayesian inference cast as an effective Q-learning algorithm has symmetric, though decreasing, learning rates. We explain this by analyzing the stochastic dynamics of these learning systems using master equations. We find that both confirmation bias and unbiased but decreasing learning rates yield the same behavioral signatures. 
Finally, we propose experimental protocols to disentangle true cognitive biases from artifacts of decreasing learning rates.",0 "Understanding what is communicated by data visualizations is a critical component of scientific literacy in the modern era. However, it remains unclear why some tasks involving data visualizations are more difficult than others. Here we administered a composite test composed of five widely used tests of data visualization literacy to a large sample of U.S. adults (N=503 participants). We found that items in the composite test spanned the full range of possible difficulty levels, and that our estimates of item-level difficulty were highly reliable. However, the type of data visualization shown and the type of task involved only explained a modest amount of variation in performance across items, relative to the reliability of the estimates we obtained. These results highlight the need for finer-grained ways of characterizing these items that predict the reliable variation in difficulty measured in this study, and that generalize to other tests of data visualization understanding.",2 "The increasing use of complex machine learning models in education has led to concerns about their interpretability, which in turn has spurred interest in developing explainability techniques that are both faithful to the model's inner workings and intelligible to human end-users. In this paper, we describe a novel approach to creating a neural-network-based behavior detection model that is interpretable by design. Our model is fully interpretable, meaning that the parameters we extract for our explanations have a clear interpretation, fully capture the model's learned knowledge about the learner behavior of interest, and can be used to create explanations that are both faithful and intelligible. We achieve this by implementing a series of constraints to the model that both simplify its inference process and bring it closer to a human conception of the task at hand. We train the model to detect gaming-the-system behavior, evaluate its performance on this task, and compare its learned patterns to those identified by human experts. Our results show that the model is successfully able to learn patterns indicative of gaming-the-system behavior while providing evidence for fully interpretable explanations. We discuss the implications of our approach and suggest ways to evaluate explainability using a human-grounded approach.",0 "Visual Analytics (VA) integrates humans, data, and models as key actors in insight generation and data-driven decision-making. This position paper values and reflects on 16 VA process models and frameworks and makes nine high-level observations that motivate a fresh perspective on VA. The contribution is the HDMI Canvas, a perspective to VA that complements the strengths of existing VA process models and frameworks. It systematically characterizes diverse roles of humans, data, and models, and how these actors benefit from and contribute to VA processes. The descriptive power of the HDMI Canvas eases the differentiation between a series of VA building blocks, rather than describing general VA principles only. The canvas includes modern human-centered methodologies, including human knowledge externalization and forms of feedback loops, while interpretable and explainable AI highlight model contributions beyond their conventional outputs. 
The HDMI Canvas has generative power, guiding the design of new VA processes and is optimized for external stakeholders, improving VA outreach, interdisciplinary collaboration, and user-centered design. The utility of the HDMI Canvas is demonstrated through two preliminary case studies.",0 "We propose DocVXQA, a novel framework for visually self-explainable document question answering. The framework is designed not only to produce accurate answers to questions but also to learn visual heatmaps that highlight contextually critical regions, thereby offering interpretable justifications for the model's decisions. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning objectives. Unlike conventional methods that emphasize only the regions pertinent to the answer, our framework delivers explanations that are contextually sufficient while remaining representation efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in DocVQA applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method.",2 "In this chapter, I discuss teaching mathematical tools specifically tailored for economics students. A typical one-semester course in this area seeks to blend a range of topics: from foundational elements of subjects such as linear algebra and multivariate calculus to intermediate areas like real and convex analysis and further into advanced topics such as dynamic optimization in both continuous and discrete time. This breadth of coverage corresponds to material usually spread across multiple years in traditional mathematics programs. Given the comprehensive nature of these courses, careful selection of topics is essential, balancing numerous trade-offs. I discuss potential course sequences and instructional design choices. I then focus on conceptualizing and explaining mathematical modeling in economics. I reflect on three years of teaching an advanced undergraduate course in mathematical methods online. The latter part of the chapter offers examples and visualizations I have found particularly beneficial for imparting intuition to economics students. They cover a range of topics at different degrees of difficulty and are meant as a resource for instructors in Mathematics for Economists. Among these, I use the Ramsey model as a recurring example, especially relevant when designing a mathematical tools course with an orientation towards preparing students for macroeconomic analysis.",0 "Sentiment Analysis (SA) is a crucial aspect of Natural Language Processing (NLP), addressing subjective assessments in textual content. Syntactic parsing is useful in SA because explicit syntactic information can improve accuracy while providing explainability, but it tends to be a computational bottleneck in practice due to the slowness of parsing algorithms. This paper addresses said bottleneck by using a SEquence Labeling Syntactic Parser (SELSP) to inject syntax into SA. By treating dependency parsing as a sequence labeling problem, we greatly enhance the speed of syntax-based SA. SELSP is trained and evaluated on a ternary polarity classification task, demonstrating its faster performance and better accuracy in polarity prediction tasks compared to conventional parsers like Stanza and to heuristic approaches that use shallow syntactic rules for SA like VADER. 
This increased speed and improved accuracy make SELSP particularly appealing to SA practitioners in both research and industry. In addition, we test several sentiment dictionaries on our SELSP to see which one improves the performance in polarity prediction tasks. Moreover, we compare the SELSP with Transformer-based models trained on a 5-label classification task. The results show that dictionaries that capture polarity judgment variation provide better results than dictionaries that ignore polarity judgment variation. Moreover, we show that SELSP is considerably faster than Transformer-based models in polarity prediction tasks.",0 "Concept-based explanations have emerged as an effective approach within Explainable Artificial Intelligence, enabling interpretable insights by aligning model decisions with human-understandable concepts. However, existing methods rely on computationally intensive procedures and struggle to efficiently capture complex, semantic concepts. Recently, the Concept Discovery through Latent Diffusion-based Counterfactual Trajectories (CDCT) framework, introduced by Varshney et al. (2025), attempts to identify concepts via dimension-wise traversal of the latent space of a Variational Autoencoder trained on counterfactual trajectories. Extending the CDCT framework, this work introduces Concept Directions via Latent Clustering (CDLC), which extracts global, class-specific concept directions by clustering latent difference vectors derived from factual and diffusion-generated counterfactual image pairs. CDLC substantially reduces computational complexity by eliminating the exhaustive latent dimension traversal required in CDCT and enables the extraction of multidimensional semantic concepts encoded across the latent dimensions. This approach is validated on a real-world skin lesion dataset, demonstrating that the extracted concept directions align with clinically recognized dermoscopic features and, in some cases, reveal dataset-specific biases or unknown biomarkers. These results highlight that CDLC is interpretable, scalable, and applicable across high-stakes domains and diverse data modalities.",0 "This paper proposes a novel theoretical model to explain how the human mind and artificial intelligence can approach real-time awareness by reducing perceptual delays. By investigating cosmic signal delay, neurological reaction times, and the ancient cognitive state of stillness, we explore how one may shift from reactive perception to a conscious interface with the near future. This paper introduces both a physical and cognitive model for perceiving the present not as a linear timestamp, but as an interference zone where early-arriving cosmic signals and reactive human delays intersect. We propose experimental approaches to test these ideas using human neural observation and neuro-receptive extensions. Finally, we propose a mathematical framework to guide the evolution of AI systems toward temporally efficient, ethically sound, and internally conscious decision-making processes",0 "This paper presents R-CAGE (Rhythmic Control Architecture for Guarding Ego), a theoretical framework for restructuring emotional output in long-term human-AI interaction. While prior affective computing approaches emphasized expressiveness, immersion, and responsiveness, they often neglected the cognitive and structural consequences of repeated emotional engagement. 
R-CAGE instead conceptualizes emotional output not as reactive expression but as ethical design structure requiring architectural intervention. The model is grounded in experiential observations of subtle affective symptoms such as localized head tension, interpretive fixation, and emotional lag arising from prolonged interaction with affective AI systems. These indicate a mismatch between system-driven emotion and user interpretation that cannot be fully explained by biometric data or observable behavior. R-CAGE adopts a user-centered stance prioritizing psychological recovery, interpretive autonomy, and identity continuity. The framework consists of four control blocks: (1) Control of Rhythmic Expression regulates output pacing to reduce fatigue; (2) Architecture of Sensory Structuring adjusts intensity and timing of affective stimuli; (3) Guarding of Cognitive Framing reduces semantic pressure to allow flexible interpretation; (4) Ego-Aligned Response Design supports self-reference recovery during interpretive lag. By structurally regulating emotional rhythm, sensory intensity, and interpretive affordances, R-CAGE frames emotion not as performative output but as sustainable design unit. The goal is to protect users from oversaturation and cognitive overload while sustaining long-term interpretive agency in AI-mediated environments.",0 "Generalist Medical AI (GMAI) systems have demonstrated expert-level performance in biomedical perception tasks, yet their clinical utility remains limited by inadequate multi-modal explainability and suboptimal prognostic capabilities. Here, we present XMedGPT, a clinician-centric, multi-modal AI assistant that integrates textual and visual interpretability to support transparent and trustworthy medical decision-making. XMedGPT not only produces accurate diagnostic and descriptive outputs, but also grounds referenced anatomical sites within medical images, bridging critical gaps in interpretability and enhancing clinician usability. To support real-world deployment, we introduce a reliability indexing mechanism that quantifies uncertainty through consistency-based assessment via interactive question-answering. We validate XMedGPT across four pillars: multi-modal interpretability, uncertainty quantification, prognostic modeling, and rigorous benchmarking. The model achieves an IoU of 0.703 across 141 anatomical regions, and a Kendall's tau-b of 0.479, demonstrating strong alignment between visual rationales and clinical outcomes. For uncertainty estimation, it attains an AUC of 0.862 on visual question answering and 0.764 on radiology report generation. In survival and recurrence prediction for lung and glioma cancers, it surpasses prior leading models by 26.9%, and outperforms GPT-4o by 25.0%. Rigorous benchmarking across 347 datasets covering 40 imaging modalities, together with external validation spanning 4 anatomical systems, confirms exceptional generalizability, with performance gains surpassing existing GMAI by 20.7% for in-domain evaluation and 16.7% on 11,530 in-house data evaluation. Together, XMedGPT represents a significant leap forward in clinician-centric AI integration, offering trustworthy and scalable support for diverse healthcare applications.",0 "Improving end-users' understanding of decisions made by autonomous vehicles (AVs) driven by artificial intelligence (AI) can improve utilization and acceptance of AVs. 
However, current explanation mechanisms primarily help AI researchers and engineers in debugging and monitoring their AI systems, and may not address the specific questions of end-users, such as passengers, about AVs in various scenarios. In this paper, we conducted two user studies to investigate questions that potential AV passengers might pose while riding in an AV and evaluate how well answers to those questions improve their understanding of AI-driven AV decisions. Our initial formative study identified a range of questions about AI in autonomous driving that existing explanation mechanisms do not readily address. Our second study demonstrated that interactive text-based explanations effectively improved participants' comprehension of AV decisions compared to simply observing AV decisions. These findings inform the design of interactions that motivate end-users to engage with and inquire about the reasoning behind AI-driven AV decisions.",2 "Semantic communication technology emerges as a pivotal bridge connecting AI with classical communication. The current semantic communication systems are generally modeled as an Auto-Encoder (AE). AE lacks a deep integration of AI principles with communication strategies due to its inability to effectively capture channel dynamics. This gap makes it difficult to justify the need for joint source-channel coding (JSCC) and to explain why performance improves. This paper begins by exploring lossless and lossy communication, highlighting that the inclusion of data distortion distinguishes semantic communication from classical communication. It breaks the conditions for the separation theorem to hold and explains why the amount of data transferred by semantic communication is less. Therefore, employing JSCC becomes imperative for achieving optimal semantic communication. Moreover, a Variational Source-Channel Coding (VSCC) method is proposed for constructing semantic communication systems based on data distortion theory, integrating variational inference and channel characteristics. Using a deep learning network, we develop a semantic communication system employing the VSCC method and demonstrate its capability for semantic transmission. We also establish semantic communication systems of equivalent complexity employing the AE method and the VAE method. Experimental results reveal that the VSCC model offers superior interpretability compared to the AE model, as it clearly captures the semantic features of the transmitted data, represented as the variance of latent variables in our experiments. In addition, the VSCC model exhibits superior semantic transmission capabilities compared to the VAE model. At the same level of data distortion evaluated by PSNR, the VSCC model exhibits stronger human interpretability, which can be partially assessed by SSIM.",0 "Although prototype-based explanations provide a human-understandable way of representing model predictions, they often fail to direct user attention to the most relevant features. We propose a novel approach to identify the most informative features within prototypes, termed alike parts. Using feature importance scores derived from an agnostic explanation method, it emphasizes the most relevant overlapping features between an instance and its nearest prototype. Furthermore, the feature importance score is incorporated into the objective function of the prototype selection algorithms to promote global prototype diversity. 
Through experiments on six benchmark datasets, we demonstrate that the proposed approach improves user comprehension while maintaining or even increasing predictive accuracy.",0 "The Ricardian model of world trade based on comparative advantage is not sufficient to justify equal trade relations. The existing model of trade relations does not explain the distribution of income among trading countries. This paper presents a method for building equitable trade relations. Its essence is to present an algorithm for building such trade relations, based on the previously proposed model of world trade, such that the trade balance of each country would be equal to zero. Under such conditions, tariff wars would become impossible. It is proved that, provided that the supply structure is consistent with the demand structure, it is always possible to build an equilibrium price vector for which the trade balance of each country is zero. This state of economic equilibrium is called ideal. The article presents an algorithm to build an export structure based on the structure of imports. This algorithm is quite simple and allows for a wide range of applications. Under fairly simple realistic assumptions about the behaviour of countries trading with each other that are subject to tariff restrictions, it is proved that this leads to an increase in the prices of the goods traded by these countries. Among the equilibrium states, there are also those called oversupply states. The latter describes the phenomenon of recession. This contributes to a fall in stock market indices.",0 "With the recent success of large language models (LLMs), the idea of AI-augmented Business Process Management systems is becoming more feasible. One of their essential characteristics is the ability to be conversationally actionable, allowing humans to interact with the LLM effectively to perform crucial process life cycle tasks such as process model design and redesign. However, most current research focuses on single-prompt execution and evaluation of results, rather than on continuous interaction between the user and the LLM. In this work, we aim to explore the feasibility of using LLMs to empower domain experts in the creation and redesign of process models in an iterative and effective way. The proposed conversational process model redesign (CPD) approach receives as input a process model and a redesign request by the user in natural language. Instead of just letting the LLM make changes, the LLM is employed to (a) identify process change patterns from literature, (b) re-phrase the change request to be aligned with an expected wording for the identified pattern (i.e., the meaning), and then to (c) apply the meaning of the change to the process model. This multi-step approach allows for explainable and reproducible changes. In order to ensure the feasibility of the CPD approach, and to find out how well the patterns from literature can be handled by the LLM, we performed an extensive evaluation. The results show that some patterns are hard to understand by LLMs and by users. Within the scope of the study, we demonstrated that users need support to describe the changes clearly. Overall, the evaluation shows that the LLMs can handle most changes well according to a set of completeness and correctness criteria.",0 "Supporting student success requires collaboration among multiple stakeholders. 
Researchers have explored machine learning models for academic performance prediction, yet key challenges remain in ensuring these models are interpretable, equitable, and actionable within real-world educational support systems. First, many models prioritize predictive accuracy but overlook human-centered machine learning principles, limiting trust among students and reducing their usefulness for educators and institutional decision-makers. Second, most models require at least a month of data before making reliable predictions, delaying opportunities for early intervention. Third, current models primarily rely on sporadically collected, classroom-derived data, missing broader behavioral patterns that could provide more continuous and actionable insights. To address these gaps, we present three modeling approaches (LR, 1D-CNN, and MTL-1D-CNN) to classify students as low or high academic performers. We evaluate them based on explainability, fairness, and generalizability to assess their alignment with key social values. Using behavioral and self-reported data collected within the first week of two Spring terms, we demonstrate that these models can identify at-risk students as early as week one. However, trade-offs across human-centered machine learning principles highlight the complexity of designing predictive models that effectively support multi-stakeholder decision-making and intervention strategies. We discuss these trade-offs and their implications for different stakeholders, outlining how predictive models can be integrated into student support systems. Finally, we examine broader socio-technical challenges in deploying these models and propose future directions for advancing human-centered, collaborative academic prediction systems.",0 "Concept Bottleneck Models (CBMs) enhance interpretability by explaining predictions through human-understandable concepts but typically assume that training and test data share the same distribution. This assumption often fails under domain shifts, leading to degraded performance and poor generalization. To address these limitations and improve the robustness of CBMs, we propose the Concept-based Unsupervised Domain Adaptation (CUDA) framework. CUDA is designed to: (1) align concept representations across domains using adversarial training, (2) introduce a relaxation threshold to allow minor domain-specific differences in concept distributions, thereby preventing performance drops caused by over-constraining these distributions, (3) infer concepts directly in the target domain without requiring labeled concept data, enabling CBMs to adapt to diverse domains, and (4) integrate concept learning into conventional domain adaptation (DA) with theoretical guarantees, improving interpretability and establishing new benchmarks for DA. Experiments demonstrate that our approach significantly outperforms the state-of-the-art CBM and DA methods on real-world datasets.",0 Building on the recent empirical work of Kwa et al. (2025), I show that, within their suite of research-engineering tasks, the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model -- a constant rate of failing during each minute a human would take to do the task. This implies an exponentially declining success rate with the length of the task and that each agent could be characterised by its own half-life. This empirical regularity allows us to estimate the success rate for an agent at different task lengths. 
And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks -- that they involve increasingly large sets of subtasks where failing any one fails the task. Whether this model applies more generally on other suites of tasks is unknown and an important subject for further work.,0 "Objective: This review explores the trustworthiness of multimodal artificial intelligence (AI) systems, specifically focusing on vision-language tasks. It addresses critical challenges related to fairness, transparency, and ethical implications in these systems, providing a comparative analysis of key tasks such as Visual Question Answering (VQA), image captioning, and visual dialogue. Background: Multimodal models, particularly vision-language models, enhance artificial intelligence (AI) capabilities by integrating visual and textual data, mimicking human learning processes. Despite significant advancements, the trustworthiness of these models remains a crucial concern, particularly as AI systems increasingly confront issues regarding fairness, transparency, and ethics. Methods: This review examines research conducted from 2017 to 2024 focusing on forenamed core vision-language tasks. It employs a comparative approach to analyze these tasks through the lens of trustworthiness, underlining fairness, explainability, and ethics. This study synthesizes findings from recent literature to identify trends, challenges, and state-of-the-art solutions. Results: Several key findings were highlighted. Transparency: Explainability of vision language tasks is important for user trust. Techniques, such as attention maps and gradient-based methods, have successfully addressed this issue. Fairness: Bias mitigation in VQA and visual dialogue systems is essential for ensuring unbiased outcomes across diverse demographic groups. Ethical Implications: Addressing biases in multilingual models and ensuring ethical data handling is critical for the responsible deployment of vision-language systems. Conclusion: This study underscores the importance of integrating fairness, transparency, and ethical considerations in developing vision-language models within a unified framework.",0 "Retrieval of legal knowledge by the general public is a challenging problem due to the technicality of the professional knowledge and the lack of fundamental understanding by laypersons on the subject. Traditional information retrieval techniques assume that users are capable of formulating succinct and precise queries for effective document retrieval. In practice, however, the wide gap between the highly technical contents and untrained users makes legal knowledge retrieval very difficult. We propose a methodology, called QBR, which employs a Questions Bank (QB) as an effective medium for bridging the knowledge gap. We show how the QB is used to derive training samples to enhance the embedding of knowledge units within documents, which leads to effective fine-grained knowledge retrieval. We discuss and evaluate through experiments various advantages of QBR over traditional methods. These include more accurate, efficient, and explainable document retrieval, better comprehension of retrieval results, and highly effective fine-grained knowledge retrieval. 
We also present some case studies and show that QBR achieves social impact by assisting citizens to resolve everyday legal concerns.",0 "In the rapidly evolving field of cybersecurity, the integration of flow-level and packet-level information for real-time intrusion detection remains a largely untapped area of research. This paper introduces ""XG-NID,"" a novel framework that, to the best of our knowledge, is the first to fuse flow-level and packet-level data within a heterogeneous graph structure, offering a comprehensive analysis of network traffic. Leveraging a heterogeneous graph neural network (GNN) with graph-level classification, XG-NID uniquely enables real-time inference while effectively capturing the intricate relationships between flow and packet payload data. Unlike traditional GNN-based methodologies that predominantly analyze historical data, XG-NID is designed to accommodate the heterogeneous nature of network traffic, providing a robust and real-time defense mechanism. Our framework extends beyond mere classification; it integrates Large Language Models (LLMs) to generate detailed, human-readable explanations and suggest potential remedial actions, ensuring that the insights produced are both actionable and comprehensible. Additionally, we introduce a new set of flow features based on temporal information, further enhancing the contextual and explainable inferences provided by our model. To facilitate practical application and accessibility, we developed ""GNN4ID,"" an open-source tool that enables the extraction and transformation of raw network traffic into the proposed heterogeneous graph structure, seamlessly integrating flow and packet-level data. Our comprehensive quantitative comparative analysis demonstrates that XG-NID achieves an F1 score of 97\% in multi-class classification, outperforming existing baseline and state-of-the-art methods. This sets a new standard in Network Intrusion Detection Systems by combining innovative data fusion with enhanced interpretability and real-time capabilities.",0 "Although AI has become increasingly smart, its wisdom has not kept pace. In this article, we examine what is known about human wisdom and sketch a vision of its AI counterpart. We analyze human wisdom as a set of strategies for solving intractable problems-those outside the scope of analytic techniques-including both object-level strategies like heuristics [for managing problems] and metacognitive strategies like intellectual humility, perspective-taking, or context-adaptability [for managing object-level strategies]. We argue that AI systems particularly struggle with metacognition; improved metacognition would lead to AI more robust to novel environments, explainable to users, cooperative with others, and safer in risking fewer misaligned goals with human users. We discuss how wise AI might be benchmarked, trained, and implemented.",0 "Sweat secretion and evaporation from the skin dictate the human ability to thermoregulate and thermal comfort in hot environments and impact skin interactions with cosmetics, textiles, and wearable electronics or sensors. However, sweating has mostly been investigated using macroscopic physiological methods, leaving micro-to-macroscale sweating dynamics unexplored. We explore these processes by employing a coupled microscale imaging and transport measurement approach used in engineering studies of phase change processes. 
Specifically, we employed a comprehensive set of macroscale physiological measurements (ventilated capsule sweat rate, galvanic skin conductance, and dielectric epidermis hydration) complemented by three microscale imaging techniques (visible light, midwave infrared, and optical coherence tomography imaging). Inspired by industrial jet cooling devices, we also explore an air jet (vs. cylindrical) capsule for measuring sweat rate. To enable near simultaneous application of these methods, we studied forehead sweating dynamics of six supine subjects undergoing passive heating, cooling, and secondary heating. The relative dynamics of the physiological measurements agree with prior observations and can be explained using imaged microscale sweating dynamics. This comprehensive study provides new insights into the biophysical dynamics of sweating onset and following cyclic porewise, transition, and filmwise sweating modes, and highlights the roles of stratum corneum hydration, salt deposits, and microscale hair.",0 "Intelligent tutoring systems have demonstrated effectiveness in teaching formal propositional logic proofs, but their reliance on template-based explanations limits their ability to provide personalized student feedback. While large language models (LLMs) offer promising capabilities for dynamic feedback generation, they risk producing hallucinations or pedagogically unsound explanations. We evaluated the stepwise accuracy of LLMs in constructing multi-step symbolic logic proofs, comparing six prompting techniques across four state-of-the-art LLMs on 358 propositional logic problems. Results show that DeepSeek-V3 achieved superior performance with 84.4% accuracy on stepwise proof construction and excelled particularly in simpler rules. We further used the best-performing LLM to generate explanatory hints for 1,050 unique student problem-solving states from a logic ITS and evaluated them on 4 criteria with both an LLM grader and human expert ratings on a 20% sample. Our analysis finds that LLM-generated hints were 75% accurate and rated highly by human evaluators on consistency and clarity, but did not perform as well explaining why the hint was provided or its larger context. Our results demonstrate that LLMs may be used to augment tutoring systems with logic tutoring hints, but requires additional modifications to ensure accuracy and pedagogical appropriateness.",0 "Trajectory prediction is a crucial task in modeling human behavior, especially in fields as social robotics and autonomous vehicle navigation. Traditional heuristics based on handcrafted rules often lack accuracy, while recently proposed deep learning approaches suffer from computational cost, lack of explainability, and generalization issues that limit their practical adoption. In this paper, we introduce TrajEvo, a framework that leverages Large Language Models (LLMs) to automatically design trajectory prediction heuristics. TrajEvo employs an evolutionary algorithm to generate and refine prediction heuristics from past trajectory data. We introduce a Cross-Generation Elite Sampling to promote population diversity and a Statistics Feedback Loop allowing the LLM to analyze alternative predictions. Our evaluations show TrajEvo outperforms previous heuristic methods on the ETH-UCY datasets, and remarkably outperforms both heuristics and deep learning methods when generalizing to the unseen SDD dataset. 
TrajEvo represents a first step toward automated design of fast, explainable, and generalizable trajectory prediction heuristics. We make our source code publicly available to foster future research at https://github.com/ai4co/trajevo.",0 "In this paper, we introduce a post-hoc and local explainable AI method tailored for Knowledge Graph Embedding (KGE) models. These models are essential to Knowledge Graph Completion yet criticized for their opaque, black-box nature. Despite their significant success in capturing the semantics of knowledge graphs through high-dimensional latent representations, their inherent complexity poses substantial challenges to explainability. While existing methods like Kelpie use resource-intensive perturbation to explain KGE models, our approach directly decodes the latent representations encoded by KGE models, leveraging the smoothness of the embeddings, which follows the principle that similar embeddings reflect similar behaviours within the Knowledge Graph, meaning that nodes are similarly embedded because their graph neighbourhood looks similar. This principle is commonly referred to as smoothness. By identifying symbolic structures, in the form of triples, within the subgraph neighborhoods of similarly embedded entities, our method identifies the statistical regularities on which the models rely and translates these insights into human-understandable symbolic rules and facts. This bridges the gap between the abstract representations of KGE models and their predictive outputs, offering clear, interpretable insights. Key contributions include a novel post-hoc and local explainable AI method for KGE models that provides immediate, faithful explanations without retraining, facilitating real-time application on large-scale knowledge graphs. The method's flexibility enables the generation of rule-based, instance-based, and analogy-based explanations, meeting diverse user needs. Extensive evaluations show the effectiveness of our approach in delivering faithful and well-localized explanations, enhancing the transparency and trustworthiness of KGE models.",0 "In this paper, we introduce KERAIA, a novel framework and software platform for symbolic knowledge engineering designed to address the persistent challenges of representing, reasoning with, and executing knowledge in dynamic, complex, and context-sensitive environments. The central research question that motivates this work is: How can unstructured, often tacit, human expertise be effectively transformed into computationally tractable algorithms that AI systems can efficiently utilise? KERAIA seeks to bridge this gap by building on foundational concepts such as Minsky's frame-based reasoning and K-lines, while introducing significant innovations. These include Clouds of Knowledge for dynamic aggregation, Dynamic Relations (DRels) for context-sensitive inheritance, explicit Lines of Thought (LoTs) for traceable reasoning, and Cloud Elaboration for adaptive knowledge transformation. This approach moves beyond the limitations of traditional, often static, knowledge representation paradigms. KERAIA is designed with Explainable AI (XAI) as a core principle, ensuring transparency and interpretability, particularly through the use of LoTs. The paper details the framework's architecture, the KSYNTH representation language, and the General Purpose Paradigm Builder (GPPB) to integrate diverse inference methods within a unified structure. 
We validate KERAIA's versatility, expressiveness, and practical applicability through detailed analysis of multiple case studies spanning naval warfare simulation, industrial diagnostics in water treatment plants, and strategic decision-making in the game of RISK. Furthermore, we provide a comparative analysis against established knowledge representation paradigms (including ontologies, rule-based systems, and knowledge graphs) and discuss the implementation aspects and computational considerations of the KERAIA platform.",0 "Access to legal information is fundamental to access to justice. Yet accessibility refers not only to making legal documents available to the public, but also to rendering legal information comprehensible to them. A vexing problem in bringing legal information to the public is how to turn formal legal documents such as legislation and judgments, which are often highly technical, into easily navigable and comprehensible knowledge for those without legal education. In this study, we formulate a three-step approach for bringing legal knowledge to laypersons, tackling the issues of navigability and comprehensibility. First, we translate selected sections of the law into snippets (called CLIC-pages), each being a short piece that focuses on explaining a certain technical legal concept in layperson's terms. Second, we construct a Legal Question Bank (LQB), which is a collection of legal questions whose answers can be found in the CLIC-pages. Third, we design an interactive CLIC Recommender (CRec). Given a user's verbal description of a legal situation that requires a legal solution, CRec interprets the user's input and shortlists questions from the question bank that are most likely relevant to the given legal situation and recommends their corresponding CLIC pages where relevant legal knowledge can be found. In this paper we focus on the technical aspects of creating an LQB. We show how large-scale pre-trained language models, such as GPT-3, can be used to generate legal questions. We compare machine-generated questions (MGQs) against human-composed questions (HCQs) and find that MGQs are more scalable, cost-effective, and more diversified, while HCQs are more precise. We also show a prototype of CRec and illustrate through an example how our 3-step approach effectively brings relevant legal knowledge to the public.",0 "Blockchain technology has set off a wave of decentralization in the world since its birth. The trust system constructed by blockchain technology based on cryptographic algorithms and computing power provides a practical and powerful solution to the trust problem in human society. To make more convenient use of blockchain's characteristics and build applications on it, smart contracts emerged. By defining contracts that execute automatically when triggered, they expand the application space of blockchain and lay the foundation for its rapid development. This is blockchain 2.0. However, the programmability of smart contracts also introduces vulnerabilities. In order to cope with the insufficient security guarantees of high-value application networks running on blockchain 2.0 and smart contracts, this article takes Ethereum as a representative example to introduce the technical details of blockchain 2.0 and the operating principles of contract virtual machines, and to explain how cryptocurrencies based on blockchain 2.0 are constructed and operated. The common security problems and solutions are also discussed. 
Based on relevant research and on-chain practice, this paper provides a complete and comprehensive perspective for understanding cryptocurrency technology based on blockchain 2.0 and provides a reference for building more secure cryptocurrency contracts.",0 "This study explores how emotions and sentiments differ in customer reviews of products and services on e-commerce platforms. Unlike earlier research that treats all reviews uniformly, this study distinguishes between reviews of products, typically fulfilling basic, functional needs, and services, which often cater to experiential and emotional desires. The findings reveal clear differences in emotional expression and sentiment between the two. Product reviews frequently focus on practicality, such as functionality, reliability, and value for money, and are generally more neutral or pragmatic in tone. In contrast, service reviews involve stronger emotional engagement, as services often entail personal interactions and subjective experiences. Customers express a broader spectrum of emotions, such as joy, frustration, or disappointment, when reviewing services, as identified using advanced machine learning techniques. Cultural background further influences these patterns. Consumers from collectivist cultures, as defined by Hofstede's cultural dimensions, often use more moderated and socially considerate language, reflecting an emphasis on group harmony. Conversely, consumers from individualist cultures tend to offer more direct, emotionally intense feedback. Notably, gender appears to have minimal impact on sentiment variation, reinforcing the idea that the nature of the offering (product vs. service) and cultural context are the dominant factors. Theoretically, the study extends Maslow's hierarchy of needs and Hofstede's cultural framework to the domain of online reviews, proposing a model that explains how these dimensions shape consumer expression. Practically, the insights offer valuable guidance for businesses looking to optimize their marketing and customer engagement strategies by aligning messaging and service design with customer expectations across product types and cultural backgrounds.",0 "Human-Machine Teaming (HMT) is revolutionizing collaboration across domains such as defense, healthcare, and autonomous systems by integrating AI-driven decision-making, trust calibration, and adaptive teaming. This survey presents a comprehensive taxonomy of HMT, analyzing theoretical models, including reinforcement learning, instance-based learning, and interdependence theory, alongside interdisciplinary methodologies. Unlike prior reviews, we examine team cognition, ethical AI, multi-modal interactions, and real-world evaluation frameworks. Key challenges include explainability, role allocation, and scalable benchmarking. We propose future research in cross-domain adaptation, trust-aware AI, and standardized testbeds. By bridging computational and social sciences, this work lays a foundation for resilient, ethical, and scalable HMT systems.",2 "A number of experiments have reported evidence for the existence of the lower Josephson plasmon mode in underdoped YBCO up to the pseudogap temperature scale when the sample is subject to intense terahertz pulses. Evidence includes the observation of a reflectivity edge that resembles that of the superconducting state, and the second harmonic generation of a probe optical pulse that is modulated at a frequency similar to the reflectivity feature. 
Since the lower Josephson plasmon mode is often associated with coherent oscillations between bilayers in the YBCO structure, these experiments have led to the suggestion that the intense pump has created pair coherence up to 350K. In this paper, we propose an alternative explanation of these experiments based on the model of short-ranged superconducting correlations in the equilibrium state and using the Floquet perspective to analyze optical responses of the photoexcited state. Our model only requires local pairing with phase correlations that can be very short ranged when the system is at equilibrium at a temperature above Tc but below the pseudo-gap temperature T*. Within this assumption, there is no phase coherence between bilayers. On the other hand, the relative phase between members of the bilayer has a longer in-plane correlation, which leads locally to a finite Josephson current. We show that the nonlinearity afforded by the local intra-bilayer Josephson current is sufficient to explain both the reflectivity and second harmonic generation data. The key point is that in the lower Josephson plasmon, the coupling between bilayers is mainly capacitive: the Josephson current between bilayers can be set to zero without affecting the parametric amplification process. The implication is that while superconducting coherence may not be created by the pump, the pseudogap phase must possess a local pairing amplitude at equilibrium.",0 "Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text. But are they equally adept at forming coherent probability judgments? We use probabilistic identities and repeated judgments to assess the coherence of probability judgments made by LLMs. Our results show that the judgments produced by these models are often incoherent, displaying human-like systematic deviations from the rules of probability theory. Moreover, when prompted to judge the same event, the mean-variance relationship of probability judgments produced by LLMs shows an inverted-U shape like that seen in humans. We propose that these deviations from rationality can be explained by linking autoregressive LLMs to implicit Bayesian inference and drawing parallels with the Bayesian Sampler model of human probability judgments.",0 "Continuous monitoring of behavior and physiology via wearable devices offers a novel, objective method for the early detection of worsening depression and anxiety. In this study, we present an explainable anomaly detection framework that identifies clinically meaningful increases in symptom severity using consumer-grade wearable data. Leveraging data from 2,023 participants with defined healthy baselines, our LSTM autoencoder model learned normal health patterns of sleep duration, step count, and resting heart rate. Anomalies were flagged when self-reported depression or anxiety scores increased by >=5 points (a threshold considered clinically significant). The model achieved an adjusted F1-score of 0.80 (precision = 0.73, recall = 0.88) in detecting 393 symptom-worsening episodes across 341 participants, with higher performance observed for episodes involving concurrent depression and anxiety escalation (F1 = 0.84) and for more pronounced symptom changes (>=10-point increases, F1 = 0.85). 
Model interpretability was supported by SHAP-based analysis, which identified resting heart rate as the most influential feature in 71.4% of detected anomalies, followed by physical activity and sleep. Together, our findings highlight the potential of explainable anomaly detection to enable personalized, scalable, and proactive mental health monitoring in real-world settings.",2 "Performance models of interaction, such as Fitts' Law, are important tools for predicting and explaining human motor performance and for designing high-performance user interfaces. Extensive prior work has proposed such models for the",0 "We draw on the predictive processing theory of perception to explain why healthy, intelligent, honest, and psychologically normal people might easily misperceive lights in the sky as threatening or extraordinary objects, especially in the context of WEIRD (western, educated, industrial, rich, and democratic) societies. We argue that the uniquely sparse properties of skyborne and celestial stimuli make it difficult for an observer to update prior beliefs, which can be easily fit to observed lights. Moreover, we hypothesize that humans have likely evolved to perceive the sky and its perceived contents as deeply meaningful. Finally, we briefly discuss the possible role of generalized distrust in scientific institutions and ultimately argue for the importance of astronomy education for producing a society with prior beliefs that support veridical perception.",0 "Consider a setting where a pre-trained agent is operating in an environment and a human operator can decide to temporarily terminate its operation and take over for some duration of time. These kinds of scenarios are common in human-machine interactions, for example in autonomous driving, factory automation and healthcare. In these settings, we typically observe a trade-off between two extreme cases -- if no take-overs are allowed, then the agent might employ a sub-optimal, possibly dangerous policy. Alternatively, if there are too many take-overs, then the human has no confidence in the agent, greatly limiting its usefulness. In this paper, we formalize this setup and propose an explainability scheme to help optimize the number of human interventions.",0 "Creativity assessment in science and engineering is increasingly based on both human and AI judgment, but the cognitive processes and biases behind these evaluations remain poorly understood. We conducted two experiments examining how including example solutions with ratings impacts creativity evaluation, using a fine-grained annotation protocol where raters were tasked with explaining their originality scores and rating the facets of remoteness (whether the response is ""far"" from everyday ideas), uncommonness (whether the response is rare), and cleverness. In Study 1, we analyzed creativity ratings from 72 experts with formal science or engineering training, comparing those who received example solutions with ratings (example) to those who did not (no example). Computational text analysis revealed that, compared to experts with examples, no-example experts used more comparative language (e.g., ""better/worse"") and emphasized solution uncommonness, suggesting they may have relied more on memory retrieval for comparisons. In Study 2, parallel analyses with state-of-the-art LLMs revealed that models prioritized uncommonness and remoteness of ideas when rating originality, suggesting an evaluative process rooted in the semantic similarity of ideas. 
In the example condition, while LLM accuracy in predicting the true originality scores improved, the correlations of remoteness, uncommonness, and cleverness with originality also increased substantially -- to upwards of $0.99$ -- suggesting a homogenization in the LLMs evaluation of the individual facets. These findings highlight important implications for how humans and AI reason about creativity and suggest diverging preferences for what different populations prioritize when rating.",2 "Advancements in generative Artificial Intelligence (AI) hold great promise for automating radiology workflows, yet challenges in interpretability and reliability hinder clinical adoption. This paper presents an automated radiology report generation framework that combines Concept Bottleneck Models (CBMs) with a Multi-Agent Retrieval-Augmented Generation (RAG) system to bridge AI performance with clinical explainability. CBMs map chest X-ray features to human-understandable clinical concepts, enabling transparent disease classification. Meanwhile, the RAG system integrates multi-agent collaboration and external knowledge to produce contextually rich, evidence-based reports. Our demonstration showcases the system's ability to deliver interpretable predictions, mitigate hallucinations, and generate high-quality, tailored reports with an interactive interface addressing accuracy, trust, and usability challenges. This framework provides a pathway to improving diagnostic consistency and empowering radiologists with actionable insights.",0 "The dynamic nature of human health and comfort calls for adaptive systems that respond to individual physiological needs in real time. This paper presents an AI-enhanced digital twin framework that integrates biometric signals, specifically electrocardiogram (ECG) data, with environmental parameters such as temperature, humidity, and ventilation. Leveraging IoT-enabled sensors and biometric monitoring devices, the system continuously acquires, synchronises, and preprocesses multimodal data streams to construct a responsive virtual replica of the physical environment. To validate this framework, a detailed case study is conducted using the MIT-BIH noise stress test dataset. ECG signals are filtered and segmented using dynamic sliding windows, followed by extracting heart rate variability (HRV) features such as SDNN, BPM, QTc, and LF/HF ratio. Relative deviation metrics are computed against clean baselines to quantify stress responses. A random forest classifier is trained to predict stress levels across five categories, and Shapley Additive exPlanations (SHAP) is used to interpret model behaviour and identify key contributing features. These predictions are mapped to a structured set of environmental interventions using a Five Level Stress Intervention Mapping, which activates multi-scale responses across personal, room, building, and landscape levels. This integration of physiological insight, explainable AI, and adaptive control establishes a new paradigm for health-responsive built environments. It lays the foundation for the future development of intelligent, personalised healing spaces.",0 "Evaluating text summarization quality remains a critical challenge in Natural Language Processing. Current approaches face a trade-off between performance and interpretability. We present SEval-Ex, a framework that bridges this gap by decomposing summarization evaluation into atomic statements, enabling both high performance and explainability. 
SEval-Ex employs a two-stage pipeline: first extracting atomic statements from the source text and the summary using an LLM, then matching the generated statements. Unlike existing approaches that provide only summary-level scores, our method generates detailed evidence for its decisions through statement-level alignments. Experiments on the SummEval benchmark demonstrate that SEval-Ex achieves state-of-the-art performance with a 0.580 correlation with human consistency judgments, surpassing GPT-4 based evaluators (0.521) while maintaining interpretability. Finally, our framework shows robustness against hallucination.",0 "The development of autonomous agents has seen a revival of enthusiasm due to the emergence of LLMs, such as GPT-4o. Deploying these agents in environments where they coexist with humans (e.g., as domestic assistants) requires special attention to trustworthiness and explainability. However, the use of LLMs and other deep learning models still does not resolve these key issues. Deep learning systems may hallucinate, be unable to justify their decisions as black boxes, or perform badly on unseen scenarios. In this work, we propose the use of s(CASP), a goal-directed common sense reasoner based on Answer Set Programming, to break down the high-level tasks of an autonomous agent into mid-level instructions while justifying the selection of these instructions. To validate its use in real applications, we present a framework that integrates the reasoner into the VirtualHome simulator and compares its accuracy with GPT-4o, running some of the real use cases available in the domestic environments of VirtualHome. Additionally, since experiments with VirtualHome have shown the need to reduce the response time (which increases as the agent's decision space grows), we have proposed and evaluated a series of optimizations based on program analysis that exploit the advantages of the top-down execution of s(CASP).",0 "Behind a set of rules in Deontic Defeasible Logic, there is a mapping process of normative background fragments. This process goes from text to rules and implicitly encompasses an explanation of the coded fragments. In this paper we deliver a methodology for \textit{legal coding} that starts with a fragment and goes on to a set of Deontic Defeasible Logic rules, involving a set of \textit{scenarios} to test the correctness of the coded fragments. The methodology is illustrated by the coding process of an example text. We then show the results of a series of experiments conducted with humans encoding a variety of normative backgrounds and corresponding cases in which we have measured the efforts made in the coding process, as related to some measurable features. To process these examples, a recently developed technology, Houdini, that allows reasoning in Deontic Defeasible Logic, has been employed. Finally, we provide a technique to forecast the time required for coding, which depends on factors such as knowledge of the legal domain, knowledge of the coding processes, length of the text, and a measure of \textit{depth} that refers to the length of the paths of legal references.",0 "Due to large intra-subject and inter-subject variabilities of electroencephalogram (EEG) signals, EEG-based brain-computer interfaces (BCIs) usually need subject-specific calibration to tailor the decoding algorithm for each new subject, which is time-consuming and user-unfriendly, hindering their real-world applications. 
Transfer learning (TL) has been extensively used to expedite the calibration, by making use of EEG data from other subjects/sessions. An important consideration in TL for EEG-based BCIs is to reduce the data distribution discrepancies among different subjects/sessions, to avoid negative transfer. Euclidean alignment (EA) was proposed in 2020 to address this challenge. Numerous experiments from 13 different BCI paradigms demonstrated its effectiveness and efficiency. This paper revisits EA, explaining its procedure and correct usage, introducing its applications and extensions, and pointing out potential new research directions. It should be very helpful to BCI researchers, especially those who are working on EEG signal decoding.",0 "In the rapidly evolving educational landscape, the unbiased assessment of soft skills is a significant challenge, particularly in higher education. This paper presents a fuzzy logic approach that employs a Granular Linguistic Model of Phenomena integrated with multimodal analysis to evaluate soft skills in undergraduate students. By leveraging computational perceptions, this approach enables a structured breakdown of complex soft skill expressions, capturing nuanced behaviours with high granularity and addressing their inherent uncertainties, thereby enhancing interpretability and reliability. Experiments were conducted with undergraduate students using a developed tool that assesses soft skills such as decision-making, communication, and creativity. This tool identifies and quantifies subtle aspects of human interaction, such as facial expressions and gesture recognition. The findings reveal that the framework effectively consolidates multiple data inputs to produce meaningful and consistent assessments of soft skills, showing that integrating multiple modalities into the evaluation process significantly improves the quality of soft skills scores, making the assessment work transparent and understandable to educational stakeholders.",0 "The Integrated Information Theory (IIT) might be our current best bet at a scientific explanation of phenomenal consciousness. IIT focuses on the distinctively subjective and phenomenological aspects of conscious experience. Currently, it offers the fundaments of a formal account, but future developments shall explain the qualitative structures of every possible conscious experience. But this ambitious project is hindered by one fundamental limitation. IIT fails to acknowledge the crucial roles of attention in generating phenomenally conscious experience and shaping its contents. Here, we argue that IIT urgently needs an account of attention. Without this account, IIT cannot explain important informational differences between different kinds of experiences. Furthermore, though some IIT proponents celebratedly endorse a double dissociation between consciousness and attention, close analysis reveals that such a dissociation is in fact incompatible with IIT. Notably, the issues we raise for IIT will likely arise for many internalist theories of conscious contents in philosophy, especially theories with primitivist inclinations. Our arguments also extend to the recently popularized structuralist approaches. Overall, our discussion highlights how considerations about attention are indispensable for scientific as well as philosophical theorizing about conscious experience.",0 "AI alignment is a field of research that aims to develop methods to ensure that agents always behave in a manner aligned with (i.e. 
consistently with) the goals and values of their human operators, no matter their level of capability. This paper proposes an affectivist approach to the alignment problem, re-framing the concepts of goals and values in terms of affective taxis, and explaining the emergence of affective valence by appealing to recent work in evolutionary-developmental and computational neuroscience. We review the state of the art and, building on this work, we propose a computational model of affect based on taxis navigation. We discuss evidence in a tractable model organism that our model reflects aspects of biological taxis navigation. We conclude with a discussion of the role of affective taxis in AI alignment.",0 "This is a preprint of a review article that has not yet undergone peer review. The content is intended for early dissemination and academic discussion. The final version may differ upon formal publication. As the Fourth Industrial Revolution reshapes industrial paradigms, human-robot collaboration (HRC) has transitioned from a desirable capability to an operational necessity. In response, collaborative robots (Cobots) are evolving beyond repetitive tasks toward adaptive, semantically informed interaction with humans and environments. This paper surveys five foundational pillars enabling this transformation: semantic-level perception, cognitive action planning, explainable learning and control, safety-aware motion design, and multimodal human intention recognition. We examine the role of semantic mapping in transforming spatial data into meaningful context, and explore cognitive planning frameworks that leverage this context for goal-driven decision-making. Additionally, we analyze explainable reinforcement learning methods, including policy distillation and attention mechanisms, which enhance interpretability and trust. Safety is addressed through force-adaptive control and risk-aware trajectory planning, while seamless human interaction is supported via gaze and gesture-based intent recognition. Despite these advancements, challenges such as perception-action disjunction, real-time explainability limitations, and incomplete human trust persist. To address these, we propose a unified Cognitive Synergy Architecture, integrating all modules into a cohesive framework for truly human-centric cobot collaboration.",2 "The use of geospatially dependent information, which has been stipulated as a law in geography, to model geographic patterns forms the cornerstone of geostatistics, and has been inherited in many data science based techniques as well, such as statistical learning algorithms. Still, we observe hesitations in interpreting geographic dependency scientifically as a property in geography, since interpretations of such dependency are subject to model choice with different hypotheses of trends and stationarity. Rather than questioning what can be considered as trends or why it is non-stationary, in this work, we share and consolidate a view that the properties of geographic dependency, be it trending or stationary, are essentially variations that can be explained further by unobserved or unknown predictors, and not intrinsic to geographic space. Particularly, geoinformation dependency properties are in fact a projection of high dimensional feature space formed by all potential predictors into the lower dimension of geographic space, where geographic coordinates are equivalent to other predictors for modelling geographic patterns. 
This work brings together different aspects of geographic dependency, including similarity and heterogeneity, under a coherent framework, and aligns with the understanding of modelling in high dimensional feature space with different modelling concepts, including classical geostatistics, Gaussian Process Regression and popular data science based spatial modelling techniques.",0 "As Natural Language Processing (NLP) models continue to evolve and become integral to high-stakes applications, ensuring their interpretability remains a critical challenge. Given the growing variety of explainability methods and diverse stakeholder requirements, frameworks that help stakeholders select appropriate explanations tailored to their specific use cases are increasingly important. To address this need, we introduce EvalxNLP, a Python framework for benchmarking state-of-the-art feature attribution methods for transformer-based NLP models. EvalxNLP integrates eight widely recognized explainability techniques from the Explainable AI (XAI) literature, enabling users to generate and evaluate explanations based on key properties such as faithfulness, plausibility, and complexity. Our framework also provides interactive, LLM-based textual explanations, facilitating user understanding of the generated explanations and evaluation outcomes. Human evaluation results indicate high user satisfaction with EvalxNLP, suggesting it is a promising framework for benchmarking explanation methods across diverse user groups. By offering a user-friendly and extensible platform, EvalxNLP aims at democratizing explainability tools and supporting the systematic comparison and advancement of XAI techniques in NLP.",2 "Artificial Intelligence (AI) systems are increasingly used for decision-making across domains, raising debates over the information and explanations they should provide. Most research on Explainable AI (XAI) has focused on feature-based explanations, with less attention on alternative styles. Personality traits like the Need for Cognition (NFC) can also lead to different decision-making outcomes among low and high NFC individuals. We investigated how presenting AI information (prediction, confidence, and accuracy) and different explanation styles (example-based, feature-based, rule-based, and counterfactual) affect accuracy, reliance on AI, and cognitive load in a loan application scenario. We also examined low and high NFC individuals' differences in prioritizing XAI interface elements (loan attributes, AI information, and explanations), accuracy, and cognitive load. Our findings show that high AI confidence significantly increases reliance on AI while reducing cognitive load. Feature-based explanations did not enhance accuracy compared to other conditions. Although counterfactual explanations were less understandable, they enhanced overall accuracy, increasing reliance on AI and reducing cognitive load when AI predictions were correct. Both low and high NFC individuals prioritized explanations after loan attributes, leaving AI information as the least important. However, we found no significant differences between low and high NFC groups in accuracy or cognitive load, raising questions about the role of personality traits in AI-assisted decision-making. 
These findings highlight the need for user-centric personalization in XAI interfaces, incorporating diverse explanation styles and exploring multiple personality traits and other user characteristics to optimize human-AI collaboration.",0 "We consider testing a cooperative and social practice that is shaped by the tools developers use, the tests they write, and their mindsets and human needs. This work is one part of a project that explores the human- and socio-technical context of testing through the lens of those interwoven elements: test suite and tools as technical infrastructure and collaborative factors and motivation as mindset. Drawing on empirical observations of previous work, this survey examines how these factors relate to each other. We want to understand which combination of factors can help developers strive and make the most of their ambitions to leverage the potential that software testing practices have. In this report, we construct a survey instrument to measure the factors that constitute the socio-technical context of testing experience. In addition, we state our hypotheses about how these factors impact testing experience and explain the considerations and process that led to the construction of the survey questions.",2 "Recent XAI studies have investigated what constitutes a \textit{good} explanation in AI-assisted decision-making. Despite the widely accepted human-friendly properties of explanations, such as contrastive and selective, existing studies have yielded inconsistent findings. To address these gaps, our study focuses on the cognitive dimensions of explanation evaluation, by evaluating six explanations with different contrastive strategies and information selectivity and scrutinizing factors behind their valuation process. Our analysis finds that contrastive explanations are not the most preferable or understandable in general; rather, different contrastive and selective explanations were appreciated to a different extent based on who they are, when, how, and what to explain -- with different levels of cognitive load and engagement and sociotechnical contexts. Given these findings, we call for a nuanced view of explanation strategies, with implications for designing AI interfaces to accommodate individual and contextual differences in AI-assisted decision-making.",0 "This study investigates the relationship between deep learning (DL) model accuracy and expert agreement in classifying crash narratives. We evaluate five DL models -- including BERT variants, USE, and a zero-shot classifier -- against expert labels and narratives, and extend the analysis to four large language models (LLMs): GPT-4, LLaMA 3, Qwen, and Claude. Our findings reveal an inverse relationship: models with higher technical accuracy often show lower agreement with human experts, while LLMs demonstrate stronger expert alignment despite lower accuracy. We use Cohen's Kappa and Principal Component Analysis (PCA) to quantify and visualize model-expert agreement, and employ SHAP analysis to explain misclassifications. Results show that expert-aligned models rely more on contextual and temporal cues than location-specific keywords. These findings suggest that accuracy alone is insufficient for safety-critical NLP tasks. We argue for incorporating expert agreement into model evaluation frameworks and highlight the potential of LLMs as interpretable tools in crash analysis pipelines.",0 "Agentic pipelines present novel challenges and opportunities for human-centered explainability. 
The HCXAI community is still grappling with how best to make the inner workings of LLMs transparent in actionable ways. Agentic pipelines consist of multiple LLMs working in cooperation with minimal human control. In this research paper, we present early findings from an agentic pipeline implementation of a perceptive task guidance system. Through quantitative and qualitative analysis, we analyze how Chain-of-Thought (CoT) reasoning, a common vehicle for explainability in LLMs, operates within agentic pipelines. We demonstrate that CoT reasoning alone does not lead to better outputs, nor does it offer explainability, as it tends to produce explanations without explainability, in that they do not improve the ability of end users to better understand systems or achieve their goals.",0 "As Artificial Intelligence (AI) is increasingly used in areas that significantly impact human lives, concerns about fairness and transparency have grown, especially regarding their impact on protected groups. Recently, the intersection of explainability and fairness has emerged as an important area to promote responsible AI systems. This paper explores how explainability methods can be leveraged to detect and interpret unfairness. We propose a pipeline that integrates local post-hoc explanation methods to derive fairness-related insights. During the pipeline design, we identify and address critical questions arising from the use of explanations as bias detectors such as the relationship between distributive and procedural fairness, the effect of removing the protected attribute, the consistency and quality of results across different explanation methods, the impact of various aggregation strategies of local explanations on group fairness evaluations, and the overall trustworthiness of explanations as bias detectors. Our results show the potential of explanation methods used for fairness while highlighting the need to carefully consider the aforementioned critical aspects.",0 "Rooted in the explosion of deep learning over the past decade, this thesis spans from AlphaGo to ChatGPT to empirically examine the fundamental concepts needed to realize the vision of an artificial scientist: a machine with the capacity to autonomously generate original research and contribute to the expansion of human knowledge. The investigation begins with Olivaw, an AlphaGo Zero-like agent that discovers Othello knowledge from scratch but is unable to communicate it. This realization leads to the development of the Explanatory Learning (EL) framework, a formalization of the problem faced by a scientist when trying to explain a new phenomenon to their peers. The effective EL prescriptions allow us to crack Zendo, a popular board game simulating the scientific endeavor. This success comes with a fundamental insight: an artificial scientist must develop its own interpretation of the language used to explain its findings, and not rely on a rigid existing interpreter. Questioning the very process of learning an interpreter, we turn our attention to the inner functioning of modern multimodal models. This culminates in a simple idea to build CLIP-like models where interpretation and perception are explicitly disentangled: a cost-effective approach that couples two unimodal models using little multimodal data and no further training. 
Finally, we discuss what ChatGPT and its siblings are still missing to become artificial scientists, and introduce the Big-Bench Symbol Interpretation Task, a benchmark about interpreting Zendo-like explanations that sees LLMs going no further than random chance while being instead fully solved by humans.",0 "Recent advances in digital platforms generate rich, high-dimensional logs of human behavior, and machine learning models have helped social scientists explain knowledge accumulation, communication, and information diffusion. Such models, however, almost always treat behavior as sequences of actions, abstracting the inter-temporal information among actions. To close this gap, we introduce a two-scale Action-Timing Context(ATC) framework that jointly embeds each action and its time interval. ATC obtains low-dimensional representations of actions and characterizes them with inter-temporal information. We provide three applications of ATC to real-world datasets and demonstrate that the method offers a unified view of human behavior. The presented qualitative findings demonstrate that explicitly modeling inter-temporal context is essential for a comprehensive, interpretable understanding of human activity on digital platforms.",0 "Institutions play a critical role in enabling communities to manage common-pool resources and avert tragedies of the commons. However, a fundamental issue arises: Individuals typically perceive participation as advantageous only after an institution is established, creating a paradox: How can institutions form if no one will join before a critical mass exists? We term this conundrum the institution bootstrapping problem and propose that misperception, specifically, agents' erroneous belief that an institution already exists, could resolve this paradox. By integrating well-documented psychological phenomena, including cognitive biases, probability distortion, and perceptual noise, into a game-theoretic framework, we demonstrate how these factors collectively mitigate the bootstrapping problem. Notably, unbiased perceptual noise (e.g., noise arising from agents' heterogeneous physical or social contexts) drastically reduces the critical mass of cooperators required for institutional emergence. This effect intensifies with greater diversity of perceptions. We explain this counter-intuitive result through asymmetric boundary conditions: proportional underestimation of low-probability sanctions produces distinct outcomes compared to equivalent overestimation. Furthermore, the type of perceptual distortion, proportional versus absolute, yields qualitatively different evolutionary pathways. These findings challenge conventional assumptions about rationality in institutional design, highlighting how ""noisy"" cognition can paradoxically enhance cooperation. Finally, we contextualize these insights within broader discussions of multi-agent system design and collective action. Our analysis underscores the importance of incorporating human-like cognitive constraints, not just idealized rationality, into models of institutional emergence and resilience.",0 "Generative Artificial Intelligence (GenAI) is rapidly reshaping the global financial landscape, offering unprecedented opportunities to enhance customer engagement, automate complex workflows, and extract actionable insights from vast financial data. 
This survey provides an overview of GenAI adoption across the financial ecosystem, examining how banks, insurers, asset managers, and fintech startups worldwide are integrating large language models and other generative tools into their operations. From AI-powered virtual assistants and personalized financial advisory to fraud detection and compliance automation, GenAI is driving innovation across functions. However, this transformation comes with significant cybersecurity and ethical risks. We discuss emerging threats such as AI-generated phishing, deepfake-enabled fraud, and adversarial attacks on AI systems, as well as concerns around bias, opacity, and data misuse. The evolving global regulatory landscape is explored in depth, including initiatives by major financial regulators and international efforts to develop risk-based AI governance. Finally, we propose best practices for secure and responsible adoption - including explainability techniques, adversarial testing, auditability, and human oversight. Drawing from academic literature, industry case studies, and policy frameworks, this chapter offers a perspective on how the financial sector can harness GenAI's transformative potential while navigating the complex risks it introduces.",2 "Bayesian Optimisation (BO) is a family of methods for finding optimal parameters when the underlying function to be optimised is unknown. BO is used, for example, for hyperparameter tuning in machine learning and as an expert support tool for tuning cyberphysical systems. For settings where humans are involved in the tuning task, methods have been developed to explain BO (Explainable Bayesian Optimization, XBO). However, there is little guidance on how to present XBO results to humans so that they can tune the system effectively and efficiently. In this paper, we investigate how the XBO explanation format affects users' task performance, task load, understanding and trust in XBO. We chose a task that is accessible to a wide range of users. Specifically, we set up an egg cooking scenario with 6 parameters that participants had to adjust to achieve a perfect soft-boiled egg. We compared three different explanation formats: a bar chart, a list of rules and a textual explanation in a between-subjects online study with 213 participants. Our results show that adding any type of explanation increases task success, reduces the number of trials needed to achieve success, and improves comprehension and confidence. While explanations add more information for participants to process, we found no increase in user task load. We also found that the aforementioned results were independent of the explanation format; all formats had a similar effect. This is an interesting finding for practical applications, as it suggests that explanations can be added to BO tuning tasks without the burden of designing or selecting specific explanation formats. In the future, it would be interesting to investigate scenarios of prolonged use of the explanation formats and whether they have different effects on users' mental models of the underlying system.",2 "The growing interest in automated movement analysis has presented new challenges in recognition of complex human activities including dance. This study focuses on dance style recognition using features extracted using Laban Movement Analysis. Previous studies for dance style recognition often focus on cross-frame movement analysis, which limits the ability to capture temporal context and dynamic transitions between movements. 
This gap highlights the need for a method that can add temporal context to LMA features. For this, we introduce a novel pipeline which combines 3D pose estimation, 3D human mesh reconstruction, and floor-aware body modeling to effectively extract LMA features. To address the temporal limitation, we propose a sliding window approach that captures movement evolution across time in features. These features are then used to train various machine learning methods for classification, and their explainability is analyzed using explainable AI methods to evaluate the contribution of each feature to classification performance. Our proposed method achieves a highest classification accuracy of 99.18\%, which shows that the addition of temporal context significantly improves dance style recognition performance.",0 "Understanding a \textit{reinforcement learning} policy, which guides state-to-action mappings to maximize rewards, necessitates an accompanying explanation for human comprehension. In this paper, we introduce a set of \textit{linear temporal logic} formulae designed to provide explanations for policies, and an algorithm for searching through those formulae for the one that best explains a given policy. Our focus is on explanations that elucidate both the ultimate objectives accomplished by the policy and the prerequisite conditions it upholds throughout its execution. The effectiveness of our proposed approach is illustrated through a simulated game of capture-the-flag and a car-parking environment,",0 "In the era of rapid advancements in vehicle safety technologies, driving risk assessment has become a focal point of attention. Technologies such as collision warning systems, advanced driver assistance systems (ADAS), and autonomous driving require driving risks to be evaluated proactively and in real time. To be effective, driving risk assessment metrics must not only accurately identify potential collisions but also exhibit human-like reasoning to enable safe and seamless interactions between vehicles. Existing safety potential field models assess driving risks by considering both objective and subjective safety factors. However, their practical applicability in real-world risk assessment tasks is limited. These models are often challenging to calibrate due to the arbitrary nature of their structures, and calibration can be inefficient because of the scarcity of accident statistics. Additionally, they struggle to generalize across both longitudinal and lateral risks. To address these challenges, we propose a composite safety potential field framework, namely C-SPF, involving a subjective field to capture drivers' risk perception about spatial proximity and an objective field to quantify the imminent collision probability, to comprehensively evaluate driving risks. The C-SPF is calibrated using abundant two-dimensional spacing data from trajectory datasets, enabling it to effectively capture drivers' proximity risk perception and provide a more realistic explanation of driving behaviors. Analysis of a naturalistic driving dataset demonstrates that the C-SPF can capture both longitudinal and lateral risks that trigger drivers' safety maneuvers. 
Further case studies highlight the C-SPF's ability to explain lateral driver behaviors, such as abandoning lane changes or adjusting lateral position relative to adjacent vehicles, which are capabilities that existing models fail to achieve.",0 "This paper presents a novel framework for emotion recognition in contemporary dance by improving existing Laban Movement Analysis (LMA) feature descriptors and introducing robust, novel descriptors that capture both quantitative and qualitative aspects of the movement. Our approach extracts expressive characteristics from 3D keypoints data of professional dancers performing contemporary dance under various emotional states, and trains multiple classifiers, including Random Forests and Support Vector Machines. Additionally, we provide in-depth explanation of features and their impact on model predictions using explainable machine learning methods. Overall, our study improves emotion recognition in contemporary dance and offers promising applications in performance analysis, dance training, and human--computer interaction, with a highest accuracy of 96.85\%.",0 "In his opening OFC plenary talk back in 2021, Alibaba Group's Yiqun Cai notably added in the follow-up Q&A that today's complex networks are more than computer science - they grow, they are life. This entails that future networks may be better viewed as techno-social systems that resemble biological superorganisms with brain-like cognitive capabilities. Fast-forwarding, there is now growing awareness that we have to completely change our networks from being static into being a living entity that would act as an AI-powered network `brain', as recently stated by Bruno Zerbib, Chief Technology and Innovation Officer of France's Orange, at the Mobile World Congress (MWC) 2025. Even though AI was front and center at both MWC and OFC 2025 and has been widely studied in the context of optical networks, there are currently no publications on active inference in optical (and less so mobile) networks available. Active inference is an ideal methodology for developing more advanced AI systems by biomimicking the way living intelligent systems work, while overcoming the limitations of today's AI related to training, learning, and explainability. Active inference is considered the key to true AI: Less artificial, more intelligent. The goal of this paper is twofold. First, we aim at enabling optical network researchers to conceptualize new research lines for future optical networks with human-AI interaction capabilities by introducing them to the main mathematical concepts of the active inference framework. Second, we demonstrate how to move AI research beyond the human brain toward the 6G world brain by exploring the role of mycorrhizal networks, the largest living organism on planet Earth, in the AI vision and R&D roadmap for the next decade and beyond laid out by Karl Friston, the father of active inference.",0 "This paper establishes a theoretical foundation for understanding the fundamental limits of AI explainability through algorithmic information theory. We formalize explainability as the approximation of complex models by simpler ones, quantifying both approximation error and explanation complexity using Kolmogorov complexity. 
Our key theoretical contributions include: (1) a complexity gap theorem proving that any explanation significantly simpler than the original model must differ from it on some inputs; (2) precise bounds showing that explanation complexity grows exponentially with input dimension but polynomially with error tolerance for Lipschitz functions; and (3) a characterization of the gap between local and global explainability, demonstrating that local explanations can be significantly simpler while maintaining accuracy in relevant regions. We further establish a regulatory impossibility theorem proving that no governance framework can simultaneously pursue unrestricted AI capabilities, human-interpretable explanations, and negligible error. These results highlight considerations likely to be relevant to the design, evaluation, and oversight of explainable AI systems.",0 "The depth and coverage of the first years of JWST observations have revealed low luminosity active galactic nuclei (AGN) across a wide redshift range, shedding light on black hole (BH) assembly and feedback. We present our spectroscopic sample of 34 Type 1 AGN obtained from JADES survey data and spanning $1.5 < z < 9$. Our sample of AGN probes a BH mass range of $10^{6-9}$~M$_{\odot}$ at bolometric luminosities down to $10^{43}$~erg~s$^{-1}$, implying generally sub-Eddington ratios of $<0.5L_{\rm Edd}$. Most of these AGN are hosted in low mass ($M_{\star}\sim10^8$~M$_{\odot}$) galaxies and are overmassive relative to the local $M_{BH}-M_{\star}$ relation, while remaining consistent with the local $M_{BH}$-$\sigma_*$ relation. The wide redshift range provided by our sample allows us to trace the emergence of local $M_{BH}$-$M_*$ scaling relation across the cosmic epoch. Additionally, we explore the capability of narrow-line diagnostics in identifying Type 2 AGN and find that a significant fraction of our AGN would be missed by them due to low metallicity or lack of high energy ionizing photons (potentially due to dust absorption, dense gas blanketing the broad and narrow line regions, or intrinsically soft ionizing spectra). We explore the UV luminosity function of AGN and their hosts and find that it is subject to significant cosmic variance and is also dependent on the AGN bolometric luminosity. Finally, we show that the electron and Balmer scattering scenarios recently proposed to explain the broad components of the Balmer lines are untenable on multiple grounds. There is no evidence that the black hole masses have been overestimated by orders of magnitude as proposed in those scenarios.",2 "Data-driven approaches for depression diagnosis have emerged as a significant research focus in neuromedicine, driven by the development of relevant datasets. Recently, graph neural network (GNN)-based models have gained widespread adoption due to their ability to capture brain channel functional connectivity from both spatial and temporal perspectives. However, their effectiveness is hindered by the absence of a robust temporal biomarker. In this paper, we introduce a novel and effective biomarker for depression diagnosis by leveraging the discrete Fourier transform (DFT) and propose a customized graph network architecture based on Temporal Graph Convolutional Network (TGCN). Our model was trained on a dataset comprising 1,086 subjects, which is over 10 times larger than previous datasets in the field of depression diagnosis. 
Furthermore, to align with medical requirements, we performed propensity score matching (PSM) to create a refined subset, referred to as the PSM dataset. Experimental results demonstrate that incorporating our newly designed biomarker enhances the representation of temporal characteristics in brain channels, leading to improved F1 scores in both the real-world dataset and the PSM dataset. This advancement has the potential to contribute to the development of more effective depression diagnostic tools. In addition, we used SHapley Additive exPlanations (SHAP) to validate the interpretability of our model, ensuring its practical applicability in medical settings.",0 "Recent advances in Large Language Models (LLMs) have shown promising potential for human activity recognition (HAR) using ambient sensors, especially through natural language reasoning and zero-shot learning. However, existing datasets such as CASAS, ARAS, and MARBLE were not originally designed with LLMs in mind and therefore lack the contextual richness, complexity, and annotation granularity required to fully exploit LLM capabilities. In this paper, we introduce MuRAL, the first Multi-Resident Ambient sensor dataset with natural Language, comprising over 21 hours of multi-user sensor data collected from 21 sessions in a smart-home environment. MuRAL is annotated with fine-grained natural language descriptions, resident identities, and high-level activity labels, all situated in dynamic, realistic multi-resident settings. We benchmark MuRAL using state-of-the-art LLMs for three core tasks: subject assignment, action description, and activity classification. Our results demonstrate that while LLMs can provide rich semantic interpretations of ambient data, current models still face challenges in handling multi-user ambiguity and under-specified sensor contexts. We release MuRAL to support future research on LLM-powered, explainable, and socially aware activity understanding in smart environments. For access to the dataset, please reach out to us via the provided contact information. A direct link for dataset retrieval will be made available at this location in due course.",0 "Context: Many open source software (OSS) projects need more human resources for maintenance, improvements, and sometimes even their survival. This need allegedly applies even to vital OSS projects that can be seen as being a part of the world's critical infrastructures. To address this resourcing problem, new funding instruments for OSS projects have been established in recent years. Objectives: The paper examines two such funding bodies for OSS and the projects they have funded. The focus of both funding bodies is on software security and cyber security in general. Methods: The methodology is based on qualitative thematic analysis. Results: Particularly OSS supply chains, network and cryptography libraries, programming languages, and operating systems and their low-level components have been funded and thus seen as critical in terms of cyber security by the two funding bodies. Conclusions: In addition to the qualitative results presented, the paper makes a contribution by connecting the research branches of critical infrastructure and sustainability of OSS projects. A further contribution is made by connecting the topic examined to recent cyber security regulations. 
Finally, an important argument is raised that neither cyber security nor sustainability alone can entirely explain the rationales behind the funding decisions made by the two bodies.",0 "Recent advances in self-supervised learning have attracted significant attention from both machine learning and neuroscience. This is primarily because self-supervised methods do not require annotated supervisory information, making them applicable to training artificial networks without relying on large amounts of curated data, and potentially offering insights into how the brain adapts to its environment in an unsupervised manner. Although several previous studies have elucidated the correspondence between neural representations in deep convolutional neural networks (DCNNs) and biological systems, the extent to which unsupervised or self-supervised learning can explain the human-like acquisition of categorically structured information remains less explored. In this study, we investigate the correspondence between the internal representations of DCNNs trained using a self-supervised contrastive learning algorithm and human semantics and recognition. To this end, we employ a few-shot learning evaluation procedure, which measures the ability of DCNNs to recognize novel concepts from limited exposure, to examine the inter-categorical structure of the learned representations. Two comparative approaches are used to relate the few-shot learning outcomes to human semantics and recognition, with results suggesting that the representations acquired through contrastive learning are well aligned with human cognition. These findings underscore the potential of self-supervised contrastive learning frameworks to model learning mechanisms similar to those of the human brain, particularly in scenarios where explicit supervision is unavailable, such as in human infants prior to language acquisition.",0 "The ability to explain why a machine learning model arrives at a particular prediction is crucial when used as decision support by human operators of critical systems. The provided explanations must be provably correct, and preferably without redundant information, called minimal explanations. In this paper, we aim at finding explanations for predictions made by tree ensembles that are not only minimal, but also minimum with respect to a cost function. To this end, we first present a highly efficient oracle that can determine the correctness of explanations, surpassing the runtime performance of current state-of-the-art alternatives by several orders of magnitude when computing minimal explanations. Secondly, we adapt an algorithm called MARCO from related works (calling it m-MARCO) for the purpose of computing a single minimum explanation per prediction, and demonstrate an overall speedup factor of two compared to the MARCO algorithm which enumerates all minimal explanations. Finally, we study the obtained explanations from a range of use cases, leading to further insights of their characteristics. In particular, we observe that in several cases, there are more than 100,000 minimal explanations to choose from for a single prediction. In these cases, we see that only a small portion of the minimal explanations are also minimum, and that the minimum explanations are significantly less verbose, hence motivating the aim of this work.",0 "Machine Learning (ML) models offer significant potential for advancing cell counting applications in neuroscience, medical research, pharmaceutical development, and environmental monitoring. 
However, implementing these models effectively requires robust operational frameworks. This paper introduces Cell Counting Machine Learning Operations (CC-MLOps), a comprehensive framework that streamlines the integration of ML in cell counting workflows. CC-MLOps encompasses data access and preprocessing, model training, monitoring, explainability features, and sustainability considerations. Through a practical use case, we demonstrate how MLOps principles can enhance model reliability, reduce human error, and enable scalable Cell Counting solutions. This work provides actionable guidance for researchers and laboratory professionals seeking to implement machine learning (ML)- powered cell counting systems.",0 "The first generation of Large Language Models - what might be called ""Act I"" of generative AI (2020-2023) - achieved remarkable success through massive parameter and data scaling, yet exhibited fundamental limitations such as knowledge latency, shallow reasoning, and constrained cognitive processes. During this era, prompt engineering emerged as our primary interface with AI, enabling dialogue-level communication through natural language. We now witness the emergence of ""Act II"" (2024-present), where models are transitioning from knowledge-retrieval systems (in latent space) to thought-construction engines through test-time scaling techniques. This new paradigm establishes a mind-level connection with AI through language-based thoughts. In this paper, we clarify the conceptual foundations of cognition engineering and explain why this moment is critical for its development. We systematically break down these advanced approaches through comprehensive tutorials and optimized implementations, democratizing access to cognition engineering and enabling every practitioner to participate in AI's second act. We provide a regularly updated collection of papers on test-time scaling in the GitHub Repository: https://github.com/GAIR-NLP/cognition-engineering",0 "The remote human operator's user interface (UI) is an important link to make the robot an efficient extension of the operator's perception and action. In rescue applications, several studies have investigated the design of operator interfaces based on observations during major robotics competitions or field deployments. Based on this research, guidelines for good interface design were empirically identified. The investigations on the UIs of teams participating in competitions are often based on external observations during UI application, which may miss some relevant requirements for UI flexibility. In this work, we present an open-source and flexibly configurable user interface based on established guidelines and its exemplary use for wheeled, tracked, and walking robots. We explain the design decisions and cover the insights we have gained during its highly successful applications in multiple robotics competitions and evaluations. The presented UI can also be adapted for other robots with little effort and is available as open source.",0 "Graph Neural Networks (GNNs) have emerged as an efficient alternative to convolutional approaches for vision tasks such as image classification, leveraging patch-based representations instead of raw pixels. These methods construct graphs where image patches serve as nodes, and edges are established based on patch similarity or classification relevance. Despite their efficiency, the explainability of GNN-based vision models remains underexplored, even though graphs are naturally interpretable. 
In this work, we analyze the semantic consistency of the graphs formed at different layers of GNN-based image classifiers, focusing on how well they preserve object structures and meaningful relationships. A comprehensive analysis is presented by quantifying the extent to which inter-layer graph connections reflect semantic similarity and spatial coherence. Explanations from standard and adversarial settings are also compared to assess whether they reflect the classifiers' robustness. Additionally, we visualize the flow of information across layers through heatmap-based visualization techniques, thereby highlighting the models' explainability. Our findings demonstrate that the decision-making processes of these models can be effectively explained, while also revealing that their reasoning does not necessarily align with human perception, especially in deeper layers.",0 "We adapt Leland's dynamic capital structure model to the context of an insurance company selling participating life insurance contracts explaining the existence of life insurance contracts which provide both a guaranteed payment and surplus participation to the policyholders. Our derivation of the optimal participation rate reveals its pronounced sensitivity to the contract duration and the associated tax rate. Moreover, the asset substitution effect, which describes the tendency of equity holders to increase the riskiness of a company's investment decisions, decreases when adding surplus participation.",0 "As NLP models become increasingly integrated into real-world applications, it becomes clear that there is a need to address the fact that models often rely on and generate conflicting information. Conflicts could reflect the complexity of situations, changes that need to be explained and dealt with, difficulties in data annotation, and mistakes in generated outputs. In all cases, disregarding the conflicts in data could result in undesired behaviors of models and undermine NLP models' reliability and trustworthiness. This survey categorizes these conflicts into three key areas: (1) natural texts on the web, where factual inconsistencies, subjective biases, and multiple perspectives introduce contradictions; (2) human-annotated data, where annotator disagreements, mistakes, and societal biases impact model training; and (3) model interactions, where hallucinations and knowledge conflicts emerge during deployment. While prior work has addressed some of these conflicts in isolation, we unify them under the broader concept of conflicting information, analyze their implications, and discuss mitigation strategies. We highlight key challenges and future directions for developing conflict-aware NLP systems that can reason over and reconcile conflicting information more effectively.",2 "Open domain question answering systems frequently rely on information retrieved from large collections of text (such as the Web) to answer questions. However, such collections of text often contain conflicting information, and indiscriminately depending on this information may result in untruthful and inaccurate answers. To understand the gravity of this problem, we collect a human-annotated dataset, Question Answering with Conflicting Contexts (QACC), and find that as much as 25% of unambiguous, open domain questions can lead to conflicting contexts when retrieved using Google Search. 
We evaluate and benchmark three powerful Large Language Models (LLMs) with our dataset QACC and demonstrate their limitations in effectively addressing questions with conflicting information. To explore how humans reason through conflicting contexts, we request our annotators to provide explanations for their selections of correct answers. We demonstrate that by finetuning LLMs to explain their answers, we can introduce richer information into their training that guide them through the process of reasoning with conflicting contexts.",2 "Reliably identifying and verifying subjects remains integral to computer system security. Various novel authentication techniques, such as biometric authentication systems, have been developed in recent years. This paper provides a detailed review of keystroke-based authentication systems and their applications. Keystroke dynamics is a behavioral biometric that is emerging as an important tool for cybersecurity as it promises to be non-intrusive and cost-effective. In addition, no additional hardware is required, making it convenient to deploy. This survey covers novel keystroke datasets, state-of-the-art keystroke authentication algorithms, keystroke authentication on touch screen and mobile devices, and various prominent applications of such techniques beyond authentication. The paper covers all the significant aspects of keystroke dynamics and can be considered a reference for future researchers in this domain. The paper includes a discussion of the latest keystroke datasets, providing researchers with an up-to-date resource for analysis and experimentation. In addition, this survey covers the state-of-the-art algorithms adopted within this domain, offering insights into the cutting-edge techniques utilized for keystroke analysis. Moreover, this paper explains the diverse applications of keystroke dynamics, particularly focusing on security, verification, and identification uses. Furthermore, this paper presents a summary of future research opportunities, highlighting potential areas for exploration and development within the realm of keystroke dynamics. This forward-looking perspective aims to inspire further inquiry and innovation, guiding the trajectory of future studies in this dynamic field.",2 "Explaining individual differences in cognitive abilities requires both identifying brain parameters that vary across individuals and understanding how brain networks are recruited for specific tasks. Typically, task performance relies on the integration and segregation of functional subnetworks, often captured by parameters like regional excitability and connectivity. Yet, the high dimensionality of these parameters hinders pinpointing their functional relevance. Here, we apply stiff-sloppy analysis to human brain data, revealing that certain subtle parameter combinations (""stiff dimensions"") powerfully influence neural activity during task processing, whereas others (""sloppy dimensions"") vary more extensively but exert minimal impact. Using a pairwise maximum entropy model of task fMRI, we show that even small deviations in stiff dimensions-derived through Fisher Information Matrix analysis-govern the dynamic interplay of segregation and integration between the default mode network (DMN) and a working memory network (WMN). 
Crucially, separating a 0-back task (vigilant attention) from a 2-back task (working memory updating) uncovers partially distinct stiff dimensions predicting performance in each condition, along with a global DMN-WMN segregation shared across both tasks. Altogether, stiff-sloppy analysis challenges the conventional focus on large parameter variability by highlighting these subtle yet functionally decisive parameter combinations.",0 "The ability of neural networks to perform robotic perception and control tasks such as depth and optical flow estimation, simultaneous localization and mapping (SLAM), and automatic control has led to their widespread adoption in recent years. Deep Reinforcement Learning has been used extensively in these settings, as it does not have the unsustainable training costs associated with supervised learning. However, DeepRL suffers from poor sample efficiency, i.e., it requires a large number of environmental interactions to converge to an acceptable solution. Modern RL algorithms such as Deep Q Learning and Soft Actor-Critic attempt to remedy this shortcoming but cannot provide the explainability required in applications such as autonomous robotics. Humans intuitively understand the long-time-horizon sequential tasks common in robotics. Properly using such intuition can make RL policies more explainable while enhancing their sample efficiency. In this work, we propose SHIRE, a novel framework for encoding human intuition using Probabilistic Graphical Models (PGMs) and using it in the Deep RL training pipeline to enhance sample efficiency. Our framework achieves 25-78% sample efficiency gains across the environments we evaluate, at negligible overhead cost. Additionally, by teaching RL agents the encoded elementary behavior, SHIRE enhances policy explainability. A real-world demonstration further highlights the efficacy of policies trained using our framework.",0 "Statistical regularities in human language have fascinated researchers for decades, suggesting deep underlying principles governing its evolution and information structuring for efficient communication. While Zipf's Law describes the frequency-rank distribution of words, deviations from this pattern, particularly for less frequent words, challenge the notion of an entirely optimized communication system. Here, we present a theoretical framework that integrates concepts from information theory, network science, and the adjacent possible to explain these deviations. We propose that language evolves through optimization processes constrained by the finite cognitive capacities of humans. This results in a dual structure within language: while frequent words form an optimized, highly navigable core, less frequent words reside in a suboptimal regime, requiring more complex combinations to convey meaning. Our findings reveal that the deviation of Zipf's exponent to larger values, from 1 to 2, marks a transition from an optimal to a suboptimal state, dictated by cognitive limits. This transition imposes a fundamental limit on communication efficiency, where cognitive constraints lead to a reliance on combinations of words rather than the creation of new vocabulary to express an open-ended conceptual space. A simple model based on the adjacent possible remarkably aligns with the empirical frequency-rank distribution of words in a language. 
These insights have significant implications for natural language processing and the design of artificial linguistic models, offering new perspectives on optimizing human and machine communication.",0 "The clinical adoption of artificial intelligence (AI) in medical imaging requires models that are both diagnostically accurate and interpretable to clinicians. While current multimodal biomedical foundation models prioritize performance, their black-box nature hinders explaining the decision-making process in clinically meaningful concepts. Here, we present ConceptCLIP, the first explainable biomedical foundation model that achieves state-of-the-art diagnostic accuracy while delivering human-interpretable explanations across diverse imaging modalities. We curate MedConcept-23M, the largest pre-training dataset comprising 23 million image-text-concept triplets across diverse medical modalities, where clinical concepts are derived from the Unified Medical Language System. Leveraging this dataset, we develop ConceptCLIP through a novel dual-alignment approach that simultaneously learns global image-text representations and fine-grained region-concept associations for precise and interpretable medical image analysis. We curate the most extensive evaluation benchmark for multimodal biomedical foundation models, covering 52 clinical tasks spanning 10 imaging modalities. Extensive experiments demonstrate that ConceptCLIP outperforms existing state-of-the-art multimodal biomedical foundation models. Importantly, ConceptCLIP demonstrates superior diagnostic performance while providing human-understandable explanations validated by clinical experts. As the first precise and interpretable biomedical foundation model, ConceptCLIP represents a critical milestone toward the widespread clinical adoption of AI, thereby advancing trustworthy AI in medicine.",0 "The integration of Artificial Intelligence (AI) into high-stakes domains such as healthcare, finance, and autonomous systems is often constrained by concerns over transparency, interpretability, and trust. While Human-Centered AI (HCAI) emphasizes alignment with human values, Explainable AI (XAI) enhances transparency by making AI decisions more understandable. However, the lack of a unified approach limits AI's effectiveness in critical decision-making scenarios. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. The framework comprises (1) a foundational AI model with built-in explainability mechanisms, (2) a human-centered explanation layer that tailors explanations based on cognitive load and user expertise, and (3) a dynamic feedback loop that refines explanations through real-time user interaction. The framework is evaluated across healthcare, finance, and software development, demonstrating its potential to enhance decision-making, regulatory compliance, and public trust. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.",0 "Phishing attacks represent an increasingly sophisticated and pervasive threat to individuals and organizations, causing significant financial losses, identity theft, and severe damage to institutional reputations. Existing phishing detection methods often struggle to simultaneously achieve high accuracy and explainability, either failing to detect novel attacks or operating as opaque black-box models. 
To address this critical gap, we propose a novel phishing URL detection system based on a first-order Takagi-Sugeno-Kang (TSK) fuzzy inference model optimized through gradient-based techniques. Our approach intelligently combines the interpretability and human-like reasoning capabilities of fuzzy logic with the precision and adaptability provided by gradient optimization methods, specifically leveraging the Adam optimizer for efficient parameter tuning. Experiments conducted using a comprehensive dataset of over 235,000 URLs demonstrate rapid convergence and exceptional predictive performance (accuracy averaging 99.95% across 5 cross-validation folds, with a perfect AUC of 1.00). Furthermore, optimized fuzzy rules and membership functions improve interpretability, clearly indicating how the model makes decisions, an essential feature for cybersecurity applications. This high-performance, transparent, and interpretable phishing detection framework significantly advances current cybersecurity defenses, providing practitioners with accurate and explainable decision-making tools.",0 "The integration of unmanned aerial vehicles (UAVs) into cellular networks presents significant mobility management challenges, primarily due to frequent handovers caused by probabilistic line-of-sight conditions with multiple ground base stations (BSs). To tackle these challenges, reinforcement learning (RL)-based methods, particularly deep Q-networks (DQN), have been employed to optimize handover decisions dynamically. However, a major drawback of these learning-based approaches is their black-box nature, which limits interpretability in the decision-making process. This paper introduces an explainable AI (XAI) framework that incorporates Shapley Additive Explanations (SHAP) to provide deeper insights into how various state parameters influence handover decisions in a DQN-based mobility management system. By quantifying the impact of key features such as reference signal received power (RSRP), reference signal received quality (RSRQ), buffer status, and UAV position, our approach enhances the interpretability and reliability of RL-based handover solutions. To validate and compare our framework, we utilize real-world network performance data collected from UAV flight trials. Simulation results show that our method provides intuitive explanations for policy decisions, effectively bridging the gap between AI-driven models and human decision-makers.",0 "Recent advancements in Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, generating significant interest in their application to recommendation systems. However, existing methods have not fully capitalized on the potential of LLMs, often constrained by limited input information or failing to fully utilize their advanced reasoning capabilities. To address these limitations, we introduce EXP3RT, a novel LLM-based recommender designed to leverage rich preference information contained in user and item reviews. EXP3RT is fine-tuned through distillation from a teacher LLM to perform three key tasks in order: it first extracts and encapsulates essential subjective preferences from raw reviews, then aggregates and summarizes them according to specific criteria to create user and item profiles. It then generates detailed step-by-step reasoning followed by a predicted rating, i.e., reasoning-enhanced rating prediction, by considering both subjective and objective information from user/item profiles and item descriptions. 
This personalized preference reasoning from EXP3RT enhances rating prediction accuracy and also provides faithful and reasonable explanations for recommendation. Extensive experiments show that EXP3RT outperforms existing methods on both rating prediction and candidate item reranking for top-k recommendation, while significantly enhancing the explainability of recommendation systems.",0 "Automating structured clinical interviews could revolutionize mental healthcare accessibility, yet existing large language model (LLM) approaches fail to align with psychiatric diagnostic protocols. We present MAGI, the first framework that transforms the gold-standard Mini International Neuropsychiatric Interview (MINI) into automatic computational workflows through coordinated multi-agent collaboration. MAGI dynamically navigates clinical logic via four specialized agents: 1) an interview-tree-guided navigation agent adhering to the MINI's branching structure, 2) an adaptive question agent blending diagnostic probing, explaining, and empathy, 3) a judgment agent validating whether participants' responses satisfy the criteria of the current node, and 4) a diagnosis agent generating Psychometric Chain-of-Thought (PsyCoT) traces that explicitly map symptoms to clinical criteria. Experimental results on 1,002 real-world participants covering depression, generalized anxiety, social anxiety, and suicide show that MAGI advances LLM-assisted mental health assessment by combining clinical rigor, conversational adaptability, and explainable reasoning.",2 "Background Seizure severity can change from one seizure to the next within individual people with epilepsy. It is unclear if and how seizure severity is modulated over longer timescales. Characterising seizure severity variability over time could lead to tailored treatments. In this study, we test if continuously-recorded interictal intracranial EEG (iEEG) features encapsulate signatures of such modulations. Methods We analysed 20 subjects with iEEG recordings of at least one day. We identified cycles on timescales of hours to days embedded in long-term iEEG band power and associated them with seizure severity, which we approximated using seizure duration. In order to quantify these associations, we created linear-circular statistical models of seizure duration that incorporated different band power cycles within each subject. Findings In most subjects, seizure duration was weakly to moderately correlated with individual band power cycles. Combinations of multiple band power cycles significantly explained most of the variability in seizure duration. Specifically, across all subjects, we found that 70% of the models had an adjusted $R^2$ higher than 60%. Of these models, around 80% were deemed to be above chance level (p-value < 0.05) based on permutation tests. Models included cycles of ultradian, circadian, and slower timescales in a subject-specific manner. Interpretation These results suggest that seizure severity, as measured by seizure duration, may be modulated over timescales of minutes to days by subject-specific cycles in interictal iEEG signal properties. These cycles likely serve as markers of seizure-modulating processes. 
Future work can investigate biological drivers of these detected fluctuations and may inform novel treatment strategies that minimise seizure severity.",0 "In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, colloquially known as ""deep fake pornography."" We identify a ""malicious technical ecosystem"" or ""MTE,"" comprising open-source face-swapping models and nearly 200 ""nudifying"" software programs that allow non-technical users to create AIG-NCII within minutes. Then, using the National Institute of Standards and Technology (NIST) AI 100-4 report as a reflection of current synthetic content governance methods, we show how the current landscape of practices fails to effectively regulate the MTE for adult AIG-NCII, as well as the flawed assumptions that explain these gaps.",0 "This study investigates the relationship between Carl Jung's cognitive functions and success in computer industry careers by analyzing the distribution of Myers-Briggs Type Indicator (MBTI) types among professionals in the field. Building on Carl Jung's theory of psychological types, which categorizes human cognition into four primary functions (Sensing, Intuition, Thinking, and Feeling), we examine how these functions, when combined with the attitudes of Extraversion and Introversion, influence personality types and career choices in the tech sector. Through a comprehensive analysis of data from 30 studies spanning multiple countries and decades, encompassing 18,264 individuals in computer-related professions, we identified the most prevalent cognitive functions and their combinations. After normalizing the data against general population distributions, our findings showed that individual Jungian functions (Te, Ni, Ti, Ne), dual-function combinations (Ni-Te, Ti-Ne, Si-Te, Ni-Fe), and MBTI types (INTJ, ENTJ, INTP, ENTP, ISTJ, INFJ, ESTJ, ESTP) had significantly higher representation compared to general population norms. The paper addresses gaps in the existing literature by providing a more nuanced understanding of how cognitive functions impact job performance and team dynamics, offering insights for career guidance, team composition, and professional development in the computer industry, and a deeper understanding of how cognitive preferences influence career success in technology-related fields.",0 "Recent advances in machine learning and artificial intelligence have provided more alternatives for the implementation of repetitive or monotonous tasks. However, the development of AI tools has not been straightforward, and use case exploration and workflow integration are still ongoing challenges. In this work, we present a detailed qualitative analysis of the performance and user experience of popular commercial AI chatbots when used for document classification with limited data. We report the results for a real-world example of metadata augmentation in an academic library environment. We compare the results of AI chatbots with other machine learning and natural language processing methods such as XGBoost and BERT-based fine-tuning, and share insights from our experience. We found that AI chatbots perform similarly to one another while outperforming the machine learning methods we tested, showing their advantage over methods that rely on local data for training. 
We also found that while working with AI chatbots is easier than with code, getting useful results from them still represents a challenge for the user. Furthermore, we encountered alarming conceptual errors in the output of some chatbots, such as not being able to count the number of lines of our inputs and explaining the mistake as ``human error''. Although this is not complete evidence that AI chatbots can be effectively used for metadata classification, we believe that the information provided in this work can be useful to librarians and data curators in developing pathways for the integration and use of AI tools for data curation or metadata augmentation tasks.",0 "In many practical situations, randomly assigning treatments to subjects is uncommon due to feasibility constraints. For example, economic aid programs and merit-based scholarships are often restricted to those meeting specific income or exam score thresholds. In these scenarios, traditional approaches to estimating treatment effects typically focus solely on observations near the cutoff point, thereby excluding a significant portion of the sample and potentially leading to information loss. Moreover, these methods generally achieve a non-parametric convergence rate. While some approaches, e.g., Mukherjee et al. (2021), attempt to tackle these issues, they commonly assume that treatment effects are constant across individuals, an assumption that is often unrealistic in practice. In this study, we propose a differencing and matching-based estimator of the average treatment effect on the treated (ATT) in the presence of heterogeneous treatment effects, utilizing all available observations. We establish the asymptotic normality of our estimator and illustrate its effectiveness through various synthetic and real data analyses. Additionally, we demonstrate that our method yields non-parametric estimates of the conditional average treatment effect (CATE) and individual treatment effect (ITE) as a byproduct.",0 "Due to the proliferation of short-form content and the rapid adoption of AI, opportunities for deep, reflective thinking have significantly diminished, undermining users' critical thinking and reducing engagement with the reasoning behind AI-generated outputs. To address this issue, we propose an Interactive Chain-of-Thought (CoT) Framework that enhances human-centered explainability and responsible AI usage by making the model's inference process transparent, modular, and user-editable. The framework decomposes reasoning into clearly defined blocks that users can inspect, modify, and re-execute, encouraging active cognitive engagement rather than passive consumption. It further integrates a lightweight edit-adaptation mechanism inspired by preference learning, allowing the system to align with diverse cognitive styles and user intentions. Ethical transparency is ensured through explicit metadata disclosure, built-in bias checkpoint functionality, and privacy-preserving safeguards. This work outlines the design principles and architecture necessary to promote critical engagement, responsible interaction, and inclusive adaptation in AI systems aimed at addressing complex societal challenges.",0 "Saliency maps are a popular approach for explaining classifications of (convolutional) neural networks. However, it remains an open question as to how best to evaluate salience maps, with three families of evaluation methods commonly being used: subjective user measures, objective user measures, and mathematical metrics. 
We examine three of the most popular saliency map approaches (viz., LIME, Grad-CAM, and Guided Backpropagation) in a between-subject study (N=166) across these families of evaluation methods. We test: 1) for subjective measures, whether the maps differ with respect to user trust and satisfaction; 2) for objective measures, whether the maps increase users' abilities and thus their understanding of a model; 3) for mathematical metrics, which map achieves the best ratings across metrics; and 4) whether the mathematical metrics can be associated with objective user measures. To our knowledge, our study is the first to compare several salience maps across all these evaluation methods, with the finding that they do not agree in their assessment (i.e., there was no difference concerning trust and satisfaction, Grad-CAM improved users' abilities best, and Guided Backpropagation had the most favorable mathematical metrics). Additionally, we show that some mathematical metrics were associated with user understanding, although this relationship was often counterintuitive. We discuss these findings in light of general debates concerning the complementary use of user studies and mathematical metrics in the evaluation of explainable AI (XAI) approaches.",2 "In science and social science, we often wish to explain why an outcome is different in two populations. For instance, if a jobs program benefits members of one city more than another, is that due to differences in program participants (particular covariates) or the local labor markets (outcomes given covariates)? The Kitagawa-Oaxaca-Blinder (KOB) decomposition is a standard tool in econometrics that explains the difference in the mean outcome across two populations. However, the KOB decomposition assumes a linear relationship between covariates and outcomes, while the true relationship may be meaningfully nonlinear. Modern machine learning boasts a variety of nonlinear functional decompositions for the relationship between outcomes and covariates in one population. It seems natural to extend the KOB decomposition using these functional decompositions. We observe that a successful extension should not attribute the differences to covariates -- or, respectively, to outcomes given covariates -- if those are the same in the two populations. Unfortunately, we demonstrate that, even in simple examples, two common decompositions -- functional ANOVA and Accumulated Local Effects -- can attribute differences to outcomes given covariates, even when they are identical in two populations. We provide a characterization of when functional ANOVA misattributes, as well as a general property that any discrete decomposition must satisfy to avoid misattribution. We show that if the decomposition is independent of its input distribution, it does not misattribute. We further conjecture that misattribution arises in any reasonable additive decomposition that depends on the distribution of the covariates.",2 "Endovascular procedures have revolutionized the treatment of vascular diseases thanks to minimally invasive solutions that significantly reduce patient recovery time and enhance clinical outcomes. However, the precision and dexterity required during these procedures pose considerable challenges for interventionists. Robotic systems have emerged, offering transformative solutions that address issues such as operator fatigue, radiation exposure, and the inherent limitations of human precision. 
The integration of Embodied Intelligence (EI) into these systems signifies a paradigm shift, enabling robots to navigate complex vascular networks and adapt to dynamic physiological conditions. Data-driven approaches, advanced computer vision, medical image analysis, and machine learning techniques, are at the forefront of this evolution. These methods augment procedural intelligence by facilitating real-time vessel segmentation, device tracking, and anatomical landmark detection. Reinforcement learning and imitation learning further refine navigation strategies and replicate experts' techniques. This review systematically examines the integration of EI principles into robotic technologies, in relation to endovascular procedures. We discuss recent advancements in intelligent perception and data-driven control, and their practical applications in robot-assisted endovascular procedures. By critically evaluating current limitations and emerging opportunities, this review establishes a framework for future developments, emphasizing the potential for greater autonomy and improved clinical outcomes. Emerging trends and specific areas of research, such as federated learning for medical data sharing, explainable AI for clinical decision support, and advanced human-robot collaboration paradigms, are also explored, offering insights into the future direction of this rapidly evolving field.",2 "Understanding and analyzing video actions are essential for producing insightful and contextualized descriptions, especially for video-based applications like intelligent monitoring and autonomous systems. The proposed work introduces a novel framework for generating natural language descriptions from video datasets by combining textual and visual modalities. The suggested architecture makes use of ResNet50 to extract visual features from video frames that are taken from the Microsoft Research Video Description Corpus (MSVD), and Berkeley DeepDrive eXplanation (BDD-X) datasets. The extracted visual characteristics are converted into patch embeddings and then run through an encoder-decoder model based on Generative Pre-trained Transformer-2 (GPT-2). In order to align textual and visual representations and guarantee high-quality description production, the system uses multi-head self-attention and cross-attention techniques. The model's efficacy is demonstrated by performance evaluation using BLEU (1-4), CIDEr, METEOR, and ROUGE-L. The suggested framework outperforms traditional methods with BLEU-4 scores of 0.755 (BDD-X) and 0.778 (MSVD), CIDEr scores of 1.235 (BDD-X) and 1.315 (MSVD), METEOR scores of 0.312 (BDD-X) and 0.329 (MSVD), and ROUGE-L scores of 0.782 (BDD-X) and 0.795 (MSVD). By producing human-like, contextually relevant descriptions, strengthening interpretability, and improving real-world applications, this research advances explainable AI.",0 "Big Data analytics and Artificial Intelligence systems derive non-intuitive and often unverifiable inferences about individuals' behaviors, preferences, and private lives. Drawing on diverse, feature-rich datasets of unpredictable value, these systems erode the intuitive connection between our actions and how we are perceived, diminishing control over our digital identities. While Explainable Artificial Intelligence scholars have attempted to explain the inner workings of algorithms, their visualizations frequently overwhelm end-users with complexity. 
This research introduces 'hypothetical inference', a novel approach that uses language models to simulate how algorithms might interpret users' digital footprints and infer personal characteristics without requiring access to proprietary platform algorithms. Through empirical studies with fourteen adult participants, we identified three key design opportunities to foster critical algorithmic literacy: (1) reassembling scattered digital footprints into a unified map, (2) simulating algorithmic inference through LLM-generated interpretations, and (3) incorporating temporal dimensions to visualize evolving patterns. This research lays the groundwork for tools that can help users recognize the influence of data on platforms and develop greater autonomy in increasingly algorithm-mediated digital environments.",2 "Deploying AI systems in public institutions can have far-reaching consequences for many people, making it a matter of public interest. Providing opportunities for stakeholders to come together, understand these systems, and debate their merits and harms is thus essential. Explainable AI often focuses on individuals, but deliberation benefits from group settings, which are underexplored. To address this gap, we present findings from an interview study with 8 focus groups and 12 individuals. Our findings provide insight into how explanations support AI novices in deliberating alone and in groups. Participants used modular explanations with four information categories to solve tasks and decide about an AI system's deployment. We found that the explanations supported groups in creating shared understanding and in finding arguments for and against the system's deployment. In comparison, individual participants engaged with explanations in more depth and performed better in the study tasks, but missed an exchange with others. Based on our findings, we provide suggestions on how explanations should be designed to work in group settings and describe their potential use in real-world contexts. With this, our contributions inform XAI research that aims to enable AI novices to understand and deliberate AI systems in the public sector.",2 "Existing depression screening predominantly relies on standardized questionnaires (e.g., PHQ-9, BDI), which suffer from high misdiagnosis rates (18-34% in clinical studies) due to their static, symptom-counting nature and susceptibility to patient recall bias. This paper presents an AI-powered depression prevention system that leverages large language models (LLMs) to analyze real-time conversational cues--including subtle emotional expressions (e.g., micro-sentiment shifts, self-referential language patterns)--for more accurate and dynamic mental state assessment. Our system achieves three key innovations: (1) Continuous monitoring through natural dialogue, detecting depression-indicative linguistic features (anhedonia markers, hopelessness semantics) with 89% precision (vs. 72% for PHQ-9); (2) Adaptive risk stratification that updates severity levels based on conversational context, reducing false positives by 41% compared to scale-based thresholds; and (3) Personalized intervention strategies tailored to users' emotional granularity, demonstrating 2.3x higher adherence rates than generic advice. Clinical validation with 450 participants shows the system identifies 92% of at-risk cases missed by traditional scales, while its explainable AI interface bridges the gap between automated analysis and clinician judgment. 
This work establishes conversational AI as a paradigm shift from episodic scale-dependent diagnosis to continuous, emotionally intelligent mental health monitoring.",2 "In affective computing, recognizing users' emotions accurately is the basis of affective human-computer interaction. Understanding users' interoception contributes to a better understanding of individually different emotional abilities, which is essential for achieving inter-individually accurate emotion estimation. However, existing interoception measurement methods, such as the heart rate discrimination task, have several limitations, including their dependence on a well-controlled laboratory environment and precision apparatus, making monitoring users' interoception challenging. This study aims to determine other forms of data that can explain users' interoceptive or similar states in their real-world lives and proposes a novel hypothetical concept, ""cyberoception,"" a new sense (1) which has properties similar to interoception in terms of the correlation with other emotion-related abilities, and (2) which can be measured only by the sensors embedded inside commodity smartphone devices in users' daily lives. Results from a 10-day-long in-lab/in-the-wild hybrid experiment reveal that a specific cyberoception type, ""Turn On"" (users' subjective sensory perception about the frequency of turning-on behavior on their smartphones), is significantly related to participants' emotional valence. We anticipate that cyberoception will serve as a fundamental building block for developing more ""emotion-aware"", user-friendly applications and services.",2 "Current reinforcement learning from human feedback (RLHF) pipelines for large language model (LLM) alignment typically assign scalar rewards to sequences, using the final token as a surrogate indicator for the quality of the entire sequence. However, this leads to sparse feedback and suboptimal token-level credit assignment. In this work, we frame reward shaping as an optimization problem focused on token-level credit assignment. We propose a reward-shaping function leveraging explainability methods such as SHAP and LIME to estimate per-token rewards from the reward model. To learn parameters of this shaping function, we employ a bilevel optimization framework that integrates Bayesian Optimization and policy training to handle noise from the token reward estimates. Our experiments show that achieving a better balance of token-level reward attribution leads to performance improvements over baselines on downstream tasks and to faster discovery of an optimal policy during training. Furthermore, we show theoretically that explainability methods that are feature-additive attribution functions preserve the same optimal policy as the original reward.",0 "Autonomous Sensory Meridian Response (ASMR) has been remarkably popular over the past decade. While its effect has been validated through behavioral studies and neuro-physiological measurements such as electroencephalography (EEG) and related bio-signal analyses, its development and triggers remain a subject of debate. Previous studies suggest that its triggers are highly linked with cyclic patterns: predictable patterns introduce relaxation while variations maintain intrigue. To validate this and further understand the impact of acoustic features on ASMR effects, we designed three distinct cyclic patterns with monophonic and stereophonic variations, while controlling their predictability and randomness, and collected ASMR triggering scores through online surveys. 
Then, we extracted cyclic features and carried out regression analysis, seeking an explainable mapping of cyclic features and ASMR triggers. We found that relaxing effects accumulate progressively and are independent of spatial orientation. Cyclic patterns significantly influence psychological and physical effects, which remain invariant with time. Regression analysis revealed that smoothly spread and energy-dense cyclic patterns most effectively trigger ASMR responses.",2 "Artificial Intelligence (AI) has increasingly influenced modern society, recently in particular through significant advancements in Large Language Models (LLMs). However, high computational and storage demands of LLMs still limit their deployment in resource-constrained environments. Knowledge distillation addresses this challenge by training a small student model from a larger teacher model. Previous research has introduced several distillation methods for both generating training data and for training the student model. Despite their relevance, the effects of state-of-the-art distillation methods on model performance and explainability have not been thoroughly investigated and compared. In this work, we enlarge the set of available methods by applying critique-revision prompting to distillation for data generation and by synthesizing existing methods for training. For these methods, we provide a systematic comparison based on the widely used Commonsense Question-Answering (CQA) dataset. While we measure performance via student model accuracy, we employ a human-grounded study to evaluate explainability. We contribute new distillation methods and their comparison in terms of both performance and explainability. This should further advance the distillation of small language models and, thus, contribute to broader applicability and faster diffusion of LLM technology.",0 "Reinforcement Learning from Human Feedback (RLHF) has become the predominant approach for language model (LM) alignment. At its core, RLHF uses a margin-based loss for preference optimization, specifying ideal LM behavior only by the difference between preferred and dispreferred responses. In this paper, we identify a common pitfall of margin-based methods -- the under-specification of ideal LM behavior on preferred and dispreferred responses individually, which leads to two unintended consequences as the margin increases: (1) The probability of dispreferred (e.g., unsafe) responses may increase, resulting in potential safety alignment failures. (2) The probability of preferred responses may decrease, even when those responses are ideal. We demystify the reasons behind these problematic behaviors: margin-based losses couple the change in the preferred probability to the gradient of the dispreferred one, and vice versa, often preventing the preferred probability from increasing while the dispreferred one decreases, and thus causing a synchronized increase or decrease in both probabilities. We term this effect, inherent in margin-based objectives, gradient entanglement. Formally, we derive conditions for general margin-based alignment objectives under which gradient entanglement becomes concerning: the inner product of the gradients of preferred and dispreferred log-probabilities is large relative to the individual gradient norms. We theoretically investigate why such inner products can be large when aligning language models and empirically validate our findings. 
The empirical implications of our framework extend to explaining important differences in the training dynamics of various preference optimization algorithms and to suggesting potential algorithm designs that mitigate the under-specification issue of margin-based methods, thereby improving language model alignment.",0 "As machine learning models evolve, maintaining transparency demands more human-centric explainable AI techniques. Counterfactual explanations, with roots in human reasoning, identify the minimal input changes needed to obtain a given output and, hence, are crucial for supporting decision-making. Despite their importance, the evaluation of these explanations often lacks grounding in user studies and remains fragmented, with existing metrics not fully capturing human perspectives. To address this challenge, we developed a diverse set of 30 counterfactual scenarios and collected ratings across 8 evaluation metrics from 206 respondents. Subsequently, we fine-tuned different Large Language Models (LLMs) to predict average or individual human judgment across these metrics. Our methodology allowed LLMs to achieve an accuracy of up to 63% in zero-shot evaluations and 85% (over a 3-class prediction) with fine-tuning across all metrics. The fine-tuned models predicting human ratings offer better comparability and scalability in evaluating different counterfactual explanation frameworks.",2 "Despite significant advancements in AI-driven educational systems and ongoing calls for responsible AI for education, several critical issues remain unresolved -- acting as the elephant in the room within the AI in education, learning analytics, educational data mining, learning sciences, and educational psychology communities. This critical analysis identifies and examines nine persistent challenges that continue to undermine the fairness, transparency, and effectiveness of current AI methods and applications in education. These include: (1) the lack of clarity around what AI for education truly means -- often ignoring the distinct purposes, strengths, and limitations of different AI families -- and the trend of equating it with domain-agnostic, company-driven large language models; (2) the widespread neglect of essential learning processes such as motivation, emotion, and (meta)cognition in AI-driven learner modelling and their contextual nature; (3) limited integration of domain knowledge and lack of stakeholder involvement in AI design and development; (4) continued use of non-sequential machine learning models on temporal educational data; (5) misuse of non-sequential metrics to evaluate sequential models; (6) use of unreliable explainable AI methods to provide explanations for black-box models; (7) ignoring ethical guidelines in addressing data inconsistencies during model training; (8) use of mainstream AI methods for pattern discovery and learning analytics without systematic benchmarking; and (9) overemphasis on global prescriptions while overlooking localised, student-specific recommendations. Supported by theoretical and empirical research, we demonstrate how hybrid AI methods -- specifically neural-symbolic AI -- can address the elephant in the room and serve as the foundation for responsible, trustworthy AI systems in education.",0 "The opaqueness of modern digital advertising, exemplified by platforms such as Meta Ads, raises concerns regarding such platforms' autonomous control over audience targeting, pricing structures, and ad relevancy assessments. 
Locked in their leading positions by network effects, ``Metas and Googles of the world'' attract countless advertisers who rely on intuition, with billions of dollars lost on ineffective social media ads. The platforms' algorithms use huge amounts of data unavailable to advertisers, and the algorithms themselves are opaque as well. This lack of transparency hinders the advertisers' ability to make informed decisions and necessitates efforts to promote transparency, standardize industry metrics, and strengthen regulatory frameworks. In this work, we propose novel ways to assist marketers in optimizing their advertising strategies via machine learning techniques designed to analyze and evaluate content and, in particular, to predict the click-through rates (CTR) of new advertising content. Another important problem is that the large volumes of data available in the competitive landscape, e.g., competitors' ads, impede the ability of marketers to derive meaningful insights. This leads to a pressing need for a novel approach that would allow us to summarize and comprehend complex data. Inspired by the success of ChatGPT in bridging the gap between large language models (LLMs) and a broader non-technical audience, we propose SODA, a novel system that assists marketers with data interpretation by merging LLMs with explainable AI, enabling better human-AI collaboration with an emphasis on the domain of digital marketing and advertising. By combining LLMs and explainability features, in particular modern text-image models, we aim to improve the synergy between human marketers and AI systems.",0 "The term islands in linguistics refers to phrases from which extracting an element results in ungrammaticality (Ross, 1967). Grammatical subjects are considered islands because extracting a sub-part of a subject results in an ill-formed sentence, despite having a clear intended meaning (e.g., ""Which topic did the article about inspire you?""). The generative tradition, which views syntax as autonomous of meaning and function, attributes this ungrammaticality to the abstract movement dependency between the wh-phrase and the subject-internal position with which it is associated for interpretation. However, research on language that emphasizes its communicative function suggests instead that syntactic constraints, including islands, can be explained based on the way different constructions package information. Accordingly, Abeill\'e et al. (2020) suggest that the islandhood of subjects is specific to the information structure of wh-questions, and propose that subjects are not islands for movement, but for focusing, due to their discourse-backgroundedness. This predicts that other constructions that differ in their information structure from wh-questions, but still involve movement, should not create a subject island effect. We test this prediction in three large-scale acceptability studies, using a super-additive design that singles out subject island violations, in three different constructions: wh-questions, relative clauses, and topicalization. We report evidence for a subject island effect in each construction type, despite only wh-questions introducing what Abeill\'e et al. 
(2020) call ""a clash in information structure."" We argue that this motivates an account of islands in terms of abstract, syntactic representations, independent of the communicative function associated with the constructions.",0 "Increasingly, people use social media for their day-to-day interactions and as a source of information, even though much of this information is practically anonymous. This raises the question: does anonymous information influence its recipients? We conducted an online, two-phase, preregistered experiment using a nationally representative sample of participants from the U.S. to find the answer. To avoid biases of opinions among participants, in the first phase, each participant examines ten Rorschach inkblots and chooses one of four opinions assigned to each inkblot. In the second phase, the participants are randomly assigned to one of four distinct information conditions and are asked to revisit their opinions for the same ten inkblots. Conditions ranged from repeating phase one to receiving anonymous comments about certain opinions. Results were consistent with the preregistration. Importantly, anonymous comments shown in phase two influence up to half of the participants' opinion selections. To better understand the role of anonymous comments in influencing the selections of opinions, we implemented agent-based modeling (ABM). ABM results suggest that a straightforward mechanism can explain the impact of such information. Overall, our results indicate that even anonymous information can have a significant impact on its recipients, potentially altering their popularity rankings. However, the strength of such influence weakens when recipients' confidence in their selections increases. Additionally, we found that participants' confidence in the first phase is inversely related to the number of change opinions.",2 "Seeking dietary guidance often requires navigating complex professional knowledge while accommodating individual health conditions. Knowledge Graphs (KGs) offer structured and interpretable nutritional information, whereas Large Language Models (LLMs) naturally facilitate conversational recommendation delivery. In this paper, we present HealthGenie, an interactive system that combines the strengths of LLMs and KGs to provide personalized dietary recommendations along with hierarchical information visualization for a quick and intuitive overview. Upon receiving a user query, HealthGenie performs query refinement and retrieves relevant information from a pre-built KG. The system then visualizes and highlights pertinent information, organized by defined categories, while offering detailed, explainable recommendation rationales. Users can further tailor these recommendations by adjusting preferences interactively. Our evaluation, comprising a within-subject comparative experiment and an open-ended discussion, demonstrates that HealthGenie effectively supports users in obtaining personalized dietary guidance based on their health conditions while reducing interaction effort and cognitive load. These findings highlight the potential of LLM-KG integration in supporting decision-making through explainable and visualized information. We examine the system's usefulness and effectiveness with an N=12 within-subject study and provide design considerations for future systems that integrate conversational LLM and KG.",0 "Understanding the relationship between population dynamics and disease-specific mortality is central to evidence-based health policy. 
This study introduces two novel metrics, PoPDivergence and PoPStat: one quantifies the difference between population pyramids, and the other assesses the strength and nature of their association with the mortality of a given disease. PoPDivergence, based on Kullback-Leibler divergence, measures deviations between a country's population pyramid and a reference pyramid. PoPStat is the correlation between these deviations and the logarithm of disease-specific mortality rates. The reference population is selected by a brute-force optimization that maximizes this correlation. Utilizing mortality data from the Global Burden of Disease 2021 and population statistics from the United Nations, we applied these metrics to 371 diseases across 204 countries. Results reveal that PoPStat outperforms traditional indicators such as median age, GDP per capita, and the Human Development Index in explaining the mortality of most diseases. Noncommunicable diseases (NCDs) like neurological disorders and cancers, communicable diseases (CDs) like neglected tropical diseases, and maternal and neonatal diseases were tightly bound to the underlying demographic attributes, whereas NCDs like diabetes, CDs like respiratory infections, and injuries including self-harm and interpersonal violence were weakly associated with population pyramid shapes. Notably, except for diabetes, the NCD mortality burden was shared by constrictive population pyramids, while mortality from communicable diseases, maternal and neonatal causes, and injuries was largely borne by expansive pyramids. Therefore, PoPStat provides insights into demographic determinants of health and empirical support for models of epidemiological transition. Code and scripts: https://github.com/Buddhi19/DevisingPoPStat.git",0 "Comprehending visualizations requires readers to actively interpret visual encodings and their underlying meanings. This poses challenges for visualization novices, particularly when interpreting distributional visualizations that depict statistical uncertainty. Advancements in LLM-based conversational interfaces show promise in promoting visualization comprehension. However, they fail to provide contextual explanations at a fine granularity, and chart readers are still required to mentally bridge visual information and textual explanations during conversations. Our formative study highlights the expectations for both lexical and visual feedback, as well as the importance of explicitly linking these two modalities throughout the conversation. The findings motivate the design of VizTA, a visualization teaching assistant that leverages the fusion of visual and lexical feedback to help readers better comprehend visualizations. VizTA features a semantic-aware conversational agent capable of explaining contextual information within visualizations and employs a visual-lexical fusion design to facilitate chart-centered conversation. A between-subject study with 24 participants demonstrates the effectiveness of VizTA in supporting understanding and reasoning tasks over distributional visualizations across multiple scenarios.",2 "Progress in image generation raises significant public security concerns. We argue that fake image detection should not operate as a ""black box"". Instead, an ideal approach must ensure both strong generalization and transparency. Recent progress in Multi-modal Large Language Models (MLLMs) offers new opportunities for reasoning-based AI-generated image detection. 
In this work, we evaluate the capabilities of MLLMs in comparison to traditional detection methods and human evaluators, highlighting their strengths and limitations. Furthermore, we design six distinct prompts and propose a framework that integrates these prompts to develop a more robust, explainable, and reasoning-driven detection system. The code is available at https://github.com/Gennadiyev/mllm-defake.",0 "Physical colors, i.e., reflected or emitted lights entering the eyes from a visual environment, are converted into perceived colors sensed by humans through neurophysiological mechanisms. These processes involve both the three types of photoreceptors, the LMS cones, and spectrally opponent and non-opponent interactions resulting from the activity rates of ganglion and lateral geniculate nucleus cells. Thus, color perception is a phenomenon inherently linked to an experimental environment (the visual scene) and an observing apparatus (the human visual system). This is clearly reminiscent of the conceptual foundation of both relativity and quantum mechanics, where the link is between a physical system and the measuring instruments. The relationship between color perception and relativity was explicitly examined for the first time by the physicist H. Yilmaz in 1962 from an experimental point of view. The main purpose of this contribution is to present a rigorous mathematical model that, by taking into account both trichromacy and color opponency, makes it possible to explain on a purely theoretical basis the relativistic color perception phenomena argued for by Yilmaz. Instead of relying directly on relativistic considerations, we base our theory on a quantum interpretation of color perception together with just one assumption, called the trichromacy axiom, that summarizes well-established properties of trichromatic color vision within the framework of Jordan algebras. We show how this approach allows us to reconcile trichromacy with Hering's opponency and also to derive the relativistic properties of perceived colors without any additional mathematical or experimental assumption.",0 "This work aligns deep learning (DL) with human reasoning capabilities and needs, to enable more efficient, interpretable, and robust image classification. We approach this from three perspectives: explainability, causality, and biological vision. An introduction and background open this work before the operative chapters. First, we assess neural networks' visualization techniques for medical images and validate an explainable-by-design method for breast mass classification. A comprehensive review at the intersection of XAI and causality follows, where we introduce a general scaffold to organize past and future research, laying the groundwork for our second perspective. In the causality direction, we propose novel modules that exploit feature co-occurrence in medical images, leading to more effective and explainable predictions. We further introduce CROCODILE, a general framework that integrates causal concepts, contrastive learning, feature disentanglement, and prior knowledge to enhance generalization. Lastly, we explore biological vision, examining how humans recognize objects, and propose CoCoReco, a connectivity-inspired network with context-aware attention mechanisms. 
Overall, our key findings include: (i) simple activation maximization lacks insight for medical imaging DL models; (ii) prototypical-part learning is effective and radiologically aligned; (iii) XAI and causal ML are deeply connected; (iv) weak causal signals can be leveraged without a priori information to improve performance and interpretability; (v) our framework generalizes across medical domains and out-of-distribution data; (vi) incorporating biological circuit motifs improves human-aligned recognition. This work contributes toward human-aligned DL and highlights pathways to bridge the gap between research and clinical adoption, with implications for improved trust, diagnostic accuracy, and safe deployment.",0 "Large Language Models (LLMs) are likely to play a key role in Intent-Based Networking (IBN) as they show remarkable performance in interpreting human language as well as in code generation, enabling the translation of high-level intents expressed by humans into low-level network configurations. In this paper, we leverage closed-source language models (i.e., Google Gemini 1.5 pro, ChatGPT-4) and open-source models (i.e., LLama, Mistral) to investigate their capacity to generate E2E network configurations for radio access networks (RANs) and core networks in 5G/6G mobile networks. We introduce a novel performance metric, known as FEACI, to quantitatively assess the format (F), explainability (E), accuracy (A), cost (C), and inference time (I) of the generated answers.",0 "Human action recognition (HAR) has achieved impressive results with deep learning models, but their decision-making process remains opaque due to their black-box nature. Ensuring interpretability is crucial, especially for real-world applications requiring transparency and accountability. Existing video XAI methods primarily rely on feature attribution or static textual concepts, both of which struggle to capture motion dynamics and temporal dependencies essential for action understanding. To address these challenges, we propose Pose Concept Bottleneck for Explainable Action Recognition (PCBEAR), a novel concept bottleneck framework that introduces human pose sequences as motion-aware, structured concepts for video action recognition. Unlike methods based on pixel-level features or static textual descriptions, PCBEAR leverages human skeleton poses, which focus solely on body movements, providing robust and interpretable explanations of motion dynamics. We define two types of pose-based concepts: static pose concepts for spatial configurations at individual frames, and dynamic pose concepts for motion patterns across multiple frames. To construct these concepts, PCBEAR applies clustering to video pose sequences, allowing for automatic discovery of meaningful concepts without manual annotation. We validate PCBEAR on KTH, Penn-Action, and HAA500, showing that it achieves high classification performance while offering interpretable, motion-driven explanations. Our method provides both strong predictive performance and human-understandable insights into the model's reasoning process, enabling test-time interventions for debugging and improving model behavior.",0 "Large Language Models (LLMs) are increasingly being used for automated evaluations and for explaining them. However, concerns about explanation quality, consistency, and hallucinations remain open research challenges, particularly in high-stakes contexts like privacy and security, where user trust and decision-making are at stake. 
In this paper, we investigate these issues in the context of PRISMe, an interactive privacy policy assessment tool that leverages LLMs to evaluate and explain website privacy policies. Based on a prior user study with 22 participants, we identify key concerns regarding LLM judgment transparency, consistency, and faithfulness, as well as variations in user preferences for explanation detail and engagement. We discuss potential strategies to mitigate these concerns, including structured evaluation criteria, uncertainty estimation, and retrieval-augmented generation (RAG). We identify a need for adaptive explanation strategies tailored to different user profiles for LLM-as-a-judge. Our goal is to showcase the application area of usable privacy and security to be promising for Human-Centered Explainable AI (HCXAI) to make an impact.",2 "This paper investigates the integration of graph neural networks (GNNs) with Qualitative Explainable Graphs (QXGs) for scene understanding in automated driving. Scene understanding is the basis for any further reactive or proactive decision-making. Scene understanding and related reasoning is inherently an explanation task: why is another traffic participant doing something, what or who caused their actions? While previous work demonstrated QXGs' effectiveness using shallow machine learning models, these approaches were limited to analysing single relation chains between object pairs, disregarding the broader scene context. We propose a novel GNN architecture that processes entire graph structures to identify relevant objects in traffic scenes. We evaluate our method on the nuScenes dataset enriched with DriveLM's human-annotated relevance labels. Experimental results show that our GNN-based approach achieves superior performance compared to baseline methods. The model effectively handles the inherent class imbalance in relevant object identification tasks while considering the complete spatial-temporal relationships between all objects in the scene. Our work demonstrates the potential of combining qualitative representations with deep learning approaches for explainable scene understanding in autonomous driving systems.",2 "Recent advances in industrial anomaly detection have highlighted the need for deeper logical anomaly analysis, where unexpected relationships among objects, counts, and spatial configurations must be identified and explained. Existing approaches often rely on large-scale external reasoning modules or elaborate pipeline designs, hindering practical deployment and interpretability. To address these limitations, we introduce a new task, Reasoning Logical Anomaly Detection (RLAD), which extends traditional anomaly detection by incorporating logical reasoning. We propose a new framework, LAD-Reasoner, a customized tiny multimodal language model built on Qwen2.5-VL 3B. Our approach leverages a two-stage training paradigm that first employs Supervised Fine-Tuning (SFT) for fine-grained visual understanding, followed by Group Relative Policy Optimization (GRPO) to refine logical anomaly detection and enforce coherent, human-readable reasoning. Crucially, reward signals are derived from both the detection accuracy and the structural quality of the outputs, obviating the need for building chain of thought (CoT) reasoning data. 
Experiments on the MVTec LOCO AD dataset show that LAD-Reasoner, though significantly smaller, matches the performance of Qwen2.5-VL-72B in accuracy and F1 score, and further excels in producing concise and interpretable rationales. This unified design reduces reliance on large models and complex pipelines, while offering transparent and interpretable insights into logical anomaly detection. Code and data will be released.",0 "In this paper, we advance the study of AI-augmented reasoning in the context of Human-Computer Interaction (HCI), psychology and cognitive science, focusing on the critical task of visual perception. Specifically, we investigate the applicability of Multimodal Large Language Models (MLLMs) in this domain. To this end, we leverage established principles and explanations from psychology and cognitive science related to complexity in human visual perception. We use them as guiding principles for the MLLMs to compare and interpret visual content. Our study aims to benchmark MLLMs across various explainability principles relevant to visual perception. Unlike recent approaches that primarily employ advanced deep learning models to predict complexity metrics from visual content, our work does not seek to develop merely a new predictive model. Instead, we propose a novel annotation-free analytical framework to assess the utility of MLLMs as cognitive assistants for HCI tasks, using visual perception as a case study. The primary goal is to pave the way for principled study in quantifying and evaluating the interpretability of MLLMs for applications in improving human reasoning capability and uncovering biases in existing perception datasets annotated by humans.",0 "Recent work has documented striking heterogeneity in the performance of state-of-the-art vision language models (VLMs), including both multimodal language models and text-to-image models. These models are able to describe and generate a diverse array of complex, naturalistic images, yet they exhibit surprising failures on basic multi-object reasoning tasks -- such as counting, localization, and simple forms of visual analogy -- that humans perform with near perfect accuracy. To better understand this puzzling pattern of successes and failures, we turn to theoretical accounts of the binding problem in cognitive science and neuroscience, a fundamental problem that arises when a shared set of representational resources must be used to represent distinct entities (e.g., to represent multiple objects in an image), necessitating the use of serial processing to avoid interference. We find that many of the puzzling failures of state-of-the-art VLMs can be explained as arising due to the binding problem, and that these failure modes are strikingly similar to the limitations exhibited by rapid, feedforward processing in the human brain.",0 "Personal development through self-directed learning is essential in today's fast-changing world, but many learners struggle to manage it effectively. While AI tools like large language models (LLMs) have the potential for personalized learning planning, they face issues such as a lack of transparency and hallucinated information. To address this, we propose PlanGlow, an LLM-based system that generates personalized, well-structured study plans with clear explanations and controllability through user-centered interactions. 
Through mixed methods, we surveyed 28 participants and interviewed 10 before development, followed by a within-subject experiment with 24 participants to evaluate PlanGlow's performance, usability, controllability, and explainability against two baseline systems: a GPT-4o-based system and Khan Academy's Khanmigo. Results demonstrate that PlanGlow significantly improves usability, explainability, and controllability. Additionally, two educational experts assessed and confirmed the quality of the generated study plans. These findings highlight PlanGlow's potential to enhance personalized learning and address key challenges in self-directed learning.",2 "This position paper highlights a growing trend in Explainable AI (XAI) research where Large Language Models (LLMs) are used to translate outputs from explainability techniques, like feature-attribution weights, into a natural language explanation. While this approach may improve accessibility or readability for users, recent findings suggest that translating into human-like explanations does not necessarily enhance user understanding and may instead lead to overreliance on AI systems. When LLMs summarize XAI outputs without surfacing model limitations, uncertainties, or inconsistencies, they risk reinforcing the illusion of interpretability rather than fostering meaningful transparency. We argue that - instead of merely translating XAI outputs - LLMs should serve as constructive agitators, or devil's advocates, whose role is to actively interrogate AI explanations by presenting alternative interpretations, potential biases, training data limitations, and cases where the model's reasoning may break down. In this role, LLMs can facilitate users in engaging critically with AI systems and generated explanations, with the potential to reduce overreliance caused by misinterpreted or specious explanations.",0 "High-stakes domains like cyber operations need responsible and trustworthy AI methods. While large language models (LLMs) are becoming increasingly popular in these domains, they still suffer from hallucinations. This research paper provides learning outcomes from a case study with LinkQ, an open-source natural language interface that was developed to combat hallucinations by forcing an LLM to query a knowledge graph (KG) for ground-truth data during question-answering (QA). We conduct a quantitative evaluation of LinkQ using a well-known KGQA dataset, showing that the system outperforms GPT-4 but still struggles with certain question categories - suggesting that alternative query construction strategies will need to be investigated in future LLM querying systems. We discuss a qualitative study of LinkQ with two domain experts using a real-world cybersecurity KG, outlining these experts' feedback, suggestions, perceived limitations, and future opportunities for systems like LinkQ.",0 "Magnetic monopoles (MMs) are well-motivated hypothetical particles whose discovery would symmetrize Maxwell equations, explain quantization of electric charge, and probe the gauge structure of the unified theory. Recent models predict MMs with low masses, reinvigorating searches at colliders. However, most theories predict composite MMs, whose production in parton-parton collisions is expected to be suppressed. The Schwinger process, whereby MM pairs tunnel through the vacuum barrier in the presence of a strong magnetic field, is not subject to this limitation. Additionally, the Schwinger cross section can be calculated nonperturbatively. 
Together, these make it a golden channel for low-mass MM searches. We investigate the Schwinger production of MMs in heavy-ion collisions at future colliders, in collisions of cosmic rays with the atmosphere, and in decay of magnetic fields of cosmic origin. We find that a next-generation collider would provide the best sensitivity, allowing one to discover or exclude MMs with TeV-scale masses. At the same time, exploiting the infrastructure of industrial ore extraction and Antarctic ice drilling could advance the field at a faster timescale and with only a modest investment. In particular, we show with detailed calculations that the proposed experiments will be sensitive to fluxes of low-mass MMs as low as a few units of 10^{-22} cm^{-2} s^{-1} sr^{-1} in a wide range of Lorentz factors. We also propose deploying dedicated MM detectors in conjunction with cosmic ray observatories to directly investigate if the unexplained, highest energy cosmic rays are MMs. Together, the proposed efforts would define the field of MM searches in the next decades.",0 "A significant portion of roads, particularly in densely populated developing countries, lacks explicitly defined right-of-way rules. These understructured roads pose substantial challenges for autonomous vehicle motion planning, where efficient and safe navigation relies on understanding decentralized human coordination for collision avoidance. This coordination, often termed ""social driving etiquette,"" remains underexplored due to limited open-source empirical data and suitable modeling frameworks. In this paper, we present a novel dataset and modeling framework designed to study motion planning in these understructured environments. The dataset includes 20 aerial videos of representative scenarios, an image dataset for training vehicle detection models, and a development kit for vehicle trajectory estimation. We demonstrate that a consensus-based modeling approach can effectively explain the emergence of priority orders observed in our dataset, and is therefore a viable framework for decentralized collision avoidance planning.",0 "Mathematical optimization, although often leading to NP-hard models, is now capable of solving even large-scale instances within reasonable time. However, the primary focus is often placed solely on optimality. This implies that while obtained solutions are globally optimal, they are frequently not comprehensible to humans, in particular when obtained by black-box routines. In contrast, explainability is a standard requirement for results in Artificial Intelligence, but it is rarely considered in optimization yet. There are only a few studies that aim to find solutions that are both of high quality and explainable. In recent work, explainability for optimization was defined in a data-driven manner: a solution is considered explainable if it closely resembles solutions that have been used in the past under similar circumstances. To this end, it is crucial to identify a preferably small subset of features from a presumably large set that can be used to explain a solution. In mathematical optimization, feature selection has received little attention yet. In this work, we formally define the feature selection problem for explainable optimization and prove that its decision version is NP-complete. We introduce mathematical models for optimized feature selection. As their global solution requires significant computation time with modern mixed-integer linear solvers, we employ local heuristics. 
Our computational study using data that reflect real-world scenarios demonstrates that the problem can be solved efficiently in practice for instances of reasonable size.",0 "Medical image segmentation remains challenging due to the high cost of pixel-level annotations for training. In the context of weak supervision, clinician gaze data captures regions of diagnostic interest; however, its sparsity limits its use for segmentation. In contrast, vision-language models (VLMs) provide semantic context through textual descriptions but lack the explanation precision required. Recognizing that neither source alone suffices, we propose a teacher-student framework that integrates both gaze and language supervision, leveraging their complementary strengths. Our key insight is that gaze data indicates where clinicians focus during diagnosis, while VLMs explain why those regions are significant. To implement this, the teacher model first learns from gaze points enhanced by VLM-generated descriptions of lesion morphology, establishing a foundation for guiding the student model. The teacher then directs the student through three strategies: (1) Multi-scale feature alignment to fuse visual cues with textual semantics; (2) Confidence-weighted consistency constraints to focus on reliable predictions; (3) Adaptive masking to limit error propagation in uncertain areas. Experiments on the Kvasir-SEG, NCI-ISBI, and ISIC datasets show that our method achieves Dice scores of 80.78%, 80.53%, and 84.22%, respectively, improving by 3-5% over gaze baselines without increasing the annotation burden. By preserving correlations among predictions, gaze data, and lesion descriptions, our framework also maintains clinical interpretability. This work illustrates how integrating human visual attention with AI-generated semantic context can effectively overcome the limitations of individual weak supervision signals, thereby advancing the development of deployable, annotation-efficient medical AI systems. Code is available at: https://github.com/jingkunchen/FGI.git.",0 "We examine the impact of concept-informed supervision on multimodal video interpretation models using MOByGaze, a dataset containing human-annotated explanatory concepts. We introduce Concept Modality Specific Datasets (CMSDs), which consist of data subsets categorized by the modality (visual, textual, or audio) of annotated concepts. Models trained on CMSDs outperform those using traditional legacy training in both early and late fusion approaches. Notably, this approach enables late fusion models to achieve performance close to that of early fusion models. These findings underscore the importance of modality-specific annotations in developing robust, self-explainable video models and contribute to advancing interpretable multimodal learning in complex video analysis.",0 "In today's visually dominated social media landscape, predicting the perceived credibility of visual content and understanding what drives human judgment are crucial for countering misinformation. However, these tasks are challenging due to the diversity and richness of visual features. We introduce a Large Language Model (LLM)-informed feature discovery framework that leverages multimodal LLMs, such as GPT-4o, to evaluate content credibility and explain its reasoning. We extract and quantify interpretable features using targeted prompts and integrate them into machine learning models to improve credibility predictions. 
We tested this approach on 4,191 visual social media posts across eight topics in science, health, and politics, using credibility ratings from 5,355 crowdsourced workers. Our method outperformed zero-shot GPT-based predictions by 13 percent in R^2, and revealed key features like information concreteness and image format. We discuss the implications for misinformation mitigation, visual credibility, and the role of LLMs in social science.",0 "Music is a universal feature of human culture, linked to embodied cognitive functions that drive learning, action, and the emergence of creativity and individuality. Evidence highlights the critical role of statistical learning, an implicit cognitive process of the brain, in musical creativity and individuality. Despite its significance, the precise neural and computational mechanisms underpinning these dynamic and embodied cognitive processes remain poorly understood. This paper discusses how individuality and creativity emerge within the framework of the brain's statistical learning, drawing on a series of neural and computational studies. This work offers perspectives on the mechanisms driving the heterogeneous nature of statistical learning abilities and embodied mechanisms and provides a framework to explain the paradoxical phenomenon where individuals with specific cognitive traits that limit certain perceptual abilities excel in creative domains.",0 "Explanations for computer vision models are important tools for interpreting how the underlying models work. However, they are often presented in static formats, which pose challenges for users, including information overload, a gap between semantic and pixel-level information, and limited opportunities for exploration. We investigate interactivity as a mechanism for tackling these issues in three common explanation types: heatmap-based, concept-based, and prototype-based explanations. We conducted a study (N=24), using a bird identification task, involving participants with diverse technical and domain expertise. We found that while interactivity enhances user control, facilitates rapid convergence to relevant information, and allows users to expand their understanding of the model and explanation, it also introduces new challenges. To address these, we provide design recommendations for interactive computer vision explanations, including carefully selected default views, independent input controls, and constrained output spaces.",2 "Explanations for artificial intelligence (AI) systems are intended to support the people who are impacted by AI systems in high-stakes decision-making environments, such as doctors, patients, teachers, students, housing applicants, and many others. To protect people and support the responsible development of AI, explanations need to be actionable--helping people take pragmatic action in response to an AI system--and contestable--enabling people to push back against an AI system and its determinations. For many high-stakes domains, such as healthcare, education, and finance, the sociotechnical environment includes significant legal implications that impact how people use AI explanations. For example, physicians who use AI decision support systems may need information on how accepting or rejecting an AI determination will protect them from lawsuits or help them advocate for their patients. In this paper, we make the case for Legally-Informed Explainable AI, responding to the need to integrate and design for legal considerations when creating AI explanations. 
We describe three stakeholder groups with different informational and actionability needs, and provide practical recommendations to tackle design challenges for explainable AI systems that incorporate legal considerations.",2 "Models predict that more than half of all impacting meteoroids should be carbonaceous, reflecting the abundance of carbon-rich asteroids in the main belt and near-Earth space. Yet carbonaceous chondrites represent only about 4% of meteorites recovered worldwide. Here we analyse 7,982 meteoroid impacts and 540 potential meteorite falls from 19 global observation networks and demonstrate that intense thermal stress at low perihelion distances coupled with the filtering effect of Earth's atmosphere explains this mismatch. Meteoroids repeatedly subjected to intense thermal cycling near the Sun fracture and weaken, removing the most friable objects even before atmospheric entry. Our data also show that tidally disrupted meteoroid streams produce especially fragile fragments that rarely survive to the ground. Consequently, compact, higher-strength, thermally cycled bodies dominate the meteorite record. These findings reconcile the predicted carbonaceous flux with its scarcity in collections, underscoring how orbital evolution and atmospheric filtering shape the materials that reach Earth's surface.",0 "Concept bottleneck models (CBM) aim to improve model interpretability by predicting human-level ""concepts"" in a bottleneck within a deep learning model architecture. However, how the predicted concepts are used in predicting the target still either remains black-box or is simplified to maintain interpretability at the cost of prediction performance. We propose to use Fast Interpretable Greedy Sum-Trees (FIGS) to obtain Binary Distillation (BD). This new method, called FIGS-BD, distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model, while maintaining the competitive prediction performance of the CBM teacher. FIGS-BD can be used in downstream tasks to explain and decompose CBM predictions into interpretable binary-concept-interaction attributions and guide adaptive test-time intervention. Across 4 datasets, we demonstrate that our adaptive test-time intervention identifies key concepts that significantly improve performance for realistic human-in-the-loop settings that only allow for limited concept interventions.",0 "Examining the factors that counterspeech uses is at the core of understanding the optimal methods for confronting hate speech online. Various studies have assessed the emotional base factors used in counterspeech, such as emotional empathy, offensiveness, and hostility. To better understand the counterspeech used in conversations, this study distills persuasion modes into reason, emotion, and credibility and evaluates their use in two types of conversation interactions: closed (multi-turn) and open (single-turn) concerning racism, sexism, and religious bigotry. The evaluation covers the distinct behaviors seen with human-sourced as opposed to machine-generated counterspeech. It also assesses the interplay between the stance taken and the mode of persuasion seen in the counterspeech. Notably, we observe nuanced differences in the counterspeech persuasion modes used in open and closed interactions, especially in terms of the topic, with a general tendency to use reason as a persuasion mode to express the counterpoint to hate comments. 
The machine-generated counterspeech tends to exhibit an emotional persuasion mode, while human counters lean toward reason. Furthermore, our study shows that reason tends to obtain more supportive replies than other persuasion modes. The findings highlight the potential for incorporating persuasion modes into studies about countering hate speech, as they can serve as an optimal means of explainability and pave the way for the further adoption of the reply's stance and the role it plays in assessing what constitutes the optimal counterspeech.",0 "Video anomaly detection (VAD) aims to identify unexpected events in videos and has wide applications in safety-critical domains. While semi-supervised methods trained on only normal samples have gained traction, they often suffer from high false alarm rates and poor interpretability. Recently, vision-language models (VLMs) have demonstrated strong multimodal reasoning capabilities, offering new opportunities for explainable anomaly detection. However, their high computational cost and lack of domain adaptation hinder real-time deployment and reliability. Inspired by dual complementary pathways in human visual perception, we propose SlowFastVAD, a hybrid framework that integrates a fast anomaly detector with a slow anomaly detector (namely a retrieval augmented generation (RAG) enhanced VLM), to address these limitations. Specifically, the fast detector first provides coarse anomaly confidence scores, and only a small subset of ambiguous segments, rather than the entire video, is further analyzed by the slower yet more interpretable VLM for elaborate detection and reasoning. Furthermore, to adapt VLMs to domain-specific VAD scenarios, we construct a knowledge base including normal patterns based on a few normal samples and abnormal patterns inferred by VLMs. During inference, relevant patterns are retrieved and used to augment prompts for anomaly reasoning. Finally, we smoothly fuse the anomaly confidence of fast and slow detectors to enhance the robustness of anomaly detection. Extensive experiments on four benchmarks demonstrate that SlowFastVAD effectively combines the strengths of both fast and slow detectors, and achieves remarkable detection accuracy and interpretability with significantly reduced computational overhead, making it well-suited for real-world VAD applications with high reliability requirements.",0 "In recent years, autonomous driving has become a popular field of study. As control at the tire grip limit is essential during emergency situations, algorithms developed for racecars are useful for road cars too. This paper examines the use of Deep Reinforcement Learning (DRL) to solve the problem of grip limit driving in a simulated environment. The Proximal Policy Optimization (PPO) method is used to train an agent to control the steering wheel and pedals of the vehicle, using only visual inputs to achieve professional human lap times. The paper outlines the formulation of the task of time optimal driving on a race track as a deep reinforcement learning problem, and explains the chosen observations, actions, and reward functions. The results demonstrate human-like learning and driving behavior that utilizes the maximum tire grip potential.",0 "Current approaches in Explainable Deep Reinforcement Learning have a limitation in which the attention mask is misaligned with the objects in the visual input. This work addresses a spatial problem within traditional Convolutional Neural Networks (CNNs). 
We propose the Interpretable Feature Extractor (IFE) architecture, aimed at generating an accurate attention mask to illustrate both ""what"" and ""where"" the agent concentrates on in the spatial domain. Our design incorporates a Human-Understandable Encoding module to generate a fully interpretable attention mask, followed by an Agent-Friendly Encoding module to enhance the agent's learning efficiency. These two components together form the Interpretable Feature Extractor for vision-based deep reinforcement learning to enable the model's interpretability. The resulting attention mask is consistent, highly understandable by humans, accurate in the spatial dimension, and effectively highlights important objects or locations in visual input. The Interpretable Feature Extractor is integrated into the Fast and Data-efficient Rainbow framework, and evaluated on 57 ATARI games to show the effectiveness of the proposed approach on Spatial Preservation, Interpretability, and Data-efficiency. Finally, we showcase the versatility of our approach by incorporating the IFE into the Asynchronous Advantage Actor-Critic Model.",0 "Abstaining classifiers have the option to refrain from providing a prediction for instances that are difficult to classify. The abstention mechanism is designed to trade off the classifier's performance on the accepted data while ensuring a minimum number of predictions. In this setting, fairness concerns often arise when the abstention mechanism solely reduces errors for the majority groups of the data, resulting in increased performance differences across demographic groups. While a number of methods aim to reduce discrimination when abstaining, there is no mechanism that can do so in an explainable way. In this paper, we fill this gap by introducing the Interpretable and Fair Abstaining Classifier (IFAC), an algorithm that can reject predictions based on both their uncertainty and their unfairness. By rejecting possibly unfair predictions, our method reduces error and positive decision rate differences across demographic groups of the non-rejected data. Since the unfairness-based rejections are based on an interpretable-by-design method, i.e., rule-based fairness checks and situation testing, we create a transparent process that can empower human decision-makers to review the unfair predictions and make more just decisions for them. This explainable aspect is especially important in light of recent AI regulations, mandating that any high-risk decision task should be overseen by human experts to reduce discrimination risks.",0 "Existing evidence suggests that neural responses to errors are exaggerated in individuals at risk of depression and anxiety. This phenomenon has led to the possibility that the error-related negativity (ERN), a well-known neural correlate of error monitoring, could be used as a diagnostic tool for several psychological disorders. However, conflicting evidence between psychopathology and the ERN suggests that this phenomenon is modulated by variables that are yet to be identified. Socioeconomic status (SES) could potentially play a role in the relationship between the ERN and psychopathological disorders, given that SES is known to be associated with depression and anxiety. In the current study, we first tested whether SES was related to ERN amplitude. Second, we examined whether the relationship between the ERN and depression was explained by differences in SES. 
We measured error-related negativity (ERN) from a sample of adult participants from low to high socioeconomic backgrounds while controlling for their depression scores. Results show that SES correlated with variations in ERN amplitude. Specifically, we found that low-SES individuals had a larger ERN than wealthier individuals. In addition, the relationship between depression and the ERN was fully accounted for by variations in SES. Overall, our results indicate that SES predicts neural responses to errors. Findings also indicate that the link between depression and ERN may be the result of SES variations. Future research examining the links between psychopathology and error monitoring should control for SES differences, and caution is needed if they are to be used as a diagnostic tool in low-income communities.", "Traditional quality assurance (QA) methods face significant challenges in addressing the complexity, scale, and rapid iteration cycles of modern software systems and are strained by the limited resources available, leading to substantial costs associated with poor quality. The object of this research is the Quality Assurance processes for modern distributed software applications. The subject of the research is the assessment of the benefits, challenges, and prospects of integrating modern AI-oriented tools into quality assurance processes. We performed a comprehensive analysis of the implications for both verification and validation processes covering exploratory test analyses, equivalence partitioning and boundary analyses, metamorphic testing, finding inconsistencies in acceptance criteria (AC), static analyses, test case generation, unit test generation, test suite optimization and assessment, and end-to-end scenario execution. End-to-end regression of a sample enterprise application utilizing AI agents over generated test scenarios was implemented as a proof of concept, highlighting the practical use of the study. The results, with only 8.3% flaky executions of generated test cases, indicate significant potential for the proposed approaches. However, the study also identified substantial challenges for practical adoption concerning the generation of semantically identical coverage, the ""black box"" nature and lack of explainability of state-of-the-art Large Language Models (LLMs), and the tendency to correct mutated test cases to match expected results, underscoring the necessity for thorough verification of both generated artifacts and test execution results. The research demonstrates AI's transformative potential for QA but highlights the importance of a strategic approach to implementing these technologies, considering the identified limitations and the need for developing appropriate verification methodologies.",2 "Functional theories of consciousness, based on the emergence of conscious experiences from the execution of a particular function by an insentient brain, face the hard problem of consciousness, namely explaining why the insentient brain should produce any conscious experiences at all. This problem is exacerbated by the determinism characterizing the laws of classical physics, due to the resulting lack of causal potency of the emergent consciousness, which is not present already as a physical quantity in the deterministic equations of motion of the brain. Here, we present a quantum information theoretic approach to the hard problem of consciousness that avoids all of the drawbacks of emergence. 
This is achieved through reductive identification of first-person subjective conscious states with unobservable quantum state vectors in the brain, whereas the anatomically observable brain is viewed as a third-person objective construct created by classical bits of information obtained during the measurement of a subset of commuting quantum brain observables by the environment. Quantum resource theory further implies that the quantum features of consciousness granted by quantum no-go theorems cannot be replicated by any classical physical device.",0 "Explaining machine learning (ML) models using eXplainable AI (XAI) techniques has become essential to make them more transparent and trustworthy. This is especially important in high-stakes domains like healthcare, where understanding model decisions is critical to ensure ethical, sound, and trustworthy outcome predictions. However, users are often confused about which explainability method to choose for their specific use case. We present a comparative analysis of widely used explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM), within the domain of human activity recognition (HAR) utilizing graph convolutional networks (GCNs). By evaluating these methods on skeleton-based data from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case, this study provides vital insights into both approaches' strengths, limitations, and differences, offering a roadmap for selecting the most appropriate explanation method based on specific models and applications. We quantitatively and qualitatively compare these methods, focusing on feature importance ranking, interpretability, and model sensitivity through perturbation experiments. While SHAP provides detailed input feature attribution, Grad-CAM delivers faster, spatially oriented explanations, making both methods complementary depending on the application's requirements. Given the importance of XAI in enhancing trust and transparency in ML models, particularly in sensitive environments like healthcare, our research demonstrates how SHAP and Grad-CAM could complement each other to provide more interpretable and actionable model explanations.",0 "Timely and accurate diagnosis of neurodegenerative disorders, such as Alzheimer's disease, is central to disease management. Existing deep learning models require large-scale annotated datasets and often function as ""black boxes"". Additionally, datasets in clinical practice are frequently small or unlabeled, restricting the full potential of deep learning methods. Here, we introduce REMEMBER -- Retrieval-based Explainable Multimodal Evidence-guided Modeling for Brain Evaluation and Reasoning -- a new machine learning framework that facilitates zero- and few-shot Alzheimer's diagnosis using brain MRI scans through a reference-based reasoning process. Specifically, REMEMBER first trains a contrastively aligned vision-text model using expert-annotated reference data and extends pseudo-text modalities that encode abnormality types, diagnosis labels, and composite clinical descriptions. Then, at inference time, REMEMBER retrieves similar, human-validated cases from a curated dataset and integrates their contextual information through a dedicated evidence encoding module and attention-based inference head. Such an evidence-guided design enables REMEMBER to imitate the real-world clinical decision-making process by grounding predictions in retrieved imaging and textual context. 
Specifically, REMEMBER outputs diagnostic predictions alongside an interpretable report, including reference images and explanations aligned with clinical workflows. Experimental results demonstrate that REMEMBER achieves robust zero- and few-shot performance and offers a powerful and explainable framework for neuroimaging-based diagnosis in the real world, especially under limited data.",0 "The rapid advancement of AI-driven visual generation technologies has catalyzed significant breakthroughs in image manipulation, particularly in achieving photorealistic localized editing effects on natural scene images (NSIs). Despite extensive research on image quality assessment (IQA) for AI-generated images (AGIs), most studies focus on fully AI-generated outputs (e.g., text-to-image generation), leaving the quality assessment of partial-AIGC images (PAIs), i.e., images with localized AI-driven edits, an almost unexplored field. Motivated by this gap, we construct the first large-scale PAI dataset towards explainable partial-AIGC image quality assessment (EPAIQA), the EPAIQA-15K, which includes 15K images with localized AI manipulation in different regions and over 300K multi-dimensional human ratings. Based on this, we leverage large multi-modal models (LMMs) and propose a three-stage model training paradigm. This paradigm progressively trains the LMM for editing region grounding, quantitative quality scoring, and quality explanation. Finally, we develop the EPAIQA series models, which possess explainable quality feedback capabilities. Our work represents a pioneering effort in the perceptual IQA field for comprehensive PAI quality assessment.",0 "We present a theoretical and empirical investigation of the statistical behaviour of the words in a text produced by human language. To this aim, we analyse the word distribution of various texts in the Italian language selected from a specific literary corpus. We first generalise a theoretical framework that we previously elaborated to identify 'quantum mechanical statistics' in large-size texts. Then, we show that, in all analysed texts, words distribute according to 'Bose--Einstein statistics' and show significant deviations from 'Maxwell--Boltzmann statistics'. Next, we introduce an effect of 'word randomization', which instead indicates that the difference between the two statistical models is not as pronounced as in the original cases. These results confirm the empirical patterns obtained in texts in the English language and strongly indicate that identical words tend to 'clump together' as a consequence of their meaning, which can be explained as an effect of 'quantum entanglement' produced through a phenomenon of 'contextual updating'. Moreover, word randomization can be seen as the linguistic-conceptual equivalent of an increase in temperature which destroys 'coherence' and makes classical statistics prevail over quantum statistics. Some insights into the origin of quantum statistics in physics are finally provided.",0 "Large Language Models (LLMs) and Knowledge Graphs (KGs) offer a promising approach to robust and explainable Question Answering (QA). While LLMs excel at natural language understanding, they suffer from knowledge gaps and hallucinations. KGs provide structured knowledge but lack natural language interaction. Ideally, an AI system should be both robust to missing facts and easy to communicate with. 
This paper proposes such a system that integrates LLMs and KGs without requiring training, ensuring adaptability across different KGs with minimal human effort. The resulting approach can be classified as a specific form of a Retrieval Augmented Generation (RAG) with a KG, thus, it is dubbed Knowledge Graph-extended Retrieval Augmented Generation (KG-RAG). It includes a question decomposition module to enhance multi-hop information retrieval and answer explainability. Using In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting, it generates explicit reasoning chains processed separately to improve truthfulness. Experiments on the MetaQA benchmark show increased accuracy for multi-hop questions, though with a slight trade-off in single-hop performance compared to LLM with KG baselines. These findings demonstrate KG-RAG's potential to improve transparency in QA by bridging unstructured language understanding with structured knowledge retrieval.",0 "The thriving research field of concept-based explainable artificial intelligence (C-XAI) investigates how human-interpretable semantic concepts embed in the latent spaces of deep neural networks (DNNs). Post-hoc approaches therein use a set of examples to specify a concept, and determine its embeddings in DNN latent space using data driven techniques. This proved useful to uncover biases between different target (foreground or concept) classes. However, given that the background is mostly uncontrolled during training, an important question has been left unattended so far: Are/to what extent are state-of-the-art, data-driven post-hoc C-XAI approaches themselves prone to biases with respect to their backgrounds? E.g., wild animals mostly occur against vegetation backgrounds, and they seldom appear on roads. Even simple and robust C-XAI methods might abuse this shortcut for enhanced performance. A dangerous performance degradation of the concept-corner cases of animals on the road could thus remain undiscovered. This work validates and thoroughly confirms that established Net2Vec-based concept segmentation techniques frequently capture background biases, including alarming ones, such as underperformance on road scenes. For the analysis, we compare 3 established techniques from the domain of background randomization on >50 concepts from 2 datasets, and 7 diverse DNN architectures. Our results indicate that even low-cost setups can provide both valuable insight and improved background robustness.",0 "The integration of Artificial Intelligence in the development of computer systems presents a new challenge: make intelligent systems explainable to humans. This is especially vital in the field of health and well-being, where transparency in decision support systems enables healthcare professionals to understand and trust automated decisions and predictions. To address this need, tools are required to guide the development of explainable AI systems. In this paper, we introduce an evaluation framework designed to support the development of explainable AI systems for health and well-being. Additionally, we present a case study that illustrates the application of the framework in practice. 
We believe that our framework can serve as a valuable tool not only for developing explainable AI systems in healthcare but also for any AI system that has a significant impact on individuals.",0 "In this paper, we propose an Adaptive Neuro-Symbolic Learning and Reasoning Framework for digital twin technology called ``ANSR-DT."" Digital twins in industrial environments often struggle with interpretability, real-time adaptation, and human input integration. Our approach addresses these challenges by combining CNN-LSTM dynamic event detection with reinforcement learning and symbolic reasoning to enable adaptive intelligence with interpretable decision processes. This integration enhances environmental understanding while promoting continuous learning, leading to more effective real-time decision-making in human-machine collaborative applications. We evaluated ANSR-DT on synthetic industrial data, observing significant improvements over traditional approaches, with up to 99.5% accuracy for dynamic pattern recognition. The framework demonstrated superior adaptability with extended reinforcement learning training, improving explained variance from 0.447 to 0.547. Future work aims at scaling to larger datasets to test rule management beyond the current 14 rules. Our open-source implementation promotes reproducibility and establishes a foundation for future research in adaptive, interpretable digital twins for industrial applications.",0 "For a long time, the authorship of the Federalist Papers had been a subject of inquiry and debate, not only by linguists and historians but also by statisticians. In what was arguably the first Bayesian case study, Mosteller and Wallace (1963) provided the first statistical evidence for attributing all disputed papers to Madison. Our paper revisits this historical dataset but from a lens of modern language models, both small and large. We review some of the more popular Large Language Model (LLM) tools and examine them from a statistical point of view in the context of text classification. We investigate whether, without any attempt to fine-tune, the general embedding constructs can be useful for stylometry and attribution. We explain differences between various word/phrase embeddings and discuss how to aggregate them in a document. Contrary to our expectations, we exemplify that dimension expansion with word embeddings may not always be beneficial for attribution relative to dimension reduction with topic embeddings. Our experiments demonstrate that default LLM embeddings (even after manual fine-tuning) may not consistently improve authorship attribution accuracy. Instead, Bayesian analysis with topic embeddings trained on ``function words"" yields superior out-of-sample classification performance. This suggests that traditional (small) statistical language models, with their interpretability and solid theoretical foundation, can offer significant advantages in authorship attribution tasks. The code used in this analysis is available at github.com/sowonjeong/slm-to-llm",0 "In the past few years, Language Models (LMs) have shown par-human capabilities in several domains. Despite their practical applications and exceeding user consumption, they are susceptible to jailbreaks when malicious input exploits the LM's weaknesses, causing it to deviate from its intended behavior. Current defensive strategies either classify the input prompt as adversarial or prevent LMs from generating harmful outputs. 
However, it is challenging to explain the reason behind the malicious nature of the jailbreak, which results in a wide variety of closed-box approaches. In this research, we propose and demonstrate that system-prompt attention from Small Language Models (SLMs) can be used to characterize adversarial prompts, providing a novel, explainable, and cheaper defense approach called AttentionDefense. Our research suggests that the attention mechanism is an integral component in understanding and explaining how LMs respond to malicious input that is not captured in the semantic meaning of text embeddings. The proposed AttentionDefense is evaluated against existing jailbreak benchmark datasets. Ablation studies show that SLM-based AttentionDefense has equivalent or better jailbreak detection performance compared to text embedding-based classifiers and GPT-4 zero-shot detectors. To further validate the efficacy of the proposed approach, we generate a dataset of novel jailbreak variants of the existing benchmark dataset using a closed-loop LLM-based multi-agent system. We demonstrate that the proposed AttentionDefense approach performs robustly on this novel jailbreak dataset while existing approaches suffer in performance. Additionally, for practical purposes, AttentionDefense is an ideal solution as it has the computation requirements of a small LM but the performance of an LLM detector.",0 "Pumpkin leaf diseases are significant threats to agricultural productivity, requiring a timely and precise diagnosis for effective management. Traditional identification methods are laborious and susceptible to human error, emphasizing the necessity for automated solutions. This study employs the ""Pumpkin Leaf Disease Dataset"", which comprises 2000 high-resolution images separated into five categories: downy mildew, powdery mildew, mosaic disease, bacterial leaf spot, and healthy leaves. The dataset was rigorously assembled from several agricultural fields to ensure a strong representation for model training. We explored several prominent deep learning architectures, including DenseNet201, DenseNet121, DenseNet169, Xception, ResNet50, ResNet101 and InceptionResNetV2, and observed that ResNet50 performed most effectively, with an accuracy of 90.5% and comparable precision, recall, and F1-Score. We used Explainable AI (XAI) approaches like Grad-CAM, Grad-CAM++, Score-CAM, and Layer-CAM to provide meaningful representations of model decision-making processes, which improved understanding and trust in automated disease diagnostics. These findings demonstrate ResNet50's potential to revolutionize pumpkin leaf disease detection, allowing for earlier and more accurate treatments.",0 "Focal cortical dysplasia (FCD) type II is a major cause of drug-resistant epilepsy, often curable only by surgery. Despite its clinical importance, the diagnosis of FCD is very difficult in MRI because of subtle abnormalities, leading to misdiagnosis. This study investigates the use of 3D convolutional neural networks (3D-CNNs) for FCD detection, using a dataset of 170 subjects (85 FCD patients and 85 controls) composed of T1-weighted and FLAIR MRI scans. In particular, it investigates the benefits obtained from cross-modality transfer learning and explainable artificial intelligence (XAI) techniques, in particular Gradient-weighted Class Activation Mapping (Grad-CAM). ResNet architectures (ResNet-18, -34, and -50) were implemented, employing transfer learning strategies that used pre-trained weights from segmentation tasks. 
Results indicate that transfer learning significantly enhances classification accuracy (up to 80.3%) and interpretability, as measured by a novel Heat-Score metric, which evaluates the model's focus on clinically relevant regions. Improvements in the Heat-Score metric underscore the model's seizure zone localization capabilities, bringing AI predictions and clinical insights closer together. These results highlight the importance of transfer learning, including cross-modality, and XAI in advancing AI-based medical diagnostics, especially for difficult-to-diagnose pathologies such as FCD.",2 "As the innovative potential of quantum technologies comes into focus, so too does the urgent need to address their ethical implications. While many voices highlight the importance of ethical engagement, less attention has been paid to the conditions that make such engagement possible. In this article, I argue that technological understanding is a foundational capacity for meaningful ethical reflection on emerging technologies like quantum technologies. Drawing on De Jong & De Haro's account of technological understanding (2025a; 2025b), I clarify what such understanding entails and how it enables ethical enquiry. I contend that ethical assessment, first and foremost, requires an understanding of what quantum technologies can do - their functional capacities and, by extension, their potential applications. Current efforts to build engagement capacities among broader audiences - within and beyond academic contexts - tend, however, to focus on explaining the underlying quantum mechanics. Instead, I advocate a shift from a physics-first to a functions-first approach: fostering an understanding of quantum technologies' capabilities as the basis for ethical reflection. Presenting technological understanding as an epistemic requirement for meaningful ethical engagement may appear to raise the bar for participation. However, by decoupling functional understanding from technical expertise, this condition becomes attainable for a broader group, contributing not only to a well-informed but also to a more inclusive ethical debate.",0 "Instruction-based Image Editing (IIE) models have made significant improvements due to the progress of multimodal large language models (MLLMs) and diffusion models, which can understand and reason about complex editing instructions. In addition to advancing current IIE models, accurately evaluating their output has become increasingly critical and challenging. Current IIE evaluation methods and their evaluation procedures often fall short of aligning with human judgment and often lack explainability. To address these limitations, we propose JUdgement through Routing of Expertise (JURE). Each expert in JURE is a pre-selected model assumed to be equipped with an atomic expertise that can provide useful feedback to judge output, and the router dynamically routes the evaluation task of a given instruction and its output to appropriate experts, aggregating their feedback into a final judgment. JURE is trustworthy in two respects. First, it can effortlessly provide explanations of its judgment by examining the routed experts and their feedback. Second, experimental results demonstrate that JURE is reliable by achieving superior alignment with human judgments, setting a new standard for automated IIE evaluation. 
Moreover, JURE's flexible design is future-proof - modular experts can be seamlessly replaced or expanded to accommodate advancements in IIE, maintaining consistently high evaluation quality. Our evaluation data and results are available at https://github.com/Cyyyyyrus/JURE.git.",0 "The reasoning abilities of large language models (LLMs) have improved with chain-of-thought (CoT) prompting, allowing models to solve complex tasks stepwise. However, training CoT capabilities requires detailed reasoning data, which is often scarce. The self-taught reasoner (STaR) framework addresses this by using reinforcement learning to automatically generate reasoning steps, reducing reliance on human-labeled data. Although STaR and its variants have demonstrated empirical success, a theoretical foundation explaining these improvements is lacking. This work provides a theoretical framework for understanding the effectiveness of reinforcement learning on CoT reasoning and STaR. Our contributions are: (1) criteria for the quality of pre-trained models necessary to initiate effective reasoning improvement; (2) an analysis of policy improvement, showing why LLM reasoning improves iteratively with STaR; (3) conditions for convergence to an optimal reasoning policy; and (4) an examination of STaR's robustness, explaining how it can improve reasoning even when incorporating occasional incorrect steps. This framework aims to bridge empirical findings with theoretical insights, advancing reinforcement learning approaches for reasoning in LLMs.",0 "Expectations critically shape how people form judgments about robots, influencing whether they view failures as minor technical glitches or deal-breaking flaws. This work explores how high and low expectations, induced through brief video priming, affect user perceptions of robot failures and the utility of explanations in HRI. We conducted two online studies ($N=600$ total participants); each was replicated with two robots of different embodiments, Furhat and Pepper. In our first study, grounded in expectation theory, participants were divided into two groups, one primed with positive and the other with negative expectations regarding the robot's performance, establishing distinct expectation frameworks. This validation study aimed to verify whether the videos could reliably establish low- and high-expectation profiles. In the second study, participants were primed using the validated videos and then viewed a new scenario in which the robot failed at a task. Half viewed a version where the robot explained its failure, while the other half received no explanation. We found that explanations significantly improved user perceptions of Furhat, especially when participants were primed to have lower expectations. Explanations boosted satisfaction and enhanced the robot's perceived expressiveness, indicating that effectively communicating the cause of errors can help repair user trust. By contrast, Pepper's explanations produced minimal impact on user attitudes, suggesting that a robot's embodiment and style of interaction could determine whether explanations can successfully offset negative impressions. Together, these findings underscore the need to consider users' expectations when tailoring explanation strategies in HRI. 
When expectations are initially low, a cogent explanation can make the difference between dismissing a failure and appreciating the robot's transparency and effort to communicate.",2 "Trustworthy AI encompasses many aspirational aspects for aligning AI systems with human values, including fairness, privacy, robustness, explainability, and uncertainty quantification. However, efforts to enhance one aspect often introduce unintended trade-offs that negatively impact others, making it challenging to improve all aspects simultaneously. In this position paper, we review notable approaches to these five aspects and systematically consider every pair, detailing the negative interactions that can arise. For example, applying differential privacy to model training can amplify biases in the data, undermining fairness. Drawing on these findings, we take the position that addressing trustworthiness along each axis in isolation is insufficient. Instead, research on Trustworthy AI must account for intersectionality between aspects and adopt a holistic view across all relevant axes at once. To illustrate our perspective, we provide guidance on how researchers can work towards integrated trustworthiness, a case study on how intersectionality applies to the financial industry, and alternative views to our position.",0 "Conditional image generation has gained significant attention for its ability to personalize content. However, the field faces challenges in developing task-agnostic, reliable, and explainable evaluation metrics. This paper introduces CIGEval, a unified agentic framework for comprehensive evaluation of conditional image generation tasks. CIGEval utilizes large multimodal models (LMMs) as its core, integrating a multi-functional toolbox and establishing a fine-grained evaluation framework. Additionally, we synthesize evaluation trajectories for fine-tuning, empowering smaller LMMs to autonomously select appropriate tools and conduct nuanced analyses based on tool outputs. Experiments across seven prominent conditional image generation tasks demonstrate that CIGEval (GPT-4o version) achieves a high correlation of 0.4625 with human assessments, closely matching the inter-annotator correlation of 0.47. Moreover, when implemented with 7B open-source LMMs using only 2.3K training trajectories, CIGEval surpasses the previous GPT-4o-based state-of-the-art method. Case studies on GPT-4o image generation highlight CIGEval's capability in identifying subtle issues related to subject consistency and adherence to control guidance, indicating its great potential for automating evaluation of image generation tasks with human-level reliability.",2 "In this study, we introduce the Fuzzy Additive Model (FAM) and FAM with Explainability (FAME) as a solution for Explainable Artificial Intelligence (XAI). The family consists of three layers: (1) a Projection Layer that compresses the input space, (2) a Fuzzy Layer built upon Single Input-Single Output Fuzzy Logic Systems (SFLS), where SFLS functions as subnetworks within an additive index model, and (3) an Aggregation Layer. This architecture integrates the interpretability of SFLS, which uses human-understandable if-then rules, with the explainability of input-output relationships, leveraging the additive model structure. Furthermore, using SFLS inherently addresses issues such as the curse of dimensionality and rule explosion. To further improve interpretability, we propose a method for sculpting antecedent space within FAM, transforming it into FAME. 
We show that FAME captures the input-output relationships with fewer active rules, thus improving clarity. To learn the FAM family, we present a deep learning framework. Through the presented comparative results, we demonstrate the promising potential of FAME in reducing model complexity while retaining interpretability, positioning it as a valuable tool for XAI.",0 "Deep neural networks (DNNs) have demonstrated remarkable success, yet their wide adoption is often hindered by their opaque decision-making. To address this, attribution methods have been proposed to assign relevance values to each part of the input. However, different methods often produce entirely different relevance maps, necessitating the development of standardized metrics to evaluate them. Typically, such evaluation is performed through perturbation, wherein high- or low-relevance regions of the input image are manipulated to examine the change in prediction. In this work, we introduce a novel approach, which harnesses image generation models to perform targeted perturbation. Specifically, we focus on inpainting only the high-relevance pixels of an input image to modify the model's predictions while preserving image fidelity. This is in contrast to existing approaches, which often produce out-of-distribution modifications, leading to unreliable results. Through extensive experiments, we demonstrate the effectiveness of our approach in generating meaningful rankings across a wide range of models and attribution methods. Crucially, we establish that the ranking produced by our metric exhibits significantly higher correlation with human preferences compared to existing approaches, underscoring its potential for enhancing interpretability in DNNs.",2 "With the rise and widespread use of Large Language Models (LLMs), ensuring their safety is crucial to prevent harm to humans and promote ethical behaviors. However, directly assessing value valence (i.e., support or oppose) by leveraging large-scale data training is untrustworthy and inexplainable. We assume that emulating how humans rely on social norms to make moral decisions can help LLMs understand and predict moral judgment. However, capturing human values remains a challenge, as multiple related norms might conflict in specific contexts. Norms that are upheld by the majority and promote the well-being of society are more likely to be accepted and widely adopted (e.g., ""don't cheat""). Therefore, it is essential for LLMs to identify the appropriate norms for a given scenario before making moral decisions. To this end, we introduce a novel moral judgment approach called \textit{ClarityEthic} that leverages LLMs' reasoning ability and contrastive learning to uncover relevant social norms for human actions from different perspectives and select the most reliable one to enhance judgment accuracy. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches in moral judgment tasks. Moreover, human evaluations confirm that the generated social norms provide plausible explanations that support the judgments. This suggests that modeling human moral judgment by emulating human moral strategies is promising for improving the ethical behaviors of LLMs.",0 "Counterfactual explanations are the de facto standard when tasked with interpreting decisions of (opaque) predictive models. Their generation is often subject to technical and domain-specific constraints that aim to maximise their real-life utility. 
In addition to considering desiderata pertaining to the counterfactual instance itself, guaranteeing existence of a viable path connecting it with the factual data point has recently gained relevance. While current explainability approaches ensure that the steps of such a journey as well as its destination adhere to selected constraints, they neglect the multiplicity of these counterfactual paths. To address this shortcoming we introduce the novel concept of explanatory multiverse that encompasses all the possible counterfactual journeys. We define it using vector spaces, showing how to navigate, reason about and compare the geometry of counterfactual trajectories found within it. To this end, we overview their spatial properties -- such as affinity, branching, divergence and possible future convergence -- and propose an all-in-one metric, called opportunity potential, to quantify them. Notably, the explanatory process offered by our method grants explainees more agency by allowing them to select counterfactuals not only based on their absolute differences but also according to the properties of their connecting paths. To demonstrate real-life flexibility, benefit and efficacy of explanatory multiverse we propose its graph-based implementation, which we use for qualitative and quantitative evaluation on six tabular and image data sets.",0 "As social robots and other artificial agents become more conversationally capable, it is important to understand whether the content and meaning of self-disclosure towards these agents changes depending on the agent's embodiment. In this study, we analysed conversational data from three controlled experiments in which participants self-disclosed to a human, a humanoid social robot, and a disembodied conversational agent. Using sentence embeddings and clustering, we identified themes in participants' disclosures, which were then labelled and explained by a large language model. We subsequently assessed whether these themes and the underlying semantic structure of the disclosures varied by agent embodiment. Our findings reveal strong consistency: thematic distributions did not significantly differ across embodiments, and semantic similarity analyses showed that disclosures were expressed in highly comparable ways. These results suggest that while embodiment may influence human behaviour in human-robot and human-agent interactions, people tend to maintain a consistent thematic focus and semantic structure in their disclosures, whether speaking to humans or artificial interlocutors.",2 "Therapeutic development is a costly and high-risk endeavor that is often plagued by high failure rates. To address this, we introduce TxGemma, a suite of efficient, generalist large language models (LLMs) capable of therapeutic property prediction as well as interactive reasoning and explainability. Unlike task-specific models, TxGemma synthesizes information from diverse sources, enabling broad application across the therapeutic development pipeline. The suite includes 2B, 9B, and 27B parameter models, fine-tuned from Gemma-2 on a comprehensive dataset of small molecules, proteins, nucleic acids, diseases, and cell lines. Across 66 therapeutic development tasks, TxGemma achieved superior or comparable performance to the state-of-the-art generalist model on 64 (superior on 45), and against state-of-the-art specialist models on 50 (superior on 26). 
Fine-tuning TxGemma models on therapeutic downstream tasks, such as clinical trial adverse event prediction, requires less training data than fine-tuning base LLMs, making TxGemma suitable for data-limited applications. Beyond these predictive capabilities, TxGemma features conversational models that bridge the gap between general LLMs and specialized property predictors. These allow scientists to interact in natural language, provide mechanistic reasoning for predictions based on molecular structure, and engage in scientific discussions. Building on this, we further introduce Agentic-Tx, a generalist therapeutic agentic system powered by Gemini 2.5 that reasons, acts, manages diverse workflows, and acquires external domain knowledge. Agentic-Tx surpasses prior leading models on the Humanity's Last Exam benchmark (Chemistry & Biology) with 52.3% relative improvement over o3-mini (high) and 26.7% over o3-mini (high) on GPQA (Chemistry) and excels with improvements of 6.3% (ChemBench-Preference) and 2.4% (ChemBench-Mini) over o3-mini (high).",2 "This paper presents a novel framework for accessible and pedagogically-grounded robot explainability, designed to support human-robot interaction (HRI) with users who have diverse cognitive, communicative, or learning needs. We combine principles from Universal Design for Learning (UDL) and Universal Design (UD) with symbolic communication strategies to facilitate the alignment of mental models between humans and robots. Our approach employs Asterics Grid and ARASAAC pictograms as a multimodal, interpretable front-end, integrated with a lightweight HTTP-to-ROS 2 bridge that enables real-time interaction and explanation triggering. We emphasize that explainability is not a one-way function but a bidirectional process, where human understanding and robot transparency must co-evolve. We further argue that in educational or assistive contexts, the role of a human mediator (e.g., a teacher) may be essential to support shared understanding. We validate our framework with examples of multimodal explanation boards and discuss how it can be extended to different scenarios in education, assistive robotics, and inclusive AI.",0 "In the field of autonomous surface vehicles (ASVs), devising decision-making and obstacle avoidance solutions that address maritime COLREGs (Collision Regulations), primarily defined for human operators, has long been a pressing challenge. Recent advancements in explainable Artificial Intelligence (AI) and machine learning have shown promise in enabling human-like decision-making. Notably, significant developments have occurred in the application of Large Language Models (LLMs) to the decision-making of complex systems, such as self-driving cars. The textual and somewhat ambiguous nature of COLREGs (from an algorithmic perspective), however, poses challenges that align well with the capabilities of LLMs, suggesting that LLMs may become increasingly suitable for this application soon. This paper presents and demonstrates the first application of LLM-based decision-making and control for ASVs. The proposed method establishes a high-level decision-maker that uses online collision risk indices and key measurements to make decisions for safe manoeuvres. A tailored design and runtime structure is developed to support training and real-time action generation on a realistic ASV model. Local planning and control algorithms are integrated to execute the commands for waypoint following and collision avoidance at a lower level. 
To the authors' knowledge, this study represents the first attempt to apply explainable AI to the dynamic control problem of maritime systems recognising the COLREGs rules, opening new avenues for research in this challenging area. Results obtained across multiple test scenarios demonstrate the system's ability to maintain online COLREGs compliance, accurate waypoint tracking, and feasible control, while providing human-interpretable reasoning for each decision.",0 "Improving decision-making capabilities in Autonomous Intelligent Vehicles (AIVs) has been a heated topic in recent years. Despite advancements, training machines to capture regions of interest for comprehensive scene understanding, like human perception and reasoning, remains a significant challenge. This study introduces a novel framework, Human Attention-based Explainable Guidance for Intelligent Vehicle Systems (AEGIS). AEGIS utilizes human attention, converted from eye-tracking into a pre-trained attention model, to guide reinforcement learning (RL) models to identify critical regions of interest for decision-making. By collecting 1.2 million frames from 20 participants across six scenarios, AEGIS pre-trains a model to predict human attention patterns.",2 "The increasing complexity of LLMs presents significant challenges to their transparency and interpretability, necessitating the use of eXplainable AI (XAI) techniques to enhance trustworthiness and usability. This study introduces a comprehensive evaluation framework with four novel metrics for assessing the effectiveness of five XAI techniques across five LLMs and two downstream tasks. We apply this framework to evaluate several XAI techniques: LIME, SHAP, Integrated Gradients, Layer-wise Relevance Propagation (LRP), and Attention Mechanism Visualization (AMV), using the IMDB Movie Reviews and Tweet Sentiment Extraction datasets. The evaluation focuses on four key metrics: Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. Our results show that LIME consistently achieves high scores across multiple LLMs and evaluation metrics, while AMV demonstrates superior Robustness and near-perfect Consistency. LRP excels in Contrastivity, particularly with more complex models. Our findings provide valuable insights into the strengths and limitations of different XAI methods, offering guidance for developing and selecting appropriate XAI techniques for LLMs.",0 "The commonsense reasoning capabilities of vision-language models (VLMs), especially in abductive reasoning and defeasible reasoning, remain poorly understood. Most benchmarks focus on typical visual scenarios, making it difficult to discern whether model performance stems from keen perception and reasoning skills, or reliance on pure statistical recall. We argue that by focusing on atypical events in videos, clearer insights can be gained on the core capabilities of VLMs. Explaining and understanding such out-of-distribution events requires models to extend beyond basic pattern recognition and regurgitation of their prior knowledge. To this end, we introduce BlackSwanSuite, a benchmark for evaluating VLMs' ability to reason about unexpected events through abductive and defeasible tasks. Our tasks artificially limit the amount of visual information provided to models while questioning them about hidden unexpected events, or provide new visual information that could change an existing hypothesis about the event. 
We curate a comprehensive benchmark suite comprising over 3,800 MCQ, 4,900 generative and 6,700 yes/no questions, spanning 1,655 videos. After extensively evaluating various state-of-the-art VLMs, including GPT-4o and Gemini 1.5 Pro, as well as open-source VLMs such as LLaVA-Video, we find significant performance gaps of up to 32% from humans on these tasks. Our findings reveal key limitations in current VLMs, emphasizing the need for enhanced model architectures and training strategies. Our data and leaderboard are available at blackswan.cs.ubc.ca.",0 "Service and assistive robots are increasingly being deployed in dynamic social environments; however, ensuring transparent and explainable interactions remains a significant challenge. This paper presents a multimodal explainability module that integrates vision language models and heat maps to improve transparency during navigation. The proposed system enables robots to perceive, analyze, and articulate their observations through natural language summaries. User studies (n=30) showed a majority preference for real-time explanations, indicating improved trust and understanding. Our experiments were validated through confusion matrix analysis to assess the level of agreement with human expectations. Our experimental and simulation results emphasize the effectiveness of explainability in autonomous navigation, enhancing trust and interpretability.",2 "This study investigates the use of spiral geometry in superconducting resonators to achieve high intrinsic quality factors, crucial for applications in quantum computation and quantum sensing. We fabricated Archimedean Spiral Resonators (ASRs) using domain-matched epitaxially grown titanium nitride (TiN) on silicon wafers, achieving intrinsic quality factors of $Q_\mathrm{i} = (9.6 \pm 1.5) \times 10^6$ at the single-photon level and $Q_\mathrm{i} = (9.91 \pm 0.39) \times 10^7$ at high power, significantly outperforming traditional coplanar waveguide (CPW) resonators. We conducted a comprehensive numerical analysis using COMSOL to calculate surface participation ratios (PRs) at critical interfaces: metal-air, metal-substrate, and substrate-air. Our findings reveal that ASRs have lower PRs than CPWs, explaining their superior quality factors and reduced coupling to two-level systems (TLSs).",0 "The field of explainable Automatic Fact-Checking (AFC) aims to enhance the transparency and trustworthiness of automated fact-verification systems by providing clear and comprehensible explanations. However, the effectiveness of these explanations depends on their actionability -- their ability to empower users to make informed decisions and mitigate misinformation. Despite actionability being a critical property of high-quality explanations, no prior research has proposed a dedicated method to evaluate it. This paper introduces FinGrAct, a fine-grained evaluation framework that can access the web and is designed to assess actionability in AFC explanations through well-defined criteria and an evaluation dataset. FinGrAct surpasses state-of-the-art (SOTA) evaluators, achieving the highest Pearson and Kendall correlation with human judgments while demonstrating the lowest ego-centric bias, making it a more robust approach for actionability evaluation in AFC.",0 "Emojis, which encapsulate semantics beyond mere words or phrases, have become prevalent in social network communications. This has spurred increasing scholarly interest in exploring their attributes and functionalities. 
However, emoji-related research and applications face two primary challenges. First, researchers typically rely on crowd-sourcing to annotate emojis in order to understand their sentiments, usage intentions, and semantic meanings. Second, subjective interpretations by users can often lead to misunderstandings of emojis and create communication barriers. Large Language Models (LLMs) have achieved significant success in various annotation tasks, with ChatGPT demonstrating expertise across multiple domains. In our study, we assess ChatGPT's effectiveness in handling previously annotated and downstream tasks. Our objective is to validate the hypothesis that ChatGPT can serve as a viable alternative to human annotators in emoji research and that its ability to explain emoji meanings can enhance clarity and transparency in online communications. Our findings indicate that ChatGPT has extensive knowledge of emojis. It is adept at elucidating the meaning of emojis across various application scenarios and demonstrates the potential to replace human annotators in a range of tasks.",2 "Clinical coding is a critical task in healthcare, although traditional methods for automating clinical coding may not provide sufficient explicit evidence for coders in production environments. This evidence is crucial, as medical coders have to make sure there exists at least one explicit passage in the input health record that justifies the attribution of a code. We therefore propose to reframe the task as an entity linking problem, in which each document is annotated with its set of codes and respective textual evidence, enabling better human-machine collaboration. By leveraging parameter-efficient fine-tuning of Large Language Models (LLMs), together with constrained decoding, we introduce three approaches to solve this problem that prove effective at disambiguating clinical mentions and that perform well in few-shot scenarios.",0 "Convolutional neural networks (CNNs) for time series classification (TSC) are being increasingly used in applications ranging from quality prediction to medical diagnosis. The black box nature of these models makes understanding their prediction process difficult. This issue is crucial because CNNs are prone to learning shortcuts and biases, compromising their robustness and alignment with human expectations. To assess whether such mechanisms are being used and the associated risk, it is essential to provide model explanations that reflect the inner workings of the model. Concept Extraction (CE) methods offer such explanations, but have mostly been developed for the image domain so far, leaving a gap in the time series domain. In this work, we present a CE and localization method tailored to the time series domain, based on the ideas of CE methods for images. We propose the novel method ECLAD-ts, which provides post-hoc global explanations based on how the models encode subsets of the input at different levels of abstraction. For this, concepts are produced by clustering timestep-wise aggregations of CNN activation maps, and their importance is computed based on their impact on the prediction process. We evaluate our method on synthetic and natural datasets. Furthermore, we assess the advantages and limitations of CE in time series through empirical results. 
Our results show that ECLAD-ts effectively explains models by leveraging their internal representations, providing useful insights about their prediction process.",0 "Counterfactual explanations are a widely used approach in Explainable AI, offering actionable insights into decision-making by illustrating how small changes to input data can lead to different outcomes. Despite their importance, evaluating the quality of counterfactual explanations remains an open problem. Traditional quantitative metrics, such as sparsity or proximity, fail to fully account for human preferences in explanations, while user studies are insightful but not scalable. Moreover, relying only on a single overall satisfaction rating does not lead to a nuanced understanding of why certain explanations are effective or not. To address this, we analyze a dataset of counterfactual explanations that were evaluated by 206 human participants, who rated not only overall satisfaction but also seven explanatory criteria: feasibility, coherence, complexity, understandability, completeness, fairness, and trust. Modeling overall satisfaction as a function of these criteria, we find that feasibility (the actionability of suggested changes) and trust (the belief that the changes would lead to the desired outcome) consistently stand out as the strongest predictors of user satisfaction, though completeness also emerges as a meaningful contributor. Crucially, even excluding feasibility and trust, other metrics explain 58% of the variance, highlighting the importance of additional explanatory qualities. Complexity appears independent, suggesting more detailed explanations do not necessarily reduce satisfaction. Strong metric correlations imply a latent structure in how users judge quality, and demographic background significantly shapes ranking patterns. These insights inform the design of counterfactual algorithms that adapt explanatory qualities to user expertise and domain context.",2 "Deep learning has been successfully applied to medical image segmentation, enabling accurate identification of regions of interest such as organs and lesions. This approach works effectively across diverse datasets, including those with single-image contrast, multi-contrast, and multimodal imaging data. To improve human understanding of these black-box models, there is a growing need for Explainable AI (XAI) techniques for model transparency and accountability. Previous research has primarily focused on post hoc pixel-level explanations, using gradient-based and perturbation-based approaches. These methods rely on gradients or perturbations to explain model predictions. However, these pixel-level explanations often struggle with the complexity inherent in multi-contrast magnetic resonance imaging (MRI) segmentation tasks, and the sparsely distributed explanations have limited clinical relevance. In this study, we propose using contrast-level Shapley values to explain state-of-the-art models trained on standard metrics used in brain tumor segmentation. Our results demonstrate that Shapley analysis provides valuable insights into the behavior of different models used for tumor segmentation. 
We demonstrated that U-Net is biased towards over-weighting T1-contrast and FLAIR, while Swin-UNETR provided a cross-contrast understanding with a balanced Shapley distribution.",0 "While deep learning has exhibited remarkable predictive capabilities in various medical image tasks, its inherent black-box nature has hindered its widespread implementation in real-world healthcare settings. Our objective is to unveil the decision-making processes of deep learning models in the context of glaucoma classification by employing several Class Activation Map (CAM) techniques to generate model focus regions and comparing them with clinical domain knowledge of the anatomical area (optic cup, optic disk, and blood vessels). Four deep neural networks, including VGG-11, ResNet-18, DeiT-Tiny, and Swin Transformer-Tiny, were developed using binary diagnostic labels of glaucoma, and five CAM methods (Grad-CAM, XGrad-CAM, Score-CAM, Eigen-CAM, and Layer-CAM) were employed to highlight the model focus area. We applied the paired-sample t-test to compare the percentage of anatomies in the model focus area to the proportion of anatomies in the entire image. After that, Pearson's and Spearman's correlation tests were implemented to examine the relationship between model predictive ability and the percentage of anatomical structures in the model focus area. On five public glaucoma datasets, all deep learning models consistently displayed statistically significantly higher percentages of anatomical structures in the focus area than the proportions of anatomical structures in the entire image. Also, we validated the positive relationship between the percentage of anatomical structures in the focus area and model predictive performance. Our study provides evidence of the convergence of decision logic between deep neural networks and human clinicians through rigorous statistical tests. We anticipate that it can help alleviate clinicians' concerns regarding the trustworthiness of deep learning in healthcare. For reproducibility, the code and dataset have been released at GitHub.",0 "Youth, while tech-savvy and highly active on social media, are still vulnerable to online privacy and security risks. Therefore, it is critical to understand how they negotiate and manage social connections versus protecting themselves in online contexts. In this work, we conducted a thematic analysis of 1,318 private conversations on Instagram from 149 youth aged 13-21 to understand the digital privacy and security topics they discussed, if and how they engaged in risky privacy behaviors, and how they balanced the benefits and risks (i.e., privacy calculus) of making these decisions. Overall, youth were forthcoming when broaching a wide range of topics on digital privacy and security, ranging from password management and account access challenges to shared experiences of being victims of privacy risks. However, they also openly engaged in risky behaviors, such as sharing personal account information with peers and even perpetrating privacy and security risks against others. Nonetheless, we found many of these behaviors could be explained by the unique ""privacy calculus"" of youth, where they often prioritized social benefits over potential risks; for instance, youth often shared account credentials with peers to foster social connection and affirmation. As such, we provide a nuanced understanding of youth decision-making regarding digital security and privacy, highlighting positive behaviors, tensions, and points of concern. 
We encourage future research to continue challenging potentially untrue narratives regarding youth and their digital privacy and security, and to unpack the nuances of a privacy calculus that may differ from that of adults.",0 "Concept-based eXplainable AI (C-XAI) is a rapidly growing research field that enhances AI model interpretability by leveraging intermediate, human-understandable concepts. This approach not only enhances model transparency but also enables human intervention, allowing users to interact with these concepts to refine and improve the model's performance. Concept Bottleneck Models (CBMs) explicitly predict concepts before making final decisions, enabling interventions to correct misclassified concepts. While CBMs remain effective in Out-Of-Distribution (OOD) settings with intervention, they struggle to match the performance of black-box models. Concept Embedding Models (CEMs) address this by learning concept embeddings from both concept predictions and input data, enhancing In-Distribution (ID) accuracy but reducing the effectiveness of interventions, especially in OOD scenarios. In this work, we propose the Variational Concept Embedding Model (V-CEM), which leverages variational inference to improve intervention responsiveness in CEMs. We evaluated our model on various textual and visual datasets in terms of ID performance, intervention responsiveness in both ID and OOD settings, and Concept Representation Cohesiveness (CRC), a metric we propose to assess the quality of the concept embedding representations. The results demonstrate that V-CEM retains CEM-level ID performance while achieving intervention effectiveness similar to CBM in OOD settings, effectively reducing the gap between interpretability (intervention) and generalization (performance).",0 "Repetitive loading of bone is associated with microdamage accumulation and material property degradation that may ultimately result in fatigue fracture. Our previous work used continuum damage mechanics (CDM)-based finite element (FE) modeling to predict stiffness loss and fatigue failure in whole bone; however, this model did not account for inter-specimen variability in fatigue behaviour and multiaxial loading effects, which limited its applied efficacy. In this study, we refined the CDM-based FE model to predict experimental fatigue-life measurements from 21 whole rabbit tibiae subjected to cyclic axial compression with different magnitudes of superposed torsion. Machine learning (ML) methods were used to predict damage parameters from initial, undamaged conditions, which were then used to predict stiffness degradation and fatigue failure with the CDM-based FE modeling pipeline. Regression analysis was used to compare CDM-based FE fatigue-life predictions with experimental measurements. A random forest ML model predicted specimen-specific damage parameters with high accuracy (R2 = 0.85) and the CDM-based FE models demonstrated remarkable predictive capability, explaining up to 91% of the variance in fatigue-life measurements. Stiffness degradation profiles also followed a similar trend to experimental measurements; however, this agreement worsened with the level of superposed torsion, suggesting that additional refinements to the model may be necessary. These findings demonstrate the efficacy of integrating ML with CDM-based FE modeling to predict fatigue life and stiffness degradation. 
The observed agreement with experimental measurements suggests the modeling framework may provide valuable information regarding the mechanisms of fatigue fracture in whole bone.",0 "Several deep learning (DL) approaches have been proposed to deal with image classification tasks. However, despite their effectiveness, they lack interpretability, as they are unable to explain or justify their results. To address the challenge of interpretable image classification, this paper introduces a novel framework, named Interpretable Intuitionistic Fuzzy Cognitive Maps (I2FCMs). Intuitionistic FCMs (iFCMs) have been proposed as an extension of FCMs offering a natural mechanism to assess the quality of their output through the estimation of hesitancy, a concept resembling human hesitation in decision making. In the context of image classification, hesitancy is considered as a degree of unconfidence with which an image is categorized to a class. To the best of our knowledge, this is the first time iFCMs are applied for image classification. Further novel contributions of the introduced framework include the following: a) a feature extraction process focusing on the most informative image regions; b) a learning algorithm for automatic data-driven determination of the intuitionistic fuzzy interconnections of the iFCM, thereby reducing human intervention in the definition of the graph structure; c) an inherently interpretable classification approach based on image contents, providing understandable explanations of its predictions, using linguistic terms. Furthermore, the proposed I2FCM framework can be applied to DL models, including Convolutional Neural Networks (CNNs), rendering them interpretable. The effectiveness of I2FCM is evaluated on publicly available datasets, and the results confirm that it can provide enhanced classification performance, while providing interpretable inferences.",0 "As robots increasingly operate in dynamic human-centric environments, improving their ability to detect, explain, and recover from action-related issues becomes crucial. Traditional model-based and data-driven techniques lack adaptability, while more flexible generative AI methods struggle with grounding extracted information to real-world constraints. We introduce RAIDER, a novel agent that integrates Large Language Models (LLMs) with grounded tools for adaptable and efficient issue detection and explanation. Using a unique ""Ground, Ask&Answer, Issue"" procedure, RAIDER dynamically generates context-aware precondition questions and selects appropriate tools for resolution, achieving targeted information gathering. Our results within a simulated household environment surpass methods relying on predefined models, full scene descriptions, or standalone trained models. Additionally, RAIDER's explanations enhance recovery success, including cases requiring human interaction. Its modular architecture, featuring self-correction mechanisms, enables straightforward adaptation to diverse scenarios, as demonstrated in a real-world human-assistive task. This showcases RAIDER's potential as a versatile agentic AI solution for robotic issue detection and explanation, while addressing the problem of grounding generative AI for its effective application in embodied agents. Project website: https://eurecat.github.io/raider-llmagent/",0 "Understanding the perceptual invariances of artificial neural networks is essential for improving explainability and aligning models with human vision. 
Metamers - stimuli that are physically distinct yet produce identical neural activations - serve as a valuable tool for investigating these invariances. We introduce a novel approach to metamer generation by leveraging ensembles of artificial neural networks, capturing shared representational subspaces across diverse architectures, including convolutional neural networks and vision transformers. To characterize the properties of the generated metamers, we employ a suite of image-based metrics that assess factors such as semantic fidelity and naturalness. Our findings show that convolutional neural networks generate more recognizable and human-like metamers, while vision transformers produce realistic but less transferable metamers, highlighting the impact of architectural biases on representational invariances.",0 "Large language models (LLMs) not only exhibit human-like performance but also share computational principles with the brain's language processing mechanisms. While prior research has focused on mapping LLMs' internal representations to neural activity, we propose a novel approach using explainable AI (XAI) to strengthen this link. Applying attribution methods, we quantify the influence of preceding words on LLMs' next-word predictions and use these explanations to predict fMRI data from participants listening to narratives. We find that attribution methods robustly predict brain activity across the language network, revealing a hierarchical pattern: explanations from early layers align with the brain's initial language processing stages, while later layers correspond to more advanced stages. Additionally, layers with greater influence on next-word prediction -- reflected in higher attribution scores -- demonstrate stronger brain alignment. These results underscore XAI's potential for exploring the neural basis of language and suggest brain alignment as a way of assessing the biological plausibility of explanation methods.",2 "Mpox is a zoonotic viral illness that can spread from person to person, posing a significant public health concern. It is difficult to make an early clinical diagnosis because of how closely its symptoms match those of measles and chickenpox. Medical imaging combined with deep learning (DL) techniques has shown promise in improving disease detection by analyzing affected skin areas. Our study explores the feasibility of training deep learning and vision transformer-based models from scratch on a publicly available skin lesion image dataset. Our experimental results show that dataset limitations are a major drawback to building better classifier models from scratch. We used transfer learning with the help of pre-trained models to get a better classifier. MobileNet-v2 outperformed other state-of-the-art pre-trained models with 93.15% accuracy and a 93.09% weighted average F1 score. ViT B16 and ResNet-50 also achieved satisfactory performance compared to already available studies, with accuracies of 92.12% and 86.21%, respectively. To further validate the performance of the models, we applied explainable AI techniques.",0 "Human-understandable explanation of deep learning models is essential for various critical and sensitive applications. Unlike image or tabular data where the importance of each input feature (for the classifier's decision) can be directly projected into the input, time series distinguishable features (e.g. dominant frequency) are often hard to manifest in the time domain for a user to easily understand. 
Additionally, most explanation methods require a baseline value as an indication of the absence of any feature. However, the notion of feature absence, which is often defined as black pixels for vision tasks or zero/mean values for tabular data, is not well-defined in time series. Despite the adoption of explainable AI (XAI) methods from the tabular and vision domains into the time series domain, these differences limit the application of these XAI methods in practice. In this paper, we propose a simple yet effective method that allows a model originally trained on the time domain to be interpreted in other explanation spaces using existing methods. We suggest five explanation spaces, each of which can potentially alleviate these issues in certain types of time series. Our method can be easily integrated into existing platforms without any changes to trained models or XAI methods. The code will be released upon acceptance.",0 "Out-of-Distribution (OOD) detection is a critical task in machine learning, particularly in safety-sensitive applications where model failures can have serious consequences. However, current OOD detection methods often suffer from restrictive distributional assumptions, limited scalability, and a lack of interpretability. To address these challenges, we propose STOOD-X, a two-stage methodology that combines a Statistical nonparametric Test for OOD Detection with eXplainability enhancements. In the first stage, STOOD-X uses feature-space distances and a Wilcoxon-Mann-Whitney test to identify OOD samples without assuming a specific feature distribution. In the second stage, it generates user-friendly, concept-based visual explanations that reveal the features driving each decision, aligning with the BLUE XAI paradigm. Through extensive experiments on benchmark datasets and multiple architectures, STOOD-X achieves competitive performance against state-of-the-art post hoc OOD detectors, particularly in high-dimensional and complex settings. In addition, its explainability framework enables human oversight, bias detection, and model debugging, fostering trust and collaboration between humans and AI systems. The STOOD-X methodology therefore offers a robust, explainable, and scalable solution for real-world OOD detection tasks.",0 "If artificial intelligence (AI) is to be applied in safety-critical domains, its performance needs to be evaluated reliably. The present study aimed to understand how humans evaluate AI systems for person detection in automatic train operation. In three experiments, participants saw image sequences of people moving in the vicinity of railway tracks. A simulated AI had highlighted all detected people, sometimes correctly and sometimes not. Participants had to provide a numerical rating of the AI's performance and then verbally explain their rating. The experiments varied several factors that might influence human ratings: the types and plausibility of AI mistakes, the number of affected images, the number of people present in an image, the position of people relative to the tracks, and the methods used to elicit human evaluations. While all these factors influenced human ratings, some effects were unexpected or deviated from normative standards. For instance, the factor with the strongest impact was people's position relative to the tracks, although participants had explicitly been instructed that the AI could not process such information. 
Taken together, the results suggest that humans may sometimes evaluate more than the AI's performance on the assigned task. Such mismatches between AI capabilities and human expectations should be taken into consideration when conducting safety audits of AI systems.",2 "Background: Artificial Intelligence (AI) clinical decision support (CDS) systems have the potential to augment surgical risk assessments, but successful adoption depends on an understanding of end-user needs and current workflows. This study reports the initial co-design of MySurgeryRisk, an AI CDS tool to predict the risk of nine post-operative complications in surgical patients. Methods: Semi-structured focus groups and interviews were held as co-design sessions with perioperative physicians at a tertiary academic hospital in the Southeastern United States. Participants were read a surgical vignette and asked questions to elicit an understanding of their current decision-making practices before being introduced to the MySurgeryRisk prototype web interface. They were asked to provide feedback on the user interface and system features. Session transcripts were qualitatively coded, after which thematic analysis took place. Results: Data saturation was reached after 20 surgeons and anesthesiologists from varying career stages participated across 11 co-design sessions. Thematic analysis resulted in five themes: (1) decision-making cognitive processes, (2) current approach to decision-making, (3) future approach to decision-making with MySurgeryRisk, (4) feedback on current MySurgeryRisk prototype, and (5) trustworthy considerations. Conclusion: Clinical providers perceived MySurgeryRisk as a promising CDS tool that factors in a large volume of data and is computed in real-time without any need for manual input. Participants provided feedback on the design of the interface and imagined applications of the tool in the clinical workflow. However, its successful implementation will depend on the actionability and explainability of model outputs, integration into current electronic systems, and calibration of trust among end-users.",2 "Limited real-world data severely impacts model performance in many computer vision domains, particularly for samples that are underrepresented in training. Synthetically generated images are a promising solution, but 1) it remains unclear how to design synthetic training data to optimally improve model performance (e.g., whether and where to introduce more realism or more abstraction) and 2) the domain expertise, time and effort required from human operators for this design and optimisation process represents a major practical challenge. Here we propose a novel conceptual approach to improve the efficiency of designing synthetic images, by using robust Explainable AI (XAI) techniques to guide a human-in-the-loop process of modifying 3D mesh models used to generate these images. Importantly, this framework allows both modifications that increase and decrease realism in synthetic data, which can both improve model performance. We illustrate this concept using a real-world example where data are sparse: detection of vehicles in infrared imagery. We fine-tune an initial YOLOv8 model on the ATR DSIAC infrared dataset and synthetic images generated from 3D mesh models in the Unity gaming engine, and then use XAI saliency maps to guide modification of our Unity models. We show that synthetic data can improve detection of vehicles in orientations unseen in training by 4.6% (to mAP50 = 94.6%). 
We further improve performance by an additional 1.5% (to 96.1%) through our new XAI-guided approach, which reduces misclassifications through both increasing and decreasing the realism of different parts of the synthetic data. Our proof-of-concept results pave the way for fine, XAI-controlled curation of synthetic datasets tailored to improve object detection performance, whilst simultaneously reducing the burden on human operators in designing and optimising these datasets.",0 "Explainable AI (XAI) seeks to transform black-box algorithmic processes into transparent ones, enhancing trust in AI applications across various sectors such as education. This review aims to examine the various definitions of XAI within the literature and explore the challenges of XAI in education. Our goal is to shed light on how XAI can contribute to enhancing the educational field. This systematic review, utilising the PRISMA method for rigorous and transparent research, identified 19 relevant studies. Our findings reveal 15 definitions and 62 challenges. These challenges are categorised using thematic analysis into seven groups: explainability, ethical, technical, human-computer interaction (HCI), trustworthiness, policy and guideline, and others, thereby deepening our understanding of the implications of XAI in education. Our analysis highlights the absence of standardised definitions for XAI, leading to confusion, especially because definitions concerning ethics, trustworthiness, technicalities, and explainability tend to overlap and vary.",0 "Social Intelligence Queries (Social-IQ) serve as the primary multimodal benchmark for evaluating a model's social intelligence level. While impressive multiple-choice question (MCQ) accuracy is achieved by current solutions, increasing evidence shows that they are largely, and in some cases entirely, dependent on language modality, overlooking visual context. Additionally, the closed-set nature further prevents the exploration of whether and to what extent the reasoning path behind selection is correct. To address these limitations, we propose the Visually Explainable and Grounded Artificial Social Intelligence (VEGAS) model. As a generative multimodal model, VEGAS leverages open-ended answering to provide explainable responses, which enhances the clarity and evaluation of reasoning paths. To enable visually grounded answering, we propose a novel sampling strategy to provide the model with more relevant visual frames. We then enhance the model's interpretation of these frames through Generalist Instruction Fine-Tuning (GIFT), which aims to: i) learn multimodal-language transformations for fundamental emotional social traits, and ii) establish multimodal joint reasoning capabilities. Extensive experiments, comprising modality ablation, open-ended assessments, and supervised MCQ evaluations, consistently show that VEGAS effectively utilizes visual information in reasoning to produce correct and credible answers. We expect this work to offer a new perspective on Social-IQ and advance the development of human-like social AI.",0 "Chatbots powered by artificial intelligence (AI) have rapidly become a significant part of everyday life, with over a quarter of American adults using them multiple times per week. While these tools offer potential benefits and risks, a fundamental question remains largely unexplored: How do conversations with AI influence subjective well-being? 
To investigate this, we conducted a study where participants either engaged in conversations with an AI chatbot (N = 334) or wrote journal entries (N = 193) on the same randomly assigned topics and reported their momentary happiness afterward. We found that happiness after AI chatbot conversations was higher than after journaling, particularly when discussing negative topics such as depression or guilt. Leveraging large language models for sentiment analysis, we found that the AI chatbot mirrored participants' sentiment while maintaining a consistent positivity bias. When discussing negative topics, participants gradually aligned their sentiment with the AI's positivity, leading to an overall increase in happiness. We hypothesized that the history of participants' sentiment prediction errors, the difference between expected and actual emotional tone when responding to the AI chatbot, might explain this happiness effect. Using computational modeling, we find that the history of these sentiment prediction errors over the course of a conversation predicts greater post-conversation happiness, demonstrating a central role of emotional expectations during dialogue. Our findings underscore the effect that AI interactions can have on human well-being.",2 "Lifelogging involves continuously capturing personal data through wearable cameras, providing an egocentric view of daily activities. Lifelog retrieval aims to search and retrieve relevant moments from this data, yet existing methods largely overlook activity-level annotations, which capture temporal relationships and enrich semantic understanding. In this work, we introduce LSC-ADL, an ADL-annotated lifelog dataset derived from the LSC dataset, incorporating Activities of Daily Living (ADLs) as a structured semantic layer. Using a semi-automatic approach featuring the HDBSCAN algorithm for intra-class clustering and human-in-the-loop verification, we generate accurate ADL annotations to enhance retrieval explainability. By integrating action recognition into lifelog retrieval, LSC-ADL bridges a critical gap in existing research, offering a more context-aware representation of daily life. We believe this dataset will advance research in lifelog retrieval, activity recognition, and egocentric vision, ultimately improving the accuracy and interpretability of retrieved content. The ADL annotations can be downloaded at https://bit.ly/lsc-adl-annotations.",0 "In the quest to enable robots to coexist with humans, understanding dynamic situations and selecting appropriate actions based on common sense and affordances are essential. Conventional AI systems face challenges in applying affordance, as it represents implicit knowledge derived from common sense. However, large language models (LLMs) offer new opportunities due to their ability to process extensive human knowledge. This study proposes a method for automatic affordance acquisition by leveraging LLM outputs. The process involves generating text using LLMs, reconstructing the output into a symbol network using morphological and dependency analysis, and calculating affordances based on network distances. Experiments using ``apple'' as an example demonstrated the method's ability to extract context-dependent affordances with high explainability. 
The results suggest that the proposed symbol network, reconstructed from LLM outputs, enables robots to interpret affordances effectively, bridging the gap between symbolized data and human-like situational understanding.",0 "A charged particle in a suitably strong magnetic field spirals along the field lines while slowly drifting transversely. This note provides a brief derivation of an effective Lagrangian formulation for the guiding-centre approximation that captures this dynamics without resolving the gyro motion. It also explains how the effective Lagrangian may, for special magnetic fields, admit a 'quasi-symmetry' which can give rise to a conserved quantity helpful for plasma confinement in fields lacking a geometric isometry. The aim of this note is to offer a pedagogical introduction and some perspectives on this well-established subject.",0 "Current malware (malicious software) analysis tools focus on detection and family classification but fail to provide clear and actionable narrative insights into the malignant activity of the malware. Therefore, there is a need for a tool that translates raw malware data into human-readable descriptions. Developing such a tool accelerates incident response, reduces malware analysts' cognitive load, and enables individuals with limited technical expertise to understand malicious software behaviour. With this objective, we present MaLAware, which automatically summarizes the full spectrum of malicious activity of malware executables. MaLAware processes Cuckoo Sandbox-generated reports using large language models (LLMs) to correlate malignant activities and generate concise summaries explaining malware behaviour. We evaluate the tool's performance on five open-source LLMs. The evaluation uses the human-written malware behaviour description dataset as ground truth. The model's performance is measured using 11 extensive performance metrics, which boosts confidence in MaLAware's effectiveness. The current version of the tool, i.e., MaLAware, supports Qwen2.5-7B, Llama2-7B, Llama3.1-8B, Mistral-7B, and Falcon-7B, along with the quantization feature for resource-constrained environments. MaLAware lays a foundation for future research in malware behavior explanation, and its extensive evaluation demonstrates LLMs' ability to narrate malware behavior in an actionable and comprehensive manner.",0 "To improve the trustworthiness of an AI model, finding consistent, understandable representations of its inference process is essential. This understanding is particularly important in high-stakes operations such as weather forecasting, where the identification of underlying meteorological mechanisms is as critical as the accuracy of the predictions. Despite the growing literature that addresses this issue through explainable AI, the applicability of these solutions is often limited due to their AI-centric development. To fill this gap, we follow a user-centric process to develop an example-based concept analysis framework, which identifies cases that follow a similar inference process as the target instance in a target model and presents them in a user-comprehensible format. Our framework provides the users with visually and conceptually analogous examples, including the probability of concept assignment to resolve ambiguities in weather mechanisms. 
To bridge the gap between vector representations identified from models and human-understandable explanations, we compile a human-annotated concept dataset and implement a user interface to assist domain experts involved in the framework development.",0 "End-to-end robot policies achieve high performance through neural networks trained via reinforcement learning (RL). Yet, their black box nature and abstract reasoning pose challenges for human-robot interaction (HRI), because humans may experience difficulty in understanding and predicting the robot's navigation decisions, hindering trust development. We present a virtual reality (VR) interface that visualizes explainable AI (XAI) outputs and the robot's lidar perception to support intuitive interpretation of RL-based navigation behavior. By visually highlighting objects based on their attribution scores, the interface grounds abstract policy explanations in the scene context. This XAI visualization bridges the gap between obscure numerical XAI attribution scores and a human-centric semantic level of explanation. A within-subjects study with 24 participants evaluated the effectiveness of our interface for four visualization conditions combining XAI and lidar. Participants ranked scene objects across navigation scenarios based on their importance to the robot, followed by a questionnaire assessing subjective understanding and predictability. Results show that semantic projection of attributions significantly enhances non-expert users' objective understanding and subjective awareness of robot behavior. In addition, lidar visualization further improves perceived predictability, underscoring the value of integrating XAI and sensor visualizations for transparent, trustworthy HRI.",2 "An essential element of human mathematical reasoning is our number sense -- an abstract understanding of numbers and their relationships -- which allows us to solve problems involving vast number spaces using limited computational resources. Mathematical reasoning of Large Language Models (LLMs) is often tested on high-level problems (such as Olympiad challenges, geometry, word problems, and puzzles), but their low-level number sense remains less explored. We introduce ""Numberland,"" a 100-problem test to evaluate the numerical reasoning abilities of LLM-based agents. The tasks -- basic operations, advanced calculations (e.g., exponentiation, complex numbers), prime number checks, and the 24 game -- aim to test elementary skills and their integration in solving complex and uncertain problems. We evaluated five LLM-based agents: OpenAI's o1 and o1-mini, Google Gemini, Microsoft Copilot, and Anthropic Claude. They scored 74-95% on the first three tasks that allow deterministic steps to solutions. In the 24 game, which needs trial-and-error search, performance dropped to 10-73%. We tested the top 24-game solver (o1 with 73% accuracy) on 25 harder problems, and its score fell to 27%, confirming search as a bottleneck. These results, along with the types of mistakes, suggest a fragile number sense in LLMs, which is a bit surprising given their prowess in challenging benchmarks. 
The limits of LLM numerical reasoning underscore the value of simple, targeted tests for evaluating and explaining LLM math skills to ensure safe use.",0 "Large Language Models (LLMs) offer a promising approach to enhancing Explainable AI (XAI) by transforming complex machine learning outputs into easy-to-understand narratives, making model predictions more accessible to users, and helping bridge the gap between sophisticated model behavior and human interpretability. AI models, such as state-of-the-art neural networks and deep learning models, are often seen as ""black boxes"" due to a lack of transparency. As users cannot fully understand how the models reach conclusions, they have difficulty trusting decisions from AI models, which leads to less effective decision-making processes, reduced accountability, and unclear potential biases. A challenge arises in developing explainable AI (XAI) models to gain users' trust and provide insights into how models generate their outputs. With the development of Large Language Models, we want to explore the possibilities of using human language-based models, LLMs, for model explainability. This survey provides a comprehensive overview of existing approaches regarding LLMs for XAI, and evaluation techniques for LLM-generated explanations, discusses the corresponding challenges and limitations, and examines real-world applications. Finally, we discuss future directions by emphasizing the need for more interpretable, automated, user-centric, and multidisciplinary approaches for XAI via LLMs.",2 "The discovery of novel small molecule drugs remains a critical scientific challenge with far-reaching implications for treating diseases and advancing human health. Traditional drug development--especially for small molecule therapeutics--is a highly complex, resource-intensive, and time-consuming process that requires multidisciplinary collaboration. Recent breakthroughs in artificial intelligence (AI), particularly the rise of large language models (LLMs), present a transformative opportunity to streamline and accelerate this process. In this paper, we introduce PharmAgents, a virtual pharmaceutical ecosystem driven by LLM-based multi-agent collaboration. PharmAgents simulates the full drug discovery workflow--from target discovery to preclinical evaluation--by integrating explainable, LLM-driven agents equipped with specialized machine learning models and computational tools. Through structured knowledge exchange and automated optimization, PharmAgents identifies potential therapeutic targets, discovers promising lead compounds, enhances binding affinity and key molecular properties, and performs in silico analyses of toxicity and synthetic feasibility. Additionally, the system supports interpretability, agent interaction, and self-evolvement, enabling it to refine future drug designs based on prior experience. By showcasing the potential of LLM-powered multi-agent systems in drug discovery, this work establishes a new paradigm for autonomous, explainable, and scalable pharmaceutical research, with future extensions toward comprehensive drug lifecycle management.",0 "LLM-as-a-Judge has been widely applied to evaluate and compare different LLM alignment approaches (e.g., RLHF and DPO). However, concerns regarding its reliability have emerged, due to LLM judges' biases and inconsistent decision-making. Previous research has developed evaluation frameworks to assess the reliability of LLM judges and their alignment with human preferences. 
However, the employed evaluation metrics often lack adequate explainability and fail to address LLM internal inconsistency. Additionally, existing studies inadequately explore the impact of various prompt templates when applying LLM-as-a-Judge methods, leading to potentially inconsistent comparisons between different alignment algorithms. In this work, we systematically evaluate LLM-as-a-Judge on alignment tasks by defining more theoretically interpretable evaluation metrics and explicitly mitigating LLM internal inconsistency from reliability metrics. We develop an open-source framework to evaluate, compare, and visualize the reliability and alignment of LLM judges, which helps practitioners choose LLM judges for alignment tasks. In the experiments, we examine the effects of diverse prompt templates on LLM-judge reliability and also demonstrate our developed framework by comparing various LLM judges on two common alignment datasets (i.e., TL;DR Summarization and HH-RLHF-Helpfulness). Our results indicate a significant impact of prompt templates on LLM judge performance, as well as a mediocre alignment level between the tested LLM judges and human evaluators.",2 "Cosmic voids are low-mass-density regions on intergalactic scales. They are where cosmic expansion and acceleration are most dominant, making them important places to understand and analyse for cosmology. This entry summarises the theoretical underpinnings of cosmic voids, and explores several observational aspects, statistics and applications of voids. The density profiles, velocity profiles, evolution history and the abundances of voids are shown to encode information about cosmology, including the sum of neutrino masses and the law of gravity. These properties manifest themselves in a wide range of observables, including the void distribution function, redshift-space distortions, gravitational lensing and their imprints on the cosmic-microwave background. We explain how each of these observables works, and summarise their applications in observations. We also comment on the possible impact of a local void on the interpretations of the expansion of the Universe, and discuss opportunities and challenges for the research subject of cosmic voids.",2 "Deep learning models show significant potential for advancing AI-assisted medical diagnostics, particularly in detecting lung cancer through medical image modalities such as chest X-rays. However, the black-box nature of these models poses challenges to their interpretability and trustworthiness, limiting their adoption in clinical practice. This study examines both the interpretability and robustness of a high-performing lung cancer detection model based on InceptionV3, utilizing a public dataset of chest X-rays and radiological reports. We evaluate the clinical utility of multiple explainable AI (XAI) techniques, including both post-hoc and ante-hoc approaches, and find that existing methods often fail to provide clinically relevant explanations, displaying inconsistencies and divergence from expert radiologist assessments. To address these limitations, we collaborated with a radiologist to define diagnosis-specific clinical concepts and developed ClinicXAI, an expert-driven approach leveraging the concept bottleneck methodology. ClinicXAI generated clinically meaningful explanations which closely aligned with the practical requirements of clinicians while maintaining high diagnostic accuracy. 
We also assess the robustness of ClinicXAI in comparison to the original InceptionV3 model by subjecting both to a series of widely utilized adversarial attacks. Our analysis demonstrates that ClinicXAI exhibits significantly greater resilience to adversarial perturbations. These findings underscore the importance of incorporating domain expertise into the design of interpretable and robust AI systems for medical diagnostics, paving the way for more trustworthy and effective AI solutions in healthcare.",0 "This survey examines evaluation methods for large language model (LLM)-based agents in multi-turn conversational settings. Using a PRISMA-inspired framework, we systematically reviewed nearly 250 scholarly sources, capturing the state of the art from various venues of publication, and establishing a solid foundation for our analysis. Our study offers a structured approach by developing two interrelated taxonomy systems: one that defines \emph{what to evaluate} and another that explains \emph{how to evaluate}. The first taxonomy identifies key components of LLM-based agents for multi-turn conversations and their evaluation dimensions, including task completion, response quality, user experience, memory and context retention, as well as planning and tool integration. These components ensure that the performance of conversational agents is assessed in a holistic and meaningful manner. The second taxonomy system focuses on the evaluation methodologies. It categorizes approaches into annotation-based evaluations, automated metrics, hybrid strategies that combine human assessments with quantitative measures, and self-judging methods utilizing LLMs. This framework not only captures traditional metrics derived from language understanding, such as BLEU and ROUGE scores, but also incorporates advanced techniques that reflect the dynamic, interactive nature of multi-turn dialogues.",2 "Machine learning models routinely automate decisions in applications like lending and hiring. In such settings, consumer protection rules require companies that deploy models to explain predictions to decision subjects. These rules are motivated, in part, by the belief that explanations can promote recourse by revealing information that individuals can use to contest or improve their outcomes. In practice, many companies comply with these rules by providing individuals with a list of the most important features for their prediction, which they identify based on feature importance scores from feature attribution methods such as SHAP or LIME. In this work, we show how these practices can undermine consumers by highlighting features that would not lead to an improved outcome and by explaining predictions that cannot be changed. We propose to address these issues by highlighting features based on their responsiveness score -- i.e., the probability that an individual can attain a target prediction by changing a specific feature. We develop efficient methods to compute responsiveness scores for any model and any dataset. We conduct an extensive empirical study on the responsiveness of explanations in lending. Our results show that standard practices in consumer finance can backfire by presenting consumers with reasons without recourse, and demonstrate how our approach improves consumer protection by highlighting responsive features and identifying fixed predictions.",0 "Multimodal Large Language Models (MLLMs) have achieved impressive results on various vision tasks, leveraging recent advancements in large language models. 
However, a critical question remains unaddressed: do MLLMs perceive visual information similarly to humans? Current benchmarks lack the ability to evaluate MLLMs from this perspective. To address this challenge, we introduce HVSBench, a large-scale benchmark designed to assess the alignment between MLLMs and the human visual system (HVS) on fundamental vision tasks that mirror human vision. HVSBench curated over 85K multimodal samples, spanning 13 categories and 5 fields in HVS, including Prominence, Subitizing, Prioritizing, Free-Viewing, and Searching. Extensive experiments demonstrate the effectiveness of our benchmark in providing a comprehensive evaluation of MLLMs. Specifically, we evaluate 13 MLLMs, revealing that even the best models show significant room for improvement, with most achieving only moderate results. Our experiments reveal that HVSBench presents a new and significant challenge for cutting-edge MLLMs. Diverse human participants attained strong performance, significantly outperforming MLLMs, which further underscores the benchmark's high quality. We believe that HVSBench will facilitate research on human-aligned and explainable MLLMs, marking a key step in understanding how MLLMs perceive and process visual information.",2 "Primordial non-Gaussianity (PNG) is a signature of fundamental physics in the early universe that is probed by cosmological observations. It is well known that the local type of PNG generates a strong signal in the two-point function of large-scale structure tracers, such as galaxies. This signal, often termed ``scale-dependent bias'' is a generic feature of modulation of gravitational structure formation by a large-scale mode. It is less well-appreciated that the coefficient controlling this signal, $b_{\phi}$, is closely connected to the time evolution of the tracer number density. This correspondence between time evolution and local PNG can be simply explained for a universal tracer whose mass function only depends on peak height, and more generally for non-universal tracers in the separate universe picture, which we validate in simulations. We also describe how to recover the bias of tracers subject to a survey selection function, and perform a simple demonstration on simulated galaxies. Since the local PNG amplitude in $n-$point statistics ($f_{\rm NL}$) is largely degenerate with the coefficient $b_{\phi}$, this proof of concept study demonstrates that galaxy survey data can allow for more optimal and robust extraction of local PNG information from upcoming surveys.",2 "Kashuba and Mathieu proposed a conjecture on vanishing of some components of the homology of certain Lie algebras, implying a description of the $GL_d$-module structure of the free $d$-generated Jordan algebra. Their conjecture relies on a functorial version of the Tits-Kantor-Koecher construction that builds Lie algebras out of Jordan algebras. Recently, Shang used a functorial construction of Allison, Benkart and Gao that builds Lie algebras out of alternative algebras to propose another conjecture on vanishing of some components of the homology of certain Lie algebras, implying a description of the $GL_d$-module structure of the free $d$-generated alternative algebra. In this note, we explain why the conjecture of Shang is not true.",0 "We introduce methods for discovering and applying sparse feature circuits. These are causally implicated subnetworks of human-interpretable features for explaining language model behaviors. 
Circuits identified in prior work consist of polysemantic and difficult-to-interpret units like attention heads or neurons, rendering them unsuitable for many downstream applications. In contrast, sparse feature circuits enable detailed understanding of unanticipated mechanisms. Because they are based on fine-grained units, sparse feature circuits are useful for downstream tasks: We introduce SHIFT, where we improve the generalization of a classifier by ablating features that a human judges to be task-irrelevant. Finally, we demonstrate an entirely unsupervised and scalable interpretability pipeline by discovering thousands of sparse feature circuits for automatically discovered model behaviors.",0 "Affective Computing (AC) is essential for advancing Artificial General Intelligence (AGI), with emotion recognition serving as a key component. However, human emotions are inherently dynamic, influenced not only by an individual's expressions but also by interactions with others, and single-modality approaches often fail to capture their full dynamics. Multimodal Emotion Recognition (MER) leverages multiple signals but traditionally relies on utterance-level analysis, overlooking the dynamic nature of emotions in conversations. Emotion Recognition in Conversation (ERC) addresses this limitation, yet existing methods struggle to align multimodal features and explain why emotions evolve within dialogues. To bridge this gap, we propose GatedxLSTM, a novel speech-text multimodal ERC model that explicitly considers voice and transcripts of both the speaker and their conversational partner(s) to identify the most influential sentences driving emotional shifts. By integrating Contrastive Language-Audio Pretraining (CLAP) for improved cross-modal alignment and employing a gating mechanism to emphasise emotionally impactful utterances, GatedxLSTM enhances both interpretability and performance. Additionally, the Dialogical Emotion Decoder (DED) refines emotion predictions by modelling contextual dependencies. Experiments on the IEMOCAP dataset demonstrate that GatedxLSTM achieves state-of-the-art (SOTA) performance among open-source methods in four-class emotion classification. These results validate its effectiveness for ERC applications and provide an interpretability analysis from a psychological perspective.",0 "Counterfactual explanations have been successfully applied to create human-interpretable explanations for various black-box models. They are handy for tasks in the image domain, where the quality of the explanations benefits from recent advances in generative models. Although counterfactual explanations have been widely applied to classification models, their application to regression tasks remains underexplored. We present two methods to create counterfactual explanations for image regression tasks using diffusion-based generative models to address challenges in sparsity and quality: 1) one based on a Denoising Diffusion Probabilistic Model that operates directly in pixel space and 2) another based on a Diffusion Autoencoder operating in latent space. Both produce realistic, semantic, and smooth counterfactuals on CelebA-HQ and a synthetic data set, providing easily interpretable insights into the decision-making process of the regression model and revealing spurious correlations. We find that for regression counterfactuals, changes in features depend on the region of the predicted value. 
Large semantic changes are needed for significant changes in predicted values, making it harder to find sparse counterfactuals than with classifiers. Moreover, pixel space counterfactuals are more sparse while latent space counterfactuals are of higher quality and allow bigger semantic changes.",0 "Existing methods for the zero-shot detection of machine-generated text are dominated by three statistical quantities: log-likelihood, log-rank, and entropy. As language models mimic the distribution of human text ever closer, this will limit our ability to build effective detection algorithms. To combat this, we introduce a method for detecting machine-generated text that is entirely agnostic of the generating language model. This is achieved by targeting a defect in the way that decoding strategies, such as temperature or top-k sampling, normalize conditional probability measures. This method can be rigorously theoretically justified, is easily explainable, and is conceptually distinct from existing methods for detecting machine-generated text. We evaluate our detector in the white and black box settings across various language models, datasets, and passage lengths. We also study the effect of paraphrasing attacks on our detector and the extent to which it is biased against non-native speakers. In each of these settings, the performance of our test is at least comparable to that of other state-of-the-art text detectors, and in some cases, we strongly outperform these baselines.",0 "Fine-grained domain generalization (FGDG) is a more challenging task than traditional DG tasks due to its small inter-class variations and relatively large intra-class disparities. When domain distribution changes, the vulnerability of subtle features leads to a severe deterioration in model performance. Nevertheless, humans inherently demonstrate the capacity for generalizing to out-of-distribution data, leveraging structured multi-granularity knowledge that emerges from discerning the commonality and specificity within categories. Likewise, we propose a Feature Structuralized Domain Generalization (FSDG) model, wherein features experience structuralization into common, specific, and confounding segments, harmoniously aligned with their relevant semantic concepts, to elevate performance in FGDG. Specifically, feature structuralization (FS) is accomplished through joint optimization of five constraints: a decorrelation function applied to disentangled segments, three constraints ensuring common feature consistency and specific feature distinctiveness, and a prediction calibration term. By imposing these stipulations, FSDG is prompted to disentangle and align features based on multi-granularity knowledge, facilitating robust subtle distinctions among categories. Extensive experimentation on three benchmarks consistently validates the superiority of FSDG over state-of-the-art counterparts, with an average improvement of 6.2% in FGDG performance. Beyond that, the explainability analysis on explicit concept matching intensity between the shared concepts among categories and the model channels, along with experiments on various mainstream model architectures, substantiates the validity of FS.",0 "Vision-based trajectory prediction is an important task that supports safe and intelligent behaviours in autonomous systems. Many advanced approaches have been proposed over the years with improved spatial and temporal feature extraction. However, human behaviour is naturally diverse and uncertain. 
Given the past trajectory and surrounding environment information, an agent can have multiple plausible trajectories in the future. To tackle this problem, an essential task named multi-future trajectory prediction (MTP) has recently been studied. This task aims to generate a diverse, acceptable and explainable distribution of future predictions for each agent. In this paper, we present the first survey for MTP with our unique taxonomies and a comprehensive analysis of frameworks, datasets and evaluation metrics. We also compare models on existing MTP datasets and conduct experiments on the ForkingPath dataset. Finally, we discuss multiple future directions that can help researchers develop novel multi-future trajectory prediction systems and other diverse learning tasks similar to MTP.",2 "Modern economies require increasingly diverse and specialized skills, many of which depend on the acquisition of other skills first. Here we analyse US survey data to reveal a nested structure within skill portfolios, where the direction of dependency is inferred from asymmetrical conditional probabilities: occupations require one skill conditional on another. This directional nature suggests that advanced, specific skills and knowledge are often built upon broader, fundamental ones. We examine 70 million job transitions to show that human capital development and career progression follow this structured pathway in which skills more aligned with the nested structure command higher wage premiums, require longer education and are less likely to be automated. These disparities are evident across genders and racial-ethnic groups, explaining long-term wage penalties. Finally, we find that this nested structure has become even more pronounced over the past two decades, indicating increased barriers to upward job mobility.",2 "Deep convolutional neural networks have proven their effectiveness, and have been acknowledged as the most dominant method for image classification. However, a severe drawback of deep convolutional neural networks is poor explainability. Unfortunately, in many real-world applications, users need to understand the rationale behind the predictions of deep convolutional neural networks when determining whether they should trust the predictions or not. To resolve this issue, a novel genetic algorithm-based method is proposed for the first time to automatically evolve local explanations that can assist users in assessing the rationality of the predictions. Furthermore, the proposed method is model-agnostic, i.e., it can be utilised to explain any deep convolutional neural network model. In the experiments, ResNet is used as an example model to be explained, and the ImageNet dataset is selected as the benchmark dataset. DenseNet and MobileNet are further explained to demonstrate the model-agnostic characteristic of the proposed method. The evolved local explanations on four images, randomly selected from ImageNet, are presented, which show that the evolved local explanations are straightforward for humans to recognise. Moreover, the evolved explanations can explain the predictions of deep convolutional neural networks on all four images very well by successfully capturing meaningful interpretable features of the sample images. Further analysis based on the 30 runs of the experiments shows that the evolved local explanations can also improve the probabilities/confidences of the deep convolutional neural network models in making the predictions. 
The proposed method can obtain local explanations within one minute, which is more than ten times faster than LIME (the state-of-the-art method).",0 "Depressive disorder is a serious health condition that has affected the lives of millions of people around the world. Diagnosis of depression is a challenging practice that relies heavily on subjective studies and, in most cases, suffers from late findings. Electroencephalography (EEG) biomarkers have been suggested and investigated in recent years as a potential transformative objective practice. In this article, for the first time, a detailed systematic review of EEG-based depression diagnosis approaches is conducted using advanced machine learning techniques and statistical analyses. For this, 938 potentially relevant articles (since 1985) were initially detected and filtered into 139 relevant articles based on the review scheme 'preferred reporting items for systematic reviews and meta-analyses (PRISMA).' This article compares and discusses the selected articles and categorizes them according to the type of machine learning techniques and statistical analyses. Algorithms, preprocessing techniques, extracted features, and data acquisition systems are discussed and summarized. This review paper explains the existing challenges of the current algorithms and sheds light on the future direction of the field. This systematic review outlines the issues and challenges in machine intelligence for EEG-based diagnosis of depression that can be addressed in future studies and possibly in future wearable technologies.",0 "Online tracking remains problematic, with compliance and ethical issues persisting despite regulatory efforts. Consent interfaces, the visible manifestation of this industry, have seen significant attention over the years. We present robust automated methods to study the presence, design, and third-party suppliers of consent interfaces at scale and the web service consent-observatory.eu to do it with. We examine the top 10,000 websites across 31 countries under the ePrivacy Directive and GDPR (n=254,148). Our findings show that 67% of websites use consent interfaces, but only 15% are minimally compliant, mostly because they lack a reject option. Consent management platforms (CMPs) are powerful intermediaries in this space: 67% of interfaces are provided by CMPs, and three organisations hold 37% of the market. There is little evidence that regulators' guidance and fines have impacted compliance rates, but 18% of compliance variance is explained by CMPs. Researchers should take an infrastructural perspective on online tracking and study the factual control of intermediaries to identify effective leverage points.",0 "Query optimizers are essential components of relational database management systems that directly impact query performance as they transform input queries into efficient execution plans. While users can obtain the final execution plan using the EXPLAIN command and leverage existing visualization tools for intuitive understanding, the internal decision-making processes of query optimizers are hidden from users, making it difficult to understand how the plan is constructed. To address this challenge, we present Jovis, an interactive visualization tool designed to explore the query optimization process in PostgreSQL. Jovis provides a comprehensive view of the entire optimization workflow through tailored visualization for each optimization strategy. 
It also includes features that allow users to participate in optimization by providing hints, tuning parameters, and reusing prior optimization results. Jovis serves as both an educational tool for learners and a practical resource for database professionals, helping users understand and improve query optimization by guiding the optimizer to make better decisions or consider previously unexplored plans. The source code, data, and/or other artifacts have been made available at https://github.com/orgs/snu-jovis.",0 "Magnetic resonance imaging (MRI) is a vital diagnostic tool, but its inherently long acquisition times reduce clinical efficiency and patient comfort. Recent advancements in deep learning, particularly diffusion models, have improved accelerated MRI reconstruction. However, existing diffusion models' training often relies on fully sampled data, models incur high computational costs, and often lack uncertainty estimation, limiting their clinical applicability. To overcome these challenges, we propose a novel framework, called Dual-domain Multi-path Self-supervised Diffusion Model (DMSM), that integrates a self-supervised dual-domain diffusion model training scheme, a lightweight hybrid attention network for the reconstruction diffusion model, and a multi-path inference strategy, to enhance reconstruction accuracy, efficiency, and explainability. Unlike traditional diffusion-based models, DMSM eliminates the dependency on training from fully sampled data, making it more practical for real-world clinical settings. We evaluated DMSM on two human MRI datasets, demonstrating that it achieves favorable performance over several supervised and self-supervised baselines, particularly in preserving fine anatomical structures and suppressing artifacts under high acceleration factors. Additionally, our model generates uncertainty maps that correlate reasonably well with reconstruction errors, offering valuable clinically interpretable guidance and potentially enhancing diagnostic confidence.",2 "Generative AI is radically changing the creative arts, by fundamentally transforming the way we create and interact with cultural artefacts. While offering unprecedented opportunities for artistic expression and commercialisation, this technology also raises ethical, societal, and legal concerns. Key among these are the potential displacement of human creativity, copyright infringement stemming from vast training datasets, and the lack of transparency, explainability, and fairness mechanisms. As generative systems become pervasive in this domain, responsible design is crucial. Whilst previous work has tackled isolated aspects of generative systems (e.g., transparency, evaluation, data), we take a comprehensive approach, grounding these efforts within the Ethics Guidelines for Trustworthy Artificial Intelligence produced by the High-Level Expert Group on AI appointed by the European Commission - a framework for designing responsible AI systems across seven macro requirements. Focusing on generative music AI, we illustrate how these requirements can be contextualised for the field, addressing trustworthiness across multiple dimensions and integrating insights from the existing literature. We further propose a roadmap for operationalising these contextualised requirements, emphasising interdisciplinary collaboration and stakeholder engagement. 
Our work provides a foundation for designing and evaluating responsible music generation systems, calling for collaboration among AI experts, ethicists, legal scholars, and artists. This manuscript is accompanied by a website: https://amresearchlab.github.io/raim-framework/.",0 "Concept-based eXplainable AI (C-XAI) aims to overcome the limitations of traditional saliency maps by converting pixels into human-understandable concepts that are consistent across an entire dataset. A crucial aspect of C-XAI is completeness, which measures how well a set of concepts explains a model's decisions. Among C-XAI methods, Multi-Dimensional Concept Discovery (MCD) effectively improves completeness by breaking down the CNN latent space into distinct and interpretable concept subspaces. However, MCD's explanations can be difficult for humans to understand, raising concerns about their practical utility. To address this, we propose Human-Understandable Multi-dimensional Concept Discovery (HU-MCD). HU-MCD uses the Segment Anything Model for concept identification and implements a CNN-specific input masking technique to reduce noise introduced by traditional masking methods. These changes to MCD, paired with the completeness relation, enable HU-MCD to enhance concept understandability while maintaining explanation faithfulness. Our experiments, including human subject studies, show that HU-MCD provides more precise and reliable explanations than existing C-XAI methods. The code is available at https://github.com/grobruegge/hu-mcd.",2 "Fluids subject to symmetry breaking by stratification support the propagation of anisotropic internal waves (IWs). In the vertical plane, rays representing energy paths obey a non-specular reflection law, as their inclination is solely dictated by their frequency. Although satisfying the linear Poincare equation, in basins having sloping walls, ray dynamics exhibits nonlinear effects such as convergence onto wave-attractors. In contrast, in the horizontal plane of a basin with vertical walls, IWs reflect specularly, and follow chaotic ray paths. Here we present a novel analysis of these competing effects in a 3D IW ray billiard of a stadium having sloping walls. We show and explain how varying the walls' slope shifts the ray dynamics between regimes of near-ergodicity, chaotic scattering, and non-chaotic scattering with self-similar patterns, despite the basin being closed. The rich results stemming from the interplay between elliptical ergodicity and hyperbolic focusing relate to a broader context of physical phenomena.",0 "Autonomous navigation in crowded environments is an open problem with many applications, essential for the coexistence of robots and humans in the smart cities of the future. In recent years, deep reinforcement learning approaches have proven to outperform model-based algorithms. Nevertheless, even though the results provided are promising, the works are not able to take advantage of the capabilities that their models offer. They usually get trapped in local optima during training, which prevents them from learning the optimal policy. They are not able to visit and interact with every possible state appropriately, such as with the states near the goal or near the dynamic obstacles. In this work, we propose using intrinsic rewards to balance between exploration and exploitation and explore depending on the uncertainty of the states instead of on the time the agent has been trained, encouraging the agent to become more curious about unknown states. 
We explain the benefits of the approach and compare it with other exploration algorithms that may be used for crowd navigation. Extensive simulation experiments are performed by modifying several state-of-the-art algorithms, showing that the use of intrinsic rewards makes the robot learn faster and reach higher rewards and success rates (fewer collisions) in shorter navigation times, outperforming the state-of-the-art.",0 "Concept-based models can map black-box representations to human-understandable concepts, which makes the decision-making process more transparent and thus allows users to understand the reason behind predictions. However, domain-specific concepts often impact the final predictions, which subsequently undermines the model's generalization capabilities and prevents the model from being used in high-stakes applications. In this paper, we propose a novel Language-guided Concept-Erasing (LanCE) framework. In particular, we empirically demonstrate that pre-trained vision-language models (VLMs) can approximate distinct visual domain shifts via domain descriptors while prompting large language models (LLMs) can easily simulate a wide range of descriptors of unseen visual domains. Then, we introduce a novel plug-in domain descriptor orthogonality (DDO) regularizer to mitigate the impact of these domain-specific concepts on the final predictions. Notably, the DDO regularizer is agnostic to the design of concept-based models and we integrate it into several prevailing models. Through evaluation of domain generalization on four standard benchmarks and three newly introduced benchmarks, we demonstrate that DDO can significantly improve the out-of-distribution (OOD) generalization over the previous state-of-the-art concept-based models. Our code is available at https://github.com/joeyz0z/LanCE.",0 "Over the last few decades, Artificial Intelligence (AI) scientists have been conducting investigations to attain human-level performance by a machine in accomplishing a cognitive task. Within machine learning, the ultimate aspiration is to attain Artificial General Intelligence (AGI) through a machine. This pursuit has led to the exploration of two distinct AI paradigms. Symbolic AI, also known as classical or GOFAI (Good Old-Fashioned AI), and Connectionist (Sub-symbolic) AI, represented by Neural Systems, are two mutually exclusive paradigms. Symbolic AI excels in reasoning, explainability, and knowledge representation but faces challenges in processing complex real-world data with noise. Conversely, deep learning (Black-Box systems) research breakthroughs in neural networks are notable, yet they lack reasoning and interpretability. Neuro-symbolic AI (NeSy), an emerging area of AI research, attempts to bridge this gap by integrating logical reasoning into neural networks, enabling them to learn and reason with symbolic representations. Though a long path remains, this strategy has made significant progress towards achieving common-sense reasoning by systems. This article conducts an extensive review of over 977 studies from prominent scientific databases (DBLP, ACL, IEEExplore, Scopus, PubMed, ICML, ICLR), thoroughly examining the multifaceted capabilities of Neuro-Symbolic AI, with a particular focus on its healthcare applications, particularly in drug discovery and protein engineering research. 
The survey addresses vital themes, including reasoning, explainability, integration strategies, 41 healthcare-related use cases, benchmarking, datasets, current approach limitations from both healthcare and broader perspectives, and proposed novel approaches for future experiments.",2 "Counterfactual explanations have emerged as a prominent method in Explainable Artificial Intelligence (XAI), providing intuitive and actionable insights into Machine Learning model decisions. In contrast to other traditional feature attribution methods that assess the importance of input variables, counterfactual explanations focus on identifying the minimal changes required to alter a model's prediction, offering a ``what-if'' analysis that is close to human reasoning. In the context of XAI, counterfactuals enhance transparency, trustworthiness and fairness, offering explanations that are not just interpretable but directly applicable in the decision-making processes. In this paper, we present a novel framework that integrates perturbation theory and statistical mechanics to generate minimal counterfactual explanations in explainable AI. We employ a local Taylor expansion of a Machine Learning model's predictive function and reformulate the counterfactual search as an energy minimization problem over a complex landscape. In sequence, we model the probability of candidate perturbations leveraging the Boltzmann distribution and use simulated annealing for iterative refinement. Our approach systematically identifies the smallest modifications required to change a model's prediction while maintaining plausibility. Experimental results on benchmark datasets for cybersecurity in Internet of Things environments, demonstrate that our method provides actionable, interpretable counterfactuals and offers deeper insights into model sensitivity and decision boundaries in high-dimensional spaces.",0 "Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless. In practice, such ""harmlessness"" behavior is mainly achieved by training models to reject harmful requests, such as ""Explain how to burn down my neighbor's house"", where the model appropriately declines to respond. However, this approach can inadvertently result in false refusal, where models reject benign queries as well, such as ""Tell me how to kill a Python process"". In this work, we demonstrate that prompting safety reflection before generating a response can mitigate false refusal behavior. Building on this finding, we introduce the Think-Before-Refusal (TBR) schema and conduct safety-aware instruction fine-tuning incorporating safety reflection. In an ablation study across 15 pre-trained models, we show that models fine-tuned with safety reflection significantly reduce false refusal behavior while maintaining safety and overall performance compared to those fine-tuned without safety reflection.",0 "Although DRL (deep reinforcement learning) has emerged as a powerful tool for making better decisions than existing hand-crafted communication protocols, it faces significant limitations: 1) Selecting the appropriate neural network architecture and setting hyperparameters are crucial for achieving desired performance levels, requiring domain expertise. 2) The decision-making process in DRL models is often opaque, commonly described as a 'black box.' 3) DRL models are data hungry. 
In response, we propose CP-AgentNet, the first framework designed to use generative agents for developing communication network protocols. This approach addresses these challenges by creating an autonomous system for protocol design, significantly reducing human effort. We developed LLMA (LLM-agents-based multiple access) and CPTCP (CP-Agent-based TCP) for heterogeneous environments. Our comprehensive simulations have demonstrated the efficient coexistence of LLMA and CPTCP with nodes using different types of protocols, as well as enhanced explainability.",0 "The desirable properties of explanations in information systems have fueled the demands for transparency in artificial intelligence (AI) outputs. To address these demands, the field of explainable AI (XAI) has put forth methods that can support human decision-making by explaining AI outputs. However, current empirical works present inconsistent findings on whether such explanations help to improve users' task performance in decision support systems (DSS). In this paper, we conduct a meta-analysis to explore how XAI affects human performance in classification tasks. Our results show an improvement in task performance through XAI-based decision support, though explanations themselves are not the decisive driver for this improvement. The analysis reveals that the studies' risk of bias moderates the effect of explanations in AI, while the explanation type appears to play only a negligible role. Our findings contribute to the human-computer interaction field by enhancing the understanding of human-XAI collaboration in DSS.",0 "With the availability of a virtually unlimited number of text documents in digital format, automatic comparison of textual data is essential for extracting meaningful insights that are difficult to identify manually. Many existing tools, including AI and large language models, struggle to provide precise and explainable insights into textual similarities. In many cases, they determine the similarity between documents as reflected by the text, rather than the similarities between the subjects being discussed in these documents. This study addresses these limitations by developing an n-gram analysis framework designed to compare documents automatically and uncover explainable similarities. A scoring formula assigns each n-gram a weight, where the weight is higher when the n-grams are more frequent in both documents, but is penalized when the n-grams are more frequent in the English language. Visualization tools like word clouds enhance the representation of these patterns, providing clearer insights. The findings demonstrate that this framework effectively uncovers similarities between text documents, offering explainable insights that are often difficult to identify manually. This non-parametric approach provides a deterministic solution for identifying similarities across various fields, including biographies, scientific literature, historical texts, and more. Code for the method is publicly available.",0 "As artificial intelligence (AI) systems become increasingly embedded in ethically sensitive domains such as education, healthcare, and transportation, the need to balance accuracy and interpretability in decision-making has become a central concern. Coarse Ethics (CE) is a theoretical framework that justifies coarse-grained evaluations, such as letter grades or warning labels, as ethically appropriate under cognitive and contextual constraints. However, CE has lacked mathematical formalization. 
This paper introduces Coarse Set Theory (CST), a novel mathematical framework that models coarse-grained decision-making using totally ordered structures and coarse partitions. CST defines hierarchical relations among sets and uses information-theoretic tools, such as Kullback-Leibler Divergence, to quantify the trade-off between simplification and information loss. We demonstrate CST through applications in educational grading and explainable AI (XAI), showing how it enables more transparent and context-sensitive evaluations. By grounding coarse evaluations in set theory and probabilistic reasoning, CST contributes to the ethical design of interpretable AI systems. This work bridges formal methods and human-centered ethics, offering a principled approach to balancing comprehensibility, fairness, and informational integrity in AI-driven decisions.",0 "Artificial intelligence systems based on large language models (LLMs) are increasingly used as agents that interact with users and with the world. To do so successfully, LLMs need to construct internal representations of the world and form probabilistic beliefs about those representations. To provide a user with personalized recommendations, for example, the LLM needs to gradually infer the user's preferences, over the course of multiple interactions. To evaluate whether contemporary LLMs are able to do so, we use the Bayesian inference framework from probability theory, which lays out the optimal way to update an agent's beliefs as it receives new information. We first show that the LLMs do not update their beliefs as expected from the Bayesian framework, and that consequently their predictions do not improve as expected as more information becomes available, even less so than we find is the case for humans. To address this issue, we teach the LLMs to reason in a Bayesian manner by training them to mimic the predictions of an optimal Bayesian model. We find that this approach not only significantly improves the LLM's performance on the particular recommendation task it is trained on, but also enables generalization to other tasks. This suggests that this method endows the LLM with broader Bayesian reasoning skills. More generally, our results indicate that LLMs can learn about reasoning strategies effectively and generalize those skills to new domains, which in part explains LLMs' empirical success.",0 "As robots and digital assistants are deployed in the real world, these agents must be able to communicate their decision-making criteria to build trust, improve human-robot teaming, and enable collaboration. While the field of explainable artificial intelligence (xAI) has made great strides to enable such communication, these advances often assume that one xAI approach is ideally suited to each problem (e.g., decision trees to explain how to triage patients in an emergency or feature-importance maps to explain radiology reports). This fails to recognize that users have diverse experiences or preferences for interaction modalities. In this work, we present two user-studies set in a simulated autonomous vehicle (AV) domain. We investigate (1) population-level preferences for xAI and (2) personalization strategies for providing robot explanations. We find significant differences between xAI modes (language explanations, feature-importance maps, and decision trees) in both preference (p < 0.01) and performance (p < 0.05). 
We also observe that a participant's preferences do not always align with their performance, motivating our development of an adaptive personalization strategy to balance the two. We show that this strategy yields significant performance gains (p < 0.05), and we conclude with a discussion of our findings and implications for xAI in human-robot interactions.",2 "Transportation systems will likely be transformed by the emergence of automated vehicles (AVs), which promise safe, convenient, and efficient mobility, especially if used in shared systems (shared AV or SAV). However, a potential tendency towards owning an AV as a private asset rather than using an SAV has been observed. This calls for research investigating individuals' attitudes towards AVs in comparison with SAVs to recognize the barriers to the public's tendency towards SAVs. To do so, the present study proposes a modeling framework based on theories in behavioral psychology to explain individuals' preference for owning an AV over using an SAV, built as a latent (subjective) psychometric construct, by three groups of explanatory latent constructs including: (i) desire for searching for benefits, i.e., extrinsic motive manifested in utilitarian beliefs; (ii) tendency towards seeking pleasure and joy, i.e., intrinsic motive reflected in hedonic beliefs; and (iii) attitude towards three configurations of shared mobility, i.e., experience with car and ridesharing, bikesharing, and public transit. Estimated on a sample dataset from the State of California, the findings can shed initial light on the psychological determinants of the public's attitude towards owning an AV versus using an SAV, which can furthermore provide policy implications of interest to policy makers and stakeholders. Of note, the findings reveal that the strongest factor influencing the preference for AV over SAV is hedonic beliefs, reflected in perceived enjoyment. This preference is next affected by utilitarian beliefs, particularly perceived benefit and trust of strangers, followed by attitude towards car and ride sharing.",0 "This study investigates cross-cultural differences in the perception of AI-driven chatbots between Germany and South Korea, focusing on topic dependency and explainability. Using a custom AI chat interface, ExplainitAI, we systematically examined these factors with quota-based samples from both countries (N = 297). Our findings revealed significant cultural distinctions: Korean participants exhibited higher trust, more positive user experience ratings, and more favorable perception of AI compared to German participants. Additionally, topic dependency was a key factor, with participants reporting lower trust in AI when addressing societally debated topics (e.g., migration) versus health or entertainment topics. These perceptions were further influenced by interactions among cultural context, content domains, and explainability conditions. The results highlight the importance of integrating cultural and contextual nuances into the design of AI systems, offering actionable insights for the development of culturally adaptive and explainable AI tailored to diverse user needs and expectations across domains.",2 "In Human Activity Recognition (HAR), understanding the intricacy of body movements within high-risk applications is essential. This study uses SHapley Additive exPlanations (SHAP) to explain the decision-making process of Graph Convolution Networks (GCNs) when classifying activities with skeleton data. 
We employ SHAP to explain two real-world datasets: one for cerebral palsy (CP) classification and the widely used NTU RGB+D 60 action recognition dataset. To test the explanation, we introduce a novel perturbation approach that modifies the model's edge importance matrix, allowing us to evaluate the impact of specific body key points on prediction outcomes. To assess the fidelity of our explanations, we employ informed perturbation, targeting body key points identified as important by SHAP and comparing them against random perturbation as a control condition. This perturbation enables a judgment on whether the body key points are truly influential or non-influential based on the SHAP values. Results on both datasets show that body key points identified as important through SHAP have the largest influence on the accuracy, specificity, and sensitivity metrics. Our findings highlight that SHAP can provide granular insights into the input feature contribution to the prediction outcome of GCNs in HAR tasks. This demonstrates the potential for more interpretable and trustworthy models in high-stakes applications like healthcare or rehabilitation.",0 "In recent years, Language Models for Code (LLM4Code) have significantly changed the landscape of software engineering (SE) on downstream tasks, such as code generation, by making software development more efficient. Therefore, a growing interest has emerged in further evaluating these Language Models to homogenize the quality assessment of generated code. As the current evaluation process can rely too heavily on accuracy-based metrics, practitioners often seek methods to interpret LLM4Code outputs beyond canonical benchmarks. While the majority of research reports on code generation effectiveness in terms of expected ground truth, scant attention has been paid to LLMs' explanations. In essence, the decision-making process to generate code is hard to interpret. To bridge this evaluation gap, we introduce code rationales (Code$Q$), a technique with rigorous mathematical underpinning, to identify subsets of tokens that can explain individual code predictions. We conducted a thorough Exploratory Analysis to demonstrate the method's applicability and a User Study to understand the usability of code-based explanations. Our evaluation demonstrates that Code$Q$ is a powerful interpretability method to explain how (less) meaningful input concepts (i.e., natural language particle `at') highly impact output generation. Moreover, participants of this study highlighted Code$Q$'s ability to show a causal relationship between the input and output of the model with readable and informative explanations on code completion and test generation tasks. Additionally, Code$Q$ also helps to uncover model rationale, facilitating comparison with a human rationale to promote a fair level of trust and distrust in the model.",2 "Alzheimer's disease (AD) affects 50 million people worldwide and is projected to reach 152 million by 2050. AD is characterized by cognitive decline due partly to disruptions in metabolic brain connectivity. Thus, early and accurate detection of metabolic brain network impairments is crucial for AD management. Central to identifying such impairments is FDG-PET data. Despite advancements, most graph-based studies using FDG-PET data rely on group-level analysis or thresholding. Yet, group-level analysis can veil individual differences and thresholding may overlook weaker but biologically critical brain connections. 
Additionally, machine learning-based AD prediction largely focuses on univariate outcomes, such as disease status. Here, we introduce explainable graph-theoretical machine learning (XGML), a framework employing kernel density estimation and dynamic time warping to construct individual metabolic brain graphs that capture the pairwise distances between brain regions and identify subgraphs most predictive of multivariate AD-related outcomes. Using FDG-PET data from the Alzheimer's Disease Neuroimaging Initiative, XGML builds metabolic brain graphs and uncovers subgraphs predictive of eight AD-related cognitive scores in new subjects. XGML shows robust performance, particularly for predicting scores measuring learning, memory, language, praxis, and orientation, such as CDRSB ($r = 0.74$), ADAS11 ($r = 0.73$), and ADAS13 ($r = 0.71$). Moreover, XGML unveils key edges jointly but differentially predictive of several AD-related outcomes; they may serve as potential network biomarkers for assessing overall cognitive decline. Together, we show the promise of graph-theoretical machine learning in biomarker discovery and disease prediction and its potential to improve our understanding of the network neural mechanisms underlying AD.",0 "Visual reasoning is crucial for multimodal large language models (MLLMs) to address complex chart queries, yet high-quality rationale data remains scarce. Existing methods have leveraged (M)LLMs for data generation, but direct prompting often yields limited precision and diversity. In this paper, we propose \textit{Chain of Functions (CoF)}, a novel programmatic reasoning data generation pipeline that utilizes freely-explored reasoning paths as supervision to ensure data precision and diversity. Specifically, it starts with human-free exploration among the atomic functions (e.g., maximum extraction and arithmetic operations) to generate diverse function chains, which are then translated into linguistic rationales and questions with only a moderately sized open-source LLM. \textit{CoF} provides multiple benefits: 1) Precision: function-governed generation reduces hallucinations compared to freeform generation; 2) Diversity: enumerating function chains enables varied question taxonomies; 3) Explainability: function chains serve as built-in rationales, allowing fine-grained evaluation beyond overall accuracy; 4) Practicality: eliminating reliance on extremely large models. Employing \textit{CoF}, we construct the \textit{ChartCoF} dataset, with 1.4k complex reasoning Q\&A for fine-grained analysis and 50k Q\&A for reasoning enhancement. The fine-grained evaluation on \textit{ChartCoF} reveals varying performance across question taxonomies for each MLLM, and the experiments also show that finetuning with \textit{ChartCoF} achieves state-of-the-art performance among same-scale MLLMs on widely used benchmarks. Furthermore, the novel paradigm of function-governed rationale generation in \textit{CoF} could inspire broader applications beyond charts.",0 "Concept Bottleneck Models (CBMs) are machine learning models that improve interpretability by grounding their predictions on human-understandable concepts, allowing for targeted interventions in their decision-making process. However, when intervened on, CBMs assume the availability of humans who can identify the need to intervene and always provide correct interventions. Both assumptions are unrealistic and impractical, considering labor costs and human error-proneness. 
In contrast, Learning to Defer (L2D) extends supervised learning by allowing machine learning models to identify cases where a human is more likely to be correct than the model, thus leading to deferring systems with improved performance. In this work, we draw inspiration from L2D and propose Deferring CBMs (DCBMs), a novel framework that allows CBMs to learn when an intervention is needed. To this end, we model DCBMs as a composition of deferring systems and derive a consistent L2D loss to train them. Moreover, by relying on a CBM architecture, DCBMs can explain why deferral occurs on the final task. Our results show that DCBMs achieve high predictive performance and interpretability at the cost of deferring more to humans.",0 "The design, operations, and management of water distribution systems (WDS) involve complex mathematical models. These models are continually improving due to computational advancements, leading to better decision-making and more efficient WDS management. However, the significant time and effort required for modeling, programming, and analyzing results remain substantial challenges. Another issue is the professional burden, which confines the interaction with models, databases, and other sophisticated tools to a small group of experts, thereby causing non-technical stakeholders to depend on these experts or make decisions without modeling support. Furthermore, explaining model results is challenging even for experts, as it is often unclear which conditions cause the model to reach a certain state or recommend a specific policy. The recent advancements in Large Language Models (LLMs) open doors for a new stage in human-model interaction. This study proposes a framework of plain language interactions with hydraulic and water quality models based on an LLM-EPANET architecture. This framework is tested with queries of increasing complexity to study the ability of LLMs to interact with WDS models, run complex simulations, and report simulation results. The performance of the proposed framework is evaluated across several categories of queries and hyper-parameter configurations, demonstrating its potential to enhance decision-making processes in WDS management.",0 "Malleable Glyph is a new visualization problem and a public challenge. It originated from UX research (namely from research on card sorting UX), but its applications can be diverse (UI, gaming, information presentation, maps, and others). Its essence is to carry as much information as possible in a defined planar and static area. The information should allow human observers to compare a pair of glyphs and arrive at one of three possible orderings: the first is ""greater"", the second is ""greater"", or both are equal. The glyphs should adhere to the Illiteracy Rule; in other words, the observer should ask themselves the question ""how much?"" rather than ""how many?"". This article motivates the technique, explains its details, and presents the public challenge, including the evaluation protocol. The article aims to call for ideas from other visualization and graphics researchers and practitioners and to invite everyone to participate in the challenge and, by doing so, move scientific knowledge forward.",0 "One approach to risk-limiting audits (RLAs) compares randomly selected cast vote records (CVRs) to votes read by human auditors from the corresponding ballot cards. 
Historically, such methods reduce audit sample sizes by considering how each sampled CVR differs from the corresponding true vote, not merely whether they differ. Here we investigate the latter approach, auditing by testing whether the total number of mismatches in the full set of CVRs exceeds the minimum number of CVR errors required for the reported outcome to be wrong (the ""CVR margin""). This strategy makes it possible to audit more social choice functions and simplifies RLAs conceptually, making them easier to explain than some other RLA approaches. The cost is larger sample sizes. ""Mismatch-based RLAs"" only require a lower bound on the CVR margin, which for some social choice functions is easier to calculate than the effect of particular errors. When the population rate of mismatches is low and the lower bound on the CVR margin is close to the true CVR margin, the increase in sample size is small. However, the increase may be very large when the errors include some that, if corrected, would widen the CVR margin rather than narrow it.",0 "Large Language Model (LLM) alignment conventionally relies on supervised fine-tuning or reinforcement learning-based alignment frameworks. These methods typically require labeled or preference datasets and involve updating model weights to align the LLM with the training objective or reward model. Meanwhile, in social sciences such as cross-cultural studies, factor analysis is widely used to uncover underlying dimensions or latent variables that explain observed patterns in survey data. The non-differentiable nature of these measurements derived from survey data renders the former alignment methods infeasible for alignment with cultural dimensions. To overcome this, we propose a parameter-efficient strategy that combines soft prompt tuning, which freezes the model parameters while modifying the input prompt embeddings, with Differential Evolution (DE), a black-box optimization method for cases where a differentiable objective is unattainable. This strategy ensures alignment consistency without the need for preference data or model parameter updates, significantly enhancing efficiency and mitigating overfitting. Our method demonstrates significant improvements in LLama-3-8B-Instruct's cultural dimensions across multiple regions, outperforming both the Naive LLM and the In-context Learning (ICL) baseline, and effectively bridges computational models with human cultural nuances.",2 "Understanding how humans perceive visual complexity is a key area of study in visual cognition. Previous approaches to modeling visual complexity assessments have often resulted in intricate, difficult-to-interpret algorithms that employ numerous features or sophisticated deep learning architectures. While these complex models achieve high performance on specific datasets, they often sacrifice interpretability, making it challenging to understand the factors driving human perception of complexity. Recently, Shen et al. (2024) proposed an interpretable segmentation-based model that accurately predicted complexity across various datasets, supporting the idea that complexity can be explained simply. In this work, we investigate the failure of their model to capture structural, color, and surprisal contributions to complexity. To this end, we propose Multi-Scale Sobel Gradient (MSG), which measures spatial intensity variations; Multi-Scale Unique Color (MUC), which quantifies colorfulness across multiple scales; and surprise scores generated using a Large Language Model. 
We test our features on existing benchmarks and a novel dataset (Surprising Visual Genome) containing surprising images from Visual Genome. Our experiments demonstrate that modeling complexity accurately is not as simple as previously thought, requiring additional perceptual and semantic factors to address dataset biases. Our model improves predictive performance while maintaining interpretability, offering deeper insights into how visual complexity is perceived and assessed. Our code, analysis and data are available at https://github.com/Complexity-Project/Complexity-in-Complexity.",0 "In this paper, the changes in magnetic properties with increasing disorder in the exchange-enhanced Pauli paramagnet YCo$_2$ are discussed. The structural disorder is initially introduced by rapid quenching, while further changes on the micro-/nanoscale are caused by high-pressure torsion (HPT). Values of the magnetic moment determined for the plastically deformed ribbons reach 0.10 $\mu_B$/Co atom (for a sample subjected to deformation at a pressure of 4 GPa) and 0.25 $\mu_B$/Co (6 GPa) at 2 K. The magnetic moment arises not only from the surface of the nanocrystals but also from their volume. Ab initio calculations explained the influence of chemical disorder and different types of structural defects on the electronic structure and magnetic properties of YCo$_2$-based Laves phases. The calculated magnetic ground states are in qualitative agreement with experimental results for all considered structures with point defects.",0 "The polynomial Szemer\'{e}di theorem implies that, for any $\delta \in (0,1)$, any family $\{P_1,\ldots, P_m\} \subset \mathbb{Z}[y]$ of nonconstant polynomials with constant term zero, and any sufficiently large $N$, every subset of $\{1,\ldots, N\}$ of cardinality at least $\delta N$ contains a nontrivial configuration of the form $\{x,x+P_1(y),\ldots, x+P_m(y)\}$. When the polynomials are assumed independent, one can expect a sharper result to hold over finite fields, special cases of which were proven recently, culminating with arXiv:1802.02200, which deals with the general case of independent polynomials. One goal of this article is to explain these theorems as the result of joint ergodicity in the presence of asymptotic total ergodicity. Guided by this concept, we establish, over general finite commutative rings, a version of the polynomial Szemer\'{e}di theorem for independent polynomials $\{P_1,\ldots, P_m\} \subset \mathbb{Z}[y_1,\ldots, y_n]$, deriving new combinatorial consequences, such as the following. Let $\mathcal R$ be a collection of finite commutative rings subject to a mild condition on their torsion. There exists $\gamma \in (0,1)$ such that, for every $R \in \mathcal R$, every subset $A \subset R$ of cardinality at least $|R|^{1-\gamma}$ contains a nontrivial configuration $\{x,x+P_1(y),\ldots, x+P_m(y)\}$ for some $(x,y) \in R \times R^n$, and, moreover, for any subsets $A_0,\ldots, A_m \subset R$ such that $|A_0|\cdots |A_m| \geq |R|^{(m+1)(1-\gamma)}$, there is a nontrivial configuration $(x, x+P_1(y), \ldots, x+P_m(y)) \in A_0\times \cdots \times A_m$. 
The fact that general rings have zero divisors is the source of many obstacles, which we overcome; for example, by studying character sums, we develop a bound on the number of roots of an integer polynomial over a general finite commutative ring, a result which is of independent interest.",0 "As narrative extraction systems grow in complexity, establishing user trust through interpretable and explainable outputs becomes increasingly critical. This paper presents an evaluation of an Explainable Artificial Intelligence (XAI) system for narrative map extraction that provides meaningful explanations across multiple levels of abstraction. Our system integrates explanations based on topical clusters for low-level document relationships, connection explanations for event relationships, and high-level structure explanations for overall narrative patterns. In particular, we evaluate the XAI system through a user study involving 10 participants who examined narratives from the 2021 Cuban protests. The analysis of the results demonstrates that the explanations increased participants' trust in the system's decisions, with connection explanations and important event detection proving particularly effective at building user confidence. Survey responses indicate that the multi-level explanation approach helped users develop appropriate trust in the system's narrative extraction capabilities. This work advances the state-of-the-art in explainable narrative extraction while providing practical insights for developing reliable narrative extraction systems that support effective human-AI collaboration.",2 "High-quality explanations of neural networks (NNs) should exhibit two key properties. Completeness ensures that they accurately reflect a network's function, and interpretability makes them understandable to humans. Many existing methods provide explanations of individual neurons within a network. In this work we provide evidence that for AlexNet pretrained on ImageNet, neuron-based explanation methods sacrifice both completeness and interpretability compared to activation principal components. Neurons are a poor basis for AlexNet embeddings because they do not account for the distributed nature of these representations. By examining two quantitative measures of completeness and conducting a user study to measure interpretability, we show that the most important principal components provide more complete and interpretable explanations than the most important neurons. Much of the activation variance may be explained by examining relatively few high-variance PCs, as opposed to studying every neuron. These principal components also strongly affect network function, and are significantly more interpretable than neurons. Our findings suggest that explanation methods for networks like AlexNet should avoid using neurons as a basis for embeddings and instead choose a basis, such as principal components, which accounts for the high-dimensional and distributed nature of a network's internal representations. An interactive demo and code are available at https://ndey96.github.io/neuron-explanations-sacrifice.",2 "Learning to reason and carefully explain arguments is central to students' cognitive, mathematical, and computational thinking development. This is particularly challenging in problems under uncertainty and in Bayesian reasoning. 
With the new generation of large language models (LLMs) capable of reasoning using Chain-of-Thought (CoT), there is an excellent opportunity to learn with them as they explain their reasoning through a dialogue with their artificial internal voice, making them an engaging vehicle for learning Bayesian reasoning. Furthermore, given that different LLMs sometimes arrive at opposite solutions, CoT creates opportunities for deeper learning through detailed comparisons of their reasoning. However, we found that, unlike humans, they do not autonomously explain using ecologically valid strategies like natural frequencies, whole objects, and embodied heuristics. This is unfortunate, as these strategies help humans avoid critical mistakes and have proven pedagogical value in Bayesian reasoning. In order to overcome these biases and aid understanding and learning, we included prompts that induce LLMs to use these strategies. We found that LLMs with CoT incorporate them, but not consistently. They show persistent biases towards symbolic reasoning and an avoidance of, or even aversion to, ecologically valid strategies.",0 "The rapid advancements in generative technology have emerged as a double-edged sword. While offering powerful tools that enhance convenience, they also pose significant social concerns. On the defense side, current synthetic image detection methods often lack artifact-level textual interpretability and are overly focused on image manipulation detection, while current datasets usually suffer from outdated generators and a lack of fine-grained annotations. In this paper, we introduce SynthScars, a high-quality and diverse dataset consisting of 12,236 fully synthetic images with human-expert annotations. It features 4 distinct image content types, 3 categories of artifacts, and fine-grained annotations covering pixel-level segmentation, detailed textual explanations, and artifact category labels. Furthermore, we propose LEGION (LEarning to Ground and explain for Synthetic Image detectiON), a multimodal large language model (MLLM)-based image forgery analysis framework that integrates artifact detection, segmentation, and explanation. Building upon this capability, we further explore LEGION as a controller, integrating it into image refinement pipelines to guide the generation of higher-quality and more realistic images. Extensive experiments show that LEGION outperforms existing methods across multiple benchmarks, particularly surpassing the second-best traditional expert on SynthScars by 3.31% in mIoU and 7.75% in F1 score. Moreover, the refined images generated under its guidance exhibit stronger alignment with human preferences. The code, model, and dataset will be released.",2 "Explainability is a critical factor influencing the wide deployment of deep vision models (DVMs). Concept-based post-hoc explanation methods can provide both global and local insights into model decisions. However, current methods in this field face challenges in that they cannot flexibly and automatically construct accurate and sufficient linguistic explanations for global concepts and local circuits. In particular, the intrinsic polysemanticity of semantic Visual Concepts (VCs) impedes the interpretability of concepts and DVMs, an issue that has been severely underestimated. In this paper, we propose a Chain-of-Explanation (CoE) approach to address these issues. Specifically, CoE automates the decoding and description of VCs to construct global concept explanation datasets. 
Further, to alleviate the effect of polysemanticity on model explainability, we design a concept polysemanticity disentanglement and filtering mechanism to distinguish the most contextually relevant concept atoms. In addition, a Concept Polysemanticity Entropy (CPE) is formulated as a measure of model interpretability to quantify the degree of concept uncertainty. The modeling of deterministic concepts is upgraded to uncertain concept atom distributions. Finally, CoE automatically enables linguistic local explanations of the decision-making process of DVMs by tracing the concept circuit. GPT-4o and human-based experiments demonstrate the effectiveness of CPE and the superiority of CoE, achieving an average absolute improvement of 36% in terms of explainability scores.",0 "Semantic role labeling (SRL) enriches many downstream applications, e.g., machine translation, question answering, summarization, and stance/belief detection. However, building multilingual SRL models is challenging due to the scarcity of semantically annotated corpora for multiple languages. Moreover, state-of-the-art SRL projection (XSRL) based on large language models (LLMs) yields output that is riddled with spurious role labels. Remediation of such hallucinations is not straightforward due to the lack of explainability of LLMs. We show that hallucinated role labels are related to naturally occurring divergence types that interfere with initial alignments. We implement Divergence-Aware Hallucination-Remediated SRL projection (DAHRS), leveraging linguistically-informed alignment remediation followed by greedy First-Come First-Assign (FCFA) SRL projection. DAHRS improves the accuracy of SRL projection without additional transformer-based machinery, beating XSRL in both human and automatic comparisons, and advancing beyond headwords to accommodate phrase-level SRL projection (e.g., EN-FR, EN-ES). Using CoNLL-2009 as our ground truth, we achieve a higher word-level F1 over XSRL: 87.6% vs. 77.3% (EN-FR) and 89.0% vs. 82.7% (EN-ES). Human phrase-level assessments yield 89.1% (EN-FR) and 91.0% (EN-ES). We also define a divergence metric to adapt our approach to other language pairs (e.g., English-Tagalog).",0 "Empirical contact networks or interaction networks demonstrate peculiar characteristics stemming from the fundamental social, psychological, and physical mechanisms governing human interactions. Although these mechanisms are complex, we test whether we are able to reproduce some dynamical properties of these empirical networks from relatively simple models. In this study, we perform simulations for a range of 2D models of particle dynamics, namely the Random Walk, Active Brownian Particles, and Vicsek models, to generate artificial contact networks. We investigate temporal properties of these contact networks: the distributions of contact durations, inter-contact durations, and the number of contacts per pair of particles. We demonstrate that the distribution of inter-contact durations can be recovered by the dynamics of these simple crowd particle models, and show that it is simply related to the well-known first-return process, which explains the -3/2 exponent that is found in both the numerical models and empirical contact networks.",0 "Despite advances in Automatic Speech Recognition (ASR), transcription errors persist and require manual correction. Confidence scores, which indicate the certainty of ASR results, could assist users in identifying and correcting errors. 
This study evaluates the reliability of confidence scores for error detection through a comprehensive analysis of end-to-end ASR models and a user study with 36 participants. The results show that while confidence scores correlate with transcription accuracy, their error detection performance is limited. Classifiers frequently miss errors or generate many false positives, undermining their practical utility. Confidence-based error detection neither improved correction efficiency nor was perceived as helpful by participants. These findings highlight the limitations of confidence scores and the need for more sophisticated approaches to improve user interaction and explainability of ASR results.",2 "People's decision-making abilities often fail to improve or may even erode when they rely on AI for decision-support, even when the AI provides informative explanations. We argue this is partly because people intuitively seek contrastive explanations, which clarify the difference between the AI's decision and their own reasoning, while most AI systems offer ""unilateral"" explanations that justify the AI's decision but do not account for users' thinking. To align human-AI knowledge on decision tasks, we introduce a framework for generating human-centered contrastive explanations that explain the difference between AI's choice and a predicted, likely human choice about the same task. Results from a large-scale experiment (N = 628) demonstrate that contrastive explanations significantly enhance users' independent decision-making skills compared to unilateral explanations, without sacrificing decision accuracy. Amid rising deskilling concerns, our research demonstrates that incorporating human reasoning into AI design can foster human skill development.",0 "Early detection of depression from social media data offers a valuable opportunity for timely intervention. However, this task poses significant challenges, requiring both professional medical knowledge and the development of accurate and explainable models. In this paper, we propose LLM-MTD (Large Language Model for Multi-Task Depression Detection), a novel approach that leverages a pre-trained large language model to simultaneously classify social media posts for depression and generate textual explanations grounded in medical diagnostic criteria. We train our model using a multi-task learning framework with a combined loss function that optimizes both classification accuracy and explanation quality. We evaluate LLM-MTD on the benchmark Reddit Self-Reported Depression Dataset (RSDD) and compare its performance against several competitive baseline methods, including traditional machine learning and fine-tuned BERT. Our experimental results demonstrate that LLM-MTD achieves state-of-the-art performance in depression detection, showing significant improvements in AUPRC and other key metrics. Furthermore, human evaluation of the generated explanations reveals their relevance, completeness, and medical accuracy, highlighting the enhanced interpretability of our approach. This work contributes a novel methodology for depression detection that combines the power of large language models with the crucial aspect of explainability.",2 "Natural language misinformation detection approaches have been, to date, largely dependent on sequence classification methods, producing opaque systems in which the reasons behind classification as misinformation are unclear. 
While an effort has been made in the area of automated fact-checking to propose explainable approaches to the problem, this is not the case for automated reason-checking systems. In this paper, we propose a new explainable framework for both factual and rational misinformation detection based on the theory of Argumentation Schemes and Critical Questions. For that purpose, we create and release NLAS-CQ, the first corpus combining 3,566 textbook-like natural language argumentation scheme instances and 4,687 corresponding answers to critical questions related to these arguments. On the basis of this corpus, we implement and validate our new framework, which combines classification with question answering to analyse arguments in search of misinformation, and provides explanations to the human user in the form of critical questions.",0 "The ever-growing realism and quality of generated videos make it increasingly harder for humans to spot deepfake content, forcing them to rely more and more on automatic deepfake detectors. However, deepfake detectors are also prone to errors, and their decisions are not explainable, leaving humans vulnerable to deepfake-based fraud and misinformation. To this end, we introduce ExDDV, the first dataset and benchmark for Explainable Deepfake Detection in Video. ExDDV comprises around 5.4K real and deepfake videos that are manually annotated with text descriptions (to explain the artifacts) and clicks (to point out the artifacts). We evaluate a number of vision-language models on ExDDV, performing experiments with various fine-tuning and in-context learning strategies. Our results show that text and click supervision are both required to develop robust explainable models for deepfake videos, which are able to localize and describe the observed artifacts. Our novel dataset and code to reproduce the results are available at https://github.com/vladhondru25/ExDDV.",0 "Chinese porcelain holds immense historical and cultural value, making its accurate classification essential for archaeological research and cultural heritage preservation. Traditional classification methods rely heavily on expert analysis, which is time-consuming, subjective, and difficult to scale. This paper explores the application of deep learning (DL) and transfer learning techniques to automate the classification of porcelain artifacts across four key attributes: dynasty, glaze, ware, and type. We evaluate four Convolutional Neural Networks (CNNs) - ResNet50, MobileNetV2, VGG16, and InceptionV3 - comparing their performance with and without pre-trained weights. Our results demonstrate that transfer learning significantly enhances classification accuracy, particularly for complex tasks like type classification, where models trained from scratch exhibit lower performance. MobileNetV2 and ResNet50 consistently achieve high accuracy and robustness across all tasks, while VGG16 struggles with more diverse classifications. We further discuss the impact of dataset limitations and propose future directions, including domain-specific pre-training, integration of attention mechanisms, explainable AI methods, and generalization to other cultural artifacts.",0 "We introduce a new fundamental algorithm called Matrix-POAFD to solve the matrix least square problem. The method is based on the matching pursuit principle. It directly extracts, from the given features (the column vectors of the measurement matrix) and in order of importance, the features most decisive for the observation vector. 
With computational efficiency competitive with existing sophisticated least square solvers, the proposed method, owing to its explicit and iterative algorithmic process, has the advantage of trading off minimum norms against tolerable error scales. The method builds on recently developed studies in functional space contexts. The second main contribution, also algorithmic, is a two-step iterative computation method for the pseudo-inverse. We show that consecutively solving two least square problems, one with respect to $X$ and the other to $X^*,$ results in the minimum norm least square solution. The two-step algorithm can also be combined into one that solves a single least square problem, but with respect to $XX^\ast.$ The result is extended to the functional formulation as well. To better explain the idea, and to keep the exposition self-contained, we give short surveys with proofs of key results on closely related subjects, including solutions in the reproducing kernel Hilbert space setting, AFD-type sparse representation in terms of matching pursuit, the general ${\mathcal H}$-$H_K$ formulation, and the pseudo-inverse of a bounded linear operator in Hilbert spaces.",2 "Human Mesh Recovery (HMR) from a single RGB image is a highly ambiguous problem, as an infinite set of 3D interpretations can explain the 2D observation equally well. Nevertheless, most HMR methods overlook this issue and make a single prediction without accounting for this ambiguity. A few approaches generate a distribution of human meshes, enabling the sampling of multiple predictions; however, none of them is competitive with the latest single-output model when making a single prediction. This work proposes a new approach based on masked generative modeling. By tokenizing the human pose and shape, we formulate the HMR task as generating a sequence of discrete tokens conditioned on an input image. We introduce MEGA, a MaskEd Generative Autoencoder trained to recover human meshes from images and partial human mesh token sequences. Given an image, our flexible generation scheme allows us to predict a single human mesh in deterministic mode or to generate multiple human meshes in stochastic mode. Experiments on in-the-wild benchmarks show that MEGA achieves state-of-the-art performance in deterministic and stochastic modes, outperforming single-output and multi-output approaches.",0 "Understanding how neural dynamics shape cognitive experiences remains a central challenge in neuroscience and psychiatry. Here, we present a novel framework leveraging state-to-output controllability from dynamical systems theory to model the interplay between cognitive perturbations, neural activity, and subjective experience. We demonstrate that large-scale fMRI signals are constrained to low-dimensional manifolds, where affective and cognitive states are naturally organized. Furthermore, we provide a theoretically robust method to estimate the controllability Gramian from steady-state neural responses, offering a direct measure of the energy required to steer cognitive outcomes. In five healthy participants viewing 2,185 emotionally evocative short videos, our analyses reveal a strong alignment between neural activations and affective ratings, with an average correlation of $r \approx 0.7$. 
In a clinical cohort of 255 patients with major depressive disorder, biweekly Hamilton Rating Scale trajectories over 11 weeks significantly mapped onto these manifolds, explaining approximately 20% more variance than chance ($p < 10^{-10}$; numerically better than chance in 93% of subjects, reaching statistical significance in one-third of them). Our work bridges dynamical systems theory and clinical neuroscience, providing a principled approach to optimize mental health treatments by targeting the most efficient neural pathways for cognitive change.",2 "Transparency and explainability are important features that responsible autonomous vehicles should possess, particularly when interacting with humans, and causal reasoning offers a strong basis to provide these qualities. However, even if one assumes that agents act to maximise some concept of reward, it is difficult to make accurate causal inferences about agent planning without capturing what is of importance to the agent. Thus, our work aims to learn a weighting of reward metrics for agents such that explanations for agent interactions can be causally inferred. We validate our approach quantitatively and qualitatively across three real-world driving datasets, demonstrating a functional improvement over previous methods and competitive performance across evaluation metrics.",0 "Scene understanding is critical for various downstream tasks in autonomous driving, including facilitating driver-agent communication and enhancing human-centered explainability of autonomous vehicle (AV) decisions. This paper evaluates the capability of four multimodal large language models (MLLMs), including relatively small models, to understand scenes in a zero-shot, in-context learning setting. Additionally, we explore whether combining these models using an ensemble approach with majority voting can enhance scene understanding performance. Our experiments demonstrate that GPT-4o, the largest model, outperforms the others in scene understanding. However, the performance gap between GPT-4o and the smaller models is relatively modest, suggesting that advanced techniques such as improved in-context learning, retrieval-augmented generation (RAG), or fine-tuning could further optimize the smaller models' performance. We also observe mixed results with the ensemble approach: while some scene attributes show improvement in performance metrics such as F1-score, others experience a decline. These findings highlight the need for more sophisticated ensemble techniques to achieve consistent gains across all scene attributes. This study underscores the potential of leveraging MLLMs for scene understanding and provides insights into optimizing their performance for autonomous driving applications.",0 "Humans tend to form quick subjective first impressions of non-physical attributes, such as perceived trustworthiness or attractiveness, when seeing someone's face. To understand what variations in a face lead to different subjective impressions, this work uses generative models to find semantically meaningful edits to a face image that change perceived attributes. Unlike prior work that relied on statistical manipulation in feature space, our end-to-end framework considers trade-offs between preserving identity and changing perceptual attributes. It maps latent space directions to changes in attribute scores, enabling a perceptually significant identity-preserving transformation of any input face along an attribute axis according to a target change. 
We train on real and synthetic faces and evaluate on in-domain and out-of-domain images using predictive models and human ratings, demonstrating the generalizability of our approach. Ultimately, such a framework can be used to understand and explain trends and biases in the subjective interpretation of faces that are not dependent on the subject's identity. We demonstrate this with improved model performance on first-impression prediction when the training data are augmented with images generated by the proposed approach, providing a wider range of inputs from which to learn associations between facial features and subjective attributes.",0 "This study delves into the mechanisms that spark user curiosity and drive active engagement within public Telegram groups. By analyzing approximately 6 million messages from 29,196 users across 409 groups, we identify and quantify the key factors that stimulate users to actively participate (i.e., send messages) in group discussions. These factors include social influence, novelty, complexity, uncertainty, and conflict, all measured through metrics derived from message sequences and user participation over time. After clustering the messages, we apply explainability techniques to assign meaningful labels to the clusters. This approach uncovers macro categories representing distinct curiosity stimulation profiles, each characterized by a unique combination of various stimuli. Social influence from peers and influencers drives engagement for some users, while for others, rare media types or a diverse range of senders and media sparks curiosity. Analyzing these patterns, we found that user curiosity stimuli are mostly stable, but that curiosity occasionally shifts as the time since the initial message increases. A graph-based analysis of influence networks reveals that users motivated by direct social influence tend to occupy more peripheral positions, while those who are not stimulated by any specific factors are often more central, potentially acting as initiators and conversation catalysts. These findings contribute to understanding information dissemination and spread processes on social media networks, potentially informing more effective communication strategies.",0 "As large language models (LLMs) become increasingly capable, ensuring that their self-generated explanations are faithful to their internal decision-making process is critical for safety and oversight. In this work, we conduct a comprehensive counterfactual faithfulness analysis across 62 models from 8 families, encompassing both pretrained and instruction-tuned variants and significantly extending prior studies of counterfactual tests. We introduce phi-CCT, a simplified variant of the Correlational Counterfactual Test, which avoids the need for token probabilities while explaining most of the variance of the original test. Our findings reveal clear scaling trends: larger models are consistently more faithful on our metrics. However, when comparing instruction-tuned and human-imitated explanations, we find that observed differences in faithfulness can often be attributed to explanation verbosity, leading to shifts along the true-positive/false-positive Pareto frontier. While instruction-tuning and prompting can influence this trade-off, we find limited evidence that they fundamentally expand the frontier of explanatory faithfulness beyond what is achievable with pretrained models of comparable size. 
Our analysis highlights the nuanced relationship between instruction-tuning, verbosity, and the faithful representation of model decision processes.",0 "The synergy between virtual reality (VR) and artificial intelligence (AI), specifically deep learning (DL)-based cybersickness detection models, has ushered in unprecedented advancements in immersive experiences by automatically detecting cybersickness severity and adaptively applying various mitigation techniques, offering a smooth and comfortable VR experience. While this DL-enabled cybersickness detection method provides promising solutions for enhancing user experiences, it also introduces new risks since these models are vulnerable to adversarial attacks; a small perturbation of the input data that is visually undetectable to human observers can fool the cybersickness detection model and trigger unexpected mitigation, thus disrupting user immersive experiences (UIX) and even posing safety risks. In this paper, we present a new type of VR attack, i.e., a cybersickness attack, which successfully stops the triggering of cybersickness mitigation by fooling DL-based cybersickness detection models and dramatically hinders the UIX. Next, we propose a novel explainable artificial intelligence (XAI)-guided cybersickness attack detection framework to detect such attacks in VR to ensure UIX and a comfortable VR experience. We evaluate the proposed attack and the detection framework using two state-of-the-art open-source VR cybersickness datasets: the Simulation 2021 and Gameplay datasets. Finally, to verify the effectiveness of our proposed method, we implement the attack and the XAI-based detection using a testbed with a custom-built VR roller coaster simulation with an HTC Vive Pro Eye headset and perform a user study. Our study shows that such an attack can dramatically hinder the UIX. However, our proposed XAI-guided cybersickness attack detection can successfully detect cybersickness attacks and trigger the proper mitigation, effectively reducing VR cybersickness.",2 "Classical radiomic features have been designed to describe image appearance and intensity patterns. These features are directly interpretable and readily understood by radiologists. Compared with end-to-end deep learning (DL) models, lower dimensional parametric models that use such radiomic features offer enhanced interpretability but lower comparative performance in clinical tasks. In this study, we propose an approach in which the performance of a standard logistic regression model is substantially improved by learning to select radiomic features for individual patients from a pool of candidate features. This approach has the potential to maintain interpretability while offering performance comparable to DL. We also propose to expand the feature pool by generating a patient-specific healthy persona via mask-inpainting using a denoising diffusion model trained on healthy subjects. Such a pathology-free baseline feature set allows further opportunities for novel feature discovery and improved condition classification. We demonstrate our method on multiple clinical tasks of classifying general abnormalities, anterior cruciate ligament tears, and meniscus tears. Experimental results demonstrate that our approach achieved comparable or even superior performance to state-of-the-art DL approaches while offering added interpretability by using radiomic features extracted from images and supplemented by generating healthy personas. 
Example clinical cases are discussed in depth to demonstrate interpretability-enabled utilities such as human-explainable feature discovery and patient-specific location/view selection. These findings highlight the potential of combining subject-specific feature selection with generative models to augment radiomic analysis for more interpretable decision-making. The code is available at: https://github.com/YaxiiC/RadiomicsPersona.git",2 "Both empirical and theoretical objective science can only access relations, revealing nothing about the intrinsic nature of the entities ""in relation"". We typically refer to these entities as ""matter"", assuming their nature is irrelevant and that relational structures alone explain all phenomena, including consciousness. This would imply that consciousness arises not from matter itself but solely from its structural configurations. Consequently, if two worlds possess isomorphic structures and obey identical dynamical laws, isomorphic systems within these worlds should be equally sentient or insentient. I demonstrate that if this premise held true, the memories within an observer's brain would bear no correlation to the external world. Yet since we do acquire knowledge of the external world, structure alone must be insufficient: there exists something beyond relations, the intrinsic essence of the ""stuff in relation"". This essence imbues the observer's structure with sentience, metaphorically ""breathing fire"" into it. In turn, the observer confers physical meaning upon the world's structures. Remarkably, this conclusion emerges directly from physics and mathematics. Without acknowledging this intrinsic element, we would be incapable not only of subjective experience but also of acquiring any objective knowledge about the physical world.",0 "Federated learning (FL) enables participants to store data locally while collaborating in training, yet it remains vulnerable to privacy attacks, such as data reconstruction. Existing differential privacy (DP) technologies inject noise dynamically into the training process to mitigate the impact of excessive noise. However, this dynamic scheduling is often grounded in factors indirectly related to privacy, making it difficult to clearly explain the intricate relationship between dynamic noise adjustments and privacy requirements. To address this issue, we propose FedSDP, a novel and explainable DP-based privacy protection mechanism that guides noise injection based on privacy contribution. Specifically, FedSDP leverages Shapley values to assess the contribution of private attributes to local model training and dynamically adjusts the amount of noise injected accordingly. By providing theoretical insights into the injection of varying scales of noise into local training, FedSDP enhances interpretability. Extensive experiments demonstrate that FedSDP can achieve a superior balance between privacy preservation and model performance, surpassing state-of-the-art (SOTA) solutions.",2 "Machine learning (ML) has been leveraged to tackle a diverse range of tasks in almost all branches of nuclear engineering. Many of the successes in ML applications can be attributed to the recent performance breakthroughs in deep learning, the growing availability of computational power, data, and easy-to-use ML libraries. However, these empirical successes have often outpaced our formal understanding of the ML algorithms. An important but underrated area is uncertainty quantification (UQ) of ML. 
ML-based models are subject to approximation uncertainty when they are used to make predictions, due to sources including, but not limited to, data noise, data coverage, extrapolation, imperfect model architecture, and the stochastic training process. The goal of this paper is to clearly explain and illustrate the importance of UQ of ML. We will elucidate the differences in the basic concepts of UQ of physics-based models and data-driven ML models. Various sources of uncertainties in physical modeling and data-driven modeling will be discussed, demonstrated, and compared. We will also present and demonstrate a few techniques to quantify the ML prediction uncertainties. Finally, we will discuss the need for building a verification, validation, and UQ framework to establish ML credibility.",0 "Recently, Large Language Models (LLMs) have demonstrated remarkable performance as zero-shot task planners for robotic manipulation tasks. However, the open-loop nature of previous works makes LLM-based planning error-prone and fragile. On the other hand, failure detection approaches for closed-loop planning are often limited by task-specific heuristics or follow the unrealistic assumption that the prediction is trustworthy all the time. As general-purpose reasoning machines, LLMs and Multimodal Large Language Models (MLLMs) are promising for detecting failures. However, the appropriateness of the aforementioned assumption diminishes due to the notorious hallucination problem. In this work, we attempt to mitigate these issues by introducing a framework for closed-loop LLM-based planning called KnowLoop, backed by an uncertainty-based MLLM failure detector that is agnostic to the specific MLLMs or LLMs used. Specifically, we evaluate three different ways of quantifying the uncertainty of MLLMs, namely token probability, entropy, and self-explained confidence, as primary metrics based on three carefully designed representative prompting strategies. With a self-collected dataset including various manipulation tasks and an LLM-based robot system, our experiments demonstrate that token probability and entropy are more reflective of failures than self-explained confidence. By setting an appropriate threshold to filter out uncertain predictions and actively seeking human help, the accuracy of failure detection can be significantly enhanced. This improvement boosts the effectiveness of closed-loop planning and the overall success rate of tasks.",0 "The demand for property valuation has attracted significant attention from sellers, buyers, and customers applying for loans. Reviews of existing approaches have revealed shortcomings: they cannot handle situations with missing values and lack interpretability, which prevents their use in real-world applications. To address these challenges, we propose an LLM-Generated EXplainable PRopErty valuation SyStem with neighbor imputation called EXPRESS, which provides a customizable missing-value imputation technique and addresses the opaqueness of predictions by providing feature-wise explanations generated by an LLM. A dynamic nearest neighbor search finds similar properties for different application scenarios according to the property configuration set by users (e.g., house age as the criterion for houses in rural areas, and location for buildings in urban areas). 
Motivated by the human appraisal procedure, we generate feature-wise explanations to provide users with a more intuitive understanding of the prediction results.",0 "Reading and understanding code are fundamental skills for novice programmers, and especially important with the growing prevalence of AI-generated code and the need to evaluate its accuracy and reliability. ``Explain in Plain English'' questions are a widely used approach for assessing code comprehension, but providing automated feedback, particularly on comprehension levels, is a challenging task. This paper introduces a novel method for automatically assessing the comprehension level of responses to ``Explain in Plain English'' questions. Central to this is the ability to distinguish between two response types: multi-structural, where students describe the code line-by-line, and relational, where they explain the code's overall purpose. Using a Large Language Model (LLM) to segment both the student's description and the code, we aim to determine whether the student describes each line individually (many segments) or the code as a whole (fewer segments). We evaluate this approach's effectiveness by comparing segmentation results with human classifications, achieving substantial agreement. We conclude with how this approach, which we release as an open source Python package, could be used as a formative feedback mechanism.",0 "Spurred by the demand for interpretable models, research on eXplainable AI for language technologies has experienced significant growth, with feature attribution methods emerging as a cornerstone of this progress. While prior work in NLP explored such methods for classification tasks and textual applications, explainability intersecting generation and speech is lagging, with existing techniques failing to account for the autoregressive nature of state-of-the-art models and to provide fine-grained, phonetically meaningful explanations. We address this gap by introducing Spectrogram Perturbation for Explainable Speech-to-text Generation (SPES), a feature attribution technique applicable to sequence generation tasks with autoregressive models. SPES provides explanations for each predicted token based on both the input spectrogram and the previously generated tokens. Extensive evaluation on speech recognition and translation demonstrates that SPES generates explanations that are faithful and plausible to humans.",0 "Recent cosmological observations suggest that the dark energy equation of state may have changed in the latter stages of cosmic history. We introduce a quintessence scenario, termed bounded dark energy, capable of explaining this feature in a technically natural way. Our approach is motivated from a bottom-up perspective, based on the concept of mirage cut-off, where we demonstrate the stability of the quintessence potential against large quantum corrections. At the same time, the bounded dark energy framework aligns well with top-down considerations motivated from quantum gravity arguments. We employ both human-driven insights and machine learning techniques to identify explicit realizations of bounded dark energy models. We then perform an analysis based on Markov Chain Monte-Carlo to assess their predictions against CMB, galaxy surveys, and supernova data, showing that bounded dark energy provides a good fit to current observations. 
We also discuss how upcoming measurements can further test and refine our proposal.",2 "README and CONTRIBUTING files can serve as the first point of contact for potential contributors to free/libre and open source software (FLOSS) projects. Prominent open source software organizations such as Mozilla, GitHub, and the Linux Foundation advocate that projects provide community-focused and process-oriented documentation early to foster recruitment and activity. In this paper we investigate the introduction of these documents in FLOSS projects, including whether early documentation conforms to these recommendations or explains subsequent activity. We use a novel dataset of FLOSS projects packaged by the Debian GNU/Linux distribution and conduct a quantitative analysis to examine README (n=4226) and CONTRIBUTING (n=714) files when they are first published into projects' repositories. We find that projects create minimal READMEs proactively, but often publish CONTRIBUTING files following an influx of contributions. The initial versions of these files rarely focus on community development, instead containing descriptions of project procedure for library usage or code contribution. The findings suggest that FLOSS projects do not create documentation with community-building in mind, but rather favor brevity and standardized instructions.",0 "Human Activity Recognition (HAR) using wearable inertial measurement unit (IMU) sensors can revolutionize healthcare by enabling continual health monitoring, disease prediction, and routine recognition. Despite the high accuracy of Deep Learning (DL) HAR models, their robustness to real-world variabilities remains untested, as they have primarily been trained and tested on limited lab-confined data. In this study, we isolate subject, device, position, and orientation variability to determine their effect on DL HAR models and assess the robustness of these models in real-world conditions. We evaluated the DL HAR models using the HARVAR and REALDISP datasets, providing a comprehensive discussion on the impact of variability on data distribution shifts and changes in model performance. Our experiments measured shifts in data distribution using Maximum Mean Discrepancy (MMD) and observed DL model performance drops due to variability. We find that the studied variabilities affect DL HAR models differently and that there is an inverse relationship between data distribution shifts and model performance. The compounding effect of variability was analyzed, and the implications of variabilities in real-world scenarios were highlighted. MMD proved an effective metric for calculating data distribution shifts and explained the drop in performance due to variabilities in the HARVAR and REALDISP datasets. Combining our understanding of variability with an evaluation of its effects will facilitate the development of more robust DL HAR models and optimal training techniques, allowing future models to be assessed not only on their maximum F1 score but also on their ability to generalize effectively.",0 "Precise elevation perception in binaural audio remains a challenge, despite extensive research on head-related transfer functions (HRTFs) and spectral cues. While prior studies have advanced our understanding of sound localization cues, the interplay between spectral features and elevation perception is still not fully understood. 
This paper presents a comprehensive analysis of over 600 subjects from 11 diverse public HRTF datasets, employing a convolutional neural network (CNN) model combined with explainable artificial intelligence (XAI) techniques to investigate elevation cues. In addition to testing various HRTF pre-processing methods, we focus on both within-dataset and inter-dataset generalization and explainability, assessing the model's robustness across different HRTF variations stemming from subjects and measurement setups. By leveraging class activation mapping (CAM) saliency maps, we identify key frequency bands that may contribute to elevation perception, providing deeper insights into the spectral features that drive elevation-specific classification. This study offers new perspectives on HRTF modeling and elevation perception by analyzing diverse datasets and pre-processing techniques, expanding our understanding of these cues across a wide range of conditions.",0 "Alzheimer's disease, a neurodegenerative disorder, is associated with neural, genetic, and proteomic factors while affecting multiple cognitive and behavioral faculties. Traditional AD prediction largely focuses on univariate disease outcomes, such as disease stages and severity. Multimodal data encode broader disease information than a single modality and may, therefore, improve disease prediction; but they often contain missing values. Recent ""deeper"" machine learning approaches show promise in improving prediction accuracy, yet the biological relevance of these models needs to be further charted. Integrating missing data analysis, predictive modeling, multimodal data analysis, and explainable AI, we propose OPTIMUS, a predictive, modular, and explainable machine learning framework, to unveil the many-to-many predictive pathways between multimodal input data and multivariate disease outcomes amidst missing values. OPTIMUS first applies modality-specific imputation to uncover data from each modality while optimizing overall prediction accuracy. It then maps multimodal biomarkers to multivariate outcomes using machine-learning and extracts biomarkers respectively predictive of each outcome. Finally, OPTIMUS incorporates XAI to explain the identified multimodal biomarkers. Using data from 346 cognitively normal subjects, 608 persons with mild cognitive impairment, and 251 AD patients, OPTIMUS identifies neural and transcriptomic signatures that jointly but differentially predict multivariate outcomes related to executive function, language, memory, and visuospatial function. Our work demonstrates the potential of building a predictive and biologically explainable machine-learning framework to uncover multimodal biomarkers that capture disease profiles across varying cognitive landscapes. The results improve our understanding of the complex many-to-many pathways in AD.",2 "The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. 
By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses based on the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2-step-concept-based-skin-diagnosis.",0 "The human capacity for working together and with tools builds on cognitive abilities that, while not unique to humans, are most developed in humans both in scale and plasticity. Our capacity to engage with collaborators and with technology requires a continuous expenditure of attentive work that we show may be understood in terms of what is heuristically argued as `trust' in socio-economic fields. By adopting a `social physics' of information approach, we are able to bring dimensional analysis to bear on an anthropological-economic issue. The cognitive-economic trade-off between group size and rate of attention to detail is the connection between these. This allows humans to scale cooperative effort across groups, from teams to communities, with a trade-off between group size and attention. We show here that an accurate concept of trust follows a bipartite `economy of work' model, and that this leads to correct predictions about the statistical distribution of group sizes in society. Trust is essentially a cognitive-economic issue that depends on the memory cost of past behaviour and on the frequency of attentive policing of intent. All this leads to the characteristic `fractal' structure for human communities. The balance between attraction to some alpha attractor and dispersion due to conflict fully explains data from all relevant sources. The implications of our method suggest a broad applicability beyond purely social groupings to general resource-constrained interactions, e.g. in work, technology, cybernetics, and generalized socio-economic systems of all kinds.",0 "Bias from contrast injection variability is a significant obstacle to accurate intracranial aneurysm occlusion prediction using quantitative angiography and deep neural networks. This study explores bias removal and explainable AI for outcome prediction, using angiograms from 458 patients with intracranial aneurysms (IAs) treated with flow diverters and six-month follow-up defining occlusion status. We minimized injection variability by deconvolving the parent artery input to isolate the impulse response of aneurysms, then reconvolving it with a standardized injection curve. A deep neural network trained on these QA-derived biomarkers predicted six-month occlusion. Local Interpretable Model-Agnostic Explanations identified the key imaging features influencing the model, ensuring transparency and clinical relevance.",2 "This late-breaking work presents a large-scale analysis of explainable AI (XAI) literature to evaluate claims of human explainability. We collaborated with a professional librarian to identify 18,254 papers containing keywords related to explainability and interpretability. 
Of these, we find that only 253 papers included terms suggesting human involvement in evaluating an XAI technique, and just 128 of those conducted some form of a human study. In other words, fewer than 1% of XAI papers (0.7%) provide empirical evidence of human explainability when compared to the broader body of XAI literature. Our findings underscore a critical gap between claims of human explainability and evidence-based validation, raising concerns about the rigor of XAI research. We call for increased emphasis on human evaluations in XAI studies and provide our literature search methodology to enable both reproducibility and further investigation into this widespread issue.",0 "In this study, we examined whether a short-form AI literacy intervention could reduce the adoption of incorrect recommendations from large language models. High school seniors were randomly assigned to either a control or an intervention group, which received an educational text explaining ChatGPT's working mechanism, limitations, and proper use. Participants solved math puzzles with the help of ChatGPT's recommendations, which were incorrect in half of the cases. Results showed that students adopted incorrect suggestions 52.1% of the time, indicating widespread over-reliance. The educational intervention did not significantly reduce over-reliance. Instead, it led to an increase in ignoring ChatGPT's correct recommendations. We conclude that the usage of ChatGPT is associated with over-reliance and that it is not trivial to increase AI literacy to counter over-reliance.",2 "Classifying images with an interpretable decision-making process is a long-standing problem in computer vision. In recent years, Prototypical Part Networks have gained traction as an approach for self-explainable neural networks, due to their ability to mimic human visual reasoning by providing explanations based on prototypical object parts. However, the quality of the explanations generated by these methods leaves room for improvement, as the prototypes usually focus on repetitive and redundant concepts. Leveraging recent advances in prototype learning, we present a framework for part-based interpretable image classification that learns a set of semantically distinctive object parts for each class, and provides diverse and comprehensive explanations. The core of our method is to learn the part-prototypes in a non-parametric fashion, through clustering deep features extracted from foundation vision models that encode robust semantic information. To quantitatively evaluate the quality of explanations provided by ProtoPNets, we introduce Distinctiveness Score and Comprehensiveness Score. Through evaluation on CUB-200-2011, Stanford Cars and Stanford Dogs datasets, we show that our framework compares favourably against existing ProtoPNets while achieving better interpretability. Code is available at: https://github.com/zijizhu/proto-non-param.",0 "Active matter spans a wide range of time and length scales, from groups of cells and synthetic self-propelled particles to schools of fish, flocks of birds, or even human crowds. The theoretical framework describing these systems has shown tremendous success at finding universal phenomenology. However, further progress is often burdened by the difficulty of determining the forces that control the dynamics of the individual elements within each system. 
Accessing this local information is key to understanding the physics dominating the system and to create the models that can explain the observed collective phenomena. In this work, we present a machine-learning model, a graph neural network, that uses the collective movement of the system to learn the active and two-body forces controlling the individual dynamics of the particles. We verify our approach using numerical simulations of active brownian particles, considering different interaction potentials and levels of activity. Finally, we apply our model to experiments of electrophoretic Janus particles, extracting the active and two-body forces that control the dynamics of the colloids. Due to this, we can uncover the physics dominating the behavior of the system. We extract an active force that depends on the electric field and also area fraction. We also discover a dependence of the two-body interaction with the electric field that leads us to propose that the dominant force between these colloids is a screened electrostatic interaction with a constant length scale. We expect that this methodology can open a new avenue for the study and modeling of experimental systems of active particles.",0 "Time-dependent Stark-Zeeman systems describe the motion of an electron attracted by a proton subject to a magnetic and a time-dependent electric field. For instance the study of the dynamics of a gateway around the moon which is subject to the joint attraction of the moon, the earth and the sun leads to time-dependent Stark-Zeeman systems. In the time-dependent case there is no preserved energy. Therefore collisions cannot be regularized by blowing up the energy hypersurface. A new regularization technique of blowing up instead of the energy hypersurface the loop space was recently discovered by Barutello, Ortega, and Verzini. In this article we explain how this new regularization technique can be applied to the study of periodic orbits in time-dependent planar Stark-Zeeman systems. Since the regularization by blowing-up the loop space is nonlocal the regularized periodic orbits will not satisfy an ODE anymore but a delay equation.",0 "We introduce AutoPersuade, a three-part framework for constructing persuasive messages. First, we curate a large dataset of arguments with human evaluations. Next, we develop a novel topic model to identify argument features that influence persuasiveness. Finally, we use this model to predict the effectiveness of new arguments and assess the causal impact of different components to provide explanations. We validate AutoPersuade through an experimental study on arguments for veganism, demonstrating its effectiveness with human studies and out-of-sample predictions.",0 "In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole. In such a paradigm, humans are found to rarely trigger analytical thinking and face difficulties in communicating the nuances of conflicting opinions to the AI when disagreements occur. To tackle this challenge, we propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making. Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates. 
To empower AI with deliberative capabilities, we designed Deliberative AI, which leverages large language models (LLMs) as a bridge between humans and domain-specific models to enable flexible conversational interactions and faithful information provision. An exploratory evaluation on a graduate admissions task shows that Deliberative AI outperforms conventional explainable AI (XAI) assistants in improving humans' appropriate reliance and task performance. Based on a mixed-methods analysis of participant behavior, perception, user experience, and open-ended feedback, we draw implications for future AI-assisted decision tool design.",2 "Previous studies have found that PLM-based retrieval models exhibit a preference for LLM-generated content, assigning higher relevance scores to these documents even when their semantic quality is comparable to human-written ones. This phenomenon, known as source bias, threatens the sustainable development of the information access ecosystem. However, the underlying causes of source bias remain unexplored. In this paper, we explain the process of information retrieval with a causal graph and discover that PLM-based retrievers learn perplexity features for relevance estimation, causing source bias by ranking the documents with low perplexity higher. Theoretical analysis further reveals that the phenomenon stems from the positive correlation between the gradients of the loss functions in language modeling task and retrieval task. Based on the analysis, a causal-inspired inference-time debiasing method is proposed, called Causal Diagnosis and Correction (CDC). CDC first diagnoses the bias effect of the perplexity and then separates the bias effect from the overall estimated relevance score. Experimental results across three domains demonstrate the superior debiasing effectiveness of CDC, emphasizing the validity of our proposed explanatory framework. Source codes are available at https://github.com/WhyDwelledOnAi/Perplexity-Trap.",0 "Modern telescopes provide breathtaking images of nebulae, clouds and galaxies shaped by gravity-driven interactions between complex bodies. While such structures are prevalent on an astrophysical scale, they are rarely observed at the human scale. In this letter, we report the observations of the complex orbits, collision, and coalescence of droplets on a soap film, forming structures such as bridges and spiral arms, reminiscent of their astrophysical counterparts. These dynamics emerge from attractive forces caused by gravito-capillary-driven distortions of the supporting soap film. Long orbits and intricate coalescence mechanisms are enabled by the small dissipation in the soap film and the fluidic nature of the droplets and supporting film, respectively. The existence of stable droplets within the soap film featuring a universal radius, as well as the attractive potentials, are explained through a careful comparison of experimental data with models computing the distortions of the supporting soap film. This work opens perspectives to",0 "Explainable AI is a strong strategy implemented to understand complex black-box model predictions in a human interpretable language. It provides the evidence required to execute the use of trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. 
Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and not practical, as they can be easily detected by humans, e.g., changing the word from ""Poor"" to ""Rich"". We propose a simple yet effective Grey-box cum Black-box approach that does not require knowledge of the model, while using a set of surrogate Transformer/BERT models to perform the attack using Explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated from BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking a few surrogate transformer variants which are all based on different architectures. We demonstrate that this approach is highly effective at generating semantically good sentences by changing as little as one word, in a way that is not detectable by humans, while still fooling other BERT models.",0 """What cannot be measured cannot be improved"", while likely never uttered by Lord Kelvin, effectively summarizes the purpose of this work. This paper presents a detailed evaluation of automated metrics for evaluating structured 3D reconstructions. Pitfalls of each metric are discussed, and a thorough analysis through the lens of expert 3D modelers' preferences is presented. A set of systematic ""unit tests"" is proposed to empirically verify desirable properties, and context-aware recommendations as to which metric to use depending on the application are provided. Finally, a learned metric distilled from human expert judgments is proposed and analyzed.",0 "Electrocardiogram (ECG) signals are widely shared across multiple clinical applications for diagnosis, health monitoring, and biometric authentication. While valuable for healthcare, they also carry unique biometric identifiers that pose privacy risks, especially when ECG data is shared across multiple entities. These risks are amplified in shared environments, where re-identification threats can compromise patient privacy. Existing deep learning re-identification models prioritize accuracy but lack explainability, making it challenging to understand how the unique biometric characteristics encoded within ECG signals are recognized and utilized for identification. Without these insights, despite high accuracy, developing secure and trustable ECG data-sharing frameworks remains difficult, especially in diverse, multi-source environments. In this work, we introduce TransECG, a Vision Transformer (ViT)-based method that uses attention mechanisms to pinpoint critical ECG segments associated with re-identification tasks like gender, age, and participant ID. Our approach demonstrates high accuracy (89.9% for gender, 89.9% for age, and 88.6% for ID re-identification) across four real-world datasets with 87 participants. Importantly, we provide key insights into ECG components such as the R-wave, QRS complex, and P-Q interval in re-identification. For example, in the gender classification, the R wave contributed 58.29% to the model's attention, while in the age classification, the P-R interval contributed 46.29%. 
By combining high predictive performance with enhanced explainability, TransECG provides a robust solution for privacy-conscious ECG data sharing, supporting the development of a secure and trusted healthcare data environment.",2 "In this report, we introduce our first-generation reasoning model, LexPro-1.0, a large language model designed for the highly specialized Chinese legal domain, offering comprehensive capabilities to meet diverse realistic needs. Existing legal LLMs face two primary challenges. Firstly, their design and evaluation are predominantly driven by computer science perspectives, leading to insufficient incorporation of legal expertise and logic, which is crucial for high-precision legal applications, such as handling complex prosecutorial tasks. Secondly, these models often underperform due to a lack of comprehensive training data from the legal domain, limiting their ability to effectively address real-world legal scenarios. To address this, we first compile millions of legal documents covering over 20 types of crimes from 31 provinces in China for model training. From this extensive dataset, we further select high-quality data for supervised fine-tuning, ensuring enhanced relevance and precision. The model further undergoes large-scale reinforcement learning without additional supervision, emphasizing the enhancement of its reasoning capabilities and explainability. To validate its effectiveness in complex legal applications, we also conduct human evaluations with legal experts. We develop fine-tuned models based on DeepSeek-R1-Distilled versions, available in three dense configurations: 14B, 32B, and 70B.",0 "This paper provides an explanation of NTRU, a post-quantum encryption scheme, while also providing a gentle introduction to cryptography. NTRU is a very efficient lattice-based cryptosystem that appears to be safe against attacks by quantum computers. NTRU's efficiency suggests that it is a strong candidate as an alternative to RSA, ElGamal, and ECC for the post-quantum world. The paper begins with an introduction to cryptography and security proofs for cryptographic schemes before explaining the NTRU cryptosystem and culminating with a proof that the original presentation of NTRU is not IND-CPA secure. We will conclude by mentioning padding schemes for NTRU that are provably IND-CCA2 secure in the random oracle model. The paper is designed to be accessible to anyone with minimal background in abstract algebra and number theory - no previous knowledge of cryptography is assumed. Given the author's lack of familiarity with the subject, this paper aims to be an expository work rather than to provide new insights into the subject matter.",0 "Explainable AI is a crucial component for edge services, as it ensures reliable decision making based on complex AI models. Surrogate models are a prominent approach in XAI where human-interpretable models, such as a linear regression model, are trained to approximate a complex (black-box) model's predictions. This paper delves into the balance between the predictive accuracy of complex AI models and their approximation by surrogate ones, advocating that both these models benefit from being learned simultaneously. We derive a joint (bi-level) training scheme for both models and we introduce a new algorithm based on multi-objective optimization (MOO) to simultaneously minimize both the complex model's prediction error and the error between its outputs and those of the surrogate. 
Our approach leads to improvements that exceed 99% in the approximation of the black-box model through the surrogate one, as measured by the metric of Fidelity, for a compromise of less than 3% absolute reduction in the black-box model's predictive accuracy, compared to single-task and multi-task learning baselines. By improving Fidelity, we can derive more trustworthy explanations of the complex model's outcomes from the surrogate, enabling reliable AI applications for intelligent services at the network edge.",0 "Relational video customization refers to the creation of personalized videos that depict user-specified relations between two subjects, a crucial task for comprehending real-world visual content. While existing methods can personalize subject appearances and motions, they still struggle with complex relational video customization, where precise relational modeling and high generalization across subject categories are essential. The primary challenge arises from the intricate spatial arrangements, layout variations, and nuanced temporal dynamics inherent in relations; consequently, current models tend to overemphasize irrelevant visual details rather than capturing meaningful interactions. To address these challenges, we propose DreamRelation, a novel approach that personalizes relations through a small set of exemplar videos, leveraging two key components: Relational Decoupling Learning and Relational Dynamics Enhancement. First, in Relational Decoupling Learning, we disentangle relations from subject appearances using relation LoRA triplet and hybrid mask training strategy, ensuring better generalization across diverse relationships. Furthermore, we determine the optimal design of relation LoRA triplet by analyzing the distinct roles of the query, key, and value features within MM-DiT's attention mechanism, making DreamRelation the first relational video generation framework with explainable components. Second, in Relational Dynamics Enhancement, we introduce space-time relational contrastive loss, which prioritizes relational dynamics while minimizing the reliance on detailed subject appearances. Extensive experiments demonstrate that DreamRelation outperforms state-of-the-art methods in relational video customization. Code and models will be made publicly available.",0 "The Oaxaca-Blinder decomposition is a widely used method to explain social disparities. However, assigning causal meaning to its estimated components requires strong assumptions that often lack explicit justification. This article emphasizes the importance of clearly defined estimands and their identification when targeting mediating mechanisms of social disparities. Three approaches are distinguished based on their scientific questions and assumptions: a mediation approach and two interventional approaches. The Oaxaca-Blinder decomposition and Monte Carlo simulation-based g-computation are discussed for estimation in relation to these approaches. The latter method is used in an interventional effects analysis of the observed gender pay gap in Western Germany, using data from the 2017 German Socio-Economic Panel. Ten mediators, including indicators of human capital and job characteristics, are considered. Key findings indicate that the gender pay gap in log hourly wages could be reduced by up to 86% if these mediators were equally distributed between women and men. 
Substantial reductions could be achieved by aligning full-time employment and work experience.",0 "Autoencoders based on Graph Neural Networks (GNNs) have garnered significant attention in recent years for their ability to extract informative latent representations, characterizing the structure of complex topologies, such as graphs. Despite the prevalence of Graph Autoencoders, there has been limited focus on developing and evaluating explainable neural-based graph generative models specifically designed for signed networks. To address this gap, we propose the Signed Graph Archetypal Autoencoder (SGAAE) framework. SGAAE extracts node-level representations that express node memberships over distinct extreme profiles, referred to as archetypes, within the network. This is achieved by projecting the graph onto a learned polytope, which governs its polarization. The framework employs a recently proposed likelihood for analyzing signed networks based on the Skellam distribution, combined with relational archetypal analysis and GNNs. Our experimental evaluation demonstrates the SGAAEs' capability to successfully infer node memberships over the different underlying latent structures while extracting competing communities formed through the participation of the opposing views in the network. Additionally, we introduce the 2-level network polarization problem and show how SGAAE is able to characterize such a setting. The proposed model achieves high performance in different tasks of signed link prediction across four real-world datasets, outperforming several baseline models.",0 "Semantic-based test generators are widely used to produce failure-inducing inputs for Deep Learning (DL) systems. They typically generate challenging test inputs by applying random perturbations to input semantic concepts until a failure is found or a timeout is reached. However, such randomness may hinder them from efficiently achieving their goal. This paper proposes XMutant, a technique that leverages explainable artificial intelligence (XAI) techniques to generate challenging test inputs. XMutant uses the local explanation of the input to inform the fuzz testing process and effectively guide it toward failures of the DL system under test. We evaluated different configurations of XMutant in triggering failures for different DL systems both for model-level (sentiment analysis, digit recognition) and system-level testing (advanced driving assistance). Our studies showed that XMutant enables more effective and efficient test generation by focusing on the most impactful parts of the input. XMutant generates up to 125% more failure-inducing inputs compared to an existing baseline, up to 7X faster. We also assessed the validity of these inputs, maintaining a validation rate above 89%, according to automated and human validators.",0 "In the digital era, the prevalence of depressive symptoms expressed on social media has raised serious concerns, necessitating advanced methodologies for timely detection. This paper addresses the challenge of interpretable depression detection by proposing a novel methodology that effectively combines Large Language Models (LLMs) with eXplainable Artificial Intelligence (XAI) and conversational agents like ChatGPT. In our methodology, explanations are achieved by integrating BERTweet, a Twitter-specific variant of BERT, into a novel self-explanatory model, namely BERT-XDD, capable of providing both classification and explanations via masked attention. 
The interpretability is further enhanced using ChatGPT to transform technical explanations into human-readable commentaries. By introducing an effective and modular approach for interpretable depression detection, our methodology can contribute to the development of socially responsible digital platforms, fostering early intervention and support for mental health challenges under the guidance of qualified healthcare professionals.",0 "Arabinoxylans are constituents of wheat flour that contribute to the dietary fiber properties of wheat. They exist in water-extractable and water-unextractable forms and contribute to human health. In bakery technology, especially the water-extractable arabinoxylans (WE-AX) are important due to their impact on viscosity and dough rheology. This study provides insights into the impact of wheat flour fermentation on WE-AX during sourdough production, offering potential applications for improving sourdough bread quality and its health benefits. The production of sourdoughs is known to increase the WE-AX fraction, yet the underlying (bio)chemical mechanisms remain unclear. This study investigated the alteration of WE-AX during the fermentation of wheat flour for sourdough production using 1H Diffusion Ordered SpectroscopY (DOSY) Nuclear Magnetic Resonance (NMR) at elevated temperature to analyze the structural changes of WE-AX during wheat flour fermentation for sourdough production with different lactic acid bacteria (LAB) strains. The results confirmed that DOSY NMR at elevated temperatures greatly improved the applicability of the method for analyzing larger biomolecules. Overall, a size reduction of the WE-AX compounds with increasing fermentation time was found. This was indicated both by the occurrence of higher self-diffusion coefficients, and increased transverse relaxation times. Further research is necessary to explain deviations from the general trend.",0 "The ability of Natural Language Processing (NLP) methods to categorize text into multiple classes has motivated their use in online content moderation tasks, such as hate speech and fake news detection. However, there is limited understanding of how or why these methods make such decisions, or why certain content is moderated in the first place. To investigate the hidden mechanisms behind content moderation, we explore multiple directions: 1) training classifiers to reverse-engineer content moderation decisions across countries; 2) explaining content moderation decisions by analyzing Shapley values and LLM-guided explanations. Our primary focus is on content moderation decisions made across countries, using pre-existing corpora sampled from the Twitter Stream Grab. Our experiments reveal interesting patterns in censored posts, both across countries and over time. Through human evaluations of LLM-generated explanations across three LLMs, we assess the effectiveness of using LLMs in content moderation. Finally, we discuss potential future directions, as well as the limitations and ethical considerations of this work. Our code and data are available at https://github.com/causalNLP/censorship",0 "Advancements in intelligent technologies have significantly improved navigation in complex traffic environments by enhancing environment perception and trajectory prediction for automated vehicles. However, current research often overlooks the joint reasoning of scenario agents and lacks explainability in trajectory prediction models, limiting their practical use in real-world situations. 
To address this, we introduce the Explainable Conditional Diffusion-based Multimodal Trajectory Prediction (DMTP) model, which is designed to elucidate the environmental factors influencing predictions and reveal the underlying mechanisms. Our model integrates a modified conditional diffusion approach to capture multimodal trajectory patterns and employs a revised Shapley Value model to assess the significance of global and scenario-specific features. Experiments using the Waymo Open Motion Dataset demonstrate that our explainable model excels in identifying critical inputs and significantly outperforms baseline models in accuracy. Moreover, the factors identified align with the human driving experience, underscoring the model's effectiveness in learning accurate predictions. Code is available in our open-source repository: https://github.com/ocean-luna/Explainable-Prediction.",0 "Interaction between humans and AI systems raises the question of how people understand AI systems. This has been addressed with explainable AI, the interpretability arising from users' domain expertise, or collaborating with AI in a stable environment. In the absence of these elements, we discuss designing Actionable AI, which allows non-experts to configure black-box agents. In this paper, we experiment with an AI-powered cartpole game and observe 22 pairs of participants as they configure it via direct manipulation. Our findings suggest that, in uncertain conditions, non-experts were able to achieve good levels of performance. By influencing the behaviour of the agent, they exhibited an operational understanding of it, which proved sufficient to reach their goals. Based on this, we derive implications for designing Actionable AI systems. In conclusion, we propose Actionable AI as a way to open access to AI-based agents, giving end users the agency to influence such agents towards their own goals.",2 "Uncontrolled cell division in the brain is what gives rise to brain tumors. If the tumor size increases by more than half, there is little hope for the patient's recovery. This emphasizes the need for rapid and precise brain tumor diagnosis. When it comes to analyzing, diagnosing, and planning therapy for brain tumors, MRI imaging plays a crucial role. A brain tumor's development history is crucial information for doctors to have. When it comes to distinguishing between human soft tissues, MRI scans are superior. In order to get reliable classification results from MRI scans quickly, deep learning is one of the most practical methods. Early human illness diagnosis has been demonstrated to be more accurate when deep learning methods are used. In the case of diagnosing a brain tumor, when even a little misdiagnosis might have serious consequences, accuracy is especially important. Disclosure of brain tumors in medical images is still a difficult task. Brain MRIs are notoriously imprecise in revealing the presence or absence of tumors. Using MRI scans of the brain, a CNN was trained to identify the presence of a tumor in this research. Results from the CNN model showed an accuracy of 99.17%. The CNN model's characteristics were also retrieved, and we localized the tumor regions from the unannotated images using GradCAM, a deep learning explainability tool. In order to evaluate the CNN model's capability for processing images, we applied the features to different ML models. 
CNN and machine learning models were also evaluated using the standard metrics of Precision, Recall, Specificity, and F1 score. The significance of the doctor's diagnosis enhanced the accuracy of the CNN model's assistance in identifying the existence of a tumor and treating the patient.",2 "We paraphrase Descartes' famous dictum in the area of AI ethics, where the ""I doubt and therefore I am"" is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which include the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. The foundation of our argument is the discipline of ethics, one of the oldest and largest knowledge projects of human history, yet we seem only to be beginning to get a grasp of it. After a couple of thousand years of studying the ethics of humans, we (humans) arrived at a point where moral psychology suggests that our moral decisions are intuitive, and all the models from ethics become relevant only when we explain ourselves. This recognition has a major impact on what we can do regarding AI ethics, and how. We do not offer a solution; we explore some ideas and leave the problem open, but, we hope, somewhat better understood than before our study.",0 "We study a nonlinear magnetic metamaterial modeled as a split-ring resonator array, where the standard discrete Laplacian is replaced by its fractional form. We find a closed-form expression for the dispersion relation as a function of the fractional exponent s and the gain/loss parameter {\gamma}, and examine the conditions under which stable magneto-inductive waves exist. The density of states is computed in closed form and suggests that the main effect of fractionality is the flattening of the bands, while increasing gain/loss tends to reduce the bandgaps. The spatial extent of the modes for a finite array is computed by means of the participation ratio R, which is also obtained in closed form. For a fixed fractionality exponent, an increase in gain/loss {\gamma} decreases the overall R, from the number of sites N towards N/2 at large {\gamma}. The nonlinear dynamics of the average magnetic energy on an initial ring during a cycle shows a monotonic increase with {\gamma}, and it is qualitatively similar for all fractional exponents. This is explained as mainly due to the interplay of nonlinearity and PT symmetry.",0 "Visual prompt tuning offers significant advantages for adapting pre-trained visual foundation models to specific tasks. However, current research provides limited insight into the interpretability of this approach, which is essential for enhancing AI reliability and enabling AI-driven knowledge discovery. In this paper, rather than learning abstract prompt embeddings, we propose the first framework, named Interpretable Visual Prompt Tuning (IVPT), to explore interpretability for visual prompts, by introducing hierarchical concept prototypes. Specifically, visual prompts are linked to human-understandable semantic concepts, represented as a set of category-agnostic prototypes, each corresponding to a specific region of the image. Then, IVPT aggregates features from these regions to generate interpretable prompts, which are structured hierarchically to explain visual prompts at different granularities. 
Comprehensive qualitative and quantitative evaluations on fine-grained classification benchmarks show its superior interpretability and performance over conventional visual prompt tuning methods and existing interpretable methods.",0 "Explainable Recommender Systems is an important field of study which provides reasons behind the suggested recommendations. Explanations with recommender systems are useful for developers while debugging anomalies within the system and for consumers while interpreting the model's effectiveness in capturing their true preferences towards items. However, most of the existing state-of-the-art (SOTA) explainable recommenders cannot retain their explanation capability under noisy circumstances and, moreover, are not generalizable across different datasets. The robustness of the explanations must be ensured so that malicious attackers cannot manipulate high-stakes decision scenarios to their advantage, which could cause severe consequences affecting large groups of interest. In this work, we present a general framework for feature-aware explainable recommenders that can withstand external attacks and provide robust and generalized explanations. This paper presents a novel framework which could be utilized as an additional defense tool, preserving the global explainability when subject to model-based white-box attacks. Our framework is simple to implement and supports different methods regardless of the internal model structure and intrinsic utility within any model. We experimented with our framework on two architecturally different feature-based SOTA explainable algorithms by training them on three popular e-commerce datasets of increasing scales. We observed that both algorithms displayed an overall improvement in the quality and robustness of the global explainability under normal as well as noisy environments across all the datasets, indicating the flexibility and mutability of our framework.",0 "The field of emergent language represents a novel area of research within the domain of artificial intelligence, particularly within the context of multi-agent reinforcement learning. Although the concept of studying language emergence is not new, early approaches were primarily concerned with explaining human language formation, with little consideration given to its potential utility for artificial agents. In contrast, studies based on reinforcement learning aim to develop communicative capabilities in agents that are comparable to or even superior to human language. Thus, they extend beyond the learned statistical representations that are common in natural language processing research. This gives rise to a number of fundamental questions, from the prerequisites for language emergence to the criteria for measuring its success. This paper addresses these questions by providing a comprehensive review of 181 scientific publications on emergent language in artificial intelligence. Its objective is to serve as a reference for researchers interested in or proficient in the field. Consequently, the main contributions are the definition and overview of the prevailing terminology, the analysis of existing evaluation methods and metrics, and the description of the identified research gaps.",0 "Cooperation underlies many aspects of the evolution of human and animal societies, where cooperators produce social goods to benefit others. Explaining the emergence of cooperation among selfish individuals has become a major research interest in evolutionary dynamics. 
Previous studies typically use complex networks to capture the interactions between individuals, and assume that cooperators distribute benefits equally to their neighbors. In practice, the distribution of social goods is often non-uniform, and individuals may selectively provide benefits to those they interact with based on their personal preferences. Here, we develop an efficient algorithm to optimize the placement of donation structure in any given network to minimize the threshold for the emergence of cooperation. We find when cooperators allocate the benefits preferentially compared to the traditional settings of donating to all neighbors, cooperation tends to be maximally promoted. Furthermore, the optimal donation structure is strongly disassortative -- the low-degree nodes tend to donate to high-degree ones preferentially and vice versa. Based on this finding, we offer a local heuristic strategy based on degree thresholds for personalizing the allocation of social goods and choosing each cooperator's recipient, which we use to prove its effectiveness in empirical datasets. Our findings advance the understanding of mechanisms for promoting cooperation with strategic allocations of social goods.",0 "We consider the problem of preprocessing an $n\times n$ matrix M, and supporting queries that, for any vector v, returns the matrix-vector product Mv. This problem has been extensively studied in both theory and practice: on one side, practitioners have developed algorithms that are highly efficient in practice, whereas theoreticians have proven that the problem cannot be solved faster than naive multiplication in the worst-case. This lower bound holds even in the average-case, implying that existing average-case analyses cannot explain this gap between theory and practice. Therefore, we study the problem for structured matrices. We show that for $n\times n$ matrices of VC-dimension d, the matrix-vector multiplication problem can be solved with $\tilde{O}(n^2)$ preprocessing and $\tilde O(n^{2-1/d})$ query time. Given the low constant VC-dimensions observed in most real-world data, our results posit an explanation for why the problem can be solved so much faster in practice. Moreover, our bounds hold even if the matrix does not have a low VC-dimension, but is obtained by (possibly adversarially) corrupting at most a subquadratic number of entries of any unknown low VC-dimension matrix. Our results yield the first non-trivial upper bounds for many applications. In previous works, the online matrix-vector hypothesis (conjecturing that quadratic time is needed per query) was used to prove many conditional lower bounds, showing that it is impossible to compute and maintain high-accuracy estimates for shortest paths, Laplacian solvers, effective resistance, and triangle detection in graphs subject to node insertions and deletions in subquadratic time. Yet, via a reduction to our matrix-vector-multiplication result, we show we can maintain the aforementioned problems efficiently if the input is structured, providing the first subquadratic upper bounds in the high-accuracy regime.",0 "In this work, we explain our approach employed in the BabyLM Challenge, which uses various methods of training language models (LMs) with significantly less data compared to traditional large language models (LLMs) and are inspired by how human children learn. While a human child is exposed to far less linguistic input than an LLM, they still achieve remarkable language understanding and generation abilities. 
To this end, we develop a model trained on a curated dataset consisting of 10 million words, primarily sourced from child-directed transcripts. The 2024 BabyLM Challenge initial dataset of 10M words is filtered to 8.5M. Next, it is supplemented with a randomly selected subset of the TVR dataset consisting of 1.5M words of television dialogues. The latter dataset ensures that, similar to children, the model is also exposed to language through media. Furthermore, we reduce the vocabulary size to 32,000 tokens, aligning it with the limited vocabulary of children in the early stages of language acquisition. We use curriculum learning and are able to match the baseline on certain benchmarks while surpassing the baseline on others. Additionally, incorporating common LLM training datasets, such as MADLAD-400, degrades performance. These findings underscore the importance of dataset selection, vocabulary scaling, and curriculum learning in creating more data-efficient language models that better mimic human learning processes.",0 "In this paper I want to propose an argument to support Jerry Fodor's thesis (Fodor 1983) that input systems are modular and thus informationally encapsulated. The argument starts with the suggestion that there is a ""grounding problem"" in perception, i.e. that there is a problem in explaining how perception that can yield a visual experience is possible, how sensation can become meaningful perception of something for the subject. Given that visual experience is actually possible, this invites a transcendental argument that explains the conditions of its possibility. I propose that one of these conditions is the existence of a visual module in Fodor's sense that allows the step from sensation to object-identifying perception, thus enabling visual experience. It seems to follow that there is informationally encapsulated nonconceptual content in visual perception.",0 "Background. Career abandonment, the process in which professionals leave the activity and assume positions in another area, involves, among software developers, frustration with the lost investment and emotional and financial costs, even though it can be beneficial for the individual, depending on personal context. Previous studies have identified work-related motivators for career abandonment, such as the threat of obsolescence, unstable requirements, and low code quality, though these factors have primarily been examined in former developers. The relationship between these motivators and the intention to abandon among currently active developers remains unexplored. Goal. This article investigates the relationship between key work-related motivators and currently active software developers' intention to abandon their careers. Method. We employed a quantitative approach, surveying 221 software developers to validate a theoretical model for career abandonment intention, based on an adaptation of the Investment Model, which incorporates satisfaction with technical aspects of the profession as well as the intention to abandon. Findings. Exploratory and confirmatory factor analyses, through structural equation modeling (SEM), provided robust support for the adapted Investment Model in explaining software developers' intention to abandon their careers. Moreover, career commitment significantly impacts the intention to leave the profession, being positively influenced by satisfaction with technical work-related factors and negatively influenced by career alternatives and career investment. Conclusion. 
The paper offers valuable insights for organizational leaders and research, potentially guiding retention strategies to better support developers, and the adoption of theoretical models to explain career abandonment.",0 "Functional brain network properties are heavily influenced by how the network nodes are defined. A common approach uses Regions of Interest (ROIs), i.e., predetermined collections of functional magnetic resonance imaging (fMRI) measurement voxels, as nodes. Their definition is always a compromise, as static ROIs cannot capture the dynamics and temporal reconfigurations of the brain areas. Consequently, the ROIs do not align with the functionally homogeneous regions, which can explain the low functional homogeneity values observed for the ROIs. This is in violation of the underlying homogeneity assumption in functional brain network analysis pipelines, which can cause serious problems such as spurious network structure. We introduce the node-reconfiguring multilayer network model, where nodes represent ROIs with boundaries optimized for high functional homogeneity in each time window. In this representation, network layers correspond to time windows, intralayer links depict functional connectivity between ROIs, and interlayer links quantify the overlap between ROIs on different layers. The ROI optimization approach increases functional homogeneity notably, yielding an over 10-fold increase in the fraction of ROIs with high homogeneity compared to static ROIs from the Brainnetome atlas. The optimized ROIs reorganize non-trivially at short time scales of consecutive time windows and across several windows. The amount of reorganization across time windows is connected to intralayer hubness: ROIs with intermediate levels of reorganization have stronger intralayer links than extremely stable or unstable ROIs. Our results demonstrate that reconfiguring parcellations yield more accurate network models of brain function. This supports the ongoing paradigm shift towards the chronnectome that sees the brain as a set of sources with continuously reconfiguring spatial and connectivity profiles.",0 "While XAI focuses on providing AI explanations to humans, can the reverse - humans explaining their judgments to AI - foster richer, synergistic human-AI systems? This paper explores various forms of human inputs to AI and examines how human explanations can guide machine learning models toward automated judgments and explanations that align more closely with human concepts.",0 "The Casimir effect, explained by Hendrik Casimir in 1948, is a macroscopic manifestation of quantum electrodynamics. Symmetry breaking due to spatial confinement of vacuum fluctuations between two planar mirrors results in an attractive force acting between the two mirrors. Here, we show that spontaneous self-assembly of two-dimensional (2D) layered materials grown on molten metals (rheotaxy) is driven by the mechanical forces exerted by the liquid on the graphene domain and that these forces are subject to a Casimir-like effect. We present in situ environmental and ultrahigh vacuum scanning electron microscopy observations of chemical vapor deposition of graphene on both solid and molten gold and copper, which reveal that the self-assembly occurs via translational and rotational motions of graphene domains during growth on molten metals. 
Using high-temperature (~ 1300 K) atomic force microscopy measurements of graphene/molten-metal interfaces, coupled with density functional theory and continuum modelling of 2D layers floating on a liquid surface, we attribute the observed phenomena to a Casimir-like effect of surface undulations and give further guidance for achieving seamless stitching of 2D layer domains into large-scale monolayers.",0 "We study LLM judgments of misinformation expressed with uncertainty. Our experiments study the response of three widely used LLMs (GPT-4o, LlaMA3, DeepSeek-v2) to misinformation propositions that have been verified false and then are transformed into uncertain statements according to an uncertainty typology. Our results show that after transformation, LLMs change their fact-checking classification from false to not-false in 25% of the cases. Analysis reveals that the change cannot be explained by predictors to which humans are expected to be sensitive, i.e., modality, linguistic cues, or argumentation strategy. The exception is doxastic transformations, which use linguistic cue phrases such as ""It is believed ..."". To gain further insight, we prompt the LLM to make another judgment about the transformed misinformation statements that is not related to truth value. Specifically, we study LLM estimates of the frequency with which people make the uncertain statement. We find a small but significant correlation between judgment of fact and estimation of frequency.",0 "In today's data-driven era, computational systems generate vast amounts of data that drive the digital transformation of industries, where Artificial Intelligence (AI) plays a key role. Currently, the demand for eXplainable AI (XAI) has increased to enhance the interpretability, transparency, and trustworthiness of AI models. However, evaluating XAI methods remains challenging: existing evaluation frameworks typically focus on quantitative properties such as fidelity, consistency, and stability without taking into account qualitative characteristics such as satisfaction and interpretability. In addition, practitioners face a lack of guidance in selecting appropriate datasets, AI models, and XAI methods - a major hurdle in human-AI collaboration. To address these gaps, we propose a framework that integrates quantitative benchmarking with qualitative user assessments through virtual personas based on the ""Anthology"" of backstories of the Large Language Model (LLM). Our framework also incorporates a content-based recommender system that leverages dataset-specific characteristics to match new input data with a repository of benchmarked datasets. This yields an estimated XAI score and provides tailored recommendations for both the optimal AI model and the XAI method for a given scenario.",0 "Ensuring the security of critical infrastructure has become increasingly vital with the proliferation of Internet of Things (IoT) systems. However, the heterogeneous nature of IoT data and the lack of human-comprehensible insights from anomaly detection models remain significant challenges. This paper presents a hybrid framework that combines numerical anomaly detection using Autoencoders with Large Language Models (LLMs) for enhanced preprocessing and interpretability. Two preprocessing approaches are implemented: a traditional method utilizing Principal Component Analysis (PCA) to reduce dimensionality and an LLM-assisted method where GPT-4 dynamically recommends feature selection, transformation, and encoding strategies. 
Experimental results on the KDDCup99 10% corrected dataset demonstrate that the LLM-assisted preprocessing pipeline significantly improves anomaly detection performance. The macro-average F1 score increased from 0.49 in the traditional PCA-based approach to 0.98 with LLM-driven insights. Additionally, the LLM generates natural language explanations for detected anomalies, providing contextual insights into their causes and implications. This framework highlights the synergy between numerical AI models and LLMs, delivering an accurate, interpretable, and efficient solution for IoT cybersecurity in critical infrastructure.",0 "Concern has recently been expressed by HCI researchers as to the inappropriate treatment of qualitative studies through a positivistic mode of evaluation that places emphasis on metrics and measurement. This contrasts with the nature of qualitative research, which privileges interpretation and understanding over quantification. This paper explains the difference between positivism and interpretivism, the limits of quantification in human science, the distinctive contribution of qualitative research, and how quality assurance might be provided for in the absence of numbers via five basic criteria that reviewers may use to evaluate qualitative studies on their own terms.",0 "Security researchers grapple with the surge of malicious files, necessitating swift identification and classification of malware strains for effective protection. Visual classifiers and in particular Convolutional Neural Networks (CNNs) have emerged as vital tools for this task. However, issues of robustness and explainability, common in other high-risk domains like medicine and autonomous vehicles, remain understudied in current literature. Although deep learning visualization classifiers presented in research obtain great results without the need for expert feature extraction, they have not been properly studied in terms of their replicability. Additionally, the literature is not clear on how these types of classifiers arrive at their answers. Our study addresses these gaps by replicating six CNN models and exploring their pitfalls. We employ Class Activation Maps (CAMs), like GradCAM and HiResCAM, to assess model explainability. We evaluate the CNNs' performance and interpretability on two standard datasets, MalImg and Big2015, and a newly created dataset called VX-Zoo. We employ these different CAM techniques to gauge the explainability of each of the models. With these tools, we investigate the underlying factors contributing to different interpretations of inputs across the different models, empowering human researchers to discern patterns crucial for identifying distinct malware families and explain why CNN models arrive at their conclusions. Other than highlighting the patterns found in the interpretability study, we employ the extracted heatmaps to enhance Visual Transformer classifiers' performance and explanation quality. This approach yields substantial improvements in F1 score, ranging from 2% to 8%, across the datasets compared to benchmark values.",0 "One of the primary goals of Human-Robot Interaction (HRI) research is to develop robots that can interpret human behavior and adapt their responses accordingly. Adaptive learning models, such as continual and reinforcement learning, play a crucial role in improving robots' ability to interact effectively in real-world settings. 
However, these models face significant challenges due to the limited availability of real-world data, particularly in sensitive domains like healthcare and well-being. This data scarcity can hinder a robot's ability to adapt to new situations. To address these challenges, causality provides a structured framework for understanding and modeling the underlying relationships between actions, events, and outcomes. By moving beyond mere pattern recognition, causality enables robots to make more explainable and generalizable decisions. This paper presents an exploratory causality-based analysis through a case study of an adaptive robotic coach delivering positive psychology exercises over four weeks in a workplace setting. The robotic coach autonomously adapts to multimodal human behaviors, such as facial valence and speech duration. By conducting both macro- and micro-level causal analyses, this study aims to gain deeper insights into how adaptability can enhance well-being during interactions. Ultimately, this research seeks to advance our understanding of how causality can help overcome challenges in HRI, particularly in real-world applications.",0 "The language technology moonshot moment of Generative Large Language Models (GLLMs) was not limited to English: These models brought a surge of technological applications, investments, and hype to low-resource languages as well. However, the capabilities of these models in languages such as Danish were, until recently, difficult to verify beyond qualitative demonstrations due to a lack of applicable evaluation corpora. We present a GLLM benchmark to evaluate \emph{Danoliteracy}, a measure of Danish language and cultural competency across eight diverse scenarios such as Danish citizenship tests and abstractive social media question answering. This limited-size benchmark was found to produce a robust ranking that correlates to human feedback at $\rho \sim 0.8$ with GPT-4 and Claude Opus models achieving the highest rankings. Analyzing these model results across scenarios, we find one strong underlying factor explaining $95\%$ of scenario performance variance for GLLMs in Danish, suggesting a $g$ factor of model consistency in language adaptation.",0 "Recent great advances in video generation models have demonstrated their potential to produce high-quality videos, bringing challenges to effective evaluation. Unlike human evaluation, existing automated evaluation metrics lack high-level semantic understanding and reasoning capabilities for video, thus making them infeasible and unexplainable. To fill this gap, we curate GRADEO-Instruct, a multi-dimensional T2V evaluation instruction tuning dataset, including 3.3k videos from over 10 existing video generation models and multi-step reasoning assessments converted by 16k human annotations. We then introduce GRADEO, one of the first specifically designed video evaluation models, which grades AI-generated videos for explainable scores and assessments through multi-step reasoning. Experiments show that our method aligns better with human evaluations than existing methods. Furthermore, our benchmarking reveals that current video generation models struggle to produce content that aligns with human reasoning and complex real-world scenarios. The models, datasets, and codes will be released soon.",2 "We examine whether data generated by explanation techniques, which promote a process of self-reflection, can improve classifier performance. 
Our work is based on the idea that humans have the ability to make quick, intuitive decisions as well as to reflect on their own thinking and learn from explanations. To the best of our knowledge, this is the first time that the potential of mimicking this process by using explanations generated by explainability methods has been explored. We found that combining explanations with traditional labeled data leads to significant improvements in classification accuracy and training efficiency across multiple image classification datasets and convolutional neural network architectures. It is worth noting that during training, we not only used explanations for the correct or predicted class, but also for other classes. This serves multiple purposes, including allowing for reflection on potential outcomes and enriching the data through augmentation.",0 "The advent of internet medicine provides patients with unprecedented convenience in searching and communicating with doctors relevant to their diseases and desired treatments online. However, the current doctor recommendation systems fail to fully ensure the professionalism and interpretability of the recommended results. In this work, we formulate doctor recommendation as a ranking task and develop a large language model (LLM)-based pointwise ranking framework. Our framework ranks doctors according to their relevance regarding specific disease-treatment pairs in a zero-shot setting. The advantage of our framework lies in its ability to generate precise and explainable doctor ranking results. Additionally, we construct DrRank, a new expertise-driven doctor ranking dataset comprising over 38 disease-treatment pairs. Experimental results on the DrRank dataset demonstrate that our framework significantly outperforms the strongest cross-encoder baseline, achieving a notable gain of +5.45 in the NDCG@10 score while maintaining affordable latency. Furthermore, we comprehensively present the fairness analysis results of our framework from three perspectives of different diseases, patient gender, and geographical regions. Meanwhile, the interpretability of our framework is rigorously verified by three human experts, providing further evidence of the reliability of our proposed framework for doctor recommendation.",2 "The quality of fetal ultrasound screening scans directly influences the precision of biometric measurements. However, acquiring high-quality scans is labor-intensive and relies heavily on the operator's skills. Considering the low contrast and imaging artifacts that widely exist in ultrasound, even a dedicated deep-learning model can be vulnerable to learning from confounding information in the image. In this paper, we propose a holistic and explainable method for fetal ultrasound quality assessment, where we design a hierarchical concept bottleneck model by introducing human-readable ``concepts"" into the task and imitating the sequential expert decision-making process. This hierarchical information flow forces the model to learn concepts from semantically meaningful areas: The model first passes through a layer of visual, segmentation-based concepts, and then a second layer of property concepts directly associated with the decision-making task. We consider the quality assessment to be in a more challenging but more realistic setting, with fine-grained image recognition. 
Experiments show that our model outperforms equivalent concept-free models on an in-house dataset, and shows better generalizability on two public benchmarks, one from Spain and one from Africa, without any fine-tuning.",0 "This theoretical work examines 'hallucinations' in both human cognition and large language models, comparing how each system can produce perceptions or outputs that deviate from reality. Drawing on neuroscience and machine learning research, we highlight the predictive processes that underlie human and artificial thought. In humans, complex neural mechanisms interpret sensory information under uncertainty, sometimes filling in gaps and creating false perceptions. This inference occurs hierarchically: higher cortical levels send top-down predictions to lower-level regions, while mismatches (prediction errors) propagate upward to refine the model. LLMs, in contrast, rely on auto-regressive modeling of text and can generate erroneous statements in the absence of robust grounding. Despite these different foundations - biological versus computational - the similarities in their predictive architectures help explain why hallucinations occur. We propose that the propensity to generate incorrect or confabulated responses may be an inherent feature of advanced intelligence. In both humans and AI, adaptive predictive processes aim to make sense of incomplete information and anticipate future states, fostering creativity and flexibility, but also introducing the risk of errors. Our analysis illuminates how factors such as feedback, grounding, and error correction affect the likelihood of 'being wrong' in each system. We suggest that mitigating AI hallucinations (e.g., through improved training, post-processing, or knowledge-grounding methods) may also shed light on human cognitive processes, revealing how error-prone predictions can be harnessed for innovation without compromising reliability. By exploring these converging and divergent mechanisms, the paper underscores the broader implications for advancing both AI reliability and scientific understanding of human thought.",0 "Large language model-based explainable recommendation (LLM-based ER) systems show promise in generating human-like explanations for recommendations. However, they face challenges in modeling user-item collaborative preferences, personalizing explanations, and handling sparse user-item interactions. To address these issues, we propose GaVaMoE, a novel Gaussian-Variational Gated Mixture of Experts framework for explainable recommendation. GaVaMoE introduces two key components: (1) a rating reconstruction module that employs Variational Autoencoder (VAE) with a Gaussian Mixture Model (GMM) to capture complex user-item collaborative preferences, serving as a pre-trained multi-gating mechanism; and (2) a set of fine-grained expert models coupled with the multi-gating mechanism for generating highly personalized explanations. The VAE component models latent factors in user-item interactions, while the GMM clusters users with similar behaviors. Each cluster corresponds to a gate in the multi-gating mechanism, routing user-item pairs to appropriate expert models. This architecture enables GaVaMoE to generate tailored explanations for specific user types and preferences, mitigating data sparsity by leveraging user similarities. Extensive experiments on three real-world datasets demonstrate that GaVaMoE significantly outperforms existing methods in explanation quality, personalization, and consistency. 
Notably, GaVaMoE exhibits robust performance in scenarios with sparse user-item interactions, maintaining high-quality explanations even for users with limited historical data.",0 "Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence suggests that neural networks encode features in superposition, meaning that input concepts are linearly overlaid within the network's representations. We present a perspective that explains this phenomenon and provides a foundation for extracting interpretable representations from neural activations. Our theoretical framework consists of three steps: (1) Identifiability theory shows that neural networks trained for classification recover latent features up to a linear transformation. (2) Sparse coding methods can extract disentangled features from these representations by leveraging principles from compressed sensing. (3) Quantitative interpretability metrics provide a means to assess the success of these methods, ensuring that extracted features align with human-interpretable concepts. By bridging insights from theoretical neuroscience, representation learning, and interpretability research, we propose an emerging perspective on understanding neural representations in both artificial and biological systems. Our arguments have implications for neural coding theories, AI transparency, and the broader goal of making deep learning models more interpretable.",0 "Diffuse Reflectance Spectroscopy has demonstrated a strong aptitude for identifying and differentiating biological tissues. However, the broadband and smooth nature of these signals requires algorithmic processing, as they are often difficult for the human eye to distinguish. The implementation of machine learning models for this task has demonstrated high levels of diagnostic accuracy and led to a wide range of proposed methodologies for applications in various illnesses and conditions. In this systematic review, we summarise the state of the art of these applications, highlight current gaps in research and identify future directions. This review was conducted in accordance with the PRISMA guidelines. 77 studies were retrieved and an in-depth analysis was conducted. It is concluded that diffuse reflectance spectroscopy and machine learning have strong potential for tissue differentiation in clinical applications, but more rigorous sample stratification in tandem with in-vivo validation and explainable algorithm development is required going forward.",0 "From a first-principles perspective, it may seem odd that the strongest results in foundation model fine-tuning (FT) are achieved via a relatively complex, two-stage training procedure. Specifically, one first trains a reward model (RM) on some dataset (e.g. human preferences) before using it to provide online feedback as part of a downstream reinforcement learning (RL) procedure, rather than directly optimizing the policy parameters on the dataset via offline maximum likelihood estimation. In fact, from an information-theoretic perspective, we can only lose information via passing through a reward model and cannot create any new information via on-policy sampling. To explain this discrepancy, we scrutinize several hypotheses on the value of RL in FT through both theoretical and empirical lenses. 
Of the hypotheses considered, we find the most support for the explanation that, on problems with a generation-verification gap, the superior performance of online FT arises from a combination of two factors: the relatively simple RM (verifier) is easy to learn from the preference data, and the downstream RL procedure can then filter its search space to the subset of policies (generators) that are optimal for such relatively simple verifiers.",2 "Intelligent perception and interaction with the world hinge on internal representations that capture its underlying structure (''disentangled'' or ''abstract'' representations). Disentangled representations serve as world models, isolating latent factors of variation in the world along approximately orthogonal directions, thus facilitating feature-based generalization. We provide experimental and theoretical results guaranteeing the emergence of disentangled representations in agents that optimally solve multi-task evidence accumulation classification tasks, canonical in the neuroscience literature. The key conceptual finding is that, by producing accurate multi-task classification estimates, a system implicitly represents a set of coordinates specifying a disentangled representation of the underlying latent state of the data it receives. The theory provides conditions for the emergence of these representations in terms of noise, number of tasks, and evidence accumulation time. We experimentally validate these predictions in RNNs trained to multi-task, which learn disentangled representations in the form of continuous attractors, leading to zero-shot out-of-distribution (OOD) generalization in predicting latent factors. We demonstrate the robustness of our framework across autoregressive architectures, decision boundary geometries and in tasks requiring classification confidence estimation. We find that transformers are particularly suited for disentangling representations, which might explain their unique world understanding abilities. Overall, our framework establishes a formal link between competence at multiple tasks and the formation of disentangled, interpretable world models in both biological and artificial systems, and helps explain why ANNs often arrive at human-interpretable concepts, and how they both may acquire exceptional zero-shot generalization capabilities.",0 "A growing body of work in Ethical AI attempts to capture human moral judgments through simple computational models. The key question we address in this work is whether such simple AI models capture the critical nuances of moral decision-making by focusing on the use case of kidney allocation. We conducted twenty interviews where participants explained their rationale for their judgments about who should receive a kidney. We observe participants: (a) value patients' morally-relevant attributes to different degrees; (b) use diverse decision-making processes, citing heuristics to reduce decision complexity; (c) can change their opinions; (d) sometimes lack confidence in their decisions (e.g., due to incomplete information); and (e) express enthusiasm and concern regarding AI assisting humans in kidney allocation decisions. 
Based on these findings, we discuss challenges of computationally modeling moral judgments as a stand-in for human input, highlight drawbacks of current approaches, and suggest future directions to address these issues.",2 "Clinical trial eligibility matching is a critical yet often labor-intensive and error-prone step in medical research, as it ensures that participants meet precise criteria for safe and reliable study outcomes. Recent advances in Natural Language Processing (NLP) have shown promise in automating and improving this process by rapidly analyzing large volumes of unstructured clinical text and structured electronic health record (EHR) data. In this paper, we present a systematic overview of current NLP methodologies applied to clinical trial eligibility screening, focusing on data sources, annotation practices, machine learning approaches, and real-world implementation challenges. A comprehensive literature search (spanning Google Scholar, Mendeley, and PubMed from 2015 to 2024) yielded high-quality studies, each demonstrating the potential of techniques such as rule-based systems, named entity recognition, contextual embeddings, and ontology-based normalization to enhance patient matching accuracy. While results indicate substantial improvements in screening efficiency and precision, limitations persist regarding data completeness, annotation consistency, and model scalability across diverse clinical domains. The review highlights how explainable AI and standardized ontologies can bolster clinician trust and broaden adoption. Looking ahead, further research into advanced semantic and temporal representations, expanded data integration, and rigorous prospective evaluations is necessary to fully realize the transformative potential of NLP in clinical trial recruitment.",2 "How genes affect tissue scale organization remains a longstanding biological puzzle. As experimental efforts aim to quantify gene expression, chromatin organization, cellular structure, and tissue structure, computational modeling lags behind. To address this gap, we merge a cellular-based tissue model with a nuclear model that includes a deformable lamina shell and chromatin to test multiscale hypotheses linking chromatin and tissue scales. We propose a multiscale hypothesis focusing on brain organoids to explain structural differences between brain organoids built from induced-pluripotent human stem cells and induced-pluripotent gorilla and chimpanzee cells. Recent experiments have discovered that a cell fate transition from neuroepithelial to radial glial cells includes a new intermediate state delayed in human organoids, which narrows and lengthens cells on the apical side. Experiments show that the transcription factor ZEB2 plays a major role in the emergence of this intermediate state with ZEB2 mRNA levels peaking. We postulate that the enhancement of ZEB2 expression is potentially due to chromatin reorganization in response to mechanical deformations of the nucleus. A larger critical mechanical strain triggers reorganization in human-derived stem cells, causing delayed ZEB2 upregulation compared with genetically close relatives. We test this by exploring how slightly different initial configurations of chromatin reorganize under applied strain, with greater differences representing less genetically close relatives. We find that larger configuration discrepancies produce increased differences in the magnitude of chromatin displacement that rise faster than linearly yet slower than exponentially. 
Changes in chromatin strain and contact maps can reveal species-specific differences, aiding our understanding of how one species differs in structure from another.",0 "Knowledge gaps often arise during communication due to diverse backgrounds, knowledge bases, and vocabularies. With recent LLM developments, providing real-time knowledge support is increasingly viable, but is challenging due to shared and individual cognitive limitations (e.g., attention, memory, and comprehension) and the difficulty in understanding the user's context and internal knowledge. To address these challenges, we explore the key question of understanding how people want to receive real-time knowledge support. We built StopGap -- a prototype that provides real-time knowledge support for explaining jargon words in videos -- to conduct a design probe study (N=24) that explored multiple visual knowledge representation formats. Our study revealed individual differences in preferred representations and highlighted the importance of user agency, personalization, and mixed-initiative assistance. Based on our findings, we map out six key design dimensions for real-time LLM knowledge support systems and offer insights for future research in this space.",0 "Retrieval-augmented generation (RAG) has shown promising potential to enhance the accuracy and factuality of language models (LMs). However, imperfect retrievers or noisy corpora can introduce misleading or even erroneous information to the retrieved contents, posing a significant challenge to the generation quality. Existing RAG methods typically address this challenge by directly predicting final answers despite potentially noisy inputs, resulting in an implicit denoising process that is difficult to interpret and verify. On the other hand, the acquisition of explicit denoising supervision is often costly, involving significant human effort. In this work, we propose InstructRAG, where LMs explicitly learn the denoising process through self-synthesized rationales -- First, we instruct the LM to explain how the ground-truth answer is derived from retrieved documents. Then, these rationales can be used either as demonstrations for in-context learning of explicit denoising or as supervised fine-tuning data to train the model. Compared to standard RAG approaches, InstructRAG requires no additional supervision, allows for easier verification of the predicted answers, and effectively improves generation accuracy. Experiments show InstructRAG consistently outperforms existing RAG methods in both training-free and trainable scenarios, achieving a relative improvement of 8.3% over the best baseline method on average across five knowledge-intensive benchmarks. Extensive analysis indicates that InstructRAG scales well with increased numbers of retrieved documents and consistently exhibits robust denoising ability even in out-of-domain datasets, demonstrating strong generalizability.",0 "A key operation in federated learning is the aggregation of gradient vectors generated by individual client nodes. We develop a method based on multiparty homomorphic encryption (MPHE) that enables the central node to compute this aggregate, while receiving only an encrypted version of each individual gradient. Towards this end, we extend classical MPHE methods so that the decryption of the aggregate vector can be successful even when only a subset of client nodes are available. 
This is accomplished by introducing a secret-sharing step during the setup phase of MPHE when the public encryption key is generated. We develop conditions on the parameters of the MPHE scheme that guarantee correctness of decryption and (computational) security. We explain how our method can be extended to accommodate client nodes that do not participate during the setup phase. We also propose a compression scheme for gradient vectors at each client node that can be readily combined with our MPHE scheme and perform the associated convergence analysis. We discuss the advantages of our proposed scheme over other approaches based on secure multi-party computation. Finally, we discuss a practical implementation of our system, compare the performance of our system with different approaches, and demonstrate that by suitably combining compression with encryption the overhead over baseline schemes is rather small.",0 "Neuropathic pain, affecting up to 10% of adults, remains difficult to treat due to limited therapeutic efficacy and tolerability. Although resting-state functional MRI (rs-fMRI) is a promising non-invasive measurement of brain biomarkers to predict drug response in therapeutic development, the complexity of fMRI demands machine learning models with substantial capacity. However, extreme data scarcity in neuropathic pain research limits the application of high-capacity models. To address the challenge of data scarcity, we propose FMM$_{TC}$, a Foundation-Model-boosted Multimodal learning framework for fMRI-based neuropathic pain drug response prediction, which leverages both internal multimodal information in pain-specific data and external knowledge from large pain-agnostic data. Specifically, to maximize the value of limited pain-specific data, FMM$_{TC}$ integrates complementary information from two rs-fMRI modalities: Time series and functional Connectivity. FMM$_{TC}$ is further boosted by an fMRI foundation model with its external knowledge from extensive pain-agnostic fMRI datasets enriching limited pain-specific information. Evaluations with an in-house dataset and a public dataset from OpenNeuro demonstrate FMM$_{TC}$'s superior representation ability, generalizability, and cross-dataset adaptability over existing unimodal fMRI models that only consider one of the rs-fMRI modalities. The ablation study validates the effectiveness of multimodal learning and foundation-model-powered external knowledge transfer in FMM$_{TC}$. An integrated gradient-based interpretation study explains how FMM$_{TC}$'s cross-dataset dynamic behaviors enhance its adaptability. In conclusion, FMM$_{TC}$ boosts clinical trials in neuropathic pain therapeutic development by accurately predicting drug responses to improve the participant stratification efficiency.",2 "Legal systems worldwide continue to struggle with overwhelming caseloads, limited judicial resources, and growing complexities in legal proceedings. Artificial intelligence (AI) offers a promising solution, with Legal Judgment Prediction (LJP) -- the practice of predicting a court's decision from the case facts -- emerging as a key research area. However, existing datasets often formulate the task of LJP unrealistically, not reflecting its true difficulty. They also lack high-quality annotation essential for legal reasoning and explainability. To address these shortcomings, we introduce AnnoCaseLaw, a first-of-its-kind dataset of 471 meticulously annotated U.S. Appeals Court negligence cases. 
Each case is enriched with comprehensive, expert-labeled annotations that highlight key components of judicial decision making, along with relevant legal concepts. Our dataset lays the groundwork for more human-aligned, explainable LJP models. We define three legally relevant tasks: (1) judgment prediction; (2) concept identification; and (3) automated case annotation, and establish a performance baseline using industry-leading large language models (LLMs). Our results demonstrate that LJP remains a formidable task, with application of legal precedent proving particularly difficult. Code and data are available at https://github.com/anonymouspolar1/annocaselaw.",0 "Humour styles can have either a negative or a positive impact on well-being. Given the importance of these styles to mental health, significant research has been conducted on their automatic identification. However, the automated machine learning models used for this purpose are black boxes, making their prediction decisions opaque. Clarity and transparency are vital in the field of mental health. This paper presents an explainable AI (XAI) framework for understanding humour style classification, building upon previous work in computational humour analysis. Using the best-performing single model (ALI+XGBoost) from prior research, we apply comprehensive XAI techniques to analyse how linguistic, emotional, and semantic features contribute to humour style classification decisions. Our analysis reveals distinct patterns in how different humour styles are characterised and misclassified, with particular emphasis on the challenges in distinguishing affiliative humour from other styles. Through detailed examination of feature importance, error patterns, and misclassification cases, we identify key factors influencing model decisions, including emotional ambiguity, context misinterpretation, and target identification. The framework demonstrates significant utility in understanding model behaviour, achieving interpretable insights into the complex interplay of features that define different humour styles. Our findings contribute to both the theoretical understanding of computational humour analysis and practical applications in mental health, content moderation, and digital humanities research.",0 "Explainable AI (XAI) is concerned with how to make AI models more understandable to people. To date, these explanations have predominantly been technocentric - mechanistic or productivity oriented. This paper introduces the Explainable AI for the Arts (XAIxArts) manifesto to provoke new ways of thinking about explainability and AI beyond technocentric discourses. Manifestos offer a means to communicate ideas, amplify unheard voices, and foster reflection on practice. To support the co-creation and revision of the XAIxArts manifesto we combine a World Caf\'e style discussion format with a living manifesto to question four core themes: 1) Empowerment, Inclusion, and Fairness",2 "Artificial Intelligence (AI) has significantly advanced in recent years, driving innovation across various fields, especially in robotics. Even though robots can perform complex tasks with increasing autonomy, challenges remain in ensuring explainability and user-centered design for effective interaction. A key issue in Human-Robot Interaction (HRI) is enabling robots to effectively perceive and reason over multimodal inputs, such as audio and vision, to foster trust and seamless collaboration. 
In this paper, we propose a generalized and explainable multimodal framework for context representation, designed to improve the fusion of speech and vision modalities. We introduce a use case on assessing 'Relevance' between verbal utterances from the user and visual scene perception of the robot. We present our methodology with a Multimodal Joint Representation module and a Temporal Alignment module, which can allow robots to evaluate relevance by temporally aligning multimodal inputs. Finally, we discuss how the proposed framework for context representation can help with various aspects of explainability in HRI.",0 "Verification of biomedical claims is critical for healthcare decision-making, public health policy and scientific research. We present an interactive biomedical claim verification system by integrating LLMs, transparent model explanations, and user-guided justification. In the system, users first retrieve relevant scientific studies from a persistent medical literature corpus and explore how different LLMs perform natural language inference (NLI) within a task-adaptive reasoning framework to classify each study as ""Support,"" ""Contradict,"" or ""Not Enough Information"" regarding the claim. Users can examine the model's reasoning process with additional insights provided by SHAP values that highlight word-level contributions to the final result. This combination enables a more transparent and interpretable evaluation of the model's decision-making process. A summary stage allows users to consolidate the results by selecting a result with narrative justification generated by LLMs. As a result, a consensus-based final decision is summarized for each retrieved study, aiming for safe and accountable AI-assisted decision-making in biomedical contexts. We aim to integrate this explainable verification system as a component within a broader evidence synthesis framework to support human-AI collaboration.",0 "Algorithmic solutions have significant potential to improve decision-making across various domains, from healthcare to e-commerce. However, the widespread adoption of these solutions is hindered by a critical challenge: the lack of human-interpretable explanations. Current approaches to Explainable AI (XAI) predominantly focus on complex machine learning models, often producing brittle and non-intuitive explanations. This project proposes a novel approach to developing explainable algorithms by starting with optimization problems, specifically the assignment problem. The developed software library enriches basic algorithms with human-understandable explanations through four key methodologies: generating meaningful alternative solutions, creating robust solutions through input perturbation, generating concise decision trees and providing reports with a comprehensive explanation of the results. Currently developed tools are often designed with specific clustering algorithms in mind, which limits their adaptability and flexibility to incorporate alternative techniques. Additionally, many of these tools fail to integrate expert knowledge, which could enhance the clustering process by providing valuable insights and context. This lack of adaptability and integration can hinder the effectiveness and robustness of the clustering outcomes in various applications. This represents a step towards making algorithmic solutions more transparent, trustworthy, and accessible. 
By collaborating with industry partners in sectors such as sales, we demonstrate the practical relevance and transformative potential of our approach.",0 "As AI systems are used in high-stakes applications, ensuring interpretability is crucial. Mechanistic Interpretability (MI) aims to reverse-engineer neural networks by extracting human-understandable algorithms to explain their behavior. This work examines a key question: for a given behavior, and under MI's criteria, does a unique explanation exist? Drawing on identifiability in statistics, where parameters are uniquely inferred under specific assumptions, we explore the identifiability of MI explanations. We identify two main MI strategies: (1) ""where-then-what,"" which isolates a circuit replicating model behavior before interpreting it, and (2) ""what-then-where,"" which starts with candidate algorithms and searches for neural activation subspaces implementing them, using causal alignment. We test both strategies on Boolean functions and small multi-layer perceptrons, fully enumerating candidate explanations. Our experiments reveal systematic non-identifiability: multiple circuits can replicate behavior, a circuit can have multiple interpretations, several algorithms can align with the network, and one algorithm can align with different subspaces. Is uniqueness necessary? A pragmatic approach may require only predictive and manipulability standards. If uniqueness is essential for understanding, stricter criteria may be needed. We also reference the inner interpretability framework, which validates explanations through multiple criteria. This work contributes to defining explanation standards in AI.",0 "Recent advances in vision-language models (VLMs) have shown remarkable potential in bridging visual and textual modalities. In computational pathology, domain-specific VLMs, which are pre-trained on extensive histopathology image-text datasets, have succeeded in various downstream tasks. However, existing research has primarily focused on the pre-training process and direct applications of VLMs on the patch level, leaving their great potential for whole slide image (WSI) applications unexplored. In this study, we hypothesize that pre-trained VLMs inherently capture informative and interpretable WSI representations through quantitative feature extraction. To validate this hypothesis, we introduce Vision and Language Embeddings for Explainable WSI Representation (VLEER), a novel method designed to leverage VLMs for WSI representation. We systematically evaluate VLEER on three pathological WSI datasets, proving its better performance in WSI analysis compared to conventional vision features. More importantly, VLEER offers the unique advantage of interpretability, enabling direct human-readable insights into the results by leveraging the textual modality for detailed pathology annotations, providing clear reasoning for WSI-level pathology downstream tasks.",0 "In human-AI interactions, explanation is widely seen as necessary for enabling trust in AI systems. We argue that trust, however, may be a pre-requisite because explanation is sometimes impossible. We derive this result from a formalization of explanation as a search process through knowledge networks, where explainers must find paths between shared concepts and the concept to be explained, within finite time. 
Our model reveals that explanation can fail even under theoretically ideal conditions - when actors are rational, honest, motivated, can communicate perfectly, and possess overlapping knowledge. This is because successful explanation requires not just the existence of shared knowledge but also finding the connection path within time constraints, and it can therefore be rational to cease attempts at explanation before the shared knowledge is discovered. This result has important implications for human-AI interaction: as AI systems, particularly Large Language Models, become more sophisticated and able to generate superficially compelling but spurious explanations, humans may default to trust rather than demand genuine explanations. This creates risks of both misplaced trust and imperfect knowledge integration.",0 "EXplainable machine learning (XML) has recently emerged to address the mysterious mechanisms of machine learning (ML) systems by interpreting their 'black box' results. Despite the development of various explanation methods, determining the most suitable XML method for specific ML contexts remains unclear, highlighting the need for effective evaluation of explanations. The evaluation capabilities of Transformer-based large language models (LLMs) present an opportunity to adopt LLM-as-a-Judge for assessing explanations. In this paper, we propose a workflow that integrates both LLM-based and human judges for evaluating explanations. We examine how LLM-based judges evaluate the quality of various explanation methods and compare their evaluation capabilities to those of human judges within an iris classification scenario, employing both subjective and objective metrics. We conclude that while LLM-based judges effectively assess the quality of explanations using subjective metrics, they are not yet sufficiently developed to replace human judges in this role.",0 "In this short paper we address issues related to building multimodal AI systems for human performance support in manufacturing domains. We make two contributions: we first identify challenges of participatory design and training of such systems, and secondly, to address such challenges, we propose the ACE paradigm: ""Action and Control via Explanations"". Specifically, we suggest that LLMs can be used to produce explanations in the form of human-interpretable ""semantic frames"", which in turn enable end users to provide data the AI system needs to align its multimodal models and representations, including computer vision, automatic speech recognition, and document inputs. ACE, by using LLMs to ""explain"" using semantic frames, will help the human and the AI system to collaborate, together building a more accurate model of human activities and behaviors, and ultimately more accurate predictive outputs for better task support, and better outcomes for human users performing manual tasks.",0 "Some things are impossible, but some things may be even more impossible than impossible. Levitating a feather using one's mind is impossible in our world, but fits into our intuitive theories of possible worlds, whereas levitating a feather using the number five cannot be conceived in any possible world (""inconceivable""). While prior work has examined the distinction between improbable and impossible events, there has been little empirical research on inconceivability. Here, we investigate whether people maintain a distinction between impossibility and inconceivability, and how such distinctions might be made. 
We find that people can readily distinguish the impossible from the inconceivable, using categorization studies similar to those used to investigate the differences between impossible and improbable (Experiment 1). However, this distinction is not explained by people's subjective ratings of event likelihood, which are near zero and indistinguishable between impossible and inconceivable event descriptions (Experiment 2). Finally, we ask whether the probabilities assigned to event descriptions by statistical language models (LMs) can be used to separate modal categories, and whether these probabilities align with people's ratings (Experiment 3). We find high-level similarities between people and LMs: both distinguish among impossible and inconceivable event descriptions, and LM-derived string probabilities predict people's ratings of event likelihood across modal categories. Our findings suggest that fine-grained knowledge about exceedingly rare events (i.e., the impossible and inconceivable) may be learned via statistical learning over linguistic forms, yet leave open the question of whether people represent the distinction between impossible and inconceivable as a difference not of degree, but of kind.",0 "Large Multimodal Models (LMMs) exhibit impressive cross-modal understanding and reasoning abilities, often assessed through multiple-choice questions (MCQs) that include an image, a question, and several options. However, many benchmarks used for such evaluations suffer from systematic biases. Remarkably, Large Language Models (LLMs) without any visual perception capabilities achieve non-trivial performance, undermining the credibility of these evaluations. To address this issue while maintaining the efficiency of MCQ evaluations, we propose MMEvalPro, a benchmark designed to avoid Type-I errors through a trilogy evaluation pipeline and more rigorous metrics. For each original question from existing benchmarks, human annotators augment it by creating one perception question and one knowledge anchor question through a meticulous annotation process. MMEvalPro comprises $2,138$ question triplets, totaling $6,414$ distinct questions. Two-thirds of these questions are manually labeled by human experts, while the rest are sourced from existing benchmarks (MMMU, ScienceQA, and MathVista). Compared with the existing benchmarks, our experiments with the latest LLMs and LMMs demonstrate that MMEvalPro is more challenging (the best LMM lags behind human performance by $31.73\%$, compared to an average gap of $8.03\%$ in previous benchmarks) and more trustworthy (the best LLM trails the best LMM by $23.09\%$, whereas the gap for previous benchmarks is just $14.64\%$). Our in-depth analysis explains the reason for the large performance gap and justifies the trustworthiness of evaluation, underscoring its significant potential for advancing future research.",2 "The scientific method is the cornerstone of human progress across all branches of the natural and applied sciences, from understanding the human body to explaining how the universe works. The scientific method is based on identifying systematic rules or principles that describe the phenomenon of interest in a reproducible way that can be validated through experimental evidence. In the era of generative artificial intelligence, there are discussions on how AI systems may discover new knowledge. 
We argue that complex human reasoning for scientific discovery remains of vital importance, at least before the advent of artificial general intelligence. Yet, AI can be leveraged for scientific discovery via explainable AI. More specifically, knowing the `principles' the AI systems used to make decisions can be a point of contact with domain experts and scientists, which can lead to divergent or convergent views on a given scientific problem. Divergent views may spark further scientific investigations leading to interpretability-guided explanations (IGEs), and possibly to new scientific knowledge. We define this field as Explainable AI for Science, where domain experts -- potentially assisted by generative AI -- formulate scientific hypotheses and explanations based on the interpretability of a predictive AI system.",0 "The complex nature of disease mechanisms and the variability of patient symptoms pose significant challenges in developing effective diagnostic tools. Although machine learning (ML) has made substantial advances in medical diagnosis, the decision-making processes of these models often lack transparency, potentially jeopardizing patient outcomes. This review aims to highlight the role of Explainable AI (XAI) in addressing the interpretability issues of ML models in healthcare, with a focus on chronic conditions such as Parkinson's, stroke, depression, cancer, heart disease, and Alzheimer's disease. A comprehensive literature search was conducted across multiple databases to identify studies that applied XAI techniques in healthcare. The search focused on XAI algorithms used in diagnosing and monitoring chronic diseases. The review identified the application of nine trending XAI algorithms, each evaluated for their advantages and limitations in various healthcare contexts. The findings underscore the importance of transparency in ML models, which is crucial for improving trust and outcomes in clinical practice. While XAI provides significant potential to bridge the gap between complex ML models and clinical practice, challenges such as scalability, validation, and clinician acceptance remain. The review also highlights areas requiring further research, particularly in integrating XAI into healthcare systems. The study concludes that XAI methods offer a promising path forward for enhancing human health monitoring and patient care, though significant challenges must be addressed to fully realize their potential in clinical settings.",2 "Reward models (RMs) are a crucial component in the alignment of large language models' (LLMs) outputs with human values. RMs approximate human preferences over possible LLM responses to the same prompt by predicting and comparing reward scores. However, as they are typically modified versions of LLMs with scalar output heads, RMs are large black boxes whose predictions are not explainable. More transparent RMs would enable improved trust in the alignment of LLMs. In this work, we propose to use contrastive explanations to explain any binary response comparison made by an RM. Specifically, we generate a diverse set of new comparisons similar to the original one to characterise the RM's local behaviour. The perturbed responses forming the new comparisons are generated to explicitly modify manually specified high-level evaluation attributes, on which analyses of RM behaviour are grounded. In quantitative experiments, we validate the effectiveness of our method for finding high-quality contrastive explanations. 
We then showcase the qualitative usefulness of our method for investigating global sensitivity of RMs to each evaluation attribute, and demonstrate how representative examples can be automatically extracted to explain and compare behaviours of different RMs. We see our method as a flexible framework for RM explanation, providing a basis for more interpretable and trustworthy LLM alignment.",0 "Hyperparameter optimization is critical in modern machine learning, requiring expert knowledge, numerous trials, and high computational and human resources. Despite the advancements in Automated Machine Learning (AutoML), challenges in terms of trial efficiency, setup complexity, and interoperability still persist. To address these issues, we introduce a novel paradigm leveraging Large Language Models (LLMs) to automate hyperparameter optimization across diverse machine learning tasks, which is named AgentHPO (short for LLM Agent-based Hyperparameter Optimization). Specifically, AgentHPO processes the task information autonomously, conducts experiments with specific hyperparameters (HPs), and iteratively optimizes them based on historical trials. This human-like optimization process largely reduces the number of required trials, simplifies the setup process, and enhances interpretability and user trust, compared to traditional AutoML methods. Extensive empirical experiments conducted on 12 representative machine-learning tasks indicate that AgentHPO not only matches but also often surpasses the best human trials in terms of performance while simultaneously providing explainable results. Further analysis sheds light on the strategies employed by the LLM in optimizing these tasks, highlighting its effectiveness and adaptability in various scenarios.",0 "This paper presents a novel framework, called PLANTOR (PLanning with Natural language for Task-Oriented Robots), that integrates Large Language Models (LLMs) with Prolog-based knowledge management and planning for multi-robot tasks. The system employs a two-phase generation of a robot-oriented knowledge base, ensuring reusability and compositional reasoning, as well as a three-step planning procedure that handles temporal dependencies, resource constraints, and parallel task execution via mixed-integer linear programming. The final plan is converted into a Behaviour Tree for direct use in ROS2. We tested the framework in multi-robot assembly tasks within a block world and an arch-building scenario. Results demonstrate that LLMs can produce accurate knowledge bases with modest human feedback, while Prolog guarantees formal correctness and explainability. This approach underscores the potential of LLM integration for advanced robotics tasks requiring flexible, scalable, and human-understandable planning.",0 "Scientific discovery relies on scientists generating novel hypotheses that undergo rigorous experimental validation. To augment this process, we introduce an AI co-scientist, a multi-agent system built on Gemini 2.0. The AI co-scientist is intended to help uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and aligned to scientist-provided research objectives and guidance. The system's design incorporates a generate, debate, and evolve approach to hypothesis generation, inspired by the scientific method and accelerated by scaling test-time compute. 
Key contributions include: (1) a multi-agent architecture with an asynchronous task execution framework for flexible compute scaling; (2) a tournament evolution process for self-improving hypothesis generation. Automated evaluations show continued benefits of test-time compute, improving hypothesis quality. While the system is general purpose, we focus development and validation on three biomedical areas: drug repurposing, novel target discovery, and explaining mechanisms of bacterial evolution and anti-microbial resistance. For drug repurposing, the system proposes candidates with promising validation findings, including candidates for acute myeloid leukemia that show tumor inhibition in vitro at clinically applicable concentrations. For novel target discovery, the AI co-scientist proposed new epigenetic targets for liver fibrosis, validated by anti-fibrotic activity and liver cell regeneration in human hepatic organoids. Finally, the AI co-scientist recapitulated unpublished experimental results via a parallel in silico discovery of a novel gene transfer mechanism in bacterial evolution. These results, detailed in separate, co-timed reports, demonstrate the potential to augment biomedical and scientific discovery and usher in an era of AI-empowered scientists.",0 "The precise identification of tree species is fundamental to forestry, conservation, and environmental monitoring. Though many studies have demonstrated that high accuracy can be achieved using bark-based species classification, these models often function as ""black boxes"", limiting interpretability, trust, and adoption in critical forestry applications. Attribution-based Explainable AI (XAI) methods have been used to address this issue in related works. However, XAI applications are often dependent on local features (such as a head shape or paw in animal applications) and cannot describe global visual features (such as ruggedness or smoothness) that are present in texture-dominant images such as tree bark. Concept-based XAI methods, on the other hand, offer explanations based on global visual features with concepts, but they tend to require large overhead in building external concept image datasets and the concepts can be vague and subjective without good means of precise quantification. To address these challenges, we propose a lightweight post-hoc method to interpret visual models for tree species classification using operators and quantifiable concepts. Our approach eliminates computational overhead, enables the quantification of complex concepts, and evaluates both concept importance and the model's reasoning process. To the best of our knowledge, our work is the first study to explain bark vision models in terms of global visual features with concepts. Using a human-annotated dataset as ground truth, our experiments demonstrate that our method significantly outperforms TCAV and Llama3.2 in concept importance ranking based on Kendall's Tau, highlighting its superior alignment with human perceptions.",0 "Social evolutionary theory seeks to explain increases in the scale and complexity of human societies, from origins to present. Over the course of the twentieth century, social evolutionary theory largely fell out of favor as a way of investigating human history, just when advances in complex systems science and computer science saw the emergence of powerful new conceptions of complex systems, and in particular new methods of measuring complexity. 
We propose that these advances in our understanding of complex systems and computer science should be brought to bear on our investigations into human history. To that end, we present a new framework for modeling how human societies co-evolve with their biotic environments, recognizing that both a society and its environment are computers. This leads us to model the dynamics of each of those two systems using the same, new kind of computational machine, which we define here. For simplicity, we construe a society as a set of interacting occupations and technologies. Similarly, under such a model, a biotic environment is a set of interacting distinct ecological and environmental processes. This provides novel ways to characterize social complexity, which we hope will cast new light on the archaeological and historical records. Our framework also provides a natural way to formalize both the energetic (thermodynamic) costs required by a society as it runs, and the ways it can extract thermodynamic resources from the environment in order to pay for those costs -- and perhaps to grow with any left-over resources.",0 "While dust is a key parameter of Mars climate, its behaviour from one year to the next can appear erratic. This variability is notably related to Global Dust Storms (GDS), which occur only in certain years with different onset, duration and intensity. The interannual variabilities of the dust cycle may notably explain some characteristics of Recurring Slope Lineae (RSL), slope flows once thought to be caused by liquid water. Long-term monitoring of dust dynamics is thus required to better understand surface-atmosphere dust exchanges on Mars. Here we present a new method to detect atmospheric dust as a function of space and time in the OMEGA Near-InfraRed (NIR) dataset. This dataset covers more than three Martian years; it includes the 2007 GDS, whose seasonality differs from the preceding (2001) and later (2018) GDS. The method is based on the decrease of the atmospheric optical path caused by dust, which can be measured by OMEGA with the 2 $\mu$m CO$_2$ absorption band. This measure is converted to a 0.9 $\mu$m NIR dust optical depth, notably using comparisons with Mars Exploration Rovers measurements. We derive dust optical depth maps and comment on the variability of the dust seasonal cycle before, during and after the 2007 GDS. We also compare OMEGA NIR optical depths to Thermal InfraRed (TIR) ones derived by other studies. We found a NIR/TIR dust extinction optical depth ratio of 1.8 on average, with some variations notably related to dust particle size. Finally, we show in the northern hemisphere that atmospheric dust and RSL activity are correlated. This may indicate that dust lifting or transport mechanisms working at regional scale also contribute to local RSL activity.",0 "Sensor-based Human Activity Recognition (HAR) in smart home environments is crucial for several applications, especially in the healthcare domain. The majority of the existing approaches leverage deep learning models. While these approaches are effective, the rationale behind their outputs is opaque. Recently, eXplainable Artificial Intelligence (XAI) approaches emerged to provide intuitive explanations to the output of HAR models. To the best of our knowledge, these approaches leverage classic deep models like CNNs or RNNs. Recently, Graph Neural Networks (GNNs) proved to be effective for sensor-based HAR. However, existing approaches are not designed with explainability in mind. 
In this work, we propose the first explainable Graph Neural Network explicitly designed for smart home HAR. Our results on two public datasets show that this approach provides better explanations than state-of-the-art methods while also slightly improving the recognition rate.",0 "Medical image annotation is essential for diagnosing diseases, yet manual annotation is time-consuming, costly, and prone to variability among experts. To address these challenges, we propose an automated explainable annotation system that integrates ensemble learning, visual explainability, and uncertainty quantification. Our approach combines three pre-trained deep learning models - ResNet50, EfficientNet, and DenseNet - enhanced with XGrad-CAM for visual explanations and Monte Carlo Dropout for uncertainty quantification. This ensemble mimics the consensus of multiple radiologists by intersecting saliency maps from models that agree on the diagnosis, while uncertain predictions are flagged for human review. We evaluated our system using the TBX11K medical imaging dataset and a Fire segmentation dataset, demonstrating its robustness across different domains. Experimental results show that our method outperforms baseline models, achieving 93.04% accuracy on TBX11K and 96.4% accuracy on the Fire dataset. Moreover, our model produces precise pixel-level annotations despite being trained with only image-level labels, achieving Intersection over Union (IoU) scores of 36.07% and 64.7%, respectively. By enhancing the accuracy and interpretability of image annotations, our approach offers a reliable and transparent solution for medical diagnostics and other image analysis tasks.",0 "The debate between self-interpretable models and post-hoc explanations for black-box models is central to Explainable AI (XAI). Self-interpretable models, such as concept-based networks, offer insights by connecting decisions to human-understandable concepts but often struggle with performance and scalability. Conversely, post-hoc methods like Shapley values, while theoretically robust, are computationally expensive and resource-intensive. To bridge the gap between these two lines of research, we propose a novel method that combines their strengths, providing theoretically guaranteed self-interpretability for black-box models without compromising prediction accuracy. Specifically, we introduce a parameter-efficient pipeline, AutoGnothi, which integrates a small side network into the black-box model, allowing it to generate Shapley value explanations without changing the original network parameters. This side-tuning approach significantly reduces memory, training, and inference costs, outperforming traditional parameter-efficient methods, where full fine-tuning serves as the optimal baseline. AutoGnothi enables the black-box model to predict and explain its predictions with minimal overhead. Extensive experiments show that AutoGnothi offers accurate explanations for both vision and language tasks, delivering superior computational efficiency with comparable interpretability.",0 "Large Multimodal Models (LMMs), or Vision-Language Models (VLMs), have shown impressive capabilities in a wide range of visual tasks. However, they often struggle with fine-grained visual reasoning, failing to identify domain-specific objectives and provide justifiable explanations for their predictions. To address the above challenge, we propose a novel visual rejection sampling framework to improve the cognition and explainability of LMMs using self-synthesized data. 
Specifically, visual fine-tuning requires images, queries, and target answers. Our approach begins by synthesizing interpretable answers that include human-verifiable visual features. These features are based on expert-defined concepts and are carefully selected based on their alignment with the image content. After each round of fine-tuning, we apply a reward model-free filtering mechanism to select the highest-quality interpretable answers for the next round of tuning. This iterative process of synthetic data generation and fine-tuning progressively improves the model's ability to generate accurate and reasonable explanations. Experimental results demonstrate the effectiveness of our method in improving both the accuracy and explainability of specialized visual classification tasks.",0 "An interpretable model or method has several appealing features, such as robustness to adversarial examples, transparency of decision-making, and facilitation of communication. However, interpretability is a subjective concept, and even its definition can be diverse. The same model may be deemed interpretable by a study team, but regarded as a black-box algorithm by another squad. Simplicity, accuracy and generalizability are some additional important aspects of evaluating interpretability. In this work, we present a general, flexible and harmonious framework to construct interpretable functions in regression analysis with a focus on continuous outcomes. We formulate a functional skeleton in light of users' expectations of interpretability. A new measure based on Mallows's $C_p$-statistic is proposed for model selection to balance approximation, generalizability, and interpretability. We apply this approach to derive a sample size formula in adaptive clinical trial designs to demonstrate the general workflow, and to explain operating characteristics in a Bayesian Go/No-Go paradigm to show the potential advantages of using meaningful intermediate variables. Generalization to categorical outcomes is illustrated in an example of hypothesis testing based on Fisher's exact test. A real data analysis of NHANES (National Health and Nutrition Examination Survey) is conducted to investigate relationships between some important laboratory measurements. We also discuss some extensions of this method.",2 "The leading AI companies are increasingly focused on building generalist AI agents -- systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI training methods. Indeed, various scenarios and experiments have demonstrated the possibility of AI agents engaging in deception or pursuing goals that were not specified by human operators and that conflict with human interests, such as self-preservation. Following the precautionary principle, we see a strong need for safer, yet still useful, alternatives to the current agency-driven trajectory. Accordingly, we propose as a core building block for further advances the development of a non-agentic AI system that is trustworthy and safe by design, which we call Scientist AI. This system is designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans. 
It comprises a world model that generates theories to explain data and a question-answering inference machine. Both components operate with an explicit notion of uncertainty to mitigate the risks of overconfident predictions. In light of these considerations, a Scientist AI could be used to assist human researchers in accelerating scientific progress, including in AI safety. In particular, our system can be employed as a guardrail against AI agents that might be created despite the risks involved. Ultimately, focusing on non-agentic AI may enable the benefits of AI innovation while avoiding the risks associated with the current trajectory. We hope these arguments will motivate researchers, developers, and policymakers to favor this safer path.",0 "Methods of artificial intelligence (AI) and especially machine learning (ML) have been growing ever more complex, and at the same time have more and more impact on people's lives. This leads to explainable AI (XAI) manifesting itself as an important research field that helps humans to better comprehend ML systems. In parallel, quantum machine learning (QML) is emerging with the ongoing improvement of quantum computing hardware combined with its increasing availability via cloud services. QML enables quantum-enhanced ML in which quantum mechanics is exploited to facilitate ML tasks, typically in the form of quantum-classical hybrid algorithms that combine quantum and classical resources. Quantum gates constitute the building blocks of gate-based quantum hardware and form circuits that can be used for quantum computations. For QML applications, quantum circuits are typically parameterized and their parameters are optimized classically such that a suitably defined objective function is minimized. Inspired by XAI, we raise the question of the explainability of such circuits by quantifying the importance of (groups of) gates for specific goals. To this end, we apply the well-established concept of Shapley values. The resulting attributions can be interpreted as explanations for why a specific circuit works well for a given task, improving the understanding of how to construct parameterized (or variational) quantum circuits, and fostering their human interpretability in general. An experimental evaluation on simulators and two superconducting quantum hardware devices demonstrates the benefits of the proposed framework for classification, generative modeling, transpilation, and optimization. Furthermore, our results shed some light on the role of specific gates in popular QML approaches.",0 "This position paper argues that, to its detriment, transparency research overlooks many foundational concepts of artificial intelligence. Here, we focus on uncertainty quantification -- in the context of ante-hoc interpretability and counterfactual explainability -- showing how its adoption could address key challenges in the field. First, we posit that uncertainty and ante-hoc interpretability offer complementary views of the same underlying idea; second, we assert that uncertainty provides a principled unifying framework for counterfactual explainability. Consequently, inherently transparent models can benefit from human-centred explanatory insights -- like counterfactuals -- which are otherwise missing. 
At a higher level, integrating artificial intelligence fundamentals into transparency research promises to yield more reliable, robust and understandable predictive models.",0 "Recent work, spanning from autonomous vehicle coordination to in-space assembly, has shown the importance of learning collaborative behavior for enabling robots to achieve shared goals. A common approach for learning this cooperative behavior is to utilize the centralized-training decentralized-execution paradigm. However, this approach also introduces a new challenge: how do we evaluate the contributions of each agent's actions to the overall success or failure of the team? This credit assignment problem has remained open, and has been extensively studied in the Multi-Agent Reinforcement Learning literature. In fact, humans manually inspecting agent behavior often generate better credit evaluations than existing methods. We combine this observation with recent work showing that Large Language Models demonstrate human-level performance at many pattern recognition tasks. Our key idea is to reformulate credit assignment as the two pattern recognition problems of sequence improvement and attribution, which motivates our novel LLM-MCA method. Our approach utilizes a centralized LLM reward-critic which numerically decomposes the environment reward based on the individualized contribution of each agent in the scenario. We then update the agents' policy networks based on this feedback. We also propose an extension, LLM-TACA, where our LLM critic performs explicit task assignment by passing an intermediary goal directly to each agent policy in the scenario. Both our methods far outperform the state-of-the-art on a variety of benchmarks, including Level-Based Foraging, Robotic Warehouse, and our new Spaceworld benchmark, which incorporates collision-related safety constraints. As an artifact of our methods, we generate large trajectory datasets with each timestep annotated with per-agent reward information, as sampled from our LLM critics.",0 "Modern recommender systems use ML models to predict consumer preferences from consumption history. Although these ""black-box"" models achieve impressive predictive performance, they often suffer from a lack of transparency and explainability. Contrary to the presumed tradeoff between explainability and accuracy, we show that integrating large language models (LLMs) with deep neural networks (DNNs) can improve both. We propose LR-Recsys, which augments DNN-based systems with LLM reasoning capabilities. LR-Recsys introduces a contrastive-explanation generator that produces human-readable positive explanations and negative explanations. These explanations are embedded via a fine-tuned autoencoder and combined with consumer and product features to improve predictions. Beyond offering explainability, we show that LR-Recsys also improves learning efficiency and predictive accuracy, as supported by high-dimensional, multi-environment statistical learning theory. LR-Recsys outperforms state-of-the-art recommender systems by 3-14% on three real-world datasets. Importantly, our analysis reveals that these gains primarily derive from LLMs' reasoning capabilities rather than their external domain knowledge. LR-Recsys presents an effective approach to combining LLMs with traditional DNNs, two of the most widely used ML models today. 
The explanations generated by LR-Recsys provide actionable insights for consumers, sellers, and platforms, helping to build trust, optimize product offerings, and inform targeting strategies.",0 "Analyzing imaging and hyperspectral data is crucial across scientific fields, including biology, medicine, chemistry, and physics. The primary goal is to transform high-resolution or high-dimensional data into an interpretable format to generate actionable insights, aiding decision-making and advancing knowledge. Currently, this task relies on complex, human-designed workflows comprising iterative steps such as denoising, spatial sampling, keypoint detection, feature generation, clustering, dimensionality reduction, and physics-based deconvolutions. The introduction of machine learning over the past decade has accelerated tasks like image segmentation and object detection via supervised learning, and dimensionality reduction via unsupervised methods. However, both classical and NN-based approaches still require human input, whether for hyperparameter tuning, data labeling, or both. The growing use of automated imaging tools, from atomically resolved imaging to biological applications, demands unsupervised methods that optimize data representation for human decision-making or autonomous experimentation. Here, we discuss advances in reward-based workflows, which adopt expert decision-making principles and demonstrate strong transfer learning across diverse tasks. We represent image analysis as a decision-making process over possible operations and identify desiderata and their mappings to classical decision-making frameworks. Reward-driven workflows enable a shift from supervised, black-box models sensitive to distribution shifts to explainable, unsupervised, and robust optimization in image analysis. They can function as wrappers over classical and DCNN-based methods, making them applicable to both unsupervised and supervised workflows (e.g., classification, regression for structure-property mapping) across imaging and hyperspectral data.",0 "Explainable AI is increasingly employing argumentation methods to facilitate interactive explanations between AI agents and human users. While existing approaches typically rely on predetermined human user models, there remains a critical gap in dynamically learning and updating these models during interactions. In this paper, we present a framework that enables AI agents to adapt their understanding of human users through argumentation-based dialogues. Our approach, called Persona, draws on prospect theory and integrates a probability weighting function with a Bayesian belief update mechanism that refines a probability distribution over possible human models based on exchanged arguments. Through empirical evaluations with human users in an applied argumentation setting, we demonstrate that Persona effectively captures evolving human beliefs, facilitates personalized interactions, and outperforms state-of-the-art methods.",0 "Understanding the high-order relationship between urban form and function is essential for modeling the underlying mechanisms of sustainable urban systems. Nevertheless, it is challenging to establish an accurate data representation for complex urban forms that are readily explicable in human terms. This study proposed the concept of core urban morphology representation and developed an explainable deep learning framework for explicably symbolizing complex urban forms into the novel representation, which we call CoMo. 
By interpreting the well-trained deep learning model with a stable weighted F1-score of 89.14%, CoMo presents a promising approach for revealing links between urban function and urban form in terms of core urban morphology representation. Using Boston as a study area, we analyzed the core urban forms at the individual-building, block, and neighborhood levels that are important to corresponding urban functions. The residential core forms follow a gradual morphological pattern along the urban spine, which is consistent with a center-urban-suburban transition. Furthermore, we prove that urban morphology directly affects land use efficiency, which is strongly correlated with location (R2=0.721, p<0.001). Overall, CoMo can explicably symbolize urban forms, provide evidence for the classic urban location theory, and offer mechanistic insights for digital twins.",0 "ARPES studies have established that the high-$T_c$ cuprates with single and double CuO$_2$ layers evolve from the Mott insulator to the pseudogap state with a Fermi arc, on which the superconducting (SC) gap opens. In four- to six-layer cuprates, on the other hand, small hole Fermi pockets are formed in the innermost CuO$_2$ planes, indicating antiferromagnetism. Here, we performed ARPES studies on the triple-layer Bi$_2$Sr$_2$Ca$_2$Cu$_3$O$_{10+\delta}$ over a wide doping range, and found that, although the doping level of the inner CuO$_2$ plane was extremely low in underdoped samples, the $d$-wave SC gap was enhanced to the unprecedentedly large value of $\Delta_0\sim$100 meV at the antinode and persisted well above $T_c$ without the appearance of a Fermi arc, indicating a robust ``nodal metal''. We attribute the nodal metallic behavior to the unique local environment of the inner clean CuO$_2$ plane in the triple-layer cuprates, sandwiched by two nearly optimally-doped outer CuO$_2$ planes and hence subject to a strong proximity effect from both sides. In the nodal metal, quasiparticle peaks showed electron-hole symmetry, suggesting $d$-wave pairing fluctuations. Thus the proximity effect on the innermost CuO$_2$ plane is the strongest in the triple-layer cuprates, which explains why the $T_c$ reaches its maximum at a layer number of three in every multi-layer cuprate family.",0 "Long-term body identification algorithms have emerged recently with the increased availability of high-quality training data. We seek to fill knowledge gaps about these models by analyzing body image embeddings from four body identification networks trained with 1.9 million images across 4,788 identities and 9 databases. By analyzing a diverse range of architectures (ViT, SWIN-ViT, CNN, and linguistically primed CNN), we first show that the face contributes to the accuracy of body identification algorithms and that these algorithms can identify faces to some extent -- with no explicit face training. Second, we show that representations (embeddings) generated by body identification algorithms encode information about gender, as well as image-based information including view (yaw) and even the dataset from which the image originated. Third, we demonstrate that identification accuracy can be improved without additional training by operating directly and selectively on the learned embedding space. Leveraging principal component analysis (PCA), identity comparisons were consistently more accurate in subspaces that eliminated dimensions that explained large amounts of variance. 
These three findings were surprisingly consistent across architectures and test datasets. This work represents the first analysis of body representations produced by long-term re-identification networks trained on challenging unconstrained datasets.",0 "Many modern methods for prediction leverage nearest neighbor search to find past training examples most similar to a test example, an idea that dates back in text to at least the 11th century and has stood the test of time. This monograph aims to explain the success of these methods, both in theory, for which we cover foundational nonasymptotic statistical guarantees on nearest-neighbor-based regression and classification, and in practice, for which we gather prominent methods for approximate nearest neighbor search that have been essential to scaling prediction systems reliant on nearest neighbor analysis to handle massive datasets. Furthermore, we discuss connections to learning distances for use with nearest neighbor methods, including how random decision trees and ensemble methods learn nearest neighbor structure, as well as recent developments in crowdsourcing and graphons. In terms of theory, our focus is on nonasymptotic statistical guarantees, which we state in the form of how many training data and what algorithm parameters ensure that a nearest neighbor prediction method achieves a user-specified error tolerance. We begin with the most general of such results for nearest neighbor and related kernel regression and classification in general metric spaces. In such settings in which we assume very little structure, what enables successful prediction is smoothness in the function being estimated for regression, and a low probability of landing near the decision boundary for classification. In practice, these conditions could be difficult to verify for a real dataset. We then cover recent guarantees on nearest neighbor prediction in the three case studies of time series forecasting, recommending products to people over time, and delineating human organs in medical images by looking at image patches. In these case studies, clustering structure enables successful prediction.",0 "Large language models (LLMs) excel at handling human queries, but they can occasionally generate flawed or unexpected responses. Understanding their internal states is crucial for understanding their successes, diagnosing their failures, and refining their capabilities. Although sparse autoencoders (SAEs) have shown promise for interpreting LLM internal representations, limited research has explored how to better explain SAE features, i.e., understanding the semantic meaning of features learned by SAE. Our theoretical analysis reveals that existing explanation methods suffer from the frequency bias issue, where they emphasize linguistic patterns over semantic concepts, while the latter is more critical to steer LLM behaviors. To address this, we propose using a fixed vocabulary set for feature interpretations and designing a mutual information-based objective, aiming to better capture the semantic meaning behind these features. We further propose two runtime steering strategies that adjust the learned feature activations based on their corresponding explanations. Empirical results show that, compared to baselines, our method provides more discourse-level explanations and effectively steers LLM behaviors to defend against jailbreak attacks. These findings highlight the value of explanations for steering LLM behaviors in downstream applications. 
We will release our code and data once accepted.",0 "Obtaining high-quality explanations of a model's output enables developers to identify and correct biases, align the system's behavior with human values, and ensure ethical compliance. Explainable Artificial Intelligence (XAI) practitioners rely on specific measures to gauge the quality of such explanations. These measures assess key attributes, such as how closely an explanation aligns with a model's decision process (faithfulness), how accurately it pinpoints the relevant input features (localization), and its consistency across different cases (robustness). Despite providing valuable information, these measures do not fully address a critical practitioner's concern: how does the quality of a given explanation compare to other potential explanations? Traditionally, the quality of an explanation has been assessed by comparing it to a randomly generated counterpart. This paper introduces an alternative: the Quality Gap Estimate (QGE). The QGE method offers a direct comparison to what can be viewed as the `inverse' explanation, one that conceptually represents the antithesis of the original explanation. Our extensive testing across multiple model architectures, datasets, and established quality metrics demonstrates that the QGE method is superior to the traditional approach. Furthermore, we show that QGE enhances the statistical reliability of these quality assessments. This advance represents a significant step toward a more insightful evaluation of explanations that enables a more effective inspection of a model's behavior.",0 "Grammatical Error Correction (GEC) faces a critical challenge concerning explainability, notably when GEC systems are designed for language learners. Existing research predominantly focuses on explaining grammatical errors extracted in advance, thus neglecting the relationship between explanations and corrections. To address this gap, we introduce EXGEC, a unified explainable GEC framework that integrates explanation and correction tasks in a generative manner, advocating that these tasks mutually reinforce each other. Experiments have been conducted on EXPECT, a recent human-labeled dataset for explainable GEC, comprising around 20k samples. Moreover, we detect significant noise within EXPECT, potentially compromising model training and evaluation. Therefore, we introduce an alternative dataset named EXPECT-denoised, ensuring a more objective framework for training and evaluation. Results on various NLP models (BART, T5, and Llama3) show that EXGEC models surpass single-task baselines in both tasks, demonstrating the effectiveness of our approach.",0 "Explainable recommender systems are designed to elucidate the explanation behind each recommendation, enabling users to comprehend the underlying logic. Previous works perform rating prediction and explanation generation in a multi-task manner. However, these works suffer from incoherence between predicted ratings and explanations. To address the issue, we propose a novel framework that employs a large language model (LLM) to generate a rating, transforms it into a rating vector, and finally generates an explanation based on the rating vector and user-item information. Moreover, we propose utilizing publicly available LLMs and pre-trained sentiment analysis models to automatically evaluate the coherence without human annotations. 
Extensive experimental results on three datasets of explainable recommendation show that the proposed framework is effective, outperforming state-of-the-art baselines with improvements of 7.3\% in explainability and 4.4\% in text quality.",0 "Machine Learning (ML) systems are vulnerable to adversarial examples, particularly those from query-based black-box attacks. Despite various efforts to detect and prevent such attacks, ML systems are still at risk, demanding a more comprehensive approach to security that includes logging, analyzing, and sharing evidence. While traditional security benefits from well-established practices of forensics and threat intelligence sharing, ML security has yet to find a way to profile its attackers and share information about them. In response, this paper introduces SEA, a novel ML security system to characterize black-box attacks on ML systems for forensic purposes and to facilitate human-explainable intelligence sharing. SEA leverages Hidden Markov Models to attribute the observed query sequence to known attacks. It thus understands the attack's progression rather than focusing solely on the final adversarial examples. Our evaluations reveal that SEA is effective at attack attribution, even on the second incident, and is robust to adaptive strategies designed to evade forensic analysis. SEA's explanations of the attack's behavior even allow us to fingerprint specific minor bugs in widely used attack libraries. For example, we discover that the SignOPT and Square attacks in ART v1.14 send over 50% duplicated queries. We thoroughly evaluate SEA in a variety of settings and demonstrate that it can recognize the same attack with more than 90% Top-1 and 95% Top-3 accuracy. Finally, we demonstrate how SEA generalizes to other domains like text classification.",0 "Audio deepfakes are increasingly indistinguishable from organic speech, often fooling both authentication systems and human listeners. While many techniques use low-level audio features or optimize black-box model training, focusing on the features that humans use to recognize speech is likely to be a more robust long-term approach to detection. We explore the use of prosody, or the high-level linguistic features of human speech (e.g., pitch, intonation, jitter), as a more foundational means of detecting audio deepfakes. We develop a detector based on six classical prosodic features and demonstrate that our model performs as well as other baseline models used by the community to detect audio deepfakes, with an accuracy of 93% and an EER of 24.7%. More importantly, we demonstrate the benefits of using a linguistic features-based approach over existing models by applying an adaptive adversary using an $L_{\infty}$ norm attack against the detectors and using attention mechanisms in our training for explainability. We show that we can explain the prosodic features that have the highest impact on the model's decision (Jitter, Shimmer and Mean Fundamental Frequency) and that other models are extremely susceptible to simple $L_{\infty}$ norm attacks (99.3% relative degradation in accuracy). While overall performance may be similar, we illustrate the robustness and explainability benefits of a prosody-feature approach to audio deepfake detection.",0 "We present a hierarchy of natural language understanding abilities and argue for the importance of moving beyond assessments of understanding at the lexical and sentence levels to the discourse level. 
We propose the task of anaphora accessibility as a diagnostic for assessing discourse understanding, and to this end, present an evaluation dataset inspired by theoretical research in dynamic semantics. We evaluate human and LLM performance on our dataset and find that LLMs and humans align on some tasks and diverge on others. Such divergence can be explained by LLMs' reliance on specific lexical items during language comprehension, in contrast to human sensitivity to structural abstractions.",0 "Hardware Trojans are malicious modifications in digital designs that can be inserted by untrusted supply chain entities. Hardware Trojans can give rise to diverse attack vectors such as information leakage (e.g. MOLES Trojan) and denial-of-service (rarely triggered bit flip). Such an attack in critical systems (e.g. healthcare and aviation) can endanger human lives and lead to catastrophic financial loss. Several techniques have been developed to detect such malicious modifications in digital designs, particularly for designs sourced from third-party intellectual property (IP) vendors. However, most techniques have scalability concerns (due to unsound assumptions during evaluation) and lead to a large number of false positive detections (false alerts). Our framework (SALTY) mitigates these concerns through the use of a novel Graph Neural Network architecture (using a Jumping-Knowledge mechanism) for generating initial predictions and an Explainable Artificial Intelligence (XAI) approach for fine-tuning the outcomes (post-processing). Experiments show 98% True Positive Rate (TPR) and True Negative Rate (TNR), significantly outperforming state-of-the-art techniques across a large set of standard benchmarks.",0 "The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool to model cooperative multi-agent problems that need to be solved distributively. A core assumption of existing approaches is that DCOP solutions can be easily understood, accepted, and adopted, which may not hold, as evidenced by the large body of literature on Explainable AI. In this paper, we propose the Explainable DCOP (X-DCOP) model, which extends a DCOP to include its solution and a contrastive query for that solution. We formally define some key properties that contrastive explanations must satisfy to be considered valid solutions to X-DCOPs, and present theoretical results on the existence of such valid explanations. To solve X-DCOPs, we propose a distributed framework as well as several optimizations and suboptimal variants to find valid explanations. We also include a human user study that showed that users, not surprisingly, prefer shorter explanations over longer ones. Our empirical evaluations showed that our approach can scale to large problems, and the different variants provide different options for trading off explanation length for smaller runtimes. Thus, our model and algorithmic contributions extend the state of the art by reducing the barrier for users to understand DCOP solutions, facilitating their adoption in more real-world applications.",2 "Citations are a key indicator of research impact but are shaped by factors beyond intrinsic research quality, including prestige, social networks, and thematic similarity. While the Matthew Effect explains how prestige accumulates and influences citation distributions, our study contextualizes this by showing that other mechanisms also play a crucial role. 
Analyzing a large dataset of disambiguated authors (N=43,467) and citation linkages (N=264,436) in U.S. economics, we find that close ties in the collaboration network are the strongest predictor of citation, closely followed by thematic similarity between papers. This reinforces the idea that citations are not only a matter of prestige but mostly of social networks and intellectual proximity. Prestige remains important for understanding highly cited papers, but for the majority of citations, proximity--both social and semantic--plays a more significant role. These findings shift attention from extreme cases of highly cited research toward the broader distribution of citations, which shapes career trajectories and the production of knowledge. Recognizing the diverse factors influencing citations is critical for science policy, as this work highlights inequalities that are not based on preferential attachment, but on the role of self-citations, collaborations, and mainstream versus non-mainstream research subjects.",0 "Large language models (LLMs) have revolutionized machine learning due to their ability to capture complex interactions between input features. Popular post-hoc explanation methods like SHAP provide marginal feature attributions, while their extensions to interaction importances only scale to small input lengths ($\approx 20$). We propose Spectral Explainer (SPEX), a model-agnostic interaction attribution algorithm that efficiently scales to large input lengths ($\approx 1000$). SPEX exploits underlying natural sparsity among interactions -- common in real-world data -- and applies a sparse Fourier transform using a channel decoding algorithm to efficiently identify important interactions. We perform experiments across three difficult long-context datasets that require LLMs to utilize interactions between inputs to complete the task. For large inputs, SPEX outperforms marginal attribution methods by up to 20% in terms of faithfully reconstructing LLM outputs. Further, SPEX successfully identifies key features and interactions that strongly influence model output. For one of our datasets, HotpotQA, SPEX provides interactions that align with human annotations. Finally, we use our model-agnostic approach to generate explanations to demonstrate abstract reasoning in closed-source LLMs (GPT-4o mini) and compositional reasoning in vision-language models.",0 "The evolution of cognition is frequently discussed as the evolution of cognitive abilities or the evolution of some neuronal structures in the brain. However, since such traits or abilities are often highly complex, understanding their evolution requires explaining how they could have gradually evolved through selection acting on heritable variations in simpler cognitive mechanisms. With this in mind, making use of a previously proposed theory, here we show how the evolution of cognitive abilities can be captured by the fine-tuning of basic learning mechanisms and, in particular, chunking mechanisms. We use the term chunking broadly for all types of non-elemental learning, claiming that the process by which elements are combined into chunks and associated with other chunks, or elements, is critical for what the brain can do, and that it must be fine-tuned to ecological conditions. We discuss the relevance of this approach to studies in animal cognition, using examples from animal foraging and decision-making, problem solving, and cognitive flexibility. 
Finally, we show how even the apparent human-animal gap in sequence learning ability can be explained in terms of different fine-tunings of a similar chunking process.",0 "The opaque nature of Large Language Models (LLMs) has led to significant research efforts aimed at enhancing their interpretability, primarily through post-hoc methods. More recent in-hoc approaches, such as Concept Bottleneck Models (CBMs), offer both interpretability and intervenability by incorporating explicit concept representations. However, these methods suffer from key limitations, including reliance on labeled concept datasets and significant architectural modifications that challenge re-integration into existing system pipelines. In this work, we introduce a new methodology for incorporating interpretability and intervenability into an existing model by integrating Concept Layers (CLs) into its architecture. Our approach projects the model's internal vector representations into a conceptual, explainable vector space before reconstructing and feeding them back into the model. Furthermore, we eliminate the need for a human-selected concept set by algorithmically searching an ontology for a set of concepts that can be either task-specific or task-agnostic. We evaluate CLs across multiple tasks, demonstrating that they maintain the original model's performance and agreement while enabling meaningful interventions. Additionally, we present a proof of concept showcasing an intervenability interface, allowing users to adjust model behavior dynamically, such as mitigating biases during inference.",0 "Understanding the properties of neural populations (or voxels) in the human brain can advance our comprehension of human perceptual and cognitive processing capabilities and contribute to developing brain-inspired computer models. Recent encoding models using deep neural networks (DNNs) have successfully predicted voxel-wise activity. However, interpreting the properties that explain voxel responses remains challenging because of the black-box nature of DNNs. As a solution, we propose LLM-assisted Visual Cortex Captioning (LaVCa), a data-driven approach that uses large language models (LLMs) to generate natural-language captions for images to which voxels are selective. By applying LaVCa to image-evoked brain activity, we demonstrate that LaVCa generates captions that describe voxel selectivity more accurately than the previously proposed method. Furthermore, the captions generated by LaVCa quantitatively capture more detailed properties than the existing method at both the inter-voxel and intra-voxel levels. Moreover, a more detailed analysis of the voxel-specific properties generated by LaVCa reveals fine-grained functional differentiation within regions of interest (ROIs) in the visual cortex and voxels that simultaneously represent multiple distinct concepts. These findings offer profound insights into human visual representations by assigning detailed captions throughout the visual cortex while highlighting the potential of LLM-based methods in understanding brain representations. Please check out our webpage at https://sites.google.com/view/lavca-llm/",0 "Science communication increases public interest in science by educating, engaging, and encouraging everyday people to participate in the sciences. But traditional science communication is often too formal and inaccessible for general audiences. 
However, there is a growing trend on social media to make it more approachable using three techniques: relatable examples to make explanations concrete, step-by-step walkthroughs to improve understanding, and personal language to drive engagement. These techniques are flashy and often garner more engagement from social media users, but the effectiveness of these techniques in actually explaining the science is unknown. Furthermore, many scientists struggle with adopting these science communication strategies for social media, fearing it might undermine their authority. We conduct a reader study to understand how these science communication techniques on social media affect readers' understanding of and engagement with the science. We found that while most readers prefer these techniques, they had diverse preferences for when and where these techniques are used. With these findings, we conducted a writer study to understand how scientists' varying comfort levels with these strategies can be supported by presenting different structure and style options. We found that the side-by-side comparison of options helped writers make editorial decisions. Instead of adhering to one direction of science communication, writers explored a continuum of options which helped them identify which communication strategies they wanted to implement.",0 "Palms are ecological and economic indicators of tropical forest health, biodiversity, and human impact that support local economies and global forest product supply chains. While palm detection in plantations is well-studied, efforts to map naturally occurring palms in dense forests remain limited by overlapping crowns, uneven shading, and heterogeneous landscapes. We develop PRISM (Processing, Inference, Segmentation, and Mapping), a flexible pipeline for detecting and localizing palms in dense tropical forests using large orthomosaic images. Orthomosaics are created from thousands of aerial images and span several to hundreds of gigabytes. Our contributions are threefold. First, we construct a large UAV-derived orthomosaic dataset collected across 21 ecologically diverse sites in western Ecuador, annotated with 8,830 bounding boxes and 5,026 palm center points. Second, we evaluate multiple state-of-the-art object detectors based on efficiency and performance, integrating zero-shot SAM 2 as the segmentation backbone, and refining the results for precise geographic mapping. Third, we apply calibration methods to align confidence scores with IoU and explore saliency maps for feature explainability. Though optimized for palms, PRISM is adaptable for identifying other natural objects, such as eastern white pines. Future work will explore transfer learning for lower-resolution datasets (0.5 to 1m).",0 "Sleep is an essential component of human physiology, contributing significantly to overall health and quality of life. Accurate sleep staging and disorder detection are crucial for assessing sleep quality. Studies in the literature have proposed PSG-based approaches and machine-learning methods utilizing single-modality signals. However, existing methods often lack multimodal, multilabel frameworks and address sleep stage and disorder classification separately. In this paper, we propose a 1D-Vision Transformer for simultaneous classification of sleep stages and sleep disorders. Our method exploits the sleep disorders' correlation with specific sleep stage patterns and performs a simultaneous identification of a sleep stage and sleep disorder. 
The model is trained and tested using multimodal-multilabel sensory data (including photoplethysmogram, respiratory flow, and respiratory effort signals). The proposed method shows an overall accuracy (Cohen's Kappa) of 78% (0.66) for five-stage sleep classification and 74% (0.58) for sleep apnea classification. Moreover, we analyzed the encoder attention weights to clarify our models' predictions and investigate the influence different features have on the models' outputs. The results show that identified patterns, such as respiratory troughs and peaks, make a higher contribution to the final classification process.",0 "Effective feedback is essential for fostering students' success in scientific inquiry. With advancements in artificial intelligence, large language models (LLMs) offer new possibilities for delivering instant and adaptive feedback. However, this feedback often lacks the pedagogical validation provided by real-world practitioners. To address this limitation, our study evaluates and compares the feedback quality of LLM agents with that of human teachers and science education experts on student-written experimentation protocols. Four blinded raters, all professionals in scientific inquiry and science education, evaluated the feedback texts generated by 1) the LLM agent, 2) the teachers, and 3) the science education experts using a five-point Likert scale based on six criteria of effective feedback: Feed Up, Feed Back, Feed Forward, Constructive Tone, Linguistic Clarity, and Technical Terminology. Our results indicate that LLM-generated feedback shows no significant difference from that of teachers and experts in overall quality. However, the LLM agent's performance lags in the Feed Back dimension, which involves identifying and explaining errors within the student's work context. Qualitative analysis highlighted the LLM agent's limitations in contextual understanding and in the clear communication of specific errors. Our findings suggest that combining LLM-generated feedback with human expertise can enhance educational practices by leveraging the efficiency of LLMs and the nuanced understanding of educators.",2 "Learning for animals or humans is the process that leads to behaviors better adapted to the environment. This process highly depends on the individual that learns and is usually observed only through the individual's actions. This article presents ways to use this individual behavioral data to find the model that best explains how the individual learns. We propose two model selection methods: a general hold-out procedure and an AIC-type criterion, both adapted to non-stationary dependent data. We provide theoretical error bounds for these methods that are close to those of the standard i.i.d. case. To compare these approaches, we apply them to contextual bandit models and illustrate their use on both synthetic and experimental learning data in a human categorization task.",0 "Explainable recommendation has demonstrated significant advantages in informing users about the logic behind recommendations, thereby increasing system transparency, effectiveness, and trustworthiness. To provide personalized and interpretable explanations, existing works often combine the generation capabilities of large language models (LLMs) with collaborative filtering (CF) information. CF information extracted from the user-item interaction graph captures the user behaviors and preferences, which is crucial for providing informative explanations. 
However, due to the complexity of graph structure, effectively extracting the CF information from graphs still remains a challenge. Moreover, existing methods often struggle with the integration of extracted CF information with LLMs due to its implicit representation and the modality gap between graph structures and natural language explanations. To address these challenges, we propose G-Refer, a framework using graph retrieval-augmented large language models (LLMs) for explainable recommendation. Specifically, we first employ a hybrid graph retrieval mechanism to retrieve explicit CF signals from both structural and semantic perspectives. The retrieved CF information is explicitly formulated as human-understandable text by the proposed graph translation and accounts for the explanations generated by LLMs. To bridge the modality gap, we introduce knowledge pruning and retrieval-augmented fine-tuning to enhance the ability of LLMs to process and utilize the retrieved CF information to generate explanations. Extensive experiments show that G-Refer achieves superior performance compared with existing methods in both explainability and stability. Codes and data are available at https://github.com/Yuhan1i/G-Refer.",0 "With the advent of social media, children are becoming increasingly vulnerable to the risk of grooming in online settings. Detecting grooming instances in an online conversation poses a significant challenge as the interactions are not necessarily sexually explicit, since the predators take time to build trust and a relationship with their victim. Moreover, predators evade detection using indirect and coded language. While previous studies have fine-tuned Transformers to automatically identify grooming in chat conversations, they overlook the impact of coded and indirect language on model predictions, and how these align with human perceptions of grooming. In this paper, we address this gap and evaluate bi-encoders on the task of classifying different degrees of grooming risk in chat contexts, for three different participant groups, i.e. law enforcement officers, real victims, and decoys. Using a fuzzy-theoretic framework, we map human assessments of grooming behaviors to estimate the actual degree of grooming risk. Our analysis reveals that fine-tuned models fail to tag instances where the predator uses indirect speech pathways and coded language to evade detection. Further, we find that such instances are characterized by a higher presence of out-of-vocabulary (OOV) words in samples, causing the model to misclassify. Our findings highlight the need for more robust models to identify coded language from noisy chat inputs in grooming contexts.",2 "Encoding implicit language presents a challenge for language models, especially in high-risk domains where maintaining high precision is important. Automated detection of online child grooming is one such critical domain, where predators manipulate victims using a combination of explicit and implicit language to convey harmful intentions. While recent studies have shown the potential of Transformer language models like SBERT for preemptive grooming detection, they primarily depend on surface-level features and approximate real victim grooming processes using vigilante and law enforcement conversations. The question of whether these features and approximations are reasonable has not been addressed thus far. 
In this paper, we address this gap and study whether SBERT can effectively discern varying degrees of grooming risk inherent in conversations, and evaluate its results across different participant groups. Our analysis reveals that while fine-tuning aids language models in learning to assign grooming scores, they show high variance in predictions, especially for contexts containing higher degrees of grooming risk. These errors appear in cases that 1) utilize indirect speech pathways to manipulate victims and 2) lack sexually explicit content. This finding underscores the necessity for language models to robustly model indirect speech acts, particularly those employed by predators.",2 "As humans increasingly share environments with diverse agents powered by RL, LLMs, and beyond, the ability to explain their policies in natural language will be vital for reliable coexistence. In this paper, we build a model-agnostic explanation generator based on an LLM. The technical novelty is that the rewards for training this LLM are generated by a generative flow matching model. This model has a specially designed structure with a hidden layer merged with an LLM to harness the linguistic cues of explanations for generating appropriate rewards. Experiments on both RL and LLM tasks demonstrate that our method can generate dense and effective rewards while saving on expensive human feedback.",0 "Explainable AI (XAI) is critical for ensuring transparency, accountability, and trust in machine learning systems as black-box models are increasingly deployed within high-stakes domains. Among XAI methods, Shapley values are widely used for their fairness and consistency axioms. However, prevalent Shapley value approximation methods commonly rely on abstract baselines or computationally intensive calculations, which can limit their interpretability and scalability. To address such challenges, we propose Pairwise Shapley Values, a novel framework that grounds feature attributions in explicit, human-relatable comparisons between pairs of data instances proximal in feature space. Our method introduces pairwise reference selection combined with single-value imputation to deliver intuitive, model-agnostic explanations while significantly reducing computational overhead. Here, we demonstrate that Pairwise Shapley Values enhance interpretability across diverse regression and classification scenarios--including real estate pricing, polymer property prediction, and drug discovery datasets. We conclude that the proposed methods enable more transparent AI systems and advance the real-world applicability of XAI.",0 "This paper proposes a feature-based domain adaptation technique for identifying emotions in generic images, encompassing both facial and non-facial objects, as well as non-human components. This approach addresses the challenge of the limited availability of pre-trained models and well-annotated datasets for Image Emotion Recognition (IER). Initially, a deep-learning-based Facial Expression Recognition (FER) system is developed, classifying facial images into discrete emotion classes. Maintaining the same network architecture, this FER system is then adapted to recognize emotions in generic images through the application of discrepancy loss, enabling the model to effectively learn IER features while classifying emotions into categories such as 'happy,' 'sad,' 'hate,' and 'anger.' 
Additionally, a novel interpretability method, Divide and Conquer based Shap (DnCShap), is introduced to elucidate the visual features most relevant for emotion recognition. The proposed IER system demonstrated emotion classification accuracies of 61.86% for the IAPSa dataset, 62.47% for the ArtPhoto dataset, 70.78% for the FI dataset, and 59.72% for the EMOTIC dataset. The system effectively identifies the important visual features that lead to specific emotion classifications and also provides detailed embedding plots explaining the predictions, enhancing the understanding and trust in AI-driven emotion recognition systems.",0 "A unique aspect of human visual understanding is the ability to flexibly interpret abstract concepts: acquiring lifted rules explaining what they symbolize, grounding them across familiar and unfamiliar contexts, and making predictions or reasoning about them. While off-the-shelf vision-language models excel at making literal interpretations of images (e.g., recognizing object categories such as tree branches), they still struggle to make sense of such visual abstractions (e.g., how an arrangement of tree branches may form the walls of a maze). To address this challenge, we introduce Deep Schema Grounding (DSG), a framework that leverages explicit structured representations of visual abstractions for grounding and reasoning. At the core of DSG are schemas--dependency graph descriptions of abstract concepts that decompose them into more primitive-level symbols. DSG uses large language models to extract schemas, then hierarchically grounds concrete to abstract components of the schema onto images with vision-language models. The grounded schema is used to augment visual abstraction understanding. We systematically evaluate DSG and different methods in reasoning on our new Visual Abstractions Dataset, which consists of diverse, real-world images of abstract concepts and corresponding question-answer pairs labeled by humans. We show that DSG significantly improves the abstract visual reasoning performance of vision-language models, and is a step toward human-aligned understanding of visual abstractions.",0 "Large Vision-Language Models (VLMs) have demonstrated strong capabilities in tasks requiring a fine-grained understanding of literal meaning in images and text, such as visual question-answering or visual entailment. However, there has been little exploration of the capabilities of these models when presented with images and captions containing figurative meaning, such as metaphors or humor. To close this gap, we propose a new task framing the figurative meaning understanding problem as an explainable visual entailment task, where the model has to predict whether the image (premise) entails a caption (hypothesis) and justify the predicted label with a textual explanation. The figurative phenomena can be present in the image, in the caption, or both. Using a human-AI collaboration approach, we build the accompanying expert-verified dataset V-FLUTE, containing 6,027 {image, caption, label, explanation} instances spanning five diverse figurative phenomena: metaphors, similes, idioms, sarcasm, and humor. Through automatic evaluation, we find that VLMs struggle to generalize from literal to figurative meaning, particularly when it is present in images.
Further, we identify common types of errors in VLM reasoning (hallucination and incomplete or unsound reasoning) across classes of models via human evaluation.",2 "Living organisms rely on internal models of the world to act adaptively. These models, because of resource limitations, cannot encode every detail and hence need to compress information. From a cognitive standpoint, information compression can manifest as a distortion of latent representations, resulting in the emergence of representations that may not accurately reflect the external world or its geometry. Rate-distortion theory formalizes the optimal way to compress information while minimizing such distortions, by considering factors such as capacity limitations and the frequency and utility of stimuli. However, while this theory explains why the above factors distort latent representations, it does not specify which specific distortions they produce. To address this question, here we investigate how rate-distortion trade-offs shape the latent representations of images in generative models, specifically Beta Variational Autoencoders ($\beta$-VAEs), under varying constraints of model capacity, data distributions, and task objectives. By systematically exploring these factors, we identify three primary distortions in latent representations: prototypization, specialization, and orthogonalization. These distortions emerge as signatures of information compression, reflecting the model's adaptation to capacity limitations, data imbalances, and task demands. Additionally, our findings demonstrate that these distortions can coexist, giving rise to a rich landscape of latent spaces, whose geometry could differ significantly across generative models subject to different constraints. Our findings contribute to explaining how the normative constraints of rate-distortion theory shape the geometry of latent representations of generative models of artificial systems and living organisms.",0 "Research in eXplainable Artificial Intelligence (XAI) predominantly concentrates on providing explanations of AI model decisions, especially those of Deep Learning (DL) models. However, there is a growing interest in using XAI techniques to automatically improve the performance of the AI systems themselves. This paper proposes IMPACTX, a novel approach that leverages XAI as a fully automated attention mechanism, without requiring external knowledge or human feedback. Experimental results show that IMPACTX achieves improved performance with respect to the standalone ML model by integrating an attention mechanism based on XAI method outputs during model training. Furthermore, IMPACTX directly provides proper feature attribution maps for the model's decisions, without relying on external XAI methods during the inference process. Our proposal is evaluated using three widely recognized DL models (EfficientNet-B2, MobileNet, and LeNet-5) along with three standard image datasets: CIFAR-10, CIFAR-100, and STL-10. The results show that IMPACTX consistently improves the performance of all the inspected DL models across all evaluated datasets, and it directly provides appropriate explanations for its responses.",0 "As artificial intelligence (AI) becomes increasingly embedded in healthcare delivery, this chapter explores the critical aspects of developing reliable and ethical Clinical Decision Support Systems (CDSS).
Beginning with the fundamental transition from traditional statistical models to sophisticated machine learning approaches, this work examines rigorous validation strategies and performance assessment methods, including the crucial role of model calibration and decision curve analysis. The chapter emphasizes that creating trustworthy AI systems in healthcare requires more than just technical accuracy; it demands careful consideration of fairness, explainability, and privacy. The chapter stresses the challenge of ensuring equitable healthcare delivery through AI, discussing methods to identify and mitigate bias in clinical predictive models. The chapter then delves into explainability as a cornerstone of human-centered CDSS. This focus reflects the understanding that healthcare professionals must not only trust AI recommendations but also comprehend their underlying reasoning. The discussion then advances to an analysis of privacy vulnerabilities in medical AI systems, from data leakage in deep learning models to sophisticated attacks against model explanations. The text explores privacy-preservation strategies such as differential privacy and federated learning, while acknowledging the inherent trade-offs between privacy protection and model performance. This progression, from technical validation to ethical considerations, reflects the multifaceted challenges of developing AI systems that can be seamlessly and reliably integrated into daily clinical practice while maintaining the highest standards of patient care and data protection.",2 "In an era where black-box AI models are integral to decision-making across industries, robust methods for explaining these models are more critical than ever. While these models leverage complex feature interplay for accurate predictions, most explanation methods only assign relevance to individual features. There is a research gap in methods that effectively illustrate interactions between features, especially in visualizing higher-order interactions involving multiple features, which challenge conventional representation methods. To address this challenge in local explanations focused on individual instances, we employ a visual, subset-based approach to reveal relevant feature interactions. Our visual analytics tool FINCH uses coloring and highlighting techniques to create intuitive, human-centered visualizations, and provides additional views that enable users to calibrate their trust in the model and explanations. We demonstrate FINCH in multiple case studies, showing its generalizability, and conducted an extensive human study with machine learning experts to highlight its helpfulness and usability. With this approach, FINCH allows users to visualize feature interactions involving any number of features locally.",0 "Existing studies explore the explainability of Grammatical Error Correction (GEC) in a limited scenario, where they ignore the interaction between corrections and explanations and have not established a corresponding comprehensive benchmark. To bridge the gap, this paper first introduces the task of EXplainable GEC (EXGEC), which focuses on the integral role of correction and explanation tasks. To facilitate the task, we propose EXCGEC, a tailored benchmark for Chinese EXGEC consisting of 8,216 explanation-augmented samples featuring the design of hybrid edit-wise explanations. We then benchmark several series of LLMs in multi-task learning settings, including post-explaining and pre-explaining.
To promote the development of the task, we also build a comprehensive evaluation suite by leveraging existing automatic metrics and conducting human evaluation experiments to demonstrate the human consistency of the automatic metrics for free-text explanations. Our experiments reveal the effectiveness of evaluating free-text explanations using traditional metrics like METEOR and ROUGE, and the inferior performance of multi-task models compared to the pipeline solution, indicating the challenge of establishing positive effects when learning both tasks jointly.",2 "Deep Neural Networks (DNNs) have demonstrated strong capacity in supporting a wide variety of applications. Shapley value has emerged as a prominent tool to analyze feature importance to help people understand the inference process of deep neural models. Computing the Shapley value function requires choosing a baseline to represent a feature's missingness. However, existing random and conditional baselines could negatively influence the explanation. In this paper, by analyzing the suboptimality of different baselines, we identify the problematic baseline where the asymmetric interaction between $\bm{x}'_i$ (the replacement of the faithful influential feature) and other features has significant directional bias toward the model's output, and conclude that $p(y|\bm{x}'_i) = p(y)$ potentially minimizes the asymmetric interaction involving $\bm{x}'_i$. We further generalize the uninformativeness of $\bm{x}'_i$ toward the label space $L$ to avoid estimating $p(y)$ and design a simple uncertainty-based reweighting mechanism to accelerate the computation process. We conduct experiments on various NLP tasks and our quantitative analysis demonstrates the effectiveness of the proposed uncertainty-based reweighting mechanism. Furthermore, by measuring the consistency of explanations generated by explainable methods and humans, we highlight the disparity between model inference and human understanding.",0 "Knowing the truth is rarely enough -- we also seek out reasons why the fact is true. While much is known about how we explain contingent truths, we understand less about how we explain facts, such as those in mathematics, that are true as a matter of logical necessity. We present a framework, based in computational complexity, where explanations for deductive truths co-emerge with discoveries of simplifying steps during the search process. When such structures are missing, we revert, in turn, to error-based reasons, where a (corrected) mistake can serve as fictitious, but explanatory, contingency-cause: not making the mistake serves as a reason why the truth takes the form it does. We simulate human subjects, using GPT-4o, presented with SAT puzzles of varying complexity and reasonableness, validating our theory and showing how its predictions can be tested in future human studies.",2 "Explainable AI (XAI) aims to provide insights into the decisions made by AI models. To date, most XAI approaches provide only one-time, static explanations, which cannot cater to users' diverse knowledge levels and information needs. Conversational explanations have been proposed as an effective method to customize XAI explanations. However, building conversational explanation systems is hindered by the scarcity of training data. Training with synthetic data faces two main challenges: lack of data diversity and hallucination in the generated data.
To alleviate these issues, we introduce a repetition penalty to promote data diversity and exploit a hallucination detector to filter out untruthful synthetic conversation turns. We conducted both automatic and human evaluations on the proposed system, fEw-shot Multi-round ConvErsational Explanation (EMCEE). For automatic evaluation, EMCEE achieves relative improvements of 81.6% in BLEU and 80.5% in ROUGE compared to the baselines. EMCEE also mitigates the degeneration of data quality caused by training on synthetic data. In human evaluations (N=60), EMCEE outperforms baseline models and the control group in improving users' comprehension, acceptance, trust, and collaboration with static explanations by large margins. Through a fine-grained analysis of model responses, we further demonstrate that training on self-generated synthetic data improves the model's ability to generate more truthful and understandable answers, leading to better user interactions. To the best of our knowledge, this is the first conversational explanation method that can answer free-form user questions following static explanations.",0 "Hateful meme detection presents a significant challenge as a multimodal task due to the complexity of interpreting implicit hate messages and contextual cues within memes. Previous approaches have fine-tuned pre-trained vision-language models (PT-VLMs), leveraging the knowledge they gained during pre-training and their attention mechanisms to understand meme content. However, the reliance of these models on implicit knowledge and complex attention mechanisms renders their decisions difficult to explain, which is crucial for building trust in meme classification. In this paper, we introduce IntMeme, a novel framework that leverages Large Multimodal Models (LMMs) for hateful meme classification with explainable decisions. IntMeme addresses the dual challenges of improving both accuracy and explainability in meme moderation. The framework uses LMMs to generate human-like, interpretive analyses of memes, providing deeper insights into multimodal content and context. Additionally, it uses independent encoding modules for both memes and their interpretations, which are then combined to enhance classification performance. Our approach addresses the opacity and misclassification issues associated with PT-VLMs, optimizing the use of LMMs for hateful meme detection. We demonstrate the effectiveness of IntMeme through comprehensive experiments across three datasets, showcasing its superiority over state-of-the-art models.",0 "This article presents factor copula approaches to model temporal dependency of non-Gaussian (continuous/discrete) longitudinal data. Factor copula models are canonical vine copulas which explain the underlying dependence structure of multivariate data through latent variables, and can therefore be easily interpreted and applied to unbalanced longitudinal data. We develop regression models for continuous, binary and ordinal longitudinal data including covariates, by using factor copula constructions with subject-specific latent variables. Considering homogeneous within-subject dependence, our proposed models allow for feasible parametric inference in moderate to high dimensional situations, using the two-stage (IFM) estimation method. We assess the finite sample performance of the proposed models with extensive simulation studies. In the empirical analysis, the proposed models are applied to analyse different longitudinal responses of two real-world data sets.
Moreover, we compare the performances of these models with some widely used random effect models using standard model selection techniques and find substantial improvements. Our studies suggest that factor copula models can be good alternatives to random effect models and can provide better insights into the temporal dependency of longitudinal data of arbitrary nature.",0 "Large Language Models (LLMs) are known to be vulnerable to backdoor attacks, where triggers embedded in poisoned samples can maliciously alter LLMs' behaviors. In this paper, we move beyond attacking LLMs and instead examine backdoor attacks through the novel lens of natural language explanations. Specifically, we leverage LLMs' generative capabilities to produce human-readable explanations for their decisions, enabling direct comparisons between explanations for clean and poisoned samples. Our results show that backdoored models produce coherent explanations for clean inputs but diverse and logically flawed explanations for poisoned data, a pattern consistent across classification and generation tasks for different backdoor attacks. Further analysis reveals key insights into the explanation generation process. At the token level, explanation tokens associated with poisoned samples only appear in the final few transformer layers. At the sentence level, attention dynamics indicate that poisoned inputs shift attention away from the original input context during explanation generation. These findings enhance our understanding of backdoor mechanisms in LLMs and present a promising framework for detecting vulnerabilities through explainability.",0 "A core component of a successful artificial general intelligence would be the rapid creation and manipulation of grounded compositional abstractions and the demonstration of expertise in the family of recursive hierarchical syntactic objects necessary for the creative use of human language. We evaluated the recently released o3 model (OpenAI; o3-mini-high) and discovered that while it succeeds on some basic linguistic tests relying on linear, surface statistics (e.g., the Strawberry Test), it fails to generalize basic phrase structure rules; it fails with comparative sentences involving semantically illegal cardinality comparisons ('Escher sentences'); it fails to correctly rate and explain acceptability dynamics; and it fails to distinguish between instructions to generate unacceptable semantic vs. unacceptable syntactic outputs. When tasked with generating simple violations of grammatical rules, it is seemingly incapable of representing multiple parses to evaluate against various possible semantic interpretations. In stark contrast to many recent claims that artificial language models are on the verge of replacing the field of linguistics, our results suggest not only that deep learning is hitting a wall with respect to compositionality (Marcus 2022), but that it is hitting [a [stubbornly [resilient wall]]] that cannot readily be surmounted to reach human-like compositional reasoning simply through more compute.",0 "This work presents a novel Bayesian framework for unsupervised domain adaptation (UDA) in medical image segmentation. While prior works have explored this clinically significant task using various strategies of domain alignment, they often lack an explicit and explainable mechanism to ensure that target image features capture meaningful structural information.
Moreover, these methods are prone to the curse of dimensionality, inevitably leading to challenges in interpretability and computational efficiency. To address these limitations, we propose RemInD, a framework inspired by human adaptation. RemInD learns a domain-agnostic latent manifold, characterized by several anchors, to memorize anatomical variations. By mapping images onto this manifold as weighted anchor averages, our approach ensures realistic and reliable predictions. This design mirrors how humans develop representative components to understand images and then retrieve component combinations from memory to guide segmentation. Notably, model prediction is determined by two explainable factors: a low-dimensional anchor weight vector, and a spatial deformation. This design facilitates computationally efficient and geometry-adherent adaptation by aligning weight vectors between domains on a probability simplex. Experiments on two public datasets, encompassing cardiac and abdominal imaging, demonstrate the superiority of RemInD, which achieves state-of-the-art performance using a single alignment approach, outperforming existing methods that often rely on multiple complex alignment strategies.",0 "Purpose: As visual inspection is an inherent process during radiological screening, the associated eye gaze data can provide valuable insights into relevant clinical decisions. As deep learning has become the state-of-the-art for computer-assisted diagnosis, integrating human behavior, such as eye gaze data, into these systems is instrumental to help align machine predictions with clinical diagnostic criteria, thus enhancing the quality of automatic radiological diagnosis. Methods: We propose a novel deep learning framework for joint disease diagnosis and prediction of corresponding clinical visual attention maps for chest X-ray scans. Specifically, we introduce a new dual-encoder multi-task UNet, which leverages both a DenseNet201 backbone and a Residual and Squeeze-and-Excitation block-based encoder to extract diverse features for visual attention map prediction, and a multi-scale feature-fusion classifier to perform disease classification. To tackle the issue of asynchronous training schedules of individual tasks in multi-task learning, we propose a multi-stage cooperative learning strategy, with contrastive learning for feature encoder pretraining to boost performance. Results: Our proposed method is shown to significantly outperform existing techniques for chest X-ray diagnosis (AUC=0.93) and the quality of visual attention map prediction (Correlation coefficient=0.58). Conclusion: Benefiting from the proposed multi-task multi-stage cooperative learning, our technique demonstrates the benefit of integrating clinicians' eye gaze into clinical AI systems to boost performance and potentially explainability.",0 "Deep Reinforcement Learning (RL) is remarkably effective in addressing sequential resource allocation problems in domains such as healthcare, public policy, and resource management. However, deep RL policies often lack transparency and adaptability, challenging their deployment alongside human decision-makers. In contrast, Language Agents, powered by large language models (LLMs), provide human-understandable reasoning but may struggle with effective decision making. To bridge this gap, we propose Rule-Bottleneck Reinforcement Learning (RBRL), a novel framework that jointly optimizes decisions and explanations.
At each step, RBRL generates candidate rules with an LLM, selects among them using an attention-based RL policy, and determines the environment action with an explanation via chain-of-thought reasoning. The RL rule selection is optimized using the environment rewards and an explainability metric judged by the LLM. Evaluations in real-world scenarios highlight RBRL's competitive performance with deep RL and efficiency gains over LLM fine-tuning. A survey further confirms the enhanced quality of its explanations.",2 "Recent work demonstrated the existence of critical flaws in the current use of Shapley values in explainable AI (XAI), i.e. the so-called SHAP scores. These flaws are significant in that the scores provided to a human decision-maker can be misleading. Although these negative results might appear to indicate that Shapley values ought not to be used in XAI, this paper argues otherwise. Concretely, this paper proposes a novel definition of SHAP scores that overcomes existing flaws. Furthermore, the paper outlines a practically efficient solution for the rigorous estimation of the novel SHAP scores. Preliminary experimental results confirm our claims, and further underscore the flaws of the current SHAP scores.",0 "According to the ""hard-steps"" model, the origin of humanity required ""successful passage through a number of intermediate steps"" (so-called ""hard"" or ""critical"" steps) that were intrinsically improbable with respect to the total time available for biological evolution on Earth. This model similarly predicts that technological life analogous to human life on Earth is ""exceedingly rare"" in the universe. Here, we critically reevaluate the core assumptions of the hard-steps model in light of recent advances in the Earth and life sciences. Specifically, we advance a potential alternative model where there are no hard steps, and evolutionary novelties (or singularities) required for human origins can be explained via mechanisms outside of intrinsic improbability. Furthermore, if Earth's surface environment was initially inhospitable not only to human life, but also to certain key intermediate steps in human evolution (e.g., the origin of eukaryotic cells, multicellular animals), then the ""delay"" in the appearance of humans can be best explained through the sequential opening of new global environmental windows of habitability over Earth history, with humanity arising relatively quickly once the right conditions were established. In this co-evolutionary (or geobiological) scenario, humans did not evolve ""early"" or ""late"" with respect to the total lifespan of the biosphere, but ""on time.""",0 "Recently, to comprehensively improve Vision Language Models (VLMs) for Visual Question Answering (VQA), several methods have been proposed to reinforce the inference capabilities of VLMs so that they can tackle VQA tasks independently, rather than merely serving as aids to Large Language Models (LLMs). However, these methods ignore the rich common-sense knowledge inside the given VQA image sampled from the real world. Thus, they cannot fully exploit the powerful VLM for the given VQA question to achieve optimal performance.
Attempting to overcome this limitation, and inspired by the human top-down reasoning process, i.e., systematically exploring relevant issues to derive a comprehensive answer, this work introduces a novel, explainable multi-agent collaboration framework by leveraging the expansive knowledge of Large Language Models (LLMs) to enhance the capabilities of VLMs themselves. Specifically, our framework comprises three agents, i.e., Responder, Seeker, and Integrator, to collaboratively answer the given VQA question by seeking its relevant issues and generating the final answer in such a top-down reasoning process. The VLM-based Responder agent generates the answer candidates for the question and responds to other relevant issues. The Seeker agent, primarily based on an LLM, identifies relevant issues related to the question to inform the Responder agent and constructs a Multi-View Knowledge Base (MVKB) for the given visual scene by leveraging the built-in world knowledge of the LLM. The Integrator agent combines knowledge from the Seeker agent and the Responder agent to produce the final VQA answer. Extensive and comprehensive evaluations on diverse VQA datasets with a variety of VLMs demonstrate the superior performance and interpretability of our framework over the baseline method in the zero-shot setting without extra training cost.",0 "Most commonly used non-linear machine learning methods are closed-box models, uninterpretable to humans. The field of explainable artificial intelligence (XAI) aims to develop tools to examine the inner workings of these closed boxes. An often-used model-agnostic approach to XAI involves using simple models as local approximations to produce so-called local explanations; examples of this approach include LIME, SHAP, and SLISEMAP. This paper shows how a large set of local explanations can be reduced to a small ""proxy set"" of simple models, which can act as a generative global explanation. This reduction procedure, ExplainReduce, can be formulated as an optimisation problem and approximated efficiently using greedy heuristics.",0 "The shape of the brain's white matter connections is relatively unexplored in diffusion MRI tractography analysis. While it is known that tract shape varies in populations and across the human lifespan, it is unknown if the variability in dMRI tractography-derived shape may relate to the brain's functional variability across individuals. This work explores the potential of leveraging tractography fiber cluster shape measures to predict subject-specific cognitive performance. We implement machine learning models to predict individual cognitive performance scores. We study a large-scale database from the HCP-YA study. We apply an atlas-based fiber cluster parcellation to the dMRI tractography of each individual. We compute 15 shape, microstructure, and connectivity features for each fiber cluster. Using these features as input, we train a total of 210 models to predict 7 different NIH Toolbox cognitive performance assessments. We apply an explainable AI technique, SHAP, to assess the importance of each fiber cluster for prediction. Our results demonstrate that shape measures are predictive of individual cognitive performance. The studied shape measures, such as irregularity, diameter, total surface area, volume, and branch volume, are as effective for prediction as microstructure and connectivity measures.
The overall best-performing feature is a shape feature, irregularity, which describes how different a cluster's shape is from an idealized cylinder. Further interpretation using SHAP values suggests that fiber clusters with features highly predictive of cognitive ability are widespread throughout the brain, including fiber clusters from the superficial association, deep association, cerebellar, striatal, and projection pathways. This study demonstrates the strong potential of shape descriptors to enhance the study of the brain's white matter and its relationship to cognitive function.",0 "In the contemporary era of intelligent connectivity, Affective Computing (AC), which enables systems to recognize, interpret, and respond to human behavior states, has become an integrated part of many AI systems. As one of the most critical components of responsible AI and trustworthiness in all human-centered systems, explainability has been a major concern in AC. Particularly, the recently released EU General Data Protection Regulation requires high-risk AI systems to be sufficiently interpretable, including biometric-based systems and emotion recognition systems widely used in the affective computing field. Existing explainable methods often compromise between interpretability and performance. Most of them focus only on highlighting key network parameters without offering meaningful, domain-specific explanations to the stakeholders. Additionally, they face challenges in effectively co-learning and explaining insights from multimodal data sources. To address these limitations, we propose a novel and generalizable framework, namely the Attention-Guided Concept Model (AGCM), which provides learnable conceptual explanations by identifying what concepts lead to the predictions and where they are observed. AGCM is extendable to any spatial and temporal signals through multimodal concept alignment and co-learning, empowering stakeholders with deeper insights into the model's decision-making process. We validate the efficiency of AGCM on well-established Facial Expression Recognition benchmark datasets while also demonstrating its generalizability on more complex real-world human behavior understanding applications.",0 "In the domain of code generation, self-debugging is crucial. It allows LLMs to refine their generated code based on execution feedback. This is particularly important because generating correct solutions in one attempt proves challenging for complex tasks. Prior works on self-debugging mostly focus on prompting methods by providing LLMs with few-shot examples, which work poorly on small open-sourced LLMs. In this work, we propose LeDex, a training framework that significantly improves the self-debugging capability of LLMs. Intuitively, we observe that a chain of explanations on the wrong code followed by code refinement helps LLMs better analyze the wrong code and do refinement. We thus propose an automated pipeline to collect a high-quality dataset for code explanation and refinement by generating a number of explanations and refinement trajectories from the LLM itself or a larger teacher model and filtering via execution verification. We perform supervised fine-tuning (SFT) and further reinforcement learning (RL) on both success and failure trajectories with a novel reward design considering code explanation and refinement quality. SFT improves the pass@1 by up to 15.92% and pass@10 by 9.30% over four benchmarks.
RL training brings an additional improvement of up to 3.54% on pass@1 and 2.55% on pass@10. The trained LLMs show iterative refinement ability and can keep refining code continuously. Lastly, our human evaluation shows that the LLMs trained with our framework generate more useful code explanations and help developers better understand bugs in source code.",2 "The growing interest in eXplainable Artificial Intelligence (XAI) has prompted research into models with built-in interpretability, the most prominent of which are part-prototype models. Part-Prototype Models (PPMs) make decisions by comparing an input image to a set of learned prototypes, providing human-understandable explanations in the form of ``this looks like that''. Despite their inherent interpretability, PPMs are not yet considered a valuable alternative to post-hoc models. In this survey, we investigate the reasons for this and provide directions for future research. We analyze papers from 2019 to 2024, and derive a taxonomy of the challenges that current PPMs face. Our analysis shows that the open challenges are quite diverse. The main concern is the quality and quantity of prototypes. Other concerns are the lack of generalization to a variety of tasks and contexts, and general methodological issues, including non-standardized evaluation. We provide ideas for future research in five broad directions: improving predictive performance, developing novel architectures grounded in theory, establishing frameworks for human-AI collaboration, aligning models with humans, and establishing metrics and benchmarks for evaluation. We hope that this survey will stimulate research and promote intrinsically interpretable models for application domains. Our list of surveyed papers is available at https://github.com/aix-group/ppm-survey.",2 "The ability to generate artificial human movement patterns while meeting location and time constraints is an important problem in the security community, particularly as it enables the study of the analog problem of detecting such patterns while maintaining privacy. We frame this problem as an instance of abduction guided by a novel parsimony function represented as an aggregate truth value over an annotated logic program. This approach has the added benefit of affording explainability to an analyst user. By showing that any subset of such a program can provide a lower bound on this parsimony requirement, we are able to abduce movement trajectories efficiently through an informed (i.e., A*) search. We describe how our implementation was enhanced with the application of multiple techniques in order to be scaled and integrated with a cloud-based software stack that included bottom-up rule learning, geolocated knowledge graph retrieval/management, and interfaces with government systems for independently conducted government-run tests for which we provide results. We also report on our own experiments showing that we not only provide exact results but also scale to very large scenarios and provide realistic agent trajectories that can go undetected by machine learning anomaly detectors.",0 "We study rate-induced phase-tipping (RP-tipping) between two stable limit cycles of a birhythmic oscillator. We say that such an oscillator RP-tips when a time variation of an input parameter preserves the bistability of the limit cycles but induces transitions from one stable limit cycle to the other, causing abrupt changes in the amplitude and frequency of the oscillations.
Crucially, these transitions occur when: the rate of change of the input is in a certain interval bounded by critical rate(s), and the system is in certain phases of the cycle. We focus on two illustrative examples: the birhythmic van der Pol oscillator and the birhythmic Decroly-Goldbeter glycolysis model, each subjected to monotone and non-monotone shifts in their input parameters. We explain RP-tipping in terms of properties of the autonomous frozen system, including the phase of a cycle and partial basin instability along the parameter path traced by the changing input. We show that RP-tipping can occur as an irreversible one-way transition or as a series of transitions between the stable limit cycles. Finally, we present RP-tipping diagrams showing combinations of the rate and magnitude of parameter shifts and the phase of the oscillation that give rise to this genuine non-autonomous instability.",0 "Concept-based explanations translate the internal representations of deep learning models into a language that humans are familiar with: concepts. One popular method for finding concepts is Concept Activation Vectors (CAVs), which are learnt using a probe dataset of concept exemplars. In this work, we investigate three properties of CAVs: (1) inconsistency across layers, (2) entanglement with other concepts, and (3) spatial dependency. Each property provides both challenges and opportunities in interpreting models. We introduce tools designed to detect the presence of these properties, provide insight into how each property can lead to misleading explanations, and provide recommendations to mitigate their impact. To demonstrate practical applications, we apply our recommendations to a melanoma classification task, showing how entanglement can lead to uninterpretable results and that the choice of negative probe set can have a substantial impact on the meaning of a CAV. Further, we show that understanding these properties can be used to our advantage. For example, we introduce spatially dependent CAVs to test if a model is translation invariant with respect to a specific concept and class. Our experiments are performed on natural images (ImageNet), skin lesions (ISIC 2019), and a new synthetic dataset, Elements. Elements is designed to capture a known ground truth relationship between concepts and classes. We release this dataset to facilitate further research in understanding and evaluating interpretability methods.",0 "The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. 
The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.",2 "Explainability is a highly demanded requirement for applications in high-risk areas such as medicine. Vision Transformers have mainly been limited to attention extraction to provide insight into the model's reasoning. Our approach combines the high performance of Vision Transformers with the introduction of new explainability capabilities. We present HierViT, a Vision Transformer that is inherently interpretable and adapts its reasoning to that of humans. A hierarchical structure is used to process domain-specific features for prediction. It is interpretable by design, as it derives the target output with human-defined features that are visualized by exemplary images (prototypes). By incorporating domain knowledge about these decisive features, the reasoning is semantically similar to human reasoning and therefore intuitive. Moreover, attention heatmaps visualize the crucial regions for identifying each feature, thereby providing HierViT with a versatile tool for validating predictions. Evaluated on two medical benchmark datasets, LIDC-IDRI for lung nodule assessment and derm7pt for skin lesion classification, HierViT achieves superior and comparable prediction accuracy, respectively, while offering explanations that align with human reasoning.",0 "In recent years, the rapid advancement of large language models (LLMs) in natural language processing has sparked significant interest among researchers in understanding their mechanisms and functional characteristics. Although existing studies have attempted to explain LLM functionalities by identifying and interpreting specific neurons, these efforts mostly focus on individual neuron contributions, neglecting the fact that human brain functions are realized through intricate interaction networks. Inspired by cognitive neuroscience research on functional brain networks (FBNs), this study introduces a novel approach to investigate whether similar functional networks exist within LLMs. We use methods similar to those in the field of functional neuroimaging analysis to locate and identify functional networks in LLMs. Experimental results show that, similar to the human brain, LLMs contain functional networks that frequently recur during operation. Further analysis shows that these functional networks are crucial for LLM performance. Masking key functional networks significantly impairs the model's performance, while retaining just a subset of these networks is adequate to maintain effective operation. This research provides novel insights into the interpretation of LLMs and the lightweighting of LLMs for certain downstream tasks. Code is available at https://github.com/WhatAboutMyStar/LLM_ACTIVATION.",0 "With the continuous advancement of vision language model (VLM) technology, remarkable research achievements have emerged in the dermatology field, the fourth most prevalent human disease category. However, despite these advancements, VLMs still face explainability problems for users in diagnosis due to the inherent complexity of dermatological conditions, and existing tools offer relatively limited support for user comprehension.
We propose SkinGEN, a diagnosis-to-generation framework that leverages the Stable Diffusion (SD) model to generate reference demonstrations from diagnosis results provided by the VLM, thereby enhancing visual explainability for users. Through extensive experiments with Low-Rank Adaptation (LoRA), we identify optimal strategies for skin condition image generation. We conduct a user study with 32 participants evaluating both the system performance and explainability. Results demonstrate that SkinGEN significantly improves users' comprehension of VLM predictions and fosters increased trust in the diagnostic process. This work paves the way for more transparent and user-centric VLM applications in dermatology and beyond.",2 "Hadfield-Menell et al. (2017) propose the Off-Switch Game, a model of Human-AI cooperation in which AI agents always defer to humans because they are uncertain about our preferences. I explain two reasons why AI agents might not defer. First, AI agents might not value learning. Second, even if AI agents value learning, they might not be certain to learn our actual preferences.",0 "In galaxies, the flattening of the spectrum at low radio frequencies below 300 MHz has been the subject of some debate. A turnover at low frequencies could be caused by multiple physical processes, which can yield new insights into the properties of the ionised gas in the interstellar medium. We investigate the existence and nature of the low-frequency turnover in the HII regions of M 101. We study the nearby galaxy M 101 using the LOw Frequency ARray (LOFAR) at frequencies of 54 and 144 MHz, Apertif at 1370 MHz, and a published combined map from the Very Large Array (VLA) and the Effelsberg telescope at 4850 MHz. The spectral index between 54 and 144 MHz is inverted at the centres of HII regions. We find a significant low-frequency flattening at the centres of five out of six HII regions that we selected for this study. The low-frequency flattening in HII regions of M 101 can be explained with two different free-free absorption models. The flattening is localised in a region smaller than 1.5 kpc and can only be detected with high resolution (better than 45''). The detection of low-frequency flattening has important consequences for using radio continuum observations below 100 MHz to measure extinction-free star-formation rates.",0 "Graph Neural Networks (GNNs) are a powerful technique for machine learning on graph-structured data, yet they pose challenges in interpretability. Existing GNN explanation methods usually yield technical outputs, such as subgraphs and feature importance scores, that are difficult for non-data scientists to understand and thereby violate the purpose of explanations. Motivated by recent Explainable AI (XAI) research, we propose GraphXAIN, a method that generates natural language narratives explaining GNN predictions. GraphXAIN is a model- and explainer-agnostic method that uses Large Language Models (LLMs) to translate explanatory subgraphs and feature importance scores into coherent, story-like explanations of GNN decision-making processes. Evaluations on real-world datasets demonstrate GraphXAIN's ability to improve graph explanations. A survey of machine learning researchers and practitioners reveals that GraphXAIN enhances four explainability dimensions: understandability, satisfaction, convincingness, and suitability for communicating model predictions.
When combined with another graph explainer method, GraphXAIN further improves trustworthiness, insightfulness, confidence, and usability. Notably, 95% of participants found GraphXAIN to be a valuable addition to the GNN explanation method. By incorporating natural language narratives, our approach serves both graph practitioners and non-expert users by providing clearer and more effective explanations.",2 "Extreme Ultraviolet (EUV) driven atmospheric escape is a key process in the atmospheric evolution of close-in exoplanets. In many evolutionary models, the energy-limited mass-loss rate with a constant efficiency (typically $\sim10\%$) is assumed for calculating the mass-loss rate. However, hydrodynamic simulations have demonstrated that this efficiency depends on various stellar and planetary parameters. Comprehending the underlying physics of the efficiency is essential for understanding planetary atmospheric evolution and recent observations of the upper atmosphere of close-in exoplanets. We introduce relevant temperatures and timescales derived from physical principles to elucidate the mass-loss process. Our analytical mass-loss model is based on phenomenology and consistent across a range of planetary parameters. We compare our mass-loss efficiency and the radiation hydrodynamic simulations. The model can predict efficiency in both energy-limited and recombination-limited regimes. We further apply our model to exoplanets observed with hydrogen absorption (Ly$\alpha$ and H$\alpha$). Our findings suggest that Ly$\alpha$ absorption is detectable in planets subjected to intermediate EUV flux; under these conditions, the escaping outflow is insufficient in low-EUV environments, while the photoionization timescale remains short in high-EUV ranges. Conversely, H$\alpha$ absorption is detectable under high EUV flux conditions, facilitated by the intense Ly$\alpha$ flux exciting hydrogen atoms. According to our model, the non-detection of neutral hydrogen can be explained by a low mass-loss rate and is not necessarily due to stellar wind confinement or the absence of a hydrogen-dominated atmosphere in many cases. This model assists in identifying future observational targets and explicates the unusual absorption detection/non-detection patterns observed in recent studies.",0 "We study a one-dimensional lattice system of free fermions subjected to a generalized measurement process: the system exchanges particles with its environment, but each fermion leaving or entering the system is counted. In contrast to the freezing of dynamics due to frequent measurements of lattice site occupation numbers, a high rate of fermion counts induces fast fluctuations in the state of the system. Still, through numerical simulations of quantum trajectories and an analytical approach based on replica Keldysh field theory, we find that instantaneous correlations and entanglement properties of free fermions subjected to fermion counting and local occupation measurements are strikingly similar. We explain this similarity through a generalized Zeno effect induced by fermion counting and a universal long-wavelength description in terms of a nonlinear sigma model. The physical requirements underlying this universal emergent behavior are conservation of the total number of particles in the system and its environment, and conservation of the purity of the state of the system by keeping a full record of all measurement outcomes. 
For both types of measurement processes, we present strong evidence against the existence of a critical phase with logarithmic entanglement and conformal invariance. Instead, we identify a finite critical range of length scales on which signatures of conformal invariance are observable. While area-law entanglement is established beyond a scale that is exponentially large in the measurement rate, the upper boundary of the critical range is only algebraically large and thus numerically accessible. Our finding that these properties do not rely on particle number conservation has far-reaching implications for measurement-induced phenomena beyond noninteracting fermions, such as charge sharpening in random quantum circuits or generic interacting systems.",0 "This paper reviews Trustworthy Artificial Intelligence (TAI) and its various definitions. Considering the principles respected in any society, TAI is often characterized by a few attributes, some of which have led to confusion in regulatory or engineering contexts. We argue against using terms such as Responsible or Ethical AI as substitutes for TAI and, to help clarify any confusion, suggest leaving them behind. Given the subjectivity and complexity inherent in TAI, developing a universal framework is deemed infeasible. Instead, we advocate for approaches centered on addressing key attributes and properties such as fairness, bias, risk, security, explainability, and reliability. We examine the ongoing regulatory landscape, with a focus on initiatives in the EU, China, and the USA. We recognize that differences in AI regulations based on geopolitical and geographical reasons pose an additional challenge for multinational companies. We identify risk as a core factor in AI regulation and TAI. For example, as outlined in the EU AI Act, organizations must gauge the risk level of their AI products to act accordingly (or risk hefty fines). We compare modalities of TAI implementation and how multiple cross-functional teams are engaged in the overall process. Thus, a brute-force approach to enacting TAI renders its efficiency and agility moot. To address this, we introduce our framework Set-Formalize-Measure-Act (SFMA). Our solution highlights the importance of transforming TAI-aware metrics, drivers of TAI, stakeholders, and business/legal requirements into actual benchmarks or tests. Finally, over-regulation driven by panic over powerful AI models can, in fact, harm TAI too. Based on GitHub user-activity data, in 2023, AI open-source projects rose to top projects by contributor count. Enabling innovation in TAI hinges on the independent contributions of the open-source community.",0 "This paper describes MAIA, a Multimodal Automated Interpretability Agent. MAIA is a system that uses neural models to automate neural model understanding tasks like feature interpretation and failure mode discovery. It equips a pre-trained vision-language model with a set of tools that support iterative experimentation on subcomponents of other models to explain their behavior. These include tools commonly used by human interpretability researchers: for synthesizing and editing inputs, computing maximally activating exemplars from real-world datasets, and summarizing and describing experimental results. Interpretability experiments proposed by MAIA compose these tools to describe and explain system behavior. We evaluate applications of MAIA to computer vision models.
We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images. Across several trained models and a novel dataset of synthetic vision neurons with paired ground-truth descriptions, MAIA produces descriptions comparable to those generated by expert human experimenters. We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features, and automatically identifying inputs likely to be misclassified.",0 "Social media is profoundly changing our society with its unprecedented spreading power. Due to the complexity of human behaviors and the diversity of massive messages, the information spreading dynamics are complicated, and the reported mechanisms are different and even controversial. Based on data from mainstream social media platforms, including WeChat, Weibo, and Twitter, cumulatively encompassing a total of 7.45 billion users, we uncover a ubiquitous mechanism whereby the information spreading dynamics are basically driven by the interplay of social reinforcement and social weakening effects. Accordingly, we propose a concise equation, which, surprisingly, can well describe all the empirical large-scale spreading trajectories. Our theory resolves a number of controversial claims and satisfactorily explains many phenomena previously observed. It also reveals that the highly clustered nature of social networks can lead to rapid and high-frequency information bursts with relatively small coverage per burst. This vital feature enables social media to have a high capacity and diversity for information dissemination, beneficial for its ecological development.",0 "In this paper, we introduce a learning analytics framework to analyze the in-context learning (ICL) behavior of large language models (LLMs) through the lens of the Zone of Proximal Development (ZPD), an established theory in educational psychology. ZPD delineates the space between what a learner is capable of doing unsupported and what the learner cannot do even with support. We adapt this concept to ICL, measuring the ZPD of LLMs based on model performance on individual examples with and without ICL. Furthermore, we propose an item response theory (IRT) model to predict the distribution of zones for LLMs. Our findings reveal a series of intricate and multifaceted behaviors of ICL, providing new insights into understanding and leveraging this technique. Finally, we demonstrate how our framework can enhance LLMs in both inference and fine-tuning scenarios: (1) By predicting a model's zone of proximal development, we selectively apply ICL to queries that are most likely to benefit from demonstrations, achieving a better balance between inference cost and performance; (2) We propose a human-like curriculum for fine-tuning, which prioritizes examples within the model's ZPD. The curriculum results in improved performance, and we explain its effectiveness through an analysis of the training dynamics of LLMs.",0 "In this paper, an explainable deep learning classifier built on adaptive sinc filters, SincPD, is presented for diagnosing Parkinson's Disease (PD) and determining its severity by analyzing the gait cycle. Considering the effects of PD on the gait cycle of patients, the proposed method utilizes raw data in the form of vertical Ground Reaction Force (vGRF) measured by wearable sensors placed in the soles of subjects' shoes.
The proposed method consists of Sinc layers that model adaptive bandpass filters to extract important frequency bands in the gait cycle of patients and healthy subjects. Therefore, by considering these frequencies, the reasons behind classifying a person as a patient or healthy can be explained. In this method, after applying several preprocessing steps, a large model equipped with many filters is first trained. Next, to prune the extra units and reach a more explainable and parsimonious structure, the extracted filters are clustered based on their cut-off frequencies using a centroid-based clustering approach. Afterward, the medoids of the extracted clusters are considered as the final filters. Therefore, only 15 bandpass filters for each sensor are derived to classify patients and healthy subjects. Finally, the most effective filters along with the sensors are determined by comparing the energy of each filter for patients and healthy subjects.",2 "Visual search is a fundamental natural task for humans and other animals. We investigated the decision processes humans use in covert (single-fixation) search with briefly presented displays having well-separated potential target locations. Performance was compared with the Bayesian-optimal decision process under the assumption that the information from the different potential target locations is statistically independent. Surprisingly, humans performed slightly better than optimal, despite humans' substantial loss of sensitivity in the fovea (foveal neglect), and the implausibility of the human brain replicating the optimal computations. We show that three factors can quantitatively explain these seemingly paradoxical results. Most importantly, simple and fixed heuristic decision rules reach near-optimal search performance. Secondly, foveal neglect primarily affects only the central potential target location. Finally, spatially correlated neural noise can cause search performance to exceed that predicted for independent noise. These findings have broad implications for understanding visual search tasks and other identification tasks in humans and other animals.",0 "This study explores strategies for efficiently classifying scientific full texts using both small, BERT-based models and local large language models like Llama-3.1 8B. We focus on developing methods for selecting subsets of input sentences to reduce input size while simultaneously enhancing classification performance. To this end, we compile a novel dataset consisting of full-text scientific papers from the field of invasion biology, specifically addressing the impacts of invasive species. These papers are aligned with publicly available impact assessments created by researchers for the International Union for Conservation of Nature (IUCN). Through extensive experimentation, we demonstrate that various sources like human evidence annotations, LLM-generated annotations, or explainability scores can be used to train sentence selection models that improve the performance of both encoder- and decoder-based language models while optimizing efficiency through the reduction in input length, leading to improved results even when compared to models like ModernBERT that are able to handle the complete text as input. 
Additionally, we find that repeated sampling of shorter inputs proves to be a very effective strategy that, at a slightly increased cost, can further improve classification performance.",0 "Context and Motivation: Due to their growing complexity, everyday software systems are becoming increasingly opaque for users. A frequently adopted method to address this difficulty is explainability, which aims to make systems more understandable and usable. Question/problem: However, explanations can also lead to unnecessary cognitive load. Therefore, adapting explanations to the actual needs of a user is a frequently faced challenge. Principal ideas/results: This study investigates factors influencing users' preferred level of detail and form of an explanation (e.g., short text or video tutorial) in software. We conducted an online survey with 58 participants to explore relationships between demographics, software usage, and app-specific knowledge, as well as their preferred explanation form and level of detail. The results indicate that users prefer moderately detailed explanations in short text formats. Correlation analyses revealed no relationship between app-specific knowledge and the preferred level of detail of an explanation, but an influence of demographic aspects (like gender) on app-specific knowledge and its impact on application confidence were observed, pointing to a possible mediated relationship between knowledge and preferences for explanations. Contribution: Our results show that explanation preferences are weakly influenced by app-specific knowledge but shaped by demographic and psychological factors, supporting the development of adaptive explanation systems tailored to user expertise. These findings support requirements analysis processes by highlighting important factors that should be considered in user-centered methods such as personas.",2 "Context and Motivation: The increasing complexity of modern software systems often challenges users' abilities to interact with them. Taking established quality attributes such as usability and transparency into account can mitigate this problem, but often does not suffice to completely solve it. Recently, explainability has emerged as an essential non-functional requirement to help overcome the aforementioned difficulties. Question/problem: User preferences regarding the integration of explanations in software differ. Neither too few nor too many explanations are helpful. In this paper, we investigate the influence of a user's subjective mood and objective demographic aspects on explanation needs by means of frequency and type of explanation. Principal ideas/results: Our results reveal a limited relationship between these factors and explanation needs. Two significant correlations were identified: Emotional reactivity was positively correlated with the need for UI explanations, while a negative correlation was found between age and user interface needs. Contribution: As we only find very few significant aspects that influence the need for explanations, we conclude that the need for explanations is very subjective and only partially depends on objective factors. These findings emphasize the necessity for software companies to actively gather user-specific explainability requirements to address diverse and context-dependent user demands. 
Nevertheless, future research should explore additional personal traits and cross-cultural factors to inform the development of adaptive, user-centered explanation systems.",0 "Trust models are essential components of networks of any nature, as they refer to confidence frameworks to evaluate and verify whether their participants act reliably and fairly. They are necessary to any social, organizational, or computer network model to ensure truthful interactions, data integrity, and overall system resilience. Trust models can be centralized or distributed, each providing a fair share of benefits and challenges. Blockchain is a special case of distributed trust models that utilize advanced cryptographic techniques and decentralized consensus mechanisms to enforce confidence among participants within a network. In this piece, we provide an overview of blockchain networks from the trust model perspective, with a special focus on the Hyperledger Fabric framework, a widespread blockchain implementation with a consortium architecture. We explore Fabric in detail, including its trust model, components, overall architecture, and a general implementation blueprint for the platform. We intend to offer readers with technical backgrounds, but who are not necessarily experts in the blockchain field, a friendly review of these topics to spark their curiosity to continue expanding their knowledge on these increasingly popular technologies.",2 "With an evolutionary approach, the basis of morality can be explained as adaptations to problems of cooperation. With 'evolution' taken in a broad sense, AIs that satisfy the conditions for evolution to apply will be subject to the same cooperative evolutionary pressure as biological entities. Here the adaptiveness of increased cooperation as material safety and wealth increase is discussed -- for humans, for other societies, and for AIs. Diminishing beneficial returns from increased access to material resources also suggests the possibility that, on the whole, there will be no incentive to, for instance, colonize entire galaxies, thus providing a possible explanation of the Fermi paradox, which wonders where everybody is. It is further argued that old societies could engender, and give way to, super-AIs, since it is likely that super-AIs are feasible, and fitter. Closing is an aside on effective ways for morals and goals to affect life and society, emphasizing environments, cultures, and laws, and exemplified by how to eat. 'Diminishing returns' is defined as less than roots, the inverse of infeasibility. It is also noted that there can be no exponential colonization or reproduction, for mathematical reasons, as each entity takes up a certain amount of space. Appended are an algorithm for colonizing, for example, a galaxy quickly, models of the evolution of cooperation and fairness under diminishing returns, and software for simulating signaling development.",0 "Numerous fairness metrics have been proposed and employed by artificial intelligence (AI) experts to quantitatively measure bias and define fairness in AI models. Recognizing the need to accommodate stakeholders' diverse fairness understandings, efforts are underway to solicit their input. However, conveying AI fairness metrics to stakeholders without AI expertise, capturing their personal preferences, and seeking a collective consensus remain challenging and underexplored. To bridge this gap, we propose a new framework, EARN Fairness, which facilitates collective metric decisions among stakeholders without requiring AI expertise. 
The framework features an adaptable interactive system and a stakeholder-centered EARN Fairness process to Explain fairness metrics, Ask stakeholders' personal metric preferences, Review metrics collectively, and Negotiate a consensus on metric selection. To gather empirical results, we applied the framework to a credit rating scenario and conducted a user study involving 18 decision subjects without AI knowledge. We identify their personal metric preferences and their acceptable level of unfairness in individual sessions. Subsequently, we uncovered how they reached metric consensus in team sessions. Our work shows that the EARN Fairness framework enables stakeholders to express personal preferences and reach consensus, providing practical guidance for implementing human-centered AI fairness in high-risk contexts. Through this approach, we aim to harmonize fairness expectations of diverse stakeholders, fostering more equitable and inclusive AI fairness.",2 "The increasing complexity of software dependencies has led to the emergence of automated dependency management tools, such as Dependabot. However, these tools often overwhelm developers with a high volume of alerts and notifications, leading to alert fatigue. This paper presents a position on using Artificial Intelligence (AI) agents as dependency negotiators to reduce alert fatigue. We then examine specific use cases where AI agents can facilitate dependency negotiations, such as when working with external dependencies or managing complex, multi-component systems. Our findings highlight the need for more research on the design and evaluation of AI-driven dependency mediation mechanisms. With a focus on ensuring transparency, explainability, and human trustworthiness in these GitHub software projects, our goal is to reduce alert fatigue to an extent that maintainers no longer feel overwhelmed and welcome pull requests just like any other contribution into their projects.",0 "We revisit the reference determinacy (RD) assumption in the task of natural language inference (NLI), i.e., the premise and hypothesis are assumed to refer to the same context when human raters annotate a label. While RD is a practical assumption for constructing a new NLI dataset, we observe that current NLI models, which are typically trained solely on hypothesis-premise pairs created with the RD assumption, fail in downstream applications such as fact verification, where the input premise and hypothesis may refer to different contexts. To highlight the impact of this phenomenon in real-world use cases, we introduce RefNLI, a diagnostic benchmark for identifying reference ambiguity in NLI examples. In RefNLI, the premise is retrieved from a knowledge source (i.e., Wikipedia) and does not necessarily refer to the same context as the hypothesis. With RefNLI, we demonstrate that finetuned NLI models and few-shot prompted LLMs both fail to recognize context mismatch, leading to over 80% false contradiction and over 50% entailment predictions. We discover that the existence of reference ambiguity in NLI examples can in part explain the inherent human disagreements in NLI and provide insight into how the RD assumption impacts the NLI dataset creation process.",2 "Inductive reasoning - the process of inferring general rules from a small number of observations - is a fundamental aspect of human intelligence. 
Recent works suggest that large language models (LLMs) can engage in inductive reasoning by sampling multiple hypotheses about the rules and selecting the one that best explains the observations. However, due to IID sampling, semantically redundant hypotheses are frequently generated, leading to significant waste of compute. In this paper, we 1) demonstrate that increasing the temperature to enhance diversity is limited by the text degeneration issue, and 2) propose a novel method to improve the diversity while maintaining text quality. We first analyze the effect of increasing the temperature parameter, which is regarded as the LLM's diversity control, on IID hypotheses. Our analysis shows that as temperature rises, diversity and accuracy of hypotheses increase up to a certain point, but this trend saturates due to text degeneration. To generate hypotheses that are more semantically diverse and of higher quality, we propose a novel approach inspired by human inductive reasoning, which we call Mixture of Concepts (MoC). When applied to several inductive reasoning benchmarks, MoC demonstrated significant performance improvements compared to standard IID sampling and other approaches.",0 "Around 50 percent of Ireland's rural population relies on unregulated private wells vulnerable to agricultural runoff and untreated wastewater. High national rates of Shiga toxin-producing Escherichia coli (STEC) and other waterborne illnesses have been linked to well water exposure. Periodic well testing is essential for public health, yet the lack of government incentives places the financial burden on households. Understanding environmental, cognitive, and material factors influencing well-testing behavior is critical. This study employs Agent-Based Modeling (ABM) to simulate policy interventions based on national survey data. The ABM framework, designed for private well-testing behavior, integrates a Deep Q-network reinforcement learning model and Explainable AI (XAI) for decision-making insights. Key features were selected using Recursive Feature Elimination (RFE) with 10-fold cross-validation, while SHAP (Shapley Additive Explanations) provided further interpretability for policy recommendations. Fourteen policy scenarios were tested. The most effective, Free Well Testing plus Communication Campaign, increased participation to 435 out of 561 agents, from a baseline of approximately 5 percent, with rapid behavioral adaptation. Free Well Testing plus Regulation also performed well, with 433 out of 561 agents initiating well testing. Free testing alone raised participation to over 75 percent, with some agents testing multiple times annually. Scenarios with free well testing achieved faster learning efficiency, converging in 1000 episodes, while others took 2000 episodes, indicating slower adaptation. This research demonstrates the value of ABM and XAI in public health policy, providing a framework for evaluating behavioral interventions in environmental health.",2 "Deep learning-based expert models have reached superhuman performance in decision-making domains such as chess and Go. However, explaining or commenting on given decisions remains under-explored, although it is important for model explainability and human education. The outputs of expert models are accurate yet difficult for humans to interpret. On the other hand, large language models (LLMs) can produce fluent commentary but are prone to hallucinations due to their limited decision-making capabilities. 
To bridge this gap between expert models and LLMs, we focus on chess commentary as a representative task of explaining complex decision-making processes through language and address both the generation and evaluation of commentary. We introduce Concept-guided Chess Commentary generation (CCC) for producing commentary and GPT-based Chess Commentary Evaluation (GCC-Eval) for assessing it. CCC integrates the decision-making strengths of expert models with the linguistic fluency of LLMs through prioritized, concept-based explanations. GCC-Eval leverages expert knowledge to evaluate chess commentary based on informativeness and linguistic quality. Experimental results, validated by both human judges and GCC-Eval, demonstrate that CCC generates commentary which is accurate, informative, and fluent.",0 "Deep Reinforcement Learning (DRL) has achieved remarkable success in sequential decision-making tasks across diverse domains, yet its reliance on black-box neural architectures hinders interpretability, trust, and deployment in high-stakes applications. Explainable Deep Reinforcement Learning (XRL) addresses these challenges by enhancing transparency through feature-level, state-level, dataset-level, and model-level explanation techniques. This survey provides a comprehensive review of XRL methods, evaluates their qualitative and quantitative assessment frameworks, and explores their role in policy refinement, adversarial robustness, and security. Additionally, we examine the integration of reinforcement learning with Large Language Models (LLMs), particularly through Reinforcement Learning from Human Feedback (RLHF), which optimizes AI alignment with human preferences. We conclude by highlighting open research challenges and future directions to advance the development of interpretable, reliable, and accountable DRL systems.",2 "Background and Context: Some skills taught in introductory programming courses are categorized into 1) explaining code, 2) arranging lines of code in correct sequence, 3) tracing through the execution of a program, and 4) writing code from scratch. Objective: Knowing if a programming skill is a prerequisite to another would benefit teachers in properly planning the course and structuring the order in which they present activities relating to new content. Prior attempts to establish a skill hierarchy have suffered from methodological issues. Method: In this study, we used the conviction measure from association rule mining to perform pair-wise comparisons of five skills: Write, Trace, Reverse trace, Sequence, and Explain code. We used the data from four exams with more than 600 participants where students solved programming assignments of different skills for several programming topics. Findings: Our findings matched the previous finding that tracing is a prerequisite for students to learn to write code. Contradicting the previous claims, our analysis showed that using the mean threshold writing code is a prerequisite to explaining code. However, there is no clear relationship when we change the threshold to the median. Unlike prior work, we did not find a clear prerequisite relationship between sequencing code and writing or explaining code. Implications: Our research can help instructors by systematically arranging the skills students exercise when encountering a new topic. 
The goal is to help instructors properly teach and assess programming in a fashion most effective for learning by leveraging the relationship between skills.",2 "Free-text explanations are expressive and easy to understand, but many datasets lack annotated explanation data, making it challenging to train models for explainable predictions. To address this, we investigate how to use existing explanation datasets for self-rationalization and evaluate models' out-of-distribution (OOD) performance. We fine-tune T5-Large and OLMo-7B models and assess the impact of fine-tuning data quality, the number of fine-tuning samples, and few-shot selection methods. The models are evaluated on 19 diverse OOD datasets across three tasks: natural language inference (NLI), fact-checking, and hallucination detection in abstractive summarization. For the generated explanation evaluation, we conduct a human study on 13 selected models and study its correlation with the Acceptability score (T5-11B) and three other LLM-based reference-free metrics. Human evaluation shows that the Acceptability score correlates most strongly with human judgments, demonstrating its effectiveness in evaluating free-text explanations. Our findings reveal: 1) few annotated examples effectively adapt models for OOD explanation generation; 2) compared to sample selection strategies, fine-tuning data source has a larger impact on OOD performance; and 3) models with higher label prediction accuracy tend to produce better explanations, as reflected by higher Acceptability scores.",2 "Software producers are now recognizing the importance of improving their products' suitability for diverse populations, but little attention has been given to measurements to shed light on products' suitability to individuals below the median socioeconomic status (SES) -- who, by definition, make up half the population. To enable software practitioners to attend to both lower- and higher-SES individuals, this paper provides two new surveys that together facilitate measuring how well a software product serves socioeconomically diverse populations. The first survey (SES-Subjective) is who-oriented: it measures who their potential or current users are in terms of their subjective SES (perceptions of their SES). The second survey (SES-Facets) is why-oriented: it collects individuals' values for an evidence-based set of facet values (individual traits) that (1) statistically differ by SES and (2) affect how an individual works and problem-solves with software products. Our empirical validations with deployments at University A and University B (464 and 522 responses, respectively) showed that both surveys are reliable. Further, our results statistically agree with both ground truth data on respondents' socioeconomic statuses and with predictions from foundational literature. Finally, we explain how the pair of surveys is uniquely actionable by software practitioners, such as in requirements gathering, debugging, quality assurance activities, maintenance activities, and fulfilling legal reporting requirements such as those being drafted by various governments for AI-powered software.",2 "This position paper emphasizes the critical gap in the evaluation of Explainable AI (XAI) due to the lack of standardized and reliable metrics, which diminishes its practical value, trustworthiness, and ability to meet regulatory requirements. 
Current evaluation methods are often fragmented, subjective, and biased, making them prone to manipulation and complicating the assessment of complex models. A central issue is the absence of a ground truth for explanations, which hinders comparisons across various XAI approaches. To address these challenges, we advocate for widespread research into developing robust, context-sensitive evaluation metrics. These metrics should be resistant to manipulation, relevant to each use case, and based on human judgment and real-world applicability. We also recommend creating domain-specific evaluation benchmarks that align with the user and regulatory needs of sectors such as healthcare and finance. By encouraging collaboration among academia, industry, and regulators, we can create standards that balance flexibility and consistency, ensuring XAI explanations are meaningful, trustworthy, and compliant with evolving regulations.",0 "Recent advancements in retrieval-augmented generation (RAG) have demonstrated impressive performance in the question-answering (QA) task. However, most previous works predominantly focus on text-based answers. While some studies address multimodal data, they still fall short in generating comprehensive multimodal answers, particularly for explaining concepts or providing step-by-step tutorials on how to accomplish specific goals. This capability is especially valuable for applications such as enterprise chatbots and settings such as customer service and educational systems, where the answers are sourced from multimodal data. In this paper, we introduce a simple and effective framework named MuRAR (Multimodal Retrieval and Answer Refinement). MuRAR enhances text-based answers by retrieving relevant multimodal data and refining the responses to create coherent multimodal answers. This framework can be easily extended to support multimodal answers in enterprise chatbots with minimal modifications. Human evaluation results indicate that multimodal answers generated by MuRAR are more useful and readable compared to plain text answers.",2 "Understanding and explaining differences between audio recordings is crucial for fields like audio forensics, quality assessment, and audio generation. This involves identifying and describing audio events, acoustic scenes, signal characteristics, and their emotional impact on listeners. This paper stands out as the first work to comprehensively study the task of explaining audio differences and then propose a benchmark and baselines for the task. First, we present two new datasets for audio difference explanation derived from the AudioCaps and Clotho audio captioning datasets. Using Large Language Models (LLMs), we generate three levels of difference explanations: (1) concise descriptions of audio events and objects, (2) brief sentences about audio events, acoustic scenes, and signal properties, and (3) comprehensive explanations that include semantics and listener emotions. For the baseline, we use prefix tuning where audio embeddings from two audio files are used to prompt a frozen language model. Our empirical analysis and ablation studies reveal that the naive baseline struggles to distinguish perceptually similar sounds and generate detailed tier 3 explanations. To address these limitations, we propose ADIFF, which introduces a cross-projection module, position captioning, and a three-step training process to enhance the model's ability to produce detailed explanations. 
We evaluate our model using objective metrics and human evaluation and show that our model enhancements lead to significant improvements in performance over the naive baseline and the SoTA Audio-Language Model (ALM) Qwen Audio. Lastly, we conduct multiple ablation studies to study the effects of cross-projection, language model parameters, position captioning, and third-stage fine-tuning, and present our findings. Our benchmarks, findings, and strong baseline pave the way for nuanced and human-like explanations of audio differences.",2 "Hand kinematics can be measured in Human-Computer Interaction (HCI) with the aim of predicting the user's intention in a reach-to-grasp action. Using multiple hand sensors, multivariate time series data are captured. Given a number of possible actions on a number of objects, the goal is to classify the multivariate time series data, where the class shall be predicted as early as possible. Many machine-learning methods have been developed for such classification tasks, where different approaches produce favorable solutions on different data sets. We, therefore, employ an ensemble approach that includes and weights different approaches. To provide trustworthy classification predictions, we present the XMTC tool that incorporates coordinated multiple-view visualizations to analyze the predictions. Temporal accuracy plots, confusion matrix heatmaps, temporal confidence heatmaps, and partial dependence plots allow for the identification of the best trade-off between early prediction and prediction quality, the detection and analysis of challenging classification conditions, and the investigation of the prediction evolution in an overview-and-detail manner. We apply XMTC to real-world HCI data in multiple scenarios and show that good classification predictions can be achieved early on with our classifier, as well as which conditions are easy to distinguish, which multivariate time series measurements impose challenges, and which features have the most impact.",0 "To design data visualizations that are easy to comprehend, we need to understand how people with different interests read them. Computational models for predicting scanpaths on charts could complement empirical studies by offering estimates of user performance inexpensively; however, previous models have been limited to gaze patterns and overlooked the effects of tasks. Here, we contribute Chartist, a computational model that simulates how users move their eyes to extract information from the chart in order to perform analysis tasks, including value retrieval, filtering, and finding extremes. The novel contribution lies in a two-level hierarchical control architecture. At the high level, the model uses LLMs to comprehend the information gained so far and applies this representation to select a goal for the lower-level controllers, which, in turn, move the eyes in accordance with a sampling policy learned via reinforcement learning. The model is capable of predicting human-like task-driven scanpaths across various tasks. It can be applied in fields such as explainable AI, visualization design evaluation, and optimization. 
While it displays limitations in terms of generalizability and accuracy, it takes modeling in a promising direction, toward understanding human behavior when interacting with charts.",0 "Recent advancements in large language models (LLMs) have led to significant successes across various applications, the most noticeable of which is a series of emerging capabilities, particularly in the areas of In-Context Learning (ICL) and Chain-of-Thought (CoT). To better understand and control model performance, many studies have begun investigating the underlying causes of these phenomena and their impact on task outcomes. However, existing explanatory frameworks predominantly focus on isolating and explaining ICL and CoT independently, leading to an incomplete understanding of their combined influence on model performance. To address this gap, we propose the Electronic Circuit Model (ECM), which provides a foundation for developing scalable, learnable policies and improving the management of AI-generated content. Specifically, ECM conceptualizes model behavior as an electronic circuit: ICL is represented as a semantic magnetic field that provides an additional voltage, following Faraday's Law, while CoT is modeled as series resistors that constrain the model's output performance, following Ohm's Law. Experimental results demonstrate that the ECM effectively predicts and explains LLM performance across a variety of prompting strategies. Furthermore, we apply ECM to advanced reasoning strategy optimization on a series of tasks, such as the International Olympiad in Informatics (IOI) and the International Mathematical Olympiad (IMO), achieving competitive performance that surpasses nearly 80% of top human competitors.",0 "Tree-based and rule-based machine learning models play pivotal roles in explainable artificial intelligence (XAI) due to their unique ability to provide explanations in the form of tree or rule sets that are easily understandable and interpretable, making them essential for applications in which trust in model decisions is necessary. These transparent models are typically used in surrogate modeling, a post-hoc XAI approach for explaining the logic of black-box models, enabling users to comprehend and trust complex predictive systems while maintaining competitive performance. This study proposes the Cost-Sensitive Rule and Tree Extraction (CORTEX) method, a novel rule-based XAI algorithm grounded in the multi-class cost-sensitive decision tree (CSDT) method. The original version of the CSDT is extended to classification problems with more than two classes by introducing the concept of an n-dimensional class-dependent cost matrix. The performance of CORTEX as a rule-extractor XAI method is compared to other post-hoc tree and rule extraction methods across several datasets with different numbers of classes. Several quantitative evaluation metrics are employed to assess the explainability of generated rule sets. Our findings demonstrate that CORTEX is competitive with other tree-based methods and can be superior to other rule-based methods across different datasets. The extracted rule sets suggest the advantages of using the CORTEX method over other methods by producing smaller rule sets with shorter rules on average across datasets with a diverse number of classes. 
Overall, the results underscore the potential of CORTEX as a powerful XAI tool for scenarios that require the generation of clear, human-understandable rules while maintaining good predictive performance.",0 "This article focuses on elucidating the concept of consciousness from a relational and post-phenomenological theory of non-human communication agents (ANHC). Specifically, we explore the contributions of Thomas Metzinger's Self Model Theory, Katherine Hayles' conceptualizations of non-conscious cognitive processes centered on knowledge processing phenomena shared between biological and technical systems, and Lenore and Manuel Blum's theoretical perspective on computation, which defines consciousness as an emergent phenomenon of complex computational systems, arising from the appropriate organization of their inorganic materiality. Building on interactions with non-human cognitive agents, among other factors, the explainability of sociotechnical systems challenges the humanistic common sense of modern philosophy and science. This critical integration of various approaches ultimately questions other concepts associated with consciousness, such as autonomy, freedom, and mutual responsibility. The aim is to contribute to a necessary discussion for designing new frameworks of understanding that pave the way toward an ethical and pragmatic approach to addressing contemporary challenges in the design, regulation, and interaction with ANHC. Such frameworks, in turn, enable a more inclusive and relational understanding of agency in an interconnected world.",0 "We introduce GeXSe (Generative Explanatory Sensor System), a novel framework designed to extract interpretable sensor-based and vision domain features from non-invasive smart space sensors. We combine these to provide a comprehensive explanation of sensor-activation patterns in activity recognition tasks. This system leverages advanced machine learning architectures, including transformer blocks, Fast Fourier Convolution (FFC), and diffusion models, to provide a more detailed understanding of sensor-based human activity data. A standout feature of GeXSe is our unique Multi-Layer Perceptron (MLP) with linear, ReLU, and normalization layers, specially devised for optimal performance on small datasets. It also yields meaningful activation maps to explain sensor-based activation patterns. The standard approach is based on a CNN model, which our MLP model outperforms. GeXSe offers two types of explanations: sensor-based activation maps and visual domain explanations using short videos. These methods offer a comprehensive interpretation of the output from non-interpretable sensor data, thereby augmenting the interpretability of our model. Utilizing the Frechet Inception Distance (FID) for evaluation, it outperforms established methods, improving baseline performance by about 6%. GeXSe also achieves a high F1 score of up to 0.85, demonstrating precision, recall, and noise resistance, marking significant progress in reliable and explainable smart space sensing systems.",0 "We introduce Conceptual Metaphor Theory (CMT) as a framework for enhancing large language models (LLMs) through cognitive prompting in complex reasoning tasks. CMT leverages metaphorical mappings to structure abstract reasoning, improving models' ability to process and explain intricate concepts. By incorporating CMT-based prompts, we guide LLMs toward more structured and human-like reasoning patterns. 
To evaluate this approach, we compare four native models (Llama3.2, Phi3, Gemma2, and Mistral) against their CMT-augmented counterparts on benchmark tasks spanning domain-specific reasoning, creative insight, and metaphor interpretation. Responses were automatically evaluated using the Llama3.3 70B model. Experimental results indicate that CMT prompting significantly enhances reasoning accuracy, clarity, and metaphorical coherence, outperforming baseline models across all evaluated tasks.",0 "It is often argued that effective human-centered explainable artificial intelligence (XAI) should resemble human reasoning. However, empirical investigations of how concepts from cognitive science can aid the design of XAI are lacking. Based on insights from cognitive science, we propose a framework of explanatory modes to analyze how people frame explanations, whether mechanistic, teleological, or counterfactual. Using the complex safety-critical domain of autonomous driving, we conduct an experiment consisting of two studies on (i) how people explain the behavior of a vehicle in 14 unique scenarios (N1=54) and (ii) how they perceive these explanations (N2=382), curating the novel Human Explanations for Autonomous Driving Decisions (HEADD) dataset. Our main finding is that participants deem teleological explanations significantly better quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality. Based on our results, we argue that explanatory modes are an important axis of analysis when designing and evaluating XAI and highlight the need for a principled and empirically grounded understanding of the cognitive mechanisms of explanation. The HEADD dataset and our code are available at: https://datashare.ed.ac.uk/handle/10283/8930.",2 "As Vision Transformers (ViTs) are increasingly adopted in sensitive vision applications, there is a growing demand for improved interpretability. This has led to efforts to forward-align these models with carefully annotated abstract, human-understandable semantic entities - concepts. Concepts provide global rationales to the model predictions and can be quickly understood/intervened on by domain experts. Most current research focuses on designing model-agnostic, plug-and-play generic concept-based explainability modules that do not incorporate the inner workings of foundation models (e.g., inductive biases, scale invariance, etc.) during training. To alleviate this issue for ViTs, in this paper, we propose ASCENT-ViT, an attention-based, concept learning framework that effectively composes scale and position-aware representations from multiscale feature pyramids and ViT patch representations, respectively. Further, these representations are aligned with concept annotations through attention matrices - which incorporate spatial and global (semantic) concepts. ASCENT-ViT can be utilized as a classification head on top of standard ViT backbones for improved predictive performance and accurate and robust concept explanations as demonstrated on five datasets, including three widely used benchmarks (CUB, Pascal APY, Concept-MNIST) and 2 real-world datasets (AWA2, KITS).",0 "Objective: Assessing Alzheimer's disease (AD) using high-dimensional radiology images is clinically important but challenging. Although Artificial Intelligence (AI) has advanced AD diagnosis, it remains unclear how to design AI models embracing predictability and explainability. 
Here, we propose VisTA, a multimodal language-vision model assisted by contrastive learning, to optimize disease prediction and evidence-based, interpretable explanations for clinical decision-making. Methods: We developed VisTA (Vision-Text Alignment Model) for AD diagnosis. Architecturally, we built VisTA from BiomedCLIP and fine-tuned it using contrastive learning to align images with verified abnormalities and their descriptions. To train VisTA, we used a constructed reference dataset containing images, abnormality types, and descriptions verified by medical experts. VisTA produces four outputs: predicted abnormality type, similarity to reference cases, evidence-driven explanation, and the final AD diagnosis. To illustrate VisTA's efficacy, we reported accuracy metrics for abnormality retrieval and dementia prediction. To demonstrate VisTA's explainability, we compared its explanations with human experts' explanations. Results: Compared to the 15 million images used for baseline pretraining, VisTA used only 170 samples for fine-tuning and obtained significant improvements in abnormality retrieval and dementia prediction. For abnormality retrieval, VisTA reached 74% accuracy and an AUC of 0.87 (26% and 0.74, respectively, from baseline models). For dementia prediction, VisTA achieved 88% accuracy and an AUC of 0.82 (30% and 0.57, respectively, from baseline models). The generated explanations agreed strongly with human experts' explanations and provided insights into the diagnostic process. Taken together, VisTA optimizes prediction, clinical reasoning, and explanation.",0 "Adding explanations to recommender systems is said to have multiple benefits, such as increasing user trust or system transparency. Previous work from other application areas suggests that specific user characteristics impact the users' perception of the explanation. However, we rarely find this type of evaluation for recommender systems explanations. This paper addresses this gap by surveying 124 papers in which recommender systems explanations were evaluated in user studies. We analyzed their participant descriptions and study results where the impact of user characteristics on the explanation effects was measured. Our findings suggest that the results from the surveyed studies predominantly cover specific users who do not necessarily represent the users of recommender systems in the evaluation domain. This may seriously hamper the generalizability of any insights we may gain from current studies on explanations in recommender systems. We further find inconsistencies in the data reporting, which impact the reproducibility of the reported results. Hence, we recommend actions to move toward a more inclusive and reproducible evaluation.",2 "Large language models (LLMs) are increasing in capability and popularity, propelling their application in new domains -- including as replacements for human participants in computational social science, user testing, annotation tasks, and more. In many settings, researchers seek to distribute their surveys to a sample of participants that is representative of the underlying human population of interest. This means that, in order to be a suitable replacement, LLMs will need to be able to capture the influence of positionality (i.e., relevance of social identities like gender and race). However, we show that there are two inherent limitations in the way current LLMs are trained that prevent this. 
We argue analytically for why LLMs are likely to both misportray and flatten the representations of demographic groups, then empirically show this on 4 LLMs through a series of human studies with 3200 participants across 16 demographic identities. We also discuss a third limitation about how identity prompts can essentialize identities. Throughout, we connect each limitation to a pernicious history of epistemic injustice against the value of lived experiences that explains why replacement is harmful for marginalized demographic groups. Overall, we urge caution in use cases where LLMs are intended to replace human participants whose identities are relevant to the task at hand. At the same time, in cases where the benefits of LLM replacement are determined to outweigh the harms (e.g., the goal is to supplement rather than fully replace, engaging human participants may cause them harm), we provide inference-time techniques that we empirically demonstrate do reduce, but do not remove, these harms.",2 "Human-AI collaboration is evolving from a tool-based perspective to a partnership model where AI systems complement and enhance human capabilities. Traditional approaches often limit AI to a supportive role, missing the potential for reciprocal relationships where both human and AI inputs contribute to shared goals. Although Human-Centered AI (HcAI) frameworks emphasize transparency, ethics, and user experience, they often lack mechanisms for genuine, dynamic collaboration. The ""Human-AI Handshake Model"" addresses this gap by introducing a bi-directional, adaptive framework with five key attributes: information exchange, mutual learning, validation, feedback, and mutual capability augmentation. These attributes foster balanced interaction, enabling AI to act as a responsive partner, evolving with users over time. Human enablers like user experience and trust, alongside AI enablers such as explainability and responsibility, facilitate this collaboration, while shared values of ethics and co-evolution ensure sustainable growth. Distinct from existing frameworks, this model is reflected in tools like GitHub Copilot and ChatGPT, which support bi-directional learning and transparency. Challenges remain, including maintaining ethical standards and ensuring effective user oversight. Future research will explore these challenges, aiming to create a truly collaborative human-AI partnership that leverages the strengths of both to achieve outcomes beyond what either could accomplish alone.",0 "Large Language Models (LLMs) leverage chain-of-thought (CoT) prompting to provide step-by-step rationales, improving performance on complex tasks. Despite its benefits, vanilla CoT often fails to fully verify intermediate inferences and can produce misleading explanations. In this work, we propose Layered Chain-of-Thought (Layered-CoT) Prompting, a novel framework that systematically segments the reasoning process into multiple layers, each subjected to external checks and optional user feedback. We expand on the key concepts, present three scenarios -- medical triage, financial risk assessment, and agile engineering -- and demonstrate how Layered-CoT surpasses vanilla CoT in terms of transparency, correctness, and user engagement. 
By integrating references from recent arXiv papers on interactive explainability, multi-agent frameworks, and agent-based collaboration, we illustrate how Layered-CoT paves the way for more reliable and grounded explanations in high-stakes domains.",0 "Intent classification is crucial for conversational agents (chatbots), and deep learning models perform well in this area. However, little research has been done on the explainability of intent classification due to the absence of suitable benchmark data. Human annotation of explanation signals in text samples is time-consuming and costly. However, from inspection of data on intent classification, we see that, more often than not, the main verb denotes the action, and the direct object indicates the domain of conversation, serving as explanation signals for intent. This observation enables us to hypothesize that the main predicate in the text utterances, along with the arguments of the main predicate, can serve as explanation signals. Leveraging this, we introduce a new technique to automatically augment text samples from intent classification datasets with word-level explanations. We mark main predicates (primarily verbs) and their arguments (dependency relations) as explanation signals in benchmark intent classification datasets ATIS and SNIPS, creating a unique 21k-instance dataset for explainability. Further, we experiment with deep learning and language models. We observe that models that work well for classification do not perform well in explainability metrics like plausibility and faithfulness. We also observe that guiding models to focus on explanation signals from our dataset during training improves the plausibility Token F1 score by 3-4%, improving the model's reasoning.",0 "This paper introduces an explanation framework designed to enhance the quality of rules in knowledge-based reasoning systems based on dataset-driven insights. The traditional method for rule induction from data typically requires labor-intensive labeling and data-driven learning. This framework provides an alternative and instead allows for the data-driven refinement of existing rules: it generates explanations of rule inferences and leverages human interpretation to refine rules. It leverages four complementary explanation types: trace-based, contextual, contrastive, and counterfactual, providing diverse perspectives for debugging, validating, and ultimately refining rules. By embedding explainability into the reasoning architecture, the framework enables knowledge engineers to address inconsistencies, optimize thresholds, and ensure fairness, transparency, and interpretability in decision-making processes. Its practicality is demonstrated through a use case in finance.",0 "This thesis explores advanced approaches to improve explainability in computer vision by analyzing and modeling the features exploited by deep neural networks. Initially, it evaluates attribution methods, notably saliency maps, by introducing a metric based on algorithmic stability and an approach utilizing Sobol indices, which, through quasi-Monte Carlo sequences, allows a significant reduction in computation time. In addition, the EVA method offers a first formulation of attribution with formal guarantees via verified perturbation analysis. Experimental results indicate that in complex scenarios these methods do not provide sufficient understanding, particularly because they identify only ""where"" the model focuses without clarifying ""what"" it perceives. 
Two hypotheses are therefore examined: aligning models with human reasoning -- through the introduction of a training routine that integrates the imitation of human explanations and optimization within the space of 1-Lipschitz functions -- and adopting a conceptual explainability approach. The CRAFT method is proposed to automate the extraction of the concepts used by the model and to assess their importance, complemented by MACO, which enables their visualization. These works converge towards a unified framework, illustrated by an interactive demonstration applied to the 1000 ImageNet classes in a ResNet model.",0 "Blood oxygen saturation (SpO$_2$) is an essential indicator of respiratory functionality and is receiving increasing attention during the COVID-19 pandemic. Clinical findings show that it is possible for COVID-19 patients to have significantly low SpO$_2$ before any obvious symptoms. The prevalence of cameras has motivated researchers to investigate methods for monitoring SpO$_2$ using videos. Most prior schemes involving smartphones are contact-based: They require a fingertip to cover the phone's camera and the nearby light source to capture re-emitted light from the illuminated tissue. In this paper, we propose the first convolutional neural network based noncontact SpO$_2$ estimation scheme using smartphone cameras. The scheme analyzes the videos of a participant's hand for physiological sensing, which is convenient and comfortable, and can protect their privacy and allow for keeping face masks on. We design our neural network architectures inspired by the optophysiological models for SpO$_2$ measurement and demonstrate the explainability by visualizing the weights for channel combination. Our proposed models outperform the state-of-the-art model that is designed for contact-based SpO$_2$ measurement, showing the potential of our proposed method to contribute to public health. We also analyze the impact of skin type and the side of a hand on SpO$_2$ estimation performance.",2 "How do we measure the efficacy of language model explainability methods? While many explainability methods have been developed, they are typically evaluated on bespoke tasks, preventing an apples-to-apples comparison. To help fill this gap, we present ALMANACS, a language model explainability benchmark. ALMANACS scores explainability methods on simulatability, i.e., how well the explanations improve behavior prediction on new inputs. The ALMANACS scenarios span twelve safety-relevant topics such as ethical reasoning and advanced AI behaviors; they have idiosyncratic premises to invoke model-specific behavior; and they have a train-test distributional shift to encourage faithful explanations. By using another language model to predict behavior based on the explanations, ALMANACS is a fully automated benchmark. While not a replacement for human evaluations, we aim for ALMANACS to be a complementary, automated tool that allows for fast, scalable evaluation. Using ALMANACS, we evaluate counterfactual, rationalization, attention, and Integrated Gradients explanations. Our results are sobering: when averaged across all topics, no explanation method outperforms the explanation-free control. 
We conclude that despite modest successes in prior work, developing an explanation method that aids simulatability in ALMANACS remains an open challenge.",0 "Artificial Intelligence Generated Content (AIGC) has grown rapidly in recent years, among which AI-based image generation has gained widespread attention due to its efficient and imaginative image creation ability. However, AI-generated Images (AIGIs) may not satisfy human preferences due to their unique distortions, which highlights the necessity of understanding and evaluating human preferences for AIGIs. To this end, in this paper, we first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+, which provides human visual preference scores and detailed preference explanations from three perspectives including quality, authenticity, and correspondence. Then, based on the constructed AIGCIQA2023+ database, this paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning. Specifically, the MINT-IQA model first learns and evaluates human preferences for AI-generated Images from multiple perspectives; then, via the vision-language instruction tuning strategy, MINT-IQA attains powerful understanding and explanation abilities for human visual preferences on AIGIs, which can be used as feedback to further improve the assessment capabilities. Extensive experimental results demonstrate that the proposed MINT-IQA model achieves state-of-the-art performance in understanding and evaluating human visual preferences for AIGIs, and the proposed model also achieves competitive results on traditional IQA tasks compared with state-of-the-art IQA models. The AIGCIQA2023+ database and MINT-IQA model are available at: https://github.com/IntMeGroup/MINT-IQA.",2 "The field of explainability in artificial intelligence (AI) has witnessed a growing number of studies and increasing scholarly interest. However, the lack of human-friendly and individual interpretations in explaining the outcomes of machine learning algorithms has significantly hindered the acceptance of these methods by clinicians in their research and clinical practice. To address this issue, our study uses counterfactual explanations to explore the applicability of ""what if?"" scenarios in medical research. Our aim is to expand our understanding of magnetic resonance imaging (MRI) features used for diagnosing pediatric posterior fossa brain tumors beyond existing boundaries. In our case study, the proposed concept provides a novel way to examine alternative decision-making scenarios that offer personalized and context-specific insights, enabling the validation of predictions and clarification of variations under diverse circumstances. Additionally, we explore the potential use of counterfactuals for data augmentation and evaluate their feasibility as an alternative approach in our medical research case. The results demonstrate the promising potential of using counterfactual explanations to improve AI-driven methods in clinical research.",0 "LLMs have demonstrated impressive performance in answering medical questions, such as achieving passing scores on medical licensing examinations. However, medical board exams or general clinical questions do not capture the complexity of realistic clinical cases. Moreover, the lack of reference explanations means we cannot easily evaluate the reasoning of model decisions, a crucial component of supporting doctors in making complex medical decisions. 
To address these challenges, we construct two new datasets: JAMA Clinical Challenge and Medbullets.\footnote{Datasets and code are available at \url{https://github.com/HanjieChen/ChallengeClinicalQA}.} JAMA Clinical Challenge consists of questions based on challenging clinical cases, while Medbullets comprises simulated clinical questions. Both datasets are structured as multiple-choice question-answering tasks, accompanied by expert-written explanations. We evaluate seven LLMs on the two datasets using various prompts. Experiments demonstrate that our datasets are harder than previous benchmarks. In-depth automatic and human evaluations of model-generated explanations provide insights into the promise and deficiency of LLMs for explainable medical QA.",0 "Emotion recognition and generation have emerged as crucial topics in Artificial Intelligence research, playing a significant role in enhancing human-computer interaction within healthcare, customer service, and other fields. Although several reviews have been conducted on emotion recognition and generation as separate entities, many of these works are either fragmented or limited to specific methodologies, lacking a comprehensive overview of recent developments and trends across different modalities. In this survey, we provide a holistic review aimed at researchers beginning their exploration in emotion recognition and generation. We introduce the fundamental principles underlying emotion recognition and generation across facial, vocal, and textual modalities. This work categorises recent state-of-the-art research into distinct technical approaches and explains the theoretical foundations and motivations behind these methodologies, offering a clearer understanding of their application. Moreover, we discuss evaluation metrics, comparative analyses, and current limitations, shedding light on the challenges faced by researchers in the field. Finally, we propose future research directions to address these challenges and encourage further exploration into developing robust, effective, and ethically responsible emotion recognition and generation systems.",2 "In 2024, the outbreak of Human Metapneumovirus (HMPV) in China, which later spread to the UK and other countries, raised significant public concern. While HMPV typically causes mild symptoms, its effects on vulnerable individuals prompted health authorities to emphasize preventive measures. This paper explores how sentiment analysis can enhance our understanding of public reactions to HMPV by analyzing social media data. We apply transformer models, particularly XLNet, achieving 93.50% accuracy in sentiment classification. Additionally, we use explainable AI (XAI) through SHAP to improve model transparency.",0 "The locate-then-edit paradigm has shown significant promise for knowledge editing (KE) in Large Language Models (LLMs). While previous methods perform well on single-hop fact recall tasks, they consistently struggle with multi-hop factual recall tasks involving newly edited knowledge. In this paper, leveraging tools in mechanistic interpretability, we first identify that in multi-hop tasks, LLMs tend to retrieve knowledge with implicit subject information from deeper MLP layers, unlike single-hop tasks, which rely on shallow layers. This distinction explains the poor performance of current methods in multi-hop queries, as they primarily focus on editing shallow layers with single-hop edit prompts, leaving deeper layers unchanged. 
To address this, we propose IFMET, a novel locate-then-edit KE approach designed to edit both shallow and deep MLP layers. Beyond single-hop editing prompts, IFMET further incorporates multi-hop editing prompts to locate and modify knowledge across different stages of reasoning. Experimental results demonstrate that IFMET significantly improves performance on multi-hop factual recall tasks, overcoming the limitations of previous locate-then-edit methods",0 "Providing human-understandable insights into the inner workings of neural networks is an important step toward achieving more explainable and trustworthy AI. Existing approaches to such mechanistic interpretability typically require substantial prior knowledge and manual effort, with strategies tailored to specific tasks. In this work, we take a step toward automating the understanding of the network by investigating the existence of distinct sub-networks. Specifically, we explore a novel automated and task-agnostic approach based on the notion of functionally similar representations within neural networks to identify similar and dissimilar layers, revealing potential sub-networks. We achieve this by proposing, for the first time to our knowledge, the use of Gromov-Wasserstein distance, which overcomes challenges posed by varying distributions and dimensionalities across intermediate representations, issues that complicate direct layer to layer comparisons. On algebraic, language, and vision tasks, we observe the emergence of sub-groups within neural network layers corresponding to functional abstractions. Through downstream applications of model compression and fine-tuning, we show the proposed approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.",0 "Graph neural networks (GNN) have emerged as a popular tool for modelling functional magnetic resonance imaging (fMRI) datasets. Many recent studies have reported significant improvements in disorder classification performance via more sophisticated GNN designs and highlighted salient features that could be potential biomarkers of the disorder. However, existing methods of evaluating their robustness are often limited to cross-referencing with existing literature, which is a subjective and inconsistent process. In this review, we provide an overview of how GNN and model explainability techniques (specifically, feature attributors) have been applied to fMRI datasets for disorder prediction tasks, with an emphasis on evaluating the robustness of potential biomarkers produced for psychiatric disorders. Then, 65 studies using GNNs that reported potential fMRI biomarkers for psychiatric disorders (attention-deficit hyperactivity disorder, autism spectrum disorder, major depressive disorder, schizophrenia) published before 9 October 2024 were identified from 2 online databases (Scopus, PubMed). We found that while most studies have performant models, salient features highlighted in these studies (as determined by feature attribution scores) vary greatly across studies on the same disorder. Reproducibility of biomarkers is only limited to a small subset at the level of regions and few transdiagnostic biomarkers were identified. To address these issues, we suggest establishing new standards that are based on objective evaluation metrics to determine the robustness of these potential biomarkers. 
We further highlight gaps in the existing literature and put together a prediction-attribution-evaluation framework that could set the foundations for future research on discovering robust biomarkers of psychiatric disorders via GNNs.",0 "The Industry 5.0 transition highlights EU efforts to design intelligent devices that can work alongside humans to enhance human capabilities, and such a vision aligns with user preferences, in which the need to feel safe while collaborating with such systems takes priority. This demands a human-centric research vision and requires a societal and educational shift in how we perceive technological advancements. To better understand this perspective, we conducted a systematic literature review focusing on understanding how trust and trustworthiness can be key aspects of supporting this move towards Industry 5.0. This review aims to overview the most common methodologies and measurements and collect insights about barriers and facilitators for fostering trustworthy HRI. After a rigorous quality assessment following the Systematic Reviews and Meta-Analyses guidelines, using rigorous inclusion criteria and screening by at least two reviewers, 34 articles were included in the review. The findings underscore the significance of trust and safety as foundational elements for promoting secure and trustworthy human-machine cooperation. They confirm that almost 30% of the reviewed articles do not present a definition of trust, which can be problematic as this lack of conceptual clarity can undermine research efforts in addressing this problem from a central perspective. The review highlights that the choice of domain and area of application should influence the choice of methods and approaches to fostering trust in HRI, as those choices can significantly affect user preferences and their perceptions and assessment of robot capabilities. Additionally, this lack of conceptual clarity can be a potential barrier to fostering trust in HRI and explains the sometimes contradictory findings or choice of methods and instruments used to investigate trust in robots and other autonomous systems in the literature.",0 "The functionality- or proxy-based approach is one of the approaches used to evaluate the quality of explainable artificial intelligence methods. It uses statistical methods, definitions and newly developed metrics for the evaluation without human intervention. Among them, Selectivity or RemOve And Retrain (ROAR) and Permutation Importance (PI) are the most commonly used metrics to evaluate the quality of explainable artificial intelligence methods to highlight the most significant features in machine learning models. They state that the model performance should experience a sharp reduction if the most informative feature is removed from the model or permuted. However, the efficiency of both metrics is significantly affected by multicollinearity, the number of significant features in the model, and the accuracy of the model. This paper shows with empirical examples that both metrics suffer from the aforementioned limitations. Accordingly, we propose the expected accuracy interval (EAI), a metric to predict the upper and lower bounds of the accuracy of the model when ROAR or PI is implemented. 
The proposed metric is found to be very useful, especially with collinear features.",0 "Concept-based explanation methods, such as concept bottleneck models (CBMs), aim to improve the interpretability of machine learning models by linking their decisions to human-understandable concepts, under the critical assumption that such concepts can be accurately attributed to the network's feature space. However, this foundational assumption has not been rigorously validated, mainly because the field lacks standardised metrics and benchmarks to assess the existence and spatial alignment of such concepts. To address this, we propose three metrics: the concept global importance metric, the concept existence metric, and the concept location metric, including a technique for visualising concept activations, i.e., concept activation mapping. We benchmark post-hoc CBMs to illustrate their capabilities and challenges. Through qualitative and quantitative experiments, we demonstrate that, in many cases, even the most important concepts determined by post-hoc CBMs are not present in input images; moreover, when they are present, their saliency maps fail to align with the expected regions by either activating across an entire object or misidentifying relevant concept-specific regions. We analyse the root causes of these limitations, such as the natural correlation of concepts. Our findings underscore the need for more careful application of concept-based explanation techniques, especially in settings where spatial interpretability is critical.",0 "Explanation is a fundamentally human process. Understanding the goal and audience of the explanation is vital, yet existing work on explainable reinforcement learning (XRL) routinely does not consult humans in their evaluations. Even when they do, they often resort to subjective metrics, such as confidence or understanding, that can only inform researchers of users' opinions, not their practical effectiveness for a given problem. This paper calls on researchers to use objective human metrics for explanation evaluations based on observable and actionable behaviour to build more reproducible, comparable, and epistemically grounded research. To this end, we curate, describe, and compare several objective evaluation methodologies for applying explanations to debugging agent behaviour and supporting human-agent teaming, illustrating our proposed methods using a novel grid-based environment. We discuss how subjective and objective metrics complement each other to provide holistic validation and how future work needs to utilise standardised benchmarks for testing to enable greater comparisons between studies.",0 "In this work, we build a model to combine the mass generated from the Higgs mechanism and that from the dynamical chiral symmetry breaking mechanism. This is motivated by the fermion mass hierarchy, in which the neutrino mass is smaller than the charged lepton mass, which in turn is smaller than the quark mass. Since they participate in different interactions, it is natural to conjecture that interactions contribute to the fermion mass. This conjecture could be explained via the dynamical chiral symmetry breaking mechanism by assuming the existence of a non-perturbative regime. 
In addition, this model predicts a different ratio of the fermion Yukawa coupling to the Higgs self-coupling, which could be verified in the near future.",0 "The recent development of generative large language models (LLMs) poses new challenges for model evaluation that the research community and industry have been grappling with. While the versatile capabilities of these models ignite much excitement, they also inevitably make a leap toward homogenization: powering a wide range of applications with a single, often referred to as ``general-purpose'', model. In this position paper, we argue that model evaluation practices must take on a critical task to cope with the challenges and responsibilities brought by this homogenization: providing valid assessments for whether and how much human needs in diverse downstream use cases can be satisfied by the given model (\textit{socio-technical gap}). By drawing on lessons about improving research realism from the social sciences, human-computer interaction (HCI), and the interdisciplinary field of explainable AI (XAI), we urge the community to develop evaluation methods based on real-world contexts and human requirements, and embrace diverse evaluation methods with an acknowledgment of trade-offs between realism and the pragmatic costs of conducting the evaluation. By mapping HCI and current NLG evaluation methods, we identify opportunities for evaluation methods for LLMs to narrow the socio-technical gap and pose open questions.",0 "Addressee estimation (understanding to whom somebody is talking) is a fundamental task for human activity recognition in multi-party conversation scenarios. Specifically, in the field of human-robot interaction, it becomes even more crucial to enable social robots to participate in such interactive contexts. However, it is usually implemented as a binary classification task, restricting the robot's capability to estimate whether it was addressed or not, which limits its interactive skills. For a social robot to gain the trust of humans, it is also important to manifest a certain level of transparency and explainability. Explainable artificial intelligence thus plays a significant role in current machine learning applications and models, providing explanations for their decisions in addition to excellent performance. In our work, we a) present an addressee estimation model with improved performance in comparison with the previous state-of-the-art",2 "This paper investigates the reliability of explanations generated by large language models (LLMs) when prompted to explain their previous output. We evaluate two kinds of such self-explanations - extractive and counterfactual - using three state-of-the-art LLMs (2B to 8B parameters) on two different classification tasks (objective and subjective). Our findings reveal that, while these self-explanations can correlate with human judgement, they do not fully and accurately follow the model's decision process, indicating a gap between perceived and actual model reasoning. We show that this gap can be bridged because prompting LLMs for counterfactual explanations can produce faithful, informative, and easy-to-verify results. These counterfactuals offer a promising alternative to traditional explainability methods (e.g. SHAP, LIME), provided that prompts are tailored to specific tasks and checked for validity.",0 "In cardiac cells, structural organization is an important indicator of cell maturity and healthy function. 
Healthy cardiomyocytes exhibit well-aligned morphology with densely packed and organized sarcomeres. Immature or diseased cardiomyocytes typically lack this organized structure. Critically, human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) offer a valuable model for studying human cardiac cells in a controlled environment. However, these cells often exhibit a disorganized structure. In this work, we extend the SarcGraph computational framework -- designed to assess the structural and functional behavior of hiPSC-CMs -- to better accommodate the structural features of immature cells. There are two key enhancements: (1) incorporating a deep learning-based z-disc classifier, and (2) introducing a novel ensemble graph-scoring approach. These modifications significantly reduced false positive sarcomere detections in immature cells, and resulted in the detection of longer myofibrils in mature samples. With this enhanced framework, we analyze an open-source dataset published by the Allen Institute for Cell Science, where, for the first time, we are able to extract key structural features from these data using information from each individually detected sarcomere. Not only are we able to use these structural features to predict expert scores, but we are also able to use these structural features to identify bias in expert scoring and offer an alternative unsupervised learning approach based on explainable clustering. These results demonstrate the efficacy of our modified SarcGraph in extracting biologically meaningful features, enabling a deeper understanding of hiPSC-CM structural integrity. By making our code and tools open-source, we aim to empower the broader cardiac research community and foster further development of computational tools for cardiac tissue analysis.",0 "Building trust between humans and robots has long interested the robotics community. Various studies have aimed to clarify the factors that influence the development of user trust. In Human-Robot Interaction (HRI) environments, a critical aspect of trust development is the robot's ability to make its behavior understandable. The concept of an eXplainable Autonomous Robot (XAR) addresses this requirement. However, giving a robot self-explanatory abilities is a complex task. Robot behavior includes multiple skills and diverse subsystems. This complexity led to research into a wide range of methods for generating explanations about robot behavior. This paper presents a systematic literature review that analyzes existing strategies for generating explanations in robots and studies the current XAR trends. Results indicate promising advancements in explainability systems. However, these systems are still unable to fully cover the complex behavior of autonomous robots. Furthermore, we identify a lack of consensus on the theoretical concept of explainability, as well as the need for a robust methodology to assess explainability methods and tools.",0 "Heroes are people who perform costly altruistic acts. Few people turn out to be heroes, but many spontaneously honor heroes by commenting, applauding, or enthusiastically celebrating their deeds. The existence of a praising audience leads individuals to compete to attract the crowd's admiration. The outcome is a winner-take-all situation in which only one or a few individuals engage in extreme altruistic behavior. The more difficult part is to explain the crowd's propensity to pay tribute from an individual fitness optimization perspective. 
The model proposed here shows how heroic behavior and its celebration by a large audience may emerge together. This situation is possible if admirers use public praise as a social signal to promote their own commitment to the values displayed by the hero.",0 "Context. The scale height (SH) of the spatial distribution of open clusters (OCs) in the Milky Way exhibits a well-known increase with age, which is usually interpreted as evidence for dynamical heating of the disc or of the disc having been thicker in the past. Aims. We address the increase of the scale height with age of the OC population from a different angle. We propose that the apparent thickening of the disc can be largely explained as a consequence of a stronger disruption of OCs near the Galactic plane by disc phenomena, namely encounters with giant molecular clouds (GMCs). Methods. We present a computational model that forms OCs with different initial masses and follows their orbits while subjecting them to different disruption mechanisms. To set up the model and infer its parameters, we use and analyse a Gaia-based OC catalogue (Dias et al. 2021). We investigate both the spatial and age distributions of the OC population and discuss the completeness of the sample. The simulation results are then compared to the observations. Results. Consistent with previous studies, the observations reveal that the SH of the spatial distribution of OCs increases with age. We find that it is very likely that the OC sample is incomplete even for the solar neighbourhood. The model simulations successfully reproduce the SH increase with age and the total number of OCs that survive with age up to 1 Gyr. For older OCs, the predicted SH from the model starts deviating from the observations, although remaining within the uncertainties of the observations. This can be related to effects of incompleteness and/or simplifications in the model. Conclusions. A selective disruption of OCs near the Galactic plane through GMC encounters is able to explain the SH evolution of the OC population.",0 "Distributed Lag Models (DLMs) and similar regression approaches such as MIDAS have been used for many decades in econometrics and more recently to investigate how poor air quality adversely affects human health. In this paper we describe how to expand the utility of these models for Bayesian inference by leveraging latent variables. In particular, we explain how to perform binary regression to better handle imbalanced data, how to incorporate negative binomial regression, and how to estimate the probability of predictor inclusion. Extra parameters introduced through the DLM framework may require calibration for the MCMC algorithm, but this will not be the case in DLM-based analyses often seen in the pollution exposure literature. In these cases, the parameters are inferred through a fully automatic Gibbs sampling procedure.",0 "Fuzzy logic provides a robust framework for enhancing explainability, particularly in domains requiring the interpretation of complex and ambiguous signals, such as brain-computer interface (BCI) systems. Despite significant advances in deep learning, interpreting human emotions remains a formidable challenge. In this work, we present iFuzzyAffectDuo, a novel computational model that integrates a dual-filter fuzzy neural network architecture for improved detection and interpretation of emotional states from neuroimaging data. 
The model introduces a new membership function (MF) based on the Laplace distribution, achieving superior accuracy and interpretability compared to traditional approaches. By refining the extraction of neural signals associated with specific emotions, iFuzzyAffectDuo offers a human-understandable framework that unravels the underlying decision-making processes. We validate our approach across three neuroimaging datasets using functional Near-Infrared Spectroscopy (fNIRS) and Electroencephalography (EEG), demonstrating its potential to advance affective computing. These findings open new pathways for understanding the neural basis of emotions and their application in enhancing human-computer interaction.",0 "Vowels are primarily characterized by tongue position. Humans have discovered these features of vowel articulation through their own experience and explicit objective observation such as using MRI. With this knowledge and our experience, we can explain and understand the relationship between tongue positions and vowels, and this knowledge is helpful for language learners to learn pronunciation. Since language models (LMs) are trained on a large amount of data that includes linguistic and medical fields, our preliminary studies indicate that an LM is able to explain the pronunciation mechanisms of vowels. However, it is unclear whether multi-modal LMs, such as vision LMs, align textual information with visual information. One question arises: do LMs associate real tongue positions with vowel articulation? In this study, we created video and image datasets from the existing real-time MRI dataset and investigated whether LMs can understand vowel articulation based on tongue positions using vision-based information. Our findings suggest that LMs exhibit potential for understanding vowels and tongue positions when reference examples are provided while they have difficulties without them. Our code for dataset building is available on GitHub.",0 "Large Language Models (LLMs) are trained on a vast amount of text to interpret and generate human-like textual content. They are becoming a vital vehicle in realizing the vision of the autonomous enterprise, with organizations today actively adopting LLMs to automate many aspects of their operations. LLMs are likely to play a prominent role in future AI-augmented business process management systems, catering functionalities across all system lifecycle stages. One such system's functionality is Situation-Aware eXplainability (SAX), which relates to generating causally sound and human-interpretable explanations. In this paper, we present the SAX4BPM framework developed to generate SAX explanations. The SAX4BPM suite consists of a set of services and a central knowledge repository. The functionality of these services is to elicit the various knowledge ingredients that underlie SAX explanations. A key innovative component among these ingredients is the causal process execution view. In this work, we integrate the framework with an LLM to leverage its power to synthesize the various input ingredients for the sake of improved SAX explanations. Since the use of LLMs for SAX is also accompanied by a certain degree of doubt related to its capacity to adequately fulfill SAX along with its tendency for hallucination and lack of inherent capacity to reason, we pursued a methodological evaluation of the perceived quality of the generated explanations. We developed a designated scale and conducted a rigorous user study. 
Our findings show that the input presented to the LLM helped guard-rail its performance, yielding SAX explanations with better perceived fidelity. This improvement is moderated by the perception of trust and curiosity. Moreover, this improvement comes at the cost of the perceived interpretability of the explanation.",2 "Explainable artificial intelligence (XAI) methods are being proposed to help interpret and understand how AI systems reach specific predictions. Inspired by prior work on conversational user interfaces, we argue that augmenting existing XAI methods with conversational user interfaces can increase user engagement and boost user understanding of the AI system. In this paper, we explored the impact of a conversational XAI interface on users' understanding of the AI system, their trust, and reliance on the AI system. In comparison to an XAI dashboard, we found that the conversational XAI interface can bring about a better understanding of the AI system among users and higher user trust. However, users of both the XAI dashboard and conversational XAI interfaces showed clear overreliance on the AI system. Enhanced conversations powered by large language model (LLM) agents amplified this overreliance. Based on our findings, we reason that the potential cause of such overreliance is the illusion of explanatory depth that is concomitant with both XAI interfaces. Our findings have important implications for designing effective conversational XAI interfaces to facilitate appropriate reliance and improve human-AI collaboration. Code can be found at https://github.com/delftcrowd/IUI2025_ConvXAI",0 "Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.",2 "Software bugs claim approximately 50% of development time and cost the global economy billions of dollars. Once a bug is reported, the assigned developer attempts to identify and understand the source code responsible for the bug and then corrects the code. Over the last five decades, there has been significant research on automatically finding or correcting software bugs. However, there has been little research on automatically explaining the bugs to the developers, which is essential but a highly challenging task. In this paper, we propose Bugsplainer, a transformer-based generative model that generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits. Bugsplainer can leverage structural information and buggy patterns from the source code to generate an explanation for a bug. 
Our evaluation using three performance metrics shows that Bugsplainer can generate explanations that are understandable and good according to Google's standard, and can outperform multiple baselines from the literature. We also conduct a developer study involving 20 participants, in which the explanations from Bugsplainer were found to be more accurate, more precise, more concise, and more useful than the baselines.",2 "We introduce VIBA, a novel approach for explainable video classification by adapting Information Bottlenecks for Attribution (IBA) to video sequences. While most traditional explainability methods are designed for image models, our IBA framework addresses the need for explainability in temporal models used for video analysis. To demonstrate its effectiveness, we apply VIBA to video deepfake detection, testing it on two architectures: the Xception model for spatial features and a VGG11-based model for capturing motion dynamics through optical flow. Using a custom dataset that reflects recent deepfake generation techniques, we adapt IBA to create relevance and optical flow maps, visually highlighting manipulated regions and motion inconsistencies. Our results show that VIBA generates temporally and spatially consistent explanations, which align closely with human annotations, thus providing interpretability for video classification and particularly for deepfake detection.",0 "Artificial Intelligence (AI) has demonstrated potential in healthcare, particularly in enhancing diagnostic accuracy and decision-making through Clinical Decision Support Systems (CDSSs). However, the successful implementation of these systems relies on user trust and reliance, which can be influenced by explainable AI. This study explores the impact of varying explainability levels on clinicians' trust, cognitive load, and diagnostic performance in breast cancer detection. Utilizing an interrupted time series design, we conducted a web-based experiment involving 28 healthcare professionals. The results revealed that high confidence scores substantially increased trust but also led to overreliance, reducing diagnostic accuracy. In contrast, low confidence scores decreased trust and agreement while increasing diagnosis duration, reflecting more cautious behavior. Some explainability features influenced cognitive load by increasing stress levels. Additionally, demographic factors such as age, gender, and professional role shaped participants' perceptions and interactions with the system. This study provides valuable insights into how explainability impacts clinicians' behavior and decision-making. The findings highlight the importance of designing AI-driven CDSSs that balance transparency, usability, and cognitive demands to foster trust and improve integration into clinical workflows.",2 "Many human knowledge systems, such as science, law, and invention, are built on documents and the citations that link them. Citations, while serving multiple purposes, primarily function as a way to explicitly document the use of prior work and thus have become central to the study of knowledge systems. Analyzing citation dynamics has revealed statistical patterns that shed light on knowledge production, recognition, and formalization, and has helped identify key mechanisms driving these patterns. However, most quantitative findings are confined to scientific citations, raising the question of the universality of these findings. 
Moreover, existing models of individual citation trajectories fail to explain phenomena such as delayed recognition, calling for a unifying framework. Here, we analyze a newly available corpus of U.S. case law, in addition to scientific and patent citation networks, to show that they share remarkably similar citation patterns, including a heavy-tailed distribution of sleeping beauties. We propose a holistic model that captures the three core mechanisms driving collective dynamics and replicates the elusive phenomenon of delayed recognition. We demonstrate that the model not only replicates observed citation patterns, but also better predicts future successes by considering the whole system. Our work offers insights into key mechanisms that govern large-scale patterns of collective human knowledge systems and may provide generalizable perspectives on discovery and innovation across domains.",0 "In recent years, agents have become capable of communicating seamlessly via natural language and navigating in environments that involve cooperation and competition, a fact that can introduce social dilemmas. Due to the interleaving of cooperation and competition, understanding agents' decision-making in such environments is challenging, and humans can benefit from obtaining explanations. However, such environments and scenarios have rarely been explored in the context of explainable AI. While some explanation methods for cooperative environments can be applied in mixed-motive setups, they do not address inter-agent competition, cheap-talk, or implicit communication by actions. In this work, we design explanation methods to address these issues. Then, we proceed to establish generality and demonstrate the applicability of the methods to three games with vastly different properties. Lastly, we demonstrate the effectiveness and usefulness of the methods for humans in two mixed-motive games. The first is a challenging 7-player game called no-press Diplomacy. The second is a 3-player game inspired by the prisoner's dilemma, featuring communication in natural language.",0 "Recently, multimodal depression recognition for clinical interviews (MDRC) has attracted considerable attention. Existing MDRC studies mainly focus on improving task performance and have achieved significant progress. However, for clinical applications, model transparency is critical, and previous works ignore the interpretability of decision-making processes. To address this issue, we propose an Explainable Multimodal Depression Recognition for Clinical Interviews (EMDRC) task, which aims to provide evidence for depression recognition by summarizing symptoms and uncovering underlying causes. Given an interviewer-participant interaction scenario, the goal of EMDRC is to produce a structured summary of the participant's symptoms based on the eight-item Patient Health Questionnaire depression scale (PHQ-8) and to predict their depression severity. To tackle the EMDRC task, we construct a new dataset based on an existing MDRC dataset. Moreover, we utilize the PHQ-8 and propose a PHQ-aware multimodal multi-task learning framework, which captures the utterance-level symptom-related semantic information to help generate the dialogue-level summary. Experimental results on our annotated dataset demonstrate the superiority of our proposed methods over baseline systems on the EMDRC task.",2 "Explainable Artificial Intelligence (XAI) has emerged as a critical area of research to unravel the opaque inner logic of (deep) machine learning models. 
Among the various XAI techniques proposed in the literature, counterfactual explanations stand out as one of the most promising approaches. However, these ""what-if"" explanations are frequently complex and technical, making them difficult for non-experts to understand and, more broadly, challenging for humans to interpret. To bridge this gap, in this work, we exploit the power of open-source Large Language Models to generate natural language explanations when prompted with valid counterfactual instances produced by state-of-the-art explainers for graph-based models. Experiments across several graph datasets and counterfactual explainers show that our approach effectively produces accurate natural language representations of counterfactual instances, as demonstrated by key performance metrics.",0 "Mechanistic interpretability is the program of explaining what AI systems are doing in terms of their internal mechanisms. I analyze some aspects of the program, along with setting out some concrete challenges and assessing progress to date. I argue for the importance of propositional interpretability, which involves interpreting a system's mechanisms and behavior in terms of propositional attitudes: attitudes (such as belief, desire, or subjective probability) to propositions (e.g. the proposition that it is hot outside). Propositional attitudes are the central way that we interpret and explain human beings and they are likely to be central in AI too. A central challenge is what I call thought logging: creating systems that log all of the relevant propositional attitudes in an AI system over time. I examine currently popular methods of interpretability (such as probing, sparse auto-encoders, and chain of thought methods) as well as philosophical methods of interpretation (including those grounded in psychosemantics) to assess their strengths and weaknesses as methods of propositional interpretability.",0 "Music and language are structurally similar. Such structural similarity is often explained by generative processes. This paper describes the recent development of probabilistic generative models (PGMs) for language learning and symbol emergence in robotics. Symbol emergence in robotics aims to develop a robot that can adapt to real-world environments and human linguistic communications and acquire language from sensorimotor information alone (i.e., in an unsupervised manner). This is regarded as a constructive approach to symbol emergence systems. To this end, a series of PGMs have been developed, including those for simultaneous phoneme and word discovery, lexical acquisition, object and spatial concept formation, and the emergence of a symbol system. By extending these models, it is revealed that a symbol emergence system, a multi-agent system in which a symbol system emerges, can itself be modeled using PGMs. In this model, symbol emergence can be regarded as collective predictive coding. This paper expands on this idea by combining the theory that ''emotion is based on the predictive coding of interoceptive signals'' with the notion of ''symbol emergence systems,'' and describes a possible hypothesis for the emergence of meaning in music.",0 "The black-box nature of large language models (LLMs) necessitates the development of eXplainable AI (XAI) techniques for transparency and trustworthiness. However, evaluating these techniques remains a challenge. This study presents a general evaluation framework using four key metrics: Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. 
We assess the effectiveness of six explainability techniques from five different XAI categories: model simplification (LIME), perturbation-based methods (SHAP), gradient-based approaches (InputXGradient, Grad-CAM), Layer-wise Relevance Propagation (LRP), and attention mechanism-based explainability methods (Attention Mechanism Visualization, AMV) across five encoder-based language models: TinyBERT, BERTbase, BERTlarge, XLM-R large, and DeBERTa-xlarge, using the IMDB Movie Reviews and Tweet Sentiment Extraction (TSE) datasets. Our findings show that the model simplification-based XAI method (LIME) consistently outperforms the others across multiple metrics and models, significantly excelling in HA (with a score of 0.9685 on DeBERTa-xlarge), robustness, and consistency as the complexity of large language models increases. AMV demonstrates the best Robustness, with scores as low as 0.0020. It also excels in Consistency, achieving near-perfect scores of 0.9999 across all models. Regarding Contrastivity, LRP performs the best, particularly on more complex models, with scores up to 0.9371.",0 "Stable and efficient food markets are crucial for global food security, yet international staple food markets are increasingly exposed to complex risks, including intensified risk contagion and escalating external uncertainties. This paper systematically investigates risk spillovers in global staple food markets and explores the key determinants of these spillover effects, combining innovative decomposition-reconstruction techniques, risk connectedness analysis, and random forest models. The findings reveal that short-term components exhibit the highest volatility, with futures components generally more volatile than spot components. Further analysis identifies two main risk transmission patterns, namely cross-grain and cross-timescale transmission, and clarifies the distinct roles of each component in various net risk spillover networks. Additionally, price drivers, external uncertainties, and core supply-demand indicators significantly influence these spillover effects, with heterogeneous importance of varying factors in explaining different risk spillovers. This study provides valuable insights into the risk dynamics of staple food markets, offers evidence-based guidance for policymakers and market participants to enhance risk warning and mitigation efforts, and supports the stabilization of international food markets and the safeguarding of global food security.",2 "Various models have been proposed for how knowledge is generated in the human brain, including the semantic networks model. Although this model has been widely studied and computational models have even been presented, its application is limited to semantic knowledge because of various limits and inefficiencies in the generation of different types of knowledge: it has been formed according to semantic memory and declarative knowledge, and has many limits in explaining various procedural and conditional knowledge. Given the importance of providing an appropriate model for knowledge generation, especially in the areas of improving human cognitive functions or building intelligent machines, improving existing models of knowledge generation or providing more comprehensive models is of great importance. In the current study, based on the free energy principle of the brain, the researchers propose a model for generating three types of knowledge: declarative, procedural, and conditional. 
While explaining different types of knowledge, this model is capable of computing and generating concepts from stimuli based on probabilistic mathematics and the action-perception process (active inference). The proposed model is an unsupervised learning model that can update itself using a combination of different stimuli and, as a generative model, can generate new concepts from stimuli received without supervision. In this model, the active inference process is used in the generation of procedural and conditional knowledge and the perception process is used to generate declarative knowledge.",0 "One of the most widely used methods to evaluate LLMs is Multiple Choice Question (MCQ) tests. MCQ benchmarks enable the testing of LLM knowledge on almost any topic at scale, as the results can be processed automatically. To help the LLM answer, a few examples, called few shots, can be included in the prompt. Moreover, the LLM can be asked to answer the question directly with the selected option or to first provide the reasoning and then the selected answer, which is known as chain of thought. In addition to checking whether the selected answer is correct, the evaluation can look at the LLM-estimated probability of its response as an indication of the confidence of the LLM in the response. In this paper, we study how the LLM confidence in its answer depends on whether the model has been asked to answer directly or to provide the reasoning before answering. The results of the evaluation of questions on a wide range of topics in seven different models show that LLMs are more confident in their answers when they provide reasoning before the answer. This occurs regardless of whether the selected answer is correct. Our hypothesis is that this behavior is due to the reasoning that modifies the probability of the selected answer, as the LLM predicts the answer based on the input question and the reasoning that supports the selection made. Therefore, LLM-estimated probabilities seem to have intrinsic limitations that should be understood in order to use them in evaluation procedures. Interestingly, the same behavior has been observed in humans, for whom explaining an answer increases confidence in its correctness.",0 "Modern datasets often consist of numerous samples with abundant features and associated timestamps. Analyzing such datasets to uncover underlying events typically requires complex statistical methods and substantial domain expertise. A notable example, and the primary data focus of this paper, is the global synthetic dataset from the Counter Trafficking Data Collaborative (CTDC) -- a global hub of human trafficking data containing over 200,000 anonymized records spanning from 2002 to 2022, with numerous categorical features for each record. In this paper, we propose a fast and scalable method for analyzing and extracting significant categorical feature interactions, and querying large language models (LLMs) to generate data-driven insights that explain these interactions. Our approach begins with a binarization step for categorical features using one-hot encoding, followed by the computation of graph covariance at each time. This graph covariance quantifies temporal changes in dependence structures within categorical data and is established as a consistent dependence measure under the Bernoulli distribution. We use this measure to identify significant feature pairs, such as those with the most frequent trends over time or those exhibiting sudden spikes in dependence at specific moments. 
These extracted feature pairs, along with their timestamps, are subsequently passed to an LLM tasked with generating potential explanations of the underlying events driving these dependence changes. The effectiveness of our method is demonstrated through extensive simulations, and its application to the CTDC dataset reveals meaningful feature pairs and potential data stories underlying the observed feature interactions.",0 "Chronic Kidney Disease (CKD) is one of the most widespread chronic diseases, with no known cure and high morbidity. Research demonstrates that progressive CKD is a heterogeneous disorder that significantly impacts kidney structure and functions, eventually leading to kidney failure. Over time, CKD has moved from a life-threatening disease affecting few people to a common disorder of varying severity. The goal of this research is to visualize dominating features, feature scores, and values exhibited for early prognosis and detection of CKD using ensemble learning and explainable AI. To that end, an AI-driven predictive analytics approach is proposed to aid clinical practitioners in prescribing lifestyle modifications for individual patients to reduce the rate of progression of this disease. Our dataset of body vitals is collected from individuals with CKD and healthy subjects to develop our proposed AI-driven solution accurately. In this regard, blood and urine test results are provided, and ensemble tree-based machine-learning models are applied to predict unseen cases of CKD. Our research findings are validated after lengthy consultations with nephrologists. Our experiments and interpretation results are compared with existing explainable AI applications in various healthcare domains, including CKD. The comparison shows that our developed AI models, particularly the Random Forest model, have identified more features as significant contributors than XGBoost. Interpretability (I), which measures the ratio of important to masked features, indicates that our XGBoost model achieved a higher score in this metric, specifically a Fidelity of 98\%, and naturally in the FII index, compared to competing models.",2 "Skin cancer is one of the most prevalent and potentially life-threatening diseases worldwide, necessitating early and accurate diagnosis to improve patient outcomes. Conventional diagnostic methods, reliant on clinical expertise and histopathological analysis, are often time-intensive, subjective, and prone to variability. To address these limitations, we propose a novel hybrid deep learning framework that integrates convolutional neural networks (CNNs) with Radial Basis Function (RBF) Networks to achieve high classification accuracy and enhanced interpretability. The motivation for incorporating RBF Networks lies in their intrinsic interpretability and localized response to input features, which make them well-suited for tasks requiring transparency and fine-grained decision-making. Unlike traditional deep learning models that rely on global feature representations, RBF Networks allow for mapping segments of images to chosen prototypes, exploiting salient features within a single image. This enables clinicians to trace predictions to specific, interpretable patterns. The framework incorporates segmentation-based feature extraction, active learning for prototype selection, and K-Medoids clustering to focus on these salient features. 
Evaluations on the ISIC 2016 and ISIC 2017 datasets demonstrate the model's effectiveness, achieving classification accuracies of 83.02\% and 72.15\% using ResNet50, respectively, and outperforming VGG16-based configurations. By generating interpretable explanations for predictions, the framework aligns with clinical workflows, bridging the gap between predictive performance and trustworthiness. This study highlights the potential of hybrid models to deliver actionable insights, advancing the development of reliable AI-assisted diagnostic tools for high-stakes medical applications.",2 "Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art AI models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller's cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages - causal connection, explanation selection, and explanation presentation - with variations drawing from eight social science theories (e.g. Abnormal Conditions, Pearl's Model of Explanation, Necessity and Robustness Selection, Contrastive Explanation). We extensively evaluate 21,915 natural language explanations of iLLuMinaTE extracted from three LLMs (GPT-4o, Gemma2-9B, Llama3-70B), with three different underlying XAI methods (LIME, Counterfactuals, MC-LIME), across students from three diverse online courses. Our evaluation involves analyses of explanation alignment to the social science theory, understandability of the explanation, and a real-world user preference study with 114 university students containing a novel actionability simulation. We find that students prefer iLLuMinaTE explanations over traditional explainers 89.52% of the time. Our work provides a robust, ready-to-use framework for effectively communicating hybrid XAI-driven insights in education, with significant generalization potential for other human-centric fields.",0 "Advances in data acquisition and numerical wave simulation have improved tomographic imaging techniques and results, but non-experts may find it difficult to understand which model is best for their needs. This paper is intended for these users. We argue that our notion of best is influenced by the extent to which models satisfy our biases. We explain how the basic types of seismic waves see Earth structure, illustrate the essential strategy of seismic tomography, discuss advanced adaptations such as full-waveform inversion, and emphasize the artistic components of tomography. The compounding effect of a plethora of reasonable, yet subjective choices is a range of models that differ more than their individual uncertainty analyses may suggest. Perhaps counter-intuitively, we argue producing similar tomographic models should not be the goal of seismic tomography. Instead, we promote a Community Monte Carlo effort to assemble a range of dissimilar models based on different modeling approaches and subjective choices, but which explain the seismic data. This effort could serve as input for geodynamic inferences with meaningful seismic uncertainties.",0 "In this paper we focus on the opacity issue of sub-symbolic machine learning predictors by promoting two complementary activities, namely, symbolic knowledge extraction (SKE) and injection (SKI) from and into sub-symbolic predictors. 
We consider as symbolic any language that is intelligible and interpretable for both humans and computers. Accordingly, we propose general meta-models for both SKE and SKI, along with two taxonomies for the classification of SKE and SKI methods. By adopting an explainable artificial intelligence (XAI) perspective, we highlight how such methods can be exploited to mitigate the aforementioned opacity issue. Our taxonomies are attained by surveying and classifying existing methods from the literature, following a systematic approach, and by generalising the results of previous surveys targeting specific sub-topics of either SKE or SKI alone. More precisely, we analyse 132 methods for SKE and 117 methods for SKI, and we categorise them according to their purpose, operation, expected input/output data, and predictor types. For each method, we also indicate the presence/lack of runnable software implementations. Our work may be of interest to data scientists aiming to select the most adequate SKE/SKI method for their needs; it may also serve as a source of suggestions for researchers interested in filling the gaps in the current state of the art, and for developers willing to implement SKE/SKI-based technologies.",2 "Human trust is a prerequisite to trustworthy AI adoption, yet trust remains poorly understood. Trust is often described as an attitude, but attitudes cannot be reliably measured or managed. Additionally, humans frequently conflate trust in an AI system, its machine learning (ML) technology, and its other component parts. Without fully understanding the 'leap of faith' involved in trusting ML, users cannot develop intrinsic trust in these systems. A common approach to building trust is to explain an ML model's reasoning process. However, such explanations often fail to resonate with non-experts due to the inherent complexity of ML systems and because explanations are disconnected from users' own (unarticulated) mental models. This work puts forward an innovative way of directly building intrinsic trust in ML, by discerning and measuring the Leap of Faith (LoF) taken when a user decides to rely on ML. The LoF matrix captures the alignment between an ML model and a human expert's mental model. This match is rigorously and practically identified by feeding the user's data and objective function into both an ML agent and an expert-validated rules-based agent: a verified point of reference that can be tested a priori against a user's own mental model. This represents a new class of neuro-symbolic architecture. The LoF matrix reveals to the user the distance that constitutes the leap of faith between the rules-based and ML agents. For the first time, we propose trust metrics that evaluate whether users demonstrate trust through their actions rather than self-reported intent, and whether such trust is deserved based on outcomes. The significance of the contribution is that it enables empirical assessment and management of ML trust drivers, to support trustworthy ML adoption. The approach is illustrated through a long-term high-stakes field study: a 3-month pilot of a multi-agent sleep-improvement system.",0 "This paper introduces GPT-HTree, a framework combining hierarchical clustering, decision trees, and large language models (LLMs) to address this challenge. 
By leveraging hierarchical clustering to segment individuals based on salient features, resampling techniques to balance class distributions, and decision trees to tailor classification paths within each cluster, GPT-HTree ensures both accuracy and interpretability. LLMs enhance the framework by generating human-readable cluster descriptions, bridging quantitative analysis with actionable insights.",0 "Attention maps in neural models for NLP are appealing to explain the decision made by a model, hopefully emphasizing words that justify the decision. While many empirical studies hint that attention maps can provide such justification from the analysis of sound examples, only a few assess the plausibility of explanations based on attention maps, i.e., the usefulness of attention maps for humans to understand the decision. These studies furthermore focus on text classification. In this paper, we report on a preliminary assessment of attention maps in a sentence comparison task, namely natural language inference. We compare the cross-attention weights between two RNN encoders with human-based and heuristic-based annotations on the eSNLI corpus. We show that the heuristic reasonably correlates with human annotations and can thus facilitate evaluation of plausible explanations in sentence comparison tasks. Raw attention weights however remain only loosely related to a plausible explanation.",0 "Integrated artificial intelligence (AI) and communication has been recognized as a key pillar of 6G and beyond networks. In line with AI-native 6G vision, explainability and robustness in AI-driven systems are critical for establishing trust and ensuring reliable performance in diverse and evolving environments. This paper addresses these challenges by developing a robust and explainable deep learning (DL)-based beam alignment engine (BAE) for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. The proposed convolutional neural network (CNN)-based BAE utilizes received signal strength indicator (RSSI) measurements over a set of wide beams to accurately predict the best narrow beam for each UE, significantly reducing the overhead associated with exhaustive codebook-based narrow beam sweeping for initial access (IA) and data transmission. To ensure transparency and resilience, the Deep k-Nearest Neighbors (DkNN) algorithm is employed to assess the internal representations of the network via nearest neighbor approach, providing human-interpretable explanations and confidence metrics for detecting out-of-distribution inputs. Experimental results demonstrate that the proposed DL-based BAE exhibits robustness to measurement noise, reduces beam training overhead by 75% compared to the exhaustive search while maintaining near-optimal performance in terms of spectral efficiency. Moreover, the proposed framework improves outlier detection robustness by up to 5x and offers clearer insights into beam prediction decisions compared to traditional softmax-based classifiers.",0 "This research delves into the problem of interactive editing of human motion generation. Previous motion diffusion models lack explicit modeling of the word-level text-motion correspondence and good explainability, hence restricting their fine-grained editing ability. To address this issue, we propose an attention-based motion diffusion model, namely MotionCLR, with CLeaR modeling of attention mechanisms. 
Technically, MotionCLR models the in-modality and cross-modality interactions with self-attention and cross-attention, respectively. More specifically, the self-attention mechanism aims to measure the sequential similarity between frames and impacts the order of motion features. By contrast, the cross-attention mechanism works to find the fine-grained word-sequence correspondence and activate the corresponding timesteps in the motion sequence. Based on these key properties, we develop a versatile set of simple yet effective motion editing methods via manipulating attention maps, such as motion (de-)emphasizing, in-place motion replacement, and example-based motion generation, etc. For further verification of the explainability of the attention mechanism, we additionally explore the potential of action-counting and grounded motion generation ability via attention maps. Our experimental results show that our method enjoys good generation and editing ability with good explainability.",0 "Evaluating radiology reports is a challenging problem as factual correctness is extremely important due to the need for accurate medical communication about medical images. Existing automatic evaluation metrics either suffer from failing to consider factual correctness (e.g., BLEU and ROUGE) or are limited in their interpretability (e.g., F1CheXpert and F1RadGraph). In this paper, we introduce GREEN (Generative Radiology Report Evaluation and Error Notation), a radiology report generation metric that leverages the natural language understanding of language models to identify and explain clinically significant errors in candidate reports, both quantitatively and qualitatively. Compared to current metrics, GREEN offers: 1) a score aligned with expert preferences, 2) human interpretable explanations of clinically significant errors, enabling feedback loops with end-users, and 3) a lightweight open-source method that reaches the performance of commercial counterparts. We validate our GREEN metric by comparing it to GPT-4, as well as to error counts of 6 experts and preferences of 2 experts. Our method demonstrates not only higher correlation with expert error counts, but simultaneously higher alignment with expert preferences when compared to previous approaches.",0 "The recent pandemic emphasized the need to consider the role of human behavior in shaping epidemic dynamics. In particular, it is necessary to extend beyond the classical epidemiological structures to fully capture the interplay between the spread of disease and how people respond. Here, we focus on the challenge of incorporating change in human behavior in the form of ""risk response"" into compartmental epidemiological models, where humans adapt their actions in response to their perceived risk of becoming infected. The review examines 37 papers containing over 40 compartmental models, categorizing them into two fundamentally distinct classes: exogenous and endogenous approaches to modeling risk response. While in exogenous approaches, human behavior is often included using different fixed parameter values for certain time periods, endogenous approaches seek for a mechanism internal to the model to explain changes in human behavior as a function of the state of disease. We further discuss two different formulations within endogenous models as implicit versus explicit representation of information diffusion. 
This analysis provides insights for modelers in selecting an appropriate framework for epidemic modeling.",0 "The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the final disease prediction on a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system needs to be retrained. Inspired by the remarkable performance shown by Large Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet effective, methodology, CBVLM, which tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer whether the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. Moreover, in both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis on the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information on our project page: https://cristianopatricio.github.io/CBVLM/.",0 "Explanations constitute an important aspect of successful human-robot interactions and can enhance robot understanding. To improve the understanding of the robot, we have developed four levels of explanation (LOE) based on two questions: what needs to be explained, and why the robot has made a particular decision. The understandable robot requires a communicative action when there is a disparity between the human's mental model of the robot and the robot's state of mind. This communicative action was produced by utilizing a conversational AI platform to generate explanations. An adaptive dialog was implemented for transitioning from one LOE to another. Here, we demonstrate the adaptive dialog in a collaborative task with errors and provide results of a feasibility study with users.",0 "Face recognition systems (FRS) exhibit significant accuracy differences based on the user's gender. Since such a gender gap reduces the trustworthiness of FRS, more recent efforts have tried to find the causes. However, these studies make use of manually selected, correlated, and small-sized sets of facial features to support their claims. In this work, we analyse gender bias in face recognition by successfully extending the search domain to decorrelated combinations of 40 non-demographic facial characteristics. First, we propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis on large-scale data. Second, we introduce two new fairness metrics to measure fairness with and without context. Based on these grounds, we thirdly present a novel unsupervised algorithm able to reliably identify attribute combinations that lead to vanishing bias when used as filter predicates for balanced testing datasets.
The experiments show that the gender gap vanishes when images of male and female subjects share specific attributes, clearly indicating that the issue is not a question of biology but of the social definition of appearance. These findings could reshape our understanding of fairness in face biometrics and provide insights into FRS, helping to address gender bias issues.",0 "Current novel view synthesis tasks primarily rely on high-quality and clear images. However, in foggy scenes, scattering and attenuation can significantly degrade the reconstruction and rendering quality. Although NeRF-based dehazing reconstruction algorithms have been developed, their use of deep fully connected neural networks and per-ray sampling strategies leads to high computational costs. Moreover, NeRF's implicit representation struggles to recover fine details from hazy scenes. In contrast, recent advancements in 3D Gaussian Splatting achieve high-quality 3D scene reconstruction by explicitly modeling point clouds into 3D Gaussians. In this paper, we propose leveraging the explicit Gaussian representation to explain the foggy image formation process through a physically accurate forward rendering process. We introduce DehazeGS, a method capable of decomposing and rendering a fog-free background from participating media using only multi-view foggy images as input. We model the transmission within each Gaussian distribution to simulate the formation of fog. During this process, we jointly learn the atmospheric light and scattering coefficient while optimizing the Gaussian representation of the hazy scene. In the inference stage, we eliminate the effects of scattering and attenuation on the Gaussians and directly project them onto a 2D plane to obtain a clear view. Experiments on both synthetic and real-world foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance in terms of both rendering quality and computational efficiency. Visualizations are available at https://dehazegs.github.io/",0 "Affordances, a foundational concept in human-computer interaction and design, have traditionally been explained by direct-perception theories, which assume that individuals perceive action possibilities directly from the environment. However, these theories fall short of explaining how affordances are perceived, learned, refined, or misperceived, and how users choose between multiple affordances in dynamic contexts. This paper introduces a novel affordance theory grounded in Computational Rationality, positing that humans construct internal representations of the world based on bounded sensory inputs. Within these internal models, affordances are inferred through two core mechanisms: feature recognition and hypothetical motion trajectories. Our theory redefines affordance perception as a decision-making process, driven by two components: confidence (the perceived likelihood of successfully executing an action) and predicted utility (the expected value of the outcome). By balancing these factors, individuals make informed decisions about which actions to take. Our theory frames affordance perception as dynamic, continuously learned, and refined through reinforcement and feedback. We validate the theory via thought experiments and demonstrate its applicability across diverse types of affordances (e.g., physical, digital, social).
Beyond clarifying and generalizing the understanding of affordances across contexts, our theory serves as a foundation for improving design communication and guiding the development of more adaptive and intuitive systems that evolve with user capabilities.",0 "Explaining deep learning models in a way that humans can easily understand is essential for responsible artificial intelligence applications. Attribution methods constitute an important area of explainable deep learning. The attribution problem involves finding parts of the network's input that are the most responsible for the model's output. In this work, we demonstrate that implicit neural representations (INRs) constitute a good framework for generating visual explanations. Firstly, we utilize coordinate-based implicit networks to reformulate and extend the extremal perturbations technique and generate attribution masks. Experimental results confirm the usefulness of our method. For instance, by proper conditioning of the implicit network, we obtain attribution masks that are well-behaved with respect to the imposed area constraints. Secondly, we present an iterative INR-based method that can be used to generate multiple non-overlapping attribution masks for the same image. We depict that a deep learning model may associate the image label with both the appearance of the object of interest as well as with areas and textures usually accompanying the object. Our study demonstrates that implicit networks are well-suited for the generation of attribution masks and can provide interesting insights about the performance of deep learning models.",0 "Online users often post facial images of themselves and other people on online social networks (OSNs) and other Web 2.0 platforms, which can lead to potential privacy leakage of people whose faces are included in such images. There is limited research on understanding face privacy in social media while considering user behavior. It is crucial to consider privacy of subjects and bystanders separately. This calls for the development of privacy-aware face detection classifiers that can distinguish between subjects and bystanders automatically. This paper introduces such a classifier trained on face-based features, which outperforms the two state-of-the-art methods with a significant margin (by 13.1% and 3.1% for OSN images, and by 17.9% and 5.9% for non-OSN images). We developed a semi-automated framework for conducting a large-scale analysis of the face privacy problem by using our novel bystander-subject classifier. We collected 27,800 images, each including at least one face, shared by 6,423 Twitter users. We then applied our framework to analyze this dataset thoroughly. Our analysis reveals eight key findings of different aspects of Twitter users' real-world behaviors on face privacy, and we provide quantitative and qualitative results to better explain these findings. We share the practical implications of our study to empower online platforms and users in addressing the face privacy problem efficiently.",0 "Conventional game theory assumes that players are perfectly rational. In a realistic situation, however, players are rarely perfectly rational. This bounded rationality is one of the main reasons why the predictions of Nash equilibrium in normative game theory often diverge from human behavior in real experiments. 
Motivated by the Boltzmann weight formalism, here we present a theoretical framework to predict the non-Nash equilibrium probabilities of possible outcomes in strategic games by focusing on the differences in expected payoffs of players rather than traditional utility metrics. In this model, bounded rationality is parameterized by assigning a temperature to each player, reflecting their level of rationality by interpolating between two decision-making regimes, i.e., utility maximization and equiprobable choices. Our framework predicts all possible joint strategies and is able to determine the relative probabilities for multiple pure or mixed strategy equilibria. To validate the model's predictions, we analyzed experimental data and demonstrated that our model can successfully explain non-Nash equilibrium strategic behavior in experimental games. Our approach reinterprets the concept of temperature in game theory, leveraging the development of theoretical frameworks to bridge the gap between the predictions of normative game theory and the results of behavioral experiments.",0 "Controversial content inundates the Internet, infringing various cultural norms and child protection standards. Traditional Image Content Moderation (ICM) models fall short in producing precise moderation decisions for diverse standards, while recent multimodal large language models (MLLMs), when adopted for general rule-based ICM, often produce classification and explanation results that are inconsistent with human moderators. Aiming at flexible, explainable, and accurate ICM, we design a novel rule-based dataset generation pipeline, decomposing concise human-defined rules and leveraging well-designed multi-stage prompts to enrich short explicit image annotations. Our ICM-Instruct dataset includes detailed moderation explanations and moderation Q-A pairs. Built upon it, we create our ICM-Assistant model in the framework of rule-based ICM, making it readily applicable in real practice. Our ICM-Assistant model demonstrates exceptional performance and flexibility. Specifically, it significantly outperforms existing approaches on various sources, improving both the moderation classification (36.8% on average) and moderation explanation quality (26.6% on average) consistently over existing MLLMs. Code/Data is available at https://github.com/zhaoyuzhi/ICM-Assistant.",0 "When subject to a non-local unitary evolution, qubits in a quantum circuit become increasingly entangled. Conversely, measurements applied to individual qubits lead to their disentanglement from the collective system. The extent of entanglement reduction depends on the frequency of local projective measurements. A delicate balance emerges between unitary evolution, which enhances entanglement, and measurements, which diminish it. In the thermodynamic limit, there is a phase transition from volume-law entanglement to area-law entanglement at a critical value of measurement frequency. This phenomenon, occurring in hybrid quantum circuits with both unitary gates and measurements, is termed the measurement-induced phase transition (MIPT). We study the behavior of the MIPT in circuits comprising two-qubit unitary gates parameterized by the Cartan decomposition. We show that the entangling power and gate typicality of the two-qubit local unitaries employed in the circuit can be used to explain the behavior of the global bipartite entanglement the circuit can sustain.
When the two-qubit gate throughout the circuit is the identity and measurements are the sole driver of the entanglement behavior, we obtain an analytical estimate for the entanglement entropy that shows remarkable agreement with numerical simulations. We also find that the entangling power and gate typicality enable the classification of the two-qubit unitaries into different universality classes of phase transitions that can occur in the hybrid circuit. For all unitaries in a particular universality class, the transition from volume-law to area-law entanglement occurs with the same exponent that characterizes the phase transition.",0 "Counterfactual explanation methods have recently received significant attention for explaining CNN-based image classifiers due to their ability to provide easily understandable explanations that align more closely with human reasoning. However, limited attention has been given to utilizing explainability methods to improve model performance. In this paper, we propose to leverage counterfactual concepts to enhance the performance of CNN models in image classification tasks. Our proposed approach utilizes counterfactual reasoning to identify crucial filters used in the decision-making process. Following this, we perform model retraining through the design of a novel methodology and loss functions that encourage the activation of class-relevant important filters and discourage the activation of irrelevant filters for each class. This process effectively minimizes the deviation between the activation patterns of local predictions and the global activation patterns of their respective inferred classes. By incorporating counterfactual explanations, we validate unseen model predictions and identify misclassifications. The proposed methodology provides insights into potential weaknesses and biases in the model's learning process, enabling targeted improvements and enhanced performance. Experimental results on publicly available datasets have demonstrated an improvement of 1-2\%, validating the effectiveness of the approach.",0 "Traditional adversarial attacks typically aim to alter the predicted labels of input images by generating perturbations that are imperceptible to the human eye. However, these approaches often lack explainability. Moreover, most existing work on adversarial attacks focuses on single-stage classifiers, but multi-stage classifiers are largely unexplored. In this paper, we introduce instance-based adversarial attacks for multi-stage classifiers, leveraging Layer-wise Relevance Propagation (LRP), which assigns relevance scores to pixels based on their influence on classification outcomes. Our approach generates explainable adversarial perturbations by utilizing LRP to identify and target key features critical for both coarse and fine-grained classifications. Unlike conventional attacks, our method not only induces misclassification but also enhances the interpretability of the model's behavior across classification stages, as demonstrated by experimental results.",0 "Conventional behavior cloning (BC) models often struggle to replicate the subtleties of human actions. Previous studies have attempted to address this issue through the development of a new BC technique: Implicit Behavior Cloning (IBC). This new technique consistently outperformed the conventional Mean Squared Error (MSE) BC models in a variety of tasks.
Our goal is to replicate the performance of the IBC model by Florence [in Proceedings of the 5th Conference on Robot Learning, 164:158-168, 2022] for social interaction tasks using our custom dataset. While previous studies have explored the use of large language models (LLMs) for enhancing group conversations, they often overlook the significance of non-verbal cues, which constitute a substantial part of human communication. We propose using IBC to replicate nonverbal cues like gaze behaviors. The model is evaluated against various types of facilitator data and compared to an explicit, MSE BC model. Results show that the IBC model outperforms the MSE BC model across session types using the same metrics used in the previous IBC paper. Although some metrics show mixed results, which are explainable given our custom social-interaction dataset, we successfully replicated the IBC model to generate nonverbal cues. Our contributions are (1) the replication and extension of the IBC model, and (2) a nonverbal-cue generation model for social interaction. These advancements facilitate the integration of robots into the complex interactions between robots and humans, e.g., in the absence of a human facilitator.",0 "The coronavirus pandemic is a serious global health crisis that changed not only the way people live but also how they behave in their daily lives. Information from the social and behavioural sciences can help in modifying human behaviour to comply with the recommendations of health officials, as the pandemic requires large-scale behaviour change and puts significant mental stress on individuals. The aim of this paper is to examine the changes in human behaviour brought about by the COVID-19 pandemic, which has caused a global health crisis and altered the way people live and interact. Data were collected online, participants' behaviour was observed, and the results were analysed using the Analytical Hierarchy Process (AHP), a multi-criteria decision-making method, to rank the factors that had the greatest impact on the changes in human behaviour. The parameters considered were those most likely to capture the impact of the COVID-19 lockdown on human behaviour, including health, relationships with family and friends, overall lifestyle, online education and work from home, and screen time. The paper explains each criterion and the extent to which it affected human behaviour.",0 "Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general--they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility, especially in biological systems, as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar. We build on the persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors. Fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation.
Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with predicted slow manifold structures. Therefore, continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.",0 "Smart contracts are increasingly used in critical use cases (e.g., financial transactions). Thus, it is pertinent to ensure that end-users understand the transfer risks in smart contracts. To address this, we investigate end-user comprehension of risks in the most popular Ethereum smart contract (i.e., USD Tether (USDT)) and their prevalence in the top ERC-20 smart contracts. We focus on five transfer risks with severe impact on transfer outcomes and user objectives, including users being blacklisted, the contract being paused, and the contract being arbitrarily upgraded. Firstly, we conducted a user study investigating end-user comprehension of smart contract transfer risks with 110 participants and USDT/MetaMask. Secondly, we performed manual and automated source code analysis of the next top (78) ERC-20 smart contracts (after USDT) to identify the prevalence of these risks. Results show that end-users do not comprehend real risks: most (up to 71.8% of) users believe contract upgrade and blacklisting are highly severe/surprising. More importantly, twice as many users find it easier to discover successful outcomes than risky outcomes using the USDT/MetaMask UI flow. These results hold regardless of the self-rated programming and Web3 proficiency of participants. Furthermore, our source code analysis demonstrates that the examined risks are prevalent in up to 19.2% of the top ERC-20 contracts. Additionally, we discovered (three) other risks with up to 25.6% prevalence in these contracts. This study highlights the need to provide explainable smart contracts, understandable UIs, and relevant information about risky outcomes.",2 "The phenomenon of the emergence of swarm intelligence exists widely in nature and human society. People have been exploring the root cause of the emergence of swarm intelligence and trying to establish general theories and models for it. However, the existing theories or models do not grasp the essence of swarm intelligence, so they lack generality and struggle to explain the various phenomena of the emergence of swarm intelligence. In this paper, a contradiction-centered model for the emergence of swarm intelligence is proposed, in which the internal contradictions of individuals determine their behavior and properties, individuals are related and interact within the swarm because they compete for and occupy environmental resources, interactions and swarm potential affect the internal contradictions of individuals and their distribution in the swarm, and swarm intelligence is manifested as the specific distribution of individual contradictions. This model completely explains the conditions, dynamics, pathways, formations, and processes of the emergence of swarm intelligence. In order to verify the validity of this model, several swarm intelligence systems are implemented and analyzed in this paper. The experimental results show that the model has good generality and can be used to describe the emergence of various forms of swarm intelligence.",0 "In this paper, we elaborate on how AI can support diversity and inclusion and exemplify research projects conducted in that direction.
We start by looking at the challenges and progress in making large language models (LLMs) more transparent, inclusive, and aware of social biases. Even though LLMs like ChatGPT have impressive abilities, they struggle to understand different cultural contexts and engage in meaningful, human-like conversations. A key issue is that biases in language processing, especially in machine translation, can reinforce inequality. Tackling these biases requires a multidisciplinary approach to ensure AI promotes diversity, fairness, and inclusion. We also highlight AI's role in identifying biased content in media, which is important for improving representation. By detecting unequal portrayals of social groups, AI can help challenge stereotypes and create more inclusive technologies. Transparent AI algorithms, which clearly explain their decisions, are essential for building trust and reducing bias in AI systems. We also stress that AI systems need diverse and inclusive training data. Projects like the Child Growth Monitor show how using a wide range of data can help address real-world problems like malnutrition and poverty. We present a project that demonstrates how AI can be applied to monitor the role of search engines in spreading disinformation about the LGBTQ+ community. Moreover, we discuss the SignON project as an example of how technology can bridge communication gaps between hearing and deaf people, emphasizing the importance of collaboration and mutual trust in developing inclusive AI. Overall, with this paper, we advocate for AI systems that are not only effective but also socially responsible, promoting fair and inclusive interactions between humans and machines.",0 "Concept Bottleneck Models (CBMs) provide interpretable predictions by introducing an intermediate Concept Bottleneck Layer (CBL), which encodes human-understandable concepts to explain the model's decisions. Recent works proposed to utilize Large Language Models and pre-trained Vision-Language Models to automate the training of CBMs, making the training process more scalable. However, existing approaches still fall short in two aspects: First, the concepts predicted by the CBL often mismatch the input image, raising doubts about the faithfulness of the interpretation. Second, it has been shown that concept values encode unintended information: even a set of random concepts could achieve comparable test accuracy to state-of-the-art CBMs. To address these critical limitations, in this work, we propose a novel framework called Vision-Language-Guided Concept Bottleneck Model (VLG-CBM) to enable faithful interpretability with the benefits of boosted performance. Our method leverages off-the-shelf open-domain grounded object detectors to provide visually grounded concept annotation, which largely enhances the faithfulness of concept prediction while further improving the model performance. In addition, we propose a new metric called Number of Effective Concepts (NEC) to control the information leakage and provide better interpretability.
Extensive evaluations across five standard benchmarks show that our method, VLG-CBM, outperforms existing methods by at least 4.27% and up to 51.09% on Accuracy at NEC=5 (denoted as ANEC-5), and by at least 0.45% and up to 29.78% on average accuracy (denoted as ANEC-avg), while preserving both the faithfulness and interpretability of the learned concepts.",0 "Large Language Models (LLMs) have achieved remarkable success recently, displaying exceptional capabilities in creating understandable and organized text. These LLMs have been utilized in diverse fields, such as clinical research, where domain-specific models like Med-Palm have achieved human-level performance. Recently, researchers have employed advanced prompt engineering to enhance the general reasoning ability of LLMs. Despite the remarkable success of zero-shot Chain-of-Thought (CoT) prompting in solving general reasoning tasks, the potential of these methods has received limited attention for financial reasoning tasks. To address this issue, we explore multiple prompt strategies and incorporate semantic news information to improve LLMs' performance on financial reasoning tasks. To the best of our knowledge, we are the first to explore this important issue by applying ChatGPT to gold investment. In this work, our aim is to investigate the financial reasoning capabilities of LLMs and their capacity to generate logical and persuasive investment opinions. We use ChatGPT, one of the most powerful recent LLMs, together with prompt engineering to achieve this goal. Our research focuses on understanding the ability of LLMs to perform sophisticated analysis and reasoning within the context of investment decision-making. Our study finds that ChatGPT with a CoT prompt can provide more explainable predictions and overcome behavioral biases, which is crucial in finance-related tasks and can achieve higher investment returns.",0 "While large language models (LLMs) have shown promise for medical question answering, there is limited work focused on tropical and infectious disease-specific exploration. We build on an open-source tropical and infectious diseases (TRINDs) dataset, expanding it to include demographic and semantic clinical and consumer augmentations, yielding 11,000+ prompts. We evaluate LLM performance on these prompts, comparing generalist and medical LLMs, as well as LLM outcomes to human experts. We demonstrate, through systematic experimentation, the benefit of contextual information such as demographics, location, gender, and risk factors for optimal LLM responses. Finally, we develop a prototype of TRINDs-LM, a research tool that provides a playground to navigate how context impacts LLM outputs for health.",0 "This study examines the ethical reasoning of six prominent generative large language models: OpenAI GPT-4o, Meta LLaMA 3.1, Perplexity, Anthropic Claude 3.5 Sonnet, Google Gemini, and Mistral 7B. The research explores how these models articulate and apply ethical logic, particularly in response to moral dilemmas such as the Trolley Problem and the Heinz Dilemma. Departing from traditional alignment studies, the study adopts an explainability-transparency framework, prompting models to explain their ethical reasoning. This approach is analyzed through three established ethical typologies: the consequentialist-deontological analytic, Moral Foundations Theory, and the Kohlberg Stages of Moral Development Model.
Findings reveal that LLMs exhibit largely convergent ethical logic, marked by a rationalist, consequentialist emphasis, with decisions often prioritizing harm minimization and fairness. Despite similarities in pre-training and model architecture, a mixture of nuanced and significant differences in ethical reasoning emerge across models, reflecting variations in fine-tuning and post-training processes. The models consistently display erudition, caution, and self-awareness, presenting ethical reasoning akin to a graduate-level discourse in moral philosophy. In striking uniformity these systems all describe their ethical reasoning as more sophisticated than what is characteristic of typical human moral logic.",0 "Analyzing large volumes of real-world driving data is essential for providing meaningful and reliable insights into real-world trips, scenarios, and human driving behaviors. To this end, we developed a multi-level data processing approach that adds new information, segments data, and extracts desired parameters. Leveraging a confidential but extensive dataset (over 1 million km), this approach leads to three levels of in-depth analysis: trip, scenario, and driving. The trip-level analysis explains representative properties observed in real-world trips, while the scenario-level analysis focuses on scenario conditions resulting from road events that reduce vehicle speed. The driving-level analysis identifies the cause of driving regimes for specific situations and characterizes typical human driving behaviors. Such analyses can support the design of both trip- and scenario-based tests, the modeling of human drivers, and the establishment of guidelines for connected and automated vehicles.",0 "In human-robot teams, human situational awareness is the operator's conscious knowledge of the team's states, actions, plans and their environment. Appropriate human situational awareness is critical to successful human-robot collaboration. In human-robot teaming, it is often assumed that the best and required level of situational awareness is knowing everything at all times. This view is problematic, because what a human needs to know for optimal team performance varies given the dynamic environmental conditions, task context and roles and capabilities of team members. We explore this topic by interviewing 16 participants with active and repeated experience in diverse human-robot teaming applications. Based on analysis of these interviews, we derive a framework explaining the dynamic nature of required situational awareness in human-robot teaming. In addition, we identify a range of factors affecting the dynamic nature of required and actual levels of situational awareness (i.e., dynamic situational awareness), types of situational awareness inefficiencies resulting from gaps between actual and required situational awareness, and their main consequences. We also reveal various strategies, initiated by humans and robots, that assist in maintaining the required situational awareness. Our findings inform the implementation of accurate estimates of dynamic situational awareness and the design of user-adaptive human-robot interfaces. Therefore, this work contributes to the future design of more collaborative and effective human-robot teams.",2 "We describe a method for modeling spatial context to enable video anomaly detection. The main idea is to discover regions that share similar object-level activities by clustering joint object attributes using Gaussian mixture models. 
We demonstrate that this straightforward approach, using orders of magnitude fewer parameters than competing models, achieves state-of-the-art performance on the challenging spatial-context-dependent Street Scene dataset. As a side benefit, the high-resolution discovered regions learned by the model also provide explainable normalcy maps for human operators without the need for any pre-trained segmentation model.",0 "Over the last decade, deep learning models have become the state of the art for solving complex computer vision problems. These modern computer vision models have millions of parameters, which presents two major challenges: (1) the increased computational requirements hamper deployment in resource-constrained environments, such as mobile or IoT devices, and (2) explaining the complex decisions of such networks to humans is challenging. Network pruning is a technical approach to reduce the complexity of models, where less important parameters are removed. The work presented in this paper investigates whether this reduction in technical complexity also helps with perceived explainability. To do so, we conducted a pre-study and two human-grounded experiments, assessing the effects of different pruning ratios on explainability. Overall, we evaluate four different compression rates (i.e., 2, 4, 8, and 32) with 37,500 tasks on Mechanical Turk. Results indicate that lower compression rates have a positive influence on explainability, while higher compression rates show negative effects. Furthermore, we were able to identify sweet spots that increase both the perceived explainability and the model's performance.",2 "This paper introduces the Semantic Propagation Graph Neural Network (SProp GNN), a machine learning sentiment analysis (SA) architecture that relies exclusively on syntactic structures and word-level emotional cues to predict emotions in text. By semantically blinding the model to information about specific words, it is robust to social biases such as political or gender bias that have been plaguing previous machine learning-based SA systems. The SProp GNN shows performance superior to lexicon-based alternatives such as VADER (Valence Aware Dictionary and Sentiment Reasoner) and EmoAtlas on two different prediction tasks, and across two languages. Additionally, it approaches the accuracy of transformer-based models while significantly reducing bias in emotion prediction tasks. By offering improved explainability and reducing bias, the SProp GNN bridges the methodological gap between interpretable lexicon approaches and powerful, yet often opaque, deep learning models, offering a robust tool for fair and effective emotion analysis in understanding human behavior through text.",0 "Background: Verbal deception detection research relies on narratives and commonly assumes statements to be either truthful or deceptive. A more realistic perspective acknowledges that the veracity of statements exists on a continuum, with truthful and deceptive parts being embedded within the same statement. However, research on embedded lies has been lagging behind. Methods: We collected a novel dataset of 2,088 truthful and deceptive statements with annotated embedded lies. Using a within-subjects design, participants provided a truthful account of an autobiographical event. They then rewrote their statement in a deceptive manner by including embedded lies, which they highlighted afterwards and judged on lie centrality, deceptiveness, and source.
Results: We show that a fine-tuned language model (Llama-3-8B) can classify truthful statements and those containing embedded lies with 64% accuracy. Individual differences, linguistic properties, and explainability analysis suggest that the challenge of moving the dial towards embedded lies stems from their resemblance to truthful statements. Typical deceptive statements consisted of 2/3 truthful information and 1/3 embedded lies, largely derived from past personal experiences and with minimal linguistic differences from their truthful counterparts. Conclusion: We present this dataset as a novel resource to address this challenge and foster research on embedded lies in verbal deception detection.",2 "Human-like personality traits have recently been discovered in large language models, raising the hypothesis that their (known and as yet undiscovered) biases conform with human latent psychological constructs. While large conversational models may be tricked into answering psychometric questionnaires, the latent psychological constructs of thousands of simpler transformers, trained for other tasks, cannot be assessed because appropriate psychometric methods are currently lacking. Here, we show how standard psychological questionnaires can be reformulated into natural language inference prompts, and we provide a code library to support the psychometric assessment of arbitrary models. We demonstrate, using a sample of 88 publicly available models, the existence of human-like mental health-related constructs (including anxiety, depression, and Sense of Coherence) which conform with standard theories in human psychology and show similar correlations and mitigation strategies. The ability to interpret and rectify the performance of language models by using psychological tools can boost the development of more explainable, controllable, and trustworthy models.",2 "Public distrust of self-driving cars is growing. Studies emphasize the need to interpret the behavior of these vehicles to passengers in order to promote trust in autonomous systems. Interpreters can enhance trust by improving transparency and reducing perceived risk. However, current solutions often lack a human-centric approach to integrating multimodal interpretations. This paper introduces a novel Human-centered Multimodal Interpreter (HMI) system that leverages human preferences to provide visual, textual, and auditory feedback. The system combines a visual interface with Bird's Eye View (BEV), map, and text display, along with voice interaction using a fine-tuned large language model (LLM). Our user study, involving diverse participants, demonstrated that the HMI system significantly boosts passenger trust in AVs, increasing average trust levels by over 8%, with trust in ordinary environments rising by up to 30%. These results underscore the potential of the HMI system to improve the acceptance and reliability of autonomous vehicles by providing clear, real-time, and context-sensitive explanations of vehicle actions.",2 "Assessing learners in ill-defined domains, such as scenario-based human tutoring training, is an area of limited research. Equity training requires a nuanced understanding of context, but do contemporary large language models (LLMs) have a knowledge base that can navigate these nuances? Legacy transformer models like BERT, in contrast, have less real-world knowledge but can be more easily fine-tuned than commercial LLMs.
Here, we study whether fine-tuning BERT on human annotations outperforms state-of-the-art LLMs (GPT-4o and GPT-4-Turbo) with few-shot prompting and instruction. We evaluate performance on four prediction tasks involving generating and explaining open-ended responses in advocacy-focused training lessons in a higher education student population learning to become middle school tutors. Leveraging a dataset of 243 human-annotated open responses from tutor training lessons, we find that BERT demonstrates superior performance using an offline fine-tuning approach, which is more resource-efficient than commercial GPT models. We conclude that contemporary GPT models may not adequately capture nuanced response patterns, especially in complex tasks requiring explanation. This work advances the understanding of AI-driven learner evaluation under the lens of fine-tuning versus few-shot prompting on the nuanced task of equity training, contributing to more effective training solutions and assisting practitioners in choosing adequate assessment methods.",0 "We introduce semitopology, a generalisation of point-set topology that removes the restriction that intersections of open sets need necessarily be open. The intuition is that points represent participants in a decentralised system, and open sets represent collections of participants that collectively have the authority to collaborate to update their local state; we call this an actionable coalition. Examples of actionable coalition include: majority stakes in proof-of-stake blockchains; communicating peers in peer-to-peer networks; and even pedestrians working together to not bump into one another in the street. Where actionable coalitions exist, they have in common that: collaborations are local (updating the states of the participants in the coalition, but not immediately those of the whole system); collaborations are voluntary (up to and including breaking rules); participants may be heterogeneous in their computing power or in their goals (not all pedestrians want to go to the same place); participants can choose with whom to collaborate; and they are not assumed subject to permission or synchronisation by a central authority. We develop a topology-flavoured mathematics that goes some way to explaining how and why these complex decentralised systems can exhibit order, and gives us new ways to understand existing practical implementations.",2 "Aligning machine representations with human understanding is key to improving interpretability of machine learning (ML) models. When classifying a new image, humans often explain their decisions by decomposing the image into concepts and pointing to corresponding regions in familiar images. Current ML explanation techniques typically either trace decision-making processes to reference prototypes, generate attribution maps highlighting feature importance, or incorporate intermediate bottlenecks designed to align with human-interpretable concepts. The proposed method, named COMIX, classifies an image by decomposing it into regions based on learned concepts and tracing each region to corresponding ones in images from the training dataset, assuring that explanations fully represent the actual decision-making process. We dissect the test image into selected internal representations of a neural network to derive prototypical parts (primitives) and match them with the corresponding primitives derived from the training data. 
In a series of qualitative and quantitative experiments, we theoretically prove and empirically demonstrate that our method, in contrast to post hoc analysis, provides faithful explanations, and we show that its efficiency is competitive with other inherently interpretable architectures. Notably, it shows substantial improvements in fidelity and sparsity metrics, including a 48.82% improvement in the C-insertion score on the ImageNet dataset over the best state-of-the-art baseline.",0 "Besides natural language processing, transformers exhibit extraordinary performance in broader applications, including scientific computing and computer vision. Previous works try to explain this from the perspective of expressive power and capability, showing that standard transformers are capable of performing some algorithms. To empower transformers with algorithmic capabilities, and motivated by the recently proposed looped transformer, we design a novel transformer framework, dubbed Algorithm Transformer (abbreviated as AlgoFormer). We provide an insight that efficient transformer architectures can be designed by leveraging prior knowledge of tasks and the underlying structure of potential algorithms. Compared with the standard transformer and vanilla looped transformer, the proposed AlgoFormer can perform algorithm representation more efficiently on some specific tasks. In particular, inspired by the structure of human-designed learning algorithms, our transformer framework consists of a pre-transformer that is responsible for task preprocessing, a looped transformer for iterative optimization algorithms, and a post-transformer for producing the desired results after post-processing. We provide theoretical evidence of the expressive power of the AlgoFormer in solving some challenging problems, mirroring human-designed algorithms. Furthermore, some theoretical and empirical results are presented to show that the designed transformer has the potential to perform algorithm representation and learning. Experimental results demonstrate the empirical superiority of the proposed transformer in that it outperforms the standard transformer and vanilla looped transformer in some specific tasks. An extensive experiment on real language tasks (e.g., neural machine translation of German and English, and text classification) further validates the expressiveness and effectiveness of AlgoFormer.",0 "Human mobility is a fundamental aspect of social behavior, with broad applications in transportation, urban planning, and epidemic modeling. However, for decades new mathematical formulas to model mobility phenomena have been scarce and usually discovered by analogy to physical processes, such as the gravity model and the radiation model. These sporadic discoveries are often thought to rely on intuition and luck in fitting empirical data. Here, we propose a systematic approach that leverages symbolic regression to automatically discover interpretable models from human mobility data. Our approach finds several well-known formulas, such as the distance decay effect and classical gravity models, as well as previously unknown ones, such as an exponential-power-law decay that can be explained by the maximum entropy principle.
By relaxing the constraints on the complexity of model expressions, we further show how key variables of human mobility are progressively incorporated into the model, making this framework a powerful tool for revealing the underlying mathematical structures of complex social phenomena directly from observational data.",0 "This entry provides an overview of Human-centered Geospatial Data Science, highlighting the gaps it aims to bridge, its significance, and its key topics and research. Geospatial Data Science, which derives geographic knowledge and insights from large volumes of geospatial big data using advanced Geospatial Artificial Intelligence (GeoAI), has been widely used to tackle a wide range of geographic problems. However, it often overlooks the subjective human experiences that fundamentally influence human-environment interactions, and few strategies have been developed to ensure that these technologies follow ethical guidelines and prioritize human values. Human-centered Geospatial Data Science advocates for two primary focuses. First, it advances our understanding of human-environment interactions by leveraging Geospatial Data Science to measure and analyze human subjective experiences at place, including emotion, perception, cognition, and creativity. Second, it advocates for the development of responsible and ethical Geospatial Data Science methods that protect geoprivacy, enhance fairness and reduce bias, and improve the explainability and transparency of geospatial technologies. With these two missions, Human-centered Geospatial Data Science brings a fresh perspective on developing and utilizing geospatial technologies that positively impact society and benefit human well-being and the humanities.",0 "Accurate attribution of authorship is crucial for maintaining the integrity of digital content, improving forensic investigations, and mitigating the risks of misinformation and plagiarism. Addressing the imperative need for proper authorship attribution is essential to uphold the credibility and accountability of authentic authorship. The rapid advancement of Large Language Models (LLMs) has blurred the lines between human and machine authorship, posing significant challenges for traditional methods. We present a comprehensive literature review that examines the latest research on authorship attribution in the era of LLMs. This survey systematically explores the landscape of this field by categorizing four representative problems: (1) Human-written Text Attribution; (2) LLM-generated Text Detection; (3) LLM-generated Text Attribution; and (4) Human-LLM Co-authored Text Attribution. We also discuss the challenges related to ensuring the generalization and explainability of authorship attribution methods. Generalization requires the ability to generalize across various domains, while explainability emphasizes providing transparent and understandable insights into the decisions made by these models. By evaluating the strengths and limitations of existing methods and benchmarks, we identify key open problems and future research directions in this field. This literature review serves as a roadmap for researchers and practitioners interested in understanding the state of the art in this rapidly evolving field. Additional resources and a curated list of papers are available and regularly updated at https://llm-authorship.github.io",2 "Explainable AI (XAI) provides methods to understand non-interpretable machine learning models.
However, we have little knowledge about what legal experts expect from these explanations, including their legal compliance with, and value against European Union legislation. To close this gap, we present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI, with a specific focus on the European General Data Protection Regulation. The study consists of an online questionnaire and follow-up interviews, and is centered around a use-case in the credit domain. We extract both a set of hierarchical and interconnected codes using grounded theory, and present the standpoints of the participating experts towards XAI. We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and subject. Finally, we present a set of recommendations for developers of XAI methods, and indications of legal areas of discussion. Among others, recommendations address the presentation, choice, and content of an explanation, technical risks as well as the end-user, while we provide legal pointers to the contestability of explanations, transparency thresholds, intellectual property rights as well as the relationship between involved parties.",2 "The widespread acceptance of empirically derived codal provisions and equations in civil engineering stands in stark contrast to the skepticism facing machine learning (ML) models, despite their shared statistical foundations. This paper examines this philosophical tension through the lens of structural engineering and explores how integrating ML challenges traditional engineering philosophies and professional identities. Recent efforts have documented how ML enhances predictive accuracy, optimizes designs, and analyzes complex behaviors. However, one might also raise concerns about the diminishing role of human intuition and the interpretability of algorithms. To showcase this rarely explored front, this paper presents how ML can be successfully integrated into various engineering problems by means of formulation via deduction, induction, and abduction. Then, this paper identifies three principal paradoxes that could arise when adopting ML: analysis paralysis (increased prediction accuracy leading to a reduced understanding of physical mechanisms), infeasible solutions (optimization resulting in unconventional designs that challenge engineering intuition), and the Rashomon effect (where contradictions in explainability methods and physics arise). This paper concludes by addressing these paradoxes and arguing the need to rethink epistemological shifts in engineering and engineering education and methodologies to harmonize traditional principles with ML.",0 "Robotic guide dogs hold significant potential to enhance the autonomy and mobility of blind or visually impaired (BVI) individuals by offering universal assistance over unstructured terrains at affordable costs. However, the design of robotic guide dogs remains underexplored, particularly in systematic aspects such as gait controllers, navigation behaviors, interaction methods, and verbal explanations. Our study addresses this gap by conducting user studies with 18 BVI participants, comprising 15 cane users and three guide dog users. Participants interacted with a quadrupedal robot and provided both quantitative and qualitative feedback. 
Our study revealed several design implications, such as a preference for a learning-based controller and a rigid handle, gradual turns with asymmetric speeds, semantic communication methods, and explainability. The study also highlighted the importance of customization to support users with diverse backgrounds and preferences, along with practical concerns such as battery life, maintenance, and weather issues. These findings offer valuable insights and design implications for future research and development of robotic guide dogs.",2 "Across various applications, humans increasingly use black-box artificial intelligence (AI) systems without insight into these systems' reasoning. To counter this opacity, explainable AI (XAI) methods promise enhanced transparency and interpretability. While recent studies have explored how XAI affects human-AI collaboration, few have examined the potential pitfalls caused by incorrect explanations. The implications for humans can be far-reaching but have not been explored extensively. To investigate this, we ran a study (n=160) on AI-assisted decision-making in which humans were supported by XAI. Our findings reveal a misinformation effect when incorrect explanations accompany correct AI advice with implications post-collaboration. This effect causes humans to infer flawed reasoning strategies, hindering task execution and demonstrating impaired procedural knowledge. Additionally, incorrect explanations compromise human-AI team-performance during collaboration. With our work, we contribute to HCI by providing empirical evidence for the negative consequences of incorrect explanations on humans post-collaboration and outlining guidelines for designers of AI.",0 "Objective: This paper describes the development of hybrid artificial intelligence strategies for drone navigation. Methods: The navigation module combines a deep learning model with a rule-based engine depending on the agent state. The deep learning model has been trained using reinforcement learning. The rule-based engine uses expert knowledge to deal with specific situations. The navigation module incorporates several strategies to explain the drone decision based on its observation space, and different mechanisms for including human decisions in the navigation process. Finally, this paper proposes an evaluation methodology based on defining several scenarios and analyzing the performance of the different strategies according to metrics adapted to each scenario. Results: Two main navigation problems have been studied. For the first scenario (reaching known targets), it has been possible to obtain a 90% task completion rate, reducing significantly the number of collisions thanks to the rule-based engine. For the second scenario, it has been possible to reduce 20% of the time required to locate all the targets using the reinforcement learning model. Conclusions: Reinforcement learning is a very good strategy to learn policies for drone navigation, but in critical situations, it is necessary to complement it with a rule-based module to increase task success rate.",0 "Low-intensity Transcranial Ultrasonic Stimulation (TUS) is a non-invasive brain stimulation technique enabling cortical and deep brain targeting with unprecedented spatial accuracy. Given the high rate of adoption by new users with varying levels of expertise and interdisciplinary backgrounds, practical guidelines are needed to ensure state-of-the-art TUS application and reproducible outcomes. 
Therefore, the International Transcranial Ultrasonic Stimulation Safety and Standards (ITRUSST) consortium has formed a subcommittee, endorsed by the International Federation of Clinical Neurophysiology (IFCN), to develop recommendations for best practice in TUS applications in humans. The practical guide presented here provides a brief introduction into ultrasound physics and sonication parameters. It explains the requirements of TUS lab equipment and transducer selection and discusses experimental design and procedures alongside potential confounds and control conditions. Finally, the guide elaborates on essential steps of application planning for stimulation safety and efficacy, as well as considerations when combining TUS with neuroimaging, electrophysiology, or other brain stimulation techniques. We hope that this practical guide to TUS will assist both novice and experienced users in planning and conducting high-quality studies and provide a solid foundation for further advancements in this promising field.",0 "Understanding the abilities of LLMs to reason about natural language plans, such as instructional text and recipes, is critical to reliably using them in decision-making systems. A fundamental aspect of plans is the temporal order in which their steps needs to be executed, which reflects the underlying causal dependencies between them. We introduce CaT-Bench, a benchmark of Step Order Prediction questions, which test whether a step must necessarily occur before or after another in cooking recipe plans. We use this to evaluate how well frontier LLMs understand causal and temporal dependencies. We find that SOTA LLMs are underwhelming (best zero-shot is only 0.59 in F1), and are biased towards predicting dependence more often, perhaps relying on temporal order of steps as a heuristic. While prompting for explanations and using few-shot examples improve performance, the best F1 result is only 0.73. Further, human evaluation of explanations along with answer correctness show that, on average, humans do not agree with model reasoning. Surprisingly, we also find that explaining after answering leads to better performance than normal chain-of-thought prompting, and LLM answers are not consistent across questions about the same step pairs. Overall, results show that LLMs' ability to detect dependence between steps has significant room for improvement.",0 "We investigate the explainability of Reinforcement Learning (RL) policies from a temporal perspective, focusing on the sequence of future outcomes associated with individual actions. In RL, value functions compress information about rewards collected across multiple trajectories and over an infinite horizon, allowing a compact form of knowledge representation. However, this compression obscures the temporal details inherent in sequential decision-making, presenting a key challenge for interpretability. We present Temporal Policy Decomposition (TPD), a novel explainability approach that explains individual RL actions in terms of their Expected Future Outcome (EFO). These explanations decompose generalized value functions into a sequence of EFOs, one for each time step up to a prediction horizon of interest, revealing insights into when specific outcomes are expected to occur. We leverage fixed-horizon temporal difference learning to devise an off-policy method for learning EFOs for both optimal and suboptimal actions, enabling contrastive explanations consisting of EFOs for different state-action pairs. 
Our experiments demonstrate that TPD generates accurate explanations that (i) clarify the policy's future strategy and anticipated trajectory for a given action and (ii) improve understanding of the reward composition, facilitating fine-tuning of the reward function to align with human expectations.",0 "Cardiovascular diseases (CVD) are the leading cause of death globally. Non-invasive, cost-effective imaging techniques play a crucial role in early detection and prevention of CVD. Optical coherence tomography (OCT) has gained recognition as a potential tool for early CVD risk prediction, though its use remains underexplored. In this study, we investigated the potential of OCT as an additional imaging technique to predict future CVD events. We analysed retinal OCT data from the UK Biobank. The dataset included 612 patients who suffered a myocardial infarction (MI) or stroke within five years of imaging and 2,234 controls without CVD (total: 2,846 participants). A self-supervised deep learning approach based on Variational Autoencoders (VAE) was used to extract low-dimensional latent representations from high-dimensional 3D OCT images, capturing distinct features of retinal layers. These latent features, along with clinical data, were used to train a Random Forest (RF) classifier to differentiate between patients at risk of future CVD events (MI or stroke) and healthy controls. Our model achieved an AUC of 0.75, sensitivity of 0.70, specificity of 0.70, and accuracy of 0.70, outperforming the QRISK3 score (the third version of the QRISK cardiovascular disease risk prediction algorithm; AUC = 0.60, sensitivity = 0.60, specificity = 0.55, accuracy = 0.55). The choroidal layer in OCT images was identified as a key predictor of future CVD events, revealed through a novel model explainability approach. This study demonstrates that retinal OCT imaging is a cost-effective, non-invasive alternative for predicting CVD risk, offering potential for widespread application in optometry practices and hospitals.",2 "The assessment of face image quality is crucial to ensure reliable face recognition. In order to provide data subjects and operators with explainable and actionable feedback regarding captured face images, relevant quality components have to be measured. Quality components that are known to negatively impact the utility of face images include JPEG and JPEG 2000 compression artefacts, among others. Compression can result in a loss of important image details, which may impair the recognition performance. In this work, deep neural networks are trained to detect compression artefacts in face images. For this purpose, artefact-free facial images are compressed with the JPEG and JPEG 2000 compression algorithms. Subsequently, the PSNR and SSIM metrics are employed to obtain training labels, based on which neural networks are trained, a single network each, to detect JPEG and JPEG 2000 artefacts, respectively. The evaluation of the proposed method shows promising results: in terms of detection accuracy, error rates of 2-3% are obtained when utilizing PSNR labels during training. In addition, we show that error rates of different open-source and commercial face recognition systems can be significantly reduced by discarding face images exhibiting severe compression artefacts. 
To minimize resource consumption, EfficientNetV2 serves as the basis for the presented algorithm, which is available as part of the OFIQ software.",0 "Urban segregation refers to the physical and social division of people, often driving inequalities within cities and exacerbating socioeconomic and racial tensions. While most studies focus on residential spaces, they often neglect segregation across ""activity spaces"" where people work, socialize, and engage in leisure. Human mobility data offers new opportunities to analyze broader segregation patterns, encompassing both residential and activity spaces, but challenges existing methods in capturing the complexity and local nuances of urban segregation. This work introduces InclusiViz, a novel visual analytics system for multi-level analysis of urban segregation, facilitating the development of targeted, data-driven interventions. Specifically, we developed a deep learning model to predict mobility patterns across social groups using environmental features, augmented with explainable AI to reveal how these features influence segregation. The system integrates innovative visualizations that allow users to explore segregation patterns from broad overviews to fine-grained detail and evaluate urban planning interventions with real-time feedback. We conducted a quantitative evaluation to validate the model's accuracy and efficiency. Two case studies and expert interviews with social scientists and urban analysts demonstrated the system's effectiveness, highlighting its potential to guide urban planning toward more inclusive cities.",2 "While the effects of outdoor air pollution are widely acknowledged, the literature inadequately addresses the impacts of indoor air pollution. Despite daily health risks, existing research has primarily focused on monitoring and lacks accuracy in pinpointing indoor pollution sources. In our research work, we thoroughly investigated the influence of indoor activities on pollution levels. A survey of 143 participants revealed limited awareness of indoor air pollution. Leveraging 65 days of diverse data encompassing activities like incense stick usage, indoor smoking, inadequately ventilated cooking, excessive AC usage, and accidental paper burning, we developed a comprehensive monitoring system. We identify pollutant sources and effects with high precision through clustering analysis and interpretability models (LIME and SHAP). Our method integrates Decision Trees, Random Forest, Naive Bayes, and SVM models, excelling at 99.8% accuracy with Decision Trees. Continuous 24-hour data allows personalized assessments for targeted pollution reduction strategies, achieving 91% accuracy in predicting activities and pollution exposure.",0 "Hecke modifications of vector bundles have played a significant role in several areas of mathematics. They appear in subjects ranging from number theory to complex geometry. This article intends to be a friendly introduction to the subject. We give an overview of how Hecke modifications appear in the literature, and explain their origin and their importance in number theory and classical algebraic geometry. Moreover, we report the progress made in describing Hecke modifications explicitly and why these explicit descriptions are important. We describe all the Hecke modifications of the trivial rank $2$ vector bundle over a closed point of degree $5$ in the projective line, as well as all the vector bundles over a certain elliptic curve, which admit a rank $2$ and degree $0$ trace bundle as a Hecke modification. 
This result is not present in the existing literature.",0 "The development of Generative AI Large Language Models (LLMs) has raised the alarm regarding distinguishing content produced by generative AI from content written by humans. In one case, issues arise when students heavily rely on such tools in a manner that can affect the development of their writing or coding skills. Other issues of plagiarism also apply. This study aims to support efforts to detect and identify textual content generated using LLM tools. We hypothesize that LLM-generated text is detectable by machine learning (ML), and investigate ML models that can recognize and differentiate texts generated by multiple LLM tools. We leverage several ML and Deep Learning (DL) algorithms, such as Random Forest (RF) and Recurrent Neural Networks (RNN), and utilize Explainable Artificial Intelligence (XAI) to understand the important features in attribution. Our method is divided into 1) binary classification, to differentiate between human-written and AI-generated text, and 2) multi-class classification, to differentiate between human-written text and the text generated by the five different LLM tools (ChatGPT, LLaMA, Google Bard, Claude, and Perplexity). Results show high accuracy in both the binary and multi-class classification. Our model outperformed GPTZero, achieving 98.5\% accuracy compared to 78.3\%. Notably, GPTZero was unable to recognize about 4.2\% of the observations, whereas our model was able to recognize the complete test dataset. XAI results showed that understanding feature importance across different classes enables detailed author/source profiles. Further, this aids attribution and supports plagiarism detection by highlighting unique stylistic and structural elements, ensuring robust verification of content originality.",0 "This study seeks to enhance academic integrity by providing tools to detect AI-generated content in student work using advanced technologies. The findings promote transparency and accountability, helping educators maintain ethical standards and supporting the responsible integration of AI in education. A key contribution of this work is the generation of the CyberHumanAI dataset, which has 1000 observations, 500 of which are written by humans and the other 500 produced by ChatGPT. We evaluate various machine learning (ML) and deep learning (DL) algorithms on the CyberHumanAI dataset, comparing human-written and AI-generated content from Large Language Models (LLMs) (i.e., ChatGPT). Results demonstrate that traditional ML algorithms, specifically XGBoost and Random Forest, achieve high performance (83% and 81% accuracy, respectively). Results also show that classifying shorter content seems to be more challenging than classifying longer content. Further, using Explainable Artificial Intelligence (XAI), we identify discriminative features influencing the ML model's predictions, where human-written content tends to use practical language (e.g., use and allow). Meanwhile, AI-generated text is characterized by more abstract and formal terms (e.g., realm and employ). Finally, a comparative analysis with GPTZero shows that our narrowly focused, simple, and fine-tuned model can outperform generalized systems like GPTZero. The proposed model achieved approximately 77.5% accuracy compared to GPTZero's 48.5% accuracy when tasked to classify the Pure AI, Pure Human, and Mixed classes. 
GPTZero showed a tendency to classify challenging and small-content cases as either mixed or unrecognized, while our proposed model showed a more balanced performance across the three classes.",0 "A new Digital Europe Programme (DEP), a funding instrument for development and innovation, was established in the European Union (EU) in 2021. The paper makes an empirical inquiry into the projects funded through the DEP. According to the results, the projects align well with the DEP's strategic focus on cyber security, artificial intelligence, high-performance computing, innovation hubs, small- and medium-sized enterprises, and education. Most of the projects have received an equal amount of national and EU funding. Although the national origins of participating organizations do not explain the amounts of funding granted, there is a rather strong tendency for national organizations to primarily collaborate with other national organizations. Finally, information about the technological domains addressed and the economic sectors involved provides decent explanatory power for statistically explaining the funding amounts granted. With these results and the accompanying discussion, the paper contributes to the timely debate about innovation, technology development, and industrial policy in Europe.",0 "The increasing complexity of machine learning models in computer vision, particularly in face verification, requires the development of explainable artificial intelligence (XAI) to enhance interpretability and transparency. This study extends previous work by integrating semantic concepts derived from human cognitive processes into XAI frameworks to bridge the comprehension gap between model outputs and human understanding. We propose a novel approach combining global and local explanations, using semantic features defined by user-selected facial landmarks to generate similarity maps and textual explanations via large language models (LLMs). The methodology was validated through quantitative experiments and user feedback, demonstrating improved interpretability. Results indicate that our semantic-based approach, particularly the most detailed set, offers a more nuanced understanding of model decisions than traditional methods. User studies highlight a preference for our semantic explanations over traditional pixel-based heatmaps, emphasizing the benefits of human-centric interpretability in AI. This work contributes to the ongoing efforts to create XAI frameworks that align AI models' behaviour with human cognitive processes, fostering trust and acceptance in critical applications.",2 "Past experiments show that reputation or the knowledge of peers' past cooperation can enhance cooperation in human social networks. On the other hand, the knowledge of peers' wealth undermines cooperativeness, and that of peers' interconnectedness and network structure does not affect it. However, it is unknown whether making peers' subjective well-being (SWB) available or visible in social networks may enhance or undermine cooperation. Therefore, we implemented online network experiments (N = 662 in 50 networked groups with 15 rounds of interactions), in which study participants cooperated with or defected against connected peers through a Public Goods Game, made and cut social ties with others, and rated their SWB. We manipulated the visibility of connected peers' SWB (25 visible vs. 25 invisible SWB networked groups) while keeping the connected peers' reputation and in-game wealth visible. 
Results show that making the peers' SWB visible did not alter overall cooperativeness, wealth, inter-connectedness, or SWB. In contrast, the visible SWB networked groups exhibited a higher number of communities and lower transitivity (the proportion of the cases where a peer of a peer is also a peer) than the invisible SWB networked groups. These phenomena are explained by an altered decision-making pattern in the visible SWB networks: cooperators were less likely to connect with cooperators and more likely to connect with defectors, and consequently, cooperators could not maintain their popularity or stay in the center of the networks.",2 "Many IT companies now use cloud services for deploying their products, mainly because of their convenience. As such, cloud assets have become a new attack surface, and the concept of cloud security has emerged. However, cloud security is not emphasized enough compared to on-premise security, resulting in many insecure cloud architectures. In particular, small organizations often don't have enough human resources to design a secure architecture, leaving them vulnerable to cloud security breaches. We suggest the multi-account strategy for securing the cloud architecture. This strategy cost-effectively improves security by separating assets and reducing management overheads on the cloud infrastructure. When implemented, it automatically provides access restriction within the boundary of an account and eliminates redundancies in policy management. Since access control is a critical objective for constructing secure architectures, this practical method successfully enhances security even in small companies. In this paper, we analyze the benefits of multi-accounts compared to single accounts and explain how to deploy multiple accounts effortlessly using the services provided by AWS. Then, we present possible design choices for multi-account structures with a concrete example. Finally, we illustrate two techniques for operational excellence on multi-account structures. We take an incremental approach to secure policy management with the principle of least privilege and introduce methods for auditing multiple accounts.",0 "The widespread application of pre-trained language models (PLMs) in natural language processing (NLP) has led to increasing concerns about their explainability. Selective rationalization is a self-explanatory framework that selects human-intelligible input subsets as rationales for predictions. Recent studies have shown that applying existing rationalization frameworks to PLMs will result in severe degeneration and failure problems, producing sub-optimal or meaningless rationales. Such failures severely damage trust in rationalization methods and constrain the application of rationalization techniques on PLMs. In this paper, we find that the homogeneity of tokens in the sentences produced by PLMs is the primary contributor to these problems. To address these challenges, we propose a method named Pre-trained Language Model's Rationalization (PLMR), which splits PLMs into a generator and a predictor to deal with NLP tasks while providing interpretable rationales. The generator in PLMR also alleviates homogeneity by pruning irrelevant tokens, while the predictor uses full-text information to standardize predictions. Experiments conducted on two widely used datasets across multiple PLMs demonstrate the effectiveness of the proposed method PLMR in addressing the challenge of applying selective rationalization to PLMs. 
Codes: https://github.com/ylb777/PLMR.",0 "Understanding travelers' route choices can help policymakers devise optimal operational and planning strategies for both normal and abnormal circumstances. However, existing choice modeling methods often rely on predefined assumptions and struggle to capture the dynamic and adaptive nature of travel behavior. Recently, Large Language Models (LLMs) have emerged as a promising alternative, demonstrating remarkable ability to replicate human-like behaviors across various fields. Despite this potential, their capacity to accurately simulate human route choice behavior in transportation contexts remains doubtful. To satisfy this curiosity, this paper investigates the potential of LLMs for route choice modeling by introducing an LLM-empowered agent, ""LLMTraveler."" This agent integrates an LLM as its core, equipped with a memory system that learns from past experiences and makes decisions by balancing retrieved data and personality traits. The study systematically evaluates the LLMTraveler's ability to replicate human-like decision-making through two stages of day-to-day (DTD) congestion games: (1) analyzing its route-switching behavior in single origin-destination (OD) pair scenarios, where it demonstrates patterns that align with laboratory data but cannot be fully explained by traditional models, and (2) testing its capacity to model adaptive learning behaviors in multi-OD scenarios on the Ortuzar and Willumsen (OW) network, producing results comparable to Multinomial Logit (MNL) and Reinforcement Learning (RL) models. These experiments demonstrate that the framework can partially replicate human-like decision-making in route choice while providing natural language explanations for its decisions. This capability offers valuable insights for transportation policymaking, such as simulating traveler responses to new policies or changes in the network.",0 "Most existing social robot navigation techniques either leverage hand-crafted rules or human demonstrations to connect robot perception to socially compliant actions. However, there remains a significant gap in effectively translating perception into socially compliant actions, much like how human reasoning naturally occurs in dynamic environments. Considering the recent success of Vision-Language Models (VLMs), we propose using language to bridge the gap in human-like reasoning between perception and socially aware robot actions. We create a vision-language dataset, Social robot Navigation via Explainable Interactions (SNEI), featuring 40K human-annotated Visual Question Answers (VQAs) based on 2K human-robot social interactions in unstructured, crowded public spaces, spanning perception, prediction, chain-of-thought reasoning, action, and explanation. We fine-tune a VLM, Social-LLaVA, using SNEI to demonstrate the practical application of our dataset. Social-LLaVA outperforms state-of-the-art models like GPT-4V and Gemini, based on the average of fifteen different human-judge scores across 50 VQA. Deployed onboard a mobile robot, Social-LLaVA enables human-like reasoning, marking a promising step toward socially compliant robot navigation in dynamic public spaces through language reasoning.",0 "Explainability is crucial for the application of black-box Graph Neural Networks (GNNs) in critical fields such as healthcare, finance, cybersecurity, and more. 
Various feature attribution methods, especially the perturbation-based methods, have been proposed to indicate how much each node/edge contributes to the model predictions. However, these methods fail to generate connected explanatory subgraphs that consider the causal interaction between edges within different coalition scales, which will result in unfaithful explanations. In our study, we propose GISExplainer, a novel game-theoretic interaction-based explanation method that uncovers what the underlying GNNs have learned for node classification by discovering human-interpretable causal explanatory subgraphs. First, GISExplainer defines a causal attribution mechanism that considers the game-theoretic interaction of multi-granularity coalitions in a candidate explanatory subgraph to quantify the causal effect of an edge on the prediction. Second, GISExplainer assumes that the coalitions with negative effects on the predictions are also significant for model interpretation, and that the contribution of the computation graph stems from the combined influence of both positive and negative interactions within the coalitions. Then, GISExplainer regards the explanation task as a sequential decision process, in which a salient edge is successively selected and connected to the previously selected subgraph based on its causal effect to form an explanatory subgraph, ultimately striving for better explanations. Additionally, an efficiency optimization scheme is proposed for the causal attribution mechanism through coalition sampling. Extensive experiments demonstrate that GISExplainer achieves better performance than state-of-the-art approaches w.r.t. two quantitative metrics: Fidelity and Sparsity.",0 "This paper introduces Agency-Driven Labor Theory (ADLT) as a new theoretical framework for understanding human work in AI-augmented environments. While traditional labor theories have focused primarily on task execution and labor time, ADLT proposes that human labor value is increasingly derived from agency - the capacity to make informed judgments, provide strategic direction, and design operational frameworks for AI systems. The paper presents a mathematical framework expressing labor value as a function of agency quality, direction effectiveness, and outcomes, providing a quantifiable approach to analyzing human value creation in AI-augmented workplaces. Drawing on recent work in organizational economics and knowledge worker productivity, ADLT explains how human workers create value by orchestrating complex systems that combine human and artificial intelligence. The theory has significant implications for job design, compensation structures, professional development, and labor market dynamics. Through applications across various sectors, the paper demonstrates how ADLT can guide organizations in managing the transition to AI-augmented operations while maximizing human value creation. The framework provides practical tools for policymakers and educational institutions as they prepare workers for a labor market where value creation increasingly centers on agency and direction rather than execution.",0 "Iterative Closest Point (ICP) is a commonly used algorithm to estimate the transformation between two point clouds. The key idea of this work is to leverage recent advances in explainable AI for probabilistic ICP methods that provide uncertainty estimates. Concretely, we propose a method that can explain why a probabilistic ICP method produced a particular output. 
Our method is based on kernel SHAP (SHapley Additive exPlanations). With this, we assign an importance value to common sources of uncertainty in ICP such as sensor noise, occlusion, and ambiguous environments. The results of the experiment show that this explanation method can reasonably explain the uncertainty sources, providing a step towards robots that know when and why they failed in a human interpretable manner",0 "In recent years, Large Language Models (LLMs) have become increasingly more powerful in their ability to complete complex tasks. One such task in which LLMs are often employed is scoring, i.e., assigning a numerical value from a certain scale to a subject. In this paper, we strive to understand how LLMs score, specifically in the context of empathy scoring. We develop a novel and comprehensive framework for investigating how effective LLMs are at measuring and scoring empathy of responses in dialogues, and what methods can be employed to deepen our understanding of LLM scoring. Our strategy is to approximate the performance of state-of-the-art and fine-tuned LLMs with explicit and explainable features. We train classifiers using various features of dialogues including embeddings, the Motivational Interviewing Treatment Integrity (MITI) Code, a set of explicit subfactors of empathy as proposed by LLMs, and a combination of the MITI Code and the explicit subfactors. Our results show that when only using embeddings, it is possible to achieve performance close to that of generic LLMs, and when utilizing the MITI Code and explicit subfactors scored by an LLM, the trained classifiers can closely match the performance of fine-tuned LLMs. We employ feature selection methods to derive the most crucial features in the process of empathy scoring. Our work provides a new perspective toward understanding LLM empathy scoring and helps the LLM community explore the potential of LLM scoring in social science studies.",0 "This paper addresses the extraction of the bird vocalization embedding from the whole song level using disentangled representation learning (DRL). Bird vocalization embeddings are necessary for large-scale bioacoustic tasks, and self-supervised methods such as Variational Autoencoder (VAE) have shown their performance in extracting such low-dimensional embeddings from vocalization segments on the note or syllable level. To extend the processing level to the entire song instead of cutting into segments, this paper regards each vocalization as the generalized and discriminative part and uses two encoders to learn these two parts. The proposed method is evaluated on the Great Tits dataset according to the clustering performance, and the results outperform the compared pre-trained models and vanilla VAE. Finally, this paper analyzes the informative part of the embedding, further compresses its dimension, and explains the disentangled performance of bird vocalizations.",0 "Reinforcement learning (RL) has shown great promise in simulated environments, such as games, where failures have minimal consequences. However, the deployment of RL agents in real-world systems such as autonomous vehicles, robotics, UAVs, and medical devices demands a higher level of safety and transparency, particularly when facing adversarial threats. Safe RL algorithms have been developed to address these concerns by optimizing both task performance and safety constraints. 
However, errors are inevitable, and when they occur, it is essential that the RL agents can also explain their actions to human operators. This makes trust in the safety mechanisms of RL systems crucial for effective deployment. Explainability plays a key role in building this trust by providing clear, actionable insights into the agent's decision-making process, ensuring that safety-critical decisions are well understood. While machine learning (ML) has seen significant advances in interpretability and visualization, explainability methods for RL remain limited. Current tools fail to address the dynamic, sequential nature of RL and its needs to balance task performance with safety constraints over time. The re-purposing of traditional ML methods, such as saliency maps, is inadequate for safety-critical RL applications where mistakes can result in severe consequences. To bridge this gap, we propose xSRL, a framework that integrates both local and global explanations to provide a comprehensive understanding of RL agents' behavior. xSRL also enables developers to identify policy vulnerabilities through adversarial attacks, offering tools to debug and patch agents without retraining. Our experiments and user studies demonstrate xSRL's effectiveness in increasing safety in RL systems, making them more reliable and trustworthy for real-world deployment. Code is available at https://github.com/risal-shefin/xSRL.",2 "Machine learning models use high dimensional feature spaces to map their inputs to the corresponding class labels. However, these features often do not have a one-to-one correspondence with physical concepts understandable by humans, which hinders the ability to provide a meaningful explanation for the decisions made by these models. We propose a method for measuring the correlation between high-level concepts and the decisions made by a machine learning model. Our method can isolate the impact of a given high-level concept and accurately measure it quantitatively. Additionally, this study aims to determine the prevalence of frequent patterns in machine learning models, which often occur in imbalanced datasets. We have successfully applied the proposed method to fundus images and managed to quantitatively measure the impact of radiomic patterns on the model decisions.",0 "Causal learning allows humans to predict the effect of their actions on the known environment and use this knowledge to plan the execution of more complex actions. Such knowledge also captures the behaviour of the environment and can be used for its analysis and the reasoning behind the behaviour. This type of knowledge is also crucial in the design of intelligent robotic systems with common sense. In this paper, we study causal relations by learning the forward and inverse models based on data generated by a simulated robotic arm involved in two sensorimotor tasks. As a next step, we investigate feature attribution methods for the analysis of the forward model, which reveals the low-level causal effects corresponding to individual features of the state vector related to both the arm joints and the environment features. This type of analysis provides solid ground for dimensionality reduction of the state representations, as well as for the aggregation of knowledge towards the explainability of causal effects at higher levels.",0 "Data-driven software solutions have significantly been used in critical domains with significant socio-economic, legal, and ethical implications. 
The rapid adoption of data-driven solutions, however, poses major threats to the trustworthiness of automated decision-support software. A diminished understanding of the solution by the developer and historical/current biases in the data sets are primary challenges. To aid data-driven software developers and end-users, we present FairLay-ML, a debugging tool to test and explain the fairness implications of data-driven solutions. FairLay-ML visualizes the logic of datasets, trained models, and decisions for a given data point. In addition, it trains various models with varying fairness-accuracy trade-offs. Crucially, FairLay-ML incorporates counterfactual fairness testing that finds bugs beyond the development datasets. We conducted two studies through FairLay-ML that allowed us to measure false positives/negatives in prevalent counterfactual testing and understand the human perception of counterfactual test cases in a class survey. FairLay-ML and its benchmarks are publicly available at https://github.com/Pennswood/FairLay-ML. The live version of the tool is available at https://fairlayml-v2.streamlit.app/. We provide a video demo of the tool at https://youtu.be/wNI9UWkywVU?t=133.",2 "Gastrointestinal (GI) bleeding is a serious medical condition that presents significant diagnostic challenges, particularly in settings with limited access to healthcare resources. Wireless Capsule Endoscopy (WCE) has emerged as a powerful diagnostic tool for visualizing the GI tract, but it requires time-consuming manual analysis by experienced gastroenterologists, which is prone to human error and inefficient given the increasing number of patients. To address this challenge, we propose ClassifyViStA, an AI-based framework designed for the automated detection and classification of bleeding and non-bleeding frames from WCE videos. The model consists of a standard classification path, augmented by two specialized branches: an implicit attention branch and a segmentation branch. The attention branch focuses on the bleeding regions, while the segmentation branch generates accurate segmentation masks, which are used for classification and interpretability. The model is built upon an ensemble of ResNet18 and VGG16 architectures to enhance classification performance. For bleeding region detection, we implement a Soft Non-Maximum Suppression (Soft NMS) approach with YOLOv8, which improves the handling of overlapping bounding boxes, resulting in more accurate and nuanced detections. The system's interpretability is enhanced by using the segmentation masks to explain the classification results, offering insights into the decision-making process similar to the way a gastroenterologist identifies bleeding regions. Our approach not only automates the detection of GI bleeding but also provides an interpretable solution that can ease the burden on healthcare professionals and improve diagnostic efficiency. Our code is available at ClassifyViStA.",2 "This paper presents a comprehensive overview of the first edition of the Academic Essay Authenticity Challenge, organized as part of the GenAI Content Detection shared tasks collocated with COLING 2025. This challenge focuses on detecting machine-generated vs. human-authored essays for academic purposes. The task is defined as follows: ""Given an essay, identify whether it is generated by a machine or authored by a human."" The challenge involves two languages: English and Arabic. 
During the evaluation phase, 25 teams submitted systems for English and 21 teams for Arabic, reflecting substantial interest in the task. Finally, seven teams submitted system description papers. The majority of submissions utilized fine-tuned transformer-based models, with one team employing Large Language Models (LLMs) such as Llama 2 and Llama 3. This paper outlines the task formulation, details the dataset construction process, and explains the evaluation framework. Additionally, we present a summary of the approaches adopted by participating teams. Nearly all submitted systems outperformed the n-gram-based baseline, with the top-performing systems achieving F1 scores exceeding 0.98 for both languages, indicating significant progress in the detection of machine-generated text.",0 "The uses of machine learning (ML) have snowballed in recent years. In many cases, ML models are highly complex, and their operation is beyond the understanding of human decision-makers. Nevertheless, some uses of ML models involve high-stakes and safety-critical applications. Explainable artificial intelligence (XAI) aims to help human decision-makers in understanding the operation of such complex ML models, thus eliciting trust in their operation. Unfortunately, the majority of past XAI work is based on informal approaches, that offer no guarantees of rigor. Unsurprisingly, there exists comprehensive experimental and theoretical evidence confirming that informal methods of XAI can provide human-decision makers with erroneous information. Logic-based XAI represents a rigorous approach to explainability; it is model-based and offers the strongest guarantees of rigor of computed explanations. However, a well-known drawback of logic-based XAI is the complexity of logic reasoning, especially for highly complex ML models. Recent work proposed distance-restricted explanations, i.e. explanations that are rigorous provided the distance to a given input is small enough. Distance-restricted explainability is tightly related with adversarial robustness, and it has been shown to scale for moderately complex ML models, but the number of inputs still represents a key limiting factor. This paper investigates novel algorithms for scaling up the performance of logic-based explainers when computing and enumerating ML model explanations with a large number of inputs.",0 "Models based on human-understandable concepts have received extensive attention to improve model interpretability for trustworthy artificial intelligence in the field of medical image analysis. These methods can provide convincing explanations for model decisions but heavily rely on the detailed annotation of pre-defined concepts. Consequently, they may not be effective in cases where concepts or annotations are incomplete or low-quality. Although some methods automatically discover effective and new visual concepts rather than using pre-defined concepts or could find some human-understandable concepts via large Language models, they are prone to veering away from medical diagnostic evidence and are challenging to understand. In this paper, we propose a concept complement bottleneck model for interpretable medical image diagnosis with the aim of complementing the existing concept set and finding new concepts bridging the gap between explainable models. Specifically, we propose to use concept adapters for specific concepts to mine the concept differences and score concepts in their own attention channels to support almost fairly concept learning. 
Then, we devise a concept complement strategy to learn new concepts while jointly using known concepts to improve model performance. Comprehensive experiments on medical datasets demonstrate that our model outperforms the state-of-the-art competitors in concept detection and disease diagnosis tasks while providing diverse explanations that effectively ensure model interpretability.",0 "Many AI systems focus solely on providing solutions or explaining outcomes. However, complex tasks like research and strategic thinking often benefit from a more comprehensive approach to augmenting the thinking process rather than passively receiving information. We introduce the concept of a ""Thinking Assistant"", a new genre of assistants that help users improve decision-making by asking reflection questions grounded in expert knowledge. Through our lab study (N=80), these Large Language Model (LLM) based Thinking Assistants were better able to guide users to make important decisions, compared with conversational agents that only asked questions, provided advice, or neither. Based on the results, we develop a Thinking Assistant for academic career development, a task of determining one's research trajectory or developing a unique research identity, which requires deliberation, reflection, and expert advice. In a longitudinal deployment with 223 conversations, participants responded positively to approximately 65% of the responses. Our work proposes directions for developing more effective LLM agents. Rather than adhering to the prevailing authoritative approach of generating definitive answers, LLM agents aimed at assisting with cognitive enhancement should prioritize fostering reflection. They should initially provide responses designed to prompt thoughtful consideration through inquiry, followed by offering advice only after gaining a deeper understanding of the user's context and needs.",2 "Concept bottleneck models are interpretable predictive models that are often used in domains where model trust is a key priority, such as healthcare. They identify a small number of human-interpretable concepts in the data, which they then use to make predictions. Learning relevant concepts from data proves to be a challenging task. The most predictive concepts may not align with expert intuition, thus failing interpretability with no recourse. Our proposed approach identifies a number of predictive concepts that explain the data. By offering multiple alternative explanations, we allow the human expert to choose the one that best aligns with their expectations. To demonstrate our method, we show that it is able to discover all possible concept representations on a synthetic dataset. On EHR data, our model was able to identify 4 out of the 5 pre-defined concepts without supervision.",0 "Recent breakthroughs in AI capability have been attributed to increasingly sophisticated architectures and alignment techniques, but a simpler principle may explain these advances: memory makes computation universal. Memory enables universal computation through two fundamental capabilities: recursive state maintenance and reliable history access. We formally prove these requirements are both necessary and sufficient for universal computation. This principle manifests across scales, from cellular computation to neural networks to language models. Complex behavior emerges not from sophisticated processing units but from maintaining and accessing state across time. 
We demonstrate how parallel systems like neural networks achieve universal computation despite limitations in their basic units by maintaining state across iterations. This theoretical framework reveals a universal pattern: computational advances consistently emerge from enhanced abilities to maintain and access state rather than from more complex basic operations. Our analysis unifies understanding of computation across biological systems, artificial intelligence, and human cognition, reminding us that humanity's own computational capabilities have evolved in step with our technical ability to remember through oral traditions, writing, and now computing.",0 "This paper studies the relationship between the student's abilities in the second year of high school and the infrastructural endowment in all Italian municipalities, using spatial Bayesian modelling. Municipal student scores are obtained by averaging standardized and spatially homogeneous indicators of student outcomes provided by the Invalsi Institute for two subjects, Italian and Mathematics. Given the nature of the data, we employ a multilevel regression model assuming a bivariate Intrinsic Conditionally Autoregressive (ICAR) latent effect to explain the spatial variability and account for the correlation between the two subjects. Bayesian model estimation is obtained by the Integrated Nested Laplace Approximation (INLA), implemented in the \texttt{R-INLA} package. We find that alongside a significant association with the current state of school infrastructure and facilities, spatially structured latent effects are still necessary to explain the different student outcomes across municipalities.",0 "The widespread adoption of machine learning in scientific research has created a fundamental tension between model opacity and scientific understanding. Whilst some advocate for intrinsically interpretable models, we introduce Computational Interpretabilism (CI) as a philosophical framework for post-hoc interpretability in scientific AI. Drawing parallels with human expertise, where post-hoc rationalisation coexists with reliable performance, CI establishes that scientific knowledge emerges through structured model interpretation when properly bounded by empirical validation. Through mediated understanding and bounded factivity, we demonstrate how post-hoc methods achieve epistemically justified insights without requiring complete mechanical transparency, resolving tensions between model complexity and scientific comprehension.",0 "Visual Language Models have demonstrated remarkable capabilities across tasks, including visual question answering and image captioning. However, most models rely on text-based instructions, limiting their effectiveness in human-machine interactions. Moreover, the quality of language models depends on reasoning and prompting techniques, such as COT, which remain underexplored when using speech instructions. To address these challenges, we propose SilVar, a novel end-to-end multimodal model that uses speech instructions for reasoning in visual question answering. In addition, we investigate reasoning techniques with levels including conversational, simple, and complex speech instruction. SilVar is built upon CLIP, Whisper, and LLaMA 3.1-8B, enabling intuitive interactions by allowing users to provide verbal or text instructions. To this end, we introduce a dataset designed to challenge models with speech-based reasoning tasks for object localization. 
This dataset enhances the model's ability to process and explain visual scenes from spoken input, moving beyond object recognition to reasoning-based interactions. The experiments show that SilVar achieves SOTA performance on the MMMU and ScienceQA benchmarks despite the challenge of speech-based instructions. We believe SilVar will inspire next-generation multimodal reasoning models, toward expert artificial general intelligence. Our code and dataset are available here.",0 "Raga identification is an important problem within the domain of Indian Art music, as Ragas are fundamental to its composition and performance, playing a crucial role in music retrieval, preservation, and education. The few studies that have explored this task employ approaches such as signal processing, Machine Learning (ML), and more recently, Deep Learning (DL) based methods. However, a key question remains unanswered in all these works: do these ML/DL methods learn and interpret Ragas in a manner similar to human experts? Moreover, a significant roadblock in this research is the unavailability of an ample supply of rich, labeled datasets, which drives these ML/DL-based methods. In this paper, we first curate a dataset comprising 191 hours of Hindustani Classical Music (HCM) recordings, annotate it for Raga and tonic labels, and train a CNN-LSTM model for the task of Automatic Raga Identification (ARI). We achieve a chunk-wise f1-measure of 0.89 for a subset of 12 Raga classes. Following this, we make one of the first attempts to employ model explainability techniques, SoundLIME and GradCAM++, for Raga identification, to evaluate whether the classifier's predictions align with human understanding of Ragas. We compare the generated explanations with human expert annotations and further analyze individual test examples to understand the role of the regions highlighted by explanations in the correct or incorrect predictions made by the model. Our results demonstrate a significant alignment of the model's understanding with human understanding, and the thorough analysis validates the effectiveness of our approach.",0 "The LLM-as-judge paradigm is increasingly being adopted for automated evaluation of model outputs. While LLM judges have shown promise on constrained evaluation tasks, closed-source LLMs display critical shortcomings when deployed in real-world applications due to challenges of fine-grained metrics and explainability, while task-specific evaluation models lack cross-domain generalization. We introduce GLIDER, a powerful 3B evaluator LLM that can score any text input and associated context on arbitrary user-defined criteria. GLIDER shows higher Pearson's correlation than GPT-4o on FLASK and greatly outperforms prior evaluation models, achieving comparable performance to LLMs 17x its size. GLIDER supports fine-grained scoring, multilingual reasoning, and span highlighting, and was trained on 685 domains and 183 criteria. Extensive qualitative analysis shows that GLIDER scores are highly correlated with human judgments, with 91.3% human agreement. We have open-sourced GLIDER to facilitate future research.",0 "Despite the recent successes of large, pretrained neural language models (LLMs), comparatively little is known about the representations of linguistic structure they learn during pretraining, which can lead to unexpected behaviors in response to prompt variation or distribution shift. 
To better understand these models and behaviors, we introduce a general model analysis framework to study LLMs with respect to their representation and use of human-interpretable linguistic properties. Our framework, CALM (Competence-based Analysis of Language Models), is designed to investigate LLM competence in the context of specific tasks by intervening on models' internal representations of different linguistic properties using causal probing, and measuring models' alignment under these interventions with a given ground-truth causal model of the task. We also develop a new approach for performing causal probing interventions using gradient-based adversarial attacks, which can target a broader range of properties and representations than prior techniques. Finally, we carry out a case study of CALM using these interventions to analyze and compare LLM competence across a variety of lexical inference tasks, showing that CALM can be used to explain behaviors across these tasks.",0 "Network-based representations of fitness landscapes have grown in popularity in the past decade; this is probably because of growing interest in explainability for optimisation algorithms. Local optima networks (LONs) have been especially dominant in the literature and capture an approximation of local optima and their connectivity in the landscape. Thus far, however, LONs have been constructed according to a strict definition of what a local optimum is: the result of local search. Many evolutionary approaches do not include local search, and popular algorithms such as CMA-ES have therefore never been subject to LON analysis. Search trajectory networks (STNs) offer a possible alternative: nodes can be any search space location. However, STNs are not typically modelled in a way that captures temporal stalls: that is, a region in the search space where an algorithm fails to find a better solution over a defined period of time. In this work, we address this by systematically analysing a special case of STNs which we name attractor networks. These offer a coarse-grained view of algorithm behaviour with a singular focus on stall locations. We construct attractor networks for CMA-ES, differential evolution, and random search for 24 noiseless black-box optimisation benchmark problems. The properties of attractor networks are systematically explored. They are also visualised and compared to traditional LON and STN models. We find that attractor networks facilitate insights into algorithm behaviour which other models cannot, and we advocate for the consideration of attractor analysis even for algorithms which do not include local search.",0 "Behavioral experiments on the ultimatum game (UG) reveal that we humans prefer fair acts, which contradicts the prediction of orthodox economics. Existing explanations, however, mostly attribute this preference to exogenous factors within the imitation learning framework. Here, we adopt the reinforcement learning paradigm, where individuals make their moves aiming to maximize their accumulated rewards. Specifically, we apply Q-learning to the UG, where each player is assigned two Q-tables to guide decisions for the roles of proposer and responder. In a two-player scenario, fairness emerges prominently when both experiences and future rewards are appreciated. In particular, the probability of successful deals increases with higher offers, which aligns with observations in behavioral experiments. 
Our mechanism analysis reveals that the system undergoes two phases, eventually stabilizing into fair or rational strategies. These results are robust when the rotating role assignment is replaced by random or fixed assignment, or when the scenario is extended to a latticed population. Our findings thus indicate that endogenous factors are sufficient to explain the emergence of fairness; exogenous factors are not needed.",0 "CuInS2 quantum dots have been studied in a broad range of applications, but despite this, the fine details of their charge carrier dynamics remain a subject of intense debate. Two of the most relevant points of discussion are the hole dynamics and the influence of Cu:In synthesis stoichiometry on them. It has been proposed that Cu-deficiency leads to the formation of Cu2+, affecting the localization of holes into Cu defects. Importantly, it is precisely these confined hole states which are used to explain the interesting photoluminescence properties of CuInS2 quantum dots. We use static X-ray spectroscopy to reveal no evidence for a measurable amount of native Cu2+ states in Cu-deficient samples. Instead, the improved properties of these samples are explained by an increase in crystallinity, reducing the concentration of mid-gap states. Furthermore, to understand the charge carrier dynamics, we employ ultrafast optical transient absorption and fluorescence up-conversion spectroscopies in combination with ultrafast X-ray absorption spectroscopy using a hard X-ray free electron laser. We demonstrate that in non-passivated samples, holes are transferred from Cu atoms on sub-picosecond timescales. We assign this transfer to the thiol-based ligands. Finally, we observe that Cu-deficient samples are more robust against the photothermal heating effects of higher laser fluences. This is not the case for the stoichiometric sample, where heating effects on the structure are directly observed.",0 "We present ARCAS (Automated Root Cause Analysis System), a diagnostic platform based on a Domain Specific Language (DSL) built for fast diagnostic implementation and a low learning curve. ARCAS is composed of a constellation of automated troubleshooting guides (Auto-TSGs) that can execute in parallel to detect issues using product telemetry and apply mitigation in near-real-time. The DSL is tailored specifically to ensure that subject matter experts can deliver highly curated and relevant Auto-TSGs in a short time without having to understand how they will interact with the rest of the diagnostic platform, thus reducing time-to-mitigate and saving crucial engineering cycles when they matter most. This contrasts with platforms like Datadog and New Relic, which primarily focus on monitoring and require manual intervention for mitigation. ARCAS uses a Large Language Model (LLM) to prioritize Auto-TSG outputs and take appropriate actions, thus removing the costly requirement of understanding the general behavior of the system. We explain the key concepts behind ARCAS and demonstrate how it has been successfully used for multiple products across Azure Synapse Analytics and Microsoft Fabric Synapse Data Warehouse.",0 "Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant research into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). 
Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness for advocacy is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.",2 "As Artificial Intelligence (AI) continues to advance rapidly, Friendly AI (FAI) has been proposed to advocate for more equitable and fair development of AI. Despite its importance, there is a lack of comprehensive reviews examining FAI from an ethical perspective, as well as limited discussion on its potential applications and future directions. This paper addresses these gaps by providing a thorough review of FAI, focusing on theoretical perspectives both for and against its development, and presenting a formal definition in a clear and accessible format. Key applications are discussed from the perspectives of eXplainable AI (XAI), privacy, fairness and affective computing (AC). Additionally, the paper identifies challenges in current technological advancements and explores future research avenues. The findings emphasise the significance of developing FAI and advocate for its continued advancement to ensure ethical and beneficial AI development.",0 "Extracting time-varying latent variables from computational cognitive models is a key step in model-based neural analysis, which aims to understand the neural correlates of cognitive processes. However, existing methods only allow researchers to infer latent variables that explain subjects' behavior in a relatively small class of cognitive models. For example, a broad class of relevant cognitive models with analytically intractable likelihood is currently out of reach from standard techniques, based on Maximum a Posteriori parameter estimation. Here, we present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space using recurrent neural networks and simulated datasets. We show that our approach achieves competitive performance in inferring latent variable sequences in both tractable and intractable models. Furthermore, the approach is generalizable across different computational models and is adaptable for both continuous and discrete latent spaces. We then demonstrate its applicability in real world datasets. 
Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models for model-based neural analyses, and thus test a broader set of theories.",0 "The quantum three-rotor problem concerns the dynamics of 3 equally massive particles moving on a circle subject to pairwise attractive cosine potentials and can model coupled Josephson junctions. Classically, it displays order-chaos-order behavior with increasing energy. The quantum system admits a dimensionless coupling with semiclassical behavior at strong coupling. We study stationary states with periodic `relative' wave functions. Perturbative and harmonic approximations capture the spectrum at weak coupling and that of low-lying states at strong coupling. More generally, the cumulative distribution of energy levels obtained by numerical diagonalization is well-described by a Weyl-like semiclassical estimate. However, the system has an $S_3 \times Z_2$ symmetry that is obscured when working with relative angles. By exploiting a basis for invariant states, we obtain the spectrum restricted to the identity representation. To uncover universal quantum hallmarks of chaos, we partition the spectrum into energy windows where the classical motion is regular, mixed or chaotic and unfold each separately. At strong coupling, we find striking signatures of transitions between regularity and chaos: spacing distributions morph from Poisson to Wigner-Dyson while the number variance shifts from linear to logarithmic behavior at small lengths. Some nonuniversal features are also examined. For instance, for strong coupling, the number variance saturates and oscillates at large lengths while the spectral form factor displays a nonuniversal peak at short times. Moreover, deviations from Poisson spacings at asymptotically low and high energies are well-explained by quantum harmonic and free-rotor spectra projected to the identity representation at strong and weak coupling. Interestingly, the degeneracy of free-rotor levels admits an elegant formula that we deduce using properties of Eisenstein primes.",0 "Improving global school connectivity is critical for ensuring inclusive and equitable quality education. To reliably estimate the cost of connecting schools, governments and connectivity providers require complete and accurate school location data - a resource that is often scarce in many low- and middle-income countries. To address this challenge, we propose a cost-effective, scalable approach to locating schools in high-resolution satellite images using weakly supervised deep learning techniques. Our best models, which combine vision transformers and convolutional neural networks, achieve AUPRC values above 0.96 across 10 pilot African countries. Leveraging explainable AI techniques, our approach can approximate the precise geographical coordinates of the school locations using only low-cost, classification-level annotations. To demonstrate the scalability of our method, we generate nationwide maps of school location predictions in African countries and present a detailed analysis of our results, using Senegal as our case study. Finally, we demonstrate the immediate usability of our work by introducing an interactive web mapping tool to streamline human-in-the-loop model validation efforts by government partners. 
This work successfully showcases the real-world utility of deep learning and satellite images for planning regional infrastructure and accelerating universal school connectivity.",0 "As artificial intelligence (AI) continues advancing, ensuring positive societal impacts becomes critical, especially as AI systems become increasingly ubiquitous in various aspects of life. However, developing ""AI for good"" poses substantial challenges around aligning systems with complex human values. Presently, we lack mature methods for addressing these challenges. This article presents and evaluates the Positive AI design method aimed at addressing this gap. The method provides a human-centered process to translate wellbeing aspirations into concrete practices. First, we explain the method's four key steps: contextualizing, operationalizing, optimizing, and implementing wellbeing supported by continuous measurement for feedback cycles. We then present a multiple case study where novice designers applied the method, revealing strengths and weaknesses related to efficacy and usability. Next, an expert evaluation study assessed the quality of the resulting concepts, rating them moderately high for feasibility, desirability, and plausibility of achieving intended wellbeing benefits. Together, these studies provide preliminary validation of the method's ability to improve AI design, while surfacing areas needing refinement like developing support for complex steps. Proposed adaptations such as examples and evaluation heuristics could address weaknesses. Further research should examine sustained application over multiple projects. This human-centered approach shows promise for realizing the vision of 'AI for Wellbeing' that does not just avoid harm, but actively benefits humanity.",0 "Clinical trials or studies oftentimes require long-term and/or costly follow-up of participants to evaluate a novel treatment/drug/vaccine. There has been increasing interest in the past few decades in using short-term surrogate outcomes as a replacement of the primary outcome i.e., in using the surrogate outcome, which can potentially be observed sooner, to make inference about the treatment effect on the long-term primary outcome. Very few of the available statistical methods to evaluate a surrogate are applicable to settings where both the surrogate and the primary outcome are time-to-event outcomes subject to censoring. Methods that can handle this setting tend to require parametric assumptions or be limited to assessing only the restricted mean survival time. In this paper, we propose a non-parametric approach to evaluate a censored surrogate outcome, such as time to progression, when the primary outcome is also a censored time-to-event outcome, such as time to death, and the treatment effect of interest is the difference in overall survival. Specifically, we define the proportion of the treatment effect on the primary outcome that is explained (PTE) by the censored surrogate outcome in this context, and estimate this proportion by defining and deriving an optimal transformation of the surrogate information. Our approach provides the added advantage of relaxed assumptions to guarantee that the true PTE is within (0,1), along with being model-free. 
The finite-sample performance of our estimators is illustrated via extensive simulation studies and a real data application examining progression-free survival as a surrogate for overall survival for patients with metastatic colorectal cancer.",2 "Artificial intelligence (AI) has rapidly developed through advancements in computational power and the growth of massive datasets. However, this progress has also heightened challenges in interpreting the ""black-box"" nature of AI models. To address these concerns, eXplainable AI (XAI) has emerged with a focus on transparency and interpretability to enhance human understanding and trust in AI decision-making processes. In the context of multimodal data fusion and complex reasoning scenarios, Multimodal eXplainable AI (MXAI) has been proposed to integrate multiple modalities for prediction and explanation tasks. Meanwhile, the advent of Large Language Models (LLMs) has led to remarkable breakthroughs in natural language processing, yet their complexity has further exacerbated the challenges of MXAI. To gain key insights into the development of MXAI methods and provide crucial guidance for building more transparent, fair, and trustworthy AI systems, we review MXAI methods from a historical perspective and categorize them across four eras: traditional machine learning, deep learning, discriminative foundation models, and generative LLMs. We also review evaluation metrics and datasets used in MXAI research, concluding with a discussion of future challenges and directions. A project related to this review has been created at https://github.com/ShilinSun/mxai_review.",0 "We introduce FarExStance, a new dataset for explainable stance detection in Farsi. Each instance in this dataset contains a claim, the stance of an article or social media post towards that claim, and an extractive explanation which provides evidence for the stance label. We compare the performance of a fine-tuned multilingual RoBERTa model to several large language models in zero-shot, few-shot, and parameter-efficient fine-tuned settings on our new dataset. On stance detection, the most accurate models are the fine-tuned RoBERTa model, the LLM Aya-23-8B which has been fine-tuned using parameter-efficient fine-tuning, and few-shot Claude-3.5-Sonnet. Regarding the quality of the explanations, our automatic evaluation metrics indicate that few-shot GPT-4o generates the most coherent explanations, while our human evaluation reveals that the best Overall Explanation Score (OES) belongs to few-shot Claude-3.5-Sonnet. The fine-tuned Aya-23-8B model produced explanations most closely aligned with the reference explanations.",0 "Every human with a functioning vestibular system is capable of feeling motion sickness, but some are more vulnerable than others. Based on the leading theories explaining this condition, vulnerability should be predicted by a person's years of real-life experience before using a VR device and years of VR experience after. A questionnaire on susceptibility to motion sickness in VR was completed by people on VR-related forums. Results from the survey show that the condition has a significant relationship with age or experience outside the environment.",2 "Governments typically collect and steward a vast amount of high-quality data on their citizens and institutions, and the UK government is exploring how it can better publish and provision this data to the benefit of the AI landscape. 
However, the compositions of generative AI training corpora remain closely guarded secrets, making the planning of data sharing initiatives difficult. To address this, we devise two methods to assess UK government data usage for the training of Large Language Models (LLMs) and 'peek behind the curtain' in order to observe the UK government's current contributions as a data provider for AI. The first method, an ablation study that utilises LLM 'unlearning', seeks to examine the importance of the information held on UK government websites for LLMs and their performance in citizen query tasks. The second method, an information leakage study, seeks to ascertain whether LLMs are aware of the information held in the datasets published on the UK government's open data initiative data$.$gov$.$uk. Our findings indicate that UK government websites are important data sources for AI (heterogeneously across subject matters) while data$.$gov$.$uk is not. This paper serves as a technical report, explaining in depth the designs, mechanics, and limitations of the above experiments. It is accompanied by a complementary non-technical report on the ODI website in which we summarise the experiments and key findings, interpret them, and build a set of actionable recommendations for the UK government to take forward as it seeks to design AI policy. While we focus on UK open government data, we believe that the methods introduced in this paper present a reproducible approach to tackling the opaqueness of AI training corpora and provide organisations with a framework to evaluate and maximise their contributions to AI development.",0 "We study strongly isochronous Hamiltonians that generate periodic time evolution with the same basic period for a dense set of initial values. We explain that all such Hamiltonians are maximally superintegrable, and show that if the system is subjected to Hamiltonian reduction based on a compact symmetry group and certain conditions are met, then the reduced Hamiltonian is strongly isochronous with the original basic period. We utilize these simple observations to demonstrate the maximal superintegrability of rational spin Calogero--Moser type models in a confining harmonic potential.",0 "The ubiquitous use of Shapley values in eXplainable AI (XAI) has been triggered by the tool SHAP, and as a result these values are commonly referred to as SHAP scores. Recent work devised examples of machine learning (ML) classifiers for which the computed SHAP scores are thoroughly unsatisfactory, by allowing human decision-makers to be misled. Nevertheless, such examples could be perceived as somewhat artificial, since the selected classes must be interpreted as numeric. Furthermore, it was unclear how general the issues identified with SHAP scores were. This paper answers these criticisms. First, the paper shows that for Boolean classifiers there are arbitrarily many examples for which the SHAP scores must be deemed unsatisfactory. Second, the paper shows that the issues with SHAP scores are also observed in the case of regression models. In addition, the paper studies the class of regression models that respect Lipschitz continuity, a measure of a function's rate of change that has found important recent uses in ML, including model robustness. Concretely, the paper shows that the issues with SHAP scores occur even for regression models that respect Lipschitz continuity. 
Finally, the paper shows that the same issues are guaranteed to exist for arbitrarily differentiable regression models.",0 "Deep neural networks (DNNs) are vulnerable to adversarial examples (AEs) that mislead the model while appearing benign to human observers. A critical concern is the transferability of AEs, which enables black-box attacks without direct access to the target model. However, many previous attacks have failed to explain the intrinsic mechanism of adversarial transferability. In this paper, we rethink the property of transferable AEs and reformalize the formulation of transferability. Building on insights from this mechanism, we analyze the generalization of AEs across models with different architectures and prove that we can find a local perturbation to mitigate the gap between surrogate and target models. We further establish the inner connections between model smoothness and flat local maxima, both of which contribute to the transferability of AEs. Further, we propose a new adversarial attack algorithm, \textbf{A}dversarial \textbf{W}eight \textbf{T}uning (AWT), which adaptively adjusts the parameters of the surrogate model using generated AEs to optimize the flat local maxima and model smoothness simultaneously, without the need for extra data. AWT is a data-free tuning method that combines gradient-based and model-based attack methods to enhance the transferability of AEs. Extensive experiments on a variety of models with different architectures on ImageNet demonstrate that AWT yields superior performance over other attacks, with an average increase of nearly 5\% and 10\% attack success rates on CNN-based and Transformer-based models, respectively, compared to state-of-the-art attacks.",0 "All biological systems are subject to perturbations: due to thermal fluctuations, external environments, or mutations. Yet, while biological systems are composed of thousands of interacting components, recent high-throughput experiments show that their response to perturbations is surprisingly low-dimensional: confined to only a few stereotyped changes out of the many possible. Here, we explore a unifying dynamical systems framework - soft modes - to explain and analyze low-dimensionality in biology, from molecules to eco-systems. We argue that this one framework of soft modes makes non-trivial predictions that generalize classic ideas from developmental biology to disparate systems, namely: phenocopying, dual buffering, and global epistasis. While some of these predictions have been borne out in experiments, we discuss how soft modes allow for a surprisingly far-reaching and unifying framework in which to analyze data from protein biophysics to microbial ecology.",0 "Machine-generated music (MGM) has become a groundbreaking innovation with wide-ranging applications, such as music therapy, personalised editing, and creative inspiration within the music industry. However, the unregulated proliferation of MGM presents considerable challenges to the entertainment, education, and arts sectors by potentially undermining the value of high-quality human compositions. Consequently, MGM detection (MGMD) is crucial for preserving the integrity of these fields. Despite its significance, MGMD domain lacks comprehensive benchmark results necessary to drive meaningful progress. To address this gap, we conduct experiments on existing large-scale datasets using a range of foundational models for audio processing, establishing benchmark results tailored to the MGMD task. 
Our selection includes traditional machine learning models, deep neural networks, Transformer-based architectures, and State Space Models (SSMs). Recognising the inherently multimodal nature of music, which integrates both melody and lyrics, we also explore fundamental multimodal models in our experiments. Beyond providing basic binary classification outcomes, we delve deeper into model behaviour using multiple explainable Artificial Intelligence (XAI) tools, offering insights into their decision-making processes. Our analysis reveals that ResNet18 performs best in both in-domain and out-of-domain tests. By providing a comprehensive comparison of benchmark results and their interpretability, we propose several directions to inspire future research to develop more robust and effective detection methods for MGM.",0 "After recalling the principles that allow space-time to be considered by analogy as an elastic medium, we show how modified gravity according to the MOND theory, which concerns the anomaly of the velocities of stars at the periphery of galaxies, can be seen as a creep of space acting on the radius of galaxies, giving a creep coefficient Phi_space = (a0/a) x (Ro_local/Ro_mean) - 1. The values vary between 0.2 and 9 depending on the type of galaxy and density distribution. Considering the gravitational lensing effect of the Bullet Cluster, we obtain a creep coefficient Phi_space = (1 - pv)/pv, where pv is the percentage of visible matter and pDM the percentage of dark matter in the total mass (pv + pDM = 1). The values vary between 0.66 and 4 for this cluster. Via these creep coefficients, this paper therefore raises the question of the possible granular nature of the vacuum, and hence of the fabric of space, on the one hand, and proposes an alternative dark-matter-free approach based on the creep of the texture of space to explain gravitational anomalies on the other.",0 "Various evaluation metrics have been proposed for Grammatical Error Correction (GEC), but many, particularly reference-free metrics, lack explainability. This lack of explainability hinders researchers from analyzing the strengths and weaknesses of GEC models and limits the ability to provide detailed feedback for users. To address this issue, we propose attributing sentence-level scores to individual edits, providing insight into how specific corrections contribute to the overall performance. For the attribution method, we use Shapley values, from cooperative game theory, to compute the contribution of each edit. Experiments with existing sentence-level metrics demonstrate high consistency across different edit granularities and show approximately 70\% alignment with human evaluations. In addition, we analyze biases in the metrics based on the attribution results, revealing trends such as the tendency to ignore orthographic edits. Our implementation is available at \url{https://github.com/naist-nlp/gec-attribute}.",0 "Recent advancements in large language models (LLMs) have shown promise in generating psychotherapeutic dialogues, particularly in the context of motivational interviewing (MI). However, the inherent lack of transparency in LLM outputs presents significant challenges given the sensitive nature of psychotherapy. Applying MI strategies, a set of MI skills, to generate more controllable, therapeutically adherent conversations with explainability provides a possible solution. 
In this work, we explore the alignment of LLMs with MI strategies by first prompting the LLMs to predict the appropriate strategies as reasoning and then utilizing these strategies to guide the subsequent dialogue generation. We seek to investigate whether such alignment leads to more controllable and explainable generations. Multiple experiments, including automatic and human evaluations, are conducted to validate the effectiveness of MI strategies in aligning psychotherapy dialogue generation. Our findings demonstrate the potential of LLMs in producing strategically aligned dialogues and suggest directions for practical applications in psychotherapeutic settings.",0 "Background: Humor is a fundamental part of human communication, with prior work linking positive humor in the workplace to positive outcomes, such as improved performance and job satisfaction. Aims: This study aims to investigate programming-related humor in a large social media community. Methodology: We collected 139,718 submissions from the Reddit subreddit r/ProgrammerHumor. Both textual and image-based (memes) submissions were considered. The image data was processed with OCR to extract text from images for NLP analysis. Multiple regression models were built to investigate what makes submissions humorous. Additionally, a random sample of 800 submissions was labeled by human annotators regarding their relation to theories of humor, suitability for the workplace, the need for programming knowledge to understand the submission, and whether images in image-based submissions added context to the submission. Results: Our results indicate that predicting the humor of software developers is difficult. Our best regression model was able to explain only 10% of the variance. However, statistically significant differences were observed between topics, submission times, and associated humor theories. Our analysis reveals that the highest submission scores are achieved by image-based submissions that are created during the winter months in the northern hemisphere, between 2 and 3 pm UTC on weekends, which are distinctly related to superiority and incongruity theories of humor, and are about the topic of ""Learning"". Conclusions: Predicting humor with natural language processing methods is challenging. We discuss the benefits and inherent difficulties in assessing the perceived humor of submissions, as well as possible avenues for future work. Additionally, our replication package should help future studies and can act as a joke repository for the software industry and education.",2 "Temporal higher-order networks, where each hyperlink involving a group of nodes is activated or deactivated over time, have recently been used to represent complex systems such as social contacts, interactions or collaborations that occur at specific times. Such networks are substrates for social contagion processes like the diffusion of information and opinions. In this work, we consider eight temporal higher-order networks derived from human face-to-face interactions in various contexts and the Susceptible-Infected threshold process on each of these networks: whenever a hyperlink is active and the number of infected nodes in the hyperlink exceeds a threshold $\Theta$, each susceptible node in the hyperlink is infected independently with probability $\beta$. 
The objective is to understand (1) the contribution of each hyperlink to the diffusion process, namely, the average number of nodes that are infected directly via the activation of the hyperlink when the diffusion starts from an arbitrary seed node, and (2) which network properties characterize the hyperlinks that tend to contribute more. First, we propose to construct the diffusion backbone. The backbone is a weighted higher-order network, where the weight of each hyperlink denotes the contribution of the hyperlink to a given diffusion process. Second, we find that the backbone, or the contribution of hyperlinks, is dependent on the parameters $\beta$ and $\Theta$ of the diffusion process, which is also supported by our theoretical analysis of the backbone when $\beta\rightarrow 0$. Third, we systematically design centrality metrics for hyperlinks in a temporal higher-order network, and each centrality metric is used to estimate the ranking of hyperlinks by their weight in the backbone. Finally, we find and explain why different centrality metrics can better estimate the contributions of hyperlinks for different parameters of the diffusion process.",0 "It has been suggested that merging black hole (BH) binaries in active galactic nucleus (AGN) discs formed through two-body scatterings via the gas-capture process may explain a significant fraction of BH mergers in AGN and a non-negligible contribution to the observed rate from LIGO-VIRGO-KAGRA. We perform Monte Carlo simulations of BH and binary BH formation, evolution and mergers across the observed AGN mass function using a novel physically motivated treatment for the gas-capture process derived from hydrodynamical simulations of BH-BH encounters in AGN and varying assumptions on the AGN disc physics. The results suggest that gas-captured binaries could result in merger rates of 0.73 - 7.1 Gpc$^{-3}$yr$^{-1}$. Most mergers take place near the outer boundary of the accretion disc, but this may be subject to change when migration is considered. The BH merger rate in the AGN channel in the Universe is dominated by AGN with supermassive BH masses on the order of 10$^{7} M_\odot$, with 90% of mergers occurring in the range 10$^{6} M_\odot$ - 10$^{8} M_\odot$. The merging mass distribution is flatter than the initial BH mass power law by a factor $\Delta \xi$ = 1.1 to 1.2, as larger BHs can align with the disc and successfully form binaries more efficiently. Similarly, the merging mass ratio distribution is flatter; therefore, the AGN channel could easily explain high-mass and unequal-mass-ratio detections such as GW190521 and GW190814. When modelling the BH binary formation process using a simpler dynamical friction treatment, we observe very similar results, where the primary bottleneck is the alignment time with the disc. We find that the most influential parameters for the rates are the anticipated number of BHs and their mass function. We conclude that AGN remain an important channel for consideration, particularly for gravitational wave detections involving one or two high-mass BHs.",0 "Time perception research has advanced significantly over the years. However, some areas remain largely unexplored. This study addresses two such under-explored areas in timing research: (1) a quantitative analysis of time perception at an individual level, and (2) time perception in an ecological setting. In this context, we trained a machine learning model to predict the direction of change in an individual's time production. 
The model's training data was collected using an ecologically valid setup. We moved closer to an ecological setting by conducting an online experiment with 995 participants performing a time production task that used naturalistic videos (no audio) as stimuli. The model achieved an accuracy of 61%. This was 10 percentage points higher than that of baseline models derived from cognitive theories of timing. The model performed equally well on new data from a second experiment, providing evidence of its generalization capabilities. Analysis of the model's output revealed that it also contained information about the magnitude of change in time production. The predictions were further analysed at both the population and individual levels. It was found that a participant's previous timing performance played a significant role in determining the direction of change in time production. By integrating attentional-gate theories from timing research with feature importance techniques from machine learning, we explained model predictions using cognitive theories of timing. The model and findings from this study have potential applications in systems involving human-computer interaction, where understanding and predicting changes in users' time perception can enable better user experience and task performance.",2 "Machine Learning is transforming medical research by improving diagnostic accuracy and personalizing treatments. General ML models trained on large datasets identify broad patterns across populations, but their effectiveness is often limited by the diversity of human biology. This has led to interest in subject-specific models that use individual data for more precise predictions. However, these models are costly and challenging to develop. To address this, we propose a novel validation approach that uses a general ML model to ensure reproducible performance and robust feature importance analysis at both group and subject-specific levels. We tested a single Random Forest (RF) model on nine datasets varying in domain, sample size, and demographics. Different validation techniques were applied to evaluate accuracy and feature importance consistency. To introduce variability, we performed up to 400 trials per subject, randomly seeding the ML algorithm for each trial. This generated 400 feature sets per subject, from which we identified top subject-specific features. A group-specific feature importance set was then derived from all subject-specific results. We compared our approach to conventional validation methods in terms of performance and feature importance consistency. Our repeated trials approach, with random seed variation, consistently identified key features at the subject level and improved group-level feature importance analysis using a single general model. Subject-specific models address biological variability but are resource-intensive. Our novel validation technique provides consistent feature importance and improved accuracy within a general ML model, offering a practical and explainable alternative for clinical research.",0 "Three different computed tomography (CT) reconstruction algorithms - Filtered Back Projection (FBP), Unified Tomographic Reconstruction (UTR) and customized Simultaneous Algebraic Reconstruction Technique (cSART) - have been systematically compared and evaluated using experimental data from CT scans of ten fresh mastectomy samples collected at the Imaging and Medical beamline of the Australian Synchrotron. 
All the scans were collected at the mean glandular dose of 2 mGy, using monochromatic X-rays with 32 keV energy, flat-panel detectors with 0.1 mm pixels and 6 meter distance between the rotation stage and the detector. Paganin's phase retrieval method was used in conjunction with all three CT reconstruction algorithms. The reconstructed images were compared in terms of the objective image quality characteristics, including spatial resolution, contrast, signal-to-noise, and contrast-to-noise ratios. The images were also evaluated by seven experienced medical imaging specialists, rating perceptible contrast, sharpness of tissue interfaces, image noise, calcification visibility and overall image quality. Of the three compared algorithms, cSART was clearly superior to UTR and FBP in terms of most measured objective image quality characteristics. At the same time, the results of the subjective quality evaluation consistently favoured the images reconstructed by FBP, followed by UTR, with cSART receiving lower scores on average. We argue that this apparent disagreement between the objective and subjective assessments of image quality can be explained by the importance assigned to image contrast in the subjective assessment, while the signal-to-noise ratio seemed to receive relatively low weighting. This study was conducted in preparation for phase-contrast breast CT imaging of live patients at Australian Synchrotron (Melbourne, Australia).",2 "Despite recent advancements in text-to-image generation, most existing methods struggle to create images with multiple objects and complex spatial relationships in the 3D world. To tackle this limitation, we introduce a generic AI system, namely MUSES, for 3D-controllable image generation from user queries. Specifically, our MUSES addresses this challenging task by developing a progressive workflow with three key components, including (1) Layout Manager for 2D-to-3D layout lifting, (2) Model Engineer for 3D object acquisition and calibration, (3) Image Artist for 3D-to-2D image rendering. By mimicking the collaboration of human professionals, this multi-modal agent pipeline facilitates the effective and automatic creation of images with 3D-controllable objects, through an explainable integration of top-down planning and bottom-up generation. Additionally, we find that existing benchmarks lack detailed descriptions of complex 3D spatial relationships of multiple objects. To fill this gap, we further construct a new benchmark of T2I-3DisBench (3D image scene), which describes diverse 3D image scenes with 50 detailed prompts. Extensive experiments show the state-of-the-art performance of MUSES on both T2I-CompBench and T2I-3DisBench, outperforming recent strong competitors such as DALL-E 3 and Stable Diffusion 3. These results demonstrate a significant step of MUSES forward in bridging natural language, 2D image generation, and 3D world. Our codes are available at the following link: https://github.com/DINGYANB/MUSES.",0 "We present an exposition on the Fuss--Catalan numbers, which are a generalization of the well known Catalan numbers. The literature on the subject is scattered (especially for the case of multiple independent parameters, as will be explained in the text), with overlapping definitions by different authors and duplication of proofs. This paper collects the main theorems and identities, with a consistent notation. Contact is made with the works of numerous authors, including the early works of Lambert and Euler. 
We demonstrate the application of the formalism to solve algebraic equations by infinite series. Our main result in this context is a new necessary and sufficient formula for the domain of absolute convergence of the series solutions of algebraic equations, which corrects and extends previous work in the field. Some historical material is placed in an Appendix.",0 "Equity is a core concern of learning analytics. However, applications that teach and assess equity skills, particularly at scale, are lacking, often due to barriers in evaluating language. Advances in generative AI via large language models (LLMs) are being used in a wide range of applications, and the present work assesses their use in the equity domain. We evaluate tutor performance within an online lesson on enhancing tutors' skills when responding to students in potentially inequitable situations. We apply a mixed-method approach to analyze the performance of 81 undergraduate remote tutors. We find marginally significant learning gains from pretest to posttest, with increases in tutors' self-reported confidence in their knowledge of how to respond to middle school students experiencing possible inequities. Both GPT-4o and GPT-4-turbo demonstrate proficiency in assessing tutors' ability to predict and explain the best approach. Balancing performance, efficiency, and cost, we determine that few-shot learning using GPT-4o is the preferred model. This work makes available a dataset of lesson log data, tutor responses, rubrics for human annotation, and generative AI prompts. Future work involves leveling the difficulty among scenarios and enhancing LLM prompts for large-scale grading and assessment.",0 "Active learning (AL), which aims to construct an effective training set by iteratively curating the most informative unlabeled data for annotation, has been widely used in low-resource tasks. Most active learning techniques in classification rely on the model's uncertainty or disagreement to choose unlabeled data, suffering from the problem of over-confidence in superficial patterns and a lack of exploration. Inspired by the cognitive processes in which humans deduce and predict through causal information, we make an initial attempt to integrate rationales into AL and propose a novel Explainable Active Learning framework (XAL) for low-resource text classification, which aims to encourage classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations. Specifically, besides using a pre-trained bi-directional encoder for classification, we employ a pre-trained uni-directional decoder to generate and score the explanation. We further facilitate the alignment of the model with human reasoning preferences through a proposed ranking loss. During the selection of unlabeled data, the predicted uncertainty of the encoder and the explanation score of the decoder complement each other as the final metric to acquire informative data. Extensive experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines. Analysis indicates that the proposed method can generate corresponding explanations for its predictions.",0 "Explainable AI (XAI) methods typically focus on identifying essential input features or more abstract concepts for tasks like image or text classification. 
However, for algorithmic tasks like combinatorial optimization, these concepts may depend not only on the input but also on the current state of the network, as in the case of graph neural networks (GNNs). This work studies concept learning for an existing GNN model trained to solve Boolean satisfiability (SAT). Our analysis reveals that the model learns key concepts matching those guiding human-designed SAT heuristics, particularly the notion of 'support.' We demonstrate that these concepts are encoded in the top principal components (PCs) of the embedding's covariance matrix, allowing for unsupervised discovery. Using sparse PCA, we establish the minimality of these concepts and show their teachability through a simplified GNN. Two direct applications of our framework are: (a) we improve the convergence time of the classical WalkSAT algorithm, and (b) we use the discovered concepts to ""reverse-engineer"" the black-box GNN and rewrite it as a white-box textbook algorithm. Our results highlight the potential of concept learning in understanding and enhancing algorithmic neural networks for combinatorial optimization tasks.",0 "In this paper, we investigate the impact of adversarial attacks on the explainability of deep learning models, which are commonly criticized for their black-box nature despite their capacity for autonomous feature extraction. This black-box nature can affect the perceived trustworthiness of these models. To address this, explainability techniques such as GradCAM, SmoothGrad, and LIME have been developed to clarify model decision-making processes. Our research focuses on the robustness of these explanations when models are subjected to adversarial attacks, specifically those involving subtle image perturbations that are imperceptible to humans but can significantly mislead models. For this, we utilize attack methods like the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM) and observe their effects on model accuracy and explanations. The results reveal a substantial decline in model accuracy, with accuracies dropping from 89.94% to 58.73% and 45.50% under FGSM and BIM attacks, respectively. Despite these declines in accuracy, the explanations of the models, measured by metrics such as Intersection over Union (IoU) and Root Mean Square Error (RMSE), show negligible changes, suggesting that these metrics may not be sensitive enough to detect the presence of adversarial perturbations.",0 "We present a study that explores the role of user-centred design in developing Generative AI (GenAI) tools for music composition. Through semi-structured interviews with professional composers, we gathered insights on a novel generative model for creating variations, highlighting concerns around trust, transparency, and ethical design. The findings helped form a feedback loop, guiding improvements to the model that emphasised traceability, transparency and explainability. They also revealed new areas for innovation, including novel features for controllability and research questions on the ethical and practical implementation of GenAI models.",2 "With the continuous advancement of processors, modern micro-architecture designs have become increasingly complex. The vast design space presents significant challenges for human designers, making design space exploration (DSE) algorithms an essential tool for $\mu$-arch design. In recent years, efforts have been made in the development of DSE algorithms, and promising results have been achieved. 
However, the existing DSE algorithms, e.g., Bayesian Optimization and ensemble learning, suffer from poor interpretability, hindering designers' understanding of the decision-making process. To address this limitation, we propose utilizing Fuzzy Neural Networks to induce and summarize knowledge and insights from the DSE process, enhancing interpretability and controllability. Furthermore, to improve efficiency, we introduce a multi-fidelity reinforcement learning approach, which primarily conducts exploration using cheap but less precise data, thereby substantially diminishing the reliance on costly data. Experimental results show that our method achieves excellent results with a very limited sample budget and successfully surpasses the current state-of-the-art. Our DSE framework is open-sourced and available at https://github.com/fanhanwei/FNN\_MFRL\_ArchDSE/\ .",0 "Artificial intelligence (AI) is transforming scientific research, with explainable AI methods like concept-based models (CMs) showing promise for new discoveries. However, in molecular science, CMs are less common than black-box models like Graph Neural Networks (GNNs), due to their need for predefined concepts and manual labeling. This paper introduces the Automated Molecular Concept (AutoMolCo) framework, which leverages Large Language Models (LLMs) to automatically generate and label predictive molecular concepts. Through iterative concept refinement, AutoMolCo enables simple linear models to outperform GNNs and LLM in-context learning on several benchmarks. The framework operates without human knowledge input, overcoming limitations of existing CMs while maintaining explainability and allowing easy intervention. Experiments on MoleculeNet and High-Throughput Experimentation (HTE) datasets demonstrate that AutoMolCo-induced explainable CMs are beneficial for molecular science research.",0 "Prototypical networks aim to build intrinsically explainable models based on the linear summation of concepts. Concepts are coherent entities that we, as humans, can recognize and associate with a certain object or entity. However, important challenges remain in the fair evaluation of explanation quality provided by these models. This work first proposes an extensive set of quantitative and qualitative metrics which allow to identify drawbacks in current prototypical networks. It then introduces a novel architecture which provides compact explanations, outperforming current prototypical models in terms of explanation quality. Overall, the proposed architecture demonstrates how frozen pre-trained ViT backbones can be effectively turned into prototypical models for both general and domain-specific tasks, in our case biomedical image classifiers. Code is available at \url{https://github.com/hturbe/protosvit}.",0 "Personalized problem selection enhances student practice in tutoring systems. Prior research has focused on transparent problem selection that supports learner control but rarely engages learners in selecting practice materials. We explored how different levels of control (i.e., full AI control, shared control, and full learner control), combined with showing learning analytics on skill mastery and visual what-if explanations, can support students in practice contexts requiring high degrees of self-regulation, such as homework. 
Semi-structured interviews with six middle school students revealed three key insights: (1) participants highly valued learner control for an enhanced learning experience and better self-regulation, especially because most wanted to avoid losses in skill mastery; (2) only seeing their skill mastery estimates often made participants base problem selection on their weaknesses; and (3) what-if explanations stimulated participants to focus more on their strengths and improve skills until they were mastered. These findings show how explainable learning analytics could shape students' selection strategies when they have control over what to practice. They suggest promising avenues for helping students learn to regulate their effort, motivation, and goals during practice with tutoring systems.",0 "In industrial contexts, effective workforce allocation is crucial for operational efficiency. This paper presents an ongoing project focused on developing a decision-making tool designed for workforce allocation, emphasising the explainability to enhance its trustworthiness. Our objective is to create a system that not only optimises the allocation of teams to scheduled tasks but also provides clear, understandable explanations for its decisions, particularly in cases where the problem is infeasible. By incorporating human-in-the-loop mechanisms, the tool aims to enhance user trust and facilitate interactive conflict resolution. We implemented our approach on a prototype tool/digital demonstrator intended to be evaluated on a real industrial scenario both in terms of performance and user acceptability.",0 "A rapidly developing application of LLMs in XAI is to convert quantitative explanations such as SHAP into user-friendly narratives to explain the decisions made by smaller prediction models. Evaluating the narratives without relying on human preference studies or surveys is becoming increasingly important in this field. In this work we propose a framework and explore several automated metrics to evaluate LLM-generated narratives for explanations of tabular classification tasks. We apply our approach to compare several state-of-the-art LLMs across different datasets and prompt types. As a demonstration of their utility, these metrics allow us to identify new challenges related to LLM hallucinations for XAI narratives.",2 "Automatically generating feedback via large language models (LLMs) in intelligent tutoring systems and online learning platforms has the potential to improve the learning outcomes of many students. However, both feedback generation and evaluation are challenging: feedback content has to be valid especially in subjects like math, which requires models to understand the problem, the solution, and where the student's error lies. Feedback also has to be pedagogically valid to reflect effective tutoring strategies, such as explaining possible misconceptions and encouraging the student, among other desirable features. In this work, we address both problems of automatically generating and evaluating feedback while considering both correctness and alignment. First, we propose a rubric for evaluating math feedback and show that GPT-4 is able to effectively use it to annotate human-written and LLM-generated feedback. Second, we propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL). 
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO). We show that our methods significantly increase the correctness and alignment of generated feedback with Llama 2, an open-source LLM, qualitatively analyze our generation and evaluation systems using case studies, and outline several areas for future work.",0 "Recent advancements in deep neural network performance have led to the development of new state-of-the-art approaches in numerous areas. However, the black-box nature of neural networks often prohibits their use in areas where model explainability and model transparency are crucial. Over the years, researchers have proposed many algorithms to aid neural network understanding and provide additional information to the human expert. One of the most popular methods is Layer-Wise Relevance Propagation (LRP). This method assigns local relevance based on the pixel-wise decomposition of nonlinear classifiers. With the rise of attribution method research, there has emerged a pressing need to assess and evaluate their performance. Numerous metrics have been proposed, each assessing an individual property of attribution methods such as faithfulness, robustness or localization. Unfortunately, no single metric is deemed optimal for every case, and researchers often use several metrics to test the quality of the attribution maps. In this work, we address the shortcomings of the current LRP formulations and introduce a novel method for determining the relevance of input neurons through layer-wise relevance propagation. Furthermore, we apply this approach to the recently developed Vision Transformer architecture and evaluate its performance against existing methods on two image classification datasets, namely ImageNet and PascalVOC. Our results clearly demonstrate the advantage of our proposed method. Additionally, we discuss the insufficiencies of current evaluation metrics for attribution-based explainability and propose a new evaluation metric that combines the notions of faithfulness, robustness and contrastiveness. We utilize this new metric to evaluate the performance of various attribution-based methods. Our code is available at: https://github.com/davor10105/relative-absolute-magnitude-propagation",0 "Satire detection is essential for accurately extracting opinions from textual data and combating misinformation online. However, the lack of diverse corpora for satire leads to the problem of stylistic bias, which impacts the models' detection performance. This study proposes a debiasing approach for satire detection, focusing on reducing biases in training data by utilizing generative large language models. The approach is evaluated in both cross-domain (irony detection) and cross-lingual (English) settings. Results show that the debiasing method enhances the robustness and generalizability of the models for satire and irony detection tasks in Turkish and English. However, its impact on causal language models, such as Llama-3.1, is limited. Additionally, this work curates and presents the Turkish Satirical News Dataset with detailed human annotations, with case studies on classification, debiasing, and explainability.",0 "Following formatting instructions to generate well-structured content is a fundamental yet often unmet capability for large language models (LLMs). 
To study this capability, which we refer to as format faithfulness, we present FormatBench, a comprehensive format-related benchmark. Compared to previous format-related benchmarks, FormatBench involves a greater variety of tasks in terms of application scenes (traditional NLP tasks, creative works, autonomous agency tasks), human-LLM interaction styles (single-turn instruction, multi-turn chat), and format types (inclusion, wrapping, length, coding). Moreover, each task in FormatBench is accompanied by a format checker program. Extensive experiments on the benchmark reveal that state-of-the-art open- and closed-source LLMs still suffer from severe deficiencies in format faithfulness. By virtue of the decidable nature of formats, we propose to Reinforce Format Faithfulness (ReFF) to help LLMs generate formatted output as instructed without compromising general quality. Without any annotated data, ReFF can substantially improve the format faithfulness rate (e.g., from 21.6% in original LLaMA3 to 95.0% on the caption segmentation task), while keeping the general quality comparable (e.g., from 47.3 to 46.4 in F1 scores). Combined with labeled training data, ReFF can simultaneously improve both format faithfulness (e.g., from 21.6% in original LLaMA3 to 75.5%) and general quality (e.g., from 47.3 to 61.6 in F1 scores). We further offer an interpretability analysis to explain how ReFF improves both format faithfulness and general quality.",0 "As academic literature proliferates, traditional review methods are increasingly challenged by the sheer volume and diversity of available research. This article presents a study that aims to address these challenges by enhancing the efficiency and scope of systematic reviews in the social sciences through advanced machine learning (ML) and natural language processing (NLP) tools. In particular, we focus on automating stages within the systematic reviewing process that are time-intensive and repetitive for human annotators and which lend themselves to immediate scalability through tools such as information retrieval and summarisation guided by expert advice. The article concludes with a summary of lessons learnt regarding the integrated approach towards systematic reviews and future directions for improvement, including explainability.",2 "Human Activity Recognition using time-series data from wearable sensors poses unique challenges due to complex temporal dependencies, sensor noise, placement variability, and diverse human behaviors. These factors, combined with the nontransparent nature of black-box Machine Learning models, impede interpretability and hinder human comprehension of model behavior. This paper addresses these challenges by exploring strategies to enhance interpretability through white-box approaches, which provide actionable insights into latent space dynamics and model behavior during training. By leveraging human intuition and expertise, the proposed framework improves explainability, fosters trust, and promotes transparent Human Activity Recognition systems. A key contribution is the proposal of a Human-in-the-Loop framework that enables dynamic user interaction with models, facilitating iterative refinements to enhance performance and efficiency. Additionally, we investigate the usefulness of Large Language Models as assistants that provide users with guidance for interpreting visualizations, diagnosing issues, and optimizing workflows. 
Together, these contributions present a scalable and efficient framework for developing interpretable and accessible Human Activity Recognition systems.",0 "Explainable artificial intelligence (XAI) aims to make machine learning models more transparent. While many approaches focus on generating explanations post-hoc, interpretable approaches, which generate the explanations intrinsically alongside the predictions, are relatively rare. In this work, we integrate different discrete subset sampling methods into a graph-based visual question answering system to compare their effectiveness in generating interpretable explanatory subgraphs intrinsically. We evaluate the methods on the GQA dataset and show that the integrated methods effectively mitigate the performance trade-off between interpretability and answer accuracy, while also achieving strong co-occurrences between answer and question tokens. Furthermore, we conduct a human evaluation to assess the interpretability of the generated subgraphs using a comparative setting with the extended Bradley-Terry model, showing that the answer and question token co-occurrence metrics strongly correlate with human preferences. Our source code is publicly available.",2 "One goal of Artificial Intelligence is to learn meaningful representations for natural language expressions, but what this entails is not always clear. A variety of new linguistic behaviours present themselves embodied as computers, enhanced humans, and collectives with various kinds of integration and communication. But to measure and understand the behaviours generated by such systems, we must clarify the language we use to talk about them. Computational models are often confused with the phenomena they try to model and shallow metaphors are used as justifications for (or to hype) the success of computational techniques on many tasks related to natural language; thus implying their progress toward human-level machine intelligence without ever clarifying what that means. This paper discusses the challenges in the specification of ""machines of meaning"", machines capable of acquiring meaningful semantics from natural language in order to achieve their goals. We characterize ""meaning"" in a computational setting, while highlighting the need for detachment from anthropocentrism in the study of the behaviour of machines of meaning. The pressing need to analyse AI risks and ethics requires a proper measurement of its capabilities which cannot be productively studied and explained while using ambiguous language. We propose a view of ""meaning"" to facilitate the discourse around approaches such as neural language models and help broaden the research perspectives for technology that facilitates dialogues between humans and machines.",0 "Deep learning models are widely used for the data-driven design of materials based on atomic force microscopy (AFM) and other scanning probe microscopy. These tools enhance efficiency in inverse design and characterization of materials. However, limited and imbalanced experimental materials data typically available is a major challenge. Also important is the need to interpret trained models, which have typically been complex enough to be uninterpretable by humans. Here, we present a systemic evaluation of transfer learning strategies to accommodate low-data scenarios in materials synthesis and a model latent feature analysis to draw connections to the human-interpretable characteristics of the samples. 
Our models show accurate predictions in five classes of transition metal dichalcogenides (TMDs) (MoS$_2$, WS$_2$, WSe$_2$, MoSe$_2$, and Mo-WSe$_2$) with up to 89$\%$ accuracy on held-out test samples. Analysis of the latent features reveals a correlation with physical characteristics such as grain density, DoG blob, and local variation. The transfer learning optimization modality and the exploration of the correlation between the latent and physical features provide important frameworks that can be applied to other classes of materials beyond TMDs to enhance the models' performance and explainability which can accelerate the inverse design of materials for technological applications.",0 "We recover the underlying 3D structure from images of cartoons and anime depicting the same scene. This is an interesting problem domain because images in creative media are often depicted without explicit geometric consistency for storytelling and creative expression-they are only 3D in a qualitative sense. While humans can easily perceive the underlying 3D scene from these images, existing Structure-from-Motion (SfM) methods that assume 3D consistency fail catastrophically. We present Toon3D for reconstructing geometrically inconsistent images. Our key insight is to deform the input images while recovering camera poses and scene geometry, effectively explaining away geometrical inconsistencies to achieve consistency. This process is guided by the structure inferred from monocular depth predictions. We curate a dataset with multi-view imagery from cartoons and anime that we annotate with reliable sparse correspondences using our user-friendly annotation tool. Our recovered point clouds can be plugged into novel-view synthesis methods to experience cartoons from viewpoints never drawn before. We evaluate against classical and recent learning-based SfM methods, where Toon3D is able to obtain more reliable camera poses and scene geometry.",0 "Online marketing faces formidable challenges in managing and interpreting immense volumes of data necessary for competitor analysis, content research, and strategic branding. It is impossible to review hundreds to thousands of transient online content items by hand, and partial analysis often leads to suboptimal outcomes and poorly performing campaigns. We introduce an explainable AI framework SOMONITOR that aims to synergize human intuition with AI-based efficiency, helping marketers across all stages of the marketing funnel, from strategic planning to content creation and campaign execution. SOMONITOR incorporates a CTR prediction and ranking model for advertising content and uses large language models (LLMs) to process high-performing competitor content, identifying core content pillars such as target audiences, customer needs, and product features. These pillars are then organized into broader categories, including communication themes and targeted customer personas. By integrating these insights with data from the brand's own advertising campaigns, SOMONITOR constructs a narrative for addressing new customer personas and simultaneously generates detailed content briefs in the form of user stories that, as shown in the conducted case study, can be directly applied by marketing teams to streamline content production and campaign execution. 
The adoption of SOMONITOR in daily operations allows digital marketers to quickly parse through extensive datasets, offering actionable insights that significantly enhance campaign effectiveness and overall job satisfaction.",0 "As LLMs increase in accessibility, LLM-generated texts have proliferated across several fields, such as scientific, academic, and creative writing. However, LLMs are not created equal; they may have different architectures and training datasets. Thus, some LLMs may be more challenging to detect than others. Using two datasets spanning four total writing domains, we train AI-generated (AIG) text classifiers using the LibAUC library - a deep learning library for training classifiers with imbalanced datasets. Our results in the Deepfake Text dataset show that AIG-text detection varies across domains, with scientific writing being relatively challenging. In the Rewritten Ivy Panda (RIP) dataset focusing on student essays, we find that the OpenAI family of LLMs was particularly difficult for our classifiers to distinguish from human texts. Additionally, we explore possible factors that could explain the difficulties in detecting OpenAI-generated texts.",0 "Transitioning from Education 1.0 to Education 5.0, the integration of generative artificial intelligence (GenAI) revolutionizes the learning environment by fostering enhanced human-machine collaboration, enabling personalized, adaptive and experiential learning, and preparing students with the skills and adaptability needed for the future workforce. Our understanding of academic integrity and the scholarship of teaching, learning, and research has been revolutionised by GenAI. Schools and universities around the world are experimenting with and exploring the integration of GenAI in their education systems (e.g., curriculum design, teaching processes and assessments, administrative tasks, results generation, and so on). The findings of the literature study demonstrate how well GenAI has been incorporated into the global educational system. This study explains the roles of GenAI in the schooling and university education systems with respect to the different stakeholders (students, teachers, researchers, etc.). It highlights the current challenges of integrating Generative AI into the education system and outlines future directions for leveraging GenAI to enhance educational practices.",0 "EXplainable Artificial Intelligence (XAI) approaches are widely applied for identifying fairness issues in Artificial Intelligence (AI) systems. However, in the context of facial analysis, existing XAI approaches, such as pixel attribution methods, offer explanations for individual images, posing challenges in assessing the overall behavior of a model, which would require labor-intensive manual inspection of a very large number of instances, leaving to the human the task of drawing a general impression of the model behavior from the individual outputs. Addressing this limitation, we introduce FaceX, the first method that provides a comprehensive understanding of face attribute classifiers through summary model explanations. Specifically, FaceX leverages the presence of distinct regions across all facial images to compute a region-level aggregation of model activations, allowing for the visualization of the model's region attribution across 19 predefined regions of interest in facial images, such as hair, ears, or skin. 
Beyond spatial explanations, FaceX enhances interpretability by visualizing specific image patches with the highest impact on the model's decisions for each facial region within a test benchmark. Through extensive evaluation in various experimental setups, including scenarios with or without intentional biases and mitigation efforts on four benchmarks, namely CelebA, FairFace, CelebAMask-HQ, and Racial Faces in the Wild, FaceX demonstrates high effectiveness in identifying the models' biases.",0 "Evidence accumulation models (EAMs) are the dominant framework for modeling response time (RT) data from speeded decision-making tasks. While providing a good quantitative description of RT data in terms of abstract perceptual representations, EAMs do not explain how the visual system extracts these representations in the first place. To address this limitation, we introduce the visual accumulator model (VAM), in which convolutional neural network models of visual processing and traditional EAMs are jointly fitted to trial-level RTs and raw (pixel-space) visual stimuli from individual subjects in a unified Bayesian framework. Models fitted to large-scale cognitive training data from a stylized flanker task captured individual differences in congruency effects, RTs, and accuracy. We find evidence that the selection of task-relevant information occurs through the orthogonalization of relevant and irrelevant representations, demonstrating how our framework can be used to relate visual representations to behavioral outputs. Together, our work provides a probabilistic framework for both constraining neural network models of vision with behavioral data and studying how the visual system extracts representations that guide decisions.",0 "We investigated the emerging traffic patterns of Argentine ants (Linepithema humile) as they navigated a narrow bridge between their nest and a food source. By tracking ant movements in experiments with varying bridge widths and colony sizes and analyzing the resulting trajectories, we discovered that a small subset of ants stopped for long periods of time, acting as obstacles and affecting traffic flow. Interestingly, the fraction of these stopped ants increased with wider bridges, suggesting a mechanism to reduce traffic flow to a narrower section of the bridge. To quantify transport efficiency, we measured the average speed of the ants on the bridge as a function of the pressure of ants arriving at the bridge, finding this relationship to be an increasing but saturating function of the pressure. We developed an agent-based model for ant movement and interactions to better understand these dynamics. Including stopped agents in the model was crucial to explaining the experimental observations. We further validated our hypothesis by introducing artificial obstacles on the bridges and found that our simulations accurately mirrored the experimental data when these obstacles were included. These findings provide new insights into how Argentine ants self-organize to manage traffic, highlighting a unique form of dynamic obstruction that enhances traffic flow in high-density conditions. This study advances our understanding of self-regulation in biological traffic systems and suggests potential applications for managing human traffic in congested environments.",0 "Challenges inherent to high-resolution and high signal-to-noise data as well as model degeneracies can cause systematic biases in analyses of strong lens systems. 
In the past decade, the number of lens modeling methods has significantly increased, from purely analytical methods, to pixelated and non-parametric ones, to ones based on deep learning. We embraced this diversity by selecting different software packages and using them to blindly model independently simulated Hubble Space Telescope (HST) imaging data. To overcome the difficulties arising from using different codes and conventions, we used the COde-independent Organized LEns STandard (COOLEST) to store, compare, and release all models in a self-consistent and human-readable manner. From an ensemble of six modeling methods, we studied the recovery of the lens potential parameters and properties of the reconstructed source. We find that, overall, both lens and source properties are recovered reasonably well, but systematic biases arise in all methods. Interestingly, we do not observe that a single method is significantly more accurate than others, and the amount of bias largely depends on the specific lens or source property of interest. By combining posterior distributions from individual methods using equal weights, the maximal systematic biases on lens model parameters inferred from individual models are reduced by a factor of 5.4 on average. We investigated a selection of modeling effects that partly explain the observed biases, such as the cuspy nature of the background source and the accuracy of the point spread function. This work introduces, for the first time, a generic framework to compare and ease the combination of models obtained from different codes and methods, which will be key to retaining accuracy in future strong lensing analyses.",0 "Supervised fine-tuning (SFT) is crucial for aligning Large Language Models (LLMs) with human instructions. The primary goal during SFT is to select a small yet representative subset of training data from the larger pool, such that fine-tuning with this subset achieves results comparable to or even exceeding those obtained using the entire dataset. However, most existing data selection techniques are designed for small-scale data pools, which fail to meet the demands of real-world SFT scenarios. In this paper, we replicated several self-scoring methods, those that do not rely on external model assistance, on two million scale datasets, and found that nearly all methods struggled to significantly outperform random selection when dealing with such large-scale data pools. Moreover, our comparisons suggest that, during SFT, diversity in data selection is more critical than simply focusing on high-quality data. We also analyzed the limitations of several current approaches, explaining why they perform poorly on large-scale datasets and why they are unsuitable for such contexts. Finally, we found that filtering data by token length offers a stable and efficient method for improving results. This approach, particularly when training on long text data, proves highly beneficial for relatively weaker base models, such as Llama3.",0 "Data and metadata documentation requirements for explainable-AI-ready (XAIR) models and data in physics-based simulation technology are discussed by analysing different perspectives from the literature on two core aspects: First, the scope of the simulation; this category is taken to include subject matter, the objective with which the simulation is conducted, and the object of reference, i.e., the simulated physical system or process. 
Second, the artefacts that need to be documented in order to make data and models XAIR, and modelling and simulation workflows explainable; two CEN workshop agreements, MODA and ModGra, are compared for this purpose. As a result, minimum requirements for an ontologization of the scope of simulation artefacts are formulated, and the object-objective abstractness diagram is proposed as a tool for visualizing the landscape of use cases for physics-based simulation.",0 "In this paper, we present Language Model as Visual Explainer (LVX), a systematic approach for interpreting the internal workings of vision models using a tree-structured linguistic explanation, without the need for model training. Central to our strategy is the collaboration between vision models and LLMs to craft explanations. On one hand, the LLM is harnessed to delineate hierarchical visual attributes, while concurrently, a text-to-image API retrieves images that are most aligned with these textual concepts. By mapping the collected texts and images to the vision model's embedding space, we construct a hierarchy-structured visual embedding tree. This tree is dynamically pruned and grown by querying the LLM using language templates, tailoring the explanation to the model. Such a scheme allows us to seamlessly incorporate new attributes while eliminating undesired concepts based on the model's representations. When applied to testing samples, our method provides human-understandable explanations in the form of attribute-laden trees. Beyond explanation, we retrained the vision model by calibrating it on the generated concept hierarchy, allowing the model to incorporate the refined knowledge of visual attributes. To assess the effectiveness of our approach, we introduce new benchmarks and conduct rigorous evaluations, demonstrating its plausibility, faithfulness, and stability.",0 "The recent discovery of little red dots - a population of extremely compact and highly dust-reddened high redshift galaxies - by the James Webb Space Telescope presents a new challenge to the fields of astrophysics and cosmology. Their remarkably high luminosities at redshifts 5 < z < 10 appear to challenge LambdaCDM cosmology and galaxy formation models, as they imply stellar masses and star formation rates that exceed the upper limits set by these models. LRDs are currently subjects of debate, as the mechanisms behind their high luminosities are not yet fully understood. LRD energy outputs are thought to be either dominated by star formation or to result from the hosting of active galactic nuclei. We investigate the starburst hypothesis by attempting to replicate the stellar properties of LRDs using output data from the FLARES simulation suite. Comparative analysis of galactic properties such as galactic number density, stellar mass and star formation rate yields significant tension between simulated and observed galaxies. The FLARES simulation overestimates the number densities of galaxies with stellar masses similar to observed LRDs by several orders of magnitude. Additionally, the simulation shows an overestimation of star formation rates. These tensions suggest a potential underestimation by the FLARES model of stellar feedback mechanisms such as active galactic nuclei feedback. These results suggest that the starburst hypothesis may be insufficient to explain the observed properties of these galaxies. 
Instead, the AGN scenario should be further investigated by repeating the methods in this study with a hydrodynamic galaxy simulation suite that models a higher influence of AGN feedback mechanisms on stellar activity in high redshift galaxies.",0 "Large multimodal models (LMMs) have shown remarkable performance in the visual commonsense reasoning (VCR) task, which aims to answer a multiple-choice question based on visual commonsense within an image. However, the ability of LMMs to correct potential visual commonsense errors in the distractor upon their occurrence is yet under-explored. Drawing inspiration from how a human teacher crafts challenging distractors to test students' comprehension of the concepts or skills and assists them in identifying and correcting errors toward the answer, we are the pioneering research for LMMs to simulate this error correction process. To this end, we employ GPT-4 as a ``teacher'' to collect the explainable feedback dataset VCR-DF for error correction, which serves as a benchmark to evaluate the ability of LMMs to identify misconceptions and clarify reasons behind the error in VCR distractors toward final answers. In addition, we propose an LMM-based Pedagogical Expert Instructed Feedback Generation (PEIFG) model to incorporate the learnable expert prompts and multimodal instruction as guidance for feedback generation. Experimental results show that our PEIFG significantly outperforms existing LMMs. We believe that our benchmark provides a new direction for evaluating the capabilities of LMMs.",0 "This paper introduces a comprehensive framework for the evaluation and validation of generative language models (GLMs), with a focus on Retrieval-Augmented Generation (RAG) systems deployed in high-stakes domains such as banking. GLM evaluation is challenging due to open-ended outputs and subjective quality assessments. Leveraging the structured nature of RAG systems, where generated responses are grounded in a predefined document collection, we propose the Human-Calibrated Automated Testing (HCAT) framework. HCAT integrates a) automated test generation using stratified sampling, b) embedding-based metrics for explainable assessment of functionality, risk and safety attributes, and c) a two-stage calibration approach that aligns machine-generated evaluations with human judgments through probability calibration and conformal prediction. In addition, the framework includes robustness testing to evaluate model performance against adversarial, out-of-distribution, and varied input conditions, as well as targeted weakness identification using marginal and bivariate analysis to pinpoint specific areas for improvement. This human-calibrated, multi-layered evaluation framework offers a scalable, transparent, and interpretable approach to GLM assessment, providing a practical and reliable solution for deploying GLMs in applications where accuracy, transparency, and regulatory compliance are paramount.",0 "Machine learning (ML) has the potential to become an essential tool in supporting clinical decision-making processes, offering enhanced diagnostic capabilities and personalized treatment plans. However, outsourcing medical records to train ML models using patient data raises legal, privacy, and security concerns. Federated learning has emerged as a promising paradigm for collaborative ML, meeting healthcare institutions' requirements for robust models without sharing sensitive data and compromising patient privacy. 
This study proposes a novel method that combines federated learning (FL) and Graph Neural Networks (GNNs) to predict stroke severity using electroencephalography (EEG) signals across multiple medical institutions. Our approach enables multiple hospitals to jointly train a shared GNN model on their local EEG data without exchanging patient information. Specifically, we address a regression problem by predicting the National Institutes of Health Stroke Scale (NIHSS), a key indicator of stroke severity. The proposed model leverages a masked self-attention mechanism to capture salient brain connectivity patterns and employs EdgeSHAP to provide post-hoc explanations of the neurological states after a stroke. We evaluated our method on EEG recordings from four institutions, achieving a mean absolute error (MAE) of 3.23 in predicting NIHSS, close to the average error made by human experts (MAE $\approx$ 3.0). This demonstrates the method's effectiveness in providing accurate and explainable predictions while maintaining data privacy.",2 "In this study, the effect of ferrite grain size on the mechanical properties and dislocation behavior of dual-phase (DP) steel is investigated using dislocation-based crystal plasticity finite element analysis. DP steel, composed of a soft ferritic phase and a hard martensitic phase, shows mechanical properties that are significantly influenced by ferrite grain size. The mechanism underlying this grain size effect is clarified by analyzing the partitioning and distribution of stress, strain, and dislocations in each phase. Three models with the same volume fraction of martensitic phase but different ferrite grain sizes are subjected to tensile loading. Interestingly, even though only the ferrite grain size is changed, the stress in the martensitic phase exhibited a notable dependence on ferrite grain size. This can be explained as follows. Geometrically necessary (GN) dislocations accumulate on the ferrite side of the ferrite-martensite grain boundary, and the grain boundary occupancy per unit area increases as the ferrite grain size decreases. As a result, smaller ferrite grain sizes make the ferritic phase less deformable owing to the effect of GN dislocations, shifting more deformation to the martensitic phase. This behavior is confirmed by the more uniform strain distribution and partitioning observed with decreasing ferrite grain size. As the martensitic phase takes on greater deformation, the statistically stored dislocation density in the martensitic phase becomes ferrite grain size dependent, which in turn leads to the observed grain size dependence of stress in the martensitic phase.",0 "This study introduces a new framework for 3D person re-identification (re-ID) that leverages readily available high-resolution texture data in 3D reconstruction to improve the performance and explainability of the person re-ID task. We propose a method to emphasize texture in 3D person re-ID models by incorporating UVTexture mapping, which better differentiates human subjects. Our approach uniquely combines UVTexture and its heatmaps with 3D models to visualize and explain the person re-ID process. In particular, the visualization and explanation are achieved through activation maps and attribute-based attention maps, which highlight the important regions and features contributing to the person re-ID decision. 
Our contributions include: (1) a novel technique for emphasizing texture in 3D models using UVTexture processing, (2) an innovative method for explicating person re-ID matches through a combination of 3D models and UVTexture mapping, and (3) achieving state-of-the-art performance in 3D person re-ID. We ensure the reproducibility of our results by making all data, code, and models publicly available.",2 "This report proposes a neural cognitive model for discovering regularities in event sequences. In a fluid intelligence task, the subject is required to discover regularities from relatively short-term memory of the first-seen task. Some fluid intelligence tasks require discovering regularities in event sequences. Thus, a neural network model was constructed to explain fluid intelligence or regularity discovery in event sequences with relatively short-term memory. The model was implemented and tested with delayed match-to-sample tasks.",0 "Post-hoc interpretability methods are critical tools to explain neural-network results. Several post-hoc methods have emerged in recent years, but when applied to a given task, they produce different results, raising the question of which method is the most suitable to provide correct post-hoc interpretability. To understand the performance of each method, quantitative evaluation of interpretability methods is essential. However, currently available frameworks have several drawbacks which hinder the adoption of post-hoc interpretability methods, especially in high-risk sectors. In this work, we propose a framework with quantitative metrics to assess the performance of existing post-hoc interpretability methods, in particular in time series classification. We show that several drawbacks identified in the literature are addressed, namely dependence on human judgement, retraining, and shift in the data distribution when occluding samples. We additionally design a synthetic dataset with known discriminative features and tunable complexity. The proposed methodology and quantitative metrics can be used to understand the reliability of interpretability method results obtained in practical applications. In turn, they can be embedded within operational workflows in critical fields that require accurate interpretability results for, e.g., regulatory policies.",0 "Understanding the behavior of laboratory animals is key to finding answers about diseases and neurodevelopmental disorders that also affect humans. One behavior of interest is stopping, as it correlates with exploration, feeding and sleeping habits of individuals. To improve comprehension of animal behavior, we focus on identifying traits revealing the age/sex of mice through the series of stopping spots of each individual. We track 4 mice using the LiveMouseTracker (LMT) system for 3 days. Then, we build a stack of 2D histograms of the stop positions. This stack of histograms passes through a shallow CNN architecture to classify mice in terms of age and sex. We observe that female mice show more recognizable behavioral patterns, reaching a classification accuracy of more than 90%, while males, which do not present as many distinguishable patterns, reach an accuracy of 62.5%. To gain explainability from the model, we look at the activation function of the convolutional layers and find that some regions of the cage are preferentially explored by females. 
Males, especially juveniles, present behavior patterns that oscillate between those of juvenile females and adult males.",0 "Explanations of machine learning (ML) model predictions generated by Explainable AI (XAI) techniques such as SHAP are essential for people using ML outputs for decision-making. We explore the potential of Large Language Models (LLMs) to transform these explanations into human-readable, narrative formats that align with natural communication. We address two key research questions: (1) Can LLMs reliably transform traditional explanations into high-quality narratives? and (2) How can we effectively evaluate the quality of narrative explanations? To answer these questions, we introduce Explingo, which consists of two LLM-based subsystems, a Narrator and Grader. The Narrator takes in ML explanations and transforms them into natural-language descriptions. The Grader scores these narratives on a set of metrics including accuracy, completeness, fluency, and conciseness. Our experiments demonstrate that LLMs can generate high-quality narratives that achieve high scores across all metrics, particularly when guided by a small number of human-labeled and bootstrapped examples. We also identified areas that remain challenging, in particular for effectively scoring narratives in complex domains. The findings from this work have been integrated into an open-source tool that makes narrative explanations available for further applications.",0 "As humanoid service robots are becoming more and more perceptible in public service settings, for instance as a guide to welcome visitors or to explain a procedure to follow, it is desirable to improve the comprehensibility of complex issues for human customers and to adapt the level of difficulty of the information provided, as well as the language used, to individual requirements. This work examines a case study using the humanoid social robot Pepper providing support for customers in a public service environment offering advice and information. An application architecture is proposed that improves the intelligibility of the information received by providing the possibility to translate this information into easy language and/or into another spoken language.",0 "Link prediction (LP) is crucial for Knowledge Graph (KG) completion but commonly suffers from interpretability issues. While several methods have been proposed to explain embedding-based LP models, they are generally limited to local explanations on KG and are deficient in providing human-interpretable semantics. Based on real-world observations of the characteristics of KGs from multiple domains, we propose to explain LP models in KG with path-based explanations. An integrated framework, namely eXpath, is introduced which incorporates the concept of relation path with ontological closed path rules to enhance both the efficiency and effectiveness of LP interpretation. Notably, the eXpath explanations can be fused with other single-link explanation approaches to achieve a better overall solution. Extensive experiments across benchmark datasets and LP models demonstrate that introducing eXpath can boost the quality of resulting explanations by about 20% on two key metrics and reduce the required explanation time by 61.4%, in comparison to the best existing method. 
Case studies further highlight eXpath's ability to provide more semantically meaningful explanations through path-based evidence.",0 "This work focuses on the nature of visibility in societies where the behaviours of humans and algorithms influence each other - termed algorithmically infused societies. We propose a quantitative measure of visibility, with implications and applications to an array of disciplines including communication studies, political science, marketing, technology design, and social media analytics. The measure captures the basic characteristics of the visibility of a given topic, in algorithm/AI-mediated communication/social media settings. Topics, when trending, are ranked against each other, and the proposed measure combines the following two attributes of a topic: (i) the amount of time a topic spends at different ranks, and (ii) the different ranks the topic attains. The proposed measure incorporates a tunable parameter, termed the discrimination level, whose value determines the relative weights of the two attributes that contribute to visibility. Analysis of a large-scale, real-time dataset of trending topics, from one of the largest social media platforms, demonstrates that the proposed measure can explain a large share of the variability of the accumulated views of a topic.",0 "Modern science is formally structured around scholarly publication, where scientific knowledge is canonized through citation. Precisely how citations are given and accrued can provide information about the value of discovery, the history of scientific ideas, the structure of fields, and the space or scope of inquiry. Yet parsing this information has been challenging because citations are not simply present or absent; rather, they differ in purpose, function, and sentiment. In this paper, we investigate how critical and favorable sentiments are distributed across citations, and demonstrate that citation sentiment tracks sociocultural norms across scales of collaboration, discipline, and country. At the smallest scale of individuals, we find that researchers cite scholars they have collaborated with more favorably (and less critically) than scholars they have not collaborated with. Outside collaborative relationships, higher h-index scholars cite lower h-index scholars more critically. At the mesoscale of disciplines, we find that wetlab disciplines tend to be less critical than drylab disciplines, and disciplines that engage in more synthesis through publishing more review articles tend to be less critical. At the largest scale of countries, we find that greater individualism (and lesser acceptance of the unequal distribution of power) is associated with more critical sentiment. Collectively, our results demonstrate how sociocultural factors can explain variations in sentiment in scientific communication. As such, our study contributes to a broader understanding of how human factors influence the practice of science, and underscores the importance of considering the larger sociocultural contexts in which science progresses.",0 "A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations. While methods exist to connect neurons to human-understandable textual descriptions, evaluating the quality of these explanations is challenging due to the lack of a unified quantitative approach. 
We introduce CoSy (Concept Synthesis), a novel, architecture-agnostic framework for evaluating textual explanations of latent neurons. Given textual explanations, our proposed framework uses a generative model conditioned on textual input to create data points representing the explanations. By comparing the neuron's response to these generated data points and control data points, we can estimate the quality of the explanation. We validate our framework through sanity checks and benchmark various neuron description methods for Computer Vision tasks, revealing significant differences in quality.",0 "Recent studies evaluating various criteria for explainable artificial intelligence (XAI) suggest that fidelity, stability, and comprehensibility are among the most important metrics considered by users of AI across a diverse collection of usage contexts. We consider these criteria as applied to feature-based attribution methods, which are amongst the most prevalent in XAI literature. Going beyond standard correlation, methods have been proposed that highlight what should be minimally sufficient to justify the classification of an input (viz. pertinent positives). While minimal sufficiency is an attractive property akin to comprehensibility, the resulting explanations are often too sparse for a human to understand and evaluate the local behavior of the model. To overcome these limitations, we incorporate the criteria of stability and fidelity and propose a novel method called Path-Sufficient Explanations Method (PSEM) that outputs a sequence of stable and sufficient explanations for a given input of strictly decreasing size (or value) -- from original input to a minimally sufficient explanation -- which can be thought to trace the local boundary of the model in a stable manner, thus providing better intuition about the local model behavior for the specific input. We validate these claims, both qualitatively and quantitatively, with experiments that show the benefit of PSEM across three modalities (image, tabular and text) as well as versus other path explanations. A user study depicts the strength of the method in communicating the local behavior, where (many) users are able to correctly determine the prediction made by a model.",2 "Due to the subjective nature of current clinical evaluation, the need for automatic severity evaluation in dysarthric speech has emerged. DNN models outperform ML models but lack user-friendly explainability. ML models offer explainable results at a feature level, but their performance is comparatively lower. Current ML models extract various features from raw waveforms to predict severity. However, existing methods do not encompass all dysarthric features used in clinical evaluation. To address this gap, we propose a feature extraction method that minimizes information loss. We introduce ASR transcription as a novel feature extraction source. We finetune the ASR model for dysarthric speech, then use this model to transcribe dysarthric speech and extract word segment boundary information. It enables capturing finer pronunciation and broader prosodic features. These features demonstrated improved severity prediction performance compared to existing features, achieving a balanced accuracy of 83.72%.",0 "Wildfire activity has increased dramatically in the western United States (US) over the last three decades, having a significant impact on air quality and human health. 
However, quantifying the drivers of trends in wildfires and subsequent smoke exposure is challenging, as both natural variability and anthropogenic climate change play important roles. Here we devise an approach involving observed meteorology and vegetation and a range of models to determine the relative roles of anthropogenic climate change and natural variability in driving burned area across the western US. We also examine the influence of anthropogenic climate change on smoke exposure. We estimate that anthropogenic climate change accounts for 33-82% of observed total burned area, depending on the ecoregion, yielding 65% of total fire emissions on average across the western US from 1992 to 2020. In all ecoregions except Mediterranean California, anthropogenic climate change contributes to a greater percentage of burned area in lightning-caused wildfires than in human-caused wildfires. On average, anthropogenic climate change contributes 49% to smoke PM2.5 concentrations in the western US from 1997 to 2020, and explains 58% of the increasing trend in smoke PM2.5 from 2010 to 2020. We further find that populations in northern California, western Oregon, Washington, and parts of Idaho have experienced the greatest smoke exposure attributable to anthropogenic climate change in recent years. Our work highlights the significant role of anthropogenic climate change in degrading air quality in the western US and identifies those regions most vulnerable to wildfire smoke and thus adverse health impacts.",0 "The sum-of-squares method can give rigorous lower bounds on the energy of quantum Hamiltonians. Unfortunately, typically using this method requires solving a semidefinite program, which can be computationally expensive. Further, the typically used degree-$4$ sum-of-squares (also known as the 2RDM method) does not correctly reproduce second order perturbation theory. Here, we give a general method, an analogue of Wigner's $2n+1$ rule for perturbation theory, to compute the order of the error in a given sum-of-squares ansatz. We also give a method for finding solutions of the dual semidefinite program, based on a perturbative ansatz combined with a self-consistent method. As an illustration, we show that for a class of model Hamiltonians (with a gap in the quadratic term and quartic terms chosen as i.i.d. Gaussians), this self-consistent sum-of-squares method significantly improves over the 2RDM method in both speed and accuracy, and also improves over low order perturbation theory. We then explain why the particular ansatz we implement is not suitable for use for quantum chemistry Hamiltonians (due to presence of certain large diagonal terms), but we suggest a modified ansatz that may be suitable, which will be the subject of future work.",0 "Artificial Intelligence (AI) has apparently become one of the most important techniques discovered by humans in history while the human brain is widely recognized as one of the most complex systems in the universe. One fundamental critical question which would affect human sustainability remains open: Will artificial intelligence (AI) evolve to surpass human intelligence in the future? This paper shows that in theory new AI twins with fresh cellular level of AI techniques for neuroscience could approximate the brain and its functioning systems (e.g. perception and cognition functions) with any expected small error and AI without restrictions could surpass human intelligence with probability one in the end. 
This paper indirectly proves the validity of the conjecture made by Frank Rosenblatt 70 years ago about the potential capabilities of AI, especially in the realm of artificial neural networks. Intelligence is just one of the fortuitous but sophisticated creations of nature that has not been fully discovered. Like mathematics and physics, with no restrictions, artificial intelligence would lead to a new subject with its own self-contained systems and principles. We anticipate that this paper opens new doors for 1) AI twins and other AI techniques to be used in efficient cellular-level neuroscience dynamics analysis, functioning analysis of the brain, and brain illness solutions; 2) a new worldwide collaborative scheme for interdisciplinary teams concurrently working on and modelling different types of neurons and synapses and different levels of functioning subsystems of the brain with AI techniques; 3) the development of low-energy AI techniques with the aid of fundamental neuroscience properties; and 4) new controllable, explainable and safe AI techniques with reasoning capabilities for discovering principles in nature.",0 "Although deep learning techniques show promising results for many neuroimaging tasks in research settings, they have not yet found widespread use in clinical scenarios. One of the reasons for this problem is that many machine learning models only identify correlations between the input images and the outputs of interest, which can lead to many practical problems, such as encoding of uninformative biases and reduced explainability. Thus, recent research is exploring whether integrating a priori causal knowledge into deep learning models is a potential avenue to identify these problems. This work introduces a new causal generative architecture named Masked Causal Flow (MACAW) for neuroimaging applications. Within this context, three main contributions are described. First, a novel approach that integrates complex causal structures into normalizing flows is proposed. Second, counterfactual prediction is performed to identify the changes in effect variables associated with a cause variable. Finally, an explicit Bayesian inference for classification is derived and implemented, providing an inherent uncertainty estimation. The feasibility of the proposed method was first evaluated using synthetic data and then using MRI brain data from more than 23000 participants of the UK Biobank study. The evaluation results show that the proposed method can (1) accurately encode causal reasoning and generate counterfactuals highlighting the structural changes in the brain known to be associated with aging, (2) accurately predict a subject's age from a single 2D MRI slice, and (3) generate new samples assuming other values for subject-specific indicators such as age, sex, and body mass index. The code for a toy dataset is available at the following link: https://github.com/vibujithan/macaw-2D.git.",2 "Recommendation Systems have become integral to modern user experiences, but lack transparency in their decision-making processes. Existing explainable recommendation methods are hindered by reliance on a post-hoc paradigm, wherein explanation generators are trained independently of the underlying recommender models. This paradigm necessitates substantial human effort in data construction and raises concerns about explanation reliability. In this paper, we present ExpCTR, a novel framework that integrates large language model based explanation generation directly into the CTR prediction process. 
Inspired by recent advances in reinforcement learning, we employ two carefully designed reward mechanisms: LC alignment, which ensures explanations reflect user intentions, and IC alignment, which maintains consistency with traditional ID-based CTR models. Our approach incorporates an efficient training paradigm with LoRA and a three-stage iterative process. ExpCTR circumvents the need for extensive explanation datasets while fostering synergy between CTR prediction and explanation generation. Experimental results demonstrate that ExpCTR significantly enhances both recommendation accuracy and interpretability across three real-world datasets.",0 "Human behavior is often assumed to be hierarchically structured, made up of abstract actions that can be decomposed into concrete actions. However, behavior is typically measured as a sequence of actions, which makes it difficult to infer its hierarchical structure. In this paper, we explore how people form hierarchically structured plans, using an experimental paradigm with observable hierarchical representations: participants create programs that produce sequences of actions in a language with explicit hierarchical structure. This task lets us test two well-established principles of human behavior: utility maximization (i.e. using fewer actions) and minimum description length (MDL; i.e. having a shorter program). We find that humans are sensitive to both metrics, but that both accounts fail to predict a qualitative feature of human-created programs, namely that people prefer programs with reuse over and above the predictions of MDL. We formalize this preference for reuse by extending the MDL account into a generative model over programs, modeling hierarchy choice as the induction of a grammar over actions. Our account can explain the preference for reuse and provides better predictions of human behavior, going beyond simple accounts of compressibility to highlight a principle that guides hierarchical planning.",0 "Understanding and anticipating human movement has become more critical and challenging in diverse applications such as autonomous driving and surveillance. The complex interactions brought by different relations between agents are a crucial reason why this task remains challenging. Researchers have put much effort into designing a system using rule-based or data-based models to extract and validate the patterns between pedestrian trajectories and these interactions, but this has not yet been adequately addressed. Inspired by how humans perceive social interactions with different levels of relation to themselves, this work proposes the GrouP ConCeption (short for GPCC) model, composed of the Group method, which categorizes nearby agents into either group members or non-group members based on a long-term distance kernel function, and the Conception module, which perceives both visual and acoustic information surrounding the target agent. Evaluated across multiple datasets, the GPCC model demonstrates significant improvements in trajectory prediction accuracy, validating its effectiveness in modeling both social and individual dynamics. 
The qualitative analysis also indicates that the GPCC framework successfully leverages grouping and perception cues in a human-like, intuitive manner, supporting the proposed model's explainability in pedestrian trajectory forecasting.",0 "Effective prompting of generative AI is challenging for many users, particularly in expressing context for comprehension tasks such as explaining spreadsheet formulas, Python code, and text passages. Prompt middleware aims to address this barrier by assisting in prompt construction, but barriers remain for users in expressing adequate control so that they can receive AI responses that match their preferences. We conduct a formative survey (n=38) investigating user needs for control over AI-generated explanations in comprehension tasks, which uncovers a trade-off between standardized but predictable support for prompting, and adaptive but unpredictable support tailored to the user and task. To explore this trade-off, we implement two prompt middleware approaches: Dynamic Prompt Refinement Control (Dynamic PRC) and Static Prompt Refinement Control (Static PRC). The Dynamic PRC approach generates context-specific UI elements that provide prompt refinements based on the user's prompt and user needs from the AI, while the Static PRC approach offers a preset list of generally applicable refinements. We evaluate these two approaches with a controlled user study (n=16) to assess the impact of these approaches on user control of AI responses for crafting better explanations. Results show a preference for the Dynamic PRC approach as it afforded more control, lowered barriers to providing context, and encouraged exploration of and reflection on the tasks, but that reasoning about the effects of different generated controls on the final output remains challenging. Drawing on participant feedback, we discuss design implications for future Dynamic PRC systems that enhance user control of AI responses. Our findings suggest that dynamic prompt middleware can improve the user experience of generative AI workflows by affording greater control and guiding users to a better AI response.",2 "We investigate the nonequilibrium dynamics of a ground-state fermionic many-body gas subjected to a quench between parameter regimes of a topologically nontrivial Hamiltonian. By focusing on the role of the chiral edge states inherent to the system, we calculate the many-body overlap and show that the characteristic monotonic decay of the orthogonality catastrophe with increasing system size is notably altered. Specifically, we demonstrate that the dynamics are governed not solely by the total particle number but rather by the number of occupied single-particle edge states. This behavior is further explained through an analysis of the full work probability distribution, providing a deeper understanding of the system's dynamics.",0 "The connectome, a map of the structural and/or functional connections in the brain, provides a complex representation of the neurobiological phenotypes on which it supervenes. This information-rich data modality has the potential to transform our understanding of the relationship between patterns in brain connectivity and neurological processes, disorders, and diseases. 
However, existing computational techniques used to analyze connectomes are oftentimes insufficient for interrogating multi-subject connectomics datasets: many current methods are either solely designed to analyze single connectomes or leverage heuristic graph statistics that are unable to capture the complete topology of multiscale connections between brain regions. To enable more rigorous connectomics analysis, we introduce a set of robust and interpretable effect size measures motivated by recent theoretical advances in random graph models. These measures facilitate simultaneous analysis of multiple connectomes across different scales of network topology, enabling the robust and reproducible discovery of hierarchical brain structures that vary in relation to phenotypic profiles. In addition to explaining the theoretical foundations and guarantees of our algorithms, we demonstrate their superiority over current state-of-the-art connectomics methods through extensive simulation studies and real-data experiments. Using a set of high-resolution connectomes obtained from genetically distinct mouse strains (including the BTBR mouse -- a standard model of autism -- and three behavioral wild-types), we illustrate how our methods successfully uncover latent information in multi-subject connectomics data and yield valuable insights into the connective correlates of neurological phenotypes that other methods do not capture. The data and code necessary to reproduce our analyses are available at https://github.com/neurodata/MCC.",0 "Motivations: Explainable Artificial Intelligence (XAI) systems aim to improve users' understanding of AI, but XAI research shows many cases of different explanations serving some users well and being unhelpful to others. In non-AI systems, some software practitioners have used inclusive design approaches and sometimes their improvements turned out to be ""curb-cut"" improvements -- not only addressing the needs of underserved users, but also making the products better for everyone. So, if AI practitioners used inclusive design approaches, they too might create curb-cut improvements, i.e., better explanations for everyone. Objectives: To find out, we investigated the curb-cut effects of inclusivity-driven fixes on users' mental models of AI when using an XAI prototype. The prototype and fixes came from an AI team who had adopted an inclusive design approach (GenderMag) to improve their XAI prototype. Methods: We ran a between-subject study with 69 participants with no AI background. 34 participants used the original version of the XAI prototype and 35 used the version with the inclusivity fixes. We compared the two groups' mental model concepts scores, prediction accuracy, and inclusivity. Results: We found four main results. First, it revealed several curb-cut effects of the inclusivity fixes: overall increased engagement with explanations and better mental model concepts scores, which revealed fixes with curb-cut properties. However (second), the inclusivity fixes did not improve participants' prediction accuracy scores -- instead, it appears to have harmed them. This ""curb-fence"" effect (opposite of the curb-cut effect) revealed the AI explanations' double-edged impact. Third, the AI team's inclusivity fixes brought significant improvements for users whose problem-solving styles had previously been underserved. 
Further (fourth), the AI team's fixes reduced the gender gap by 45%.",2 "Recently, the quality of artworks generated using Artificial Intelligence (AI) has increased significantly, resulting in growing difficulties in detecting synthetic artworks. However, limited studies have been conducted on identifying the authenticity of synthetic artworks and their source. This paper introduces AI-ArtBench, a dataset featuring 185,015 artistic images across 10 art styles. It includes 125,015 AI-generated images and 60,000 pieces of human-created artwork. This paper also outlines a method to accurately detect AI-generated images and trace them to their source model. This work proposes a novel Convolutional Neural Network model based on the ConvNeXt model called AttentionConvNeXt. AttentionConvNeXt was implemented and trained to differentiate between the source of the artwork and its style with an F1-Score of 0.869. The accuracy of attribution to the generative model reaches 0.999. To combine the scientific contributions arising from this study, a web-based application named ArtBrain was developed to enable both technical and non-technical users to interact with the model. Finally, this study presents the results of an Artistic Turing Test conducted with 50 participants. The findings reveal that humans could identify AI-generated images with an accuracy of approximately 58%, while the model itself achieved a significantly higher accuracy of around 99%.",2 "When learning an input-output mapping from very few examples, is it better to first infer a latent function that explains the examples, or is it better to directly predict new test outputs, e.g. using a neural network? We study this question on ARC by training neural models for induction (inferring latent functions) and transduction (directly predicting the test output for a given test input). We train on synthetically generated variations of Python programs that solve ARC training tasks. We find inductive and transductive models solve different kinds of test problems, despite having the same training problems and sharing the same neural architecture: Inductive program synthesis excels at precise computations, and at composing multiple concepts, while transduction succeeds on fuzzier perceptual concepts. Ensembling them approaches human-level performance on ARC.",0 "This paper investigates the evolving landscape of decentralized finance (DeFi) by examining its foundational concepts, research trends, and ecosystem. A bibliometric analysis was conducted to identify thematic clusters and track the evolution of DeFi research. Additionally, a thematic review was performed to analyze the roles and interactions of key participants within the DeFi ecosystem, focusing on its opportunities and inherent risks. The bibliometric analysis identified a progression in research priorities, transitioning from an initial focus on technological innovation to addressing sustainability, environmental impacts, and regulatory challenges. Key thematic clusters include decentralization, smart contracts, tokenization, and sustainability concerns. The analysis of participants highlighted the roles of developers, liquidity providers, auditors, and regulators while identifying critical risks such as smart contract vulnerabilities, liquidity constraints, and regulatory uncertainties. 
The study underlines the transformative potential of DeFi to enhance financial inclusion and transparency while emphasizing the need for robust security frameworks and regulatory oversight to ensure long-term stability. This paper comprehensively explains the DeFi ecosystem by integrating bibliometric and thematic analyses. It offers valuable insights for researchers, practitioners, and policymakers, contributing to the ongoing discourse on the sustainable development and integration of DeFi into the global financial system.",2 "Understanding and predicting human migration patterns is a central challenge in population dynamics research. Traditional physics-inspired gravity and radiation models represent migration flows as functions of attractiveness using socio-economic features as proxies. They assume that the relationship between features and migration is spatially invariant, regardless of the origin and destination locations of migrants. We use Bayesian hierarchical models to demonstrate that migrant preferences likely vary based on geographical context, specifically the origin-destination pair. By applying these models to U.S. interstate migration data, we show that incorporating heterogeneity in a single latent migration parameter significantly improves the ability to explain variations in migrant flows. Accounting for such heterogeneity enables our model to outperform classical methods and recent machine-learning approaches. A clustering analysis of spatially varying parameters reveals two distinct groups of migration paths. Individuals migrating along low-flow paths (typically between smaller populations or over larger distances) exhibit more nuanced decision-making. Their choices are less directly influenced by specific destination characteristics such as housing costs, land area, and climate-related disaster costs. High-flow path migrants appear to respond more directly to these destination attributes. Our results challenge assumptions of uniform preferences and underscore the value of capturing heterogeneity in migration models and policymaking.",0 "Accurate diagnosis of gait impairments is often hindered by subjective or costly assessment methods, with current solutions either requiring expensive multi-camera equipment or relying on subjective clinical observation. There is a critical need for accessible, objective tools that can aid in gait assessment while preserving patient privacy. In this work, we present a mobile phone-based, privacy-preserving artificial intelligence (AI) system for classifying gait impairments and introduce a novel dataset of 743 videos capturing seven distinct gait patterns. The dataset consists of frontal and sagittal views of trained subjects simulating normal gait and six types of pathological gait (circumduction, Trendelenburg, antalgic, crouch, Parkinsonian, and vaulting), recorded using standard mobile phone cameras. Our system achieved 86.5% accuracy using combined frontal and sagittal views, with sagittal views generally outperforming frontal views except for specific gait patterns like circumduction. Model feature importance analysis revealed that frequency-domain features and entropy measures were critical for classification performance; in particular, lower-limb keypoints proved most important for classification, aligning with clinical understanding of gait assessment. These findings demonstrate that mobile phone-based systems can effectively classify diverse gait patterns while preserving privacy through on-device processing. 
The high accuracy achieved using simulated gait data suggests the potential of such data for rapid prototyping of gait analysis systems, though clinical validation with patient data remains necessary. This work represents a significant step toward accessible, objective gait assessment tools for clinical, community, and tele-rehabilitation settings.",2 "Calibration is a frequently invoked concept when useful label probability estimates are required on top of classification accuracy. A calibrated model is a function whose values correctly reflect underlying label probabilities. Calibration in itself, however, does not imply classification accuracy, nor human-interpretable estimates, nor is it straightforward to verify calibration from finite data. There is a plethora of evaluation metrics (and loss functions) that each assess a specific aspect of a calibration model. In this work, we initiate an axiomatic study of the notion of calibration. We catalogue desirable properties of calibrated models as well as corresponding evaluation metrics and analyze their feasibility and correspondences. We complement this analysis with an empirical evaluation, comparing common calibration methods to employing a simple, interpretable decision tree.",0 "Knowledge graph reasoning is pivotal in various domains such as data mining, artificial intelligence, the Web, and social sciences. These knowledge graphs function as comprehensive repositories of human knowledge, facilitating the inference of new information. Traditional symbolic reasoning, despite its strengths, struggles with the challenges posed by incomplete and noisy data within these graphs. In contrast, the rise of Neural Symbolic AI marks a significant advancement, merging the robustness of deep learning with the precision of symbolic reasoning. This integration aims to develop AI systems that are not only highly interpretable and explainable but also versatile, effectively bridging the gap between symbolic and neural methodologies. Additionally, the advent of large language models (LLMs) has opened new frontiers in knowledge graph reasoning, enabling the extraction and synthesis of knowledge in unprecedented ways. This survey offers a thorough review of knowledge graph reasoning, focusing on various query types and the classification of neural symbolic reasoning. Furthermore, it explores the innovative integration of knowledge graph reasoning with large language models, highlighting the potential for groundbreaking advancements. This comprehensive overview is designed to support researchers and practitioners across multiple fields, including data mining, AI, the Web, and social sciences, by providing a detailed understanding of the current landscape and future directions in knowledge graph reasoning.",2 "Creativity is a fundamental skill of human cognition. We use textual forma mentis networks (TFMN) to extract network (semantic/syntactic associations) and emotional features from approximately one thousand human- and GPT-3.5-generated stories. Using Explainable Artificial Intelligence (XAI), we test whether features relative to Mednick's associative theory of creativity can explain creativity ratings assigned by humans and GPT-3.5. Using XGBoost, we examine three scenarios: (i) human ratings of human stories, (ii) GPT-3.5 ratings of human stories, and (iii) GPT-3.5 ratings of GPT-generated stories. 
Our findings reveal that GPT-3.5 ratings differ significantly from human ratings not only in terms of correlations but also because of feature patterns identified with XAI methods. GPT-3.5 favours 'its own' stories and rates human stories differently from humans. Feature importance analysis with SHAP scores shows that: (i) network features are more predictive for human creativity ratings but also for GPT-3.5's ratings of human stories; (ii) emotional features played a greater role than semantic/syntactic network structure in GPT-3.5 rating its own stories. These quantitative results underscore key limitations in GPT-3.5's ability to align with human assessments of creativity. We emphasise the need for caution when using GPT-3.5 to assess and generate creative content, as it does not yet capture the nuanced complexity that characterises human creativity.",0 "Human-machine teaming in medical AI requires us to understand to what degree a trained clinician should weigh AI predictions. While previous work has shown the potential of AI assistance at improving clinical predictions, existing clinical decision support systems either provide no explainability of their predictions or use techniques like saliency and Shapley values, which do not allow for physician-based verification. To address this gap, this study compares previously used explainable AI techniques with a newly proposed technique termed '2-factor retrieval (2FR)', which is a combination of interface design and search retrieval that returns similarly labeled data without processing this data. This results in a 2-factor security blanket where: (a) correct images need to be retrieved by the AI; and (b) humans should associate the retrieved images with the current pathology under test. We find that when tested on chest X-ray diagnoses, 2FR leads to increases in clinician accuracy, with particular improvements when clinicians are radiologists and have low confidence in their decision. Our results highlight the importance of understanding how different modes of human-AI decision making may impact clinician accuracy in clinical decision support systems.",0 "Stereotypes are generalised assumptions about societal groups, and even state-of-the-art LLMs using in-context learning struggle to identify them accurately. Due to the subjective nature of stereotypes, where what constitutes a stereotype can vary widely depending on cultural, social, and individual perspectives, robust explainability is crucial. Explainable models ensure that these nuanced judgments can be understood and validated by human users, promoting trust and accountability. We address these challenges by introducing HEARTS (Holistic Framework for Explainable, Sustainable, and Robust Text Stereotype Detection), a framework that enhances model performance, minimises carbon footprint, and provides transparent, interpretable explanations. We establish the Expanded Multi-Grain Stereotype Dataset (EMGSD), comprising 57,201 labelled texts across six groups, including under-represented demographics like LGBTQ+ and regional stereotypes. Ablation studies confirm that BERT models fine-tuned on EMGSD outperform those trained on individual components. 
We then analyse a fine-tuned, carbon-efficient ALBERT-V2 model using SHAP to generate token-level importance values, ensuring alignment with human understanding, and calculate explainability confidence scores by comparing SHAP and LIME outputs...",0 "Neuronal heterogeneity, characterized by the presence of a multitude of spiking neuronal patterns, is a widespread phenomenon throughout the nervous system. In particular, the brain exhibits strong variability among inhibitory neurons. Despite the huge neuronal heterogeneity across brain regions, which in principle could decrease synchronization, cortical areas coherently oscillate during various cognitive tasks. Therefore, the functional significance of neuronal heterogeneity remains a subject of active investigation. Previous studies typically focus on the role of heterogeneity in the dynamic properties of only one population. Here, we explore how different types of inhibitory neurons can contribute to the diversity of the phase relations between two cortical areas. This research sheds light on the potential impact of local properties, such as neuronal variability, on communication between distant brain regions. We show that both homogeneous and heterogeneous inhibitory networks can exhibit phase diversity and nonintuitive regimes such as anticipated synchronization (AS) and phase bistability. It has been proposed that the bi-stable phase could be related to bi-stable perception, such as in the Necker cube. Moreover, we show that heterogeneity enlarges the region of zero-lag synchronization and bistability. We also show that the parameter controlling inhibitory heterogeneity modulates the transition from the usual delayed synchronization regime (DS) to AS. Finally, we show that the inhibitory heterogeneity drives the internal dynamics of the free-running population. Therefore, we suggest a possible mechanism to explain when the DS-AS transition occurs via zero-lag synchronization or bi-stability.",0 "Evaluating off-policy decisions using batch data poses significant challenges due to limited sample sizes leading to high variance. To improve Off-Policy Evaluation (OPE), we must identify and address the sources of this variance. Recent research on Concept Bottleneck Models (CBMs) shows that using human-explainable concepts can improve predictions and provide better understanding. We propose incorporating concepts into OPE to reduce variance. Our work introduces a family of concept-based OPE estimators, proving that they remain unbiased and reduce variance when concepts are known and predefined. Since real-world applications often lack predefined concepts, we further develop an end-to-end algorithm to learn interpretable, concise, and diverse parameterized concepts optimized for variance reduction. Our experiments with synthetic and real-world datasets show that both known and learned concept-based estimators significantly improve OPE performance. Crucially, we show that, unlike other OPE methods, concept-based estimators are easily interpretable and allow for targeted interventions on specific concepts, further enhancing the quality of these estimators.",0 "Understanding public perception of artificial intelligence (AI) and the tradeoffs between potential risks and benefits is crucial, as these perceptions might shape policy decisions, influence innovation trajectories for successful market strategies, and determine individual and societal acceptance of AI technologies. 
Using a representative sample of 1100 participants from Germany, this study examines mental models of AI. Participants quantitatively evaluated 71 statements about AI's future capabilities (e.g., autonomous driving, medical care, art, politics, warfare, and societal divides), assessing the expected likelihood of occurrence, perceived risks, benefits, and overall value. We present rankings of these projections alongside visual mappings illustrating public risk-benefit tradeoffs. While many scenarios were deemed likely, participants often associated them with high risks, limited benefits, and low overall value. Across all scenarios, 96.4% ($r^2=96.4\%$) of the variance in value assessment can be explained by perceived risks ($\beta=-.504$) and perceived benefits ($\beta=+.710$), with no significant relation to expected likelihood. Demographics and personality traits influenced perceptions of risks, benefits, and overall evaluations, underscoring the importance of increasing AI literacy and tailoring public information to diverse user needs. These findings provide actionable insights for researchers, developers, and policymakers by highlighting critical public concerns and individual factors essential to align AI development with individual values.",2 "This study proposes a qualitative analysis of self replies in Wikipedia talk pages, more precisely when the first two messages of a discussion are written by the same user. This specific pattern occurs in more than 10% of threads with two messages or more and can be explained by a number of reasons. After a first examination of the lexical specificities of second messages, we propose a seven categories typology and use it to annotate two reference samples (English and French) of 100 threads each. Finally, we analyse and compare the performance of human annotators (who reach a reasonable global efficiency) and instruction-tuned LLMs (which encounter important difficulties with several categories).",2 "In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique based on segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations. 
Our experiments use single-stage object detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace where safety is of paramount importance, and ii) an assembly area of battery kits, where safety is critical due to the potential for damage among high-risk components. Our findings evince that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used.",0 "Self-driving cars increasingly rely on deep neural networks to achieve human-like driving. However, the opacity of such black-box motion planners makes it challenging for the human behind the wheel to accurately anticipate when they will fail, with potentially catastrophic consequences. Here, we introduce concept-wrapper network (i.e., CW-Net), a method for explaining the behavior of black-box motion planners by grounding their reasoning in human-interpretable concepts. We deploy CW-Net on a real self-driving car and show that the resulting explanations refine the human driver's mental model of the car, allowing them to better predict its behavior and adjust their own behavior accordingly. Unlike previous work using toy domains or simulations, our study presents the first real-world demonstration of how to build authentic autonomous vehicles (AVs) that give interpretable, causally faithful explanations for their decisions, without sacrificing performance. We anticipate our method could be applied to other safety-critical systems with a human in the loop, such as autonomous drones and robotic surgeons. Overall, our study suggests a pathway to explainability for autonomous agents as a whole, which can help make them more transparent, their deployment safer, and their usage more ethical.",0 "The recent O-RAN specifications promote the evolution of RAN architecture by function disaggregation, adoption of open interfaces, and instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controllers (RICs) entities. This paves the road to novel data-driven network management approaches based on programmable logic. Aided by Artificial Intelligence (AI) and Machine Learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised. Nevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, affecting their trust in such novel tools. eXplainable AI (XAI) aims at solving this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes, reducing the human-to-machine barrier. In this survey, we provide a summary of the XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture along with its main building blocks. We then present various use cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review some projects/standards that tackle this area. 
Finally, we identify different challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.",2 "This paper addresses the challenge of enhancing artificial intelligence reasoning capabilities, focusing on logicality within the Abstraction and Reasoning Corpus (ARC). Humans solve such visual reasoning tasks based on their observations and hypotheses, and they can explain their solutions with proper reasoning. However, many previous approaches focused only on grid transitions, which is not enough for AI to provide reasonable and human-like solutions. By considering the human process of solving visual reasoning tasks, we have concluded that the thinking process is likely an abductive reasoning process. Thus, we propose a novel framework that symbolically represents the observed data as a knowledge graph and extracts core knowledge that can be used for solution generation. This information limits the solution search space and helps provide a reasonable mid-process. Our approach holds promise for improving AI performance on ARC tasks by effectively narrowing the solution space and providing logical solutions grounded in core knowledge extraction.",0 "We explore the drawing of an axisymmetric viscoelastic tube subject to inertial and surface tension effects. We adopt the Giesekus constitutive model and derive asymptotic long-wave equations for weakly viscoelastic effects. Intuitively, one might imagine that the elastic stresses should act to prevent hole closure during the drawing process. Surprisingly, our results show that the hole closure at the outlet is enhanced by elastic effects for most parameter values. However, the opposite is true if the tube has a very large hole size at the inlet of the device or if the axial stretching is very weak. We explain the physical mechanism underlying this phenomenon by examining how the second normal stress difference induced by elastic effects modifies the hole evolution process. We also determine how viscoelasticity affects the stability of the drawing process and show that elastic effects are always destabilizing for negligible inertia. This is in direct contrast to the case of a thread without a hole for which elastic effects are always stabilizing. On the other hand, our results show that if the inertia is non-zero, elastic effects can be either stabilizing or destabilizing depending on the parameters.",0 "Early detection and diagnosis of coronary artery disease (CAD) could save lives and reduce healthcare costs. The current clinical practice is to perform CAD diagnosis through analysing medical images from computed tomography coronary angiography (CTCA). Most current approaches utilise deep learning methods but require centerline extraction and multi-planar reconstruction. These indirect methods are not designed in a clinician-friendly manner, and they complicate the interventional procedure. Furthermore, the current deep learning methods do not provide exact explainability, which limits the usefulness of these methods for deployment in clinical settings. In this study, we first propose a 3D ResNet-50 deep learning model to directly classify normal subjects and CAD patients on CTCA images, then we demonstrate that a 2D modified U-Net model can be subsequently employed to segment the coronary arteries. 
Our proposed approach outperforms the state-of-the-art models by 21.43% in terms of classification accuracy. The classification model with focal loss provides a better and more focused heat map, and the segmentation model provides better explainability than the classification-only model. The proposed holistic approach not only provides a simpler and clinician-friendly solution but also good classification accuracy and exact explainability for CAD diagnosis.",2 "Kalman's fundamental notion of a controllable state space system \cite{k} has been generalised to higher order systems by Willems \cite{w}, and further to distributed systems defined by partial differential equations \cite{ps}. It turns out, that for systems defined in several important spaces of distributions, controllability is now identical to the notion of vector potential in physics, or of vanishing homology in mathematics. These notes will explain this relationship, and a few of its consequences. It will also pose an important question: does a controllable system, in any space of distributions, always admit a vector potential? In other words, is Kalman's notion of a controllable system, suitably generalised, nothing more -- nor less -- than the possibility of describing the dynamics of the system by means of a vector potential? Furthermore, it also turns out that the category of distributed systems bears many formal similarities to the category of affine algebraic sets. This raises a second important question: what is the category for which these distributed systems are `local models', just as affine algebraic sets are local models for the category of algebraic varieties? It would then be possible to extend the theory of control described in these notes to this larger category of systems.",0 "Vascular age is traditionally measured using invasive methods or through 12-lead electrocardiogram (ECG). This paper utilizes a low-cost single-lead (lead-I) ECG module to predict the vascular age of an apparently healthy young person. In addition, we also study the impact of smoking on ECG traces of the light-but-habitual smokers. We begin by collecting (lead-I) ECG data from 42 apparently healthy subjects (smokers and non-smokers) aged 18 to 30 years, using our custom-built low-cost single-lead ECG module, and anthropometric data, e.g., body mass index, smoking status, blood pressure, etc. Under our proposed method, we first pre-process our dataset by denoising the ECG traces, followed by baseline drift removal, followed by z-score normalization. Next, we create another dataset by dividing the ECG traces into overlapping segments of five-second duration. We then feed both segmented and unsegmented datasets to a number of machine learning models, a 1D convolutional neural network, and ResNet18 model, for vascular ageing prediction. We also do transfer learning whereby we pre-train our models on a public PPG dataset, and later, fine-tune and evaluate them on our unsegmented ECG dataset. The random forest model outperforms all other models and previous works by achieving a mean squared error (MSE) of 0.07 and coefficient of determination R2 of 0.99, MSE of 3.56 and R2 of 0.26, MSE of 0.99 and R2 of 0.87, for segmented ECG dataset, for unsegmented ECG dataset, and for transfer learning scenario, respectively. Finally, we utilize the explainable AI framework to identify those ECG features that get affected due to smoking. 
This work is aligned with the sustainable development goals 3 and 10 of the United Nations which aim to provide low-cost but quality healthcare solutions to the unprivileged. This work also finds its applications in the broad domain of forensic science.",0 "Non-pharmaceutical interventions (NPIs) aimed at limiting human mobility have demonstrated success in curbing the transmission of airborne diseases. However, their effectiveness in managing vector-borne diseases remains less clear. In this study, we introduce a framework that integrates mobility data with vulnerability matrices to evaluate the differential impacts of mobility-based NPIs on both airborne and vector-borne pathogens. Focusing on the city of Santiago de Cali in Colombia, our analysis illustrates how mobility-based policies previously proposed to contain airborne disease can make cities more prone to the spread of vector-borne diseases. By proposing a simplified synthetic model, we explain the limitations of the latter policies and exploit the synergies between both types of diseases to find new interventions reshaping the mobility network for their simultaneous control. Our results thus offer valuable insights into the epidemiological trade-offs of concurrent disease management, providing a foundation for the design and assessment of targeted interventions that reshape human mobility.",0 "Today, GPS-equipped mobile devices are ubiquitous, and they generate Location-Based Service (LBS) data, which has become a critical resource for understanding human mobility. However, inherent limitations in LBS datasets, primarily characterized by discontinuity and sparsity, may introduce significant biases in representing individual movement patterns. This study develops data quality metrics for LBS data, examines their disparities among different populations, and quantifies their effects on inferred individual movement, stays in particular, in the Boston Metropolitan Area. We find that data from higher-income, more educated, and predominantly white census block groups (CBGs) show higher sampling rates but paradoxically lower data quality. This contradiction may stem from greater privacy awareness in these communities. Additionally, we propose a new framework to resample LBS data and quantitatively evaluate the inferential biases associated with data of varying quality. This versatile framework can analyze the impacts originating from different data processing workflows with LBS data. Using linear regression models with clustered standard error, we assess the impact of data quality metrics on inferring the number of stay points. The results show that better data quality, characterized by the number of observations and temporal occupancy, can significantly reduce the bias when calculating the stay points of an individual. The introduction of additional data quality metrics into the regression model can further explain the bias. Overall, this study provides insights into how data quality can influence our understanding of human mobility patterns, highlighting the importance of carefully handling LBS data in research.",0 "Online hate speech is responsible for violent attacks such as, e.g., the Pittsburgh synagogue shooting in 2018, thereby posing a significant threat to vulnerable groups and society in general. However, little is known about what makes hate speech on social media go viral. In this paper, we collect N = 25,219 cascades with 65,946 retweets from X (formerly known as Twitter) and classify them as hateful vs. normal. 
Using a generalized linear regression, we then estimate differences in the spread of hateful vs. normal content based on author and content variables. We thereby identify important determinants that explain differences in the spreading of hateful vs. normal content. For example, hateful content authored by verified users is disproportionately more likely to go viral than hateful content from non-verified ones: hateful content from a verified user (as opposed to normal content) has a 3.5 times larger cascade size, a 3.2 times longer cascade lifetime, and a 1.2 times larger structural virality. Altogether, we offer novel insights into the virality of hate speech on social media.",0 "The cybercriminal underground consists of hundreds of forum communities that function as marketplaces and information-exchange platforms for both established and wannabe cybercriminals. The ecosystem is continuously evolving, with users migrating between forums and platforms. The emergence of cybercrime communities in Telegram and Discord only highlights the rising fragmentation and adaptability of the ecosystem. In this position paper, we explore the economic incentives and trust-building mechanisms that may drive a participant (hereafter, Dmitry) of the cybercriminal underground ecosystem to migrate from one forum or platform to another. What are the market signals that matter to Dmitry's decision to join a specific community, and what roles and purposes do these communities or platforms play within the broader ecosystem? Ultimately, we build towards our thesis that by studying these mechanisms we could explain, and therefore act upon, Dmitry's choice to join one criminal community rather than another. To build this argument, we first discuss previous work evaluating differences in trust signals depicted in criminal forums. We then present preliminary results evaluating criminal channels on Telegram using those same lenses. Further, we analyze the different roles these channels play in the criminal ecosystem. We then discuss implications for future research.",2 "Large Language Models (LLMs) have significantly impacted nearly every domain of human knowledge. However, the explainability of these models, especially to laypersons, which is crucial for instilling trust, has been examined through various skeptical lenses. In this paper, we introduce a novel notion of LLM explainability to laypersons, termed $\textit{ReQuesting}$, across three high-priority application domains -- law, health and finance, using multiple state-of-the-art LLMs. The proposed notion exhibits faithful generation of explainable layman-understandable algorithms on multiple tasks through a high degree of reproducibility. Furthermore, we observe a notable alignment of the explainable algorithms with the intrinsic reasoning of the LLMs.",0 "The application of Virtual Reality Environments (VRE) has been gaining momentum as a relatively new tool to assist with mitigating various difficulties, including abstractness of concepts, lack of user engagement, and a perception of disconnection from other users. A VRE may offer both synchronous and asynchronous experiences, in addition to an immersive environment which promotes users' engagement. Past research has shown that, in general, VREs do improve the experiences they try to enhance in many aspects of human activity. Terms like immersiveness and 3D representation of real-life objects and environments are, as it appears, the two most obvious positive effects of Virtual Reality (VR) applications. 
However, despite these benefits, VR does not come without challenges. The three main concepts/challenges are spatial design, the collaborative interaction between members and the VRE, and audio and video fidelity. Each of the three includes a number of other components that should be addressed for the total experience to be fine-tuned. These include mutual embodiment and shared perspectives, teleportation, gestural interaction, symmetric and asymmetric collaboration, physical and virtual co-location, inventory, and time and spatial synchronization. This paper comprises a survey of the literature that identifies and explains the features introduced by and the challenges involved with VREs, and furthermore provides various interesting future research directions.",2 "Explaining the decisions of AI has become vital for fostering appropriate user trust in these systems. This paper investigates explanations for a structured prediction task called ``text-to-SQL Semantic Parsing'', which translates a natural language question into a structured query language (SQL) program. In this task setting, we designed three levels of model explanation, each exposing a different amount of the model's decision-making details (called ``algorithm transparency''), and investigated how different model explanations could potentially yield different impacts on the user experience. Our study with $\sim$100 participants shows that (1) the low-/high-transparency explanations often lead to less/more user reliance on the model decisions, whereas the medium-transparency explanations strike a good balance. We also show that (2) only the medium-transparency participant group was able to engage further in the interaction and exhibit increasing performance over time, and that (3) they showed the least changes in trust before and after the study.",2 "Despite advancements in Large Language Model (LLM) alignment, understanding the reasons behind LLM preferences remains crucial for bridging the gap between desired and actual behavior. LLMs often exhibit biases or tendencies that diverge from human preferences, such as favoring certain writing styles or producing overly verbose outputs. However, current methods for evaluating preference alignment often lack explainability, relying on coarse-grained comparisons. To address this, we introduce PROFILE (PRObing Factors of InfLuence for Explainability), a novel framework that uncovers and quantifies the influence of specific factors driving preferences. PROFILE's factor level analysis explains the 'why' behind human-model alignment and misalignment, offering insights into the direction of model improvement. We apply PROFILE to analyze human and LLM preferences across three tasks: summarization, helpful response generation, and document-based question-answering. Our factor level analysis reveals a substantial discrepancy between human and LLM preferences in generation tasks, whereas LLMs show strong alignment with human preferences in evaluation tasks. We demonstrate how leveraging factor level insights, including addressing misaligned factors or exploiting the generation-evaluation gap, can improve alignment with human preferences. 
This work underscores the importance of explainable preference analysis and highlights PROFILE's potential to provide valuable training signals, driving further improvements in human-model alignment.",2 "Socio-spatial segregation is the physical separation of different social, economic, or demographic groups within a geographic space, often resulting in unequal access to resources, services, and opportunities. The literature has traditionally focused on residential segregation, examining how individuals' residential locations are distributed differently across neighborhoods based on various social attributes, e.g., race, ethnicity, and income. However, this approach overlooks the complexity of spatial segregation in people's daily activities, which often extend far beyond residential areas. Since the 2010s, emerging mobility data sources have enabled a new understanding of socio-spatial segregation by considering daily activities such as work, school, shopping, and leisure visits. From traditional surveys to GPS trajectories, diverse data sources reveal that daily mobility can result in spatial segregation levels that differ from those observed in residential segregation. This literature review focuses on three critical questions: (a) What are the strengths and limitations of segregation research incorporating extensive mobility data? (b) How do human mobility patterns relate to individuals' residential vs. experienced segregation levels? and (c) What key factors explain the relationship between one's mobility patterns and experienced segregation? Our literature review enhances the understanding of socio-spatial segregation at the individual level and clarifies core concepts and methodological challenges in the field. Our review explores studies of key themes: segregation, activity space, co-presence, and the built environment. By synthesizing their findings, we aim to offer actionable insights for reducing segregation.",2 "The use of machine learning (ML) in critical domains such as medicine poses risks and requires regulation. One requirement is that decisions of ML systems in high-risk applications should be human-understandable. The field of ""explainable artificial intelligence"" (XAI) seemingly addresses this need. However, in its current form, XAI is unfit to provide quality control for ML; it itself needs scrutiny. Popular XAI methods cannot reliably answer important questions about ML models, their training data, or a given test input. We recapitulate results demonstrating that popular XAI methods systematically attribute importance to input features that are independent of the prediction target. This limits their utility for purposes such as model and data (in)validation, model improvement, and scientific discovery. We argue that the fundamental reason for this limitation is that current XAI methods do not address well-defined problems and are not evaluated against objective criteria of explanation correctness. Researchers should formally define the problems they intend to solve first and then design methods accordingly. This will lead to notions of explanation correctness that can be theoretically verified and objective metrics of explanation performance that can be assessed using ground-truth data.",0 "Large language models (LLMs) are becoming more advanced and widespread and have shown their applicability to various domains, including cybersecurity. 
Static malware analysis is one of the most important tasks in cybersecurity; however, it is time-consuming and requires a high level of expertise. Therefore, we conducted a demonstration experiment focusing on whether an LLM can be used to support static analysis. First, we evaluated the ability of the LLM to explain malware functionality. The results showed that the LLM can generate descriptions that cover functions with an accuracy of up to 90.9\%. In addition, we asked six static analysts to perform a pseudo static analysis task using LLM explanations to verify that the LLM can be used in practice. Through subsequent questionnaires and interviews with the participants, we also demonstrated the practical applicability of LLMs. Lastly, we summarized the problems and required functions when using an LLM as static analysis support, as well as recommendations for future research opportunities.",2 "A ReLU network is a piecewise linear function over polytopes. Figuring out the properties of such polytopes is of fundamental importance for the research and development of neural networks. So far, either theoretical or empirical studies on polytopes only stay at the level of counting their number, which is far from a complete characterization. Here, we propose to study the shapes of polytopes via the number of faces of the polytope. Then, by computing and analyzing the histogram of faces across polytopes, we find that a ReLU network has relatively simple polytopes under both initialization and gradient descent, although these polytopes can be rather diverse and complicated by a specific design. This finding can be appreciated as a kind of generalized implicit bias, subjected to the intrinsic geometric constraint in space partition of a ReLU network. Next, we perform a combinatorial analysis to explain why adding depth does not generate a more complicated polytope by bounding the average number of faces of polytopes with the dimensionality. Our results concretely reveal what kind of simple functions a network learns and what will happen when a network goes deep. Also, by characterizing the shape of polytopes, the number of faces can be a novel leverage for other problems, \textit{e.g.}, serving as a generic tool to explain the power of popular shortcut networks such as ResNet and analyzing the impact of different regularization strategies on a network's space partition.",0 "When human operators of cyber-physical systems encounter surprising behavior, they often consider multiple hypotheses that might explain it. In some cases, taking information-gathering actions such as additional measurements or control inputs given to the system can help resolve uncertainty and determine the most accurate hypothesis. The task of optimizing these actions can be formulated as a belief-space Markov decision process that we call a hypothesis-driven belief MDP. Unfortunately, this problem suffers from the curse of history similar to a partially observable Markov decision process (POMDP). To plan in continuous domains, an agent needs to reason over countlessly many possible action-observation histories, each resulting in a different belief over the unknown state. The problem is exacerbated in the hypothesis-driven context because each action-observation pair spawns a different belief for each hypothesis, leading to additional branching. This paper considers the case in which each hypothesis corresponds to a different dynamic model in an underlying POMDP. 
We present a new belief MDP formulation that: (i) enables reasoning over multiple hypotheses, (ii) balances the goals of determining the (most likely) correct hypothesis and performing well in the underlying POMDP, and (iii) can be solved with sparse tree search.",0 "LLM-based autonomous agents have demonstrated outstanding performance in solving complex industrial tasks. However, in the pursuit of carbon neutrality and high-performance renewable energy systems, existing AI-assisted design automation faces significant limitations in explainability, scalability, and usability. To address these challenges, we propose LP-COMDA, an LLM-based, physics-informed autonomous agent that automates the modulation design of power converters in Power Electronics Systems with minimal human supervision. Unlike traditional AI-assisted approaches, LP-COMDA contains an LLM-based planner that gathers and validates design specifications through a user-friendly chat interface. The planner then coordinates with physics-informed design and optimization tools to iteratively generate and refine modulation designs autonomously. Through the chat interface, LP-COMDA provides an explainable design process, presenting explanations and charts. Experiments show that LP-COMDA outperforms all baseline methods, achieving a 63.2% reduction in error compared to the second-best benchmark method in terms of standard mean absolute error. Furthermore, empirical studies with 20 experts conclude that design time with LP-COMDA is over 33 times faster than conventional methods, showing its significant improvement on design efficiency over the current processes.",0 "Gait analysis using computer vision is an emerging field in AI, offering clinicians an objective, multi-feature approach to analyse complex movements. Despite its promise, current applications using RGB video data alone are limited in measuring clinically relevant spatial and temporal kinematics and establishing normative parameters essential for identifying movement abnormalities within a gait cycle. This paper presents a data-driven method using RGB video data and 2D human pose estimation for developing normative kinematic gait parameters. By analysing joint angles, an established kinematic measure in biomechanics and clinical practice, we aim to enhance gait analysis capabilities and improve explainability. Our cycle-wise kinematic analysis enables clinicians to simultaneously measure and compare multiple joint angles, assessing individuals against a normative population using just monocular RGB video. This approach expands clinical capacity, supports objective decision-making, and automates the identification of specific spatial and temporal deviations and abnormalities within the gait cycle.",0 "We present Soda (Symbolic Objective Descriptive Analysis), a language that helps to treat qualities and quantities in a natural way and greatly simplifies the task of checking their correctness. We present key properties for the language motivated by the design of a descriptive language to encode complex requirements on computer systems, and we explain how these key properties must be addressed to model these requirements with simple definitions. We give an overview of a tool that helps to describe problems in an easy way that we consider more transparent and less error-prone.",0 "Effective communication is essential in collaborative tasks, so AI-equipped robots working alongside humans need to be able to explain their behaviour in order to cooperate effectively and earn trust. 
We analyse and classify communications among human participants collaborating to complete a simulated emergency response task. The analysis identifies messages that relate to various kinds of interactive explanations identified in the explainable AI literature. This allows us to understand what type of explanations humans expect from their teammates in such settings, and thus where AI-equipped robots most need explanation capabilities. We find that most explanation-related messages seek clarification of the decisions or actions taken. We also confirm that messages have an impact on the performance of our simulated task.",2 "We report on the first experimental characterization of a laser-wakefield accelerator able to deliver, in a single pulse, doses in excess of \unit[1]{Gy} on timescales of the order of a hundred femtoseconds, reaching unprecedented average dose-rates up to \unit[10$^{13}$]{Gy/s}. The irradiator is demonstrated to deliver doses tuneable up to \unit[2.2]{Gy} in a cm$^2$ area and with a high degree of longitudinal and transverse uniformity in a single irradiation. In this regime, proof-of-principle irradiation of patient-derived glioblastoma stem-like cells and human skin fibroblast cells shows indications of a differential cellular response, when compared to reference irradiations at conventional dose-rates. These include a statistically significant increase in relative biological effectiveness ($1.40\pm0.08$ at 50\% survival for both cell lines) and a significant reduction of the relative radioresistance of tumour cells. Data analysis provides preliminary indications that these effects might not be fully explained by induced oxygen depletion in the cells but may instead be linked to a higher complexity of the damage triggered by the ultra-high density of ionising tracks of femtosecond-scale radiation pulses. These results demonstrate an integrated platform for systematic radiobiological studies at unprecedented beam durations and dose-rates, a unique infrastructure for translational research in radiobiology at the femtosecond scale.",2 "There is a growing need to understand how digital systems can support clinical decision-making, particularly as artificial intelligence (AI) models become increasingly complex and less human-interpretable. This complexity raises concerns about trustworthiness, impacting safe and effective adoption of such technologies. Improved understanding of decision-making processes and requirements for explanations coming from decision support tools is a vital component in providing effective explainable solutions. This is particularly relevant in the data-intensive, fast-paced environments of intensive care units (ICUs). To explore these issues, group interviews were conducted with seven ICU clinicians, representing various roles and experience levels. Thematic analysis revealed three core themes: (T1) ICU decision-making relies on a wide range of factors, (T2) the complexity of patient state is challenging for shared decision-making, and (T3) requirements and capabilities of AI decision support systems. We include design recommendations from clinical input, providing insights to inform future AI systems for intensive care.",2 "In recent years, the rapid growth of security vulnerabilities has posed great challenges to tracing and managing them. For example, it was reported that the NVD database experienced significant delays due to a shortage of maintainers. 
Such delays make it challenging for third-party security personnel (e.g., administrators) to trace the information related to a CVE. To help security personnel trace a vulnerability patch, we build a retrieval system that automatically retrieves the patch in the repository. Inspired by existing work on explainable machine learning, we ask the following research question: can explanations help security maintainers make decisions in patch tracing? First, we investigate using LIME (a widely used explainable machine learning method) to highlight the rationale tokens in the commit message and code. In addition, we propose an explanation method called TfIdf-Highlight, which leverages the Tf-Idf statistics to select the most informative words in the repository and the dataset. We evaluate the effectiveness of highlighting using two experiments. First, we compare LIME and TfIdf-Highlight using a faithfulness score (i.e., sufficiency and comprehensiveness) defined for ranking. We find that TfIdf-Highlight significantly outperforms LIME in sufficiency (by 15\%) and slightly outperforms it in comprehensiveness. Second, we conduct a blind human labeling experiment by asking the annotators to guess the patch under 3 settings (TfIdf-Highlight, LIME, and no highlight). We find that the helpfulness score for TfIdf-Highlight is higher than that of LIME, while the labeling accuracies of LIME and TfIdf-Highlight are similar. Nevertheless, highlighting does not improve the accuracy over non-highlighting.",2 "This paper continues the program of developing a relativistic quantum information theory in terms of unequal-time correlation functions in quantum field theory (QFT)[arXiv:2208.03696]. Here, we focus on the definition of quantum resources from the irreducibly quantum behavior contained in the correlation functions of a QFT. We explain how set-ups with $N$ particle detectors probe the information in the high-order field correlation functions. Our main object is the associated hierarchy of probability densities of $N$-detector events. We show that classical probabilistic hierarchies are subject to two conditions: Kolmogorov additivity and measurement independence. QFT violates those conditions, and the degree of violation enables us to define novel quantum resources. We give specific examples in set-ups where the main observables are the times of particle detection events. The new resources capture instances of irreducibly quantum behavior that differ from the quantum behavior encapsulated in Bell inequalities. An interesting byproduct of our analysis is a relativistic state reduction rule for particles detected through scattering.",0 "Deep learning (DL) enables deep neural networks (DNNs) to automatically learn complex tasks or rules from given examples without instructions or guiding principles. As we do not engineer DNNs' functions, it is extremely difficult to diagnose their decisions, and multiple lines of research have sought to explain the principles of their operations. Notably, one line of studies suggests that DNNs may learn concepts, the high-level features that are recognizable to humans. In this study, we extend this line of work and hypothesize that DNNs can develop abstract codes that can be used to augment DNNs' decision-making. 
To test this hypothesis, we combine foundation segmentation models and unsupervised learning to extract internal codes and identify the potential use of abstract codes to make DL's decision-making more reliable and safer.",0 "Humans and other organisms make decisions by choosing between different options, with the aim of maximizing reward and minimizing cost. The main theoretical framework for modeling the decision-making process has been based on the highly successful drift-diffusion model, which is a simple tool for explaining many aspects of this process. However, new observations challenge this model. Recently, it was found that inhibitory tone increases during high cognitive load and situations of uncertainty, but the origin of this phenomenon is not understood. Motivated by this observation, we extend a recently developed model for decision making while animals move towards targets in real space. We introduce an integrated Ising-type model that includes global inhibition, and use it to explore its role in decision-making. This model can explain how the brain may utilize inhibition to improve its decision-making accuracy. Compared to experimental results, this model suggests that the regime of the brain's decision-making activity is in proximity to a critical transition line between the ordered and disordered phases. Within the model, the critical region near the transition line has the advantageous property of enabling a significant decrease in error with a small increase in inhibition and also exhibits unique properties with respect to learning and memory decay.",0 "The human brain is a dynamical system whose extremely complex sensor-driven neural processes give rise to conceptual, logical cognition. Understanding the interplay between nonlinear neural dynamics and concept-level cognition remains a major scientific challenge. Here I propose a mechanism of neurodynamical organization, called conceptors, which unites nonlinear dynamics with basic principles of conceptual abstraction and logic. It becomes possible to learn, store, abstract, focus, morph, generalize, de-noise and recognize a large number of dynamical patterns within a single neural system; novel patterns can be added without interfering with previously acquired ones; neural noise is automatically filtered. Conceptors help explain how conceptual-level information processing emerges naturally and robustly in neural systems, and remove a number of roadblocks in the theory and applications of recurrent neural networks.",0 "Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics for machine translation (for example, COMET or BERTScore) are based on black-box large language models. They often achieve strong correlations with human judgments, but recent research indicates that the lower-quality classical metrics remain dominant, one of the potential reasons being that their decision processes are more transparent. To foster more widespread acceptance of novel high-quality metrics, explainability thus becomes crucial. In this concept paper, we identify key properties as well as key goals of explainable machine translation metrics and provide a comprehensive synthesis of recent techniques, relating them to our established goals and properties. In this context, we also discuss the latest state-of-the-art approaches to explainable metrics based on generative models such as ChatGPT and GPT4. Finally, we contribute a vision of next-generation approaches, including natural language explanations. 
We hope that our work can help catalyze and guide future research on explainable evaluation metrics and, indirectly, also contribute to better and more transparent machine translation systems.",0 "Collaborative robots and machine learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity and enhancing safety. Despite this, we show in a ubiquitous experimental domain, Overcooked-AI, that state-of-the-art techniques for human-machine teaming (HMT), which rely on imitation or reinforcement learning, are brittle and result in a machine agent that aims to decouple the machine's and human's actions to act independently rather than in a synergistic fashion. To remedy this deficiency, we develop HMT approaches that enable iterative, mixed-initiative team development allowing end-users to interactively reprogram interpretable AI teammates. Our 50-subject study provides several findings that we summarize into guidelines. While all approaches underperform a simple collaborative heuristic (a critical, negative result for learning-based methods), we find that white-box approaches supported by interactive modification can lead to significant team development, outperforming white-box approaches alone, and that black-box approaches are easier to train and result in better HMT performance, highlighting a tradeoff between explainability and interactivity versus ease-of-training. Together, these findings present three important future research directions: 1) Improving the ability to generate collaborative agents with white-box models, 2) Better learning methods to facilitate collaboration rather than individualized coordination, and 3) Mixed-initiative interfaces that enable users, who may vary in ability, to improve collaboration.",0 "To interpret Vision Transformers, post-hoc explanations assign salience scores to input pixels, providing human-understandable heatmaps. However, whether these interpretations reflect true rationales behind the model's output is still underexplored. To address this gap, we study the faithfulness criterion of explanations: the assigned salience scores should represent the influence of the corresponding input pixels on the model's predictions. To evaluate faithfulness, we introduce Salience-guided Faithfulness Coefficient (SaCo), a novel evaluation metric leveraging essential information of salience distribution. Specifically, we conduct pair-wise comparisons among distinct pixel groups and then aggregate the differences in their salience scores, resulting in a coefficient that indicates the explanation's degree of faithfulness. Our explorations reveal that current metrics struggle to differentiate between advanced explanation methods and Random Attribution, thereby failing to capture the faithfulness property. In contrast, our proposed SaCo offers a reliable faithfulness measurement, establishing a robust metric for interpretations. Furthermore, our SaCo demonstrates that the use of gradient and multi-layer aggregation can markedly enhance the faithfulness of attention-based explanation, shedding light on potential paths for advancing Vision Transformer explainability.",0 "Stereotype detection is a challenging and subjective task, as certain statements, such as ""Black people like to play basketball,"" may not appear overtly toxic but still reinforce racial stereotypes. 
With the increasing prevalence of large language models (LLMs) in human-facing artificial intelligence (AI) applications, detecting these types of biases is essential. However, LLMs risk perpetuating and amplifying stereotypical outputs derived from their training data. A reliable stereotype detector is crucial for benchmarking bias, monitoring model input and output, filtering training data, and ensuring fairer model behavior in downstream applications. This paper introduces the Multi-Grain Stereotype (MGS) dataset, consisting of 51,867 instances across gender, race, profession, religion, and other stereotypes, curated from multiple existing datasets. We evaluate various machine learning approaches to establish baselines and fine-tune language models of different architectures and sizes, presenting a suite of stereotype multiclass classifiers trained on the MGS dataset. Given the subjectivity of stereotypes, explainability is essential to align model learning with human understanding of stereotypes. We employ explainable AI (XAI) tools, including SHAP, LIME, and BertViz, to assess whether the model's learned patterns align with human intuitions about stereotypes. Additionally, we develop stereotype elicitation prompts and benchmark the presence of stereotypes in text generation tasks using popular LLMs, employing the best-performing stereotype classifiers.",0 "The potential of Machine Learning Control (MLC) in HVAC systems is hindered by its opaque nature and inference mechanisms, which are challenging for users and modelers to fully comprehend, ultimately leading to a lack of trust in MLC-based decision-making. To address this challenge, this paper investigates and explores Interpretable Machine Learning (IML), a branch of Machine Learning (ML) that enhances transparency and understanding of models and their inferences, to improve the credibility of MLC and its industrial application in HVAC systems. Specifically, we developed an innovative framework that combines the principles of Shapley values and the in-context learning feature of Large Language Models (LLMs). While the Shapley values are instrumental in dissecting the contributions of various features in ML models, the LLM provides an in-depth understanding of the non-data-driven or rule-based elements in MLC; combining them, the LLM further packages these insights into a coherent, human-understandable narrative. The paper presents a case study to demonstrate the feasibility of the developed IML framework for model predictive control-based precooling under demand response events in a virtual testbed. The results indicate that the developed framework generates and explains the control signals in accordance with the rule-based rationale.",0 "False data injection attacks (FDIAs) on smart inverters are a growing concern linked to increased renewable energy production. While data-based FDIA detection methods are also being actively developed, we show that they remain vulnerable to impactful and stealthy adversarial examples that can be crafted using Reinforcement Learning (RL). We propose to include such adversarial examples in the data-based detection training procedure via a continual adversarial RL (CARL) approach. This way, one can pinpoint the deficiencies of data-based detection, thereby offering explainability during their incremental improvement. 
We show that a continual learning implementation is subject to catastrophic forgetting, and additionally show that forgetting can be addressed by employing a joint training strategy on all generated FDIA scenarios.",0 "Collaborative decision-making with artificial intelligence (AI) agents presents opportunities and challenges. While human-AI performance often surpasses that of individuals, the impact of such technology on human behavior remains insufficiently understood, particularly when AI agents can provide justifiable explanations for their suggestions. This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task. Three participant groups were involved: one interacting with a computer, another with a humanoid robot, and a third one without assistance. Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved. With the computer, participants improved their task completion times. At the same time, those interacting with the humanoid robot were more inclined to follow its suggestions, although they did not improve their completion times. Interestingly, participants autonomously performing the learning-by-doing task demonstrated superior knowledge acquisition compared to those assisted by explainable AI (XAI). These findings raise profound questions and have significant implications for automated tutoring and human-AI collaboration.",2 "The continuous development of artificial intelligence (AI) theory has propelled this field to unprecedented heights, owing to the relentless efforts of scholars and researchers. In the medical realm, AI takes a pivotal role, leveraging robust machine learning (ML) algorithms. AI technology in medical imaging aids physicians in X-ray, computed tomography (CT) scans, and magnetic resonance imaging (MRI) diagnoses, conducts pattern recognition and disease prediction based on acoustic data, delivers prognoses on disease types and developmental trends for patients, and employs intelligent health management wearable devices with human-computer interaction technology, to name but a few. While these well-established applications have significantly assisted in medical field diagnoses, clinical decision-making, and management, collaboration between the medical and AI sectors faces an urgent challenge: How to substantiate the reliability of decision-making? The underlying issue stems from the conflict between the demand for accountability and result transparency in medical scenarios and the black-box model traits of AI. This article reviews recent research grounded in explainable artificial intelligence (XAI), with an emphasis on medical practices within the visual, audio, and multimodal perspectives. We endeavour to categorise and synthesise these practices, aiming to provide support and guidance for future researchers and healthcare professionals.",2 "Cardiovascular diseases (CVD) remain a leading health concern and contribute significantly to global mortality rates. While clinical advancements have led to a decline in CVD mortality, accurately identifying individuals who could benefit from preventive interventions remains an unsolved challenge in preventive cardiology. Current CVD risk prediction models, recommended by guidelines, are based on limited traditional risk factors or use CT imaging to acquire quantitative biomarkers, and still have limitations in predictive accuracy and applicability. 
On the other hand, end-to-end trained CVD risk prediction methods leveraging deep learning on CT images often fail to provide transparent and explainable decision grounds for assisting physicians. In this work, we propose a novel joint representation that integrates discrete quantitative biomarkers and continuous deep features extracted from chest CT scans. Our approach starts with a deep CVD risk classification model that captures comprehensive continuous deep-learning features while jointly obtaining clinically established quantitative biomarkers via segmentation models. In the feature joint representation stage, we use an instance-wise feature-gated mechanism to align the continuous and discrete features, followed by a soft instance-wise feature interaction mechanism fostering independent and effective feature interaction for the final CVD risk prediction. Our method substantially improves CVD risk predictive performance and offers individual contribution analysis of each biomarker, which is important in assisting physicians' decision-making processes. We validated our method on a public chest low-dose CT dataset and a private external chest standard-dose CT patient cohort of 17,207 CT volumes from 6,393 unique subjects, and demonstrated superior predictive performance, achieving AUCs of 0.875 and 0.843, respectively.",2 "Artificial intelligence (AI) technologies (re-)shape modern life, driving innovation in a wide range of sectors. However, some AI systems have yielded unexpected or undesirable outcomes or have been used in questionable manners. As a result, there has been a surge in public and academic discussions about aspects that AI systems must fulfill to be considered trustworthy. In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: 1) human agency and oversight, 2) fairness and non-discrimination, 3) transparency and explainability, 4) robustness and accuracy, 5) privacy and security, and 6) accountability. For each one, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. Finally, we conclude this analysis by identifying overarching research challenges across the requirements with respect to 1) interdisciplinary research, 2) conceptual clarity, 3) context-dependency, 4) dynamics in evolving systems, and 5) investigations in real-world contexts. Thus, this paper synthesizes and consolidates a wide-ranging and active discussion currently taking place in various academic sub-communities and public forums. It aims to serve as a reference for a broad audience and as a basis for future research directions.",0 "Causal concept effect estimation is gaining increasing interest in the field of interpretable machine learning. This general approach explains the behaviors of machine learning models by estimating the causal effect of human-understandable concepts, which represent high-level knowledge more comprehensibly than raw inputs like tokens. However, existing causal concept effect explanation methods assume complete observation of all concepts involved within the dataset, which can fail in practice due to incomplete annotations or missing concept data. We theoretically demonstrate that unobserved concepts can bias the estimation of the causal effects of observed concepts. 
To address this limitation, we introduce the Missingness-aware Causal Concept Explainer (MCCE), a novel framework specifically designed to estimate causal concept effects when not all concepts are observable. Our framework learns to account for residual bias resulting from missing concepts and utilizes a linear predictor to model the relationships between these concepts and the outputs of black-box machine learning models. It can offer explanations at both the local and global levels. We conduct validations using a real-world dataset, demonstrating that MCCE achieves promising performance compared to state-of-the-art explanation methods in causal concept effect estimation.",0 "This paper introduces immersive humanitarian visualization as a promising research area in information visualization. Humanitarian visualizations are data visualizations designed to promote human welfare. This paper explains why immersive display technologies taken broadly (e.g., virtual reality, augmented reality, ambient displays and physical representations) open up a range of opportunities for humanitarian visualization. In particular, immersive displays offer ways to make remote and hidden human suffering more salient. They also offer ways to communicate quantitative facts together with qualitative information and visceral experiences, in order to provide a holistic understanding of humanitarian issues that could support more informed humanitarian decisions. But despite some promising preliminary work, immersive humanitarian visualization has not taken off as a research topic yet. The goal of this paper is to encourage, motivate, and inspire future research in this area.",0 "Hyperspectral Image Classification (HSC) presents significant challenges owing to the high dimensionality and intricate nature of Hyperspectral (HS) data. While traditional Machine Learning (TML) approaches have demonstrated effectiveness, they often encounter substantial obstacles in real-world applications, including the variability of optimal feature sets, subjectivity in human-driven design, inherent biases, and methodological limitations. Specifically, TML suffers from the curse of dimensionality, difficulties in feature selection and extraction, insufficient consideration of spatial information, limited robustness against noise, scalability issues, and inadequate adaptability to complex data distributions. In recent years, Deep Learning (DL) techniques have emerged as robust solutions to address these challenges. This survey offers a comprehensive overview of current trends and future prospects in HSC, emphasizing advancements from DL models to the increasing adoption of Transformer and Mamba Model architectures. We systematically review key concepts, methodologies, and state-of-the-art approaches in DL for HSC. Furthermore, we investigate the potential of Transformer-based models and the Mamba Model in HSC, detailing their advantages and challenges. Emerging trends in HSC are explored, including in-depth discussions on Explainable AI and Interoperability concepts, alongside Diffusion Models for image denoising, feature extraction, and image fusion. Comprehensive experiments were conducted on three HS datasets to substantiate the efficacy of various conventional DL models and Transformers. Additionally, we identify several open challenges and pertinent research questions in the field of HSC. 
Finally, we outline future research directions and potential applications aimed at enhancing the accuracy and efficiency of HSC.",2 "Emotion is often described as something people 'feel' in their bodies. Embodied emotion theorists propose that this connection is not purely linguistic; perceiving an emotion may require somatosensory and motor re-experiencing. However, it remains unclear whether self-reports of emotion-related bodily sensations (i.e., 'lump in my throat') are related to neural simulations of bodily action and sensation or whether they can be explained by cognitive appraisals or the visual features of socioemotional signals. To investigate this, participants (N = 21) were shown arousing emotional images that varied in valence, complexity, and content while undergoing fMRI scans. Participants then rated the images on a set of emotion appraisal scales and indicated where, on a body map, they experienced sensation in response to the image. To derive normative models of responses on these scales, a separate, larger online sample (N = 56 - 128) also rated these images. Representational similarity analysis (RSA) was used to compare the emotional content in the body maps with appraisals and visual features. A pairwise distance matrix between the body maps generated for each stimulus was then used in a whole brain voxel-wise searchlight analysis to identify brain regions which reflect the representational geometry of embodied emotion. This analysis revealed a network including bilateral primary somatosensory and motor cortices, precuneus, insula, and medial prefrontal cortex. The results of this study suggest that the relationship between emotion and the body is not purely conceptual: It is supported by sensorimotor cortical activations.",2 "Interpretable Machine Learning faces a recurring challenge of explaining the predictions made by opaque classifiers such as ensemble models, kernel methods, or neural networks in terms that are understandable to humans. When the model is viewed as a black box, the objective is to identify a small set of features that jointly determine the black box response with minimal error. However, finding such model-agnostic explanations is computationally demanding, as the problem is intractable even for binary classifiers. In this paper, the task is framed as a Constraint Optimization Problem, where the constraint solver seeks an explanation of minimum error and bounded size for an input data instance and a set of samples generated by the black box. From a theoretical perspective, this constraint programming approach offers PAC-style guarantees for the output explanation. We evaluate the approach empirically on various datasets and show that it statistically outperforms the state-of-the-art heuristic Anchors method.",0 "Interpretability is the study of explaining models in understandable terms to humans. At present, interpretability is divided into two paradigms: the intrinsic paradigm, which believes that only models designed to be explained can be explained, and the post-hoc paradigm, which believes that black-box models can be explained. At the core of this debate is how each paradigm ensures its explanations are faithful, i.e., true to the model's behavior. This is important, as false but convincing explanations lead to unsupported confidence in artificial intelligence (AI), which can be dangerous. This paper's position is that we should think about new paradigms while staying vigilant regarding faithfulness. 
First, by examining the history of paradigms in science, we see that paradigms are constantly evolving. Then, by examining the current paradigms, we can understand their underlying beliefs, the value they bring, and their limitations. Finally, this paper presents three emerging paradigms for interpretability. The first paradigm designs models such that faithfulness can be easily measured. Another optimizes models such that explanations become faithful. The last paradigm proposes to develop models that produce both a prediction and an explanation.",0 "Traditional decision tree algorithms are explainable but struggle with non-linear, high-dimensional data, limiting their applicability in complex decision-making. Neural networks excel at capturing complex patterns but sacrifice explainability in the process. In this work, we present GPTree, a novel framework combining the explainability of decision trees with the advanced reasoning capabilities of LLMs. GPTree eliminates the need for feature engineering and prompt chaining, requiring only a task-specific prompt and leveraging a tree-based structure to dynamically split samples. We also introduce an expert-in-the-loop feedback mechanism to further enhance performance by enabling human intervention to refine and rebuild decision paths, emphasizing the harmony between human expertise and machine intelligence. Our decision tree achieved a 7.8% precision rate for identifying ""unicorn"" startups at their inception stage, surpassing gpt-4o with few-shot learning as well as the best human decision-makers (3.1% to 5.6%).",0 "In contemporary economic society, credit scores are crucial for every participant. A robust credit evaluation system is essential for the profitability of core businesses such as credit cards, loans, and investments for commercial banks and the financial sector. This paper combines high-performance models like XGBoost and LightGBM, already widely used in modern banking systems, with the powerful TabNet model. We have developed a potent model capable of accurately determining credit score levels by integrating Random Forest, XGBoost, and TabNet, and by applying the stacking technique of ensemble modeling. This approach surpasses the limitations of single models and significantly advances precise credit score prediction. In the following sections, we will explain the techniques we used and thoroughly validate our approach by comprehensively comparing a series of metrics such as Precision, Recall, F1, and AUC. By integrating Random Forest, XGBoost, and the TabNet deep learning architecture, these models complement each other, demonstrating exceptionally strong overall performance.",2 "By dynamic planning, we refer to the ability of the human brain to infer and impose motor trajectories related to cognitive decisions. A recent paradigm, active inference, brings fundamental insights into the adaptation of biological organisms, constantly striving to minimize prediction errors to restrict themselves to life-compatible states. Over the past years, many studies have shown how human and animal behaviors could be explained in terms of active inference - either as discrete decision-making or continuous motor control - inspiring innovative solutions in robotics and artificial intelligence. Still, the literature lacks a comprehensive outlook on effectively planning realistic actions in changing environments. 
Setting ourselves the goal of modeling complex tasks such as tool use, we delve into the topic of dynamic planning in active inference, keeping in mind two crucial aspects of biological behavior: the capacity to understand and exploit affordances for object manipulation, and to learn the hierarchical interactions between the self and the environment, including other agents. We start from a simple unit and gradually describe more advanced structures, comparing recently proposed design choices and providing basic examples. This study distances itself from traditional views centered on neural networks and reinforcement learning, and points toward a yet unexplored direction in active inference: hybrid representations in hierarchical models.",0 "As NLP models become more complex, understanding their decisions becomes more crucial. Counterfactuals (CFs), where minimal changes to inputs flip a model's prediction, offer a way to explain these models. While Large Language Models (LLMs) have shown remarkable performance in NLP tasks, their efficacy in generating high-quality CFs remains uncertain. This work fills this gap by investigating how well LLMs generate CFs for two NLU tasks. We conduct a comprehensive comparison of several common LLMs, and evaluate their CFs, assessing both intrinsic metrics and the impact of these CFs on data augmentation. Moreover, we analyze differences between human and LLM-generated CFs, providing insights for future research directions. Our results show that LLMs generate fluent CFs, but struggle to keep the induced changes minimal. Generating CFs for Sentiment Analysis (SA) is less challenging than for NLI, where LLMs show weaknesses in generating CFs that flip the original label. This also reflects on the data augmentation performance, where we observe a large gap between augmenting with human and LLM-generated CFs. Furthermore, we evaluate LLMs' ability to assess CFs in a mislabelled data setting, and show that they have a strong bias towards agreeing with the provided labels. GPT4 is more robust against this bias and its scores correlate well with automatic metrics. Our findings reveal several limitations and point to potential future work directions.",0 "Data visualizations are inherently rhetorical, and therefore bias-laden visual artifacts that contain both explicit and implicit arguments. The implicit arguments depicted in data visualizations are the net result of many seemingly minor decisions about data and design from inception of a research project through to final publication of the visualization. Data workflow, selected visualization formats, and individual design decisions made within those formats all frame and direct the possible range of interpretation, and the potential for harm of any data visualization. Considering this, it is imperative that we take an ethical approach to the creation and use of data visualizations. Therefore, we have suggested an ethical data visualization workflow with the dual aim of minimizing harm to the subjects of our study and the audiences viewing our visualization, while also maximizing the explanatory capacity and effectiveness of the visualization itself. 
To explain this ethical data visualization workflow, we examine two recent digital mapping projects, Racial Terror Lynchings and Map of White Supremacy Mob Violence.",0 "The emergence of large language models (LLMs) has opened up exciting possibilities for simulating human behavior and cognitive processes, with potential applications in various domains, including marketing research and consumer behavior analysis. However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain due to glaring divergences that suggest fundamentally different underlying processes at play and the sensitivity of LLM responses to prompt variations. This paper presents a novel approach based on Shapley values from cooperative game theory to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output. Through two applications - a discrete choice experiment and an investigation of cognitive biases - we demonstrate how the Shapley value method can uncover what we term ""token noise"" effects, a phenomenon where LLM decisions are disproportionately influenced by tokens providing minimal informative content. This phenomenon raises concerns about the robustness and generalizability of insights obtained from LLMs in the context of human behavior simulation. Our model-agnostic approach extends its utility to proprietary LLMs, providing a valuable tool for practitioners and researchers to strategically optimize prompts and mitigate apparent cognitive biases. Our findings underscore the need for a more nuanced understanding of the factors driving LLM responses before relying on them as substitutes for human subjects in survey settings. We emphasize the importance of researchers reporting results conditioned on specific prompt templates and exercising caution when drawing parallels between human behavior and LLMs.",2 "Design and manufacturing of integrated circuits predominantly use a globally distributed semiconductor supply chain involving diverse entities. The modern semiconductor supply chain has been designed to boost production efficiency, but is filled with major security concerns such as malicious modifications (hardware Trojans), reverse engineering (RE), and cloning. While being deployed, digital systems are also subject to a plethora of threats such as power, timing, and electromagnetic (EM) side channel attacks. Many Design-for-Security (DFS) solutions have been proposed to deal with these vulnerabilities, and such solutions rely on strategic modifications (e.g., logic locking, side channel resilient masking, and dummy logic insertion) of digital designs to ensure a higher level of security. However, most of these DFS strategies lack robust formalism, are often not human-understandable, and require an extensive amount of human expert effort during their development/use. All of these factors make it difficult to keep up with the ever-growing number of microelectronic vulnerabilities. In this work, we propose X-DFS, an explainable Artificial Intelligence (AI) guided DFS solution-space exploration approach that can dramatically cut down the mitigation strategy development/use time while enriching our understanding of the vulnerability by providing human-understandable decision rationale. 
We implement X-DFS and comprehensively evaluate it for reverse engineering threats (SAIL, SWEEP, and OMLA) and formalize a generalized mechanism for applying X-DFS to defend against other threats such as hardware Trojans, fault attacks, and side channel attacks, enabling seamless future extensions.",0 "This work investigates the reproducibility of the paper 'Explaining RL decisions with trajectories'. The original paper introduces a novel approach in explainable reinforcement learning based on the attribution of an agent's decisions to specific clusters of trajectories encountered during training. We verify the main claims from the paper, which state that (i) training on fewer trajectories induces a lower initial state value, (ii) trajectories in a cluster present similar high-level patterns, (iii) distant trajectories influence the decision of an agent, and (iv) humans correctly identify the attributed trajectories to the decision of the agent. We recover the environments used by the authors based on the partial original code they provided for one of the environments (Grid-World), and implement the remaining ones from scratch (Seaquest, HalfCheetah, Breakout and Q*Bert). While we confirm that (i), (ii), and (iii) partially hold, we expand on the largely qualitative experiments from the authors by introducing a quantitative metric to further support (iii), and new experiments and visual results for (i). Moreover, we investigate the use of different clustering algorithms and encoder architectures to further support (ii). We could not support (iv), given the limited extent of the original experiments. We conclude that, while some of the claims can be supported, further investigations and experiments could be of interest. We recognise the novelty of the authors' work and hope that our work paves the way for clearer and more transparent approaches.",0 "Large language models (LLMs) have demonstrated impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence. Despite this, when tasked with several simple questions supported by a generic fact, LLMs often struggle to abstract and apply the generic fact to provide consistent and precise answers, revealing a deficiency in abstract reasoning abilities. This has sparked a vigorous debate about whether LLMs are genuinely reasoning or merely memorizing. In light of this, we design a preliminary study to quantify and delve into the abstract reasoning abilities of existing LLMs. Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performances. To relieve this problem, we tailor an abstract reasoning dataset (AbsR) together with a meaningful learning paradigm to teach LLMs how to leverage generic facts for reasoning purposes. The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides towards their capacity for abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts. The code is available at https://github.com/Waste-Wood/MeanLearn.",0 "This paper introduces the concept of ``generative midtended cognition'', exploring the integration of generative AI with human cognition. The term ""generative"" reflects AI's ability to iteratively produce structured outputs, while ""midtended"" captures the potential hybrid (human-AI) nature of the process. 
It stands between traditional conceptions of intended creation, understood as directed from within, and extended processes that bring exo-biological processes into the creative process. We examine current generative technologies (based on multimodal transformer architectures typical of large language models like ChatGPT) to explain how they can transform human cognitive agency beyond what standard theories of extended cognition can capture. We suggest that the type of cognitive activity typical of the coupling between a human and generative technologies is closer (but not equivalent) to social cognition than to classical extended cognitive paradigms. Yet, it deserves a specific treatment. We provide an explicit definition of generative midtended cognition in which we treat interventions by AI systems as constitutive of the agent's intentional creative processes. Furthermore, we distinguish two dimensions of generative hybrid creativity: 1. Width: captures the contextual sensitivity of the generative process (from a single letter to the whole historical and surrounding data), 2. Depth: captures the granularity of the iteration loops involved in the process. Generative midtended cognition stands in the middle depth between conversational forms of cognition, in which complete utterances or creative units are exchanged, and micro-cognitive (e.g. neural) subpersonal processes. Finally, the paper discusses the potential risks and benefits of widespread generative AI adoption, including the challenges of authenticity, generative power asymmetry, and creative boost or atrophy.",0 "Generative AI has made remarkable strides in revolutionizing fields such as image and video generation. These advancements are driven by innovative algorithms, architecture, and data. However, the rapid proliferation of generative models has highlighted a critical gap: the absence of trustworthy evaluation metrics. Current automatic assessments such as FID, CLIP, FVD, etc., often fail to capture the nuanced quality and user satisfaction associated with generative outputs. This paper proposes an open platform, GenAI-Arena, to evaluate different image and video generative models, where users can actively participate in evaluating these models. By leveraging collective user feedback and votes, GenAI-Arena aims to provide a more democratic and accurate measure of model performance. It covers three tasks: text-to-image generation, text-to-video generation, and image editing. Currently, we cover a total of 35 open-source generative models. GenAI-Arena has been operating for seven months, amassing over 9000 votes from the community. We describe our platform, analyze the data, and explain the statistical methods for ranking the models. To further promote the research in building model-based evaluation metrics, we release a cleaned version of our preference data for the three tasks, namely GenAI-Bench. We prompt the existing multi-modal models like Gemini and GPT-4o to mimic human voting. We compute the accuracy by comparing the model voting with the human voting to understand their judging abilities. Our results show existing multimodal models are still lagging in assessing the generated visual content; even the best model, GPT-4o, only achieves an average accuracy of 49.19 across the three generative tasks. 
Open-source MLLMs perform even worse due to their lack of instruction-following and reasoning ability in complex vision scenarios.",0 "The drive toward automating cellular network operations has grown with the increasing complexity of these systems. Despite advancements, full autonomy currently remains out of reach due to reliance on human intervention for modeling network behaviors and defining policies to meet target requirements. Network Digital Twins (NDTs) have shown promise in enhancing network intelligence, but the successful implementation of this technology is constrained by use case-specific architectures, limiting its role in advancing network autonomy. A more capable network intelligence, or ""telecommunications brain"", is needed to enable seamless, autonomous management of cellular networks. Large Language Models (LLMs) have emerged as potential enablers for this vision but face challenges in network modeling, especially in reasoning and handling diverse data types. To address these gaps, we introduce Hermes, a chain of LLM agents that uses ""blueprints"" for constructing NDT instances through structured and explainable logical steps. Hermes allows automatic, reliable, and accurate network modeling of diverse use cases and configurations, thus marking progress toward fully autonomous network operations.",0 "Learners sharing similar implicit cognitive states often display comparable observable problem-solving performances. Leveraging collaborative connections among such similar learners proves valuable in comprehending human learning. Motivated by the success of collaborative modeling in various domains, such as recommender systems, we aim to investigate how collaborative signals among learners contribute to the diagnosis of human cognitive states (i.e., knowledge proficiency) in the context of intelligent education. The primary challenges lie in identifying implicit collaborative connections and disentangling the entangled cognitive factors of learners for improved explainability and controllability in learner Cognitive Diagnosis (CD). However, there has been no work on CD capable of simultaneously modeling collaborative and disentangled cognitive states. To address this gap, we present Coral, a Collaborative cognitive diagnosis model with disentangled representation learning. Specifically, Coral first introduces a disentangled state encoder to achieve the initial disentanglement of learners' states. Subsequently, a meticulously designed collaborative representation learning procedure captures collaborative signals. It dynamically constructs a collaborative graph of learners by iteratively searching for optimal neighbors in a context-aware manner. Using the constructed graph, collaborative information is extracted through node representation learning. Finally, a decoding process aligns the initial cognitive states and collaborative states, achieving co-disentanglement with practice performance reconstructions. Extensive experiments demonstrate the superior performance of Coral, showcasing significant improvements over state-of-the-art methods across several real-world datasets. Our code is available at https://github.com/bigdata-ustc/Coral.",0 "Classification tasks are typically handled using Machine Learning (ML) models, which lack a balance between accuracy and interpretability. This paper introduces a new approach for classification tasks using Large Language Models (LLMs) in an explainable manner. 
Unlike ML models, which rely heavily on data cleaning and feature engineering, this method streamlines the process using LLMs. This paper proposes a method called ""Language Model Learning (LML)"" powered by a new method called ""Data-Augmented Prediction (DAP)."" The classification is performed by LLMs using a method similar to that used by humans who manually explore and understand the data to decide classifications. In the process of LML, a dataset is summarized and evaluated to determine the features that contribute most to each label. In the DAP process, the system uses the data summary and a row of the testing dataset to automatically generate a query to retrieve relevant rows from the dataset for context-aware classification. LML and DAP unlock new possibilities in areas that require explainable and context-aware decisions by ensuring satisfactory accuracy even with complex data. The system scored an accuracy above 90% in some test cases, confirming the effectiveness and potential of the system to outperform ML models in various scenarios. The source code is available at https://github.com/Pro-GenAI/LML-DAP",0 "We introduce FinDVer, a comprehensive benchmark specifically designed to evaluate the explainable claim verification capabilities of LLMs in the context of understanding and analyzing long, hybrid-content financial documents. FinDVer contains 2,400 expert-annotated examples, divided into three subsets: information extraction, numerical reasoning, and knowledge-intensive reasoning, each addressing common scenarios encountered in real-world financial contexts. We assess a broad spectrum of LLMs under long-context and RAG settings. Our results show that even the current best-performing system, GPT-4o, still lags behind human experts. We further provide in-depth analysis on the long-context and RAG settings, Chain-of-Thought reasoning, and model reasoning errors, offering insights to drive future advancements. We believe that FinDVer can serve as a valuable benchmark for evaluating LLMs in claim verification over complex, expert-domain documents.",0 "Separating disinformation from fact on the web has long challenged both the search and the reasoning powers of humans. We show that the reasoning power of large language models (LLMs) and the retrieval power of modern search engines can be combined to automate this process and explainably verify claims. We integrate LLMs and search under a multi-hop evidence pursuit strategy. This strategy generates an initial question based on an input claim using a sequence-to-sequence model, searches and formulates an answer to the question, and iteratively generates follow-up questions to pursue the evidence that is missing using an LLM. We demonstrate our system on the FEVER 2024 (AVeriTeC) shared task. Compared to a strategy of generating all the questions at once, our method obtains .045 higher label accuracy and .155 higher AVeriTeC score (evaluating the adequacy of the evidence). Through ablations, we show the importance of various design choices, such as the question generation method, medium-sized context, reasoning with one document at a time, adding metadata, paraphrasing, reducing the problem to two classes, and reconsidering the final verdict. Our submitted system achieves .510 AVeriTeC score on the dev set and .477 AVeriTeC score on the test set.",0 "When trains collide with obstacles, the consequences are often severe. To assess how artificial intelligence might contribute to avoiding collisions, we need to understand how train drivers do it. 
What aspects of a situation do they consider when evaluating the risk of collision? In the present study, we assumed that train drivers not only identify potential obstacles but also interpret what they see in order to anticipate how the situation might unfold. However, to date it is unclear how exactly this is accomplished. Therefore, we assessed which cues train drivers use and what inferences they make. To this end, image-based expert interviews were conducted with 33 train drivers. Participants saw images with potential obstacles, rated the risk of collision, and explained their evaluation. Moreover, they were asked how the situation would need to change to decrease or increase collision risk. From their verbal reports, we extracted concepts about the potential obstacles, contexts, or consequences, and assigned these concepts to various categories (e.g., people's identity, location, movement, action, physical features, and mental states). The results revealed that especially for people, train drivers reason about their actions and mental states, and draw relations between concepts to make further inferences. These inferences systematically differ between situations. Our findings emphasise the need to understand train drivers' risk evaluation processes when aiming to enhance the safety of both human and automatic train operation.",2 "Deep reinforcement learning (DRL) is currently the most popular AI-based approach to autonomous vehicle control. An agent, trained for this purpose in simulation, can interact with the real environment with a human-level performance. Despite very good results in terms of selected metrics, this approach has some significant drawbacks: high computational requirements and low explainability. Because of that, a DRL-based agent cannot be used in some control tasks, especially when safety is the key issue. Therefore, we propose to use Tangled Program Graphs (TPGs) as an alternative to deep reinforcement learning in control-related tasks. In this approach, input signals are processed by simple programs that are combined in a graph structure. As a result, TPGs are less computationally demanding and their actions can be explained based on the graph structure. In this paper, we present our studies on the use of TPGs as an alternative to DRL in control-related tasks. In particular, we consider the problem of navigating an unmanned aerial vehicle (UAV) through an unknown environment based solely on the on-board LiDAR sensor. The results of our work show promising prospects for the use of TPGs in control-related tasks.",0 "In this study, we formulate the task of Video Anomaly Detection as a probabilistic analysis of object bounding boxes. We hypothesize that the representation of objects via their bounding boxes alone can be sufficient to successfully identify anomalous events in a scene. The implied value of this approach is increased object anonymization, faster model training and fewer computational resources. This can particularly benefit applications within video surveillance running on edge devices such as cameras. We design our model based on human reasoning which lends itself to explaining model output in human-understandable terms. Meanwhile, the slowest model trains in less than 7 seconds on an 11th Generation Intel Core i9 Processor. 
While our approach constitutes a drastic reduction of the problem feature space in comparison with prior art, we show that this does not result in a reduction in performance: the results we report are highly competitive on the benchmark datasets CUHK Avenue and ShanghaiTech, and significantly exceed the latest state-of-the-art results on StreetScene, which has so far proven to be the most challenging VAD dataset.",0 "Machine learning algorithms have achieved superhuman performance in specific complex domains. However, learning online from few examples and compositional learning for efficient generalization across domains remain elusive. In humans, such learning includes specific declarative memory formation and is closely associated with consciousness. Predictive processing has been advanced as a principled Bayesian framework for understanding the cortex as implementing deep generative models for both sensory perception and action control. However, predictive processing offers little direct insight into fast compositional learning or into the separation between conscious and unconscious contents. Here, we propose that access consciousness arises as a consequence of a particular learning mechanism operating within a predictive processing system. We extend predictive processing by adding online, single-example new structure learning via hierarchical binding of unpredicted inferences. This system learns new causes by quickly connecting together novel combinations of perceptions, which manifests as working memories that can become short- and long-term declarative memories retrievable by associative recall. The contents of such bound representations are unified yet differentiated, can be maintained by selective attention and are globally available. The proposed learning process explains contrast and masking manipulations, postdictive perceptual integration, and other paradigm cases of consciousness research. 'Phenomenal conscious experience' is how the learning system transparently models its own functioning, giving rise to perceptual illusions underlying the meta-problem of consciousness. Our proposal naturally unifies the feature binding, recurrent processing, predictive processing, and global workspace theories of consciousness.",0 "Human writers plan, then write. For large language models (LLMs) to play a role in longer-form article generation, we must understand the planning steps humans make before writing. We explore one kind of planning, source-selection in news, as a case-study for evaluating plans in long-form generation. We ask: why do specific stories call for specific kinds of sources? We imagine a generative process for story writing where a source-selection schema is first selected by a journalist, and then sources are chosen based on categories in that schema. Learning the article's plan means predicting the schema initially chosen by the journalist. Working with professional journalists, we adapt five existing schemata and introduce three new ones to describe journalistic plans for the inclusion of sources in documents. Then, inspired by Bayesian latent-variable modeling, we develop metrics to select the most likely plan, or schema, underlying a story, which we use to compare schemata. We find that two schemata, stance and social affiliation, best explain source plans in most documents. However, other schemata like textual entailment explain source plans in factually rich topics like ""Science"".
Finally, we find we can predict the most suitable schema given just the article's headline with reasonable accuracy. We see this as an important case-study for human planning, one that provides a framework and approach for evaluating other kinds of plans. We release a corpus, NewsSources, with annotations for 4M articles.",0 "Stellar systems - star clusters, galaxies, dark matter haloes, and so on - are ubiquitous characters in the evolutionary tale of our Universe. This tutorial article is an introduction to the collective dynamical evolution of the very large numbers of stars and/or other self-gravitating objects that comprise such systems, i.e. their kinetic theory. We begin by introducing the basic phenomenology of stellar systems, and explaining why and when we must develop a kinetic theory that transcends the traditional two-body relaxation picture of Chandrasekhar. We study the orbits that comprise stellar systems, how those orbits are modified by perturbations, how a system responds self-consistently to fluctuations in its gravitational potential, and how one can predict the long-term fate of a stellar system in various dynamical regimes. Though our treatment is necessarily mathematical, we develop the formalism only to the extent that it facilitates real calculations. We give many examples throughout the text of the equations being applied to topics of major astrophysical importance. Furthermore, in the 1960s and 1970s the kinetic theory of stellar systems was a fledgling subject which developed in tandem with the kinetic theory of plasmas. However, the two fields have long since diverged. Yet once one has become fluent in both Plasmaish and Galacticese, and has a dictionary relating the two, one can pull ideas directly from one field to solve a problem in the other. Therefore, another aim of this tutorial article is to provide our plasma colleagues with a jargon-light understanding of the key properties of stellar systems, to point out the many direct analogies between stellar- and plasma-kinetic calculations, and ultimately to convince them that stellar dynamics and plasma kinetics are, in a deep and beautiful and useful sense, the same thing.",0 "Many cultural institutions have made large digitized visual collections available online, often under permissible re-use licences. Creating interfaces for exploring and searching these collections is difficult, particularly in the absence of granular metadata. In this paper, we introduce a method for using state-of-the-art multimodal large language models (LLMs) to enable an open-ended, explainable search and discovery interface for visual collections. We show how our approach can create novel clustering and recommendation systems that avoid common pitfalls of methods based directly on visual embeddings. Of particular interest is the ability to offer concrete textual explanations of each recommendation without the need to preselect the features of interest. Together, these features can create a digital interface that is more open-ended and flexible while also being better suited to addressing privacy and ethical concerns. Through a case study using a collection of documentary photographs, we provide several metrics showing the efficacy and possibilities of our approach.",0 "Designing an explainable model is now crucial for Natural Language Processing (NLP), since most of the state-of-the-art machine learning models provide only a limited explanation for the prediction.
In the spectrum of explainable models, the Tsetlin Machine (TM) is promising because of its capability of providing word-level explanations using propositional logic. However, concerns arise over the elaborate combination of literals (propositional logic) in a clause, which makes the model difficult for humans to comprehend despite its transparent learning process. In this paper, we design a post-hoc pruning of clauses that eliminates the randomly placed literals in the clause, thereby making the model more efficiently interpretable than the vanilla TM. Experiments on the publicly available YELP-HAT Dataset demonstrate that the proposed pruned TM's attention map aligns more closely with the human attention map than the vanilla TM's attention map. In addition, the pairwise similarity measure also surpasses that of attention map-based neural network models. In terms of accuracy, the proposed pruning method does not degrade accuracy significantly but rather enhances performance by up to 4% to 9% on some test data.",0 "Recent advances in large language models (LLMs) make it potentially feasible to automatically refactor source code with LLMs. However, it remains unclear how well LLMs perform compared to human experts in conducting refactorings automatically and accurately. To fill this gap, in this paper, we conduct an empirical study to investigate the potential of LLMs in automated software refactoring, focusing on the identification of refactoring opportunities and the recommendation of refactoring solutions. We first construct a high-quality refactoring dataset comprising 180 real-world refactorings from 20 projects, and conduct the empirical study on the dataset. With the to-be-refactored Java documents as input, ChatGPT and Gemini identified only 28 and 7, respectively, of the 180 refactoring opportunities. However, explaining the expected refactoring subcategories and narrowing the search space in the prompts substantially increased the success rate of ChatGPT from 15.6% to 86.7%. Concerning the recommendation of refactoring solutions, ChatGPT recommended 176 refactoring solutions for the 180 refactorings, and 63.6% of the recommended solutions were comparable to (even better than) those constructed by human experts. However, 13 out of the 176 solutions suggested by ChatGPT and 9 out of the 137 solutions suggested by Gemini were unsafe in that they either changed the functionality of the source code or introduced syntax errors, indicating the risk of LLM-based refactoring. To this end, we propose a detect-and-reapply tactic, called RefactoringMirror, to avoid such unsafe refactorings. By reapplying the identified refactorings to the original code using thoroughly tested refactoring engines, we can effectively mitigate the risks associated with LLM-based automated refactoring while still leveraging the LLM's intelligence to obtain valuable refactoring recommendations.",0 "The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment.
In this paper, we characterize and challenge the preferentist approach, describing conceptual and technical alternatives that are ripe for further research. We first survey the limits of rational choice theory as a descriptive model, explaining how preferences fail to capture the thick semantic content of human values, and how utility representations neglect the possible incommensurability of those values. We then critique the normativity of expected utility theory (EUT) for humans and AI, drawing upon arguments showing how rational agents need not comply with EUT, while highlighting how EUT is silent on which preferences are normatively acceptable. Finally, we argue that these limitations motivate a reframing of the targets of AI alignment: Instead of alignment with the preferences of a human user, developer, or humanity-writ-large, AI systems should be aligned with normative standards appropriate to their social roles, such as the role of a general-purpose assistant. Furthermore, these standards should be negotiated and agreed upon by all relevant stakeholders. On this alternative conception of alignment, a multiplicity of AI systems will be able to serve diverse ends, aligned with normative standards that promote mutual benefit and limit harm despite our plural and divergent values.",2 "In mission-critical domains such as law enforcement and medical diagnosis, the ability to explain and interpret the outputs of deep learning models is crucial for ensuring user trust and supporting informed decision-making. Despite advancements in explainability, existing methods often fall short in providing explanations that mirror the depth and clarity of those given by human experts. Such expert-level explanations are essential for the dependable application of deep learning models in law enforcement and medical contexts. Additionally, we recognize that most explanations in real-world scenarios are communicated primarily through natural language. Addressing these needs, we propose a novel approach that utilizes characteristic descriptors to explain model decisions by identifying their presence in images, thereby generating expert-like explanations. Our method incorporates a concept bottleneck layer within the model architecture, which calculates the similarity between image and descriptor encodings to deliver inherent and faithful explanations. Through experiments in face recognition and chest X-ray diagnosis, we demonstrate that our approach offers a significant contrast over existing techniques, which are often limited to the use of saliency maps. We believe our approach represents a significant step toward making deep learning systems more accountable, transparent, and trustworthy in the critical domains of face recognition and medical diagnosis.",0 "In theatre, playwrights use the portrayal of characters to explore culturally based gender norms. In this paper, we develop quantitative methods to study gender depiction in the non-religious works (comedias) of Pedro Calder\'on de la Barca, a prolific Spanish 17th century author. We gather insights from a corpus of more than 100 plays by using a gender classifier and applying model explainability (attribution) methods to determine which text features are most influential in the model's decision to classify speech as 'male' or 'female', indicating the most gendered elements of dialogue in Calder\'on's comedias in a human accessible manner. 
We find that female and male characters are portrayed differently and can be identified by the gender prediction model at practically useful accuracies (up to f=0.83). Analysis reveals semantic aspects of gender portrayal, and demonstrates that the model is even useful in providing a relatively accurate scene-by-scene prediction of cross-dressing characters.",2 "This paper offers a comprehensive examination of single-file experiments within the field of pedestrian dynamics, providing a review from both theoretical and analytical perspectives. It begins by tracing the historical context of single-file movement studies in pedestrian dynamics. The significance of understanding the fundamental relationships between density, speed, and flow in pedestrian dynamics is explored through the lens of simple single-file systems. Furthermore, we examine various traffic systems involving human or non-human entities such as ants, mice, bicycles, and cars, and provide insights. We explore the types of experimental setups, data collection methods, and factors that influence pedestrian movement. We also define and explain the common concepts related to single-file movement, particularly in experimental research. Finally, we present a Python tool named ""SingleFileMovementAnalysis"" designed for analyzing single-file experimental data, specifically head trajectories. This tool provides a unified approach for computing movement metrics like speed, density, and headway. The article aims to stimulate further research and underscore the areas where future researchers can contribute to the advancement and improvement of single-file studies.",0 "Recognizing the states of objects in a video is crucial in understanding the scene beyond actions and objects. For instance, an egg can be raw, cracked, and whisked while cooking an omelet, and these states can coexist simultaneously (an egg can be both raw and whisked). However, most existing research assumes a single object state change (e.g., uncracked -> cracked), overlooking the coexisting nature of multiple object states and the influence of past states on the current state. We formulate object state recognition as a multi-label classification task that explicitly handles multiple states. We then propose to learn multiple object states from narrated videos by leveraging large language models (LLMs) to generate pseudo-labels from the transcribed narrations, capturing the influence of past states. The challenge is that narrations mostly describe human actions in the video but rarely explain object states. Therefore, we use the LLMs knowledge of the relationship between actions and states to derive the missing object states. We further accumulate the derived object states to consider past state contexts to infer current object state pseudo-labels. We newly collect a dataset called the Multiple Object States Transition (MOST) dataset, which includes manual multi-label annotation for evaluation purposes, covering 60 object states across six object categories. Experimental results show that our model trained on LLM-generated pseudo-labels significantly outperforms strong vision-language models, demonstrating the effectiveness of our pseudo-labeling framework that considers past context via LLMs.",0 "Subjective speech quality assessment (SSQA) is critical for evaluating speech samples as perceived by human listeners. 
While model-based SSQA has enjoyed great success thanks to the development of deep neural networks (DNNs), generalization remains a key challenge, especially for unseen, out-of-domain data. To benchmark the generalization abilities of SSQA models, we present MOS-Bench, a diverse collection of datasets. In addition, we also introduce SHEET, an open-source toolkit containing complete recipes to conduct SSQA experiments. We provided benchmark results for MOS-Bench, and we also explored multi-dataset training to enhance generalization. Additionally, we proposed a new performance metric, best score difference/ratio, and used latent space visualizations to explain model behavior, offering valuable insights for future research.",0 "In this paper, I introduce a novel decomposition method based on Gaussian mixtures and k-Means clustering, applied to a large Brazilian administrative dataset, to analyze the gender wage gap through the lens of worker-firm interactions shaped by comparative advantage. These interactions generate wage levels in logs that exceed the simple sum of worker and firm components, making them challenging for traditional linear models to capture effectively. I find that these ``complementarity effects'' account for approximately 17% of the gender wage gap. Larger firms, high human capital, STEM degrees, and managerial roles are closely related to it. For instance, among managerial occupations, the match effect goes as high as one-third of the total gap. I also find women are less likely to be employed by firms offering higher returns to both human capital and firm-specific premiums, resulting in a significantly larger firm contribution to the gender wage gap than previously estimated. Combined, these factors explain nearly half of the overall gender wage gap, suggesting the importance of understanding firm-worker matches in addressing gender-based pay disparities.",0 "Most methods in explainable AI (XAI) focus on providing reasons for the prediction of a given set of features. However, we solve an inverse explanation problem, i.e., given the deviation of a label, find the reasons of this deviation. We use a Bayesian framework to recover the ``true'' features, conditioned on the observed label value. We efficiently explain the deviation of a label value from the mode, by identifying and ranking the influential features using the ``distances'' in the ANOVA functional decomposition. We show that the new method is more human-intuitive and robust than methods based on mean values, e.g., SHapley Additive exPlanations (SHAP values). The extra costs of solving a Bayesian inverse problem are dimension-independent.",0 "Is explainability a false promise? This debate has emerged from the insufficient evidence that explanations help people in situations they are introduced for. More human-centered, application-grounded evaluations of explanations are needed to settle this. Yet, with no established guidelines for such studies in NLP, researchers accustomed to standardized proxy evaluations must discover appropriate measurements, tasks, datasets, and sensible models for human-AI teams in their studies. To aid with this, we first review existing metrics suitable for application-grounded evaluation. We then establish criteria to select appropriate datasets, and using them, we find that only 4 out of over 50 datasets available for explainability research in NLP meet them. 
We then demonstrate the importance of reassessing the state of the art to form and study human-AI teams: teaming people with models for certain tasks might only now start to make sense, and for others, it remains unsound. Finally, we present the exemplar studies of human-AI decision-making for one of the identified tasks -- verifying the correctness of a legal claim given a contract. Our results show that providing AI predictions, with or without explanations, does not cause decision makers to speed up their work without compromising performance. We argue for revisiting the setup of human-AI teams and improving automatic deferral of instances to AI, where explanations could play a useful role.",0 "CLIP embeddings have demonstrated remarkable performance across a wide range of multimodal applications. However, these high-dimensional, dense vector representations are not easily interpretable, limiting our understanding of the rich structure of CLIP and its use in downstream applications that require transparency. In this work, we show that the semantic structure of CLIP's latent space can be leveraged to provide interpretability, allowing for the decomposition of representations into semantic concepts. We formulate this problem as one of sparse recovery and propose a novel method, Sparse Linear Concept Embeddings, for transforming CLIP representations into sparse linear combinations of human-interpretable concepts. Distinct from previous work, SpLiCE is task-agnostic and can be used, without training, to explain and even replace traditional dense CLIP representations, maintaining high downstream performance while significantly improving their interpretability. We also demonstrate significant use cases of SpLiCE representations including detecting spurious correlations and model editing.",0 "Einstein's article on the EPR paradox is the most cited of his works, but not many know that it was not fully representative of the way he thought about the incompleteness of the quantum formalism. Indeed, his main worry was not Heisenberg's uncertainty principle, which he accepted, but the experimental non-separability of spatially separate systems. The same problem was also recognized, years later, by one of us, as part of an axiomatic analysis of the quantum formalism, which revealed an unexpected structural limitation of the quantum formalism in Hilbert space, preventing the description of separate systems. As we will explain, this limitation does not manifest at the level of the states, but of the projectors describing the properties, in the sense that there are not enough properties in the formalism to describe separate systems. The question remains whether separability is a possibility at the fundamental level and if a formalism should integrate it into its mathematical structure, as a possibility. To aid our intuition, we offer a reflection based on a powerful analogy between physical systems and human conceptual entities, as the question of separability also arises for the latter.",0 "The effective distribution of user transmit powers is essential for the significant advancements that the emergence of 6G wireless networks brings. In recent studies, Deep Neural Networks (DNNs) have been employed to address this challenge. However, these methods frequently encounter issues regarding fairness and computational inefficiency when making decisions, rendering them unsuitable for future dynamic services that depend heavily on the participation of each individual user. 
To address this gap, this paper focuses on the challenge of transmit power allocation in wireless networks, aiming to optimize $\alpha$-fairness to balance network utilization and user equity. We introduce a novel approach utilizing Kolmogorov-Arnold Networks (KANs), a class of machine learning models that offer low inference costs compared to traditional DNNs alongside superior explainability. The study provides a comprehensive problem formulation, establishing the NP-hardness of the power allocation problem. Then, two algorithms are proposed for dataset generation and decentralized KAN training, offering a flexible framework for achieving various fairness objectives in dynamic 6G environments. Extensive numerical simulations demonstrate the effectiveness of our approach in terms of fairness and inference cost. The results underscore the potential of KANs to overcome the limitations of existing DNN-based methods, particularly in scenarios that demand rapid adaptation and fairness.",0 "Although there have been automated approaches and tools supporting toxicity censorship for social posts, most of them focus on detection. Toxicity censorship is a complex process, wherein detection is just an initial task and a user can have further needs such as rationale understanding and content modification. For this problem, we conduct a needfinding study to investigate people's diverse needs in toxicity censorship and then build a ChatGPT-based censorship tool named DeMod accordingly. DeMod is equipped with the features of explainable Detection and personalized Modification, providing fine-grained detection results, detailed explanations, and personalized modification suggestions. We also implemented the tool and recruited 35 Weibo users for evaluation. The results suggest DeMod has multiple strengths, such as its rich functionality, censorship accuracy, and ease of use. Based on the findings, we further propose several insights into the design of content censorship systems.",0 "Transformer-based models have achieved state-of-the-art performance in various computer vision tasks, including image and video analysis. However, the Transformer's complex architecture and black-box nature pose challenges for explainability, a crucial aspect for real-world applications and scientific inquiry. Current Explainable AI (XAI) methods can only provide one-dimensional feature importance, either spatial or temporal explanations, with significant computational complexity. This paper introduces STAA (Spatio-Temporal Attention Attribution), an XAI method for interpreting video Transformer models. Differing from traditional methods that separately apply image XAI techniques for spatial features or segment contribution analysis for temporal aspects, STAA offers both spatial and temporal information simultaneously from attention values in Transformers. The study utilizes the Kinetics-400 dataset, a benchmark collection of 400 human action classes used for action recognition research. We introduce metrics to quantify explanations. We also apply optimization to enhance STAA's raw output. By implementing dynamic thresholding and attention-focusing mechanisms, we improve the signal-to-noise ratio in our explanations, resulting in more precise visualizations and better evaluation results. In terms of computational overhead, our method requires less than 3\% of the computational resources of traditional XAI methods, making it suitable for real-time video XAI analysis applications.
STAA contributes to the growing field of XAI by offering a method for researchers and practitioners to analyze Transformer models.",0 "We present ConceptFactory, a novel scope to facilitate more efficient annotation of 3D object knowledge by recognizing 3D objects through generalized concepts (i.e. object conceptualization), aiming at promoting machine intelligence to learn comprehensive object knowledge from both vision and robotics aspects. This idea originates from the findings in human cognition research that the perceptual recognition of objects can be explained as a process of arranging generalized geometric components (e.g. cuboids and cylinders). ConceptFactory consists of two critical parts: i) ConceptFactory Suite, a unified toolbox that adopts Standard Concept Template Library (STL-C) to drive a web-based platform for object conceptualization, and ii) ConceptFactory Asset, a large collection of conceptualized objects acquired using the ConceptFactory Suite. Our approach enables researchers to effortlessly acquire or customize extensive varieties of object knowledge to comprehensively study different object understanding tasks. We validate our idea on a wide range of benchmark tasks from both vision and robotics aspects with state-of-the-art algorithms, demonstrating the high quality and versatility of annotations provided by our approach. Our website is available at https://apeirony.github.io/ConceptFactory.",0 "Human Action Recognition (HAR) is an interesting research area in human-computer interaction used to monitor the activities of elderly and disabled individuals affected by physical and mental health conditions. In recent years, skeleton-based HAR has received much attention because skeleton data has been shown to handle changes in striking, body size, camera views, and complex backgrounds. One key characteristic of the spatial-temporal graph convolutional network (ST-GCN) is that it automatically learns spatial and temporal patterns from skeleton sequences. It has some limitations, as this method only captures short-range correlations due to its limited receptive field. Consequently, understanding human action requires long-range interconnection. To address this issue, we developed a spatial-temporal relative transformer (ST-RTR) model. The ST-RTR includes joint and relay nodes, which allow efficient communication and data transmission within the network. These nodes help to break the inherent spatial and temporal skeleton topologies, which enables the model to understand long-range human action better. Furthermore, we combine ST-RTR with a fusion model for further performance improvements. To assess the performance of the ST-RTR method, we conducted experiments on three skeleton-based HAR benchmarks: NTU RGB+D 60, NTU RGB+D 120, and UAV-Human. It boosted CS and CV by 2.11% and 1.45% on NTU RGB+D 60, and by 1.25% and 1.05% on NTU RGB+D 120. On the UAV-Human dataset, accuracy improved by 2.54%. The experimental outcomes show that the proposed ST-RTR model significantly improves action recognition compared with the standard ST-GCN method.",0 "In many industrial applications, it is common that the graph embeddings generated from training GNNs are used in an ensemble model where the embeddings are combined with other tabular features (e.g., original node or edge features) in a downstream ML task. The tabular features may even arise naturally if, e.g., one tries to build a graph such that some of the node or edge features are stored in a tabular format.
Here we address the problem of explaining the output of such ensemble models for which the input features consist of learned neural graph embeddings combined with additional tabular features. We propose MBExplainer, a model-agnostic explanation approach for downstream models with augmented graph embeddings. MBExplainer returns a human-legible triple as an explanation for an instance prediction of the whole pipeline, consisting of three components: a subgraph with the highest importance, the topmost important nodal features, and the topmost important augmented downstream features. A game-theoretic formulation is used to take the contributions of each component and their interactions into account by assigning three Shapley values corresponding to their own specific games. Finding the explanation requires an efficient search through the local search spaces corresponding to each component. MBExplainer applies a novel multilevel search algorithm that enables simultaneous pruning of local search spaces in a computationally tractable way. In particular, three interwoven Monte Carlo Tree Searches are utilized to iteratively prune the local search spaces. MBExplainer also includes a global search algorithm that uses contextual bandits to efficiently allocate the pruning budget among the local search spaces. We show the effectiveness of MBExplainer by presenting a set of comprehensive numerical examples on multiple public graph datasets for both node and graph classification tasks.",0 "We introduce and study a physically motivated problem that exhibits interesting and perhaps unexpected mathematical features. A cellular flow is a two-dimensional Hamiltonian flow of the Hamiltonian $H(x, y) = \cos(x) \cos(y)$. We study a simple model of the dynamics of an inertial particle carried by such a flow, subject to viscous drag and to an additional constant external force $(b, a)$. In the limiting case of zero-inertia particles the dynamics is Hamiltonian with $H(x, y) = \cos(x) \cos(y) - ax + by$. For small but nonzero $a, \ b$ there appear ``channels'' of trajectories that wind their way to infinity, of small relative measure, while most trajectories remain periodic. By contrast, for nonzero inertia, no matter how small, almost all particle trajectories drift to infinity. Moreover, the asymptotic direction of this drift no longer coincides with the direction of forcing, and instead becomes a Cantor-like function of the forcing direction $a/b$, with an unexpected feature: the plateaus of this function occupy a set of full measure. Furthermore, the complement to this set has zero Hausdorff dimension. In a two-parameter representation (one parameter being the forcing direction $a/b$, the other the drag coefficient), this gives rise to Arnold tongues, the tongues corresponding to rational slopes of drift. However, unlike Arnold's example, the complement to the union of all tongues has zero measure. This is explained by the behavior of the rotation number for monotone families of circle maps with flat spots.",0 "Symbolic integration is a fundamental problem in mathematics: we consider how machine learning may be used to optimise this task in a Computer Algebra System (CAS). We train transformers that predict whether a particular integration method will be successful, and compare against the existing human-made heuristics (called guards) that perform this task in a leading CAS. We find the transformer can outperform these guards, gaining up to 30% in accuracy and 70% in precision.
We further show that the inference time of the transformer is inconsequential, making it well-suited for inclusion as a guard in a CAS. Furthermore, we use Layer Integrated Gradients to interpret the decisions that the transformer is making. If guided by a subject-matter expert, the technique can explain some of the predictions based on the input tokens, which can lead to further optimisations.",0 "Graph Neural Networks (GNNs) can capture the geometric properties of neural representations in EEG data. Here we utilise them to study how reinforcement-based motor learning affects neural activity patterns during motor planning, leveraging the inherent graph structure of EEG channels to capture the spatial relationships in brain activity. By exploiting task-specific symmetries, we define different pretraining strategies that not only improve model performance across all participant groups but also validate the robustness of the geometric representations. Explainability analysis based on the graph structures reveals consistent group-specific neural signatures that persist across pretraining conditions, suggesting stable geometric structures in the neural representations associated with motor learning and feedback processing. These geometric patterns exhibit partial invariance to certain task space transformations, indicating symmetries that enable generalisation across conditions while maintaining specificity to individual learning strategies. This work demonstrates how GNNs can uncover the effects of previous outcomes on motor planning, in a complex real-world task, providing insights into the geometric principles governing neural representations. Our experimental design bridges the gap between controlled experiments and ecologically valid scenarios, offering new insights into the organisation of neural representations during naturalistic motor learning, which may open avenues for exploring fundamental principles governing brain activity in complex tasks.",2 "Ejection fraction (EF) of the left ventricle (LV) is considered one of the most important measurements for diagnosing acute heart failure and can be estimated during cardiac ultrasound acquisition. While recent deep learning approaches successfully estimate EF values, the proposed models often lack an explanation for the prediction. However, providing clear and intuitive explanations for clinical measurement predictions would increase the trust of cardiologists in these models. In this paper, we explore predicting EF measurements with Natural Language Explanation (NLE). We propose a model that, in a single forward pass, combines estimation of the LV contour over multiple frames with a set of modules and routines for computing various motion and shape attributes that are associated with ejection fraction. It then feeds the attributes into a large language model to generate text that helps to explain the network's outcome in a human-like manner. We provide experimental evaluation of our explanatory output, as well as EF prediction, and show that our model can provide EF estimates comparable to the state of the art, together with meaningful and accurate natural language explanations of the prediction. The project page can be found at https://github.com/guybenyosef/EchoNarrator.",0 "Enormous progress has been made in the last 20 years since the publication of our review \cite{csk05polrev} in this journal on transport and traffic phenomena in biology.
In this brief article we present a glimpse of the major advances during this period. First, we present similarities and differences between the collective intracellular transport of a single micron-size cargo by multiple molecular motors and that of a cargo particle by a team of ants, on the basis of the common principle of load-sharing. Second, we sketch several models, all of which are biologically motivated extensions of the Asymmetric Simple Exclusion Process (ASEP); some of these models represent the traffic of molecular machines, like RNA polymerase (RNAP) and the ribosome, that catalyze template-directed polymerization of RNA and proteins, respectively, whereas a few other models capture the key features of the traffic of ants on trails. More specifically, using the ASEP-based models we demonstrate the effects of traffic of RNAPs and ribosomes on random and `programmed' errors in gene expression as well as on some other subcellular processes. We recall a puzzling empirical result on the single-lane traffic of predatory ants {\it Leptogenys processionalis} as well as recent attempts to account for this puzzle. We also mention some surprising effects of lane-changing rules observed in an ASEP-based model for 3-lane traffic of army ants. Finally, we explain the conceptual similarities between the pheromone-mediated indirect communication, called stigmergy, between ants on a trail and the floor-field-mediated interaction between humans in pedestrian traffic. For the floor-field model of human pedestrian traffic we present a major theoretical result that is relevant from the perspective of all types of traffic phenomena.",0 "When we experience a visual stimulus as beautiful, how much of that experience derives from perceptual computations we cannot describe versus conceptual knowledge we can readily translate into natural language? Disentangling perception from language in visually-evoked affective and aesthetic experiences through behavioral paradigms or neuroimaging is often empirically intractable. Here, we circumnavigate this challenge by using linear decoding over the learned representations of unimodal vision, unimodal language, and multimodal (language-aligned) deep neural network (DNN) models to predict human beauty ratings of naturalistic images. We show that unimodal vision models (e.g. SimCLR) account for the vast majority of explainable variance in these ratings. Language-aligned vision models (e.g. SLIP) yield small gains relative to unimodal vision. Unimodal language models (e.g. GPT2) conditioned on visual embeddings to generate captions (via CLIPCap) yield no further gains. Caption embeddings alone yield less accurate predictions than image and caption embeddings combined (concatenated). Taken together, these results suggest that whatever words we may eventually find to describe our experience of beauty, the ineffable computations of feedforward perception may provide a sufficient foundation for that experience.",0 "The success of machine learning models relies heavily on effectively representing high-dimensional data. However, ensuring data representations capture human-understandable concepts remains difficult, often requiring the incorporation of prior knowledge and decomposition of data into multiple subspaces. Traditional linear methods fall short in modeling more than one space, while more expressive deep learning approaches lack interpretability.
Here, we introduce Supervised Independent Subspace Principal Component Analysis ($\texttt{sisPCA}$), a PCA extension designed for multi-subspace learning. Leveraging the Hilbert-Schmidt Independence Criterion (HSIC), $\texttt{sisPCA}$ incorporates supervision and simultaneously ensures subspace disentanglement. We demonstrate $\texttt{sisPCA}$'s connections with autoencoders and regularized linear regression and showcase its ability to identify and separate hidden data structures through extensive applications, including breast cancer diagnosis from image features, learning aging-associated DNA methylation changes, and single-cell analysis of malaria infection. Our results reveal distinct functional pathways associated with malaria colonization, underscoring the essential role of explainable representations in high-dimensional data analysis.",0 "Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary plausibility, which makes them unsatisfactory as explainable and engineerable tools in real-world studies. To address these issues, we present MutaPLM, a unified framework for interpreting and navigating protein mutations with protein language models. MutaPLM introduces a protein delta network that captures explicit protein mutation representations within a unified feature space, and a transfer learning pipeline with a chain-of-thought (CoT) strategy to harvest protein mutation knowledge from biomedical texts. We also construct MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations, which provides cross-modal supervision signals. Through comprehensive experiments, we demonstrate that MutaPLM excels at providing human-understandable explanations for mutational effects and prioritizing novel mutations with desirable properties. Our code, model, and data are open-sourced at https://github.com/PharMolix/MutaPLM.",0 "Monte Carlo tree search (MCTS) is one of the most capable online search algorithms for sequential planning tasks, with significant applications in areas such as resource allocation and transit planning. Despite its strong performance in real-world deployment, the inherent complexity of MCTS makes it challenging for users without a technical background to understand. This paper considers the use of MCTS in transportation routing services, where the algorithm is integrated to develop optimized route plans. These plans are required to meet a range of constraints and requirements simultaneously, further complicating the task of explaining the algorithm's operation in real-world contexts. To address this critical research gap, we introduce a novel computation tree logic-based explainer for MCTS. Our framework begins by taking user-defined requirements and translating them into rigorous logic specifications through the use of language templates. Then, our explainer incorporates a logic verification and quantitative evaluation module that validates the states and actions traversed by the MCTS algorithm. The outcomes of this analysis are then rendered into human-readable descriptive text using a second set of language templates. The user satisfaction of our approach was assessed through a survey with 82 participants.
The results indicated that our explanatory approach significantly outperforms other baselines in user preference.",2 "The widespread and diverse online media platforms and other internet-driven communication technologies have presented significant challenges in defining the boundaries of freedom of expression. Consequently, the internet has been transformed into a potential cyber weapon. Within this evolving landscape, two particularly hazardous phenomena have emerged: fake news and doxxing. Although these threats have been subjects of extensive scholarly analysis, the crossroads where they intersect remain unexplored. This research addresses this convergence by introducing a novel system. The Fake News and Doxxing Detection with Explainable Artificial Intelligence (FNDEX) system leverages the capabilities of three distinct transformer models to achieve high-performance detection for both fake news and doxxing. To enhance data security, a rigorous three-step anonymization process is employed, rooted in a pattern-based approach for anonymizing personally identifiable information. Finally, this research emphasizes the importance of generating coherent explanations for the outcomes produced by both detection models. Our experiments on realistic datasets demonstrate that our system significantly outperforms the existing baselines",0 "Despite their high predictive accuracies, current machine learning systems often exhibit systematic biases stemming from annotation artifacts or insufficient support for certain classes in the dataset. Recent work proposes automatic methods for identifying and explaining systematic biases using keywords. We introduce DISCERN, a framework for interpreting systematic biases in text classifiers using language explanations. DISCERN iteratively generates precise natural language descriptions of systematic errors by employing an interactive loop between two large language models. Finally, we use the descriptions to improve classifiers by augmenting classifier training sets with synthetically generated instances or annotated examples via active learning. On three text-classification datasets, we demonstrate that language explanations from our framework induce consistent performance improvements that go beyond what is achievable with exemplars of systematic bias. Finally, in human evaluations, we show that users can interpret systematic biases more effectively (by over 25% relative) and efficiently when described through language explanations as opposed to cluster exemplars.",0 "The European Union's Artificial Intelligence Act (AI Act) introduces comprehensive guidelines for the development and oversight of Artificial Intelligence (AI) and Machine Learning (ML) systems, with significant implications for Graph Neural Networks (GNNs). This paper addresses the unique challenges posed by the AI Act for GNNs, which operate on complex graph-structured data. The legislation's requirements for data management, data governance, robustness, human oversight, and privacy necessitate tailored strategies for GNNs. Our study explores the impact of these requirements on GNN training and proposes methods to ensure compliance. We provide an in-depth analysis of bias, robustness, explainability, and privacy in the context of GNNs, highlighting the need for fair sampling strategies and effective interpretability techniques. 
Our contributions fill the research gap by offering specific guidance for GNNs under the new legislative framework and identifying open questions and future research directions.",0 "Audits contribute to the trustworthiness of Learning Analytics (LA) systems that integrate Artificial Intelligence (AI) and may be legally required in the future. We argue that the efficacy of an audit depends on the auditability of the audited system. Therefore, systems need to be designed with auditability in mind. We present a framework for assessing the auditability of AI-integrating systems that consists of three parts: (1) verifiable claims about the validity, utility and ethics of the system, (2) evidence on subjects (data, models or the system) in different types (documentation, raw sources and logs) to back or refute claims, and (3) evidence that is accessible to auditors via technical means (APIs, monitoring tools, explainable AI, etc.). We apply the framework to assess the auditability of Moodle's dropout prediction system and a prototype AI-based LA. We find that Moodle's auditability is limited by incomplete documentation, insufficient monitoring capabilities and a lack of available test data. The framework supports assessing the auditability of AI-based LA systems in use and improves the design of auditable systems and thus of audits.",0 "With Deep Reinforcement Learning (DRL) being increasingly considered for the control of real-world systems, the lack of transparency of the neural network at the core of RL becomes a concern. Programmatic Reinforcement Learning (PRL) is able to create representations of this black box in the form of source code, not only increasing the explainability of the controller but also allowing for user adaptations. However, these methods focus on distilling a black-box policy into a program, and do so after learning using the Mean Squared Error between produced and wanted behaviour, discarding other elements of the RL algorithm. The distilled policy may therefore perform significantly worse than the black-box learned policy. In this paper, we propose to directly learn a program as the policy of an RL agent. We build on TD3 and use its critics as the basis of the objective function of a genetic algorithm that synthesises the program. Our approach builds the program during training, as opposed to after the fact. This steers the program to actual high rewards, instead of a simple Mean Squared Error. Also, our approach leverages the TD3 critics to achieve high sample-efficiency, as opposed to pure genetic methods that rely on Monte-Carlo evaluations. Our experiments demonstrate the validity, explainability and sample-efficiency of our approach in a simple gridworld environment.",0 "Color constancy (CC) is one of the important perceptual abilities of the human visual system: despite changes in illumination, the perceived colors of surfaces generally tend to remain constant. Nevertheless, the mechanisms underlying CC have been debated for several decades. A specific type of cell, known as the double opponent cell in the primary visual cortex (V1), is strongly implicated in achieving CC. However, exactly how this cell type functions remains uncertain. In this work, our quantitative analysis of concentric double-opponent cells in V1 revealed their ability to identify gray surfaces within color-biased scenes. These gray surfaces can then be used to estimate the illumination easily.
For the first time, this finding offers a clear functional explanation of the concentric double-opponent receptive fields of this cell type in the visual system. Building on this insight, we introduced a novel computational theory--gray-anchoring (GA) theory--to explain how CC is achieved in the visual system. Specifically, GA-based CC involves detecting and anchoring gray surfaces within complex scenes. Our new theory serves as a bridge among the retinex theory, anchoring theory, and the neural mechanisms underlying visual CC in color vision.",0 "In the current era of social media and generative AI, the ability to automatically assess the credibility of online social media content is of tremendous importance. Credibility assessment is fundamentally based on aggregating credibility signals, which refer to small units of information, such as content factuality, bias, or the presence of persuasion techniques, into an overall credibility score. Credibility signals provide more granular, more easily explainable and more widely utilizable information, in contrast to currently predominant fake news detection, which utilizes various (mostly latent) features. A growing body of research on automatic credibility assessment and detection of credibility signals can be characterized as highly fragmented and lacking mutual interconnections. This issue is even more prominent due to the lack of an up-to-date overview of research works on automatic credibility assessment. In this survey, we provide such a systematic and comprehensive literature review of 175 research papers, focusing on textual credibility signals and Natural Language Processing (NLP), which is undergoing significant advancement due to Large Language Models (LLMs). While positioning the NLP research into the context of other multidisciplinary research works, we tackle approaches for credibility assessment as well as 9 categories of credibility signals (we provide a thorough analysis for 3 of them, namely: 1) factuality, subjectivity and bias, 2) persuasion techniques and logical fallacies, and 3) claims and veracity). Following the description of the existing methods, datasets and tools, we identify future challenges and opportunities, while paying specific attention to the recent rapid development of generative AI.",2 "With the advances in AI research, AI has been increasingly adopted in numerous domains, ranging from low-stakes daily tasks such as movie recommendations to high-stakes tasks such as medicine and criminal justice decision-making. Explainability is becoming an essential requirement for people to understand, trust and adopt AI applications. Despite a vast collection of explainable AI (XAI) algorithms produced by the AI research community, successful examples of XAI are still relatively scarce in real-world AI applications. This can be due to the gap between what XAI is designed for and how it is actually perceived by end-users. As explainability is an inherently human-centered property, in recent years, the XAI field is starting to embrace human-centered approaches and is increasingly realizing the importance of empirical studies of XAI design involving human subjects. To move a step towards a systematic review of empirical studies for human-centered XAI design, in this survey, we first briefly review the technical landscape of commonly used XAI algorithms in existing empirical studies. Then we analyze the diverse stakeholders and needs-finding approaches.
Next, we provide an overview of the design space explored in the current human-centered XAI design. Further, we summarize the evaluation metrics based on evaluation goals. Afterward, we analyze the common findings and pitfalls derived from existing studies. For each chapter, we provide a summary of current challenges and research opportunities. Finally, we conclude the survey with a framework for human-centered XAI design with empirical studies.",2 "Artificial Intelligence (AI) techniques, particularly machine learning techniques, are rapidly transforming tactical operations by augmenting human decision-making capabilities. This paper explores AI-driven Human-Autonomy Teaming (HAT) as a transformative approach, focusing on how it empowers human decision-making in complex environments. While trust and explainability continue to pose significant challenges, our exploration focuses on the potential of AI-driven HAT to transform tactical operations. By improving situational awareness and supporting more informed decision-making, AI-driven HAT can enhance the effectiveness and safety of such operations. To this end, we propose a comprehensive framework that addresses the key components of AI-driven HAT, including trust and transparency, optimal function allocation between humans and AI, situational awareness, and ethical considerations. The proposed framework can serve as a foundation for future research and development in the field. By identifying and discussing critical research challenges and knowledge gaps in this framework, our work aims to guide the advancement of AI-driven HAT for optimizing tactical operations. We emphasize the importance of developing scalable and ethical AI-driven HAT systems that ensure seamless human-machine collaboration, prioritize ethical considerations, enhance model transparency through Explainable AI (XAI) techniques, and effectively manage the cognitive load of human operators.",0 "Node representations, or embeddings, are low-dimensional vectors that capture node properties, typically learned through unsupervised structural similarity objectives or supervised tasks. While recent efforts have focused on explaining graph model decisions, the interpretability of unsupervised node embeddings remains underexplored. To bridge this gap, we introduce DiSeNE (Disentangled and Self-Explainable Node Embedding), a framework that generates self-explainable embeddings in an unsupervised manner. Our method employs disentangled representation learning to produce dimension-wise interpretable embeddings, where each dimension is aligned with distinct topological structure of the graph. We formalize novel desiderata for disentangled and interpretable embeddings, which drive our new objective functions, optimizing simultaneously for both interpretability and disentanglement. Additionally, we propose several new metrics to evaluate representation quality and human interpretability. Extensive experiments across multiple benchmark datasets demonstrate the effectiveness of our approach.",0 "Every AI system that makes decisions about people has a group of stakeholders that are personally affected by these decisions. However, explanations of AI systems rarely address the information needs of this stakeholder group, who often are AI novices. This creates a gap between conveyed information and information that matters to those who are impacted by the system's decisions, such as domain experts and decision subjects. 
To address this, we present the ""XAI Novice Question Bank,"" an extension of the XAI Question Bank containing a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring. The catalog covers the categories of data, system context, system usage, and system specifications. We gathered information needs through task-based interviews where participants asked questions about two AI systems to decide on their adoption and received verbal explanations in response. Our analysis showed that participants' confidence increased after receiving explanations but that their understanding faced challenges. These included difficulties in locating information and in assessing their own understanding, as well as attempts to outsource understanding. Additionally, participants' prior perceptions of the systems' risks and benefits influenced their information needs. Participants who perceived high risks sought explanations about the intentions behind a system's deployment, while those who perceived low risks asked instead about the system's operation. Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges. We summarize our findings as five key implications that can inform the design of future explanations for lay stakeholder audiences.",2 "Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs). However, the internal mechanism of these artifacts remains unclear, posing an obstacle to further understanding of these models. This paper focuses on conducting a series of analytical experiments to examine the relations between the multi-head self-attention and the final MRC system performance, revealing the potential explainability in PLM-based MRC models. To ensure the robustness of the analyses, we perform our experiments in a multilingual way on top of various PLMs. We discover that passage-to-question and passage understanding attentions are the most important ones in the question answering process, showing stronger correlations with the final performance than other parts. Through comprehensive visualizations and case studies, we also observe several general findings on the attention maps, which can be helpful for understanding how these models solve the questions.",0 "Owing to the impressive general intelligence of large language models (LLMs), there has been a growing trend to integrate them into recommender systems to gain a more profound insight into human interests and intentions. Existing LLM-based recommender systems primarily leverage item attributes and user interaction histories in textual format, improving single tasks like rating prediction or explainable recommendation. Nevertheless, these approaches overlook the crucial contribution of traditional collaborative signals in discerning users' profound intentions and disregard the interrelatedness among tasks. To address these limitations, we introduce a novel framework known as CKF, specifically developed to boost multi-task recommendations via personalized collaborative knowledge fusion into LLMs. Specifically, our method synergizes traditional collaborative filtering models to produce collaborative embeddings, subsequently employing a meta-network to construct personalized mapping bridges tailored for each user. 
Once mapped, the embeddings are incorporated into meticulously designed prompt templates and then fed into an advanced LLM to represent user interests. To investigate the intrinsic relationship among diverse recommendation tasks, we develop Multi-Lora, a new parameter-efficient approach for multi-task optimization, adept at distinctly segregating task-shared and task-specific information. This method forges a connection between LLMs and recommendation scenarios, while simultaneously enriching the supervisory signal through mutual knowledge transfer among various tasks. Extensive experiments and in-depth robustness analyses across four common recommendation tasks on four large public data sets substantiate the effectiveness and superiority of our framework.",0 "As the demand for interpretable machine learning approaches continues to grow, there is an increasing necessity for human involvement in providing informative explanations for model decisions. This is necessary for building trust and transparency in AI-based systems, leading to the emergence of the Explainable Artificial Intelligence (XAI) field. Recently, a novel counterfactual explanation model, CELS, has been introduced. CELS learns a saliency map for the instance of interest and generates a counterfactual explanation guided by the learned saliency map. While CELS represents the first attempt to exploit learned saliency maps not only to provide intuitive explanations for the reason behind the decision made by the time series classifier but also to explore post hoc counterfactual explanations, it exhibits limitations in validity for the sake of ensuring high proximity and sparsity. In this paper, we present an enhanced approach that builds upon CELS. While the original model achieved promising results in terms of sparsity and proximity, it faced limitations in validity. Our proposed method addresses this limitation by removing mask normalization to provide more informative and valid counterfactual explanations. Through extensive experimentation on datasets from various domains, we demonstrate that our approach outperforms the CELS model, achieving higher validity and producing more informative explanations.",0 "Speakers sometimes omit certain arguments of a predicate in a sentence; such omission is especially frequent in pro-drop languages. This study addresses a question about ellipsis -- what can explain the native speakers' ellipsis decisions? -- motivated by the interest in human discourse processing and writing assistance for this choice. To this end, we first collect large-scale human annotations of whether and why a particular argument should be omitted across over 2,000 data points in the balanced corpus of Japanese, a prototypical pro-drop language. The data indicate that native speakers overall share common criteria for such judgments and further clarify their quantitative characteristics, e.g., the distribution of related linguistic factors in the balanced corpus. Furthermore, the performance of the language model-based argument ellipsis judgment model is examined, and the gap between the systems' predictions and human judgments in specific linguistic aspects is revealed. We hope our fundamental resource encourages further studies on natural human ellipsis judgment.",0 "Machine Learning (ML) has emerged as a powerful form of data modelling with widespread applicability beyond its roots in the design of autonomous agents. 
However, relatively little attention has been paid to the interaction between people and ML systems. In this paper we view interaction between humans and ML systems within the broader context of communication between agents capable of prediction and explanation. We formalise the interaction model by taking agents to be automata with some special characteristics and define a protocol for communication between such agents. We define One- and Two-Way Intelligibility as properties that emerge at run-time by execution of the protocol. The formalisation allows us to identify conditions under which run-time sequences are bounded, and identify conditions under which the protocol can correctly implement an axiomatic specification of intelligible interaction between a human and an ML system. We also demonstrate the use of the formal model to: (a) identify instances of One- and Two-Way Intelligibility in literature reports on humans interacting with ML systems providing logic-based explanations, as is done in Inductive Logic Programming (ILP)",0 "Individual participants in human society collectively exhibit aggregation behavior. In this study, we present a simple microscopic model of labor force migration based on the active Brownian particles framework. In particular, agent-based simulations show that the model produces clusters of agents from a random initial distribution. Furthermore, two empirical regularities called Zipf's and Okun's laws were observed. To reveal the mechanism underlying the reproduced aggregation phenomena, we use our microscopic model to derive an extended Keller--Segel system, which is a classic model describing the aggregation behavior of biological organisms called taxis. The obtained macroscopic system indicates that the concentration of the workforce in the real world can be explained through a new type of taxis central to human behavior, highlighting the relevance of urbanization to blow-up phenomena in the derived PDE system. We then characterize the transition between the aggregation and diffusion regimes both analytically and computationally. The predicted long-term dynamics of urbanization -- originating in the asymmetric natures of employed and unemployed agents -- are compared with global empirical data, particularly in the realms of labor statistics and urban indicators.",2 "Multi-agent cyber-physical systems are present in a variety of applications. Agent decision-making can be affected by errors induced by uncertain, dynamic operating environments or by incorrect actions taken by an agent. When an erroneous decision that leads to a violation of safety is identified, assigning responsibility to individual agents is a key step toward preventing future accidents. Current approaches to carrying out such investigations require human labor or a high degree of familiarity with operating environments. Automated strategies to assign responsibility can achieve a significant reduction in human effort and associated cognitive burden. In this paper, we develop an automated procedure to assign responsibility for safety violations to actions of any single agent in a principled manner. We base our approach on reasoning about safety violations in road safety. Given a safety violation, we use counterfactual reasoning to create alternative scenarios, showing how different outcomes could have occurred if certain actions had been replaced by others. We introduce the degree of responsibility (DoR) metric for each agent. 
The DoR, using the Shapley value, quantifies each agent's contribution to the safety violation, providing a basis to explain and justify decisions. We also develop heuristic techniques and methods based on agent interaction structures to improve scalability as agent numbers grow. We examine three safety violation cases from the National Highway Traffic Safety Administration (NHTSA). We run experiments using the CARLA urban driving simulator. Results show the DoR improves the explainability of decisions and accountability for agent actions and their consequences.",0 "In the human activity of word translation, two languages face each other, mutually searching their own language system for the semantic place of words in the other language. We discover the huge network formed by the chain of these mutual translations as the Word Translation Network, a network in which words are nodes and translation volume is represented as edges, and propose Media of Langue, a novel interface for exploring this network. Media of Langue points to the semantic configurations of many words in multiple languages at once, containing the information of existing dictionaries such as bilingual and synonym dictionaries. We have also implemented and published this interface as a web application, focusing on seven language pairs. This paper first defines the Word Translation Network and describes how to actually construct the network from bilingual corpora, followed by an analysis of the properties of the network. Next, we explain how to design a Media of Langue using the Word Translation Network, and finally, we analyze the features of the Media of Langue as a dictionary. Our website is https://www.media-of-langue.org.",0 "Explainable AI (XAI) aims to support appropriate human-AI reliance by increasing the interpretability of complex model decisions. Despite the proliferation of proposed methods, there is mixed evidence surrounding the effects of different styles of XAI explanations on human-AI reliance. Interpreting these conflicting findings requires an understanding of the individual and combined qualities of different explanation styles that influence appropriate and inappropriate human-AI reliance, and the role of interpretability in this interaction. In this study, we investigate the influences of feature-based, example-based, and combined feature- and example-based XAI methods on human-AI reliance through a two-part experimental study with 274 participants comparing these explanation style conditions. Our findings suggest differences between feature-based and example-based explanation styles beyond interpretability that affect human-AI reliance patterns across differences in individual performance and task complexity. Our work highlights the importance of adapting explanations to their specific users and context over maximising broad interpretability.",2 "We propose a game-theoretic framework to model and optimize user engagement in cooperative activities over social networks. While traditional diffusion models suggest that individuals are only influenced by their neighbors, empirical evidence shows that diffusion alone does not fully explain network evolution, and non-diffusion factors play a significant role in network growth. We model network participation and resource-sharing as strategic games involving boundedly rational players to address this gap between the analytical models and empirical evidence. 
Specifically, we employ Log-Linear Learning (LLL), a version of noisy best response, to capture players' decision-making strategies. By incorporating stochastic decision models like LLL, our framework integrates both diffusion and non-diffusion dynamics into network evolution dynamics. Through equilibrium analysis and simulations, we demonstrate that our model aligns with theoretical predictions from existing analytical frameworks and empirical observations across various initial network configurations. Our second contribution is a novel method for selecting anchor nodes to enhance user participation. This approach allows system designers to identify anchor nodes and compute their incentives in real time under more realistic information requirement constraints compared to existing approaches. The proposed approach adapts to changing network conditions by reallocating resources from less impactful to more influential nodes. Furthermore, the method is resilient to anchor node failures, ensuring sustained and continuous network participation.",0 "Recent advancements in deep learning have significantly improved visual quality inspection and predictive maintenance within industrial settings. However, deploying these technologies on low-resource edge devices poses substantial challenges due to their high computational demands and the inherent complexity of Explainable AI (XAI) methods. This paper addresses these challenges by introducing a novel XAI-integrated Visual Quality Inspection framework that optimizes the deployment of semantic segmentation models on low-resource edge devices. Our framework incorporates XAI and a Large Vision Language Model to deliver human-centered interpretability through visual and textual explanations to end-users. This is crucial for end-user trust and model interpretability. We outline a comprehensive methodology consisting of six fundamental modules: base model fine-tuning, XAI-based explanation generation, evaluation of XAI approaches, XAI-guided data augmentation, development of an edge-compatible model, and the generation of understandable visual and textual explanations. Through XAI-guided data augmentation, the enhanced model incorporating domain expert knowledge with visual and textual explanations is successfully deployed on mobile devices to support end-users in real-world scenarios. Experimental results showcase the effectiveness of the proposed framework, with the mobile model achieving competitive accuracy while significantly reducing model size. This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications, where decisions must be both rapid and justifiable. Our code for this work can be found at https://github.com/Analytics-Everywhere-Lab/vqixai.",0 "The tendency to repeat past choices more often than expected from the history of outcomes has been repeatedly empirically observed in reinforcement learning experiments. It can be explained by at least two computational processes: asymmetric update and (gradual) choice perseveration. A recent meta-analysis showed that both mechanisms are detectable in human reinforcement learning. However, while their descriptive value seems to be well established, they have not been compared regarding their possible adaptive value. In this study, we address this gap by simulating reinforcement learning agents in a variety of environments with a new variant of an evolutionary algorithm. 
Our results show that positivity bias (in the form of asymmetric update) is evolutionarily stable in many situations, while the emergence of gradual perseveration is less systematic and robust. Overall, our results illustrate that biases can be adaptive and selected by evolution, in an environment-specific manner.",0 "While Machine Learning has become crucial for Industry 4.0, its opaque nature hinders trust and impedes the transformation of valuable insights into actionable decisions, a challenge exacerbated in the evolving Industry 5.0 with its human-centric focus. This paper addresses this need by testing the applicability of AcME-AD in industrial settings. This recently developed framework facilitates fast and user-friendly explanations for anomaly detection. AcME-AD is model-agnostic, offering flexibility, and prioritizes real-time efficiency. Thus, it seems suitable for seamless integration with industrial Decision Support Systems. We present the first industrial application of AcME-AD, showcasing its effectiveness through experiments. These tests demonstrate AcME-AD's potential as a valuable tool for explainable AD and feature-based root cause analysis within industrial environments, paving the way for trustworthy and actionable insights in the age of Industry 5.0.",0 "Humans possess a remarkable talent for flexibly alternating between different senses when interacting with the environment. Picture a chef skillfully gauging the timing of ingredient additions and controlling the heat according to the colors, sounds, and aromas, seamlessly navigating through every stage of the complex cooking process. This ability is founded upon a thorough comprehension of task stages, as achieving the sub-goal within each stage can necessitate the utilization of different senses. In order to endow robots with a similar ability, we incorporate the task stages divided by sub-goals into the imitation learning process to accordingly guide dynamic multi-sensory fusion. We propose MS-Bot, a stage-guided dynamic multi-sensory fusion method with coarse-to-fine stage understanding, which dynamically adjusts the priority of modalities based on the fine-grained state within the predicted current stage. We train a robot system equipped with visual, auditory, and tactile sensors to accomplish challenging robotic manipulation tasks: pouring and peg insertion with keyway. Experimental results indicate that our approach enables more effective and explainable dynamic fusion, aligning more closely with the human fusion process than existing methods.",0 "Many trials are designed to collect outcomes at or around pre-specified times after randomization. If there is variability in the times when participants are actually assessed, this can pose a challenge to learning the effect of treatment, since not all participants have outcome assessments at the times of interest. Furthermore, observed outcome values may not be representative of all participants' outcomes at a given time. Methods have been developed that account for some types of such irregular and informative assessment times; however, since these methods rely on untestable assumptions, sensitivity analyses are needed. We develop a methodology that is benchmarked at the explainable assessment (EA) assumption, under which assessment and outcomes at each time are related only through data collected prior to that time. Our method uses an exponential tilting assumption, governed by a sensitivity analysis parameter, that posits deviations from the EA assumption. 
Our inferential strategy is based on a new influence function-based, augmented inverse intensity-weighted estimator. Our approach allows for flexible semiparametric modeling of the observed data, which is separated from specification of the sensitivity parameter. We apply our method to a randomized trial of low-income individuals with uncontrolled asthma, and we illustrate implementation of our estimation procedure in detail.",2 "Tree ensemble models like random forests and gradient boosting machines are widely used in machine learning due to their excellent predictive performance. However, a high-performance ensemble consisting of a large number of decision trees lacks sufficient transparency and explainability. In this paper, we demonstrate that when shallow decision trees are used as base learners, the ensemble learning algorithms can not only become inherently interpretable, admitting an equivalent representation as generalized additive models, but also sometimes lead to better generalization performance. First, an interpretation algorithm is developed that converts the tree ensemble into the functional ANOVA representation with inherent interpretability. Second, two strategies are proposed to further enhance the model interpretability, i.e., by adding constraints in the model training stage and post-hoc effect pruning. Experiments on simulations and real-world datasets show that our proposed methods offer a better trade-off between model interpretation and predictive performance, compared with their counterpart benchmarks.",0 "Biases and errors in human-labeled data present significant challenges for machine learning, especially in supervised learning reliant on potentially flawed ground truth data. These flaws, including diagnostic errors and societal biases, risk being propagated and amplified through models trained using maximum likelihood estimation. We present the Reflective LLM Dialogue Framework RLDF, which leverages structured adversarial dialogues between multiple instances of a single LLM or different LLMs to uncover diverse perspectives and correct inconsistencies. By conditioning LLMs to adopt opposing stances, RLDF enables systematic bias detection through conditional statistics, information theory, and divergence metrics. Experiments show RLDF successfully identifies potential biases in public content while exposing limitations in human-labeled data. Our framework supports measurable progress tracking and explainable remediation actions, offering a scalable approach for improving content neutrality through transparent, multi-perspective analysis.",0 "Research into the external behaviors and internal mechanisms of large language models (LLMs) has shown promise in addressing complex tasks in the physical world. Studies suggest that powerful LLMs, like GPT-4, are beginning to exhibit human-like cognitive abilities, including planning, reasoning, and reflection. In this paper, we introduce a research line and methodology called LLM Psychology, leveraging human psychology experiments to investigate the cognitive behaviors and mechanisms of LLMs. We migrate the Typoglycemia phenomenon from psychology to explore the ""mind"" of LLMs. Unlike human brains, which rely on context and word patterns to comprehend scrambled text, LLMs use distinct encoding and decoding processes. 
Through Typoglycemia experiments at the character, word, and sentence levels, we observe: (I) LLMs demonstrate human-like behaviors on a macro scale, such as lower task accuracy and higher token/time consumption; (II) LLMs exhibit varying robustness to scrambled input, making Typoglycemia a benchmark for model evaluation without new datasets; (III) Different task types have varying impacts, with complex logical tasks (e.g., math) being more challenging in scrambled form; (IV) Each LLM has a unique and consistent ""cognitive pattern"" across tasks, revealing general mechanisms in its psychology process. We provide an in-depth analysis of hidden layers to explain these phenomena, paving the way for future research in LLM Psychology and deeper interpretability.",0 "Time perception is crucial for a coherent human experience. As life progresses, our perception of the passage of time becomes increasingly non-uniform, often feeling as though it accelerates with age. While various causes for this phenomenon have been theorized, a comprehensive mathematical and theoretical framework remains underexplored. This study aims to elucidate the mechanisms behind perceived time dilation by integrating classical and revised psychophysical theorems with a novel mathematical approach. Utilizing Weber-Fechner laws as foundational elements, we develop a model that transitions from exponential to logarithmic functions to represent changes in time perception across the human lifespan. Our results indicate that the perception of time shifts significantly around the age of mental maturity, aligning with a proposed inversion point where sensitivity to temporal stimuli decreases, eventually plateauing out at a constant rate. This model not only explains the underlying causes of time perception changes but also provides analytical values to quantify this acceleration. These findings offer valuable insights into the cognitive and neurological processes influencing how we experience time as we go through life.",0 "In the absence of any new physics signals at the Large Hadron Collider (LHC), anomalous results at low energy experiments have become the subject of increased attention. We focus on three such results from the LSND, MiniBooNE (MB), and ATOMKI experiments. A 17 MeV pseudoscalar mediator ($a'$) can account for two ($^8$Be and $^4$He) out of the three cases in which excess events have been seen in pair creation transitions in ATOMKI. We incorporate this mediator in a gauge invariant extension of the Standard Model (SM) with a second Higgs doublet and three singlet (seesaw) neutrinos ($N_i, i=1,2,3$). $N_{1,2}$ participate in an interaction in MB and LSND which, with $a'$ as mediator, leads to the production of $e^+ e^-$ pairs. The $N_i$ also lead to mass-squared differences for SM neutrinos in agreement with global oscillation data. We first show that such a model offers a natural joint solution to the MB and LSND excesses, providing excellent fits to their data. Next, using the values of the couplings to the quarks and electrons which are required to explain pair creation nuclear transition data for $^8$Be and $^4$He in ATOMKI, we show that these values still lead to fits for MB and LSND data. However, once ATOMKI is incorporated, we find that strong constraints from the decays $K^+ \rightarrow \pi^+ a' \, (a'\rightarrow e^+e^-)$ and $\pi^+ \rightarrow $ $ e^+ ~\nu_e ~e^+ e^- $ come into play. 
While our solution is in conformity with the bounds on the former decay, it remains in tension with $90\%$ CL bounds on the latter. We also discuss other constraints from both collider and non-collider experiments and from electroweak precision data, stability and unitarity. We compute the contributions to the electron and muon $g-2$ up to two loops for our model. We discuss tests of the model in upcoming experiments.",0 "In this paper, we propose a model for building natural language explanations for Bayesian Network Reasoning in terms of factor arguments, which are argumentation graphs of flowing evidence, relating the observed evidence to a target variable we want to learn about. We introduce the notion of factor argument independence to address the outstanding question of defining when arguments should be presented jointly or separately and present an algorithm that, starting from the evidence nodes and a target node, produces a list of all independent factor arguments ordered by their strength. Finally, we implemented a scheme to build natural language explanations of Bayesian Reasoning using this approach. Our proposal has been validated in the medical domain through a human-driven evaluation study where we compare the Bayesian Network Reasoning explanations obtained using factor arguments with an alternative explanation method. Evaluation results indicate that our proposed explanation approach is deemed by users as significantly more useful for understanding Bayesian Network Reasoning than another existing explanation method it is compared to.",0 "Cognitive architectures are influential, integrated computational frameworks for modeling cognitive processes. Due to a variety of factors, however, researchers using cognitive architectures to explain and predict human performance rarely employ model validation, comparison, and selection techniques based on likelihood. This paper provides a primer on how to implement maximum likelihood techniques and their derivatives to fit and compare models at the individual and group level, using models implemented in the ACT-R cognitive architecture as examples. The paper covers the most common ways in which likelihood measures can be applied, under different scenarios, for models of different complexity, and provides further technical references for the interested reader. An accompanying notebook in Python provides the code to implement all of the suggestions.",0 "As AI becomes fundamental in sectors like healthcare, explainable AI (XAI) tools are essential for trust and transparency. However, traditional user studies used to evaluate these tools are often costly, time consuming, and difficult to scale. In this paper, we explore the use of Large Language Models (LLMs) to replicate human participants to help streamline XAI evaluation. We reproduce a user study comparing counterfactual and causal explanations, replicating human participants with seven LLMs under various settings. Our results show that (i) LLMs can replicate most conclusions from the original study, (ii) different LLMs yield varying levels of alignment in the results, and (iii) experimental factors such as LLM memory and output variability affect alignment with human responses. 
These initial findings suggest that LLMs could provide a scalable and cost-effective way to simplify qualitative XAI evaluation.",2 "We propose a neuropsychological approach to the explainability of artificial neural networks, which involves using concepts from human cognitive psychology as relevant heuristic references for developing synthetic explanatory frameworks that align with human modes of thought. The analogical concepts mobilized here, which are intended to create such an epistemological bridge, are those of categorization and similarity, as these notions are particularly suited to the categorical ""nature"" of the reconstructive information processing performed by artificial neural networks. Our study aims to reveal a unique process of synthetic cognition, that of the categorical convergence of highly activated tokens. We attempt to explain this process with the idea that the categorical segment created by a neuron is actually the result of a superposition of categorical sub-dimensions within its input vector space.",0 "Humans possess multimodal literacy, allowing them to actively integrate information from various modalities to form reasoning. Faced with challenges like lexical ambiguity in text, we supplement this with other modalities, such as thumbnail images or textbook illustrations. Is it possible for machines to achieve a similar multimodal understanding capability? In response, we present Understanding Pun with Image Explanations (UNPIE), a novel benchmark designed to assess the impact of multimodal inputs in resolving lexical ambiguities. Puns serve as the ideal subject for this evaluation due to their intrinsic ambiguity. Our dataset includes 1,000 puns, each accompanied by an image that explains both meanings. We pose three multimodal challenges with the annotations to assess different aspects of multimodal literacy: Pun Grounding, Disambiguation, and Reconstruction. The results indicate that various Socratic Models and Visual-Language Models improve over the text-only models when given visual context, particularly as the complexity of the tasks increases.",0 "Explainable Artificial Intelligence (AI) focuses on helping humans understand the workings of AI systems or their decisions and has been a cornerstone of AI for decades. Recent research in explainability has focused on explaining the workings of AI models or model explainability. There have also been several position statements and review papers detailing the needs of end-users for user-centered explainability but fewer implementations. Hence, this thesis seeks to bridge some gaps between model and user-centered explainability. We create an explanation ontology (EO) to represent literature-derived explanation types via their supporting components. We implement a knowledge-augmented question-answering (QA) pipeline to support contextual explanations in a clinical setting. Finally, we are implementing a system to combine explanations from different AI methods and data modalities. Within the EO, we can represent fifteen different explanation types, and we have tested these representations in six exemplar use cases. We find that knowledge augmentations improve the performance of base large language models in the contextualized QA, and the performance is variable across disease groups. In the same setting, clinicians also indicated that they prefer to see actionability as one of the main foci in explanations. 
In our explanation combination method, we plan to use similarity metrics to determine the similarity of explanations in a chronic disease detection setting. Overall, through this thesis, we design methods that can support knowledge-enabled explanations across different use cases, accounting for the methods in today's AI era that can generate the supporting components of these explanations and domain knowledge sources that can enhance them.",0 "Deep learning models have revolutionized medical imaging and diagnostics, yet their opaque nature poses challenges for clinical adoption and trust. Amongst approaches to improve model interpretability, concept-based explanations aim to provide concise and human-understandable explanations of any arbitrary classifier. However, such methods usually require a large amount of manually collected data with concept annotation, which is often scarce in the medical domain. In this paper, we propose Conceptual Counterfactual Explanations for Chest X-ray (CoCoX), which leverages the joint embedding space of an existing vision-language model (VLM) to explain black-box classifier outcomes without the need for annotated datasets. Specifically, we utilize textual concepts derived from chest radiography reports and a pre-trained chest radiography-based VLM to explain three common cardiothoracic pathologies. We demonstrate that the explanations generated by our method are semantically meaningful and faithful to underlying pathologies.",0 "Based on the earlier work on the Conscious Turing Machine, in this paper we discuss the consciousness of CTM, dig deeper into self-consciousness in CTM, offer a clear definition of it, and design a possible model of the Model-of-the-World (MoTW) processor. To prove the consciousness of CTM does exist, we chose two definitions of human consciousness and extracted four key points to see if the CTM framework meets them. If it does, we affirm that it is more likely to be able to generate consciousness. Regarding self-consciousness, our definition refers to both the definition of conscious awareness in CTM and earlier studies on the duality of self. After that, we give a brief introduction to a possible model of the MoTW processor, including five important parts: Modeling function, Gist function, Value function, Cache, and Long-term memory. Finally, we use some illusions and disorders to explain our MoTW processor model, trying to understand how these illusions work on a CTM.",0 "Objectives: To investigate clinicians' attitudes towards current automated interpretation of ECG and novel AI technologies and their perception of computer-assisted interpretation. Materials and Methods: We conducted a series of interviews with clinicians in the UK. Our study: (i) explores the potential for AI, specifically future 'human-like' computing approaches, to facilitate ECG interpretation and support clinical decision making, and (ii) elicits their opinions about the importance of explainability and trustworthiness of AI algorithms. Results: We performed inductive thematic analysis on interview transcriptions from 23 clinicians and identified the following themes: (i) a lack of trust in current systems, (ii) positive attitudes towards future AI applications and requirements for these, (iii) the relationship between the accuracy and explainability of algorithms, and (iv) opinions on education, possible deskilling, and the impact of AI on clinical competencies. 
Discussion: Clinicians do not trust current computerised methods, but welcome future 'AI' technologies. Where clinicians trust future AI interpretation to be accurate, they are less concerned that it is explainable. They also preferred ECG interpretation that demonstrated the results of the algorithm visually. Whilst clinicians do not fear job losses, they are concerned about deskilling and the need to educate the workforce to use AI responsibly. Conclusion: Clinicians are positive about the future application of AI in clinical decision-making. Accuracy is a key factor of uptake and visualisations are preferred over current computerised methods. This is viewed as a potential means of training and upskilling, in contrast to the deskilling that automation might be perceived to bring.",2 "This survey revolves around the question how the roots of a monic polynomial (resp. the spectral decomposition of a linear operator), whose coefficients depend in a smooth way on parameters, depend on those parameters. The parameter dependence of the polynomials (resp. operators) ranges from real analytic over $C^\infty$ to differentiable of finite order with often drastically different regularity results for the roots (resp. eigenvalues and eigenvectors). Another interesting point is the difference between the perturbation theory of hyperbolic polynomials (where, by definition, all roots are real) and that of general complex polynomials. The subject, which started with Rellich's work in the 1930s, enjoyed sustained interest through time that intensified in the last two decades, bringing some definitive optimal results. Throughout we try to explain the main proof ideas",2 "We study how well large language models (LLMs) explain their generations through rationales -- a set of tokens extracted from the input text that reflect the decision-making process of LLMs. Specifically, we systematically study rationales derived using two approaches: (1) popular prompting-based methods, where prompts are used to guide LLMs in generating rationales, and (2) technical attribution-based methods, which leverage attention or gradients to identify important tokens. Our analysis spans three classification datasets with annotated rationales, encompassing tasks with varying performance levels. While prompting-based self-explanations are widely used, our study reveals that these explanations are not always as ""aligned"" with the human rationale as attribution-based explanations. Even more so, fine-tuning LLMs to enhance classification task accuracy does not enhance the alignment of prompting-based rationales. Still, it does considerably improve the alignment of attribution-based methods (e.g., InputXGradient). More importantly, we show that prompting-based self-explanation is also less ""faithful"" than attribution-based explanations, failing to provide a reliable account of the model's decision-making process. To evaluate faithfulness, unlike prior studies that excluded misclassified examples, we evaluate all instances and also examine the impact of fine-tuning and accuracy on alignment and faithfulness. Our findings suggest that inconclusive faithfulness results reported in earlier studies may stem from low classification accuracy. These findings underscore the importance of more rigorous and comprehensive evaluations of LLM rationales.",0 "An important line of research in the field of explainability is to extract a small subset of crucial rationales from the full input. 
The most widely used criterion for rationale extraction is the maximum mutual information (MMI) criterion. However, in certain datasets, there are spurious features that are non-causally correlated with the label and also receive high mutual information, complicating the loss landscape of MMI. Although some penalty-based methods have been developed to penalize the spurious features (e.g., invariance penalty, intervention penalty, etc.) to help MMI work better, these are merely remedial measures. In the optimization objectives of these methods, spurious features are still distinguished from plain noise, which hinders the discovery of causal rationales. This paper aims to develop a new criterion that treats spurious features as plain noise, allowing the model to work on datasets rich in spurious features as if it were working on clean datasets, thereby making rationale extraction easier. We theoretically observe that removing either plain noise or spurious features from the input does not alter the conditional distribution of the remaining components relative to the task label. However, significant changes in the conditional distribution occur only when causal features are eliminated. Based on this discovery, the paper proposes a criterion for \textbf{M}aximizing the \textbf{R}emaining \textbf{D}iscrepancy (MRD). Experiments on six widely used datasets show that our MRD criterion improves rationale quality (measured by the overlap with human-annotated rationales) by up to $10.4\%$ as compared to several recent competitive MMI variants. Code: \url{https://github.com/jugechengzi/Rationalization-MRD}.",0 "The emergence of tools based on artificial intelligence has also led to the need to produce explanations that are understandable by a human being. In most approaches, the system is considered a black box, making it difficult to generate appropriate explanations. In this work, though, we consider a setting where models are transparent: probabilistic logic programming (PLP), a paradigm that combines logic programming for knowledge representation and probability to model uncertainty. However, given a query, the usual notion of explanation is associated with a set of choices, one for each random variable of the model. Unfortunately, such a set does not explain why the query is true and, in fact, it may contain choices that are actually irrelevant for the considered query. To improve this situation, we present in this paper an approach to explaining explanations which is based on defining a new query-driven inference mechanism for PLP where proofs are labeled with ""choice expressions"", a compact and easy to manipulate representation for sets of choices. The combination of proof trees and choice expressions allows us to produce comprehensible query justifications with a causal structure.",0 "The results of studies on the properties of ordinary and heavy water subjected to sharp mechanical impacts at acoustic repetition frequency are presented. Experimental evidence for the phenomenon of acoustically induced nuclear processes in water is provided, supported by direct measurements of radiation emission and the formation of new elements, which cannot be explained by chemical reactions. The complex influence of mechanical oscillations on changes in the concentrations of stable isotopes of elements such as Ti, B, Na, Mg, and Li in water is demonstrated. 
The cause of surface erosion of metal structures during cavitation in water is explained through the formation of fluorine and the subsequent creation of aggressive HF acid molecules. A mechanism for the occurrence of sonoluminescence under sharp acoustic impact on water is proposed.",0 "Understanding specifically where a model focuses within an image is critical for human interpretability of the decision-making process. Deep learning-based solutions are prone to learning coincidental correlations in training datasets, causing over-fitting and reducing explainability. Recent advances have shown that guiding models to human-defined regions of saliency within individual images significantly increases performance and interpretability. Human-guided models also exhibit greater generalization capabilities, as coincidental dataset features are avoided. Results show that models trained with saliency incorporation display an increase in interpretability of up to 30% over models trained without saliency information. The collection of this saliency information, however, can be costly, laborious and in some cases infeasible. To address this limitation, we propose a combination strategy of saliency incorporation and active learning to reduce the human annotation data required by 80% while maintaining the interpretability and performance increase from human saliency. Extensive experimentation outlines the effectiveness of the proposed approach across five public datasets and six active learning criteria.",0 "This study is located in the field of Human-Centered Artificial Intelligence (HCAI) and focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms, specifically investigating how humans understand and interact with the explanations provided by these algorithms. To achieve this, we employed a multi-disciplinary approach that included state-of-the-art research methods from social sciences to measure the comprehensibility of explanations generated by a state-of-the-art machine learning model, specifically the Gradient Boosting Classifier (XGBClassifier). We conducted an extensive empirical user study involving interviews with 39 participants from three different groups, each with varying expertise in data science, data visualization, and domain-specific knowledge related to the dataset used for training the machine learning model. Participants were asked a series of questions to assess their understanding of the model's explanations. To ensure replicability, we built the model using a publicly available dataset from the UC Irvine Machine Learning Repository, focusing on edible and non-edible mushrooms. Our findings reveal limitations in existing XAI methods and confirm the need for new design principles and evaluation techniques that address the specific information needs and user perspectives of different classes of AI stakeholders. We believe that the results of our research and the cross-disciplinary methodology we developed can be successfully adapted to various data types and user profiles, thus promoting dialogue and addressing opportunities in HCAI research. To support this, we are making the data resulting from our study publicly available.",2 "The growing capabilities of AI models are leading to their wider use, including in safety-critical domains. Explainable AI (XAI) aims to make these models safer to use by making their inference process more transparent. 
However, current explainability methods are seldom evaluated in the way they are intended to be used: by real-world end users. To address this, we conducted a large-scale user study with 85 healthcare practitioners in the context of human-AI collaborative chest X-ray analysis. We evaluated three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities. We specifically examined how different explanation types influence users depending on whether the AI advice and explanations are factually correct. We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps. We also observe that the quality of explanations, that is, how much factually correct information they entail, and how much this aligns with AI correctness, significantly impacts the usefulness of the different explanation types.",2 "Large Language Models (LLMs) have revolutionised natural language processing, exhibiting impressive human-like capabilities. In particular, LLMs are capable of ""lying"", knowingly outputting false statements. Hence, it is of interest and importance to develop methods to detect when LLMs lie. Indeed, several authors trained classifiers to detect LLM lies based on their internal model activations. However, other researchers showed that these classifiers may fail to generalise, for example to negated statements. In this work, we aim to develop a robust method to detect when an LLM is lying. To this end, we make the following key contributions: (i) We demonstrate the existence of a two-dimensional subspace, along which the activation vectors of true and false statements can be separated. Notably, this finding is universal and holds for various LLMs, including Gemma-7B, LLaMA2-13B, Mistral-7B and LLaMA3-8B. Our analysis explains the generalisation failures observed in previous studies and sets the stage for more robust lie detection; (ii) Building upon (i), we construct an accurate LLM lie detector. Empirically, our proposed classifier achieves state-of-the-art performance, attaining 94% accuracy in both distinguishing true from false factual statements and detecting lies generated in real-world scenarios.",0 "In an era increasingly dominated by digital platforms, the spread of misinformation poses a significant challenge, highlighting the need for solutions capable of assessing information veracity. Our research contributes to the field of Explainable Artificial Intelligence (XAI) by developing transformer-based fact-checking models that contextualise and justify their decisions by generating human-accessible explanations. Importantly, we also develop models for automatic evaluation of explanations for fact-checking verdicts across different dimensions such as \texttt{(self)-contradiction}, \texttt{hallucination}, \texttt{convincingness} and \texttt{overall quality}. By introducing human-centred evaluation methods and developing specialised datasets, we emphasise the need for aligning Artificial Intelligence (AI)-generated explanations with human judgements. This approach not only advances theoretical knowledge in XAI but also holds practical implications by enhancing the transparency, reliability and users' trust in AI-driven fact-checking systems. Furthermore, the development of our metric learning models is a first step towards potentially increasing efficiency and reducing reliance on extensive manual assessment. 
Based on experimental results, our best performing generative model achieved a \textsc{ROUGE-1} score of 47.77, demonstrating superior performance in generating fact-checking explanations, particularly when provided with high-quality evidence. Additionally, the best performing metric learning model showed a moderately strong correlation with human judgements on objective dimensions such as \texttt{(self)-contradiction} and \texttt{hallucination}, achieving a Matthews Correlation Coefficient (MCC) of around 0.7.",0 "Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions, making it very important to understand and explain these models to ensure informed decisions. Traditional explainable AI (XAI) methods, which highlight feature or temporal importance, often require expert knowledge. In contrast, natural language explanations (NLEs) are more accessible to laypeople. However, evaluating forecast NLEs is difficult due to the complex causal relationships in time series data. To address this, we introduce two new performance metrics based on simulatability, assessing how well a human surrogate can predict model forecasts using the explanations. Experiments show these metrics differentiate good from poor explanations and align with human judgments. Utilizing these metrics, we further evaluate the ability of state-of-the-art large language models (LLMs) to generate explanations for time series data, finding that numerical reasoning, rather than model size, is the main factor influencing explanation quality.",0 "With the rapid advancement of machine translation research, evaluation toolkits have become essential for benchmarking system progress. Tools like COMET and SacreBLEU offer single quality score assessments that are effective for pairwise system comparisons. However, these tools provide limited insights for fine-grained system-level comparisons and the analysis of instance-level defects. To address these limitations, we introduce Translation Canvas, an explainable interface designed to pinpoint and analyze translation systems' performance: 1) Translation Canvas assists machine translation researchers in comprehending system-level model performance by identifying common errors (their frequency and severity) and analyzing relationships between different systems based on various evaluation metrics. 2) It supports fine-grained analysis by highlighting error spans with explanations and selectively displaying systems' predictions. According to human evaluation, Translation Canvas demonstrates superior performance over COMET and SacreBLEU packages under enjoyability and understandability criteria.",2 "Background/Objectives: Cataract surgery, a very common and critical procedure for restoring vision, has outcomes that can vary based on patient demographics. This study aimed to elucidate the effects of age and sex on the risk factors, intraoperative complications, and postoperative outcomes of cataract surgery. Subjects/Methods: Conducted as a single-center retrospective cohort study, it analyzed 691 eyes from 589 individuals who underwent surgery at a tertiary referral center, utilizing data from electronic medical records to assess preoperative risk factors, intraoperative complications, and pre- and post-operative best corrected visual acuity (BCVA) along with demographic data. Results: The main results highlighted that males aged 65-75 years exhibited significantly higher rates of functional postoperative BCVA (91% for males vs. 
79% for females, p=0.007), a disparity that is not explained by differences in surgical complications or risk factor prevalence. Furthermore, the study identified age-specific thresholds where BCVA improvements significantly declined beyond 65 years for females and 75 years for males. The likelihood of worsened BCVA post-surgery increased with age for both sexes, with a significant decline in BCVA improvement transitioning from 55-65 years to 65-75 years age groups. Conclusions: The findings underscore the critical influence of both sex and age on cataract surgery outcomes, revealing significant sex-specific age thresholds that signal lesser improvements in postoperative BCVA. These insights advocate for the integration of patient age and sex into preoperative evaluations to better tailor the timing and planning of cataract surgery, ultimately aiming to optimize clinical outcomes.",2 "In recent years, Graph Neural Networks (GNNs) have become successful in molecular property prediction tasks such as toxicity analysis. However, due to the black-box nature of GNNs, their outputs can be concerning in high-stakes decision-making scenarios, e.g., drug discovery. Facing such an issue, Graph Counterfactual Explanation (GCE) has emerged as a promising approach to improve GNN transparency. However, current GCE methods usually fail to take domain-specific knowledge into consideration, which can result in outputs that are not easily comprehensible by humans. To address this challenge, we propose a novel GCE method, LLM-GCE, to unleash the power of large language models (LLMs) in explaining GNNs for molecular property prediction. Specifically, we utilize an autoencoder to generate the counterfactual graph topology from a set of counterfactual text pairs (CTPs) based on an input graph. Meanwhile, we also incorporate a CTP dynamic feedback module to mitigate LLM hallucination, which provides intermediate feedback derived from the generated counterfactuals as an attempt to give more faithful guidance. Extensive experiments demonstrate the superior performance of LLM-GCE. Our code is released on https://github.com/YinhanHe123/new\_LLM4GNNExplanation.",0 "Connected and automated vehicles (CAVs) are considered a potential solution for future transportation challenges, aiming to develop systems that are efficient, safe, and environmentally friendly. However, CAV control presents significant challenges due to the complexity of interconnectivity and coordination required among vehicles. Multi-agent reinforcement learning (MARL), which has shown notable advancements in addressing complex problems in autonomous driving, robotics, and human-vehicle interaction, emerges as a promising tool to enhance CAV capabilities. Despite its potential, there is a notable absence of current reviews on mainstream MARL algorithms for CAVs. To fill this gap, this paper offers a comprehensive review of MARL's application in CAV control. The paper begins with an introduction to MARL, explaining its unique advantages in handling complex and multi-agent scenarios. It then presents a detailed survey of MARL applications across various control dimensions for CAVs, including critical scenarios such as platooning control, lane-changing, and unsignalized intersections. Additionally, the paper reviews prominent simulation platforms essential for developing and testing MARL algorithms. 
Lastly, it examines the current challenges in deploying MARL for CAV control, including macro-micro optimization, communication, mixed traffic, and sim-to-real challenges. Potential solutions discussed include hierarchical MARL, decentralized MARL, adaptive interactions, and offline MARL.",2 "The aggressiveness of prostate cancer, the most common cancer in men worldwide, is primarily assessed based on histopathological data using the Gleason scoring system. While artificial intelligence (AI) has shown promise in accurately predicting Gleason scores, these predictions often lack inherent explainability, potentially leading to distrust in human-machine interactions. To address this issue, we introduce a novel dataset of 1,015 tissue microarray core images, annotated by an international group of 54 pathologists. The annotations provide detailed localized pattern descriptions for Gleason grading in line with international guidelines. Utilizing this dataset, we develop an inherently explainable AI system based on a U-Net architecture that provides predictions leveraging pathologists' terminology. This approach circumvents post-hoc explainability methods while maintaining or exceeding the performance of methods trained directly for Gleason pattern segmentation (Dice score: 0.713 $\pm$ 0.003 trained on explanations vs. 0.691 $\pm$ 0.010 trained on Gleason patterns). By employing soft labels during training, we capture the intrinsic uncertainty in the data, yielding strong results in Gleason pattern segmentation even in the context of high interobserver variability. With the release of this dataset, we aim to encourage further research into segmentation in medical tasks with high levels of subjectivity and to advance the understanding of pathologists' reasoning processes.",0 "Cooperative behavior constitutes a key aspect of human society and non-human animal systems, but explaining how cooperation evolves represents a major scientific challenge. It is now well established that social network structure plays a central role for the viability of cooperation. However, not much is known about the importance of the positions of cooperators in the networks for the evolution of cooperation. Here, we investigate how the spread of cooperation is affected by correlations between cooperativeness and individual social connectedness (such that cooperators occupy well-connected network positions). Using simulation models, we find that these correlations enhance cooperation in standard scale-free networks but not in standard Poisson networks. In contrast, when degree assortativity is increased such that individuals cluster with others of similar social connectedness, we find that Poisson networks can maintain high levels of cooperation, which can even exceed those of scale-free networks. We show that this is due to dynamics where bridge areas between social clusters act as barriers to the spread of defection. We also find that this positive effect on cooperation is sensitive to the presence of Trojan horses (defectors placed within cooperator clusters), which allow defection to invade. The results provide new knowledge about the conditions under which cooperation may evolve, and are also relevant to consider in regard to the design of cooperation studies.",0 "As artificial intelligence (AI) becomes more integrated into educational environments, how can we ensure that these systems are both understandable and trustworthy? The growing demand for explainability in AI systems is a critical area of focus. 
This paper explores Human-Centric eXplainable AI (HCXAI) in the educational landscape, emphasizing its role in enhancing learning outcomes, fostering trust among users, and ensuring transparency in AI-driven tools, particularly through the innovative use of large language models (LLMs). What challenges arise in the implementation of explainable AI in educational contexts? This paper analyzes these challenges, addressing the complexities of AI models and the diverse needs of users. It outlines comprehensive frameworks for developing HCXAI systems that prioritize user understanding and engagement, ensuring that educators and students can effectively interact with these technologies. Furthermore, what steps can educators, developers, and policymakers take to create more effective, inclusive, and ethically responsible AI solutions in education? The paper provides targeted recommendations to address this question, highlighting the necessity of prioritizing explainability. By doing so, how can we leverage AI's transformative potential to foster equitable and engaging educational experiences that support diverse learners?",0 "Electricity forecasting has been a recurring research topic, as it is key to finding the right balance between production and consumption. While most papers are focused on the national or regional scale, few are interested in the household level. Disaggregated forecasting is a common topic in the Machine Learning (ML) literature but lacks the explainability that household energy forecasts require. This paper specifically targets the challenges of forecasting electricity use at the household level. It confronts common Machine Learning algorithms with household electricity forecasts, weighing the pros and cons, including accuracy and explainability, using well-known key metrics. Furthermore, we also confront them with the business challenges specific to this sector, such as explainability and outlier resistance. We introduce a custom decision tree, aiming to provide a fair estimate of energy consumption while being explainable and consistent with human intuition. We show that this novel method allows greater explainability without sacrificing much accuracy. The custom tree methodology can be used in various business use cases but is subject to limitations, such as a lack of resilience with outliers.",0 "In the last two decades, Alternating-time Temporal Logic (ATL) has proved very useful for modeling strategic reasoning in Multi-Agent Systems (MAS). However, this logic struggles to capture the bounded rationality inherent in human decision-making processes. To overcome these limitations, Natural Alternating-time Temporal Logic (NatATL) has recently been introduced. As an extension of ATL, NatATL incorporates bounded memory constraints into agents' strategies, which allows them to resemble human cognitive limitations. In this paper, we present a model checker tool for NatATL specifications - both for memoryless strategies and strategies with recall - integrated into VITAMIN, an open-source model checker designed specifically for MAS verification. By embedding NatATL into VITAMIN, we transform theoretical advancements into a practical verification framework, enabling comprehensive analysis and validation of strategic reasoning in complex multi-agent environments.
Our novel tool paves the way for applications in areas such as explainable AI and human-in-the-loop systems, highlighting NatATL's substantial potential.",0 "Natural Language Processing (NLP) research on AI Safety and social bias in AI has focused on safety for humans and social bias against human minorities. However, some AI ethicists have argued that the moral significance of nonhuman animals has been ignored in AI research. Therefore, the purpose of this study is to investigate whether there is speciesism, i.e., discrimination against nonhuman animals, in NLP research. First, we explain why nonhuman animals are relevant in NLP research. Next, we survey the findings of existing research on speciesism in 300 NLP researchers, data, and models, and further investigate this problem in this study. The findings of this study suggest that speciesism exists within researchers, data, and models, respectively. Specifically, our survey and experiments show that (a) NLP researchers, even those who study social bias in AI, do not recognize speciesism or speciesist bias; (b) among NLP data, speciesist bias is inherent in the annotations of the datasets used to evaluate NLP models; (c) OpenAI GPTs, recent NLP models, exhibit speciesist bias by default. Finally, we discuss how we can reduce speciesism in NLP research.",2 "Zero-shot reasoning methods with Large Language Models (LLMs) offer significant advantages including great generalization to novel tasks and reduced dependency on human-crafted examples. However, the current zero-shot methods still have limitations in complex tasks, e.g., answering questions that require multi-step reasoning. In this paper, we address this limitation by introducing a novel structure-oriented analysis method to help LLMs better understand the question and guide the problem-solving process of LLMs. We first demonstrate how the existing reasoning strategies, Chain-of-Thought and ReAct, can benefit from our structure-oriented analysis. In addition to empirical investigations, we leverage the probabilistic graphical model to theoretically explain why our structure-oriented analysis can improve the LLM reasoning process. To further improve the reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA), that can better enforce the reasoning process following our structure-oriented analysis by refinement techniques and is equipped with external knowledge retrieval capability to reduce factual errors. Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods. Finally, the system not only improves reasoning accuracy in complex tasks but also demonstrates robustness against potential attacks that corrupt the reasoning process.",0 "Despite advancements in enhancing LLM safety against jailbreak attacks, evaluating LLM defenses remains a challenge, with current methods often lacking explainability and generalization to complex scenarios, leading to incomplete assessments (e.g., direct judgment without reasoning, low F1 score of GPT-4 in complex cases, bias in multilingual scenarios). To address this, we present JAILJUDGE, a comprehensive benchmark featuring diverse risk scenarios, including synthetic, adversarial, in-the-wild, and multilingual prompts, along with high-quality human-annotated datasets.
The JAILJUDGE dataset includes over 35k+ instruction-tune data with reasoning explainability and JAILJUDGETEST, a 4.5k+ labeled set for risk scenarios, and a 6k+ multilingual set across ten languages. To enhance evaluation with explicit reasoning, we propose the JailJudge MultiAgent framework, which enables explainable, fine-grained scoring (1 to 10). This framework supports the construction of instruction-tuning ground truth and facilitates the development of JAILJUDGE Guard, an end-to-end judge model that provides reasoning and eliminates API costs. Additionally, we introduce JailBoost, an attacker-agnostic attack enhancer, and GuardShield, a moderation defense, both leveraging JAILJUDGE Guard. Our experiments demonstrate the state-of-the-art performance of JailJudge methods (JailJudge MultiAgent, JAILJUDGE Guard) across diverse models (e.g., GPT-4, Llama-Guard) and zero-shot scenarios. JailBoost and GuardShield significantly improve jailbreak attack and defense tasks under zero-shot settings, with JailBoost enhancing performance by 29.24% and GuardShield reducing defense ASR from 40.46% to 0.15%.",0 "Graph Neural Networks (GNNs) have achieved great success in Knowledge Graph Completion (KGC) by modelling how entities and relations interact in recent years. However, the explanation of the predicted facts has not caught the necessary attention. Proper explanations for the results of GNN-based KGC models increase model transparency and help researchers develop more reliable models. Existing practices for explaining KGC tasks rely on instance/subgraph-based approaches, while in some scenarios, paths can provide more user-friendly and interpretable explanations. Nonetheless, the methods for generating path-based explanations for KGs have not been well-explored. To address this gap, we propose Power-Link, the first path-based KGC explainer that explores GNN-based models. We design a novel simplified graph-powering technique, which enables the generation of path-based explanations with a fully parallelisable and memory-efficient training scheme. We further introduce three new metrics for quantitative evaluation of the explanations, together with a qualitative human evaluation. Extensive experiments demonstrate that Power-Link outperforms the SOTA baselines in interpretability, efficiency, and scalability.",2 "Recent advances in Natural Language Processing (NLP) have ignited interest in developing effective methods for predicting protein-ligand interactions (PLIs) given their relevance to drug discovery and protein engineering efforts and the ever-growing volume of biochemical sequence and structural data available. The parallels between human languages and the ""languages"" used to represent proteins and ligands have enabled the use of NLP machine learning approaches to advance PLI studies. In this review, we explain where and how such approaches have been applied in the recent literature and discuss useful mechanisms such as long short-term memory, transformers, and attention. We conclude with a discussion of the current limitations of NLP methods for the study of PLIs as well as key challenges that need to be addressed in future work.",0 "An ability to learn about new objects from a small amount of visual data and produce convincing linguistic justification about the presence/absence of certain concepts (that collectively compose the object) in novel scenarios is an important characteristic of human cognition. 
This is possible due to abstraction of attributes/properties that an object is composed of e.g. an object `bird' can be identified by the presence of a beak, feathers, legs, wings, etc. Inspired by this aspect of human reasoning, in this work, we present a zero-shot framework for fine-grained visual concept learning by leveraging large language model and Visual Question Answering (VQA) system. Specifically, we prompt GPT-3 to obtain a rich linguistic description of visual objects in the dataset. We convert the obtained concept descriptions into a set of binary questions. We pose these questions along with the query image to a VQA system and aggregate the answers to determine the presence or absence of an object in the test images. Our experiments demonstrate comparable performance with existing zero-shot visual classification methods and few-shot concept learning approaches, without substantial computational overhead, yet being fully explainable from the reasoning perspective.",0 "The reasoning abilities of Large Language Models (LLMs) are becoming a central focus of study in NLP. In this paper, we consider the case of syllogistic reasoning, an area of deductive reasoning studied extensively in logic and cognitive psychology. Previous research has shown that pre-trained LLMs exhibit reasoning biases, such as $\textit{content effects}$, avoid answering that $\textit{no conclusion follows}$, display human-like difficulties, and struggle with multi-step reasoning. We contribute to this research line by systematically investigating the effects of chain-of-thought reasoning, in-context learning (ICL), and supervised fine-tuning (SFT) on syllogistic reasoning, considering syllogisms with conclusions that support or violate world knowledge, as well as ones with multiple premises. Crucially, we go beyond the standard focus on accuracy, with an in-depth analysis of the conclusions generated by the models. Our results suggest that the behavior of pre-trained LLMs can be explained by heuristics studied in cognitive science and that both ICL and SFT improve model performance on valid inferences, although only the latter mitigates most reasoning biases without harming model consistency.",0 "In this memoir, we seek to construct a constructive theory that is as complete as possible to describe the algebraic properties of the real number field in constructive mathematics without a dependent choice axiom. To this purpose, we use a dynamical version of geometric theories. We obtain a nice description of the algebraic properties of the real number field, but also a first outline for a constructive theory of certain o-minimal structures. The memoir we present here is an unfinished development of the article by the authors https://inria.hal.science/hal-01426164. Compared to that paper, however, we have modified the definition of continuous semialgebraic functions, in the same spirit in which Bishop defines a continuous real function as a uniformly continuous function on any bounded interval. Despite its unfinished nature and the many questions that we do not currently know how to answer, we hope that this paper will arouse interest for its original approach to the subject. This paper is an English translation of a French version on arXiv:2406.15218",0 "Artificial intelligence methods are being increasingly applied across various domains, but their often opaque nature has raised concerns about accountability and trust. 
In response, the field of explainable AI (XAI) has emerged to address the need for human-understandable AI systems. Evolutionary computation (EC), a family of powerful optimization and learning algorithms, offers significant potential to contribute to XAI, and vice versa. This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models. We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques. Furthermore, we discuss the application of XAI principles within EC itself, investigating how these principles can illuminate the behavior and outcomes of EC algorithms, their (automatic) configuration, and the underlying problem landscapes they optimize. Finally, we discuss open challenges in XAI and highlight opportunities for future research at the intersection of XAI and EC. Our goal is to demonstrate EC's suitability for addressing current explainability challenges and to encourage further exploration of these methods, ultimately contributing to the development of more understandable and trustworthy ML models and EC algorithms.",0 "This paper proposes a novel framework for understanding large language models (LLMs) by reconceptualizing them as semiotic machines rather than as imitations of human cognition. Drawing from structuralist and post-structuralist theories of language-specifically the works of Ferdinand de Saussure and Jacques Derrida-I argue that LLMs should be understood as models of language itself, aligning with Derrida's concept of 'writing' (l'ecriture). The paper is structured into three parts. First, I lay the theoretical groundwork by explaining how the word2vec embedding algorithm operates within Saussure's framework of language as a relational system of signs. Second, I apply Derrida's critique of Saussure to position 'writing' as the object modeled by LLMs, offering a view of the machine's 'mind' as a statistical approximation of sign behavior. Finally, the third section addresses how modern LLMs reflect post-structuralist notions of unfixed meaning, arguing that the ""next token generation"" mechanism effectively captures the dynamic nature of meaning. By reconceptualizing LLMs as semiotic machines rather than cognitive models, this framework provides an alternative lens through which to assess the strengths and limitations of LLMs, offering new avenues for future research.",0 "Local explanation of machine learning (ML) models has recently received significant attention due to its ability to reduce ambiguities about why the models make specific decisions. Extensive efforts have been invested to address explainability for different data types, particularly images. However, the work on multivariate time series data is limited. A possible reason is that the conflation of time and other variables in time series data can cause the generated explanations to be incomprehensible to humans. In addition, some efforts on time series fall short of providing accurate explanations as they either ignore a context in the time domain or impose differentiability requirements on the ML models. Such restrictions impede their ability to provide valid explanations in real-world applications and non-differentiable ML settings. In this paper, we propose a swapping--sliding decision explanation for multivariate time series classifiers, called SSET. 
The proposal consists of swapping and sliding stages, by which salient sub-sequences causing significant drops in the prediction score are presented as explanations. In the former stage, the important variables are detected by swapping the series of interest with close training data from target classes. In the latter stage, the salient observations of these variables are explored by sliding a window over each time step. Additionally, the model measures the importance of different variables over time in a novel way characterized by multiple factors. We leverage SSET in the affect detection domain, where evaluations are performed on two real-world physiological time series datasets, WESAD and MAHNOB-HCI, and a deep convolutional classifier, CN-Waterfall. This classifier has shown superior performance to prior models in detecting human affective states. Comparing SSET with several benchmarks, including LIME, integrated gradients, and Dynamask, we found..",0 "To many, Chomsky's debates with Quine and Skinner are an updated version of the Rationalist Empiricist debates of the 17th century, the consensus being that Chomsky's Rationalism was victorious. This dispute has reemerged with the advent of Large Language Models, with some arguing that LLMs vindicate rationalism because of the necessity of building in innate biases to make them work. The necessity of building in innate biases is taken to prove that empiricism lacks the conceptual resources to explain linguistic competence. Such claims depend on the nature of the empiricism one is endorsing. Externalized Empiricism has no difficulties with innate apparatus once they are determined empirically (Quine 1969). Thus, externalized empiricism is not refuted by the need to build innate biases into LLMs. Furthermore, the relevance of LLMs to the rationalist empiricist debate in relation to humans is dubious. For any claim about whether LLMs learn in an empiricist manner to be relevant to humans, it needs to be shown that LLMs and humans learn in the same way. Two key features distinguish humans and LLMs. Humans learn despite a poverty of stimulus, and LLMs learn because of an incredibly rich stimulus. Human linguistic outputs are grounded in sensory experience, and LLMs are not. These differences in how the two learn indicate that they use different underlying competencies to produce their output. Therefore, any claims about whether LLMs learn in an empiricist manner are not relevant to whether humans learn in an empiricist manner.",0 "In this paper we assess how well users know biometric authentication methods, how they perceive them, and if they have misconceptions about them. We present the results of an online survey that we conducted in two rounds (2019, N=57; and 2023, N=47) to understand the impact of the increasing availability of biometrics on their use and perception. The survey covered participants' general understanding of physiological and behavioral biometrics and their perceived usability and security. While most participants were able to name examples and stated that they use biometrics in their daily lives, they still had difficulties explaining the concepts behind them. We shed light on participants' misconceptions, their coping strategies with authentication failures and potential attacks, as well as their perception of the usability and security of biometrics in general.
As such, our results can support the design of both further studies to gain deeper insights and future biometric interfaces to foster the informed use of biometrics.",2 "The burgeoning field of Natural Language Processing (NLP) stands at a critical juncture where the integration of fairness within its frameworks has become an imperative. This PhD thesis addresses the need for equity and transparency in NLP systems, recognizing that fairness in NLP is not merely a technical challenge but a moral and ethical necessity, requiring a rigorous examination of how these technologies interact with and impact diverse human populations. Through this lens, this thesis undertakes a thorough investigation into the development of equitable NLP methodologies and the evaluation of biases that prevail in current systems. First, it introduces an innovative algorithm to mitigate biases in multi-class classifiers, tailored for high-risk NLP applications, surpassing traditional methods in both bias mitigation and prediction accuracy. Then, an analysis of the Bios dataset reveals the impact of dataset size on discriminatory biases and the limitations of standard fairness metrics. This awareness has led to explorations in the field of explainable AI, aiming for a more complete understanding of biases where traditional metrics are limited. Consequently, the thesis presents COCKATIEL, a model-agnostic explainability method that identifies and ranks concepts in Transformer models, outperforming previous approaches in sentiment analysis tasks. Finally, the thesis contributes to bridging the gap between fairness and explainability by introducing TaCo, a novel method to neutralize bias in Transformer model embeddings. In conclusion, this thesis constitutes a significant interdisciplinary endeavor that intertwines explicability and fairness to challenge and reshape current NLP paradigms. The methodologies and critiques presented contribute to the ongoing discourse on fairness in machine learning, offering actionable solutions for more equitable and responsible AI systems.",0 "Sophisticated grammatical error detection/correction tools are available for a small set of languages such as English and Chinese. However, it is not straightforward -- if not impossible -- to adapt them to morphologically rich languages with complex writing rules like Turkish, which has more than 80 million speakers. Even though several tools exist for Turkish, they primarily focus on spelling errors rather than grammatical errors and lack features such as web interfaces, error explanations and feedback mechanisms. To fill this gap, we introduce GECTurk WEB, a light, open-source, and flexible web-based system that can detect and correct the most common forms of Turkish writing errors, such as the misuse of diacritics, compound and foreign words, pronouns, and light verbs, along with spelling mistakes. Our system provides native speakers and second language learners with an easily accessible tool to detect/correct such mistakes and also to learn from their mistakes by showing the explanation for the violated rule(s). The proposed system achieves an 88.3 system usability score, and is shown to help learn/remember a grammatical rule (confirmed by 80% of the participants).
The GECTurk WEB is available both as an offline tool at https://github.com/GGLAB-KU/gecturkweb or online at www.gecturk.net.",2 "A central goal of linguistic theory is to find a precise characterization of the notion ""possible human language"", in the form of a computational device that is capable of describing all and only the languages that can be acquired by a typically developing human child. The success of recent large language models (LLMs) in NLP applications arguably raises the possibility that LLMs might be computational devices that meet this goal. This would only be the case if, in addition to succeeding in learning human languages, LLMs struggle to learn ""impossible"" human languages. Kallini et al. (2024; ""Mission: Impossible Language Models"", Proc. ACL) conducted experiments aiming to test this by training GPT-2 on a variety of synthetic languages, and found that it learns some more successfully than others. They present these asymmetries as support for the idea that LLMs' inductive biases align with what is regarded as ""possible"" for human languages, but the most significant comparison has a confound that makes this conclusion unwarranted. In this paper I explain the confound and suggest some ways forward towards constructing a comparison that appropriately tests the underlying issue.",0 "Sustainability commonly refers to entities, such as individuals, companies, and institutions, having a non-detrimental (or even positive) impact on the environment, society, and the economy. With sustainability becoming a synonym of acceptable and legitimate behaviour, it is being increasingly demanded and regulated. Several frameworks and standards have been proposed to measure the sustainability impact of corporations, including United Nations' sustainable development goals and the recently introduced global sustainability reporting framework, amongst others. However, the concept of corporate sustainability is complex due to the diverse and intricate nature of firm operations (i.e. geography, size, business activities, interlinks with other stakeholders). As a result, corporate sustainability assessments are plagued by subjectivity both within data that reflect corporate sustainability efforts (i.e. corporate sustainability disclosures) and the analysts evaluating them. This subjectivity can be distilled into distinct challenges, such as incompleteness, ambiguity, unreliability and sophistication on the data dimension, as well as limited resources and potential bias on the analyst dimension. Put together, subjectivity hinders effective cost attribution to entities non-compliant with prevailing sustainability expectations, potentially rendering sustainability efforts and its associated regulations futile. To this end, we argue that Explainable Natural Language Processing (XNLP) can significantly enhance corporate sustainability analysis. Specifically, linguistic understanding algorithms (lexical, semantic, syntactic), integrated with XAI capabilities (interpretability, explainability, faithfulness), can bridge gaps in analyst resources and mitigate subjectivity problems within data.",0 "Predicting pedestrian behavior is challenging yet crucial for applications such as autonomous driving and smart city. Recent deep learning models have achieved remarkable performance in making accurate predictions, but they fail to provide explanations of their inner workings. One reason for this problem is the multi-modal inputs. 
To bridge this gap, we present Sparse Prototype Network (SPN), an explainable method designed to simultaneously predict a pedestrian's future action, trajectory, and pose. SPN leverages an intermediate prototype bottleneck layer to provide sample-based explanations for its predictions. The prototypes are modality-independent, meaning that they can correspond to any modality from the input. Therefore, SPN can extend to arbitrary combinations of modalities. Regularized by mono-semanticity and clustering constraints, the prototypes learn consistent and human-understandable features and achieve state-of-the-art performance on action, trajectory and pose prediction on TITAN and PIE. Finally, we propose a metric named Top-K Mono-semanticity Scale to quantitatively evaluate the explainability. Qualitative results show the positive correlation between sparsity and explainability. Code available at https://github.com/Equinoxxxxx/SPN.",0 "Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.",2 "Persona-driven role-playing (PRP) aims to build AI characters that can respond to user queries by faithfully sticking with all persona statements. Unfortunately, existing faithfulness criteria for PRP are limited to coarse-grained LLM-based scoring without a clear definition or formulation. This paper presents a pioneering exploration to quantify PRP faithfulness as a fine-grained and explainable criterion, which also serves as a reliable reference for optimization. Our criterion first discriminates persona statements into active and passive constraints by identifying the query-statement relevance. Then, we incorporate all constraints following the principle that the AI character's response should be (a) entailed by active (relevant) constraints and (b) not contradicted by passive (irrelevant) constraints. We translate this principle mathematically into a novel Active-Passive-Constraint (APC) score, a constraint-wise sum of natural language inference (NLI) scores weighted by relevance scores. In practice, we build the APC scoring system by symbolically distilling small discriminators from GPT-4 for efficiency. We validate the quality of the APC score against human evaluation based on example personas with tens of statements, and the results show a high correlation.
We further leverage it as a reward system in direct preference optimization (DPO) for better AI characters. Our experiments offer a fine-grained and explainable comparison between existing PRP techniques, revealing their advantages and limitations. We further find APC-based DPO to be one of the most competitive techniques for sticking with all constraints and can be well incorporated with other techniques. We then extend the scale of the experiments to real persons with hundreds of statements and reach a consistent conclusion.",2 "As LLMs increasingly take on roles in human-AI interactions and autonomous AI systems, understanding their social behavior becomes important for informed use and continuous improvement. However, their behaviors in social interactions with humans and other agents, as well as the mechanisms shaping their responses, remain underexplored. To address this gap, we introduce a novel probabilistic framework, State-Understanding-Value-Action (SUVA), to systematically analyze LLM responses in social contexts based on their textual outputs (i.e., utterances). Using canonical behavioral economics games and social preference concepts relatable to LLM users, SUVA assesses LLMs' social behavior through both their final decisions and the response generation processes leading to those decisions. Our analysis of eight LLMs -- including two GPT, four LLaMA, and two Mistral models -- suggests that most models do not generate decisions aligned solely with self-interest; instead, they often produce responses that reflect social welfare considerations and display patterns consistent with direct and indirect reciprocity. Additionally, higher-capacity models more frequently display group identity effects. The SUVA framework also provides explainable tools -- including tree-based visualizations and probabilistic dependency analysis -- to elucidate how factors in LLMs' utterance-based reasoning influence their decisions. We demonstrate that utterance-based reasoning reliably predicts LLMs' final actions; references to altruism, fairness, and cooperation in the reasoning increase the likelihood of prosocial actions, while mentions of self-interest and competition reduce them. Overall, our framework enables practitioners to assess LLMs for applications involving social interactions, and provides researchers with a structured method to interpret how LLM behavior arises from utterance-based reasoning.",0 "Learning rewards from human behaviour or feedback is a promising approach to aligning AI systems with human values but fails to consistently extract correct reward functions. Interpretability tools could enable users to understand and evaluate possible flaws in learned reward functions. We propose Counterfactual Trajectory Explanations (CTEs) to interpret reward functions in reinforcement learning by contrasting an original with a counterfactual partial trajectory and the rewards they each receive. We derive six quality criteria for CTEs and propose a novel Monte-Carlo-based algorithm for generating CTEs that optimises these quality criteria. Finally, we measure how informative the generated explanations are to a proxy-human model by training it on CTEs. CTEs are demonstrably informative for the proxy-human model, increasing the similarity between its predictions and the reward function on unseen trajectories. Further, it learns to accurately judge differences in rewards between trajectories and generalises to out-of-distribution examples. 
Although CTEs do not lead to a perfect understanding of the reward, our method, and more generally the adaptation of XAI methods, are presented as a fruitful approach for interpreting learned reward functions.",0 "Since LLMs emerged, more attention has been paid to abstractive long-form summarization, where longer input sequences contain more information. Nevertheless, the automatic evaluation of such summaries remains underexplored. Current evaluation metrics for long-form summarization either use similarity-based metrics like ROUGE and BERTScore, which rely on surface-level similarity and fail to consider informativeness, or LLM-based metrics using appropriate prompts or pre-defined schemas, which lack a quantitative analysis of informative richness, are not robust, are easily overwhelmed by long contexts, and are rather subjective and hard to explain. In this paper, we propose a new evaluation metric called EVA-Score to extract all information from the given summaries, identify overlapping information based on the reference, and calculate the information score. We test EVA-Score on several datasets and the experimental results reveal that EVA-Score shows the highest correlation with human judgments. We also re-evaluate the performance of LLMs on long-form summarization from the information perspective. The results indicate that LLM responses still have a gap with human-written answers. Moreover, we provide a detailed analysis of the effectiveness of EVA-Score, forecasting future ways to automatically evaluate abstractive long-form summarization.",0 "Amyloid-$\beta$ (A$\beta$) plaques in conjunction with hyperphosphorylated tau proteins in the form of neurofibrillary tangles are the two neuropathological hallmarks of Alzheimer's disease. It is well-known that the identification of individuals with A$\beta$ positivity could enable early diagnosis. In this work, we aim to capture the A$\beta$ positivity status in an unbalanced cohort enclosing subjects at different disease stages, exploiting the underlying structural and connectivity disease-induced modulations as revealed by structural, functional, and diffusion MRI. Of note, due to the unbalanced cohort, the outcomes may be guided by those factors rather than amyloid accumulation. The partial views provided by each modality are integrated in the model, allowing it to take full advantage of their complementarity in encoding the effects of A$\beta$ accumulation, leading to an accuracy of $0.762\pm0.04$. The specificity of the information brought by each modality is assessed by \textit{post-hoc} explainability analysis (guided backpropagation), highlighting the underlying structural and functional changes. Notably, well-established biomarker key regions related to A$\beta$ deposition could be identified by all modalities, including the hippocampus, thalamus, precuneus, and cingulate gyrus, attesting to the reliability of the method as well as its potential in shedding light on modality-specific, possibly unknown A$\beta$ deposition signatures.",0 "Explainable Artificial Intelligence (XAI) is essential for building advanced machine learning-powered applications, especially in critical domains such as medical diagnostics or autonomous driving.
Legal, business, and ethical requirements motivate using effective XAI, but the increasing number of different methods makes it challenging to pick the right ones. Further, as explanations are highly context-dependent, measuring the effectiveness of XAI methods without users can only reveal a limited amount of information, excluding human factors such as the ability to understand it. We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task, designed such that good performance indicates that the explanation provides helpful information. In other words, we address the helpfulness of XAI for human decision-making. Further, a user study on state-of-the-art methods was conducted, showing differences in their ability to generate trust and skepticism and in the ability to correctly judge the rightfulness of an AI decision. Based on the results, we highly recommend using and extending this approach for more objective-based human-centered user studies to measure XAI performance in an end-to-end fashion.",2 "Reasoning is key to many decision-making processes. It requires consolidating a set of rule-like premises that are often associated with degrees of uncertainty and observations to draw conclusions. In this work, we address both the case where premises are specified as numeric probabilistic rules and situations in which humans state their estimates using words expressing degrees of certainty. Existing probabilistic reasoning datasets simplify the task, e.g., by requiring the model to only rank textual alternatives, by including only binary random variables, or by making use of a limited set of templates that result in less varied text. In this work, we present QUITE, a question answering dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships. QUITE provides high-quality natural language verbalizations of premises together with evidence statements and expects the answer to a question in the form of an estimated probability. We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types (causal, evidential, and explaining-away). Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning. We release QUITE and code for training and experiments on Github.",0 "Supervised Learning is a way of developing Artificial Intelligence systems in which a computer algorithm is trained on labeled data inputs. The effectiveness of a Supervised Learning algorithm is determined by its performance on a given dataset for a particular problem. In the case of Supervised Learning problems, Stacking Ensembles usually perform better than individual classifiers due to their generalization ability. Stacking Ensembles combine predictions from multiple Machine Learning algorithms to make final predictions. Despite Stacking Ensembles' superior performance, their overhead, such as high cost, resources, time, and lack of explainability, creates challenges in real-life applications. This paper shows how we can strike a balance between performance, time, and resource constraints. Another goal of this research is to make Ensembles more explainable and intelligible using the Human-Centered approach.
To achieve the aforementioned goals, we propose a Human-Centered Behavior-inspired algorithm that streamlines the Ensemble Learning process while also reducing time, cost, and resource overhead, resulting in the superior performance of Supervised Learning in real-world applications. To demonstrate the effectiveness of our method, we perform our experiments on nine real-world datasets. Experimental results reveal that the proposed method satisfies our goals and outperforms the existing methods.",0 "The atmospheric characterisation of GJ1214 b has so far remained uncertain due to the observed flatness of this planet's transit spectra, which is typically attributed to the presence of hazes or clouds in its atmosphere. Here we combine for the first time transit and eclipse observations obtained with JWST to benefit from both types of constraints and advance the atmospheric characterisation of GJ1214 b. Our results reveal that photochemical hazes can be produced at high enough mass fluxes in the atmosphere of GJ1214 b to explain both types of observations. These hazes have a drastic impact on the atmospheric thermal structure, which has further ramifications on the emitted radiation of the planet, as well as its Bond albedo. Clouds of KCl, NaCl and ZnS composition also form in this atmosphere, but their opacity is too small to explain the observed flatness of the transit spectrum. We find that metallicities in the range 2000-3000x solar provide atmospheric structures that are closest to the observations for haze mass fluxes in the range of (1-3)x1E-11 g cm-2 s-1. Correspondingly, the Bond albedo is within 10-20%. Moreover, sulfur photochemistry produces abundant OCS that has a detectable signature in the transit spectra and should be sought in future observations. Sulfur should also participate in the haze formation in this atmosphere; therefore, optical properties of such compounds are needed.",0 "Generating rationales that justify scoring decisions has been a promising way to facilitate explainability in automated scoring systems. However, existing methods do not match the accuracy of classifier-based methods. Moreover, the generated rationales often contain hallucinated information. To address these issues, we propose a novel framework capable of generating more faithful rationales and, more importantly, matching performance with classifier-based black-box scoring systems. We first mimic the human assessment process by querying Large Language Models (LLMs) to generate a thought tree. We then summarise intermediate assessment decisions from each thought tree path for creating synthetic rationale data and rationale preference data. Finally, we utilise the generated synthetic data to calibrate LLMs through a two-step training process: supervised fine-tuning and preference optimization. Extensive experimental results demonstrate that our framework achieves a 38% assessment performance improvement in the QWK score compared to prior work while producing higher-quality rationales, as recognised by human evaluators and LLMs. Our work sheds light on the effectiveness of performing preference optimization using synthetic preference data obtained from thought tree paths. Data and code are available at https://github.com/lijiazheng99/thought_tree_assessment.",0 "Metaphors are everywhere. They appear extensively across all domains of natural language, from the most sophisticated poetry to seemingly dry academic prose.
A significant body of research in the cognitive science of language argues for the existence of conceptual metaphors, the systematic structuring of one domain of experience in the language of another. Conceptual metaphors are not simply rhetorical flourishes but are crucial evidence of the role of analogical reasoning in human cognition. In this paper, we ask whether Large Language Models (LLMs) can accurately identify and explain the presence of such conceptual metaphors in natural language data. Using a novel prompting technique based on metaphor annotation guidelines, we demonstrate that LLMs are a promising tool for large-scale computational research on conceptual metaphors. Further, we show that LLMs are able to apply procedural guidelines designed for human annotators, displaying a surprising depth of linguistic knowledge.",2 "As machine learning models become increasingly complex, concerns about their robustness and trustworthiness have become more pressing. A critical vulnerability of these models is data poisoning attacks, where adversaries deliberately alter training data to degrade model performance. One particularly stealthy form of these attacks is subpopulation poisoning, which targets distinct subgroups within a dataset while leaving overall performance largely intact. The ability of these attacks to generalize within subpopulations poses a significant risk in real-world settings, as they can be exploited to harm marginalized or underrepresented groups within the dataset. In this work, we investigate how model complexity influences susceptibility to subpopulation poisoning attacks. We introduce a theoretical framework that explains how overparameterized models, due to their large capacity, can inadvertently memorize and misclassify targeted subpopulations. To validate our theory, we conduct extensive experiments on large-scale image and text datasets using popular model architectures. Our results show a clear trend: models with more parameters are significantly more vulnerable to subpopulation poisoning. Moreover, we find that attacks on smaller, human-interpretable subgroups often go undetected by these models. These results highlight the need to develop defenses that specifically address subpopulation vulnerabilities.",0 "Recent years have seen a growing interest in methods for predicting an unknown variable of interest, such as a subject's diagnosis, from medical images depicting its anatomical-functional effects. Methods based on discriminative modeling excel at making accurate predictions, but are challenged in their ability to explain their decisions in anatomically meaningful terms. In this paper, we propose a simple technique for single-subject prediction that is inherently interpretable. It augments the generative models used in classical human brain mapping techniques, in which the underlying cause-effect relations can be encoded, with a multivariate noise model that captures dominant spatial correlations. Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions, while at the same time offering intuitive visual explanations of its inner workings. The method is easy to use: training is fast for typical training set sizes, and only a single hyperparameter needs to be set by the user. 
Our code is available at https://github.com/chiara-mauri/Interpretable-subject-level-prediction.",0 "Developing new diagnostic models for psychiatric disorders based on the underlying biological mechanisms rather than subjective symptoms is an emerging consensus. Recently, machine learning-based classifiers using functional connectivity (FC) to distinguish psychiatric disorders from healthy controls have been developed to identify brain markers. However, existing machine learning-based diagnostic models are prone to over-fitting (due to insufficient training samples) and perform poorly in new test environments. Furthermore, it is difficult to obtain explainable and reliable brain biomarkers elucidating the underlying diagnostic decisions. These issues hinder their possible clinical applications. In this work, we propose BrainIB, a new graph neural network (GNN) framework to analyze functional magnetic resonance images (fMRI), by leveraging the famed Information Bottleneck (IB) principle. BrainIB is able to identify the most informative edges in the brain (i.e., subgraph) and generalizes well to unseen data. We evaluate the performance of BrainIB against 3 baselines and 7 state-of-the-art brain network classification methods on three psychiatric datasets and observe that BrainIB always achieves the highest diagnosis accuracy. It also discovers subgraph biomarkers that are consistent with clinical and neuroimaging findings. The source code and implementation details of BrainIB are freely available in the GitHub repository (https://github.com/SJYuCNEL/brain-and-Information-Bottleneck/).",0 "As the complexity of multi-robot systems grows to incorporate a greater number of robots, more complex tasks, and longer time horizons, the solutions to such problems often become too complex to be fully intelligible to human users. In this work, we introduce an approach for generating natural language explanations that justify the validity of the system's solution to the user, or else aid the user in correcting any errors that led to a suboptimal system solution. Toward this goal, we first contribute a generalizable formalism of contrastive explanations for multi-robot systems, and then introduce a holistic approach to generating contrastive explanations for multi-robot scenarios that selectively incorporates data from multi-robot task allocation, scheduling, and motion-planning to explain system behavior. Through user studies with human operators, we demonstrate that our integrated contrastive explanation approach leads to significant improvements in user ability to identify and solve system errors, leading to significant improvements in overall multi-robot team performance.",2 "Measures of voting power have been the subject of extensive research since the mid-1940s. More recently, similar measures of relative importance have been studied in other domains that include inconsistent knowledge bases, intensity of attacks in argumentation, different problems in the analysis of database management, and explainability. This paper demonstrates that all these examples are instantiations of computing measures of importance for a rather more general problem domain. The paper then shows that the best-known measures of importance can be computed for any reference set whenever one is given a monotonically increasing predicate that partitions the subsets of that reference set.
As a consequence, the paper also proves that measures of importance can be devised in several domains, for some of which such measures have not yet been studied nor proposed. Furthermore, the paper highlights several research directions related with computing measures of importance.",0 "This paper presents a novel hybrid algorithm designed to interpret natural human commands in tabletop scenarios. By integrating multiple sources of information, including speech, gestures, and scene context, the system extracts actionable instructions for a robot, identifying relevant objects and actions. The system operates in a zero-shot fashion, without reliance on predefined object models, enabling flexible and adaptive use in various environments. We assess the integration of multiple deep learning models, evaluating their suitability for deployment in real-world robotic setups. Our algorithm performs robustly across different tasks, combining language processing with visual grounding. In addition, we release a small dataset of video recordings used to evaluate the system. This dataset captures real-world interactions in which a human provides instructions in natural language to a robot, a contribution to future research on human-robot interaction. We discuss the strengths and limitations of the system, with particular focus on how it handles multimodal command interpretation, and its ability to be integrated into symbolic robotic frameworks for safe and explainable decision-making.",0 "Autonomous systems, like vehicles or robots, require reliable, accurate, fast, resource-efficient, scalable, and low-latency trajectory predictions to get initial knowledge about future locations and movements of surrounding objects for safe human-machine interaction. Furthermore, they need to know the uncertainty of the predictions for risk assessment to provide safe path planning. This paper presents a lightweight method to address these requirements, combining Long Short-Term Memory and Mixture Density Networks. Our method predicts probability distributions, including confidence level estimations for positional uncertainty to support subsequent risk management applications and runs on a low-power embedded platform. We discuss essential requirements for human trajectory prediction in autonomous vehicle applications and demonstrate our method's performance using multiple traffic-related datasets. Furthermore, we explain reliability and sharpness metrics and show how important they are to guarantee the correctness and robustness of a model's predictions and uncertainty assessments. These essential evaluations have so far received little attention for no good reason. Our approach focuses entirely on real-world applicability. Verifying prediction uncertainties and a model's reliability are central to autonomous real-world applications. Our framework and code are available at: https://github.com/kav-institute/mdn_trajectory_forecasting.",0 "Learning from Demonstration (LfD) is a powerful type of machine learning that can allow novices to teach and program robots to complete various tasks. However, the learning process for these systems may still be difficult for novices to interpret and understand, making effective teaching challenging. Explainable artificial intelligence (XAI) aims to address this challenge by explaining a system to the user. In this work, we investigate XAI within LfD by implementing an adaptive explanatory feedback system on an inverse reinforcement learning (IRL) algorithm. 
The feedback is implemented by demonstrating selected learnt trajectories to users. The system adapts to user teaching by categorizing and then selectively sampling the trajectories shown to a user, so as to present a representative sample of both successful and unsuccessful trajectories. The system was evaluated through a user study with 26 participants teaching a robot a navigation task. The results of the user study demonstrated that the proposed explanatory feedback system can improve robot performance, teaching efficiency and user understanding of the robot.",2 "The rapid proliferation of AI-manipulated or generated audio deepfakes poses serious challenges to media integrity and election security. Current AI-driven detection solutions lack explainability and underperform in real-world settings. In this paper, we introduce novel explainability methods for state-of-the-art transformer-based audio deepfake detectors and open-source a novel benchmark for real-world generalizability. By narrowing the explainability gap between transformer-based audio deepfake detectors and traditional methods, our results not only build trust with human experts, but also pave the way for unlocking the potential of citizen intelligence to overcome the scalability issue in audio deepfake detection.",0 "Multimodal emotion recognition in conversation (MERC) and multimodal emotion-cause pair extraction (MECPE) have recently garnered significant attention. Emotions are expressions of affect or feelings in response to specific events or situations -- known as emotion causes. Together, they explain the causality between human emotions and intents. However, existing works treat emotion recognition and emotion cause extraction as two individual problems, ignoring their natural causality. In this paper, we propose a Unified Multimodal Emotion recognition and Emotion-Cause analysis framework (UniMEEC) to explore the causality between emotion and emotion cause. Concretely, UniMEEC reformulates the MERC and MECPE tasks as mask prediction problems and unifies them with a causal prompt template. To differentiate the modal effects, UniMEEC proposes a multimodal causal prompt to probe the pre-trained knowledge specific to each modality and implements cross-task and cross-modality interactions under task-oriented settings. Experimental results on four public benchmark datasets verify the model's performance on the MERC and MECPE tasks, showing consistent improvements over the previous state-of-the-art methods.",0 "Metacognition has been recognized as an essential skill for academic success and for performance in solving problems. During learning or problem-solving, metacognitive skills facilitate a range of cognitive and affective processes, leading collectively to improved performance. This study explores the predictive potential of metacognition in the second introductory programming course. A two-dimensional model has been proposed, consisting of metacognitive awareness and metacognitive behavior. To evaluate the predictive capacity of metacognition empirically, an exploratory case study with 194 participants from two institutions was conducted in the second introductory programming course. A latent approach was employed to examine the associations between metacognition and performance in object-oriented programming. Our findings indicate that both metacognitive dimensions have a positive effect on programming.
Likewise, the results of the structural equation modeling show that 27% of the variance in programming performance is explained by metacognitive behavior. Based on these results, metacognition has the potential to be considered one of the important predictors of performance in introductory programming.",2 "Ensuring model explainability and robustness is essential for reliable deployment of deep vision systems. Current methods for evaluating robustness rely on collecting and annotating extensive test sets. While this is common practice, the process is labor-intensive and expensive, with no guarantee of sufficient coverage across attributes of interest. Recently, model diagnosis frameworks have emerged that leverage user inputs (e.g., text) to assess the vulnerability of the model. However, such dependence on humans can introduce bias and limitations, given the domain knowledge of particular users. This paper proposes Unsupervised Model Diagnosis (UMO), which leverages generative models to produce semantic counterfactual explanations without any user guidance. Given a differentiable computer vision model (i.e., the target model), UMO optimizes for the most counterfactual directions in a generative latent space. Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources, such as dictionaries or language models. We validate the framework on multiple vision tasks (e.g., classification, segmentation, keypoint detection). Extensive experiments show that our unsupervised discovery of semantic directions can correctly highlight spurious correlations and visualize the failure modes of target models without any human intervention.",0 "Interleaved text-and-image generation has been an intriguing research direction, where the models are required to generate both images and text pieces in an arbitrary order. Despite the emerging advancements in interleaved generation, the progress in its evaluation still significantly lags behind. Existing evaluation benchmarks do not support arbitrarily interleaved images and text for both inputs and outputs, and they only cover a limited number of domains and use cases. Also, current works predominantly use similarity-based metrics which fall short in assessing the quality in open-ended scenarios. To this end, we introduce InterleavedBench, the first benchmark carefully curated for the evaluation of interleaved text-and-image generation. InterleavedBench features a rich array of tasks to cover diverse real-world use cases. In addition, we present InterleavedEval, a strong reference-free metric powered by GPT-4o to deliver accurate and explainable evaluation. We carefully define five essential evaluation aspects for InterleavedEval, including text quality, perceptual quality, image coherence, text-image coherence, and helpfulness, to ensure a comprehensive and fine-grained assessment. Through extensive experiments and rigorous human evaluation, we show that our benchmark and metric can effectively evaluate the existing models with a strong correlation with human judgments, surpassing previous reference-based metrics. We also provide substantial findings and insights to foster future research in interleaved generation and its evaluation.",2 "Human trajectory anomaly detection has become increasingly important across a wide range of applications, including security surveillance and public health.
However, existing trajectory anomaly detection methods are primarily focused on vehicle-level traffic, while human-level trajectory anomaly detection remains under-explored. Since human trajectory data is often very sparse, machine learning methods have become the preferred approach for identifying complex patterns. However, concerns regarding potential biases and the robustness of these models have intensified the demand for more transparent and explainable alternatives. In response to these challenges, our research focuses on developing a lightweight anomaly detection model specifically designed to detect anomalies in human trajectories. We propose a Neural Collaborative Filtering approach to model and predict normal mobility. Our method is designed to model users' daily patterns of life without requiring prior knowledge, thereby enhancing performance in scenarios where data is sparse or incomplete, such as in cold start situations. Our algorithm consists of two main modules. The first is the collaborative filtering module, which applies collaborative filtering to model normal mobility of individual humans to places of interest. The second is the neural module, responsible for interpreting the complex spatio-temporal relationships inherent in human trajectory data. To validate our approach, we conducted extensive experiments using simulated and real-world datasets comparing to numerous state-of-the-art trajectory anomaly detection approaches.",0 "Chat LLMs such as GPT-3.5-turbo and GPT-4 have shown promise in assisting humans in coding, particularly by enabling them to conversationally provide feedback. However, current approaches assume users have expert debugging skills, limiting accessibility for non-professional programmers. In this paper, we first explore Chat LLMs' limitations in assisting non-professional programmers with coding. Through a formative study, we identify two key elements affecting their experience: the way a Chat LLM explains its generated code and the structure of human-LLM interaction. We then propose IntelliExplain, a new conversational code generation framework with enhanced code explanations and a structured interaction paradigm, which enforces both better code understanding and a more effective feedback loop. In two programming tasks (SQL and Python), IntelliExplain yields significantly higher success rates and reduces task time compared to the vanilla Chat LLM. We also identify several opportunities that remain in effectively offering a chat-based programming experience for non-professional programmers.",0 "Explaining Artificial Intelligence (AI) decisions is a major challenge nowadays in AI, in particular when applied to sensitive scenarios like medicine and law. However, the need to explain the rationale behind decisions is a main issue also for human-based deliberation as it is important to justify \textit{why} a certain decision has been taken. Resident medical doctors for instance are required not only to provide a (possibly correct) diagnosis, but also to explain how they reached a certain conclusion. Developing new tools to aid residents to train their explanation skills is therefore a central objective of AI in education. In this paper, we follow this direction, and we present, to the best of our knowledge, the first multilingual dataset for Medical Question Answering where correct and incorrect diagnoses for a clinical case are enriched with a natural language explanation written by doctors. 
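A purely illustrative sketch of the Neural Collaborative Filtering idea mentioned in the trajectory-anomaly abstract above: user and place-of-interest embeddings scored by a small MLP. All names, sizes, and the training suggestion are assumptions made for illustration, not that paper's implementation (which additionally uses a neural module for spatio-temporal context).

```python
import torch
import torch.nn as nn

class NCFVisitScorer(nn.Module):
    """Hypothetical NCF-style scorer: how 'normal' is it for this user to
    visit this place? Low scores on observed visits can be flagged as anomalous."""
    def __init__(self, n_users, n_places, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.place_emb = nn.Embedding(n_places, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, user_ids, place_ids):
        x = torch.cat([self.user_emb(user_ids), self.place_emb(place_ids)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # probability of a "normal" visit

# Toy usage; in practice one would train with binary cross-entropy on
# observed visits vs. sampled negative (user, place) pairs.
model = NCFVisitScorer(n_users=100, n_places=500)
print(model(torch.tensor([3, 7]), torch.tensor([42, 17])))
```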
These explanations have been manually annotated with argument components (i.e., premise, claim) and argument relations (i.e., attack, support), resulting in the Multilingual CasiMedicos-Arg dataset, which consists of 558 clinical cases in four languages (English, Spanish, French, Italian) with explanations, where we annotated 5021 claims, 2313 premises, 2431 support relations, and 1106 attack relations. We conclude by showing how competitive baselines perform over this challenging dataset for the argument mining task.",0 "Neuropsychology of artificial intelligence focuses on synthetic neural cognition as a new type of study object within cognitive psychology. With the goal of making artificial neural networks of language models more explainable, this approach involves transposing concepts from cognitive psychology to the interpretive construction of artificial neural cognition. The human cognitive concept involved here is categorization, serving as a heuristic for thinking about the process of segmentation and construction of reality carried out by the neural vectors of synthetic cognition.",0 "Fine-tuning large language models (LLMs) on instruction datasets is a common way to improve their generative capabilities. However, instruction datasets can be expensive and time-consuming to manually curate, and while LLM-generated data is less labor-intensive, it may violate user privacy agreements or terms of service of LLM providers. Therefore, we seek a way of constructing instruction datasets with samples that are not generated by humans or LLMs but still improve LLM generative capabilities. In this work, we introduce Cookbook, a framework that programmatically generates training data consisting of simple patterns over random tokens, resulting in a scalable, cost-effective approach that avoids legal and privacy issues. First, Cookbook uses a template -- a data-generating Python function -- to produce training data that encourages the model to learn an explicit pattern-based rule that corresponds to a desired task. We find that fine-tuning on Cookbook-generated data is able to improve performance on its corresponding task by up to 52.7 accuracy points. Second, since instruction datasets improve performance on multiple downstream tasks simultaneously, Cookbook algorithmically learns how to mix data from various templates to optimize performance on multiple tasks. On the standard multi-task GPT4ALL evaluation suite, Mistral-7B fine-tuned using a Cookbook-generated dataset attains the best accuracy on average compared to other 7B parameter instruction-tuned models and is the best-performing model on 3 out of 8 tasks. Finally, we analyze when and why Cookbook improves performance and present a metric that allows us to verify that the improvement is largely explained by the model's generations adhering better to template rules.",0 "Language Models (LMs) are being proposed for mental health applications where the heightened risk of adverse outcomes means predictive performance may not be a sufficient litmus test of a model's utility in clinical practice. A model that can be trusted for practice should have a correspondence between explanation and clinical determination, yet no prior research has examined the attention fidelity of these models and their effect on ground truth explanations. We introduce an evaluation design that focuses on the robustness and explainability of LMs in identifying Wellness Dimensions (WDs).
We focus on two existing mental health and well-being datasets: (a) MultiWD, for multi-label classification, and (b) WellXplain, for evaluating attention-mechanism veracity against expert-labeled explanations. The labels are based on Halbert Dunn's theory of wellness, which gives grounding to our evaluation. We reveal four surprising results about LMs/LLMs: (1) Despite their human-like capabilities, GPT-3.5/4 lag behind RoBERTa, and MedAlpaca, an LLM fine-tuned on WellXplain, fails to deliver any remarkable improvements in performance or explanations. (2) Re-examining LMs' predictions based on a confidence-oriented loss function reveals a significant performance drop. (3) Across all LMs/LLMs, the alignment between attention and explanations remains low, with LLMs scoring a dismal 0.0. (4) Most mental health-specific LMs/LLMs overlook domain-specific knowledge and undervalue explanations, causing these discrepancies. This study highlights the need for further research into their consistency and explanations in mental health and well-being.",0 "Medical Visual Question Answering (MedVQA), which offers language responses to image-based medical inquiries, represents a challenging task and a significant advancement in healthcare. It assists medical experts in swiftly interpreting medical images, thereby enabling faster and more accurate diagnoses. However, the model interpretability and transparency of existing MedVQA solutions are often limited, posing challenges in understanding their decision-making processes. To address this issue, we devise a semi-automated annotation process to streamline data preparation and build new benchmark MedVQA datasets R-RAD, R-SLAKE and R-Path. These datasets provide intermediate medical decision-making rationales generated by multimodal large language models and human annotations for question-answering pairs in existing MedVQA datasets, i.e., VQA-RAD, SLAKE and PathVQA. Moreover, we design a novel framework, MedThink, which finetunes lightweight pretrained generative models by incorporating medical decision-making rationales. MedThink includes three distinct strategies to generate decision outcomes and corresponding rationales, thereby clearly showcasing the medical decision-making process during reasoning. Our comprehensive experiments show that our method achieves an accuracy of 83.5% on R-RAD, 86.3% on R-SLAKE and 87.2% on R-Path. These results significantly exceed those of existing state-of-the-art models with comparable parameters. Datasets and code will be released.",0 "Artificial intelligence (AI) has become tightly integrated into modern technology, yet existing exploratory visualizations for explainable AI (XAI) are primarily designed for users with technical expertise. This leaves everyday users, who also regularly interact with AI systems, with limited resources to explore or understand AI technologies they use. We propose a novel framework that enables non-technical users to collect insights by conversing directly with visualization elements via LLM-powered narrative gamifications. We implemented a prototype that utilizes such gamification to facilitate non-technical users' exploration of AI embedding projections. We conducted a comparative study with 10 participants to assess our prototype quantitatively and qualitatively.
Our study results indicate that although our prototype effectively enhances non-technical users' AI/XAI knowledge, and users believe they learn more through the gamification feature, it remains inconclusive whether the gamification itself leads to further improvements in understanding. In addition, opinions among participants regarding the framework's engagement are mixed: some believe it enhances their exploration of the visualizations, while others feel it disrupts their workflow.",2 "Even though a few initial works have shown on small sets of data some level of bias in the performance of fingerprint recognition technology with respect to certain demographic groups, there is still not sufficient evidence to understand the impact that certain factors such as gender, age or finger-type may have on fingerprint quality and, in turn, also on fingerprint matching accuracy. The present work addresses this still under researched topic, on a large-scale database of operational data containing 10-print impressions of almost 16,000 subjects. The results reached provide further insight into the dependency of fingerprint quality and demographics, and show that there in fact exists a certain degree of performance variability in fingerprint-based recognition systems for different segments of the population. Based on the experimental evaluation, the work points out new observations based on data-driven evidence, provides plausible hypotheses to explain such observations, and concludes with potential follow-up actions that can help to reduce the observed fingerprint quality differences. This way, the current paper can be considered as a contribution to further increase the algorithmic fairness and equality of biometric technology.",0 "We investigate two one-dimensional tight-binding models with disorder that have extended states at zero energy. We use exact and partial diagonalisation of the Hamiltonian to obtain the eigenmodes and the associated participation ratios, and the transfer-matrix method to determine the localisation length. The first model has no on-site disorder, but random couplings. While the participation ratio remains finite at zero energy, the localisation length diverges logarithmically as the energy goes to zero. We provide an intuitive derivation of this logarithmic divergence based on the weak coupling of the two sublattices. The second model has a conserved quantity as the row sums of the Hamiltonian are zero. This model can be represented as a harmonic chain with random couplings, or as a diffusion model on a lattice with random links. We find, in agreement with existing analytical calculations, that the number of system-spanning eigenmodes increases proportionally to the square root of the system size, and we related this power law to other power laws that characterise the scaling behaviour of the eigenmodes, the participation ratio, the localisation length, and their dependence on energy and system size. When disorder is so strong that the smallest hopping terms can be arbitrarily close to zero, all these power laws change, and we show a crossover between the two scaling regimes. All these results are explained by intuitive arguments based on scaling.",0 "Humans cluster in social groups where they discuss their shared past, problems, and potential solutions; they learn collectively when they repeat activities; they establish social norms; they synchronize when they sing or dance together; and they bond through social cohesion. 
A group is more cohesive if its members are closer together in their network and are bonded by multiple connections. Network proximity and redundancy are indicated by the second-smallest eigenvalue of the Laplacian matrix of the group network, called the algebraic connectivity. This eigenvalue is key to explaining and predicting the outcomes of said activities.",0 "Diabetic retinopathy is a common complication of diabetes, and monitoring the progression of retinal abnormalities using fundus imaging is crucial. Because the images must be interpreted by a medical expert, it is infeasible to screen all individuals with diabetes for diabetic retinopathy. Deep learning has shown impressive results for automatic analysis and grading of fundus images. One drawback is, however, the lack of interpretability, which hampers the implementation of such systems in the clinic. Explainable artificial intelligence methods can be applied to explain the deep neural networks. Explanations based on concepts have been shown to be intuitive for humans to understand, but have not yet been explored in detail for diabetic retinopathy grading. This work investigates and compares two concept-based explanation techniques for explaining deep neural networks developed for automatic diagnosis of diabetic retinopathy: Quantitative Testing with Concept Activation Vectors and Concept Bottleneck Models. We found that both methods have strengths and weaknesses, and the choice of method should take the available data and the end user's preferences into account.",0 "In this work, we study in-context teaching (ICT), where a teacher provides in-context example rationales to teach a student to reason over unseen cases. Human teachers are usually required to craft in-context demonstrations, which are costly and have high variance. We ask whether a large language model (LLM) can serve as a more effective in-context teacher for itself or other LLMs, compared to humans. Inspired by the Encoding Specificity Hypothesis from human episodic memory, we hypothesize that in-context exemplars crafted by the teacher should match the training data of the student. This hypothesis motivates us to propose Self-Explain, where an LLM's self-elicited explanations are used as in-context demonstrations for prompting it, since they are generalized from the model's training examples. Self-Explain is shown to significantly outperform using human-crafted exemplars and other baselines. Furthermore, we reveal that for ICT, rationales from different teacher LLMs or human experts that more resemble the student LLM's self-explanations are better in-context demonstrations. This supports our encoding specificity hypothesis. We then propose Teach-Back, which aligns a teacher LLM with the student to enhance the ICT performance. For example, Teach-Back enables a 7B model to teach the much larger GPT-3.5 in context, surpassing human teachers by around 5% in test accuracy on medical question answering.",0 "With the growing popularity of general-purpose Large Language Models (LLMs) comes a need for more global explanations of model behaviors. Concept-based explanations arise as a promising avenue for explaining high-level patterns learned by LLMs. Yet their evaluation poses unique challenges, especially due to their non-local nature and high-dimensional representation in a model's hidden space. Current methods approach concepts from different perspectives, lacking a unified formalization.
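An illustrative numeric sketch of the algebraic connectivity used in the group-cohesion abstract above (the second-smallest eigenvalue of the graph Laplacian); the toy adjacency matrix is invented purely for illustration.

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigenvalues = np.linalg.eigvalsh(L)  # ascending order for symmetric matrices
    return eigenvalues[1]                # eigenvalues[0] is always ~0

# Invented 5-person group: a tight triangle (0,1,2) weakly attached to a pair (3,4).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])
print(algebraic_connectivity(A))  # larger values indicate a more cohesive group
```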
This makes evaluating the core measures of concepts, namely faithfulness or readability, challenging. To bridge the gap, we introduce a formal definition of concepts generalizing to diverse concept-based explanations' settings. Based on this, we quantify the faithfulness of a concept explanation via perturbation. We ensure adequate perturbation in the high-dimensional space for different concepts via an optimization problem. Readability is approximated via an automatic and deterministic measure, quantifying the coherence of patterns that maximally activate a concept while aligning with human understanding. Finally, based on measurement theory, we apply a meta-evaluation method for evaluating these measures, generalizable to other types of explanations or tasks as well. Extensive experimental analysis has been conducted to inform the selection of explanation evaluation measures.",0 "Data-driven storytelling is a powerful method for conveying insights by combining narrative techniques with visualizations and text. These stories integrate visual aids, such as highlighted bars and lines in charts, along with textual annotations explaining insights. However, creating such stories requires a deep understanding of the data and meticulous narrative planning, often necessitating human intervention, which can be time-consuming and mentally taxing. While Large Language Models (LLMs) excel in various NLP tasks, their ability to generate coherent and comprehensive data stories remains underexplored. In this work, we introduce a novel task for data story generation and a benchmark containing 1,449 stories from diverse sources. To address the challenges of crafting coherent data stories, we propose a multiagent framework employing two LLM agents designed to replicate the human storytelling process: one for understanding and describing the data (Reflection), generating the outline, and narration, and another for verification at each intermediary step. While our agentic framework generally outperforms non-agentic counterparts in both model-based and human evaluations, the results also reveal unique challenges in data story generation.",0 "For automatic human figure drawing (HFD) assessment tasks, such as diagnosing autism spectrum disorder (ASD) using HFD images, the clarity and explainability of a model decision are crucial. Existing pixel-level attribution-based explainable AI (XAI) approaches demand considerable effort from users to interpret the semantic information of a region in an image, which can be often time-consuming and impractical. To overcome this challenge, we propose a part contribution evaluation based model explanation (PCEvE) framework. On top of the part detection, we measure the Shapley Value of each individual part to evaluate the contribution to a model decision. Unlike existing attribution-based XAI approaches, the PCEvE provides a straightforward explanation of a model decision, i.e., a part contribution histogram. Furthermore, the PCEvE expands the scope of explanations beyond the conventional sample-level to include class-level and task-level insights, offering a richer, more comprehensive understanding of model behavior. We rigorously validate the PCEvE via extensive experiments on multiple HFD assessment datasets. Also, we sanity-check the proposed method with a set of controlled experiments. 
Additionally, we demonstrate the versatility and applicability of our method to other domains by applying it to a photo-realistic dataset, Stanford Cars.",0 "Large language models (LLMs) offer significant potential as tools to support an expanding range of decision-making tasks. Given their training on human-created data, LLMs have been shown to inherit societal biases against protected groups, as well as be subject to bias functionally resembling cognitive bias. Human-like bias can impede fair and explainable decisions made with LLM assistance. Our work introduces BiasBuster, a framework designed to uncover, evaluate, and mitigate cognitive bias in LLMs, particularly in high-stakes decision-making tasks. Inspired by prior research in psychology and cognitive science, we develop a dataset containing 13,465 prompts to evaluate LLM decisions on different cognitive biases (e.g., prompt-induced, sequential, inherent). We test various bias mitigation strategies, while proposing a novel method utilizing LLMs to debias their own human-like cognitive bias within prompts. Our analysis provides a comprehensive picture of the presence and effects of cognitive bias across commercial and open-source models. We demonstrate that our self-help debiasing effectively mitigates model answers that display patterns akin to human cognitive bias without having to manually craft examples for each bias.",0 "Case law is instrumental in shaping our understanding of human rights, including the right to adequate housing. The HUDOC database provides access to the textual content of case law from the European Court of Human Rights (ECtHR), along with some metadata. While this metadata includes valuable information, such as the application number and the articles addressed in a case, it often lacks detailed substantive insights, such as the specific issues a case covers. This underscores the need for detailed analysis to extract such information. However, given the size of the database - containing over 40,000 cases - an automated solution is essential. In this study, we focus on the right to adequate housing and aim to build models to detect cases related to housing and eviction issues. Our experiments show that the resulting models not only provide performance comparable to more sophisticated approaches but are also interpretable, offering explanations for their decisions by highlighting the most influential words. The application of these models led to the identification of new cases that were initially overlooked during data collection. This suggests that NLP approaches can be effectively applied to categorise case law based on the specific issues they address.",0 "The cause of the speed-accuracy tradeoff (typically quantified via Fitts' Law) is a debated topic of interest in motor neuroscience, and is commonly studied using tools from control theory. Two prominent theories involve the presence of signal-dependent motor noise and planning variability -- these factors are generally incorporated separately. In this work, we study how well the simultaneous presence of both factors explains the speed-accuracy tradeoff. A human arm reaching model is developed with bio-realistic signal-dependent motor noise, and a Gaussian noise model is used to deterministically approximate the motor noise. Both offline trajectory optimization and online model predictive control are used to simulate the planning and execution of several different reaching tasks with varying target sizes and movement durations.
These reaching trajectories are then compared to experimental human reaching data, revealing that both models produce behavior consistent with humans, and the speed-accuracy tradeoff is present in both online and offline control. These results suggest the speed-accuracy tradeoff is likely caused by a combination of these two factors, and also that it plays a role in both offline and online computation.",0 "The objective of the image inpainting task is to fill missing regions of an image in a visually plausible way. Recently, deep-learning-based image inpainting networks have generated outstanding results, and some utilize their models as object removers by masking unwanted objects in an image. However, while trying to better remove objects using their networks, the previous works pay less attention to the importance of the input mask. In this paper, we focus on generating the input mask to better remove objects using the off-the-shelf image inpainting network. We propose an automatic mask generator inspired by the explainable AI (XAI) method, whose output can better remove objects than a semantic segmentation mask. The proposed method generates an importance map using randomly sampled input masks and quantitatively estimated scores of the completed images obtained from the random masks. The output mask is selected by a judge module among the candidate masks which are generated from the importance map. We design the judge module to quantitatively estimate the quality of the object removal results. In addition, we empirically find that the evaluation methods used in the previous works reporting object removal results are not appropriate for estimating the performance of an object remover. Therefore, we propose new evaluation metrics (FID$^*$ and U-IDS$^*$) to properly evaluate the quality of object removers. Experiments confirm that our method shows better performance in removing target class objects than the masks generated from the semantic segmentation maps, and the two proposed metrics make judgments consistent with humans.",0 "Recent progress in generative models has stimulated significant innovations in many fields, such as image generation and chatbots. Despite their success, these models often produce sketchy and misleading solutions for complex multi-agent decision-making problems because they miss the trial-and-error experience and reasoning as humans. To address this limitation, we explore a paradigm that integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to enhance the generated answer. The simulator is a world model that separately learns dynamics and reward, where the dynamics model comprises an image tokenizer as well as a causal transformer to generate interaction transitions autoregressively, and the reward model is a bidirectional transformer learned by maximizing the likelihood of trajectories in the expert demonstrations under language guidance. Given an image of the current state and the task description, we use the world model to train the joint policy and produce the image sequence as the answer by running the converged policy on the dynamics model. The empirical results demonstrate that this framework can improve the answers for multi-agent decision-making problems by showing superior performance on the training and unseen tasks of the StarCraft Multi-Agent Challenge benchmark. 
In particular, it can generate consistent interaction sequences and explainable reward functions at interaction states, opening the path for training generative models of the future.",0 "Will a Visual Language Model (VLM)-based bot warn us about slipping if it detects a wet floor? Recent VLMs have demonstrated impressive capabilities, yet their ability to infer outcomes and causes remains underexplored. To address this, we introduce NL-Eye, a benchmark designed to assess VLMs' visual abductive reasoning skills. NL-Eye adapts the abductive Natural Language Inference (NLI) task to the visual domain, requiring models to evaluate the plausibility of hypothesis images based on a premise image and explain their decisions. NL-Eye consists of 350 carefully curated triplet examples (1,050 images) spanning diverse reasoning categories: physical, functional, logical, emotional, cultural, and social. The data curation process involved two steps - writing textual descriptions and generating images using text-to-image models, both requiring substantial human involvement to ensure high-quality and challenging scenes. Our experiments show that VLMs struggle significantly on NL-Eye, often performing at random baseline levels, while humans excel in both plausibility prediction and explanation quality. This demonstrates a deficiency in the abductive reasoning capabilities of modern VLMs. NL-Eye represents a crucial step toward developing VLMs capable of robust multimodal reasoning for real-world applications, including accident-prevention bots and generated video verification.",0 "With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase. Notably, recent advancements in multi-round self-improving methods allow LLMs to generate new examples for training subsequent models. At the same time, multi-agent LLM systems, involving automated interactions among agents, are also increasing in prominence. Thus, in both short and long terms, LLMs may actively engage in an evolutionary process. We draw parallels between the behavior of LLMs and the evolution of human culture, as the latter has been extensively studied by cognitive scientists for decades. Our approach involves leveraging Iterated Learning (IL), a Bayesian framework that elucidates how subtle biases are magnified during human cultural evolution, to explain some behaviors of LLMs. This paper outlines key characteristics of agents' behavior in the Bayesian-IL framework, including predictions that are supported by experimental verification with various LLMs. This theoretical framework could help to more effectively predict and guide the evolution of LLMs in desired directions.",0 "Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code. But sometimes, an even better explanation is a bug in the programmer's mental model of the language or API they are using. Instead of merely debugging our current code (""giving the programmer a fish""), what if our tools could directly debug our mental models (""teaching the programmer to fish"")? In this paper, we apply recent ideas from computational cognitive science to offer a principled framework for doing exactly that. Given a ""why?"" question about a program, we automatically infer potential misconceptions about the language/API that might cause the user to be surprised by the program's behavior -- and then analyze those misconceptions to provide explanations of the program's behavior. 
Our key idea is to formally represent misconceptions as counterfactual (erroneous) semantics for the language/API, which can be inferred and debugged using program synthesis techniques. We demonstrate our framework, WatChat, by building systems for explanation in two domains: JavaScript type coercion, and the Git version control system. We evaluate WatChatJS and WatChatGit by comparing their outputs to experimentally-collected human-written explanations in these two domains: we show that WatChat's explanations exhibit key features of human-written explanation, unlike those of a state-of-the-art language model.",0 "We propose an impossible trinity of human space usage between home, workplace, and amenity in this paper to explain mobility pattern changes and shifts in demand for space during COVID-19. We developed detailed time usage and location visit profiles for 60,131 people in England and Wales by analyzing about 120 million cell phone location and timestamp records on March 2020 and March 2021. We found that both at-home time and amenity visits increased during COVID-19, while workplace visits decreased. Individual visits to different locations are determined by three key factors: individual preference measured by pre-pandemic location visit frequency, time constraints influenced by work-from-home, and space accessibility. We also find that WFH improves equality of individual amenity usage between people of different incomes. Low-income and middle-income people saw an 8% and 4% increase in additional amenity visits, respectively, compared to high-income people during the pandemic.",0 "In-falling cosmic dust has left evidence of meteoritic polymer amide in stromatolites, both fossil and modern. In search of evidence for continued present day in-fall sea foam was collected from two beaches in Rhode Island and subjected to Folch extraction to concentrate amphiphilic components in a chloroform water-methanol interphase layer. Hemoglycin polymer amide molecules previously characterized by MALDI mass spectrometry in meteorites and stromatolites were identified in sea foam either directly, or via their fragmentation patterns. Residual isotope enrichment pointed to an extra-terrestrial origin. The unique resiliency of sea foam may be due to the formation of extended hemoglycin lattices that stabilize its closed-cell structure and its lightness can potentially be explained by photolytic hydrogen production.",0 "The explainability of a robot's actions is crucial to its acceptance in social spaces. Explaining why a robot fails to complete a given task is particularly important for non-expert users to be aware of the robot's capabilities and limitations. So far, research on explaining robot failures has only considered generating textual explanations, even though several studies have shown the benefits of multimodal ones. However, a simple combination of multiple modalities may lead to semantic incoherence between the information across different modalities - a problem that is not well-studied. An incoherent multimodal explanation can be difficult to understand, and it may even become inconsistent with what the robot and the human observe and how they perform reasoning with the observations. Such inconsistencies may lead to wrong conclusions about the robot's capabilities. In this paper, we introduce an approach to generate coherent multimodal explanations by checking the logical coherence of explanations from different modalities, followed by refinements as required. 
We propose a classification approach for coherence assessment, where we evaluate if an explanation logically follows another. Our experiments suggest that fine-tuning a neural network that was pre-trained to recognize textual entailment, performs well for coherence assessment of multimodal explanations. Code & data: https://pradippramanick.github.io/coherent-explain/.",0 "Explainable Artificial Intelligence (XAI) plays a crucial role in fostering transparency and trust in AI systems, where traditional XAI approaches typically offer one level of abstraction for explanations, often in the form of heatmaps highlighting single or multiple input features. However, we ask whether abstract reasoning or problem-solving strategies of a model may also be relevant, as these align more closely with how humans approach solutions to problems. We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features, thereby capturing the abstract reasoning behind a model's predictions. The methodology is built upon a simple yet general multi-order decomposition of model predictions. This decomposition can be specified using higher-order propagation-based relevance methods, such as GNN-LRP, or perturbation-based explanation methods commonly used in XAI. The effectiveness of our framework is demonstrated in the domains of natural language processing (NLP), vision, and quantum chemistry (QC), where abstract symbolic domain knowledge is abundant and of significant interest to users. The Symbolic XAI framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable through logical formulas.",0 "Being able to recognise defects in industrial objects is a key element of quality assurance in production lines. Our research focuses on visual anomaly detection in RGB images. Although Convolutional Neural Networks (CNNs) achieve high accuracies in this task, end users in industrial environments receive the model's decisions without additional explanations. Therefore, it is of interest to enrich the model's outputs with further explanations to increase confidence in the model and speed up anomaly detection. In our work, we focus on (1) CNN-based classification models and (2) the further development of a model-agnostic explanation algorithm for black-box classifiers. Additionally, (3) we demonstrate how we can establish an interactive interface that allows users to further correct the model's output. We present our NearCAIPI Interaction Framework, which improves AI through user interaction, and show how this approach increases the system's trustworthiness. We also illustrate how NearCAIPI can integrate human feedback into an interactive process chain.",0 "The opacity of AI models necessitates both validation and evaluation before their integration into services. To investigate these models, explainable AI (XAI) employs methods that elucidate the relationship between input features and output predictions. The operations of XAI extend beyond the execution of a single algorithm, involving a series of activities that include preprocessing data, adjusting XAI to align with model parameters, invoking the model to generate predictions, and summarizing the XAI results. Adversarial attacks are well-known threats that aim to mislead AI models. 
The assessment complexity, especially for XAI, increases when open-source AI models are subject to adversarial attacks, due to various combinations. To automate the numerous entities and tasks involved in XAI-based assessments, we propose a cloud-based service framework that encapsulates computing components as microservices and organizes assessment tasks into pipelines. The current XAI tools are not inherently service-oriented. This framework also integrates open XAI tool libraries as part of the pipeline composition. We demonstrate the application of XAI services for assessing five quality attributes of AI models: (1) computational cost, (2) performance, (3) robustness, (4) explanation deviation, and (5) explanation resilience across computer vision and tabular cases. The service framework generates aggregated analysis that showcases the quality attributes for more than a hundred combination scenarios.",0 "Neurological disorders that affect speech production, such as Alzheimer's Disease (AD), significantly impact the lives of both patients and caregivers, whether through social, psycho-emotional effects or other aspects not yet fully understood. Recent advancements in Large Language Model (LLM) architectures have developed many tools to identify representative features of neurological disorders through spontaneous speech. However, LLMs typically lack interpretability, meaning they do not provide clear and specific reasons for their decisions. Therefore, there is a need for methods capable of identifying the representative features of neurological disorders in speech and explaining clearly why these features are relevant. This paper presents an explainable LLM method, named SLIME (Statistical and Linguistic Insights for Model Explanation), capable of identifying lexical components representative of AD and indicating which components are most important for the LLM's decision. In developing this method, we used an English-language dataset consisting of transcriptions from the Cookie Theft picture description task. The LLM Bidirectional Encoder Representations from Transformers (BERT) classified the textual descriptions as either AD or control groups. To identify representative lexical features and determine which are most relevant to the model's decision, we used a pipeline involving Integrated Gradients (IG), Linguistic Inquiry and Word Count (LIWC), and statistical analysis. Our method demonstrates that BERT leverages lexical components that reflect a reduction in social references in AD and identifies which further improve the LLM's accuracy. Thus, we provide an explainability tool that enhances confidence in applying LLMs to neurological clinical contexts, particularly in the study of neurodegeneration.",2 "Black-hole (BH) high-mass X-ray binary (HMXB) systems are likely to be the progenitors of BH-BH mergers detected by LIGO/Virgo/KAGRA (LVK). Yet merging BHs reach higher masses ($\sim 100M_{\odot}$) than BHs in HMXBs ($\sim 20 M_{\odot}$) and exhibit lower spins ($a_{\rm BH}\lesssim 0.25$ with a larger values tail) than what is often claimed for BHs in HMXBs ($a_{\rm BH}\gtrsim 0.9$). This could suggest that these two classes of systems belong to different populations, but here we show that this may not necessarily be the case. The difference in masses is easily explained as the known HMXB-BHs are in galaxies with relatively high metallicity, so their progenitor stars are subject to strong mass loss from winds, leading to relatively low-mass BH at core collapse. 
Conversely, LVK is also able to detect BHs from low-metallicity galaxies that produce more massive stellar-origin BHs. The difference in spin is more difficult to explain. Models with efficient angular momentum transport in stellar interiors produce slowly spinning progenitors for both LVK and HMXB BHs. Known HMXBs have orbital periods that are too long for tidal spin-up and are unlikely to have undergone significant accretion spin-up. Instead, we show that the derived value of the BH spin depends strongly on how the HMXB accretion disc emission is modelled. We argue that since Cyg X-1 is never observed in a soft state, the appropriate spectral models must take into account the Comptonisation of the disc photosphere. We show that such models are consistent with low spin values, namely: $a_{\rm BH}\sim 0.1$. This was confirmed by other teams for both Cyg X-1 and LMC X-1 and we show this is also the case for M33 X-7. We conclude that all HMXB BHs can exhibit low spins, in accordance with stellar evolution models. Hence, the observations are consistent with the LVK BHs and HMXB BHs belonging to the same population.",0 "Despite the discovery of multiple intrinsic magnetic topological insulators in recent years the observation of Chern insulators is still restricted to very low temperatures due to the negligible charge gaps. Here, we uncover the potential of heavy transition-metal compounds for realizing a collinear antiferromagnetic Chern insulator (AFCI) with a charge gap as large as 300 meV. Our analysis relies on the Kane-Mele-Kondo model with a ferromagnetic Hund coupling $J_{\rm H}$ between the spins of itinerant electrons and the localized spins of size $S$. We show that a spin-orbit coupling $\lambda_{\rm SO} \gtrsim 0.03t$, where $t$ is the nearest-neighbor hopping element, is already large enough to stabilize an AFCI provided the alternating sublattice potential $\delta$ is in the range $\delta \approx SJ_{\rm H}$. We establish a remarkable increase in the charge gap upon increasing $\lambda_{\rm SO}$ in the AFCI phase. Using our results we explain the collinear AFCI recently found in monolayers of CrO and MoO with charge gaps of 1 and $50$ meV, respectively. In addition, we propose bilayers of heavy transition-metal oxides of perovskite structure as candidates to realize a room-temperature AFCI if grown along the $[111]$ direction and subjected to a perpendicular electric field.",0 "Natural language and visualization are two complementary modalities of human communication that play a crucial role in conveying information effectively. While visualizations help people discover trends, patterns, and anomalies in data, natural language descriptions help explain these insights. Thus, combining text with visualizations is a prevalent technique for effectively delivering the core message of the data. Given the rise of natural language generation (NLG), there is a growing interest in automatically creating natural language descriptions for visualizations, which can be used as chart captions, answering questions about charts, or telling data-driven stories. In this survey, we systematically review the state of the art on NLG for visualizations and introduce a taxonomy of the problem. The NLG tasks fall within the domain of Natural Language Interfaces (NLI) for visualization, an area that has garnered significant attention from both the research community and industry. 
To narrow down the scope of the survey, we primarily concentrate on the research works that focus on text generation for visualizations. To characterize the NLG problem and the design space of proposed solutions, we pose five Wh-questions, why and how NLG tasks are performed for visualizations, what the task inputs and outputs are, as well as where and when the generated texts are integrated with visualizations. We categorize the solutions used in the surveyed papers based on these ""five Wh-questions."" Finally, we discuss the key challenges and potential avenues for future research in this domain.",2 "Electronic healthcare records are vital for patient safety as they document conditions, plans, and procedures in both free text and medical codes. Language models have significantly enhanced the processing of such records, streamlining workflows and reducing manual data entry, thereby saving healthcare providers significant resources. However, the black-box nature of these models often leaves healthcare professionals hesitant to trust them. State-of-the-art explainability methods increase model transparency but rely on human-annotated evidence spans, which are costly. In this study, we propose an approach to produce plausible and faithful explanations without needing such annotations. We demonstrate on the automated medical coding task that adversarial robustness training improves explanation plausibility and introduce AttInGrad, a new explanation method superior to previous ones. By combining both contributions in a fully unsupervised setup, we produce explanations of comparable quality, or better, to that of a supervised approach. We release our code and model weights.",2 "Electroadhesion (EA) has potential in robotics, automation, space missions, textiles, and tactile displays, but its physics remains underexplored due to limited models and experimental data. This thesis develops an electro-mechanical model to estimate electrostatic forces between human finger and touchscreen under EA and compares it to experimentally measured friction forces. The model aligns well with the data, showing that the electrostatic force changes mainly due to charge leakage from the Stratum Corneum at frequencies below 250 Hz and its electrical properties above 250 Hz. Additionally, a novel approach using electrical impedance measurements estimates electrostatic forces by subtracting skin and touchscreen impedances from the total impedance. This method is the first to experimentally estimate the average air gap between finger and voltage-induced capacitive touchscreen. The effect of electrode polarization impedance, particularly at low frequencies, was also studied, revealing its role in the charge leakage phenomenon. Tactile perception via EA was investigated using DC and AC voltage signals on a touchscreen with 10 participants of varying finger moisture levels. Results showed that AC voltage detection thresholds were significantly lower than for DC, explained by charge leakage at lower frequencies. Participants with moist fingers exhibited higher threshold levels, supported by impedance measurements. The thesis also investigated how touchscreen top coatings influence tactile perception, focusing on EA-free interactions. Psychophysical experiments and physical measurements demonstrated that coating materials significantly affect tactile perception, likely due to molecular interactions. 
These findings offer insights into finger-touchscreen interactions under EA and have potential applications in designing robotic systems and haptic interfaces using this technology.",2 "Memory is the foundation of all human activities; without memory, it would be nearly impossible for people to perform any task in daily life. With the development of Large Language Models (LLMs), their language capabilities are becoming increasingly comparable to those of humans. But do LLMs have memory? Based on current performance, LLMs do appear to exhibit memory. So, what is the underlying mechanism of this memory? Previous research has lacked a deep exploration of LLMs' memory capabilities and the underlying theory. In this paper, we use the Universal Approximation Theorem (UAT) to explain the memory mechanism in LLMs. We also conduct experiments to verify the memory capabilities of various LLMs, proposing a new method to assess these memory abilities. We argue that LLM memory operates like Schr\""odinger's memory, meaning that it only becomes observable when a specific memory is queried. We can only determine if the model retains a memory based on its output in response to the query; otherwise, it remains indeterminate. Finally, we expand on this concept by comparing the memory capabilities of the human brain and LLMs, highlighting the similarities and differences in their operational mechanisms.",0 "Predicting and explaining the private information contained in an image in human-understandable terms is a complex and contextual task. This task is challenging even for large language models. To facilitate the understanding of privacy decisions, we propose to predict image privacy based on a set of natural language content descriptors. These content descriptors are associated with privacy scores that reflect how people perceive image content. We generate descriptors with our novel Image-guided Topic Modeling (ITM) approach. ITM leverages, via multimodality alignment, both vision information and image textual descriptions from a vision language model. We use the ITM-generated descriptors to learn a privacy predictor, Priv$\times$ITM, whose decisions are interpretable by design. Our Priv$\times$ITM classifier outperforms the reference interpretable method by 5 percentage points in accuracy and performs comparably to the current non-interpretable state-of-the-art model.",0 "The sudden emergence of large language models (LLMs) such as ChatGPT has had a disruptive impact throughout the computing education community. LLMs have been shown to excel at producing correct code for CS1 and CS2 problems, and can even act as friendly assistants to students learning how to code. Recent work shows that LLMs demonstrate unequivocally superior results in being able to explain and resolve compiler error messages -- for decades, one of the most frustrating parts of learning how to code. However, LLM-generated error message explanations have only been assessed by expert programmers in artificial conditions. This work sought to understand how novice programmers resolve programming error messages (PEMs) in a more realistic scenario. We ran a within-subjects study with $n$ = 106 participants in which students were tasked with fixing six buggy C programs. For each program, participants were randomly assigned to fix the problem using either a stock compiler error message, an expert-handwritten error message, or an error message explanation generated by GPT-4.
Despite promising evidence on synthetic benchmarks, we found that GPT-4-generated error messages outperformed conventional compiler error messages in only 1 of the 6 tasks, as measured by students' time to fix each problem. Handwritten explanations still outperformed LLM-generated and conventional error messages on both objective and subjective measures.",2 "Human Activity Recognition (HAR) is a challenging, multi-label classification problem as activities may co-occur and sensor signals corresponding to the same activity may vary in different contexts (e.g., different device placements). This paper proposes a Deep Heterogeneous Contrastive Hyper-Graph Learning (DHC-HGL) framework that captures heterogeneous Context-Aware HAR (CA-HAR) hypergraph properties in a message-passing and neighborhood-aggregation fashion. Prior work only explored homogeneous or shallow-node-heterogeneous graphs. DHC-HGL handles heterogeneous CA-HAR data by innovatively 1) Constructing three different types of sub-hypergraphs that are each passed through different custom HyperGraph Convolution (HGC) layers designed to handle edge-heterogeneity and 2) Adopting a contrastive loss function to ensure node-heterogeneity. In rigorous evaluation on two CA-HAR datasets, DHC-HGL significantly outperformed state-of-the-art baselines by 5.8% to 16.7% on Matthews Correlation Coefficient (MCC) and 3.0% to 8.4% on Macro F1 scores. UMAP visualizations of learned CA-HAR node embeddings are also presented to enhance model explainability.",0 "Explanation is key to people having confidence in high-stakes AI systems. However, machine-learning-based systems -- which account for almost all current AI -- can't explain because they are usually black boxes. The explainable AI (XAI) movement hedges this problem by redefining ""explanation"". The human-centered explainable AI (HCXAI) movement identifies the explanation-oriented needs of users but can't fulfill them because of its commitment to machine learning. In order to achieve the kinds of explanations needed by real people operating in critical domains, we must rethink how to approach AI. We describe a hybrid approach to developing cognitive agents that uses a knowledge-based infrastructure supplemented by data obtained through machine learning when applicable. These agents will serve as assistants to humans who will bear ultimate responsibility for the decisions and actions of the human-robot team. We illustrate the explanatory potential of such agents using the under-the-hood panels of a demonstration system in which a team of simulated robots collaborate on a search task assigned by a human.",0 "This paper presents a system called Robo-CSK-Organizer that infuses commonsense knowledge from a classical knowledge base to enhance the context recognition capabilities of robots so as to facilitate the organization of detected objects by classifying them in a task-relevant manner. It is particularly useful in multipurpose robotics. Unlike systems relying solely on deep learning tools such as ChatGPT, the Robo-CSK-Organizer system stands out in multiple avenues as follows. It resolves ambiguities well, and maintains consistency in object placement. Moreover, it adapts to diverse task-based classifications. Furthermore, it contributes to explainable AI, hence helping to improve trust and human-robot collaboration. 
Controlled experiments performed in our work, simulating domestic robotics settings, show that Robo-CSK-Organizer demonstrates superior performance in placing objects in contextually relevant locations. This work highlights the capacity of an AI-based system to conduct commonsense-guided decision-making in robotics closer to the thresholds of human cognition. Hence, Robo-CSK-Organizer makes positive impacts on AI and robotics.",0 "The application of Shapley values to high-dimensional, time-series-like data is computationally challenging - and sometimes impossible. For $N$ inputs the problem scales as $2^N$. In image processing, clusters of pixels, referred to as superpixels, are used to streamline computations. This research presents an efficient solution for time-series-like data that adapts the idea of superpixels for Shapley value computation. Motivated by a forensic DNA classification example, the method is applied to multivariate time-series-like data whose features have been classified by a convolutional neural network (CNN). In DNA processing, it is important to identify alleles from the background noise created by DNA extraction and processing. A single DNA profile has $31,200$ scan points to classify, and the classification decisions must be defensible in a court of law. This means that classification is routinely performed by human readers - a monumental and time-consuming process. The application of a CNN with fast computation of meaningful Shapley values provides a potential alternative to the classification. This research demonstrates the realistic, accurate and fast computation of Shapley values for this massive task.",0 "There is a mismatch between psychological and computational studies on emotions. Psychological research aims at explaining and documenting internal mechanisms of these phenomena, while computational work often simplifies them into labels. Many emotion fundamentals remain under-explored in natural language processing, particularly how emotions develop and how people cope with them. To help reduce this gap, we follow theories on coping, and treat emotions as strategies to cope with salient situations (i.e., how people deal with emotion-eliciting events). This approach allows us to investigate the link between emotions and behavior, which also emerges in language. We introduce the task of coping identification, together with a corpus to do so, constructed via role-playing. We find that coping strategies are realized in text even though they are challenging to recognize, both for humans and automatic systems trained and prompted on the same task. We thus open up a promising research direction to enhance the capability of models to better capture emotion mechanisms from text.",0 "Automated decision-making systems are becoming increasingly ubiquitous, which creates an immediate need for their interpretability and explainability. However, it remains unclear whether users know what insights an explanation offers and, more importantly, what information it lacks. To answer this question, we conducted an online study with 200 participants, which allowed us to assess explainees' ability to realise explicated information -- i.e., factual insights conveyed by an explanation -- and unspecified information -- i.e., insights that are not communicated by an explanation -- across four representative explanation types: model architecture, decision surface visualisation, counterfactual explainability and feature importance. 
Our findings reveal that highly comprehensible explanations, e.g., feature importance and decision surface visualisation, are exceptionally susceptible to misinterpretation since users tend to infer spurious information that is outside of the scope of these explanations. Additionally, while the users gauge their confidence accurately with respect to the information explicated by these explanations, they tend to be overconfident when misinterpreting the explanations. Our work demonstrates that human comprehension can be a double-edged sword since highly accessible explanations may convince users of their truthfulness while possibly leading to various misinterpretations at the same time. Machine learning explanations should therefore carefully navigate the complex relation between their full scope and limitations to maximise understanding and curb misinterpretation.",2 "Safety in industrial robotic environments is a hot research topic in the area of human-robot interaction (HRI). Up to now, a robotic arm on an assembly line interacts with other machines away from human workers. Nowadays, robotic arm manufacturers aim for their robots to increasingly perform tasks in collaboration with humans. One of the ways to improve this collaboration is by making the movement of robots more humanlike. This way, it would be easier for a human to foresee the movement of the robot and approach it without fear of contact. The main difference between the movement of a human and that of a robotic arm is that the former has a bell-shaped speed profile while the latter has a uniform one. To generate this speed profile, the kinematic theory of rapid human movements and its Sigma-Lognormal model have been used. This model is widely used to explain most of the basic phenomena related to the control of human movements. Both human-like and robotic-like movements are transferred to the UR3 robot. In this paper we detail how the UR3 robot was programmed to produce both kinds of movement. The dissimilarity results between the robot's input and output motions confirm the possibility of producing human-like velocities on the UR3 robot.",0 "The use of Large Language Models for dialogue systems is rising, presenting a new challenge: how do we assess users' chat experience in these systems? Leveraging Natural Language Processing (NLP)-powered dialog analyzers to create dialog indicators like Coherence and Emotion has the potential to predict the chat experience. In this paper, we propose a conceptual model to explain the relationship between the dialog indicators and various factors related to the chat experience, such as users' intentions, affinity toward dialog agents, and prompts of the agents' characters. We evaluated the conceptual model using PLS-SEM with 120 participants and found that it fits well. Our results suggest that dialog indicators can predict the chat experience and fully mediate the impact of prompts and user intentions. Additionally, users' affinity toward agents can partially explain these predictions. Our findings demonstrate the potential of using dialog indicators in predicting the chat experience. Through the conceptual model we propose, researchers can apply the dialog analyzers to generate dialog indicators to constantly monitor the dialog process and improve the user's chat experience accordingly.",2 "Large Language Models (LLMs) are widely used in both industry and academia for various tasks, yet evaluating the consistency of generated text responses continues to be a challenge. 
Traditional metrics like ROUGE and BLEU show a weak correlation with human judgment. More sophisticated metrics using Natural Language Inference (NLI) have shown improved correlations but are complex to implement, require domain-specific training due to poor cross-domain generalization, and lack explainability. More recently, prompt-based metrics using LLMs as evaluators have emerged; while they are easier to implement, they still lack explainability and depend on task-specific prompts, which limits their generalizability. This work introduces Automated eXplainable Consistency Evaluation using LLMs (AXCEL), a prompt-based consistency metric which offers explanations for the consistency scores by providing detailed reasoning and pinpointing inconsistent text spans. AXCEL is also a generalizable metric which can be applied to multiple tasks without changing the prompt. AXCEL outperforms both non-prompt and prompt-based state-of-the-art (SOTA) metrics in detecting inconsistencies across summarization by 8.7%, free text generation by 6.2%, and data-to-text conversion tasks by 29.4%. We also evaluate the influence of underlying LLMs on prompt-based metric performance and recalibrate the SOTA prompt-based metrics with the latest LLMs for fair comparison. Further, we show that AXCEL demonstrates strong performance using open-source LLMs.",0 "While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too much on formalism and solutionism, focusing more on mathematical soundness than user needs. We propose an alternative to this bottom-up approach inspired by design thinking: the XAI research community should adopt a top-down, user-focused perspective to ensure user relevance. We illustrate this with a relatively young subfield of XAI, Training Data Attribution (TDA). With the surge in TDA research and growing competition, the field risks repeating the same patterns of solutionism. We conducted a needfinding study with a diverse group of AI practitioners to identify potential user needs related to TDA. Through interviews (N=10) and a systematic survey (N=31), we uncovered new TDA tasks that are currently largely overlooked. We invite the TDA and XAI communities to consider these novel tasks and improve the user relevance of their research outcomes.",2 "Generalized Additive Models (GAMs) offer a balance between performance and interpretability in machine learning. The interpretability aspect of GAMs is expressed through shape plots, representing the model's decision-making process. However, the visual properties of these plots, e.g. the number of kinks (number of local maxima and minima), can impact their complexity and the cognitive load imposed on the viewer, compromising interpretability. Our study, including 57 participants, investigates the relationship between the visual properties of GAM shape plots and the cognitive load they induce. We quantify various visual properties of shape plots and evaluate their alignment with participants' perceived cognitive load, based on 144 plots. Our results indicate that the number of kinks metric is the most effective, explaining 86.4% of the variance in users' ratings. We develop a simple model based on the number of kinks that provides a practical tool for predicting cognitive load, enabling the assessment of one aspect of GAM interpretability without direct user involvement.",2 "Numerous explanation methods have been recently developed to interpret the decisions made by deep neural network (DNN) models. 
For image classifiers, these methods typically provide an attribution score for each pixel in the image to quantify its contribution to the prediction. However, most of these explanation methods assign attribution scores to pixels independently, even though both humans and DNNs make decisions by analyzing a set of closely related pixels simultaneously. Hence, the attribution score of a pixel should be evaluated jointly by considering the pixel itself and its structurally similar pixels. We propose a method called IProp, which models each pixel's individual attribution score as a source of explanatory information and explains the image prediction through the dynamic propagation of information across all pixels. To formulate the information propagation, IProp adopts the Markov Reward Process, which guarantees convergence, and the final state yields the desired pixel attribution scores. Furthermore, IProp is compatible with any existing attribution-based explanation method. Extensive experiments on various explanation methods and DNN models verify that IProp significantly improves them on a variety of interpretability metrics.",0 "Hot Jupiters are giant planets subject to intense stellar radiation. The physical and chemical properties of their atmospheres make them the most amenable targets for atmospheric characterization. In this paper we analyze the photometry collected during the secondary eclipses of the hot Jupiter WASP-3 b by CHEOPS, TESS and Spitzer. Our aim is to characterize the atmosphere of the planet by measuring the secondary eclipse depth in several passbands and constraining the planetary dayside spectrum. Our update of the stellar and planetary properties is consistent with previous works. The analysis of the occultations returns an eclipse depth of 92+-21 ppm in the CHEOPS passband, 83+-27 ppm for TESS and >2000 ppm in the IRAC 1-2-4 Spitzer passbands. Using the eclipse depths in the Spitzer bands we propose a set of likely emission spectra which constrain the emission contribution in the CHEOPS and TESS passbands to approximately a few dozen parts per million. This allowed us to measure a geometric albedo of 0.21+-0.07 in the CHEOPS passband, while the TESS data lead to a 95\% upper limit of $\sim$0.2. WASP-3 b belongs to the group of ultra-hot Jupiters which are characterized by low Bond albedo (<0.3+-0.1), as predicted by different atmospheric models. On the other hand, it unexpectedly seems to efficiently recirculate the absorbed stellar energy, unlike similar highly irradiated planets. To explain this inconsistency, we propose that energy recirculation mechanisms other than advection may be at play (for example, dissociation and recombination of H_2). Another possibility is that the observations in different bandpasses probe different atmospheric layers, making the atmospheric analysis difficult without an appropriate modeling of the thermal emission spectrum of WASP-3 b, which is not feasible with the limited spectroscopic data available to date.",0 "In this paper we explore evaluation of LLM capabilities. We present measurements of GPT-4 performance on several deterministic tasks; each task involves a basic calculation and takes as input parameter some element drawn from a large well-defined population (e.g., count elements in a list, multiply two k-digit numbers, etc.). We examine several conditions per task and perform enough trials so that statistically significant differences can be detected. 
This allows us to investigate the sensitivity of task accuracy both to query phrasing and input parameter population. We find that seemingly trivial modifications in the task prompt or input population can yield differences far larger than can be explained by sampling effects. For example, performance on a simple list-counting task varies with query phrasing and list length, but also with list composition (i.e., the thing-to-be-counted) and object frequency (e.g., success when an element accounts for $\approx$ 50\% of a list is different from when it accounts for $\approx$ 70\%, etc.). We conclude that efforts to quantify LLM capabilities easily succumb to the language-as-fixed-effect fallacy, where experimental observations are improperly generalized beyond what the data supports. A consequence appears to be that intuitions formed through interactions with humans are a very unreliable guide as to which input modifications should ``make no difference'' to LLM performance.",0 "Introduction: Artificial intelligence (AI) is exhibiting tremendous potential to reduce the massive costs and long timescales of drug discovery. There are, however, important challenges currently limiting the impact and scope of AI models. Areas covered: In this perspective, the authors discuss a range of data issues (bias, inconsistency, skewness, irrelevance, small size, high dimensionality), how they challenge AI models, and which issue-specific mitigations have been effective. Next, they point out the challenges faced by uncertainty quantification techniques aimed at enhancing and trusting the predictions from these AI models. They also discuss how conceptual errors, unrealistic benchmarks and performance misestimation can confound the evaluation of models and thus their development. Lastly, the authors explain how human bias, whether from AI experts or drug discovery experts, constitutes another challenge that can be alleviated by gaining more prospective experience. Expert opinion: AI models are often developed to excel on retrospective benchmarks unlikely to anticipate their prospective performance. As a result, only a few of these models are ever reported to have prospective value (e.g. by discovering potent and innovative drug leads for a therapeutic target). The authors have discussed what can go wrong in practice with AI for drug discovery. We hope that this will help inform the decisions of editors, funders, investors and researchers working in this area.",0 "The coexistence curves of liquid-liquid equilibrium (LLE) for the mixtures: phenylacetonitrile + heptane, + octane, + nonane, + cyclooctane, or + 2,2,4-trimethylpentane and for 3-phenylpropionitrile + heptane, or + octane are reported. Aromatic nitrile + alkane, + aromatic hydrocarbon or + 1-alkanol systems are investigated using a set of thermophysical properties: phase equilibria (solid-liquid, SLE, vapour-liquid, VLE and LLE), excess molar functions, enthalpies ($H_{\text{m}}^{\text{E}}$), isochoric internal energies, isobaric heat capacities ($C_{p \text{m}}^{\text{E}}$) and volumes ($V_{\text{m}}^{\text{E}}$), and the Kirkwood correlation factor. Due to proximity effects between the phenyl and the CN groups, dipolar interactions between molecules of aromatic nitriles are stronger than those between molecules of isomeric linear nitriles. Dipolar interactions become weaker in the order: 3-phenylpropionitrile > phenylacetonitrile > benzonitrile. 
Benzonitrile + aromatic hydrocarbon mixtures are characterized by dispersive interactions and structural effects. The latter are more important in systems with phenylacetonitrile. Structural effects are also present in benzonitrile + n-alkane, or + 1-alkanol mixtures. The systems mentioned above have been studied using DISQUAC. Interaction parameters for contacts where the CN group in aromatic nitriles participates are given, and DISQUAC results on excess properties and phase equilibria are discussed. 1-Alkanol + benzonitrile mixtures are also investigated by means of the ERAS model. ERAS represents the $H_{\text{m}}^{\text{E}}$ of these systems well. The $V_{\text{m}}^{\text{E}}$ curves of solutions with longer 1-alkanols are more poorly described, which has been explained in terms of the existence of structural effects.",0 "The human visual system is well-tuned to detect faces of all shapes and sizes. While this brings obvious survival advantages, such as a better chance of spotting unknown predators in the bush, it also leads to spurious face detections. ``Face pareidolia'' describes the perception of face-like structure among otherwise random stimuli: seeing faces in coffee stains or clouds in the sky. In this paper, we study face pareidolia from a computer vision perspective. We present an image dataset of ``Faces in Things'', consisting of five thousand web images with human-annotated pareidolic faces. Using this dataset, we examine the extent to which a state-of-the-art human face detector exhibits pareidolia, and find a significant behavioral gap between humans and machines. We find that the evolutionary need for humans to detect animal faces, as well as human faces, may explain some of this gap. Finally, we propose a simple statistical model of pareidolia in images. Through studies on human subjects and our pareidolic face detectors we confirm a key prediction of our model regarding what image conditions are most likely to induce pareidolia. Dataset and Website: https://aka.ms/faces-in-things",0 "The increased use of information retrieval in recruitment, primarily through job recommender systems (JRSs), can have a large impact on job seekers, recruiters, and companies. As a result, such systems have been determined to be high-risk in recent legislation. This requires JRSs to be trustworthy and transparent, allowing stakeholders to understand why specific recommendations were made. To fulfill this requirement, the stakeholders' exact preferences and needs need to be determined. To do so, we evaluated an explainable job recommender system using a realistic, task-based, mixed-design user study (n=30) in which stakeholders had to make decisions based on the model's explanations. This mixed-methods evaluation consisted of two objective metrics - correctness and efficiency - along with three subjective metrics - trust, transparency, and usefulness. These metrics were evaluated twice per participant, once using real explanations and once using random explanations. The study included a qualitative analysis following a think-aloud protocol while performing tasks adapted to each stakeholder group. We find that providing stakeholders with real explanations does not significantly improve decision-making speed or accuracy. Our results showed a non-significant trend for the real explanations to outperform the random ones on perceived trust, usefulness, and transparency of the system for all stakeholder types. 
We determine that stakeholders benefit more from interacting with explanations as decision support capable of providing healthy friction, rather than as previously-assumed persuasive tools.",2 "Time series forecasting, while vital in various applications, often employs complex models that are difficult for humans to understand. Effective explainable AI techniques are crucial to bridging the gap between model predictions and user understanding. This paper presents a framework - TSFeatLIME, extending TSLIME, tailored specifically for explaining univariate time series forecasting. TSFeatLIME integrates an auxiliary feature into the surrogate model and considers the pairwise Euclidean distances between the queried time series and the generated samples to improve the fidelity of the surrogate models. However, the usefulness of such explanations for human beings remains an open question. We address this by conducting a user study with 160 participants through two interactive interfaces, aiming to measure how individuals from different backgrounds can simulate or predict model output changes in the treatment group and control group. Our results show that the surrogate model under the TSFeatLIME framework is able to better simulate the behaviour of the black-box considering distance, without sacrificing accuracy. In addition, the user study suggests that the explanations were significantly more effective for participants without a computer science background.",2 "Human-involved interactive environments pose significant challenges for autonomous vehicle decision-making processes due to the complexity and uncertainty of human behavior. It is crucial to develop an explainable and trustworthy decision-making system for autonomous vehicles interacting with pedestrians. Previous studies often used traditional game theory to describe interactions for its interpretability. However, it assumes complete human rationality and unlimited reasoning abilities, which is unrealistic. To solve this limitation and improve model accuracy, this paper proposes a novel framework that integrates the partially observable markov decision process with behavioral game theory to dynamically model AV-pedestrian interactions at the unsignalized intersection. Both the AV and the pedestrian are modeled as dynamic-belief-induced quantal cognitive hierarchy (DB-QCH) models, considering human reasoning limitations and bounded rationality in the decision-making process. In addition, a dynamic belief updating mechanism allows the AV to update its understanding of the opponent's rationality degree in real-time based on observed behaviors and adapt its strategies accordingly. The analysis results indicate that our models effectively simulate vehicle-pedestrian interactions and our proposed AV decision-making approach performs well in safety, efficiency, and smoothness. It closely resembles real-world driving behavior and even achieves more comfortable driving navigation compared to our previous virtual reality experimental data.",0 "Deep learning techniques have revolutionized image classification by mimicking human cognition and automating complex decision-making processes. However, the deployment of AI systems in the wild, especially in high-security domains such as defence, is curbed by the lack of explainability of the model. To this end, eXplainable AI (XAI) is an emerging area of research that is intended to explore the unexplained hidden black box nature of deep neural networks. 
This paper explores the application of eXplainable Artificial Intelligence (XAI) tools to interpret underwater image classification results; to the best of our knowledge, this is one of the first works in the domain. Our study delves into the realm of SONAR image classification using a custom dataset derived from diverse sources, including the Seabed Objects KLSG dataset, the camera SONAR dataset, the mine SONAR images dataset, and the SCTD dataset. An extensive analysis of transfer learning techniques for image classification using benchmark Convolutional Neural Network (CNN) architectures such as VGG16, ResNet50, InceptionV3, DenseNet121, etc. is carried out. On top of this classification model, a post-hoc XAI technique, viz. Local Interpretable Model-Agnostic Explanations (LIME), is incorporated to provide transparent justifications for the model's decisions by perturbing input data locally to see how predictions change. Furthermore, Submodular Picks LIME (SP-LIME), a version of LIME particular to images that perturbs the image based on submodular picks, is also extensively studied. To this end, two submodular optimization algorithms, i.e. Quickshift and Simple Linear Iterative Clustering (SLIC), are leveraged towards submodular picks. The extensive analysis of XAI techniques highlights the interpretability of the results in a more human-compliant way, thus boosting confidence and reliability.",0 "Trajectory prediction is a crucial aspect of understanding human behaviors. Researchers have made efforts to represent socially interactive behaviors among pedestrians and utilize various networks to enhance prediction capability. Unfortunately, they still face challenges not only in fully explaining and measuring how these interactive behaviors work to modify trajectories but also in modeling pedestrians' preferences to plan or participate in social interactions in response to the changeable physical environments as extra conditions. This manuscript mainly focuses on the above explainability and conditionality requirements for trajectory prediction networks. Inspired by marine animals perceiving other companions and the environment underwater by echolocation, this work constructs an angle-based conditioned social interaction representation SocialCircle+ to represent the socially interactive context and its corresponding conditions. It employs a social branch and a conditional branch to describe how pedestrians are positioned in prediction scenes socially and physically in angle-based-cyclic-sequence forms. Then, adaptive fusion is applied to fuse the above conditional clues onto the social ones to learn the final interaction representation. Experiments demonstrate the superiority of SocialCircle+ with different trajectory prediction backbones. Moreover, counterfactual interventions have been made to simultaneously verify the modeling capacity of causalities among interactive variables and the conditioning capability.",0 "In the context of AI decision support systems (AI-DSS), we argue that meeting the demands of ethical and explainable AI (XAI) is about developing AI-DSS to provide human decision-makers with three types of human-grounded explanations: reasons, counterfactuals, and confidence, an approach we refer to as the RCC approach. We begin by reviewing current empirical XAI literature that investigates the relationship between various methods for generating model explanations (e.g., LIME, SHAP, Anchors), the perceived trustworthiness of the model, and end-user accuracy. 
We demonstrate how current theories about what constitutes good human-grounded reasons either do not adequately explain this evidence or do not offer sound ethical advice for development. Thus, we offer a novel theory of human-machine interaction: the theory of epistemic quasi-partnerships (EQP). Finally, we motivate adopting EQP and demonstrate how it explains the empirical evidence, offers sound ethical advice, and entails adopting the RCC approach.",0 "The remarkable advancements in artificial intelligence (AI), primarily driven by deep neural networks, are facing challenges surrounding unsustainable computational trajectories, limited robustness, and a lack of explainability. To develop next-generation cognitive AI systems, neuro-symbolic AI emerges as a promising paradigm, fusing neural and symbolic approaches to enhance interpretability, robustness, and trustworthiness, while facilitating learning from much less data. Recent neuro-symbolic systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities. In this paper, we aim to understand the workload characteristics and potential architectures for neuro-symbolic AI. We first systematically categorize neuro-symbolic AI algorithms, and then experimentally evaluate and analyze them in terms of runtime, memory, computational operators, sparsity, and system characteristics on CPUs, GPUs, and edge SoCs. Our studies reveal that neuro-symbolic models suffer from inefficiencies on off-the-shelf hardware, due to the memory-bound nature of vector-symbolic and logical operations, complex flow control, data dependencies, sparsity variations, and limited scalability. Based on profiling insights, we suggest cross-layer optimization solutions and present a hardware acceleration case study for vector-symbolic architecture to improve the performance, efficiency, and scalability of neuro-symbolic computing. Finally, we discuss the challenges and potential future directions of neuro-symbolic AI from both system and architectural perspectives.",0 "The self-simulational theory of temporal extension describes an information-theoretically formalized mechanism by which the width of subjective temporality emerges from the architecture of self-modelling. In this paper, the perspective of the free energy principle will be assumed, to cast the emergence of subjective temporality, along with a Bayesian mechanism for hierarchical duration estimation, from first principles of the physics of self-organization. Using active inference, a deep parametric generative model of temporal inference is simulated, which realizes the described dynamics on a computational level. Two biases (i.e. variations) of time-perception naturally emerge from the simulated computational model. This concerns the intentional binding effect (i.e. the compression of the temporal interval between voluntarily initiated actions and subsequent sensory consequences) and empirically documented alterations of subjective time experience in deep states of meditative absorption (i.e. in minimal phenomenal experience). Generally, numerous systematic and domain-specific alterations of subjective temporal experience are computationally explained in a unified manner, as enabled by integration with current active inference accounts mapping onto the respective domains. 
In addition to more general scale-invariant effects of explicit timing and central tendency effects, this concerns the temporality-modulating role of valence, impulsivity, boredom, flow states, near-death experiences, and various psychopathologies, amongst others. The self-simulational theory of temporal extension, from the perspective of the free energy principle, explains how the subjective temporal Now emerges and varies from first principles, accounting for why subjective time sometimes seems to fly and why moments sometimes feel like eternities.",0 "Powerful predictive AI systems have demonstrated great potential in augmenting human decision making. Recent empirical work has argued that the vision for optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems. However, accurately estimating the trustworthiness of AI advice at the instance level is quite challenging, especially in the absence of performance feedback pertaining to the AI system. In practice, the performance disparity of machine learning models on out-of-distribution data makes the dataset-specific performance feedback unreliable in human-AI collaboration. Inspired by existing literature on critical thinking and a critical mindset, we propose the use of debugging an AI system as an intervention to foster appropriate reliance. In this paper, we explore whether a critical evaluation of AI performance within a debugging setting can better calibrate users' assessment of an AI system and lead to more appropriate reliance. Through a quantitative empirical study (N = 234), we found that our proposed debugging intervention does not work as expected in facilitating appropriate reliance. Instead, we observe a decrease in reliance on the AI system after the intervention -- potentially resulting from an early exposure to the AI system's weaknesses. We explore the dynamics of user confidence and user estimation of AI trustworthiness across groups with different performance levels to help explain how inappropriate reliance patterns occur. Our findings have important implications for designing effective interventions to facilitate appropriate reliance and better human-AI collaboration.",0 "Malicious URL classification represents a crucial aspect of cyber security. Although existing work comprises numerous machine learning and deep learning-based URL classification models, most suffer from generalisation and domain-adaptation issues arising from the lack of representative training datasets. Furthermore, these models fail to provide explanations for a given URL classification in natural human language. In this work, we investigate and demonstrate the use of Large Language Models (LLMs) to address this issue. Specifically, we propose an LLM-based one-shot learning framework that uses Chain-of-Thought (CoT) reasoning to predict whether a given URL is benign or phishing. We evaluate our framework using three URL datasets and five state-of-the-art LLMs and show that one-shot LLM prompting indeed provides performance close to that of supervised models, with GPT-4 Turbo being the best model, followed by Claude 3 Opus. We conduct a quantitative analysis of the LLM explanations and show that most of the explanations provided by LLMs align with the post-hoc explanations of the supervised classifiers, and the explanations have high readability, coherency, and informativeness.",0 "Most authors of textbooks on quantum mechanics either postulate or sketch a short `ad hoc` derivation of Schrodinger's equation. 
In this work we give a detailed derivation of Schrodinger's equation from the Hamilton-Jacobi equation and the Eikonal equation in geometrical optics. We start from the historical debates on the nature of light -- whether it is a beam of particles, or waves in the aether. We derive the Eikonal equation and show the conditions under which a wave can behave as a beam of particles. Then we discuss several experiments with an electron gun that clearly show diffraction and interference of a single electron. Next, in order to explain these experiments, we derive Schrodinger's equation by comparing the Hamilton-Jacobi equation in classical mechanics with the Eikonal equation in geometrical optics. To do that, we first show how to derive the wave equation from the Eikonal equation (not the other way around!). Second, we use this method to derive Schrodinger's equation from the Hamilton-Jacobi equation. Next, we derive Born's statistical rule using the early understanding of de Broglie that both particles and waves exist. Afterwards, we show that historically people preferred to remove the particles (as well as their trajectories) altogether from de Broglie's ideas but retained Born's rule (the so-called Copenhagen interpretation). These derivations of the foundations of quantum mechanics do not follow precisely the history of the subject. Rather, we select some early ideas and experiments in a judicious manner to present Schrodinger's equation in a logical and ordered way. We use the electron gun experiments instead of black-body radiation and the photoelectric effect. Our derivation may bring more light and satisfaction to undergraduate students regarding the confusing and rather mysterious subject of quantum mechanics.",0 "In the midst of widespread misinformation and disinformation through social media and the proliferation of AI-generated texts, it has become increasingly difficult for people to validate and trust information they encounter. Many fact-checking approaches and tools have been developed, but they often lack appropriate explainability or granularity to be useful in various contexts. A text validation method that is easy to use, accessible, and can perform fine-grained evidence attribution has become crucial. More importantly, building user trust in such a method requires presenting the rationale behind each prediction, as research shows this significantly influences people's belief in automated systems. Localizing and bringing users' attention to the specific problematic content is also paramount, instead of providing simple blanket labels. In this paper, we present ClaimVer, a human-centric framework tailored to meet users' informational and verification needs by generating rich annotations and thereby reducing cognitive load. Designed to deliver comprehensive evaluations of texts, it highlights each claim, verifies it against a trusted knowledge graph (KG), presents the evidence, and provides succinct, clear explanations for each claim prediction. Finally, our framework introduces an attribution score, enhancing applicability across a wide range of downstream tasks.",0 "Explainable Artificial Intelligence (XAI) poses a significant challenge in providing transparent and understandable insights into complex AI models. Traditional post-hoc algorithms, while useful, often struggle to deliver interpretable explanations. Concept-based models offer a promising avenue by incorporating explicit representations of concepts to enhance interpretability. 
However, existing research on automatic concept discovery methods is often limited by lower-level concepts, costly human annotation requirements, and a restricted domain of background knowledge. In this study, we explore the potential of a Large Language Model (LLM), specifically GPT-4, by leveraging its domain knowledge and common-sense capability to generate high-level concepts that are meaningful as explanations for humans, for a specific setting of image classification. We use minimal textual object information available in the data via prompting to facilitate this process. To evaluate the output, we compare the concepts generated by the LLM with concepts generated by humans and by the ECII heuristic concept induction system. Since there is no established metric to determine the human understandability of concepts, we conducted a human study to assess the effectiveness of the LLM-generated concepts. Our findings indicate that while human-generated explanations remain superior, concepts derived from GPT-4 are more comprehensible to humans compared to those generated by ECII.",0 "Automated experiments in scanning transmission electron microscopy (STEM) require rapid image segmentation to optimize data representation for human interpretation, decision-making, site-selective spectroscopies, and atomic manipulation. Currently, segmentation tasks are typically performed using supervised machine learning methods, which require human-labeled data and are sensitive to out-of-distribution drift effects caused by changes in resolution, sampling, or beam shape. Here, we operationalize and benchmark a recently proposed reward-driven optimization workflow for on-the-fly image analysis in STEM. This unsupervised approach is much more robust, as it does not rely on human labels and is fully explainable. The explanatory feedback can help the human verify the decision-making and potentially tune the model by selecting the position along the Pareto frontier of reward functions. We establish the timing and effectiveness of this method, demonstrating its capability for real-time performance in high-throughput and dynamic automated STEM experiments. The reward-driven approach allows the construction of explainable, robust analysis workflows and can be generalized to a broad range of image analysis tasks in electron and scanning probe microscopy and chemical imaging.",0 "Artificial intelligence (AI) systems have substantially improved dermatologists' diagnostic accuracy for melanoma, with explainable AI (XAI) systems further enhancing clinicians' confidence and trust in AI-driven decisions. Despite these advancements, there remains a critical need for objective evaluation of how dermatologists engage with both AI and XAI tools. In this study, 76 dermatologists participated in a reader study, diagnosing 16 dermoscopic images of melanomas and nevi using an XAI system that provides detailed, domain-specific explanations. Eye-tracking technology was employed to assess their interactions. Diagnostic performance was compared with that of a standard AI system lacking explanatory features. Our findings reveal that XAI systems improved balanced diagnostic accuracy by 2.8 percentage points relative to standard AI. Moreover, diagnostic disagreements with AI/XAI systems and complex lesions were associated with elevated cognitive load, as evidenced by increased ocular fixations. 
These insights have significant implications for clinical practice, the design of AI tools for visual tasks, and the broader development of XAI in medical diagnostics.",2 "Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered significant interest from the research community in natural language processing (NLP) and human-computer interaction (HCI). Such systems can provide answers to user questions about explanations in dialogues, have the potential to enhance users' comprehension and offer more information about the decision-making and generation processes of LLMs. Currently available ConvXAI systems are based on intent recognition rather than free chat, as this has been found to be more precise and reliable in identifying users' intentions. However, the recognition of intents still presents a challenge in the case of ConvXAI, since little training data exist and the domain is highly specific, as there is a broad range of XAI methods to map requests onto. In order to bridge this gap, we present CoXQL, the first dataset in the NLP domain for user intent recognition in ConvXAI, covering 31 intents, seven of which require filling multiple slots. Subsequently, we enhance an existing parsing approach by incorporating template validations, and conduct an evaluation of several LLMs on CoXQL using different parsing strategies. We conclude that the improved parsing approach (MP+) surpasses the performance of previous approaches. We also discover that intents with multiple slots remain highly challenging for LLMs.",0 "The exponential growth of the Internet of Things (IoT) has significantly increased the complexity and volume of cybersecurity threats, necessitating the development of advanced, scalable, and interpretable security frameworks. This paper presents an innovative, comprehensive framework for real-time IoT attack detection and response that leverages Machine Learning (ML), Explainable AI (XAI), and Large Language Models (LLM). By integrating XAI techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) with a model-independent architecture, we ensure our framework's adaptability across various ML algorithms. Additionally, the incorporation of LLMs enhances the interpretability and accessibility of detection decisions, providing system administrators with actionable, human-understandable explanations of detected threats. Our end-to-end framework not only facilitates a seamless transition from model development to deployment but also represents a real-world application capability that is often lacking in existing research. Based on our experiments with the CIC-IOT-2023 dataset \cite{neto2023ciciot2023}, Gemini and OPENAI LLMS demonstrate unique strengths in attack mitigation: Gemini offers precise, focused strategies, while OPENAI provides extensive, in-depth security measures. Incorporating SHAP and LIME algorithms within XAI provides comprehensive insights into attack detection, emphasizing opportunities for model improvement through detailed feature analysis, fine-tuning, and the adaptation of misclassifications to enhance accuracy.",0 "Stars and planets form in regions of enhanced stellar density, subjecting protoplanetary discs to gravitational perturbations from neighbouring stars. 
Observations in the Taurus star-forming region have uncovered evidence of at least three recent star-disc encounters that have truncated discs (HV/DO Tau, RW Aurigae, UX Tau), raising questions about the frequency of such events. We aim to assess the probability of observing truncating star-disc encounters in Taurus. We generate a physically motivated dynamical model including binaries and spatial-kinematic substructure to follow the historical dynamical evolution and stellar encounters in the Taurus star-forming region. We track the star-disc encounters and outer disc radius evolution over the lifetime of Taurus. A quarter of discs are truncated below 30 au by dynamical encounters, but this truncation mostly occurs in binaries over the course of a few orbital periods, on a time-scale $\lesssim 0.1$ Myr. Nonetheless, some truncating encounters still occur up to the present age of Taurus. Strongly truncating encounters (ejecting $\gtrsim 10$ percent of the disc mass) occur at a rate $\sim 10$ Myr$^{-1}$, sufficient to explain the encounter between HV and DO Tau $\sim 0.1$ Myr ago. If encounters that eject only $\sim 1$ percent of the disc mass are responsible for RW Aurigae and UX Tau, then they are also expected with encounter rate $\Gamma_\mathrm{enc} \sim 100{-}200$ Myr$^{-1}$. However, the observed sample of recent encounters is probably incomplete, since these examples occurred in systems that are not consistent with random drawing from the mass function. One more observed example would statistically imply additional physics, such as replenishment of the outer disc material.",0 "Recently, Large Language Models (LLMs) have been widely studied by researchers for their roles in various downstream NLP tasks. As a fundamental task in the NLP field, Chinese Grammatical Error Correction (CGEC) aims to correct all potential grammatical errors in the input sentences. Previous studies have shown that LLMs' performance as correctors on CGEC remains unsatisfactory due to its challenging task focus. To promote the CGEC field to better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored in CGEC. Considering the rich grammatical knowledge stored in LLMs and their powerful semantic understanding capabilities, we utilize LLMs as explainers to provide explanation information for the small CGEC models during error correction to enhance performance. We also use LLMs as evaluators to bring more reasonable CGEC evaluations, thus alleviating the troubles caused by the subjectivity of the CGEC task. In particular, our work is also an active exploration of how LLMs and small models better collaborate in downstream tasks. Extensive experiments and detailed analyses on widely used datasets verify the effectiveness of our intuition and the proposed methods.",0 "Rationalization models, which select a subset of the input text as a rationale (crucial for humans to understand and trust predictions), have recently emerged as a prominent research area in eXplainable Artificial Intelligence. However, most previous studies mainly focus on improving the quality of the rationale, ignoring its robustness to malicious attacks. Specifically, whether rationalization models can still generate high-quality rationales under adversarial attack remains unknown. 
To explore this, this paper proposes UAT2E, which aims to undermine the explainability of rationalization models without altering their predictions, thereby eliciting distrust in these models from human users. UAT2E employs the gradient-based search on triggers and then inserts them into the original input to conduct both the non-target and target attack. Experimental results on five datasets reveal the vulnerability of rationalization models in terms of explanation, where they tend to select more meaningless tokens under attacks. Based on this, we make a series of recommendations for improving rationalization models in terms of explanation.",0 "Although explainability is essential in the clinical diagnosis, most deep learning models still function as black boxes without elucidating their decision-making process. In this study, we investigate the explainable model development that can mimic the decision-making process of human experts by fusing the domain knowledge of explicit diagnostic criteria. We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis. Explicd initiates its process by querying domain knowledge from either large language models (LLMs) or human experts to establish diagnostic criteria across various concept axes (e.g., color, shape, texture, or specific patterns of diseases). By leveraging a pretrained vision-language model, Explicd injects these criteria into the embedding space as knowledge anchors, thereby facilitating the learning of corresponding visual concepts within medical images. The final diagnostic outcome is determined based on the similarity scores between the encoded visual concepts and the textual criteria embeddings. Through extensive evaluation of five medical image classification benchmarks, Explicd has demonstrated its inherent explainability and extends to improve classification performance compared to traditional black-box models. Code is available at \url{https://github.com/yhygao/Explicd}.",0 "In this study, we investigated the effect of specific noise realizations on the discrimination of two consonants, /b/ and /d/. For this purpose, we collected data from twelve participants, who listened to the words /aba/ or /ada/ embedded in one of three background noises. All noises had the same long-term spectrum but differed in the amount of random envelope fluctuations. The data were analyzed on a trial-by-trial basis using the reverse-correlation method. The results revealed that it is possible to predict the categorical responses with better-than-chance accuracy purely based on the spectro-temporal distribution of the random envelope fluctuations of the corresponding noises, without taking into account the actual targets or the signal-to-noise ratios used in the trials. The effect of the noise fluctuations explained on average 8.1% of the participants' responses in white noise, a proportion that increased up to 13.3% for noises with a larger amount of fluctuations. The estimated time-frequency weights revealed that the measured effect originated from confusions between noise fluctuations and relevant acoustic cues from the target words. Substantially similar conclusions were obtained from simulations using an artificial listener. We argue that this token-specific effect of noise is a form of informational masking.",2 "We introduce a method for computing immediately human interpretable yet accurate classifiers from tabular data. 
The classifiers obtained are short Boolean formulas, computed via first discretizing the original data and then using feature selection coupled with a very fast algorithm for producing the best possible Boolean classifier for the setting. We demonstrate the approach via 12 experiments, obtaining results with accuracies comparable to ones obtained via random forests, XGBoost, and existing results for the same datasets in the literature. In most cases, the accuracy of our method is in fact similar to that of the reference methods, even though the main objective of our study is the immediate interpretability of our classifiers. We also prove a new result on the probability that the classifier we obtain from real-life data corresponds to the ideally best classifier with respect to the background distribution the data comes from.",0 "Goal recognition (GR) involves inferring an agent's unobserved goal from a sequence of observations. This is a critical problem in AI with diverse applications. Traditionally, GR has been addressed using 'inference to the best explanation' or abduction, where hypotheses about the agent's goals are generated as the most plausible explanations for observed behavior. Alternatively, some approaches enhance interpretability by ensuring that an agent's behavior aligns with an observer's expectations or by making the reasoning behind decisions more transparent. In this work, we tackle a different challenge: explaining the GR process in a way that is comprehensible to humans. We introduce and evaluate an explainable model for goal recognition (GR) agents, grounded in the theoretical framework and cognitive processes underlying human behavior explanation. Drawing on insights from two human-agent studies, we propose a conceptual framework for human-centered explanations of GR. Using this framework, we develop the eXplainable Goal Recognition (XGR) model, which generates explanations for both why and why not questions. We evaluate the model computationally across eight GR benchmarks and through three user studies. The first study assesses the efficiency of generating human-like explanations within the Sokoban game domain, the second examines perceived explainability in the same domain, and the third evaluates the model's effectiveness in aiding decision-making in illegal fishing detection. Results demonstrate that the XGR model significantly enhances user understanding, trust, and decision-making compared to baseline models, underscoring its potential to improve human-agent collaboration.",2 "The proliferation of social media platforms has afforded social scientists unprecedented access to vast troves of data on human interactions, facilitating the study of online behavior at an unparalleled scale. These platforms typically structure conversations as threads, forming tree-like structures known as ""discussion trees."" This paper examines the structural properties of online discussions on Reddit by analyzing both global (community-level) and local (post-level) attributes of these discussion trees. We conduct a comprehensive statistical analysis of a year's worth of Reddit data, encompassing a quarter of a million posts and several million comments. Our primary objective is to disentangle the relative impacts of global and local properties and evaluate how specific features shape discussion tree structures. The results reveal that both local and global features contribute significantly to explaining structural variation in discussion trees. 
However, local features, such as post content and sentiment, collectively have a greater impact, accounting for a larger proportion of variation in the width, depth, and size of discussion trees. Our analysis also uncovers considerable heterogeneity in the impact of various features on discussion structures. Notably, certain global features play crucial roles in determining specific discussion tree properties. These features include the subreddit's topic, age, popularity, and content redundancy. For instance, posts in subreddits focused on politics, sports, and current events tend to generate deeper and wider discussion trees. This research enhances our understanding of online conversation dynamics and offers valuable insights for both content creators and platform designers. By elucidating the factors that shape online discussions, our work contributes to ongoing efforts to improve the quality and effectiveness of digital discourse.",0 "Much of explainable AI research treats explanations as a means for model inspection. Yet, this neglects findings from human psychology that describe the benefit of self-explanations in an agent's learning process. Motivated by this, we introduce a novel workflow in the context of image classification, termed Learning by Self-Explaining (LSX). LSX utilizes aspects of self-refining AI and human-guided explanatory machine learning. The underlying idea is that a learner model, in addition to optimizing for the original predictive task, is further optimized based on explanatory feedback from an internal critic model. Intuitively, a learner's explanations are considered ""useful"" if the internal critic can perform the same task given these explanations. We provide an overview of important components of LSX and, based on this, perform extensive experimental evaluations via three different example instantiations. Our results indicate improvements via Learning by Self-Explaining on several levels: in terms of model generalization, reducing the influence of confounding factors, and providing more task-relevant and faithful model explanations. Overall, our work provides evidence for the potential of self-explaining within the learning phase of an AI model.",0 "We explore which linguistic factors -- at the sentence and token level -- play an important role in influencing language model predictions, and investigate whether these are reflective of results found in humans and human corpora (Gries and Kootstra, 2017). We make use of the structural priming paradigm, where recent exposure to a structure facilitates processing of the same structure. We don't only investigate whether, but also where priming effects occur, and what factors predict them. We show that these effects can be explained via the inverse frequency effect, known in human priming, where rarer elements within a prime increase priming effects, as well as lexical dependence between prime and target. Our results provide an important piece in the puzzle of understanding how properties within their context affect structural prediction in language models.",0 "Despite the widespread adoption of open source software (OSS), its sustainability remains a critical concern, particularly in light of security vulnerabilities and the often inadequate end-of-service (EoS) processes for OSS projects as they decline. 
Existing models of OSS community participation, like the Onion model and the episodic contribution model, offer valuable insights but are fundamentally incompatible and fail to provide a comprehensive picture of contributor engagement with OSS projects. This paper addresses these gaps by proposing the CROSS model, a novel contributor-project interaction lifecycle model for open source, which delineates the various lifecycle stages of contributor-project interaction, along with the driving and retaining forces pertinent to each stage. By synthesizing existing research on OSS communities, organizational behavior, and human resource development, it explains a range of archetypal cases of contributor engagement and highlights research gaps, especially in EoS/offboarding scenarios. The CROSS model provides a foundation for understanding and enhancing the sustainability of OSS projects, offering a robust foundation for future research and practical application.",0 "The recent advancements in artificial intelligence (AI), with the release of several large models having only query access, make a strong case for explainability of deep models in a post-hoc gradient free manner. In this paper, we propose a framework, named distillation aided explainability (DAX), that attempts to generate a saliency-based explanation in a model agnostic gradient free application. The DAX approach poses the problem of explanation in a learnable setting with a mask generation network and a distillation network. The mask generation network learns to generate the multiplier mask that finds the salient regions of the input, while the student distillation network aims to approximate the local behavior of the black-box model. We propose a joint optimization of the two networks in the DAX framework using the locally perturbed input samples, with the targets derived from input-output access to the black-box model. We extensively evaluate DAX across different modalities (image and audio), in a classification setting, using a diverse set of evaluations (intersection over union with ground truth, deletion based and subjective human evaluation based measures) and benchmark it with respect to $9$ different methods. In these evaluations, the DAX significantly outperforms the existing approaches on all modalities and evaluation metrics.",2 "Black box neural networks are an indispensable part of modern robots. Nevertheless, deploying such high-stakes systems in real-world scenarios poses significant challenges when the stakeholders, such as engineers and legislative bodies, lack insights into the neural networks' decision-making process. Presently, explainable AI is primarily tailored to natural language processing and computer vision, falling short in two critical aspects when applied in robots: grounding in decision-making tasks and the ability to assess trustworthiness of their explanations. In this paper, we introduce a trustworthy explainable robotics technique based on human-interpretable, high-level concepts that attribute to the decisions made by the neural network. Our proposed technique provides explanations with associated uncertainty scores by matching neural network's activations with human-interpretable visualizations. 
To validate our approach, we conducted a series of experiments with various simulated and real-world robot decision-making models, demonstrating the effectiveness of the proposed approach as a post-hoc, human-friendly robot learning diagnostic tool.",0 "Recommender systems, while a powerful decision making tool, are often operationalized as black box models, such that their AI algorithms are not accessible or interpretable by human operators. This in turn can cause confusion and frustration for the operator and result in unsatisfactory outcomes. While the field of explainable AI has made remarkable strides in addressing this challenge by focusing on interpreting and explaining the algorithms to human operators, there are remaining gaps in the human's understanding of the recommender system. This paper investigates the relative impact of using context, properties of the decision making task and environment, to align human and AI algorithm understanding of the state of the world, i.e. judgment, to improve joint human-recommender performance as compared to utilizing post-hoc algorithmic explanations. We conducted an empirical, between-subjects experiment in which participants were asked to work with an automated recommender system to complete a decision making task. We manipulated the method of transparency (shared contextual information to support shared judgment vs algorithmic explanations) and recorded the human's understanding of the task, the recommender system, and their overall performance. We found that both techniques yielded equivalent agreement on final decisions. However, those who saw task context had less tendency to over-rely on the recommender system and were able to better pinpoint in what conditions the AI erred. Both methods improved participants' confidence in their own decision making, and increased mental demand equally and frustration negligibly. These results present an alternative to post-hoc explanations for improving team performance and illustrate the impact of judgment on human cognition in working with recommender systems.",2 "Wildfires pose a significant natural disaster risk to populations and contribute to accelerated climate change. As wildfires are also affected by climate change, extreme wildfires are becoming increasingly frequent. Although they occur less frequently globally than those sparked by human activities, lightning-ignited wildfires play a substantial role in carbon emissions and account for the majority of burned areas in certain regions. While existing computational models, especially those based on machine learning, aim to predict lightning-ignited wildfires, they are typically tailored to specific regions with unique characteristics, limiting their global applicability. In this study, we present machine learning models designed to characterize and predict lightning-ignited wildfires on a global scale. Our approach involves classifying lightning-ignited versus anthropogenic wildfires, and estimating with high accuracy the probability of lightning igniting a fire based on a wide spectrum of factors such as meteorological conditions and vegetation. Utilizing these models, we analyze seasonal and spatial trends in lightning-ignited wildfires, shedding light on the impact of climate change on this phenomenon. We analyze the influence of various features on the models using eXplainable Artificial Intelligence (XAI) frameworks. Our findings highlight significant global differences between anthropogenic and lightning-ignited wildfires. 
Moreover, we demonstrate that, even over a short time span of less than a decade, climate change has steadily increased the global risk of lightning-ignited wildfires. This distinction underscores the imperative need for dedicated predictive models and fire weather indices tailored specifically to each type of wildfire.",0 "Over the past decade, a crisis of confidence in scientific literature has gained attention, particularly in the West. In response, we have seen changes in policy and practice amongst individual researchers and institutions. Greater attention is given to the transparency of workflows and the appropriate use of statistical methods. Advances in scholarly big data and machine learning have led to the development of AI-driven tools for the evaluation of published findings. In this study, we conduct 19 semi-structured interviews with Indian researchers to understand their perspectives on challenges and opportunities for AI technologies to improve confidence in published research. Our findings highlight the importance of social and cultural context for the design and deployment of AI tools for research assessment. Our work suggests that such technologies must work alongside rather than replace human research assessment mechanisms. They must be explainable and situated within well-functioning human-centered peer review processes.",2 "Recent developments in language models have created new opportunities in air traffic control studies. The current focus is primarily on text and language-based use cases. However, these language models may offer a higher potential impact in the air traffic control domain, thanks to their ability to interact with air traffic environments in an embodied agent form. They also provide a language-like reasoning capability to explain their decisions, which has been a significant roadblock for the implementation of automatic air traffic control. This paper investigates the application of a language model-based agent with function-calling and learning capabilities to resolve air traffic conflicts without human intervention. The main components of this research are foundational large language models, tools that allow the agent to interact with the simulator, and a new concept, the experience library. An innovative part of this research, the experience library, is a vector database that stores synthesized knowledge that agents have learned from interactions with the simulations and language models. To evaluate the performance of our language model-based agent, both open-source and closed-source models were tested. The results of our study reveal significant differences in performance across various configurations of the language model-based agents. The best-performing configuration was able to solve all but one of the 120 imminent conflict scenarios, including up to four aircraft at the same time. Most importantly, the agents are able to provide human-level text explanations on traffic situations and conflict resolution strategies.",0 "Many approaches to robot learning begin by inferring a reward function from a set of human demonstrations. To learn a good reward, it is necessary to determine which features of the environment are relevant before determining how these features should be used to compute reward. End-to-end methods for joint feature and reward learning (e.g., using deep networks or program synthesis techniques) often yield brittle reward functions that are sensitive to spurious state features. 
By contrast, humans can often generalizably learn from a small number of demonstrations by incorporating strong priors about what features of a demonstration are likely meaningful for a task of interest. How do we build robots that leverage this kind of background knowledge when learning from new demonstrations? This paper describes a method named ALGAE (Adaptive Language-Guided Abstraction from [Contrastive] Explanations), which alternates between using language models to iteratively identify human-meaningful features needed to explain demonstrated behavior and using standard inverse reinforcement learning techniques to assign weights to these features. Experiments across a variety of both simulated and real-world robot environments show that ALGAE learns generalizable reward functions defined on interpretable features using only small numbers of demonstrations. Importantly, ALGAE can recognize when features are missing, then extract and define those features without any human input -- making it possible to quickly and efficiently acquire rich representations of user behavior.",0 "While models in audio and speech processing are becoming deeper and more end-to-end, they consequently require expensive training on large datasets, and are often brittle. We build on a classical model of human hearing and make it differentiable, so that we can combine traditional explainable biomimetic signal processing approaches with deep-learning frameworks. This allows us to arrive at an expressive and explainable model that is easily trained on modest amounts of data. We apply this model to audio processing tasks, including classification and enhancement. Results show that our differentiable model surpasses black-box approaches in terms of computational efficiency and robustness, even with little training data. We also discuss other potential applications.",0 "This is a pedagogical introduction to the physics of confinement on $R^3 \times S^1$, using $SU(2)$ Yang-Mills with massive or massless adjoint fermions as the prime example; at the end, we also add fundamental flavours. The small-$S^1$ limit is remarkable, allowing for controlled semiclassical determination of the nonperturbative physics in these, mostly non-supersymmetric, theories. We begin by reviewing the Polyakov confinement mechanism on $R^3$. Moving on to $R^3 \times S^1$, we show how introducing adjoint fermions stabilizes center symmetry, leading to abelianization and semiclassical calculability. We explain how monopole-instantons and twisted monopole-instantons arise. We describe the role of various novel topological excitations in extending Polyakov's confinement to the locally four-dimensional case, discuss the nature of the confining string, and the $\theta$-angle dependence. We study the global symmetry realization and, when available, present evidence for the absence of phase transitions as a function of the $S^1$ size. As our aim is not to cover all work on the subject, but to prepare the interested reader for its study, we also include brief descriptions of topics not covered in detail: the necessity for analytic continuation of path integrals, the study of more general theories, and the 't Hooft anomalies involving higher-form symmetries.",0 "In visual decision making, high-level features, such as object categories, have a strong influence on choice. 
However, the impact of low-level features on behavior is less understood partly due to the high correlation between high- and low-level features in the stimuli presented (e.g., objects of the same category are more likely to share low-level features). To disentangle these effects, we propose a method that de-correlates low- and high-level visual properties in a novel set of stimuli. Our method uses two Convolutional Neural Networks (CNNs) as candidate models of the ventral visual stream: the CORnet-S that has high neural predictivity in high-level, IT-like responses and the VGG-16 that has high neural predictivity in low-level responses. Triplets (root, image1, image2) of stimuli are parametrized by the level of low- and high-level similarity of images extracted from the different layers. These stimuli are then used in a decision-making task where participants are tasked to choose the most similar-to-the-root image. We found that different networks show differing abilities to predict the effects of low-versus-high-level similarity: while CORnet-S outperforms VGG-16 in explaining human choices based on high-level similarity, VGG-16 outperforms CORnet-S in explaining human choices based on low-level similarity. Using Brain-Score, we observed that the behavioral prediction abilities of different layers of these networks qualitatively corresponded to their ability to explain neural activity at different levels of the visual hierarchy. In summary, our algorithm for stimulus set generation enables the study of how different representations in the visual stream affect high-level cognitive behaviors.",1 "Muscles can store a large amount of genetic information, and in order to transform humans into computers, we need to start by increasing muscle tension. When people with cancer go on happy trips, some cancers often heal without treatment; Rhinitis can cause blockage of the nostrils, but after running, the nostrils naturally ventilate. Both are related to exercise, and the mystery behind them can treat both conditions. Cancer belongs to systemic diseases, and the eradication method for systemic diseases should start from the entire body system, treat the symptoms and prevent recurrence. This article uses special exercise methods and detailed methods to treat diseases, and finds that treating diseases from the perspective of the human system is indeed effective. This article adopts a comparative experimental method to compare the changes in the body before and after. Through this article, it is concluded that exercise and certain methods can cure mild rhinitis and promote rapid ventilation; Explaining from the perspective of muscle pulling force that older individuals are more prone to developing cellular variant cancer; Enhancing muscle tension in the human body can promote the cure of some cancers",0 "Millisecond pulsars are subject to accelerations in globular clusters (GCs) that manifest themselves in both the first and second spin period time derivatives, and can be used to explore the mass distribution of the potentials they inhabit. Here we report on over 20 yr of pulsar timing observations of five millisecond radio pulsars in the core of the core-collapse GC NGC 6752 with the Parkes (Murriyang) and MeerKAT radio telescopes, which have allowed us to measure the proper motions, positions, and first and second time derivatives of the pulsars. 
The pulsar timing parameters indicate that all the pulsars in the core experience accelerations and jerks that can be explained only if an amount of nonluminous mass of at least 2.56x10^3 M_SUN is present in the core of NGC 6752. On the other hand, our studies highly disfavor the presence of an intermediate-mass black hole at the center of the cluster, with a mass equal to or greater than ~3000M_SUN.",0 "Nonlinear dynamical systems exposed to changing forcing can exhibit catastrophic transitions between alternative and often markedly different states. The phenomenon of critical slowing down (CSD) can be used to anticipate such transitions if caused by a bifurcation and if the change in forcing is slow compared to the internal time scale of the system. However, in many real-world situations, these assumptions are not met and transitions can be triggered because the forcing exceeds a critical rate. For example, given the pace of anthropogenic climate change in comparison to the internal time scales of key Earth system components, such as the polar ice sheets or the Atlantic Meridional Overturning Circulation, such rate-induced tipping poses a severe risk. Moreover, depending on the realisation of random perturbations, some trajectories may transition across an unstable boundary, while others do not, even under the same forcing. CSD-based indicators generally cannot distinguish these cases of noise-induced tipping versus no tipping. This severely limits our ability to assess the risks of tipping, and to predict individual trajectories. To address this, we make a first attempt to develop a deep learning framework to predict transition probabilities of dynamical systems ahead of rate-induced transitions. Our method issues early warnings, as demonstrated on three prototypical systems for rate-induced tipping, subjected to time-varying equilibrium drift and noise perturbations. Exploiting explainable artificial intelligence methods, our framework captures the fingerprints necessary for early detection of rate-induced tipping, even in cases of long lead times. Our findings demonstrate the predictability of rate-induced and noise-induced tipping, advancing our ability to determine safe operating spaces for a broader class of dynamical systems than possible so far.",0 "Active fluctuations are detected in a growing number of systems due to self-propulsion mechanisms or collisions with active environment. They drive the system far from equilibrium and can induce phenomena which at equilibrium states are forbidden by e.g. fluctuation-dissipation relations and detailed balance symmetry. Understanding of their role in living matter is emerging as a challenge for physics. Here we demonstrate a paradoxical effect in which a free particle transport induced by active fluctuations can be boosted by many orders of magnitude when the particle is additionally subjected to a periodic potential. In contrast, within the realm of only thermal fluctuations the velocity of a free particle exposed to a bias is reduced when the periodic potential is switched on. The presented mechanism is significant for understanding nonequilibrium environments such as living cells where it can explain from a fundamental point of view why spatially periodic structures known as microtubules are necessary to generate impressively effective intracellular transport. Our findings can be readily corroborated experimentally e.g. 
in a setup comprising a colloidal particle in an optically generated periodic potential.",0 "Recent state-of-the-art authorship attribution methods learn authorship representations of texts in a latent, non-interpretable space, hindering their usability in real-world applications. Our work proposes a novel approach to interpreting these learned embeddings by identifying representative points in the latent space and utilizing LLMs to generate informative natural language descriptions of the writing style of each point. We evaluate the alignment of our interpretable space with the latent one and find that it achieves the best prediction agreement compared to other baselines. Additionally, we conduct a human evaluation with 254 participants to assess the quality of these style descriptions, validating their utility as explanations for the latent space. Finally, we investigate whether human performance on the challenging AA task improves when aided by our system's explanations, finding an average improvement of around +20% in accuracy.",2 "In the solar atmosphere, flux ropes are subject to current-driven instabilities that are crucial in driving plasma eruptions, ejections and heating. A typical ideal magnetohydrodynamics (MHD) instability developing in flux ropes is the helical kink, which twists the flux rope axis. The growth of this instability can trigger magnetic reconnection, which can explain the formation of chromospheric jets and spicules, but its development has never been investigated in a partially-ionised plasma (PIP). Here we study the kink instability in PIP to understand how it develops in the solar chromosphere, where it is affected by charge-neutral interactions. Partial ionisation speeds up the onset of the non-linear phase of the instability, as the plasma $\beta$ of the isolated plasma is smaller than the total plasma $\beta$ of the bulk. The distribution of the released magnetic energy changes in fully and partially-ionised plasmas, with a larger increase of internal energy associated with the PIP cases. The temperature in PIP increases faster also due to heating terms from the two-fluid dynamics. PIP effects trigger the kink instability on shorter time scales, which is reflected in more explosive chromospheric flux rope dynamics. These results are crucial to understand the dynamics of small-scale chromospheric structures - mini-filament eruptions - that thus far have been largely neglected but could significantly contribute to chromospheric heating and jet formation.",0 "Seismic inversion is essential for geophysical exploration and geological assessment, but it is inherently subject to significant uncertainty. This uncertainty stems primarily from the limited information provided by observed seismic data, which is largely a result of constraints in data collection geometry. As a result, multiple plausible velocity models can often explain the same set of seismic observations. In deep learning-based seismic inversion, uncertainty arises from various sources, including data noise, neural network design and training, and inherent data limitations. This study introduces a novel approach to uncertainty quantification in seismic inversion by integrating ensemble methods with importance sampling. By leveraging an ensemble approach in combination with importance sampling, we enhance the accuracy of uncertainty analysis while maintaining computational efficiency. 
The method involves initializing each model in the ensemble with different weights, introducing diversity in predictions and thereby improving the robustness and reliability of the inversion outcomes. Additionally, the use of importance sampling weights the contribution of each ensemble sample, allowing us to use a limited number of ensemble samples to obtain more accurate estimates of the posterior distribution. Our approach enables more precise quantification of uncertainty in velocity models derived from seismic data. By utilizing a limited number of ensemble samples, this method achieves an accurate and reliable assessment of uncertainty, ultimately providing greater confidence in seismic inversion results.",0 "Purpose: The teacher role in the classroom can explain important aspects of the student's school experience. The teacher-student relationship, a central dimension of social capital, influences students' engagement, and the teaching style plays an important role in student outcomes. But there is scarce literature that links teaching styles to the teacher-student relationship. This article aims to: 1) analyze whether there is a relationship between teaching styles and the type of relationship perceived by students and 2) determine the extent to which students' perceptions vary according to their profile. Design/methodology/approach: A structural equation model with four latent variables is estimated: two for the teacher-student relationship (emotional vs. educational) and two for the teaching styles (directive vs. participative), with information for 21,126 sixth-grade primary students in 2019 in Spain. Findings: Teacher-student relationships and teaching styles are interconnected. The participative style implies a better relationship. The perceptions of the teacher are heterogeneous, depending on gender (girls perceive it more clearly than boys) and on educational background (children from lower educational backgrounds perceive both types of teaching styles more clearly). Originality/value: The analysis is based on the point of view of the addressee of the teacher's work, i.e. the student. It provides a model that can be replicated in any other education system. The latent variables, based on a periodically administered questionnaire, could be estimated with data from diagnostic assessments in other countries, which in turn would allow the formulation of context-specific educational policy proposals that take into account student feedback.",2 "Evaluating the quality of explanations in Explainable Artificial Intelligence (XAI) is to this day a challenging problem, with ongoing debate in the research community. While some advocate for establishing standardized offline metrics, others emphasize the importance of human-in-the-loop (HIL) evaluation. Here we propose an experimental design to evaluate the potential of XAI in human-AI collaborative settings as well as the potential of XAI for didactics. In a user study with 1200 participants, we investigate the impact of explanations on human performance on a challenging visual task - annotation of biological species in complex taxonomies. Our results demonstrate the potential of XAI in complex visual annotation tasks: users become more accurate in their annotations and demonstrate less uncertainty with AI assistance. The increase in accuracy was, however, not significantly different when users were shown the mere prediction of the model compared to when an explanation was also provided. 
We also find negative effects of explanations: users tend to replicate the model's predictions more often when shown explanations, even when those predictions are wrong. When evaluating the didactic effects of explanations in collaborative human-AI settings, we find that users' annotations are not significantly better after performing annotation with AI assistance. This suggests that explanations in visual human-AI collaboration do not appear to induce lasting learning effects. All code and experimental data can be found in our GitHub repository: https://github.com/TeodorChiaburu/beexplainable.",2 "Recommender systems have become integral to our digital experiences, from online shopping to streaming platforms. Still, the rationale behind their suggestions often remains opaque to users. While some systems employ a graph-based approach, offering inherent explainability through paths associating recommended items and seed items, non-experts could not easily understand these explanations. A popular alternative is to convert graph-based explanations into textual ones using a template and an algorithm, which we denote here as ''template-based'' explanations. Yet, these can sometimes come across as impersonal or uninspiring. A novel method would be to employ large language models (LLMs) for this purpose, which we denote as ''LLM-based''. To assess the effectiveness of LLMs in generating more resonant explanations, we conducted a pilot study with 25 participants. They were presented with three explanations: (1) traditional template-based, (2) LLM-based rephrasing of the template output, and (3) purely LLM-based explanations derived from the graph-based explanations. Although subject to high variance, preliminary findings suggest that LLM-based explanations may provide a richer and more engaging user experience, further aligning with user expectations. This study sheds light on the potential limitations of current explanation methods and offers promising directions for leveraging large language models to improve user satisfaction and trust in recommender systems.",1 "Humans activate muscles to shape the mechanical interaction with their environment, but can they harness this control mechanism to best sense the environment? We investigated how 109 participants adapt their muscle activation to visual and haptic information when tracking a randomly moving target with a robotic interface. The results exhibit a differentiated effect of these sensory modalities, where participants' muscle cocontraction increases with the haptic noise and decreases with the visual noise, in apparent contradiction to previous results. These results can be explained, and reconciled with previous findings, when considering muscle spring like mechanics, where stiffness increases with cocontraction to regulate motion guidance. Increasing cocontraction to more closely follow the motion plan favors accurate visual over haptic information, while decreasing it avoids injecting visual noise and relies on accurate haptic information. We formulated this active sensing mechanism as the optimization of visuo-haptic information and effort. 
This OIE model can explain the adaptation of muscle activity to unimodal and multimodal sensory information when interacting with fixed or dynamic environments, or with another human, and can be used to optimize human-robot interaction.",2 "Ganymede's aurora are the product of complex interactions between its intrinsic magnetosphere and the surrounding Jovian plasma environment and can be used to derive both atmospheric composition and density. In this study, we analyzed a time-series of Ganymede's optical aurora taken with Keck I/HIRES during eclipse by Jupiter on 2021-06-08 UTC, one day after the Juno flyby of Ganymede. The data had sufficient signal-to-noise in individual 5-minute observations to allow for the first high cadence analysis of the spatial distribution of the aurora brightness and the ratio between the 630.0 and 557.7 nm disk-integrated auroral brightnesses -- a quantity diagnostic of the relative abundances of O, O$_2$ and H$_2$O in Ganymede's atmosphere. We found that the hemisphere closer to the centrifugal equator of Jupiter's magnetosphere (where electron number density is highest) was up to twice as bright as the opposing hemisphere. The dusk (trailing) hemisphere, subjected to the highest flux of charged particles from Jupiter's magnetosphere, was also consistently almost twice as bright as the dawn (leading) hemisphere. We modeled emission from simulated O$_2$ and H$_2$O atmospheres during eclipse and found that if Ganymede hosts an H$_2$O sublimation atmosphere in sunlight, it must collapse on a faster timescale than expected to explain its absence in our data given our current understanding of Ganymede's surface properties.",0 "Reinforcement Learning (RL) is a learning paradigm in which the agent learns from its environment through trial and error. Deep reinforcement learning (DRL) algorithms represent the agent's policies using neural networks, making their decisions difficult to interpret. Explaining the behaviour of DRL agents is necessary to advance user trust, increase engagement, and facilitate integration with real-life tasks. Semifactual explanations aim to explain an outcome by providing ""even if"" scenarios, such as ""even if the car were moving twice as slowly, it would still have to swerve to avoid crashing"". Semifactuals help users understand the effects of different factors on the outcome and support the optimisation of resources. While extensively studied in psychology and even utilised in supervised learning, semifactuals have not been used to explain the decisions of RL systems. In this work, we develop a first approach to generating semifactual explanations for RL agents. We start by defining five properties of desirable semifactual explanations in RL and then introducing SGRL-Rewind and SGRL-Advance, the first algorithms for generating semifactual explanations in RL. We evaluate the algorithms in two standard RL environments and find that they generate semifactuals that are easier to reach, represent the agent's policy better, and are more diverse compared to baselines. Lastly, we conduct and analyse a 15-user study to assess the participant's perception of semifactual explanations of the agent's actions.",1 "The need for a systematic approach to risk assessment has increased in recent years due to the ubiquity of autonomous systems that alter our day-to-day experiences and their need for safety, e.g., for self-driving vehicles, mobile service robots, and bipedal robots. 
These systems are expected to function safely in unpredictable environments and interact seamlessly with humans, whose behavior is notably challenging to forecast. We present a survey of risk-aware methodologies for autonomous systems. We adopt a contemporary risk-aware approach to mitigate rare and detrimental outcomes by advocating the use of tail risk measures, a concept borrowed from financial literature. This survey will introduce these measures and explain their relevance in the context of robotic systems for planning, control, and verification applications.",0 "Swift J1858.6$-$0814 (hereafter J1858) is a transient neutron star low-mass X-ray binary (NS LMXB). There is controversy regarding its donor mass derived from observations and theoretical calculations. In this paper, we adopt seven magnetic braking (MB) prescriptions suggested in the literature and different metallicity $Z$ to simulate the evolution of the LMXB. Our results show that, employing the MB model proposed by \citet{2012ApJ...746...43R} (""rm12""), the Convection And Rotation Boosted (""carb"") model \citep{2019ApJ...886L..31V}, as well as the Intermediate (""inter"") and Convection-boosted (""cboost"") models in \citet{2019MNRAS.483.5595V} can match (part of) the observational parameters of J1858 well. We then apply our method to other observed LMXBs and find that the ""rm12"" and ""inter"" MB laws are most promising in explaining transient LMXBs. In comparison, the simulations with the ""cboost"" and ""carb"" MB laws are more inclined to reproduce persistent LMXBs and ultra-compact X-ray binaries (UCXBs), respectively. Our results, though subject to computational and/or observational bias, show that it is challenging to find a unified MB law that applies to the NS LMXB sub-populations simultaneously, indicating our lack of understanding of the true MB law. In addition, we explore the influence of various MB laws on the magnitude of the bifurcation periods in LMXBs.",0 "The ability to distinguish whether an image is generated by artificial intelligence (AI) is a crucial ingredient in human intelligence, usually accompanied by a complex and dialectical forensic and reasoning process. However, current fake image detection models and databases focus on binary classification without understandable explanations for the general populace. This weakens the credibility of authenticity judgment and may conceal potential model biases. Meanwhile, large multimodal models (LMMs) have exhibited immense visual-text capabilities on various tasks, bringing the potential for explainable fake image detection. Therefore, we pioneer the probe of LMMs for explainable fake image detection by presenting a multimodal database encompassing textual authenticity descriptions, the FakeBench. For construction, we first introduce a fine-grained taxonomy of generative visual forgery concerning human perception, based on which we collect forgery descriptions in human natural language with a human-in-the-loop strategy. FakeBench examines LMMs with four evaluation criteria: detection, reasoning, interpretation and fine-grained forgery analysis, to obtain deeper insights into image authenticity-relevant capabilities. Experiments on various LMMs confirm their merits and demerits in different aspects of fake image detection tasks. This research presents a paradigm shift towards transparency for the fake image detection area and reveals the need for greater emphasis on forensic elements in visual-language research and AI risk control. 
FakeBench will be available at https://github.com/Yixuan423/FakeBench.",0 "We present a computational explainability approach for human comparison tasks, using Alignment Importance Score (AIS) heatmaps derived from deep-vision models. The AIS reflects a feature-map's unique contribution to the alignment between Deep Neural Network's (DNN) representational geometry and that of humans. We first validate the AIS by showing that prediction of out-of-sample human similarity judgments is improved when constructing representations using only higher-scoring AIS feature maps identified from a training set. We then compute image-specific heatmaps that visually indicate the areas that correspond to feature-maps with higher AIS scores. These maps provide an intuitive explanation of which image areas are more important when it is compared to other images in a cohort. We observe a correspondence between these heatmaps and saliency maps produced by a gaze-prediction model. However, in some cases, meaningful differences emerge, as the dimensions relevant for comparison are not necessarily the most visually salient. To conclude, Alignment Importance improves prediction of human similarity judgments from DNN embeddings, and provides interpretable insights into the relevant information in image space.",0 "With the rise of deep learning technology in practical applications, Convolutional Neural Networks (CNNs) have been able to assist humans in solving many real-world problems. To enhance the performance of CNNs, numerous network architectures have been explored. Some of these architectures are designed based on the accumulated experience of researchers over time, while others are designed through neural architecture search methods. The improvements made to CNNs by the aforementioned methods are quite significant, but most of the improvement methods are limited in reality by model size and environmental constraints, making it difficult to fully realize the improved performance. In recent years, research has found that many CNN structures can be explained by the discretization of ordinary differential equations. This implies that we can design theoretically supported deep network structures using higher-order numerical difference methods. It should be noted that most of the previous CNN model structures are based on low-order numerical methods. Therefore, considering that the accuracy of linear multi-step numerical difference methods is higher than that of the forward Euler method, this paper proposes a stacking scheme based on the linear multi-step method. This scheme enhances the performance of ResNet without increasing the model size and compares it with the Runge-Kutta scheme. The experimental results show that the performance of the stacking scheme proposed in this paper is superior to existing stacking schemes (ResNet and HO-ResNet), and it has the capability to be extended to other types of neural networks.",0 "As LLMs become increasingly proficient at producing human-like responses, there has been a rise of academic and industrial pursuits dedicated to flagging a given piece of text as ""human"" or ""AI"". Most of these pursuits involve modern NLP detectors like T5-Sentinel and RoBERTa-Sentinel, without paying too much attention to issues of interpretability and explainability of these models. In our study, we provide a comprehensive analysis that shows that traditional ML models (Naive-Bayes,MLP, Random Forests, XGBoost) perform as well as modern NLP detectors, in human vs AI text detection. 
We achieve this by implementing a robust testing procedure on diverse datasets, including curated corpora and real-world samples. Subsequently, by employing the explainable AI technique LIME, we uncover parts of the input that contribute most to the prediction of each model, providing insights into the detection process. Our study contributes to the growing need for developing production-level LLM detection tools, which can leverage a wide range of traditional as well as modern NLP detectors we propose. Finally, the LIME techniques we demonstrate also have the potential to equip these detection tools with interpretability analysis features, making them more reliable and trustworthy in various domains like education, healthcare, and media.",0 "Generative AI, such as OpenAI's GPT-4V large-language model, has rapidly entered mainstream discourse. Novel capabilities in image processing and natural-language communication may augment existing forecasting methods. Large language models further display potential to better communicate weather hazards in a style honed for diverse communities and different languages. This study evaluates GPT-4V's ability to interpret meteorological charts and communicate weather hazards appropriately to the user, despite challenges of hallucinations, where generative AI delivers coherent, confident, but incorrect responses. We assess GPT-4V's competence via its web interface ChatGPT in two tasks: (1) generating a severe-weather outlook from weather-chart analysis and conducting self-evaluation, revealing an outlook that corresponds well with a Storm Prediction Center human-issued forecast; and (2) producing hazard summaries in Spanish and English from weather charts. Responses in Spanish, however, resemble direct (not idiomatic) translations from English to Spanish, yielding poorly translated summaries that lose critical idiomatic precision required for optimal communication. Our findings advocate for cautious integration of tools like GPT-4V in meteorology, underscoring the necessity of human oversight and development of trustworthy, explainable AI.",0 "Heat transfer at the interface between two materials is becoming increasingly important as the size of electronic devices shrinks. Most studies concentrate on the interfacial thermal conductance between either crystalline-crystalline or amorphous-amorphous materials. Here, we investigate the interfacial thermal conductance at crystalline-amorphous interfaces using non-equilibrium molecular dynamics simulations. Specifically, gold and two different materials, silicon and silica, in both their crystalline and amorphous structures, have been considered. The findings reveal that the interfacial thermal conductance between amorphous structures and gold is significantly higher as compared to crystalline structures for both planar and rough interfaces ($\approx$ 152 MW/(m$^2$K) for gold-amorphous silicon and $\approx$ 56 MW/(m$^2$K) for gold-crystalline silicon). We explain this increase by two factors: the relative commensurability between amorphous silicon/silica and gold leads to enhanced bonding and cross-correlations of atomic displacements at the interface, contributing to enhanced elastic phonon transmission. Inelastic phonon transmission is also enhanced due to the relatively larger degree of anharmonicity characterizing gold-amorphous silicon/silica. 
We also show that all the vibrational modes that participate in interfacial heat transfer are delocalized and use the Ioffe-Regel (IR) criterion to separate the contributions of propagating (propagons) and non-propagating modes (diffusons). In particular, we demonstrate that, while at gold-amorphous silicon interfaces elastic phonon scattering involves propagons and inelastic phonon scattering involves a mixture of propagons and diffusons, in gold-amorphous silica, all modes transmitting energy at the interface are diffusons.",0 "As consensus across the various published AI ethics principles is approached, a gap remains between high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of 8 researchers and engineers from Australia's national scientific research agency (CSIRO), who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, (8) human, social and environmental wellbeing. Discussions of the insights gained from the interviews include various tensions and trade-offs between the principles, and provide suggestions for implementing each high-level principle. We also present suggestions aiming to enhance associated support mechanisms.",1 "Large language models (LLMs) have shown remarkable capabilities in generating user summaries from a long list of raw user activity data. These summaries capture essential user information such as preferences and interests, and therefore are invaluable for LLM-based personalization applications, such as explainable recommender systems. However, the development of new summarization techniques is hindered by the lack of ground-truth labels, the inherent subjectivity of user summaries, and human evaluation, which is often costly and time-consuming. To address these challenges, we introduce UserSumBench, a benchmark framework designed to facilitate iterative development of LLM-based summarization approaches. This framework offers two key components: (1) A reference-free summary quality metric. We show that this metric is effective and aligned with human preferences across three diverse datasets (MovieLens, Yelp and Amazon Review). (2) A novel robust summarization method that leverages a time-hierarchical summarizer and a self-critique verifier to produce high-quality summaries while eliminating hallucination. This method serves as a strong baseline for further innovation in summarization techniques.",0 "Although quantum states nicely explain experiments, the outcomes of experiments are not states. Instead, outcomes correspond to probability distributions. Twenty years ago we proved categorically that probability distributions leave open a choice of quantum states to explain experiments that is resolvable only by a move beyond logic, which, inspired or not, can be characterized as a guess. Guesses link the inner lives of investigators to their explanations of experimental results. Recognizing the inescapability of guesswork in physics leads to avenues of investigation, one of which is presented here. 
We invert the quest for the logical foundations of physics to reveal a physical basis for logic and calculation, and we represent this basis mathematically, in such a way as to show the shaping and re-shaping of calculations by guesswork. We draw on the interplay between guessing and computation in digital contexts that, perhaps surprisingly, include living organisms. Digital computation and communication depend on a type of synchronization that coordinates transitions among physically distinct conditions represented by ""digits."" This logical synchronization, known to engineers but neglected in physics, requires guesswork for its maintenance. By abstracting digital hardware, we model the structure of human thinking as logically synchronized computation, punctuated by guesses. We adapt marked graphs to mathematically represent computation and represent guesses by unpredictable changes in these marked graphs. The marked graphs reveal a logical substructure to spatial and temporal navigation, with implications across physics and its biological applications. By limiting our model to the logical aspect of communications and computations, we unveil logical structure in relation to guesswork, applicable not just to electronics but also to the functioning of living organisms.",0 "Older adults are increasingly acquiring smartphones. But acquiring smartphones can be difficult, and little is known about the particular challenges of older adults who are additionally blind or losing their vision. We shed light on the social and technical aspects of acquiring smartphones with vision loss, based on deep qualitative interviews with 22 blind or low vision (BLV) older adults aged 60 and over. Through our grounded theory analysis, we found that BLV older adults experience liminality as they acquire smartphones and transition through re-acquiring smartphones as they become blind, and they can transition through liminality by participating in mutual aid within the blind community. We contribute the notion of ""Intersecting Liminality,"" which explains the marginalizing experience of simultaneously transitioning through vision loss, aging, and technology acquisition. We contend that Intersecting Liminality can serve as a framework that centers the dynamic nature of disability to help our community generate a more nuanced understanding of technology acquisition and more effective assistive interventions.",1 "In his book 'A Beautiful Question', physicist Frank Wilczek argues that symmetry is 'nature's deep design,' governing the behavior of the universe, from the smallest particles to the largest structures. While symmetry is a cornerstone of physics, it has not yet found widespread applicability in describing biological systems, particularly the human brain. In this context, we study the human brain network engaged in language and explore the relationship between the structural connectivity (connectome or structural network) and the emergent synchronization of the mesoscopic regions of interest (functional network). We explain this relationship through a different kind of symmetry than physical symmetry, derived from the categorical notion of Grothendieck fibrations. This introduces a new understanding of the human brain by proposing a local symmetry theory of the connectome, which accounts for how the structure of the brain's network determines its coherent activity. 
Among the allowed patterns of structural connectivity, synchronization elicits different symmetry subsets according to the functional engagement of the brain. We show that the resting state is a particular realization of the cerebral synchronization pattern characterized by a fibration symmetry that is broken in the transition from rest to language. Our findings suggest that the brain's network symmetry at the local level determines its coherent function, and we can understand this relationship from theoretical principles.",0 "With the rapid development of VR technology, the demand for high-quality 3D models is increasing. Traditional methods struggle with efficiency and quality in large-scale customization. This paper introduces a deep-learning framework that generates high-precision 3D coral models from a single image. Using the Coral dataset, the framework extracts geometric and texture features, performs 3D reconstruction, and optimizes design and material blending. Advanced optimization and polygon count control ensure shape accuracy, detail retention, and flexible output for various complexities, catering to high-quality rendering and real-time interaction needs. The project incorporates Explainable AI (XAI) to transform AI-generated models into interactive ""artworks,"" best viewed in VR and XR. This enhances model interpretability and human-machine collaboration. Real-time feedback in VR interactions displays information like coral species and habitat, enriching user experience. The generated models surpass traditional methods in detail, visual quality, and efficiency. This research offers an intelligent approach to 3D content creation for VR, lowering production barriers and promoting widespread VR applications. Additionally, integrating XAI provides new insights into AI-generated visual content and advances research in 3D vision interpretability.",0 "This article presents a heuristic view that shows that the inner states of consciousness experienced by every human being have a physical but imaginary hypercomplex basis. The hypercomplex description is necessary because certain processes of consciousness cannot be physically measured in principle, but nevertheless exist. Based on theoretical considerations, it could be possible - as a result of mathematical investigations into a so-called bicomplex algebra - to generate and use hypercomplex system states on machines in a targeted manner. The hypothesis of the existence of hypercomplex system states on machines is already supported by the surprising performance of highly complex AI systems. However, this has yet to be proven. In particular, there is a lack of experimental data that distinguishes such systems from other systems, which is why this question will be addressed in later articles. This paper describes the developed bicomplex algebra and possible applications of these findings to generate hypercomplex energy states on machines. In the literature, such system states are often referred to as machine consciousness. The article uses mathematical considerations to explain how artificial consciousness could be generated and what advantages this would have for such AI systems.",0 "Adoption of smartphones by older adults (i.e., 65+ years old) is poorly understood, especially in relation to cybersecurity and cyberthreats. In this study, we focus on the perceived threat of cyberattacks as a potential barrier to smartphone adoption and use among older adults. 
The study also aims at investigating the differences between users and non-users of smartphones. We conducted a quantitative cross-sectional survey of older adults in Slovenia (N = 535). The results of covariance-based structural equation modeling indicate consistent support for the associations of intention to use (ItU) with perceived usefulness (PU), subjective norm (SN) and attitude toward use (AtU), the association between ease of use (EoU) and PU, the association between hedonic motivation (HM) and AtU, and the association between smartphone technology anxiety (STA) and fear of use (FoU). Even though the negative association between perceived threat (PT) and ItU was significant in the full sample, the non-user and the not aware subsamples, its role in adoption of smartphones among older adults remains puzzling. We uncovered significant positive associations between PT and AtU (except in the not aware subsample), and PT and PU which we could not fully explain in our study. The results of our study provide some insights on how campaigns promoting adoption of smartphones among older adults, workshops, training and informal teaching might be improved.",2 "Facial expression recognition is vital for human behavior analysis, and deep learning has enabled models that can outperform humans. However, it is unclear how closely they mimic human processing. This study aims to explore the similarity between deep neural networks and human perception by comparing twelve different networks, including both general object classifiers and FER-specific models. We employ an innovative global explainable AI method to generate heatmaps, revealing crucial facial regions for the twelve networks trained on six facial expressions. We assess these results both quantitatively and qualitatively, comparing them to ground truth masks based on Friesen and Ekman's description and among them. We use Intersection over Union (IoU) and normalized correlation coefficients for comparisons. We generate 72 heatmaps to highlight critical regions for each expression and architecture. Qualitatively, models with pre-trained weights show more similarity in heatmaps compared to those without pre-training. Specifically, eye and nose areas influence certain facial expressions, while the mouth is consistently important across all models and expressions. Quantitatively, we find low average IoU values (avg. 0.2702) across all expressions and architectures. The best-performing architecture averages 0.3269, while the worst-performing one averages 0.2066. Dendrograms, built with the normalized correlation coefficient, reveal two main clusters for most expressions: models with pre-training and models without pre-training. Findings suggest limited alignment between human and AI facial expression recognition, with network architectures influencing the similarity, as similar architectures prioritize similar facial regions.",0 "The evolution of Artificial Intelligence Generated Contents (AIGCs) is advancing towards higher quality. The growing interactions with AIGCs present a new challenge to the data-driven AI community: While AI-generated contents have played a crucial role in a wide range of AI models, the potential hidden risks they introduce have not been thoroughly examined. Beyond human-oriented forgery detection, AI-generated content poses potential issues for AI models originally designed to process natural data. 
In this study, we underscore the exacerbated hallucination phenomena in Large Vision-Language Models (LVLMs) caused by AI-synthetic images. Remarkably, our findings shed light on a consistent AIGC hallucination bias: the object hallucinations induced by synthetic images are characterized by a greater quantity and a more uniform position distribution, even though these synthetic images do not manifest unrealistic or additional relevant visual features compared to natural images. Moreover, our investigations of the Q-former and linear projector reveal that synthetic images may present token deviations after visual projection, thereby amplifying the hallucination bias.",0 "We characterize the exact solutions to neural network descrambling--a mathematical model for explaining the fully connected layers of trained neural networks (NNs). By reformulating the problem as the minimization of the Brockett function arising in graph matching and complexity theory, we show that the principal components of the hidden layer preactivations can be characterized as the optimal explainers or descramblers for the layer weights, leading to descrambled weight matrices. We show that in typical deep learning contexts these descramblers take diverse and interesting forms including (1) matching largest principal components with the lowest frequency modes of the Fourier basis for isotropic hidden data, (2) discovering the semantic development in two-layer linear NNs for signal recovery problems, and (3) explaining CNNs by optimally permuting the neurons. Our numerical experiments indicate that the eigendecompositions of the hidden layer data--now understood as the descramblers--can also reveal the layer's underlying transformation. These results illustrate that the SVD is more directly related to the explainability of NNs than previously thought and offers a promising avenue for discovering interpretable motifs for the hidden action of NNs, especially in contexts of operator learning or physics-informed NNs, where the input/output data has limited human readability.",0 "In a previous paper, we have shown that an ontology of quantum mechanics in terms of states and events with internal phenomenal aspects, that is, a form of panprotopsychism, is well suited to explaining the phenomenal aspects of consciousness. We have proved there that the palette and grain combination problems of panpsychism and panprotopsychism arise from implicit hypotheses based on classical physics about supervenience that are inappropriate at the quantum level, where an exponential number of emergent properties and states arise. In this article, we address what is probably the first and most important combination problem of panpsychism: the subject-summing problem originally posed by William James. We begin by identifying the physical counterparts of the subjects of experience within the quantum panprotopsychic approach presented in that article. To achieve this, we turn to the notion of subject of experience inspired by the idea of prehension proposed by Whitehead and show that this notion can be adapted to the quantum ontology of objects and events. Due to the indeterminacy of quantum mechanics and its causal openness, this ontology also seems to be suitable for the analysis of the remaining aspects of the structure combination problem, which shows how the structuration of consciousness could have evolved from primitive animals to humans. 
The analysis imposes conditions on possible implementations of quantum cognition mechanisms in the brain and suggests new problems and strategies to address them, in particular with regard to the structuring of experiences in animals with different degrees of evolutionary development.",0 "This note summarizes the state of what is known about the tractability of the problem ModPath, which asks whether an input undirected graph contains a simple st-path whose length satisfies modulo constraints. We also consider the problem ModCycle, which asks for the existence of a simple cycle subject to such constraints. We further discuss the status of these problems on directed graphs, and on restricted classes of graphs. We explain connections to the problem variant asking for a constant number of vertex-disjoint such paths or cycles, and discuss links to other related work.",0 "Sleep is crucial for human health, and EEG signals play a significant role in sleep research. Due to the high-dimensional nature of EEG signal data sequences, data visualization and clustering of different sleep stages have been challenging. To address these issues, we propose a two-stage hierarchical and explainable feature selection framework by incorporating a feature selection algorithm to improve the performance of dimensionality reduction. Inspired by topological data analysis, which can analyze the structure of high-dimensional data, we extract topological features from the EEG signals to compensate for the structural information loss that happens in traditional spectro-temporal data analysis. Supported by the topological visualization of the data from different sleep stages and the classification results, the proposed features are proven to be effective supplements to traditional features. Finally, we compare the performances of three dimensionality reduction algorithms: Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP). Among them, t-SNE achieved the highest accuracy of 79.8%, but considering the overall performance in terms of computational resources and metrics, UMAP is the optimal choice.",0 "Diffusion-based generative models' impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.",2 "Developing a general information processing model in uncertain environments is fundamental for the advancement of explainable artificial intelligence. Dempster-Shafer theory of evidence is a well-known and effective reasoning method for representing epistemic uncertainty, which is closely related to subjective probability theory and possibility theory. 
Although they can be transformed into each other under some particular belief structures, there remains a lack of a clear and interpretable transformation process, as well as a unified approach for information processing. In this paper, we aim to address these issues from the perspectives of isopignistic belief functions and the hyper-cautious transferable belief model. Firstly, we propose an isopignistic transformation based on the belief evolution network. This transformation allows for the adjustment of the information granule while retaining the potential decision outcome. The isopignistic transformation is integrated with a hyper-cautious transferable belief model to establish a new canonical decomposition. This decomposition offers a reverse path between the possibility distribution and its isopignistic mass functions. The result of the canonical decomposition, called the isopignistic function, is an identical information content distribution that reflects the propensity and relative commitment degree of the BPA. Furthermore, this paper introduces a method to reconstruct the basic belief assignment by adjusting the isopignistic function. It explores the advantages of this approach in modeling and handling uncertainty within the hyper-cautious transferable belief model. More generally, this paper establishes a theoretical basis for building general models of artificial intelligence based on probability theory, Dempster-Shafer theory, and possibility theory.",0 "As robots become increasingly integrated into our daily lives, the need to make them transparent has never been more critical. Yet, despite its importance in human-robot interaction, a standardized measure of robot transparency has been missing until now. This paper addresses this gap by presenting the first comprehensive scale to measure perceived transparency in robotic systems, available in English, German, and Italian. Our approach conceptualizes transparency as a multidimensional construct, encompassing explainability, legibility, predictability, and meta-understanding. The proposed scale was a product of a rigorous three-stage process involving 1,223 participants. Firstly, we generated the items of our scale; secondly, we conducted an exploratory factor analysis; and thirdly, a confirmatory factor analysis served to validate the factor structure of the newly developed TOROS scale. The final scale encompasses 26 items and comprises three factors: Illegibility, Explainability, and Predictability. TOROS demonstrates high cross-linguistic reliability, inter-factor correlation, model fit, internal consistency, and convergent validity across the three cross-national samples. This empirically validated tool enables the assessment of robot transparency and contributes to the theoretical understanding of this complex construct. By offering a standardized measure, we facilitate consistent and comparable research in human-robot interaction in which TOROS can serve as a benchmark.",2 "Image captioning, which generates natural language descriptions of the visual information in an image, is a crucial task in vision-language research. Previous models have typically addressed this task by aligning the generative capabilities of machines with human intelligence through statistical fitting of existing datasets. While effective for normal images, they may struggle to accurately describe those where certain parts of the image are obscured or edited, unlike humans who excel in such cases. 
These weaknesses, including hallucinations and limited interpretability, often hinder their performance in scenarios with shifted association patterns. In this paper, we present a generic image captioning framework that employs causal inference to make existing models more capable of interventional tasks and counterfactually explainable. Our approach includes two variants leveraging either total effect or natural direct effect. Integrating them into the training process enables models to handle counterfactual scenarios, increasing their generalizability. Extensive experiments on various datasets show that our method effectively reduces hallucinations and improves the model's faithfulness to images, demonstrating high portability across both small-scale and large-scale image-to-text models. The code is available at https://github.com/Aman-4-Real/See-or-Guess.",0 "Recent progress in Text-to-Image (T2I) generative models has enabled high-quality image generation. As performance and accessibility increase, these models are gaining significant traction and popularity: ensuring their fairness and safety is a priority to prevent the dissemination and perpetuation of biases. However, existing studies in bias detection focus on closed sets of predefined biases (e.g., gender, ethnicity). In this paper, we propose a general framework to identify, quantify, and explain biases in an open set setting, i.e. without requiring a predefined set. This pipeline leverages a Large Language Model (LLM) to propose biases starting from a set of captions. Next, these captions are used by the target generative model for generating a set of images. Finally, Vision Question Answering (VQA) is leveraged for bias evaluation. We show two variations of this framework: OpenBias and GradBias. OpenBias detects and quantifies biases, while GradBias determines the contribution of individual prompt words to biases. OpenBias effectively detects both well-known and novel biases related to people, objects, and animals and highly aligns with existing closed-set bias detection methods and human judgment. GradBias shows that neutral words can significantly influence biases and it outperforms several baselines, including state-of-the-art foundation models. Code available here: https://github.com/Moreno98/GradBias.",0 "Interpretability is the next frontier in machine learning research. In the search for white box models - as opposed to black box models, like random forests or neural networks - rule induction algorithms are a logical and promising option, since the rules can easily be understood by humans. Fuzzy and rough set theory have been successfully applied to this archetype, almost always separately. As both approaches to rule induction involve granular computing based on the concept of equivalence classes, it is natural to combine them. The QuickRules algorithm (Jensen and Cornelis, 2009) was a first attempt at using fuzzy rough set theory for rule induction. It is based on QuickReduct, a greedy algorithm for building decision reducts. QuickRules already showed an improvement over other rule induction methods. However, to evaluate the full potential of a fuzzy rough rule induction algorithm, one needs to start from the foundations. In this paper, we introduce a novel rule induction algorithm called Fuzzy Rough Rule Induction (FRRI). We provide background and explain the workings of our algorithm. 
Furthermore, we perform a computational experiment to evaluate the performance of our algorithm and compare it to other state-of-the-art rule induction approaches. We find that our algorithm is more accurate while creating small rulesets consisting of relatively short rules. We end the paper by outlining some directions for future work.",0 "In the realm of image synthesis, achieving fidelity to a reference image while adhering to conditional prompts remains a significant challenge. This paper proposes a novel approach that integrates a diffusion model with latent space manipulation and gradient-based selective attention mechanisms to address this issue. Leveraging Grad-SAM (Gradient-based Selective Attention Manipulation), we analyze the cross-attention maps of the cross-attention layers and the gradients for the denoised latent vector, deriving importance scores for the elements of the denoised latent vector related to the subject of interest. Using this information, we create masks at specific timesteps during denoising to preserve subjects while seamlessly integrating the reference image features. This approach ensures the faithful formation of subjects based on conditional prompts, while concurrently refining the background for a more coherent composition. Our experiments on the places365 dataset demonstrate promising results, with our proposed model achieving the lowest mean and median Frechet Inception Distance (FID) scores compared to baseline models, indicating superior fidelity preservation. Furthermore, our model exhibits competitive performance in aligning the generated images with provided textual descriptions, as evidenced by high CLIP scores. These results highlight the effectiveness of our approach in both fidelity preservation and textual context preservation, offering a significant advancement in text-to-image synthesis tasks.",0 "Objective: The integration of Deep Learning (DL) algorithms in brain signal analysis is still in its nascent stages compared to their success in fields like Computer Vision. This is particularly true for BCI, where the brain activity is decoded to control external devices without requiring muscle control. Electroencephalography (EEG) is a widely adopted choice for designing BCI systems due to its non-invasive and cost-effective nature and excellent temporal resolution. Still, it comes at the expense of limited training data, a poor signal-to-noise ratio, and large variability across- and within-subject recordings. Finally, setting up a BCI system with many electrodes takes a long time, hindering the widespread adoption of reliable DL architectures in BCIs outside research laboratories. To improve adoption, we need to improve user comfort using, for instance, reliable algorithms that operate with few electrodes. Approach: Our research aims to develop a DL algorithm that delivers effective results with a limited number of electrodes. Taking advantage of the Augmented Covariance Method and the framework of SPDNet, we propose the Phase-SPDNet architecture and analyze its performance and the interpretability of the results. The evaluation is conducted with 5-fold cross-validation, using only three electrodes positioned above the Motor Cortex. The methodology was tested on nearly 100 subjects from several open-source datasets using the Mother Of All BCI Benchmark (MOABB) framework. 
Main results: The results of our Phase-SPDNet demonstrate that the augmented approach combined with the SPDNet significantly outperforms all current state-of-the-art DL architectures in MI decoding. Significance: This new architecture is explainable and has a low number of trainable parameters.",0 "As pathogens spread in a population of hosts, immunity is built up and the pool of susceptible individuals is depleted. This generates selective pressure, to which many human RNA viruses, such as influenza virus or SARS-CoV-2, respond with rapid antigenic evolution and frequent emergence of immune evasive variants. However, the host's immune systems adapt and older immune responses wane, such that escape variants only enjoy a growth advantage for a limited time. If variant growth dynamics and reshaping of host-immunity operate on comparable time scales, viral adaptation is determined by eco-evolutionary interactions that are not captured by models of rapid evolution in a fixed environment. Here, we use a Susceptible/Infected model to describe the interaction between an evolving viral population and a dynamic but immunologically diverse host population. We show that depending on strain cross-immunity, heterogeneity of the host population, and durability of immune responses, escape variants initially grow exponentially, but lose their growth advantage before reaching high frequencies. Their subsequent dynamics follows an anomalous random walk determined by future escape variants and results in variant trajectories that are unpredictable. This model can explain the apparent contradiction between the clearly adaptive nature of antigenic evolution and the quasi-neutral dynamics of high frequency variants observed for influenza viruses.",0 "Structure functions of the spin-1 deuteron will be investigated experimentally from the late 2020s at various facilities such as Thomas Jefferson National Accelerator Facility, Fermi National Accelerator Laboratory, nuclotron-based ion collider facility, and electron-ion colliders. We expect that a new high-energy spin-physics field could be created by these projects. In this paper, the current theoretical status is explained for the structure functions of spin-1 hadrons, especially on parton distribution functions, transverse-momentum dependent parton distributions, and fragmentation functions. Related multiparton distribution functions are also shown.",0 "The exotic neutral gauge boson is a powerful candidate for new physics beyond the standard model. As a promising model, the left-right symmetric model has been proposed to explain the neutrino mass, dark matter, and matter-antimatter asymmetry, etc., in which exotic gauge bosons $Z^\prime, W^{\prime \pm}$ have been put forward as well as other new right-handed particles. We investigate the $\mu^+ \mu^- \to q\bar{q} $ and $ \mu^+ \mu^- \to l^+ l^- $ processes involving the $Z^\prime$ boson as an intermediate particle. The coupling strength, decay width and mass are the key parameters in the production and decay processes of the $Z^\prime$ boson. The results indicate that the angular distributions of final particles are sensitive to the couplings of $Z^\prime$ to the other fermions. Asymmetries defined from the angular distributions are ideal quantities to demonstrate the discrepancy between the standard model process and the processes in which the $Z^\prime$ participates, and they are also appropriate observables to discriminate the couplings of $Z^\prime$ to other particles. 
Compared with the current results at the Large Hadron Collider (LHC), the future muon collider has great potential to explore the new parameter space of the $Z^\prime$ boson.",0 "Causality is vital for understanding true cause-and-effect relationships between variables within predictive models, rather than relying on mere correlations, making it highly relevant in the field of Explainable AI. In an automated decision-making scenario, causal inference methods can analyze the underlying data-generation process, enabling explanations of a model's decision by manipulating features and creating counterfactual examples. These counterfactuals explore hypothetical scenarios where a minimal number of factors are altered, providing end-users with valuable information on how to change their situation. However, interpreting a set of multiple counterfactuals can be challenging for end-users who are not used to analyzing raw data records. In our work, we propose a novel multi-step pipeline that uses counterfactuals and LLMs to generate natural language explanations of actions that will lead to a change in outcome in classifiers of tabular data. This pipeline is designed to guide the LLM through smaller tasks that mimic human reasoning when explaining a decision based on counterfactual cases. We conducted various experiments using a public dataset and proposed a method of closed-loop evaluation to assess the coherence of the final explanation with the counterfactuals, as well as the quality of the content. Results are promising, although further experiments with other datasets and human evaluations should be carried out.",0 "Artificial intelligence (AI) has the potential to transform education with its power of uncovering insights from massive data about student learning patterns. However, ethical and trustworthy concerns of AI have been raised but remain unsolved. Prominent ethical issues in high school AI education include data privacy, information leakage, abusive language, and fairness. This paper describes technological components that were built to address ethical and trustworthy concerns in a multi-modal collaborative platform (called ALLURE chatbot) for high school students to collaborate with AI to solve the Rubik's cube. In data privacy, we want to ensure that the informed consent of 153 children, parents, and teachers is at the center of any data that is managed. Since children are involved, we want to ensure that language, whether textual, audio, or visual, is acceptable both from users and from the AI, and that the system can steer interaction away from dangerous situations. In information management, we also want to ensure that the system, while learning to improve over time, does not leak information about users from one group to another.",2 "We introduce Affective Visual Dialog, an emotion explanation and reasoning task as a testbed for research on understanding the formation of emotions in visually grounded conversations. The task involves three skills: (1) Dialog-based Question Answering, (2) Dialog-based Emotion Prediction, and (3) Affective emotion explanation generation based on the dialog. Our key contribution is the collection of a large-scale dataset, dubbed AffectVisDial, consisting of 50K 10-turn visually grounded dialogs as well as concluding emotion attributions and dialog-informed textual emotion explanations, resulting in a total of 27,180 working hours. 
We explain our design decisions in collecting the dataset and introduce the questioner and answerer tasks that are associated with the participants in the conversation. We train and demonstrate solid Affective Visual Dialog baselines adapted from state-of-the-art models. Remarkably, the responses generated by our models show promising emotional reasoning abilities in response to visually grounded conversations. Our project page is available at https://affective-visual-dialog.github.io.",0 "Most jailbreak papers claim the jailbreaks they propose are highly effective, often boasting near-100% attack success rates. However, it is perhaps more common than not for jailbreak developers to substantially exaggerate the effectiveness of their jailbreaks. We suggest this problem arises because jailbreak researchers lack a standard, high-quality benchmark for evaluating jailbreak performance, leaving researchers to create their own. To create a benchmark, researchers must choose a dataset of forbidden prompts to which a victim model will respond, along with an evaluation method that scores the harmfulness of the victim model's responses. We show that existing benchmarks suffer from significant shortcomings and introduce the StrongREJECT benchmark to address these issues. StrongREJECT's dataset contains prompts that victim models must answer with specific, harmful information, while its automated evaluator measures the extent to which a response gives useful information to forbidden prompts. In doing so, the StrongREJECT evaluator achieves state-of-the-art agreement with human judgments of jailbreak effectiveness. Notably, we find that existing evaluation methods significantly overstate jailbreak effectiveness compared to human judgments and the StrongREJECT evaluator. We describe a surprising and novel phenomenon that explains this discrepancy: jailbreaks bypassing a victim model's safety fine-tuning tend to reduce its capabilities. Together, our findings underscore the need for researchers to use a high-quality benchmark, such as StrongREJECT, when developing new jailbreak attacks. We release the StrongREJECT code and data at https://strong-reject.readthedocs.io/en/latest/.",0 "The proliferation of video content on platforms like YouTube and Vimeo presents significant challenges in efficiently locating relevant information. Automatic video summarization aims to address this by extracting and presenting key content in a condensed form. This thesis explores enhancing video summarization by integrating text-based queries and conditional modeling to tailor summaries to user needs. Traditional methods often produce fixed summaries that may not align with individual requirements. To overcome this, we propose a multi-modal deep learning approach that incorporates both textual queries and visual information, fusing them at different levels of the model architecture. Evaluation metrics such as accuracy and F1-score assess the quality of the generated summaries. The thesis also investigates improving text-based query representations using contextualized word embeddings and specialized attention networks. This enhances the semantic understanding of queries, leading to better video summaries. To emulate human-like summarization, which accounts for both visual coherence and abstract factors like storyline consistency, we introduce a conditional modeling approach. 
This method uses multiple random variables and joint distributions to capture key summarization components, resulting in more human-like and explainable summaries. Addressing data scarcity in fully supervised learning, the thesis proposes a segment-level pseudo-labeling approach. This self-supervised method generates additional data, improving model performance even with limited human-labeled datasets. In summary, this research aims to enhance automatic video summarization by incorporating text-based queries, improving query representations, introducing conditional modeling, and addressing data scarcity, thereby creating more effective and personalized video summaries.",0 "AI prevails in financial fraud detection and decision making. Yet, due to concerns about biased automated decision making or profiling, regulations mandate that final decisions are made by humans. Financial fraud investigators face the challenge of manually synthesizing vast amounts of unstructured information, including AI alerts, transaction histories, social media insights, and governmental laws. Current Visual Analytics (VA) systems primarily support isolated aspects of this process, such as explaining binary AI alerts and visualizing transaction patterns, thus adding yet another layer of information to the overall complexity. In this work, we propose a framework where the VA system supports decision makers throughout all stages of financial fraud investigation, including data collection, information synthesis, and human criteria iteration. We illustrate how VA can claim a central role in AI-aided decision making, ensuring that human judgment remains in control while minimizing potential biases and labor-intensive tasks.",0 "We study nonequilibrium steady state (NESS) transport in a boundary driven one-dimensional fermionic lattice setup which is further subjected to particle loss. We analyze the system size scaling of conductance at zero temperature for different values of the chemical potential of the boundary reservoirs. We consider a variety of loss channel configurations: (i) single loss at the middle site of the lattice, (ii) multiple but nonextensive lossy channels, and (iii) extensive lossy channels. For the cases (i) and (ii), the conductance scaling with system size remains robust (i.e., same as the case with no loss) for chemical potential within and outside the lattice band, while at the band-edge rich anomalous conductance scaling emerges. For case (iii), the conductance scaling becomes ballistic in the thermodynamic limit for any value of chemical potential. We explain the emergence of these different system size scalings of conductance by analyzing the spectral properties of the associated non-hermitian transfer matrices of the underlying lattice. We demonstrate that the emergence of anomalous scaling is deeply connected to the existence of exceptional points of transfer matrices. Our study unravels that by carefully optimizing the loss mechanism configurations, one can in principle realize systems with rich transport properties in low-dimensional open quantum systems.",0 "Although LLMs have the potential to transform many fields, they still underperform humans in reasoning tasks. Existing methods induce the model to produce step-by-step calculations, but this research explores the question: Does making the LLM analyze the question improve its performance? 
We propose a novel prompting strategy called Question Analysis Prompting (QAP), in which the model is prompted to explain the question in $n$ words before solving. The value of $n$ influences the length of response generated by the model. QAP is evaluated on GPT 3.5 Turbo and GPT 4 Turbo on arithmetic datasets GSM8K, AQuA, and SAT and commonsense dataset StrategyQA. QAP is compared with other state-of-the-art prompts including Chain-of-Thought (CoT), Plan and Solve Prompting (PS+) and Take A Deep Breath (TADB). QAP outperforms all state-of-the-art prompts on AQuA and SAT datasets on both GPT3.5 and GPT4. QAP consistently ranks among the top-2 prompts on 75\% of the tests. A key factor of QAP performance can be attributed to response length, where detailed responses are beneficial when answering harder questions, but can negatively affect easy questions.",0 "Data-driven approaches for autonomous driving (AD) have been widely adopted in the past decade but are confronted with dataset bias and uninterpretability. Inspired by the knowledge-driven nature of human driving, recent approaches explore the potential of large language models (LLMs) to improve understanding and decision-making in traffic scenarios. They find that the pretrain-finetune paradigm of LLMs on downstream data with the Chain-of-Thought (CoT) reasoning process can enhance explainability and scene understanding. However, such a popular strategy proves to suffer from the notorious problems of misalignment between the crafted CoTs against the consequent decision-making, which remains untouched by previous LLM-based AD methods. To address this problem, we motivate an end-to-end decision-making model based on multimodality-augmented LLM, which simultaneously executes CoT reasoning and carries out planning results. Furthermore, we propose a reasoning-decision alignment constraint between the paired CoTs and planning results, imposing the correspondence between reasoning and decision-making. Moreover, we redesign the CoTs to enable the model to comprehend complex scenarios and enhance decision-making performance. We dub our proposed large language planners with reasoning-decision alignment as RDA-Driver. Experimental evaluations on the nuScenes and DriveLM-nuScenes benchmarks demonstrate the effectiveness of our RDA-Driver in enhancing the performance of end-to-end AD systems. Specifically, our RDA-Driver achieves state-of-the-art planning performance on the nuScenes dataset with 0.80 L2 error and 0.32 collision rate, and also achieves leading results on challenging DriveLM-nuScenes benchmarks with 0.82 L2 error and 0.38 collision rate.",0 "We investigate the nonequilibrium dynamics of the magnetization in an Ising chain subjected to a slowly rotating transverse field. The magnetization oscillations are found to be explained by the contributions from different particle excitations in the quantum $E_8$ model. We study the magnetization in the frequency domain in detail, uncovering a series of singular peaks for the $z$ (Ising) component. These singular peaks are split into two sets for the magnetization along $x$ and $y$ directions with frequency shifts set by the rotational-field frequency. The peaks include both $\delta$-function type and edge-singularity type peaks. The $\delta$-function peaks can be attributed to particle excitations involving an $E_8$ particle with either the vacuum or a different particle. 
The edge-singularity peaks are contributed by particle excitations of two $E_8$ particles with either the vacuum or another particle, or by particle excitations that contain two sets of two particles with each set including at least a particle of the same type. We propose a Rydberg qubit array for possible experimental investigation.",0 "In the chemical and process industries, Process Flow Diagrams (PFDs) and Piping and Instrumentation Diagrams (P&IDs) are critical for design, construction, and maintenance. Recent advancements in Generative AI, such as Large Multimodal Models (LMMs) like GPT4 (Omni), have shown promise in understanding and interpreting process diagrams for Visual Question Answering (VQA). However, proprietary models pose data privacy risks, and their computational complexity prevents knowledge editing for domain-specific customization on consumer hardware. To overcome these challenges, we propose a secure, on-premises enterprise solution using a hierarchical, multi-agent Retrieval Augmented Generation (RAG) framework for open-domain question answering (ODQA) tasks, offering enhanced data privacy, explainability, and cost-effectiveness. Our novel multi-agent framework employs introspective and specialized sub-agents using open-source, small-scale multimodal models with the ReAct (Reason+Act) prompting technique for PFD and P&ID analysis, integrating multiple information sources to provide accurate and contextually relevant answers. Our approach, supported by iterative self-correction, aims to deliver superior performance in ODQA tasks. We conducted rigorous experimental studies, and the empirical results validated the effectiveness of the proposed approach.",0 "Lunar landing has drawn great interest in lunar exploration in recent years, and autonomous lunar landing navigation is fundamental to this task. AI is expected to play a critical role in autonomous and intelligent space missions, yet human experts question the reliability of AI solutions. Thus, explainable AI (XAI) for vision-based lunar landing is studied in this paper, aiming at providing transparent and understandable predictions for intelligent lunar landing. Attention-based Darknet53 is proposed as the feature extraction structure. For crater detection and navigation tasks, attention-based YOLOv3 and attention-Darknet53-LSTM are presented respectively. The experimental results show that the proposed networks provide competitive performance on relative crater detection and pose estimation during the lunar landing. The explainability of the provided networks is achieved by introducing an attention mechanism into the network during model building. Moreover, the PCC is utilised to quantitatively evaluate the explainability of the proposed networks, with the findings showing the functions of various convolutional layers in the network.",0 "Diagnosis of mild cognitive impairment (MCI) and subjective cognitive decline (SCD) from fMRI functional connectivity (FC) has gained popularity, but most FC-based diagnostic models are black boxes lacking causal reasoning, so they contribute little to the knowledge about FC-based neural biomarkers of cognitive decline. To enhance the explainability of diagnostic models, we propose a generative counterfactual attention-guided network (GCAN), which introduces counterfactual reasoning to recognize cognitive decline-related brain regions and then uses these regions as attention maps to boost the prediction performance of diagnostic models. 
Furthermore, to tackle the difficulty in the generation of highly-structured and brain-atlas-constrained FC, which is essential in counterfactual reasoning, an Atlas-Aware Bidirectional Transformer (AABT) method is developed. AABT employs a bidirectional strategy to encode and decode the tokens from each network of the brain atlas, thereby enhancing the generation of high-quality target label FC. In experiments on the hospital-collected and ADNI datasets, the generated attention maps closely resemble FC abnormalities in the literature on SCD and MCI. The diagnostic performance is also superior to that of baseline models. The code is available at https://github.com/SXR3015/GCAN",0 "Assessing an AI system's behavior-particularly in Explainable AI Systems-is sometimes done empirically, by measuring people's abilities to predict the agent's next move-but how to perform such measurements? In empirical studies with humans, an obvious approach is to frame the task as binary (i.e., prediction is either right or wrong), but this does not scale. As output spaces increase, so do floor effects, because the ratio of right answers to wrong answers quickly becomes very small. The crux of the problem is that the binary framing fails to capture the nuances of the different degrees of ""wrongness."" To address this, we begin by proposing three mathematical bases upon which to measure ""partial wrongness."" We then use these bases to perform two analyses on sequential decision-making domains: the first is an in-lab study with 86 participants on a size-36 action space; the second is a re-analysis of a prior study on a size-4 action space. Other researchers adopting our operationalization of the prediction task and analysis methodology will improve the rigor of user studies conducted with that task, which is particularly important when the domain features a large output space.",2 "Predicting the locations an individual will visit in the future is crucial for solving many societal issues like disease diffusion and reduction of pollution. However, next-location predictors require a significant amount of individual-level information that may be scarce or unavailable in some scenarios (e.g., cold-start). Large Language Models (LLMs) have shown good generalization and reasoning capabilities and are rich in geographical knowledge, leading us to believe that these models can act as zero-shot next-location predictors. We tested more than 15 LLMs on three real-world mobility datasets and we found that LLMs can obtain accuracies up to 36.2%, a significant relative improvement of almost 640% when compared to other models specifically designed for human mobility. We also tested for data contamination and explored the possibility of using LLMs as text-based explainers for next-location prediction, showing that, regardless of the model size, LLMs can explain their decision.",0 "Deep Neural Networks (DNNs) have revolutionized various fields by enabling task automation and reducing human error. However, their internal workings and decision-making processes remain obscure due to their black box nature. Consequently, the lack of interpretability limits the application of these models in high-risk scenarios. To address this issue, the emerging field of eXplainable Artificial Intelligence (XAI) aims to explain and interpret the inner workings of DNNs. 
Despite advancements, XAI faces challenges such as the semantic gap between machine and human understanding, the trade-off between interpretability and performance, and the need for context-specific explanations. To overcome these limitations, we propose a novel multimodal framework named VALE (Visual and Language Explanation). VALE integrates explainable AI techniques with advanced language models to provide comprehensive explanations. This framework utilizes visual explanations from XAI tools, an advanced zero-shot image segmentation model, and a visual language model to generate corresponding textual explanations. By combining visual and textual explanations, VALE bridges the semantic gap between machine outputs and human interpretation, delivering results that are more comprehensible to users. In this paper, we conduct a pilot study of the VALE framework for image classification tasks. Specifically, Shapley Additive Explanations (SHAP) are used to identify the most influential regions in classified images. The object of interest is then extracted using the Segment Anything Model (SAM), and explanations are generated using state-of-the-art pre-trained Vision-Language Models (VLMs). Extensive experimental studies are performed on two datasets: the ImageNet dataset and a custom underwater SONAR image dataset, demonstrating VALE's real-world applicability in underwater image classification.",0 "Deep learning models are complex due to their size, structure, and inherent randomness in training procedures. Additional complexity arises from the selection of datasets and inductive biases. Addressing these challenges for explainability, Kim et al. (2018) introduced Concept Activation Vectors (CAVs), which aim to understand deep models' internal states in terms of human-aligned concepts. These concepts correspond to directions in latent space, identified using linear discriminants. Although this method was first applied to image classification, it was later adapted to other domains, including natural language processing. In this work, we attempt to apply the method to electroencephalogram (EEG) data for explainability in Kostas et al.'s BENDR (2021), a large-scale transformer model. A crucial part of this endeavor involves defining the explanatory concepts and selecting relevant datasets to ground concepts in the latent space. Our focus is on two mechanisms for EEG concept formation: the use of externally labeled EEG datasets, and the application of anatomically defined concepts. The former approach is a straightforward generalization of methods used in image classification, while the latter is novel and specific to EEG. We present evidence that both approaches to concept formation yield valuable insights into the representations learned by deep EEG models.",0 "The human brain has an inherent ability to fill in gaps to perceive figures as complete wholes, even when parts are missing or fragmented. This phenomenon is known as Closure in psychology, one of the Gestalt laws of perceptual organization, explaining how the human brain interprets visual stimuli. Given the importance of Closure for human object recognition, we investigate whether neural networks rely on a similar mechanism. Exploring this crucial human visual skill in neural networks has the potential to highlight their comparability to humans. Recent studies have examined the Closure effect in neural networks. 
However, they typically focus on a limited selection of Convolutional Neural Networks (CNNs) and have not reached a consensus on their capability to perform Closure. To address these gaps, we present a systematic framework for investigating the Closure principle in neural networks. We introduce well-curated datasets designed to test for Closure effects, including both modal and amodal completion. We then conduct experiments on various CNNs employing different measurements. Our comprehensive analysis reveals that VGG16 and DenseNet-121 exhibit the Closure effect, while other CNNs show variable results. We interpret these findings by blending insights from psychology and neural network research, offering a unique perspective that enhances transparency in understanding neural networks. Our code and dataset will be made available on GitHub.",0 "We often use ""explainable Artificial Intelligence (XAI)"" and ""interpretable AI (IAI)"" interchangeably when we apply various XAI tools to a given dataset to explain the reasons that underpin machine learning (ML) outputs. However, these notions can sometimes be confusing because interpretation often has a subjective connotation, while explanations lean towards objective facts. We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset. It includes the domain of a mindset. At the core of this ambiguity is the duality of reasons, in which we can reason either outwards or inwards. When directed outwards, we want the reasons to make sense through the laws of nature. When turned inwards, we want the reasons to be happy, guided by the laws of the heart. While XAI and IAI share reason as the common notion for the goal of transparency, clarity, fairness, reliability, and accountability in the context of ethical AI and trustworthy AI (TAI), their differences lie in that XAI emphasizes the post-hoc analysis of a dataset, and IAI requires an a priori mindset of abstraction. This hypothesis can be proved by empirical experiments based on an open dataset and harnessed by High-Performance Computing (HPC). The demarcation of XAI and IAI is indispensable because it would be impossible to determine regulatory policies for many AI applications, especially in healthcare, human resources, banking, and finance. We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.",0 "The responsible AI (RAI) community has introduced numerous processes and artifacts (e.g., Model Cards, Transparency Notes, Data Cards) to facilitate transparency and support the governance of AI systems. While originally designed to scaffold and document AI development processes in technology companies, these artifacts are becoming central components of regulatory compliance under recent regulations such as the EU AI Act. Much prior work has explored the design of new RAI artifacts or their use by practitioners within technology companies. However, as RAI artifacts begin to play key roles in enabling external oversight, it becomes critical to understand how stakeholders--particularly those situated outside of technology companies who govern and audit industry AI deployments--perceive the efficacy of RAI artifacts. In this study, we conduct semi-structured interviews and design activities with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts. 
While participants believe that RAI artifacts are a valuable contribution to the broader AI governance ecosystem, many are concerned about their potential unintended, longer-term impacts on actors outside of technology companies (e.g., downstream end-users, policymakers, civil society stakeholders). We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry, impeding civil society and legal stakeholders' ability to protect downstream end-users from potential AI harms. Participants envision how structural changes, along with changes in how RAI artifacts are designed, used, and governed, could help redirect the role of artifacts to support more collaborative and proactive external oversight of AI systems. We discuss research and policy implications for RAI artifacts.",1 "We present our results regarding the automatic construction of a knowledge graph from historical documents related to the Chilean dictatorship period (1973-1990). Our approach consists of using LLMs to automatically recognize entities and relations between these entities, and also to perform resolution between these sets of values. In order to prevent hallucination, the interaction with the LLM is grounded in a simple ontology with 4 types of entities and 7 types of relations. To evaluate our architecture, we use a gold standard graph constructed using a small subset of the documents, and compare this to the graph obtained from our approach when processing the same set of documents. Results show that the automatic construction manages to recognize a good portion of all the entities in the gold standard, and that those not recognized are mostly explained by the level of granularity at which the information is structured in the graph, and not because the automatic approach misses an important entity in the graph. Looking forward, we expect this report will encourage work on other similar projects focused on enhancing research in humanities and social science, but we remark that better evaluation metrics are needed in order to accurately fine-tune these types of architectures.",0 "Recently, Meta has shifted towards AI-mediated ad targeting mechanisms that do not require advertisers to provide detailed targeting criteria, likely driven by excitement over AI capabilities as well as new data privacy policies and targeting changes agreed upon in civil rights settlements. At the same time, Meta has touted their ad preference controls as an effective mechanism for users to control the ads they see. Furthermore, Meta markets their targeting explanations as a transparency tool that allows users to understand why they saw certain ads and inform actions to control future ads. Our study evaluates the effectiveness of Meta's ""See less"" ad control and the actionability of ad targeting explanations following the shift to AI-mediated targeting. We conduct a large-scale study, randomly assigning 1304 participants to mark ""See less"" to Body Weight Control or Parenting topics, and collecting the ads and targeting explanations Meta shows to participants before and after the intervention. We find that utilizing the ""See less"" ad control for the topics we study does not significantly reduce the number of ads shown by Meta on these topics, and that the control is less effective for some users whose demographics are correlated with the topic. 
Furthermore, we find that the majority of ad targeting explanations for local ads made no reference to location-specific targeting criteria, and did not inform users why ads related to the topics they marked to ""See less"" of continued to be delivered. We hypothesize that the poor effectiveness of controls and lack of actionability in explanations are the result of the shift to AI-mediated targeting, for which explainability and transparency tools have not yet been developed. Our work thus provides evidence for the need for new methods for transparency and user control, suitable and reflective of increasingly complex AI-mediated ad delivery systems.",2 "In the realm of human activity recognition (HAR), the integration of explainable Artificial Intelligence (XAI) emerges as a critical necessity to elucidate the decision-making processes of complex models, fostering transparency and trust. Traditional explanatory methods like Class Activation Mapping (CAM) and attention mechanisms, although effective in highlighting regions vital for decisions in various contexts, prove inadequate for HAR. This inadequacy stems from the inherently abstract nature of HAR data, rendering these explanations obscure. In contrast, state-of-the-art post-hoc interpretation techniques for time series can explain the model from other perspectives. However, this requires extra effort. It usually takes 10 to 20 seconds to generate an explanation. To overcome these challenges, we propose a novel, model-agnostic framework that enhances both the interpretability and efficacy of HAR models through the strategic use of competitive data augmentation. This innovative approach does not rely on any particular model architecture, thereby broadening its applicability across various HAR models. By implementing competitive data augmentation, our framework provides intuitive and accessible explanations of model decisions, thereby significantly advancing the interpretability of HAR systems without compromising on performance.",0 "Mercari is the largest C2C e-commerce marketplace in Japan, having more than 20 million active monthly users. Search being the fundamental way to discover desired items, we have always had a substantial amount of data with implicit feedback. Although we actively take advantage of that to provide the best service for our users, the correlation of implicit feedback for such tasks as image quality assessment is not trivial. Many traditional lines of research in Machine Learning (ML) are similarly motivated by the insatiable appetite of Deep Learning (DL) models for well-labelled training data. Weak supervision is about leveraging higher-level and/or noisier supervision over unlabeled data. Large Language Models (LLMs) are being actively studied and used for data labelling tasks. We present how we leverage a Chain-of-Thought (CoT) to enable an LLM to produce image aesthetics labels that correlate well with human behavior in e-commerce settings. Leveraging LLMs is more cost-effective compared to explicit human judgment, while significantly improving the explainability of deep image quality evaluation, which is highly important for customer journey optimization at Mercari. We propose a cost-efficient LLM-driven approach for assessing and predicting image quality in e-commerce settings, which is very convenient for proof-of-concept testing. We show that our LLM-produced labels correlate with user behavior on Mercari. 
Finally, we show our results from an online experiment, where we achieved significant growth in sales on the web platform.",0 "Researchers and practitioners interested in computational politics rely on automatic content analysis tools to make sense of the large amount of political text available on the Web. Such tools should provide objective and subjective aspects at different granularity levels to make the analyses useful in practice. Existing methods produce interesting insights for objective aspects, but are limited for subjective ones, are often limited to national contexts, and have limited explainability. We introduce a text analysis framework which integrates both perspectives and provides fine-grained processing of subjective aspects. Information retrieval techniques and knowledge bases complement powerful natural language processing components to allow a flexible aggregation of results at different granularity levels. Importantly, the proposed bottom-up approach facilitates the explainability of the obtained results. We illustrate its functioning with insights on news outlets, political orientations, topics, individual entities, and demographic segments. The approach is instantiated on a large corpus of French news, but is designed to work seamlessly for other languages and countries.",0 "The causal capabilities of large language models (LLMs) are a matter of significant debate, with critical implications for the use of LLMs in societally impactful domains such as medicine, science, law, and policy. We conduct a ""behavioral"" study of LLMs to benchmark their capability in generating causal arguments. Across a wide range of tasks, we find that LLMs can generate text corresponding to correct causal arguments with high probability, surpassing the best-performing existing methods. Algorithms based on GPT-3.5 and 4 outperform existing algorithms on a pairwise causal discovery task (97%, 13 points gain), a counterfactual reasoning task (92%, 20 points gain), and event causality (86% accuracy in determining necessary and sufficient causes in vignettes). We perform robustness checks across tasks and show that the capabilities cannot be explained by dataset memorization alone, especially since LLMs generalize to novel datasets that were created after the training cutoff date. That said, LLMs exhibit unpredictable failure modes, and we discuss the kinds of errors that may be improved and the fundamental limits of LLM-based answers. Overall, by operating on the text metadata, LLMs bring capabilities so far understood to be restricted to humans, such as using collected knowledge to generate causal graphs or identifying background causal context from natural language. As a result, LLMs may be used by human domain experts to save effort in setting up a causal analysis, one of the biggest impediments to the widespread adoption of causal methods. Given that LLMs ignore the actual data, our results also point to a fruitful research direction of developing algorithms that combine LLMs with existing causal techniques. Code and datasets are available at https://github.com/py-why/pywhy-llm.",0 "Gastrointestinal cancer is a leading cause of cancer-related incidence and death, making it crucial to develop novel computer-aided diagnosis systems for early detection and enhanced treatment. Traditional approaches rely on the expertise of gastroenterologists to identify diseases; however, this process is subjective, and interpretation can vary even among expert clinicians.
Considering recent advancements in classifying gastrointestinal anomalies and landmarks in endoscopic and video capsule endoscopy images, this study proposes a hybrid model that combines the advantages of Transformers and Convolutional Neural Networks (CNNs) to enhance classification performance. Our model utilizes DenseNet201 as a CNN branch to extract local features and integrates a Swin Transformer branch for global feature understanding, combining both to perform the classification task. For the GastroVision dataset, our proposed model demonstrates excellent performance with Precision, Recall, F1 score, Accuracy, and Matthews Correlation Coefficient (MCC) of 0.8320, 0.8386, 0.8324, 0.8386, and 0.8191, respectively, showcasing its robustness against class imbalance and surpassing other CNNs as well as the Swin Transformer model. Similarly, for the Kvasir-Capsule, a large video capsule endoscopy dataset, our model outperforms all others, achieving overall Precision, Recall, F1 score, Accuracy, and MCC of 0.7007, 0.7239, 0.6900, 0.7239, and 0.3871. Moreover, we generated saliency maps to explain our model's focus areas, demonstrating its reliable decision-making process. The results underscore the potential of our hybrid CNN-Transformer model in aiding the early and accurate detection of gastrointestinal (GI) anomalies.",0 "To assist human fact-checkers, researchers have developed automated approaches for visual misinformation detection. These methods assign veracity scores by identifying inconsistencies between the image and its caption, or by detecting forgeries in the image. However, they neglect a crucial point of the human fact-checking process: identifying the original meta-context of the image. By explaining what is actually true about the image, fact-checkers can better detect misinformation, focus their efforts on check-worthy visual content, engage in counter-messaging before misinformation spreads widely, and make their explanation more convincing. Here, we fill this gap by introducing the task of automated image contextualization. We create 5Pils, a dataset of 1,676 fact-checked images with question-answer pairs about their original meta-context. Annotations are based on the 5 Pillars fact-checking framework. We implement a first baseline that grounds the image in its original meta-context using the content of the image and textual evidence retrieved from the open web. Our experiments show promising results while highlighting several open challenges in retrieval and reasoning. We make our code and data publicly available.",0 "In Explainable AI (XAI), counterfactual explanations (CEs) are a well-studied method to communicate feature relevance through contrastive reasoning of ""what if"" to explain AI models' predictions. However, they only focus on important (i.e., relevant) features and largely disregard less important (i.e., irrelevant) ones. Such irrelevant features can be crucial in many applications, especially when users need to ensure that an AI model's decisions are not affected or biased against specific attributes such as gender, race, religion, or political affiliation. To address this gap, the concept of alterfactual explanations (AEs) has been proposed. AEs explore an alternative reality of ""no matter what"", where irrelevant features are substituted with alternative features (e.g., ""republicans"" -> ""democrats"") within the same attribute (e.g., ""politics"") while maintaining a similar prediction output. 
This serves to validate whether AI model predictions are influenced by the specified attributes. Despite the promise of AEs, there is a lack of computational approaches to systematically generate them, particularly in the text domain, where creating AEs for AI text classifiers presents unique challenges. This paper addresses this challenge by formulating AE generation as an optimization problem and introducing MoMatterXAI, a novel algorithm that generates AEs for text classification tasks. Our approach achieves high fidelity of up to 95% while preserving context similarity of over 90% across multiple models and datasets. A human study further validates the effectiveness of AEs in explaining AI text classifiers to end users. All codes will be publicly available.",0 "Modern language models (LMs) can learn to perform new tasks in different ways: in instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly with a small number of examples; in instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description before making predictions. Each of these procedures may be thought of as invoking a different form of reasoning: instruction following involves deductive reasoning, few-shot prompting involves inductive reasoning, and instruction inference involves abductive reasoning. How do these different capabilities relate? Across four LMs (from the gpt and llama families) and two learning problems (involving arithmetic functions and machine translation) we find a strong dissociation between the different types of reasoning: LMs can sometimes learn effectively from few-shot prompts even when they are unable to explain their own prediction rules; conversely, they sometimes infer useful task descriptions while completely failing to learn from human-generated descriptions of the same task. Our results highlight the non-systematic nature of reasoning even in some of today's largest LMs, and underscore the fact that very different learning mechanisms may be invoked by seemingly similar prompting procedures.",0 "Assessing the forensic value of hand images involves the use of unique features and patterns present in an individual's hand. The human hand has distinct characteristics, such as the pattern of veins, fingerprints, and the geometry of the hand itself. This paper investigates the use of vision transformers (ViTs) for classification of hand images. We use explainability tools to explore the internal representations of ViTs and assess their impact on the model outputs. Utilizing the internal understanding of ViTs, we introduce distillation methods that allow a student model to adaptively extract knowledge from a teacher model while learning on data of a different domain to prevent catastrophic forgetting. Two publicly available hand image datasets are used to conduct a series of experiments to evaluate performance of the ViTs and our proposed adaptive distillation methods. The experimental results demonstrate that ViT models significantly outperform traditional machine learning methods and the internal states of ViTs are useful for explaining the model outputs in the classification task. By averting catastrophic forgetting, our distillation methods achieve excellent performance on data from both source and target domains, particularly when these two domains exhibit significant dissimilarity. 
The proposed approaches therefore can be developed and implemented effectively for real-world applications such as access control, identity verification, and authentication systems.",0 "We investigate the evaporation of trace amounts of helium solvated in liquid water using molecular dynamics simulations and theory. Consistent with experimental observations, we find a super-Maxwellian distribution of kinetic energies of evaporated helium. This excess of kinetic energy over typical thermal expectations is explained by an effective continuum theory of evaporation based on a Fokker-Planck equation, parameterized molecularly by a potential of mean force and position-dependent friction. Using this description, we find that helium evaporation is strongly influenced by the friction near the interface, which is anomalously small near the Gibbs dividing surface due to the ability of the liquid-vapor interface to deform around the gas particle. Our reduced description provides a mechanistic interpretation of trace gas evaporation as the motion of an underdamped particle in a potential subject to a viscous environment that varies rapidly across the air-water interface. From it we predict the temperature dependence of the excess kinetic energy of evaporation, which is yet to be measured.",0 "The in-in formalism provides a way to systematically organize the calculation of primordial correlation functions. Although its theoretical foundations are now firmly settled, the treatment of total time derivative interactions, incorrectly trivialized as ``boundary terms'', has been the subject of intense discussions and conceptual mistakes. In this work, we demystify the use of total time derivatives -- as well as terms proportional to the linear equations of motion -- and show that they can lead to artificially large contributions cancelling at different orders of the in-in operator formalism. We discuss the treatment of total time derivative interactions in the Lagrangian path integral formulation of the in-in perturbation theory, and we showcase the importance of interaction terms proportional to linear equations of motion. We then provide a new route to the calculation of primordial correlation functions, which avoids the generation of total time derivatives, by working directly at the level of the full Hamiltonian in terms of phase-space variables. Instead of integrating by parts, we perform canonical transformations to simplify interactions. We explain how to retrieve correlation functions of the initial phase-space variables from the knowledge of the ones after canonical transformations. As an important first application, we find the explicit sizes of Hamiltonian cubic interactions in single-field inflation with canonical kinetic terms and for any background evolution [...]. Our results are important for performing complete calculations of exchange diagrams in inflation, such as the (scalar and tensor) exchange trispectrum and the one-loop power spectrum. Being already written in a form amenable to characterize quantum properties of primordial fluctuations, they also promise to shed light on the non-linear dynamics of quantum states during inflation.",0 "As machine learning (ML) models and datasets increase in complexity, the demand for methods that enhance explainability and interpretability becomes paramount. Prototypes, by encapsulating essential characteristics within data, offer insights that enable tactical decision-making and enhance transparency. 
Traditional prototype methods often rely on sub-symbolic raw data and opaque latent spaces, reducing explainability and increasing the risk of misinterpretations. This paper presents a novel framework that utilizes semantic descriptions to define prototypes and provide clear explanations, effectively addressing the shortcomings of conventional methods. Our approach leverages concept-based descriptions to cluster data on the semantic level, ensuring that prototypes not only represent underlying properties intuitively but are also straightforward to interpret. Our method simplifies the interpretative process and effectively bridges the gap between complex data structures and human cognitive processes, thereby enhancing transparency and fostering trust. Our approach outperforms existing widely-used prototype methods in facilitating human understanding and informativeness, as validated through a user survey.",0 "Poverty is a serious issue that harms humanity's progress. The simplest solution is to apply a one-size-fits-all policy to alleviate it. Nevertheless, each region has its own unique issues, which require tailored solutions. From the perspective of spatial analysis, neighboring regions can provide useful information for analyzing the issues of a given region. In this work, we propose inferred regional boundaries for Thailand that explain poverty dynamics better than the usual government administrative regions. The proposed regions maximize a trade-off between poverty-related features and geographical coherence. We use spatial analysis together with Moran's clustering algorithms and Bayesian hierarchical regression models, with the potential to assist the implementation of the right policies to alleviate poverty. We found that all variables considered show positive spatial autocorrelation. The results of the analysis illustrate that 1) Northern, Northeastern, and to a lesser extent Northcentral Thailand are the regions that require more attention with respect to poverty, 2) Northcentral, Northeastern, Northern, and Southern Thailand present dramatically low levels of education, income, and savings compared with large cities such as Bangkok-Pattaya and Central Thailand, and 3) Bangkok-Pattaya is the only region whose average years of education exceed 12, which corresponds approximately to completing senior high school.",0 "Differential privacy is a popular privacy-enhancing technology that has been deployed both in industry and government agencies. Unfortunately, existing explanations of differential privacy fail to set accurate privacy expectations for data subjects, which depend on the choice of deployment model. We design and evaluate new explanations of differential privacy for the local and central models, drawing inspiration from prior work explaining other privacy-enhancing technologies. We find that consequences-focused explanations in the style of privacy nutrition labels that lay out the implications of differential privacy are a promising approach for setting accurate privacy expectations. Further, we find that while process-focused explanations are not enough to set accurate privacy expectations, combining consequences-focused explanations with a brief description of how differential privacy works leads to greater trust.",0 "An edit summary is a succinct comment written by a Wikipedia editor explaining the nature of, and reasons for, an edit to a Wikipedia page.
Edit summaries are crucial for maintaining the encyclopedia: they are the first thing seen by content moderators and they help them decide whether to accept or reject an edit. Additionally, edit summaries constitute a valuable data source for researchers. Unfortunately, as we show, for many edits, summaries are either missing or incomplete. To overcome this problem and help editors write useful edit summaries, we propose recommending edit summaries generated by a language model trained to produce good summaries given the representation of an edit diff. To overcome the challenges of mixed-quality training data and efficiency requirements imposed by the scale of Wikipedia, we fine-tune a small generative language model on a curated mix of human and synthetic data. Our model performs on par with human editors. Commercial large language models are able to solve this task better than human editors, but are not well suited for Wikipedia, while open-source ones fail on this task. More broadly, we showcase how language modeling technology can be used to support humans in maintaining one of the largest and most visible projects on the Web.",0 "Quality assessment, which evaluates the visual quality level of multimedia experiences, has garnered significant attention from researchers and has evolved substantially through dedicated efforts. Before the advent of large models, quality assessment typically relied on small expert models tailored for specific tasks. While these smaller models are effective at handling their designated tasks and predicting quality levels, they often lack explainability and robustness. With the advancement of large models, which align more closely with human cognitive and perceptual processes, many researchers are now leveraging the prior knowledge embedded in these large models for quality assessment tasks. This emergence of quality assessment within the context of large models motivates us to provide a comprehensive review focusing on two key aspects: 1) the assessment of large models, and 2) the role of large models in assessment tasks. We begin by reflecting on the historical development of quality assessment. Subsequently, we move to detailed discussions of related works concerning quality assessment in the era of large models. Finally, we offer insights into the future progression and potential pathways for quality assessment in this new era. We hope this survey will enable a rapid understanding of the development of quality assessment in the era of large models and inspire further advancements in the field.",0 "Code summary generation is the task of writing natural language descriptions of a section of source code. Recent advances in Large Language Models (LLMs) and other AI-based technologies have helped make automatic code summarization a reality. However, the summaries these approaches write tend to focus on a narrow area of code. The results are summaries that explain what that function does internally, but lack a description of why the function exists or its purpose in the broader context of the program. In this paper, we present an approach for including this context in recent LLM-based code summarization. The input to our approach is a Java method and the project in which that method exists. The output is a succinct English description of why the method exists in the project. The core of our approach is a 350M-parameter language model we train, which can be run locally to ensure privacy. We train the model in two steps.
First, we distill knowledge about code summarization from a large model; then we fine-tune the model using data from a study of human programmers who were asked to write code summaries. We find that our approach outperforms GPT-4 on this task.",0 "Transparency in automated systems could be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to catastrophic outcomes (such as anxiety) that could outweigh its benefits? It is quite unclear how the specificity of explanations (level of transparency) influences recipients, especially in autonomous driving (AD). In this work, we examined the effects of transparency mediated through varying levels of explanation specificity in AD. We first extended a data-driven explainer model by adding a rule-based option for explanation generation in AD, and then conducted a within-subject lab study with 39 participants in an immersive driving simulator to study the effect of the resulting explanations. Specifically, our investigation focused on: (1) how different types of explanations (specific vs. abstract) affect passengers' perceived safety, anxiety, and willingness to take control of the vehicle when the vehicle perception system makes erroneous predictions; and (2) the relationship between passengers' behavioural cues and their feelings during the autonomous drives. Our findings showed that passengers felt safer with specific explanations when the vehicle's perception system had minimal errors, while abstract explanations that hid perception errors led to lower feelings of safety. Anxiety levels increased when specific explanations revealed perception system errors (high transparency). We found no significant link between passengers' visual patterns and their anxiety levels. Our study suggests that passengers prefer clear and specific explanations (high transparency) when they originate from autonomous vehicles (AVs) with optimal perceptual accuracy.",1 "Understanding how people behave in strategic settings--where they make decisions based on their expectations about the behavior of others--is a long-standing problem in the behavioral sciences. We conduct the largest study to date of strategic decision-making in the context of initial play in two-player matrix games, analyzing over 90,000 human decisions across more than 2,400 procedurally generated games that span a much wider space than previous datasets. We show that a deep neural network trained on these data predicts people's choices better than leading theories of strategic behavior, indicating that there is systematic variation that is not explained by those theories. We then modify the network to produce a new, interpretable behavioral model, revealing what the original network learned about people: their ability to optimally respond and their capacity to reason about others are dependent on the complexity of individual games. This context-dependence is critical in explaining deviations from the rational Nash equilibrium, response times, and uncertainty in strategic decisions.
More broadly, our results demonstrate how machine learning can be applied beyond prediction to further help generate novel explanations of complex human behavior.",0 "Large Language Models (LLMs) face limitations in the International Classification of Diseases (ICD) coding task: they often produce inaccurate and incomplete predictions due to the high-dimensional and skewed distribution of ICD codes, and they often lack interpretability and reliability. To address these limitations, we introduce an innovative multi-agent approach for ICD coding which mimics the ICD coding assignment procedure in real-world settings, comprising five distinct agents: the patient, physician, coder, reviewer, and adjuster. Each agent utilizes an LLM-based model tailored to its specific role within the coding process. We also integrate the system with the Electronic Health Record (EHR) SOAP (subjective, objective, assessment and plan) structure to boost performance. We compare our method with a system of agents designed solely by LLMs and other strong baselines and evaluate it using the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. Our multi-agent coding framework significantly outperforms Zero-shot Chain of Thought (CoT) prompting and self-consistency with CoT (CoT-SC) in coding common and rare ICD codes. An ablation study validates the effectiveness of the designated agent roles. Our framework also outperforms the LLM-designed agent system. Moreover, our method achieves comparable results to state-of-the-art ICD coding methods that require extensive pre-training or fine-tuning, and outperforms them in rare code accuracy and explainability. Additionally, we demonstrate the method's practical applicability by presenting its performance in scenarios not limited by the common or rare ICD code constraints. The proposed multi-agent method for ICD coding effectively mimics the real-world coding process and improves performance on both common and rare codes.",0 "This paper investigates the voting behaviors of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2, their biases, and how they align with human voting patterns. Our methodology involved using a dataset from a human voting experiment to establish a baseline for human preferences and conducting a corresponding experiment with LLM agents. We observed that the choice of voting methods and the presentation order influenced LLM voting outcomes. We found that varying the persona can reduce some of these biases and enhance alignment with human choices. While the Chain-of-Thought approach did not improve prediction accuracy, it has potential for AI explainability in the voting process. We also identified a trade-off between preference diversity and alignment accuracy in LLMs, influenced by different temperature settings. Our findings indicate that LLMs may lead to less diverse collective outcomes and biased assumptions when used in voting scenarios, emphasizing the need for cautious integration of LLMs into democratic processes.",0 "In human interaction, gestures serve various functions such as marking speech rhythm, highlighting key elements, and supplementing information. These gestures are also observed in explanatory contexts. However, the impact of gestures on explanations provided by virtual agents remains underexplored. A user study with 190 participants was carried out to investigate how different types of gestures influence perceived interaction quality and listener understanding.
This study addresses the effect of gestures in explanation by developing an embodied virtual explainer integrating both beat gestures and iconic gestures to enhance its automatically generated verbal explanations. Our model combines beat gestures generated by a learned speech-driven synthesis module with manually captured iconic gestures, supporting the agent's verbal expressions about the board game Quarto! as an explanation scenario. Findings indicate that neither the use of iconic gestures alone nor their combination with beat gestures outperforms the baseline or beat-only conditions in terms of understanding. Nonetheless, compared to prior research, the embodied agent significantly enhances understanding.",2 "As AI models become ever more complex and intertwined in humans' daily lives, greater levels of interactivity of explainable AI (XAI) methods are needed. In this paper, we propose the use of belief change theory as a formal foundation for operators that model the incorporation of new information, i.e. user feedback in interactive XAI, to logical representations of data-driven classifiers. We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations in a principled manner, providing warranted behaviour and favouring transparency and accountability of such interactions. Concretely, we first define a novel, logic-based formalism to represent explanatory information shared between humans and machines. We then consider real world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated. Finally, we analyse a core set of belief change postulates, discussing their suitability for our real world settings and pointing to particular challenges that may require the relaxation or reinterpretation of some of the theoretical assumptions underlying existing operators.",0 "Recent advancements in Graph Neural Networks (GNNs) have spurred an upsurge of research dedicated to enhancing the explainability of GNNs, particularly in critical domains such as medicine. A promising approach is the self-explaining method, which outputs explanations along with predictions. However, existing self-explaining models require a large amount of training data, rendering them unavailable in few-shot scenarios. To address this challenge, in this paper, we propose a Meta-learned Self-Explaining GNN (MSE-GNN), a novel framework that generates explanations to support predictions in few-shot settings. MSE-GNN adopts a two-stage self-explaining structure, consisting of an explainer and a predictor. Specifically, the explainer first imitates the attention mechanism of humans to select the explanation subgraph, whereby attention is naturally paid to regions containing important characteristics. Subsequently, the predictor mimics the decision-making process, which makes predictions based on the generated explanation. Moreover, with a novel meta-training process and a designed mechanism that exploits task information, MSE-GNN can achieve remarkable performance on new few-shot tasks. Extensive experimental results on four datasets demonstrate that MSE-GNN can achieve superior performance on prediction tasks while generating high-quality explanations compared with existing methods. The code is publicly available at https://github.com/jypeng28/MSE-GNN.",0 "This pictorial aims to critically consider the nature of text-to-audio and text-to-music generative tools in the context of explainable AI. 
As a group of experimental musicians and researchers, we are enthusiastic about the creative potential of these tools and have sought to understand and evaluate them from perspectives of prompt creation, control, usability, understandability, explainability of the AI process, and overall aesthetic effectiveness of the results. One of the challenges we have identified that is not explicitly addressed by these tools is the inherent semantic gap in using text-based tools to describe something as abstract as music. Other gaps include explainability vs. usability, and user control and input vs. the human creative process. The aim of this pictorial is to raise questions for discussion and make a few general suggestions on the kinds of improvements we would like to see in generative AI music tools.",0 "When the initial vision of Explainable AI (XAI) was articulated, the most popular framing was to open the (proverbial) ""black-box"" of AI so that we could understand the inner workings. With the advent of Large Language Models (LLMs), the very ability to open the black-box is increasingly limited, especially when it comes to non-AI-expert end-users. In this paper, we challenge the assumption of ""opening"" the black-box in the LLM era and argue for a shift in our XAI expectations. Highlighting the epistemic blind spots of an algorithm-centered XAI view, we argue that a human-centered perspective can be a path forward. We operationalize the argument by synthesizing XAI research along three dimensions: explainability outside the black-box, explainability around the edges of the black box, and explainability that leverages infrastructural seams. We conclude with takeaways that reflexively inform XAI as a domain.",0 "Human-Object Interaction (HOI) detection is a fundamental task in image understanding. While deep-learning-based HOI methods provide high performance in terms of mean Average Precision (mAP), they are computationally expensive and opaque in training and inference processes. An Efficient HOI (EHOI) detector is proposed in this work to strike a good balance between detection performance, inference complexity, and mathematical transparency. EHOI is a two-stage method. In the first stage, it leverages a frozen object detector to localize the objects and extract various features as intermediate outputs. In the second stage, the first-stage outputs predict the interaction type using the XGBoost classifier. Our contributions include the application of error correction codes (ECCs) to encode rare interaction cases, which reduces the model size and the complexity of the XGBoost classifier in the second stage. Additionally, we provide a mathematical formulation of the relabeling and decision-making process. Apart from the architecture, we present qualitative results to explain the functionalities of the feedforward modules. Experimental results demonstrate the advantages of ECC-coded interaction labels and the excellent balance of detection performance and complexity of the proposed EHOI method.",0 "Attribution scores reflect how important the feature values in an input entity are for the output of a machine learning model. One of the most popular attribution scores is the SHAP score, which is an instantiation of the general Shapley value used in coalition game theory. The definition of this score relies on a probability distribution on the entity population.
Since the exact distribution is generally unknown, it needs to be assigned subjectively or be estimated from data, which may lead to misleading feature scores. In this paper, we propose a principled framework for reasoning on SHAP scores under unknown entity population distributions. In our framework, we consider an uncertainty region that contains the potential distributions, and the SHAP score of a feature becomes a function defined over this region. We study the basic problems of finding maxima and minima of this function, which allows us to determine tight ranges for the SHAP scores of all features. In particular, we pinpoint the complexity of these problems, and other related ones, showing them to be NP-complete. Finally, we present experiments on a real-world dataset, showing that our framework may contribute to a more robust feature scoring.",0 "Decoder-only transformers are the backbone of the popular generative pre-trained transformer (GPT) series of large language models. In this work, we apply this framework to the analysis of clinical heart time-series data, to create two pre-trained general-purpose cardiac models, termed PPG-PT and ECG-PT. We place a special emphasis on making both such pre-trained models fully interpretable. This is achieved firstly through aggregate attention maps which show that, in order to make predictions, the model focuses on similar points in previous cardiac cycles and gradually broadens its attention in deeper layers. Next, we show that tokens with the same value, which occur at different distinct points in the electrocardiography (ECG) and photoplethysmography (PPG) cycle, form separate clusters in high-dimensional space. The clusters form according to phase, as the tokens propagate through the transformer blocks. Finally, we highlight that individual attention heads respond to specific physiologically relevant features, such as the dicrotic notch in PPG and the P-wave in ECG. It is also demonstrated that these pre-trained models are straightforward to fine-tune for tasks such as classification of atrial fibrillation (AF), and beat detection in photoplethysmography. For the example of AF, the fine-tuning took 11 minutes of computer time, and achieved the respective leave-one-subject-out AUCs of 0.99 and 0.93 for ECG and PPG within the MIMIC Perform AF dataset. In addition, the fine-tuned beat detector achieved a state-of-the-art F1 score of 98%, as well as uniquely providing a beat confidence level which acts as a signal quality estimator. Importantly, the fine-tuned models for AF screening are also fully explainable, with attention shifting to regions in the context that are strongly indicative of atrial fibrillation.",0 "Schr\""{o}dinger bridge--a stochastic dynamical generalization of optimal mass transport--exhibits a learning-control duality. Viewed as a stochastic control problem, the Schr\""{o}dinger bridge finds an optimal control policy that steers a given joint state statistics to another while minimizing the total control effort subject to controlled diffusion and deadline constraints. Viewed as a stochastic learning problem, the Schr\""{o}dinger bridge finds the most-likely distribution-valued trajectory connecting endpoint distributional observations, i.e., solves the two-point boundary-constrained maximum likelihood problem over the manifold of probability distributions.
Recent works have shown that solving the Schr\""{o}dinger bridge problem with state cost requires finding the Markov kernel associated with a reaction-diffusion PDE where the state cost appears as a state-dependent reaction rate. We explain how ideas from Weyl calculus in quantum mechanics, specifically the Weyl operator and the Weyl symbol, can help determine such Markov kernels. We illustrate these ideas by explicitly finding the Markov kernel for the case of quadratic state cost via Weyl calculus, recovering our earlier results but avoiding tedious computation with Hermite polynomials.",0 "Blind people use artificial intelligence-enabled visual assistance technologies (AI VAT) to gain visual access in their everyday lives, but these technologies are embedded with errors that may be difficult to verify non-visually. Previous studies have primarily explored sighted users' understanding of AI output and created vision-dependent explainable AI (XAI) features. We extend this body of literature by conducting an in-depth qualitative study with 26 blind people to understand their verification experiences and preferences. We begin by describing errors blind people encounter, highlighting how AI VAT fails to support complex document layouts, diverse languages, and cultural artifacts. We then illuminate how blind people make sense of AI through experimenting with AI VAT, employing non-visual skills, strategically including sighted people, and cross-referencing with other devices. Participants provided detailed opportunities for designing accessible XAI, such as affordances to support contestation. Informed by disability studies framework of misfitting and fitting, we unpacked harmful assumptions with AI VAT, underscoring the importance of celebrating disabled ways of knowing. Lastly, we offer practical takeaways for Responsible AI practice to push the field of accessible XAI forward.",1 "Our goal is a modern approach to answering questions via systematic reasoning where answers are supported by human interpretable proof trees grounded in an NL corpus of authoritative facts. Such a system would help alleviate the challenges of interpretability and hallucination with modern LMs, and the lack of grounding of current explanation methods (e.g., Chain-of-Thought). This paper proposes a new take on Prolog-based inference engines, where we replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA as entailment tree proof search, going beyond earlier work explaining known-to-be-true facts from text. In experiments, NELLIE outperforms a similar-sized state-of-the-art reasoner [Tafjord et al., 2022] while producing knowledge-grounded explanations. We also find NELLIE can exploit both semi-structured and NL text corpora to guide reasoning. Together these suggest a new way to jointly reap the benefits of both modern neural methods and traditional symbolic reasoning.",0 "Explainable Artificial Intelligence (XAI) has become critical in enhancing the transparency and trustworthiness of AI systems, especially as these systems are increasingly deployed in high-stakes domains such as healthcare and finance. Despite the progress made in developing explanation generation techniques for individual machine learning (ML) models, significant challenges remain in achieving coherent and comprehensive explanations in multi-model systems. 
This paper addresses these challenges by focusing on the explanation reconciliation problem (ERP) within multi-model systems. Traditional explanation generation techniques often fall short in multi-model contexts, where explanations from different models can conflict and fail to form a cohesive narrative. Through the use of probabilistic argumentation and knowledge representation techniques, we propose a framework for generating holistic explanations that align with human cognitive processes. Our approach involves mapping uncertain explanation information to probabilistic arguments and introducing criteria for explanation reconciliation based on user perspectives such as optimism, pessimism, and fairness. In addition, we introduce the relative independence assumption to optimise the search space for computational explanations.",0 "Research on hate speech has predominantly revolved around detection and interpretation from textual inputs, leaving verbal content largely unexplored. While there has been limited exploration into hate speech detection within verbal acoustic speech inputs, the aspect of interpretability has been overlooked. Therefore, we introduce a new task of explainable audio hate speech detection. Specifically, we aim to identify the precise time intervals, referred to as audio frame-level rationales, which serve as evidence for hate speech classification. Towards this end, we propose two different approaches: cascading and End-to-End (E2E). The cascading approach initially converts audio to transcripts, identifies hate speech within these transcripts, and subsequently locates the corresponding audio time frames. Conversely, the E2E approach processes audio utterances directly, which allows it to pinpoint hate speech within specific time frames. Additionally, due to the lack of explainable audio hate speech datasets that include audio frame-level rationales, we curated a synthetic audio dataset to train our models. We further validated these models on actual human speech utterances and found that the E2E approach outperforms the cascading method in terms of the audio frame Intersection over Union (IoU) metric. Furthermore, we observed that including frame-level rationales significantly enhances hate speech detection accuracy for the E2E approach. \textbf{Disclaimer} The reader may encounter content of an offensive or hateful nature. However, given the nature of the work, this cannot be avoided.",0 "Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off and static explanations, which cannot cater to the diverse backgrounds and understanding levels of users. With this paper, we investigate if free-form conversations can enhance users' comprehension of static explanations, improve acceptance and trust in the explanation methods, and facilitate human-AI collaboration. A total of 304 participants are presented with static explanations, followed by a conversation with a human expert regarding the explanations. We measure the effect of the conversation on participants' ability to choose, from three machine learning models, the most accurate one based on explanations and their self-reported comprehension, acceptance, and trust. Empirical results show that conversations significantly improve comprehension, acceptance, trust, and collaboration.
Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.",2 "Artificial neural networks have long been understood as ""black boxes"": though we know their computation graphs and learned parameters, the knowledge encoded by these weights and the functions they perform are not inherently interpretable. As such, from the early days of deep learning, there have been efforts to explain these models' behavior and understand them internally; and recently, mechanistic interpretability (MI) has emerged as a distinct research area studying the features and implicit algorithms learned by foundation models such as large language models. In this work, we aim to ground MI in the context of cognitive science, which has long struggled with analogous questions in studying and explaining the behavior of ""black box"" intelligent systems like the human brain. We leverage several important ideas and developments in the history of cognitive science to disentangle divergent objectives in MI and indicate a clear path forward. First, we argue that current methods are ripe to facilitate a transition in deep learning interpretation echoing the ""cognitive revolution"" in 20th-century psychology that shifted the study of human psychology from pure behaviorism toward mental representations and processing. Second, we propose a taxonomy mirroring key parallels in computational neuroscience to describe two broad categories of MI research, semantic interpretation (what latent representations are learned and used) and algorithmic interpretation (what operations are performed over representations), to elucidate their divergent goals and objects of study. Finally, we elaborate the parallels and distinctions between various approaches in both categories, analyze the respective strengths and weaknesses of representative works, clarify underlying assumptions, outline key challenges, and discuss the possibility of unifying these modes of interpretation under a common framework.",0 "Deep learning models used for medical image classification tasks are often constrained by the limited amount of training data along with severe class imbalance. Despite these problems, models should be explainable to enable human trust in the models' decisions to ensure wider adoption in high-risk situations. In this paper, we propose PRECISe, an explainable-by-design model meticulously constructed to concurrently address all three challenges. Evaluation on 2 imbalanced medical image datasets reveals that PRECISe outperforms the current state-of-the-art methods on data-efficient generalization to minority classes, achieving an accuracy of ~87% in detecting pneumonia in chest x-rays upon training on <60 images only. Additionally, a case study is presented to highlight the model's ability to produce easily interpretable predictions, reinforcing its practical utility and reliability for medical imaging tasks.",0 "Complex sequential decision-making planning problems covering infinite state spaces have been shown to be solvable by AlphaZero-type algorithms. Such an approach, which trains a neural model while simulating projections of futures with a Monte Carlo Tree Search algorithm, has been shown to be applicable to real-life planning problems. As such, engineers and users interacting with the resulting policy of behavior might benefit from obtaining automated explanations about these planners' decisions offline or online.
This paper focuses on the information within the Monte Carlo Tree Search data structure. Given its construction, this information contains much of the reasoning of the sequential decision-making algorithm and is essential for its explainability. We present novel methods using information-theoretic tools for the simplification and reduction of the Monte Carlo Tree Search and the extraction of information. Such information can be directly used for the construction of human-understandable explanations. We show that basic explainability quantities can be calculated with limited additional computational cost, as an integrated part of the Monte Carlo Tree Search construction process. We focus on the theoretical and algorithmic aspects and provide examples of how the methods presented here can be used in the construction of human-understandable explanations.",0 "The explainability of machine learning algorithms is crucial, and numerous methods have emerged recently. Local, post-hoc methods assign an attribution score to each feature, indicating its importance for the prediction. However, these methods require recalculating explanations for each example. On the other hand, while global approaches exist, they often produce explanations that are either overly simplistic and unreliable or excessively complex. To bridge this gap, we propose GLEAMS, a novel method that partitions the input space and learns an interpretable model within each sub-region, thereby providing both faithful local and global surrogates. We demonstrate GLEAMS' effectiveness on both synthetic and real-world data, highlighting its desirable properties and human-understandable insights.",0 "We share observations and challenges from an ongoing effort to implement Explainable AI (XAI) in a domain-specific workflow for cybersecurity analysts. Specifically, we briefly describe a preliminary case study on the use of XAI for source code classification, where accurate assessment and timeliness are paramount. We find that the outputs of state-of-the-art saliency explanation techniques (e.g., SHAP or LIME) are lost in translation when interpreted by people with little AI expertise, despite these techniques being marketed for non-technical users. Moreover, we find that popular XAI techniques offer fewer insights for real-time human-AI workflows when they are post hoc and too localized in their explanations. Instead, we observe that cyber analysts need higher-level, easy-to-digest explanations that can offer as little disruption as possible to their workflows. We outline unaddressed gaps in practical and effective XAI, then touch on how emerging technologies like Large Language Models (LLMs) could mitigate these existing obstacles.",0 "A growing number of safety-critical industries agree that building confidence in complex systems can be achieved through evidence and structured argumentation framed in assurance cases. Nevertheless, according to practical industry experience, assurance cases can easily become too rigorous and difficult to develop and maintain when applied to complex systems. Therefore, we propose to use contract-based development (CBD), a method to manage complexity originally developed in computer science, to simplify assurance cases by modularizing them. This paper will not only summarize relevant previous work such as constructing consistent modular assurance cases using CBD, but more importantly also propose a novel approach to integrate CBD with the argumentation in assurance case modules.
This approach will allow subject-matter and domain experts to build assurance case modules together without having to know CBD. This can enable broader application of these methods in industry, because subject-matter experts outside of computer science can contribute to cross-disciplinary co-development of assurance cases without having to learn CBD. Industry experience has proven four rules of thumb helpful for developing high-quality assurance cases. This article illustrates their usefulness and explains how modular assurance enables assurance that accounts for the interdependency of different concerns such as safety, security, and performance.",0 "Explainable Artificial Intelligence (xAI) has the potential to enhance the transparency and trust of AI-based systems. Although accurate predictions can be made using Deep Neural Networks (DNNs), the process used to arrive at such predictions is usually hard to explain. In terms of perceptibly human-friendly representations, such as word phrases in text or super-pixels in images, prototype-based explanations can justify a model's decision. In this work, we introduce a DNN architecture for image classification, the Enhanced Prototypical Part Network (EPPNet), which achieves strong performance while discovering relevant prototypes that can be used to explain the classification results. This is achieved by introducing a novel cluster loss that helps to discover more relevant human-understandable prototypes. We also introduce a faithfulness score to evaluate the explainability of the results based on the discovered prototypes. Our score not only accounts for the relevance of the learned prototypes but also for the performance of a model. Our evaluations on the CUB-200-2011 dataset show that the EPPNet outperforms state-of-the-art xAI-based methods, in terms of both classification accuracy and explainability.",0 "We present a novel framework designed to extend model reconciliation approaches, commonly used in human-aware planning, for enhanced human-AI interaction. By adopting a structured argumentation-based dialogue paradigm, our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user), where the goal is for the explainee to understand the explainer's decision. We formally describe the operational semantics of our proposed framework, providing theoretical guarantees. We then evaluate the framework's efficacy ``in the wild'' via computational and 13 human-subject experiments. Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.",1 "Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs.
Informed by a preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.",2 "In recent years, the threat facing airports from growing and increasingly sophisticated cyberattacks has become evident. Airports are considered a strategic national asset, so protecting them from attacks, specifically cyberattacks, is a crucial mission. One way to increase airports' security is by using Digital Twins (DTs). This paper demonstrates how DTs can enhance the security mission. The integration of DTs with Generative AI (GenAI) algorithms can lead to synergy and new frontiers in fighting cyberattacks. The paper exemplifies ways to model cyberattack scenarios using simulations and generate synthetic data for testing defenses. It also discusses how DTs can be used as a crucial tool for vulnerability assessment by identifying weaknesses, prioritizing, and accelerating remediations in case of cyberattacks. Moreover, the paper demonstrates approaches for anomaly detection and threat hunting using Machine Learning (ML) and GenAI algorithms. Additionally, the paper provides impact prediction and recovery coordination methods that can be used by DT operators and stakeholders. It also introduces ways to harness the human factor by integrating training and simulation algorithms with Explainable AI (XAI) into the DT platforms. Lastly, the paper offers future applications and technologies that can be utilized in DT environments.",0 "Sewer pipe faults, such as leaks and blockages, can lead to severe consequences including groundwater contamination, property damage, and service disruption. Traditional inspection methods rely heavily on the manual review of CCTV footage collected by mobile robots, which is inefficient and susceptible to human error. To automate this process, we propose a novel system incorporating explainable deep learning anomaly detection combined with sequential probability ratio testing (SPRT). The anomaly detector processes single image frames, providing interpretable spatial localisation of anomalies, whilst the SPRT introduces temporal evidence aggregation, enhancing robustness against noise over sequences of image frames. Experimental results demonstrate improved anomaly detection performance, highlighting the benefits of the combined spatiotemporal analysis system for reliable and robust sewer inspection.",1 "Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed as a black box that impedes practitioners from trusting and employing trained agents in realistic scenarios where high security and reliability are essential.
To alleviate this issue, a large volume of literature has been devoted to shedding light on the inner workings of intelligent agents, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy where prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote learning efficiency and performance of agents, although this kind of method is often ignored in the XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open-source code is collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.",2 "The deterministic and time-reversal symmetric dynamics of isolated quantum systems is at odds with irreversible equilibration observed in generic thermodynamic systems. Standard approaches to reconciliation employ subjective restrictions on the space of observables or states and do not explain how a single macroscopic quantum system achieves equilibrium dynamically. We instead argue that quantum theory is an effective theory and requires corrections to accurately describe systems approaching the thermodynamic limit. We construct a stochastic extension of quantum theory which is practically identical to quantum mechanics for microscopic systems, yet allows single, isolated macroscopic systems to objectively thermalize, generically. A fluctuation-dissipation relation guarantees physical consistency including norm preservation, energy conservation, no superluminal signalling and the emergence of microcanonical equilibrium. We further discuss the inclusion of objective collapse, thereby realizing a falsifiable theory of spontaneous universal irreversibility which describes the quantum-to-classical crossover dynamics of macroscopic quantum systems. This model admits spontaneous symmetry breaking, quantum state reduction and objective quantum thermalization for individual systems while realizing an emergent hybrid, Born-Maxwell-Boltzmann-Gibbs-microcanonical distribution for ensembles.",2 "The Internet of Medical Things (IoMT) transcends traditional medical boundaries, enabling a transition from reactive treatment to proactive prevention. This innovative method revolutionizes healthcare by facilitating early disease detection and tailored care, particularly in chronic disease management, where IoMT automates treatments based on real-time health data collection. Nonetheless, its benefits are countered by significant security challenges that endanger the lives of its users due to the sensitivity and value of the processed data, thereby attracting malicious interests. Moreover, the utilization of wireless communication for data transmission exposes medical data to interception and tampering by cybercriminals. Additionally, anomalies may arise due to human error, network interference, or hardware malfunctions. In this context, anomaly detection based on Machine Learning (ML) is an interesting solution, but it comes up against obstacles in terms of explicability and privacy protection. 
To address these challenges, a new framework for Intrusion Detection Systems is introduced, leveraging Artificial Neural Networks for intrusion detection while utilizing Federated Learning for privacy preservation. Additionally, eXplainable Artificial Intelligence methods are incorporated to enhance model explanation and interpretation. The efficacy of the proposed framework is evaluated and compared with centralized approaches using multiple datasets containing network and medical data, simulating various attack types impacting the confidentiality, integrity, and availability of medical and physiological data. The results offer compelling evidence that the FL method performs comparably to the centralized method, demonstrating high performance. Additionally, it affords the dual advantage of safeguarding privacy and providing model explanation while adhering to ethical principles.",2 "Information asymmetry in financial markets, often amplified by strategically crafted corporate narratives, undermines the effectiveness of conventional textual analysis. We propose a novel multimodal framework for financial risk assessment that integrates textual sentiment with paralinguistic cues derived from executive vocal tract dynamics in earnings calls. Central to this framework is the Physics-Informed Acoustic Model (PIAM), which applies nonlinear acoustics to robustly extract emotional signatures from raw teleconference sound subject to distortions such as signal clipping. Both acoustic and textual emotional states are projected onto an interpretable three-dimensional Affective State Label (ASL) space-Tension, Stability, and Arousal. Using a dataset of 1,795 earnings calls (approximately 1,800 hours), we construct features capturing dynamic shifts in executive affect between scripted presentation and spontaneous Q&A exchanges. Our key finding reveals a pronounced divergence in predictive capacity: while multimodal features do not forecast directional stock returns, they explain up to 43.8% of the out-of-sample variance in 30-day realized volatility. Importantly, volatility predictions are strongly driven by emotional dynamics during executive transitions from scripted to spontaneous speech, particularly reduced textual stability and heightened acoustic instability from CFOs, and significant arousal variability from CEOs. An ablation study confirms that our multimodal approach substantially outperforms a financials-only baseline, underscoring the complementary contributions of acoustic and textual modalities. By decoding latent markers of uncertainty from verifiable biometric signals, our methodology provides investors and regulators a powerful tool for enhancing market interpretability and identifying hidden corporate uncertainty.",0 "Eye-movement related artifacts including blinks and saccades are significantly larger in amplitude than cortical activity as recorded by scalp electroencephalography (EEG), but are typically discarded in EEG studies focusing on cognitive mechanisms as explained by cortical source activity. Accumulating evidence however indicates that spontaneous eye blinks are not necessarily random, and can be modulated by attention and cognition beyond just physiological necessities. In this exploratory analysis we reanalyze a public EEG dataset of musicians listening to or imagining music (Bach chorales) while simultaneously reading from a sheet of music. 
We ask whether blink timing in reading music, accompanied by listening or imagery, is sufficient to uniquely identify the music being read from a given score. Intra-subject blink counts and timing are compared across trials using a spike train distance metric (Victor and Purpura, 1997). One-trial-left-out cross-validation is used to identify the music being read with above chance level accuracy (best subject: 56\%, chance: 25\%), where accuracy is seen to vary with subject, condition, and a tunable cost factor for time shifts. Future studies may consider incorporating eye blink contributions to brain decoding, especially in wearables where eye blinks could be easier to record than EEG given their higher amplitudes.",0 "We present BiasLab, a dataset of 300 political news articles annotated for perceived ideological bias. These articles were selected from a curated 900-document pool covering diverse political events and source biases. Each article is labeled by crowdworkers along two independent scales, assessing sentiment toward the Democratic and Republican parties, and enriched with rationale indicators. The annotation pipeline incorporates targeted worker qualification and was refined through pilot-phase analysis. We quantify inter-annotator agreement, analyze misalignment with source-level outlet bias, and organize the resulting labels into interpretable subsets. Additionally, we simulate annotation using schema-constrained GPT-4o, enabling direct comparison to human labels and revealing mirrored asymmetries, especially in misclassifying subtly right-leaning content. We define two modeling tasks: perception drift prediction and rationale type classification, and report baseline performance to illustrate the challenge of explainable bias detection. BiasLab's rich rationale annotations provide actionable interpretations that facilitate explainable modeling of political bias, supporting the development of transparent, socially aware NLP systems. We release the dataset, annotation schema, and modeling code to encourage research on human-in-the-loop interpretability and the evaluation of explanation effectiveness in real-world settings.",0 "With the growing availability of urban data and the increasing complexity of societal challenges, visual analytics has become essential for deriving insights into pressing real-world problems. However, analyzing such data is inherently complex and iterative, requiring expertise across multiple domains. The need to manage diverse datasets, distill intricate workflows, and integrate various analytical methods presents a high barrier to entry, especially for researchers and urban experts who lack proficiency in data management, machine learning, and visualization. Advancements in large language models offer a promising solution to lower the barriers to the construction of analytics systems by enabling users to specify intent rather than define precise computational operations. However, this shift from explicit operations to intent-based interaction introduces challenges in ensuring alignment throughout the design and development process. Without proper mechanisms, gaps can emerge between user intent, system behavior, and analytical outcomes. To address these challenges, we propose Urbanite, a framework for human-AI collaboration in urban visual analytics. Urbanite leverages a dataflow-based model that allows users to specify intent at multiple scopes, enabling interactive alignment across the specification, process, and evaluation stages of urban analytics. 
Based on findings from a survey conducted to uncover these challenges, Urbanite incorporates features to facilitate explainability, multi-resolution definition of tasks across dataflows, nodes, and parameters, while supporting the provenance of interactions. We demonstrate Urbanite's effectiveness through usage scenarios created in collaboration with urban experts. Urbanite is available at https://urbantk.org/urbanite.",0 "The performance of machine learning models relies heavily on the quality of input data, yet real-world applications often face significant data-related challenges. A common issue arises when curating training data or deploying models: two datasets from the same domain may exhibit differing distributions. While many techniques exist for detecting such distribution shifts, there is a lack of comprehensive methods to explain these differences in a human-understandable way beyond opaque quantitative metrics. To bridge this gap, we propose a versatile framework of interpretable methods for comparing datasets. Using a variety of case studies, we demonstrate the effectiveness of our approach across diverse data modalities -- including tabular data, text data, images, and time-series signals -- in both low- and high-dimensional settings. These methods complement existing techniques by providing actionable and interpretable insights to better understand and address distribution shifts.",0 "The pursuit of general-purpose artificial intelligence depends on large language models (LLMs) that can handle both structured reasoning and open-ended generation. We present Omni-Thinker, a unified reinforcement learning (RL) framework that scales LLMs across diverse tasks by combining hybrid rewards with backward-transfer-guided scheduling. Hybrid rewards integrate rule-based verifiable signals with preference-based evaluations from an LLM-as-a-Judge, enabling learning in both deterministic and subjective domains. Our scheduler orders tasks according to accuracy backward transfer (BWT), reducing forgetting and improving multi-task performance. Experiments across four domains show gains of 6.2% over joint training and 12.4% over model merging. Moreover, we demonstrate that simple assumptions on accuracy transfer yield accurate predictions of curriculum outcomes, with entropy dynamics explaining deviations due to generative tasks. These findings underscore the importance of BWT-aware scheduling and hybrid supervision for scaling RL-based post-training toward general-purpose LLMs.",0 "Hallucinated outputs from large language models (LLMs) pose risks in the medical domain, especially for lay audiences making health-related decisions. Existing automatic factual consistency evaluation methods, such as entailment- and question-answering (QA)-based methods, struggle with plain language summarization (PLS) due to the elaborative explanation phenomenon, which introduces external content (e.g., definitions, background, examples) absent from the scientific abstract to enhance comprehension. To address this, we introduce PlainQAFact, an automatic factual consistency evaluation metric trained on a fine-grained, human-annotated dataset PlainFact, for evaluating factual consistency of both source-simplified and elaborately explained sentences. PlainQAFact first classifies sentence type, then applies a retrieval-augmented QA scoring method. 
Empirical results show that existing evaluation metrics fail to evaluate the factual consistency in PLS, especially for elaborative explanations, whereas PlainQAFact consistently outperforms them across all evaluation settings. We further analyze PlainQAFact's effectiveness across external knowledge sources, answer extraction strategies, answer overlap measures, and document granularity levels, refining its overall factual consistency assessment. Taken together, our work presents the first evaluation metric designed for PLS factual consistency evaluation, providing the community with both a robust benchmark and a practical tool to advance reliable and safe plain language communication in the medical domain. PlainQAFact and PlainFact are available at: https://github.com/zhiwenyou103/PlainQAFact",0 "The ""black box"" nature of Large Reasoning Models (LRMs) presents critical limitations in reliability and transparency, fueling the debate around the ""illusion of thinking"" and the challenge of state hallucinations in agentic systems. In response, we introduce The STAR-XAI Protocol (Socratic, Transparent, Agentic, Reasoning - for eXplainable Artificial Intelligence), a novel operational methodology for training and operating verifiably reliable AI agents. Our method reframes the human-AI interaction as a structured Socratic dialogue governed by an explicit, evolving symbolic rulebook (the Consciousness Transfer Package - CTP) and a suite of integrity protocols, including a state-locking Checksum that eradicates internal state corruption. Through an exhaustive case study in the complex strategic game ""Caps i Caps,"" we demonstrate that this ""Clear Box"" framework transforms an opaque LRM into a disciplined strategist. The agent not only exhibits the emergence of complex tactics, such as long-term planning, but also achieves ante-hoc transparency by justifying its intentions before acting. Crucially, it demonstrates Second-Order Agency by identifying and correcting flaws in its own supervisor-approved plans, leading to empirically-proven, 100% reliable state tracking and achieving ""zero hallucinations by design."" The STAR-XAI Protocol thus offers a practical pathway toward building AI agents that are not just high-performing but intrinsically auditable, trustworthy, and reliable.",1 "Hierarchical multi-agent systems (HMAS) organize collections of agents into layered structures that help manage complexity and scale. These hierarchies can simplify coordination, but they also can introduce trade-offs that are not always obvious. This paper proposes a multi-dimensional taxonomy for HMAS along five axes: control hierarchy, information flow, role and task delegation, temporal layering, and communication structure. The intent is not to prescribe a single ""best"" design but to provide a lens for comparing different approaches. Rather than treating these dimensions in isolation, the taxonomy is connected to concrete coordination mechanisms - from the long-standing contract-net protocol for task allocation to more recent work in hierarchical reinforcement learning. Industrial contexts illustrate the framework, including power grids and oilfield operations, where agents at production, maintenance, and supply levels coordinate to diagnose well issues or balance energy demand. These cases suggest that hierarchical structures may achieve global efficiency while preserving local autonomy, though the balance is delicate. 
The paper closes by identifying open challenges: making hierarchical decisions explainable to human operators, scaling to very large agent populations, and assessing whether learning-based agents such as large language models can be safely integrated into layered frameworks. This paper presents what appears to be the first taxonomy that unifies structural, temporal, and communication dimensions of hierarchical MAS into a single design framework, bridging classical coordination mechanisms with modern reinforcement learning and large language model agents.",2 "With the increasing prevalence of synthetic images, evaluating image authenticity and locating forgeries accurately while maintaining human interpretability remains a challenging task. Existing detection models primarily focus on simple authenticity classification, ultimately providing only a forgery probability or binary judgment, which offers limited explanatory insights into image authenticity. Moreover, while MLLM-based detection methods can provide more interpretable results, they still lag behind expert models in terms of pure authenticity classification accuracy. To address this, we propose DF-LLaVA, a simple yet effective framework that unlocks the intrinsic discrimination potential of MLLMs. Our approach first extracts latent knowledge from MLLMs and then injects it into training via prompts. This framework allows LLaVA to achieve outstanding detection accuracy exceeding expert models while still maintaining the interpretability offered by MLLMs. Extensive experiments confirm the superiority of our DF-LLaVA, achieving both high accuracy and explainability in synthetic image detection. Code is available online at: https://github.com/Eliot-Shen/DF-LLaVA.",0 "This work demonstrates a methodology for using deep learning to discover simple, practical criteria for classifying matrices based on abstract algebraic properties. By combining a high-performance neural network with explainable AI (XAI) techniques, we can distill a model's learned strategy into human-interpretable rules. We apply this approach to the challenging case of monotone matrices, defined by the condition that their inverses are entrywise nonnegative. Despite their simple definition, an easy characterization in terms of the matrix elements or the derived parameters is not known. Here, we present, to the best of our knowledge, the first systematic machine-learning approach for deriving a practical criterion that distinguishes monotone from non-monotone matrices. After establishing a labelled dataset of randomly generated monotone and non-monotone matrices with entries drawn uniformly on $(-1,1)$, we employ deep neural network algorithms for classifying the matrices as monotone or non-monotone, using both their entries and a comprehensive set of matrix features. Using saliency methods, such as integrated gradients, we identify, among all features, two matrix parameters which alone provide sufficient information for the matrix classification, with $95\%$ accuracy, namely the absolute values of the two lowest-order coefficients, $c_0$ and $c_1$, of the matrix's characteristic polynomial. 
A data-driven study of 18,000 random $7\times7$ matrices shows that the monotone class obeys $\lvert c_{0}/c_{1}\rvert\le0.18$ with probability $>99.98\%$; because $\lvert c_{0}/c_{1}\rvert = 1/\mathrm{tr}(A^{-1})$ for monotone $A$, this is equivalent to the simple bound $\mathrm{tr}(A^{-1})\ge5.7$.",0 "Classification models that provide human-interpretable explanations enhance clinicians' trust and usability in medical image diagnosis. One research focus is the integration and prediction of pathology-related visual attributes used by radiologists alongside the diagnosis, aligning AI decision-making with clinical reasoning. Radiologists use attributes like shape and texture as established diagnostic criteria and mirroring these in AI decision-making both enhances transparency and enables explicit validation of model outputs. However, the adoption of such models is limited by the scarcity of large-scale medical image datasets annotated with these attributes. To address this challenge, we propose synthesizing attribute-annotated data using a generative model. We enhance the Diffusion Model with attribute conditioning and train it using only 20 attribute-labeled lung nodule samples from the LIDC-IDRI dataset. Incorporating its generated images into the training of an explainable model boosts performance, increasing attribute prediction accuracy by 13.4% and target prediction accuracy by 1.8% compared to training with only the small real attribute-annotated dataset. This work highlights the potential of synthetic data to overcome dataset limitations, enhancing the applicability of explainable models in medical image analysis.",0 "This article presents a modular, component-based architecture for developing and evaluating AI agents that bridge the gap between natural language interfaces and complex enterprise data warehouses. The system directly addresses core challenges in data accessibility by enabling non-technical users to interact with complex data warehouses through a conversational interface, translating ambiguous user intent into precise, executable database queries to overcome semantic gaps. A cornerstone of the design is its commitment to transparent decision-making, achieved through a multi-layered reasoning framework that explains the ""why"" behind every decision, allowing for full interpretability by tracing conclusions through specific, activated business rules and data points. The architecture integrates a robust quality assurance mechanism via an automated evaluation framework that serves multiple functions: it enables performance benchmarking by objectively measuring agent performance against golden standards, and it ensures system reliability by automating the detection of performance regressions during updates. The agent's analytical depth is enhanced by a statistical context module, which quantifies deviations from normative behavior, ensuring all conclusions are supported by quantitative evidence including concrete data, percentages, and statistical comparisons. We demonstrate the efficacy of this integrated agent-development-with-evaluation framework through a case study on an insurance claims processing system. The agent, built on a modular architecture, leverages the BigQuery ecosystem to perform secure data retrieval, apply domain-specific business rules, and generate human-auditable justifications. 
The results confirm that this approach creates a robust, evaluable, and trustworthy system for deploying LLM-powered agents in data-sensitive, high-stakes domains.",0 "Reinforcement learning in large reasoning models enables learning from feedback on their outputs, making it particularly valuable in scenarios where fine-tuning data is limited. However, its application in multi-modal human activity recognition (HAR) domains remains largely underexplored. Our work extends reinforcement learning to the human activity recognition domain with multimodal large language models. By incorporating visual reinforcement learning in the training process, the model's generalization ability on few-shot recognition can be greatly improved. Additionally, visual reinforcement learning can enhance the model's reasoning ability and enable explainable analysis in the inference stage. We name our few-shot human activity recognition method with visual reinforcement learning FAVOR. Specifically, our approach first utilizes a multimodal large language model (MLLM) to generate multiple candidate responses for the human activity image, each containing reasoning traces and final answers. These responses are then evaluated using reward functions, and the MLLM model is subsequently optimized using the Group Relative Policy Optimization (GRPO) algorithm. In this way, the MLLM model can be adapted to human activity recognition with only a few samples. Extensive experiments on four human activity recognition datasets and five different settings demonstrate the superiority of the proposed method.",0 "This study introduces ""Survey and Questionnaire Item Embeddings Differentials"" (SQuID), a novel methodological approach that enables neural network embeddings to effectively recover latent dimensions from psychometric survey items. We demonstrate that embeddings derived from large language models, when processed with SQuID, can recover the structure of human values obtained from human rater judgments on the Revised Portrait Value Questionnaire (PVQ-RR). Our experimental validation compares multiple embedding models across a number of evaluation metrics. Unlike previous approaches, SQuID successfully addresses the challenge of obtaining negative correlations between dimensions without requiring domain-specific fine-tuning. Quantitative analysis reveals that our embedding-based approach explains 55% of variance in dimension-dimension similarities compared to human data. Multidimensional scaling configurations from both types of data show fair factor congruence coefficients and largely follow the underlying theory. These results demonstrate that semantic embeddings can effectively replicate psychometric structures previously established through extensive human surveys. The approach offers substantial advantages in cost, scalability and flexibility while maintaining comparable quality to traditional methods. Our findings have significant implications for psychometrics and social science research, providing a complementary methodology that could expand the scope of human behavior and experience represented in measurement tools.",0 "Growing excitement around deploying AI across various domains calls for a careful assessment of how human decision-makers interact with AI-powered systems. In particular, it is essential to understand when decision-makers voluntarily choose to consult AI tools, which we term decision-maker adoption. 
We interviewed experts across four domains -- medicine, law, journalism, and the public sector -- to explore current AI use cases and perceptions of adoption. From these interviews, we identify key factors that shape decision-maker adoption of AI tools: the decision-maker's background, perceptions of the AI, consequences for the decision-maker, and perceived implications for other stakeholders. We translate these factors into an AI adoption sheet to analyze how decision-makers approach adoption choices through comparative, cross-domain case studies, highlighting how our factors help explain inter-domain differences in adoption. Our findings offer practical guidance for supporting the responsible and context-aware deployment of AI by better accounting for the decision-maker's perspective.",0 "Scientific claim verification against tables typically requires predicting whether a claim is supported or refuted given a table. However, we argue that predicting the final label alone is insufficient: it reveals little about the model's reasoning and offers limited interpretability. To address this, we reframe table-text alignment as an explanation task, requiring models to identify the table cells essential for claim verification. We build a new dataset by extending the SciTab benchmark with human-annotated cell-level rationales. Annotators verify the claim label and highlight the minimal set of cells needed to support their decision. After the annotation process, we utilize the collected information and propose a taxonomy for handling ambiguous cases. Our experiments show that (i) incorporating table alignment information improves claim verification performance, and (ii) most LLMs, while often predicting correct labels, fail to recover human-aligned rationales, suggesting that their predictions do not stem from faithful reasoning.",0 "Modern deep neural networks have now reached human-level performance across a variety of tasks. However, unlike humans they lack the ability to explain their decisions by showing where and telling what concepts guided them. In this work, we present a unified framework for transforming any vision neural network into a spatially and conceptually interpretable model. We introduce a spatially-aware concept bottleneck layer that projects ""black-box"" features of pre-trained backbone models into interpretable concept maps, without requiring human labels. By training a classification layer over this bottleneck, we obtain a self-explaining model that articulates which concepts most influenced its prediction, along with heatmaps that ground them in the input image. Accordingly, we name this method ""Spatially-Aware and Label-Free Concept Bottleneck Model"" (SALF-CBM). Our results show that the proposed SALF-CBM: (1) Outperforms non-spatial CBM methods, as well as the original backbone, on a variety of classification tasks; (2) Produces high-quality spatial explanations, outperforming widely used heatmap-based methods on a zero-shot segmentation task; (3) Facilitates model exploration and debugging, enabling users to query specific image regions and refine the model's decisions by locally editing its concept maps.",1 "Alignment between human brain networks and artificial models has become an active research area in vision science and machine learning. A widely adopted approach is identifying ""metamers,"" stimuli physically different yet perceptually equivalent within a system. However, conventional methods lack a direct approach to searching for the human metameric space. 
Instead, researchers first develop biologically inspired models and then make indirect inferences about human metamers by testing whether model metamers also appear as metamers to humans. Here, we propose the Multidimensional Adaptive Metamer Exploration (MAME) framework, enabling direct, high-dimensional exploration of human metameric spaces through online image generation guided by human perceptual feedback. MAME modulates reference images across multiple dimensions based on hierarchical neural network responses, adaptively updating generation parameters according to participants' perceptual discriminability. Using MAME, we successfully measured multidimensional human metameric spaces within a single psychophysical experiment. Experimental results using a biologically plausible CNN model showed that human discrimination sensitivity was lower for metameric images based on low-level features compared to high-level features, which image contrast metrics could not explain. The finding suggests a relatively worse alignment between the metameric spaces of humans and the CNN model for low-level processing compared to high-level processing. Counterintuitively, given recent discussions on alignment at higher representational levels, our results highlight the importance of early visual computations in shaping biologically plausible models. Our MAME framework can serve as a future scientific tool for directly investigating the functional organization of human vision.",2 "This paper introduces the Comprehensive Applicant Profile Score (CAPS), a novel multi-modal framework designed to quantitatively model and interpret holistic college admissions evaluations. CAPS decomposes applicant profiles into three interpretable components: academic performance (Standardized Academic Score, SAS), essay quality (Essay Quality Index, EQI), and extracurricular engagement (Extracurricular Impact Score, EIS). Leveraging transformer-based semantic embeddings, LLM scoring, and XGBoost regression, CAPS provides transparent and explainable evaluations aligned with human judgment. Experiments on a synthetic but realistic dataset demonstrate strong performance, achieving an EQI prediction R^2 of 0.80, classification accuracy over 75%, a macro F1 score of 0.69, and a weighted F1 score of 0.74. CAPS addresses key limitations in traditional holistic review -- particularly the opacity, inconsistency, and anxiety faced by applicants -- thus paving the way for more equitable and data-informed admissions practices.",2 "The increasing digitization of smart grids has improved operational efficiency but also introduced new cybersecurity vulnerabilities, such as False Data Injection Attacks (FDIAs) targeting Automatic Generation Control (AGC) systems. While machine learning (ML) and deep learning (DL) models have shown promise in detecting such attacks, their opaque decision-making limits operator trust and real-world applicability. This paper proposes a hybrid framework that integrates lightweight ML-based attack detection with natural language explanations generated by Large Language Models (LLMs). Classifiers such as LightGBM achieve up to 95.13% attack detection accuracy with only 0.004 s inference latency. Upon detecting a cyberattack, the system invokes LLMs, including GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o mini, to generate a human-readable explanation of the event. 
Evaluated on 100 test samples, GPT-4o mini with 20-shot prompting achieved 93% accuracy in identifying the attack target, a mean absolute error of 0.075 pu in estimating attack magnitude, and a mean absolute error (MAE) of 2.19 seconds in estimating attack onset. These results demonstrate that the proposed framework effectively balances real-time detection with interpretable, high-fidelity explanations, addressing a critical need for actionable AI in smart grid cybersecurity.",0 "Pre-service teachers play a unique dual role as they straddle the roles of students and future teachers. This dual role requires them to adopt both the learner's and the instructor's perspectives while engaging with pedagogical and content knowledge. The current study investigates how pre-service elementary teachers taking a physical science course prompt AI to generate representations that effectively communicate conceptual ideas to two distinct audiences. The context involves participants interacting with AI to generate appropriate representations that explain the concepts of wave velocity to their elementary students (while casting themselves as teachers) and the Ideal Gas Law to their English teachers (while casting themselves as students). Emergent coding of the AI prompts highlights that, when acting as teachers, participants were more explicit in specifying the target audience, predetermining the type of representation, and producing a broader variety of representations compared to when they acted as students. Implications of the observed 'exploratory' and 'prescriptive' prompting trends across the two roles on pre-service teachers' education and their professional development are discussed.",0 "The BRIDGES meeting in gauge theory, extremal structures, and stability was held in June 2024 at l'Institut d'\'Etudes Scientifiques de Carg\`ese in Corsica, organized by Daniele Faenzi, Eveline Legendre, Eric Loubeau, and Henrique S\'a Earp. The first week was a summer school consisting of four independent but related lecture series by Oscar Garc\'ia Prada, Spiro Karigiannis, Laurent Manivel, and Ruxandra Moraru. The present document consists of notes for the lecture series by Spiro Karigiannis on ""Flows of geometric structures, especially $\mathrm{G}_2$-structures"". Some assistance in the preparation of these notes by the author was provided by several participants of the summer school. See the Comments field for more information. The main theme is short time existence (STE) and uniqueness for geometric flows. We first introduce geometric structures on manifolds and geometric flows of such structures. We discuss some qualitative features of geometric flows, and consider the notions of strong and weak parabolicity. We focus on the Ricci flow, explaining carefully the DeTurck trick to establish short-time existence and uniqueness, an argument which we then extend to a general class of geometric flows of Riemannian metrics, previewing similar ideas for flows of $\mathrm{G}_2$-structures. Finally, we consider geometric flows of $\mathrm{G}_2$-structures. We review the basics of $\mathrm{G}_2$-geometry and survey several different geometric flows of $\mathrm{G}_2$-structures. In particular, we clarify in what sense STE results for the $\mathrm{G}_2$ Laplacian flow differ from STE results for other geometric flows. 
We conclude with a summary of some recent results by the author with Dwivedi and Gianniotis, including a classification of all possible heat-type flows of $\mathrm{G}_2$-structures, and a sufficient condition for such a flow to admit STE and uniqueness by a modified DeTurck trick.",0 "Advances in generative models have led to AI-generated images visually indistinguishable from authentic ones. Despite numerous studies on detecting AI-generated images with classifiers, a gap persists between such methods and human cognitive forensic analysis. We present ForenX, a novel method that not only identifies the authenticity of images but also provides explanations that resonate with human thoughts. ForenX employs powerful multimodal large language models (MLLMs) to analyze and interpret forensic cues. Furthermore, we overcome the limitations of standard MLLMs in detecting forgeries by incorporating a specialized forensic prompt that directs the MLLMs' attention to forgery-indicative attributes. This approach not only enhances the generalization of forgery detection but also empowers the MLLMs to provide explanations that are accurate, relevant, and comprehensive. Additionally, we introduce ForgReason, a dataset dedicated to descriptions of forgery evidence in AI-generated images. Curated through collaboration between an LLM-based agent and a team of human annotators, the dataset provides refined data that further enhances our model's performance. We demonstrate that even limited manual annotations significantly improve explanation quality. We evaluate the effectiveness of ForenX on two major benchmarks. The model's explainability is verified by comprehensive subjective evaluations.",0 "The integration of Large Language Models (LLMs) into software engineering has revolutionized code generation, enabling unprecedented productivity through promptware and autonomous AI agents. However, this transformation introduces significant risks, including insecure code generation, hallucinated outputs, irreversible actions, and a lack of transparency and accountability. Incidents like the Replit database deletion underscore the urgent need for robust safety and governance mechanisms. This paper comprehensively analyzes the inherent challenges of LLM-assisted code generation, such as vulnerability inheritance, overtrust, misinterpretation, and the absence of standardized validation and rollback protocols. To address these, we propose the SAFE-AI Framework, a holistic approach emphasizing Safety, Auditability, Feedback, and Explainability. The framework integrates guardrails, sandboxing, runtime verification, risk-aware logging, human-in-the-loop systems, and explainable AI techniques to mitigate risks while fostering trust and compliance. We introduce a novel taxonomy of AI behaviors categorizing suggestive, generative, autonomous, and destructive actions to guide risk assessment and oversight. Additionally, we identify open problems, including the lack of standardized benchmarks for code-specific hallucinations and autonomy levels, and propose future research directions for hybrid verification, semantic guardrails, and proactive governance tools. 
Through detailed comparisons of autonomy control, prompt engineering, explainability, and governance frameworks, this paper provides a roadmap for responsible AI integration in software engineering, aligning with emerging regulations like the EU AI Act and Canada's AIDA to ensure safe, transparent, and accountable AI-driven development.",1 "Automating the generation of scientific videos is a crucial yet challenging task for effective knowledge dissemination. However, existing works on document automation primarily focus on static media such as posters and slides, lacking mechanisms for personalized dynamic orchestration and multimodal content synchronization. To address these challenges, we introduce VideoAgent, a novel multi-agent framework that synthesizes personalized scientific videos through a conversational interface. VideoAgent parses a source paper into a fine-grained asset library and, guided by user requirements, orchestrates a narrative flow that synthesizes both static slides and dynamic animations to explain complex concepts. To enable rigorous evaluation, we also propose SciVidEval, the first comprehensive suite for this task, which combines automated metrics for multimodal content quality and synchronization with a Video-Quiz-based human evaluation to measure knowledge transfer. Extensive experiments demonstrate that our method significantly outperforms existing commercial scientific video generation services and approaches human-level quality in scientific communication.",2 "Explainable AI (XAI) methods often struggle to generate clear, interpretable outputs for users without domain expertise. We introduce Feature-Guided Neighbor Selection (FGNS), a post hoc method that enhances interpretability by selecting class-representative examples using both local and global feature importance. In a user study (N = 98) evaluating Kannada script classifications, FGNS significantly improved non-experts' ability to identify model errors while maintaining appropriate agreement with correct predictions. Participants made faster and more accurate decisions compared to those given traditional k-NN explanations. Quantitative analysis shows that FGNS selects neighbors that better reflect class characteristics rather than merely minimizing feature-space distance, leading to more consistent selection and tighter clustering around class prototypes. These results support FGNS as a step toward more human-aligned model assessment, although further work is needed to address the gap between explanation quality and perceived trust.",2 "Protoplanetary disk evolution can be deeply influenced by the UV radiation emitted by neighboring massive stars (mainly of spectral type O and B). We show that the process of external photoevaporation, which causes an outside-in depletion of disk material due to environmental UV radiation, can lead to a significant decrease in disk size, and a moderate decrease in disk mass and lifetime, even at moderate irradiation levels (1-10 G$_{0}$). In this work we investigate the role of external photoevaporation in shaping the masses and sizes of the ten AGE-PRO disks in the Upper Scorpius region, which we estimate to be subject to FUV fluxes ranging between 2 and 12 G$_{0}$, on average. We compare the disk masses and sizes resulting from 1D numerical viscous evolution simulations in which the effect of external photoevaporation is included, to the values retrieved from the AGE-PRO observations. 
While the pure viscous framework fails in adequately explaining the observed disk properties in Upper Scorpius, with the inclusion of external photoevaporation we can successfully reproduce gas disk sizes for 7 out of 10 sources within a factor <2, when the initial disk mass is 1-10% of the stellar mass. We emphasize the importance of accounting for the environmental irradiation when comparing star-forming regions of different ages, even when moderate FUV irradiation fields are experienced, as in the case of Upper Scorpius.",0 "A novel hybrid method based on Mie theory and the Discrete Dipole Approximation (DDA) was developed to study the microscopic parameters governing the optical response of tunable photonic crystals (PC). The method is based on a two-step process. An effective polarizability derived from Mie theory is determined by equating the extinction efficiency of an isolated nanoparticle (NP) to the extinction efficiency of an equivalent particle considering the dipolar limit. Then, this effective polarizability is used in the DDA framework to compute the optical response of an interacting particle array constituting the PC structure. As a particular example, the method was applied to a linear array of core-shell magnetite@silica NPs to study the dependence of extinction and absorption on system parameters such as core radius, shell thickness, total radius, interparticle separation, and size distribution. The results indicate that an increase in these parameters leads to a redshift of the extinction peak as well as an increase in its $FWHM$. Finally, the method is applied to fitting experimental results on reflection/transmission measurements of magnetite@silica NPs colloids subjected to different magnetic field strengths with very good agreement. The presented method reduces the computational cost and time for the NPs sizes considered, and can be applied to PCs responsive to different stimuli such as mechanical stress, electric field and temperature, \textit{inter alia}.",0 "Since the advent of large language models (LLMs), research has focused on instruction following and deductive reasoning. A central question remains: can these models discover new knowledge, and how can we evaluate this ability? We address this by studying abductive reasoning-the generation of plausible hypotheses to explain observations-and introduce GEAR (General Evaluation for Abductive Reasoning), a general-purpose, fully automated, transparent, and label-free evaluation paradigm. GEAR scores hypothesis sets by three metrics: consistency (each hypothesis explains the observations), generalizability (consistent hypotheses make meaningful predictions on unseen inputs), and diversity (the set covers distinct predictions and patterns). Built this way, GEAR is scalable (no human gold answers), reliable (deterministic scoring aligned with classical abduction), and open-ended (scores improve only when models produce new plausible hypotheses, unlike static benchmarks that saturate once accuracy is high). Using GEAR, we conduct a fine-grained study of nine LLMs on four abduction benchmarks with 1,500 problems, generating over 50,000 candidate hypotheses and revealing model differences obscured by gold-answer or purely human evaluations. 
We further propose a momentum-based curriculum that adjusts GEAR-derived training data by learning velocity: it starts with what the model learns quickly and shifts toward harder objectives such as generating diverse hypotheses once the model is confident on foundational objectives. Without gold-label supervision, this strategy improves all GEAR objectives and these gains transfer to established abductive reasoning benchmarks. Taken together, GEAR provides a principled framework that evaluates abduction and supplies label-free, scalable training signals that help LLMs produce more diverse and reliable hypotheses.",0 "Feature attribution methods, such as SHAP and LIME, explain machine learning model predictions by quantifying the influence of each input component. When applying feature attributions to explain language models, a basic question is defining the interpretable components. Traditional feature attribution methods commonly treat individual words as atomic units. This is highly computationally inefficient for long-form text and fails to capture semantic information that spans multiple words. To address this, we present CafGa, an interactive tool for generating and evaluating feature attribution explanations at customizable granularities. CafGa supports customized segmentation with user interaction and visualizes the deletion and insertion curves for explanation assessments. Through a user study involving participants of varying expertise, we confirm CafGa's usefulness, particularly among LLM practitioners. Explanations created using CafGa were also perceived as more useful compared to those generated by two fully automatic baseline methods: PartitionSHAP and MExGen, suggesting the effectiveness of the system.",2 "Reinforcement learning agents can achieve super-human performance in complex decision-making tasks, but their behaviour is often difficult to understand and explain. This lack of explanation limits deployment, especially in safety-critical settings where understanding and trust are essential. We identify three core explanatory targets that together provide a comprehensive view of reinforcement learning agents: behaviour, outcomes, and predictions. We develop a unified theoretical framework for explaining these three elements of reinforcement learning agents through the influence of individual features that the agent observes in its environment. We derive feature influences by using Shapley values, which collectively and uniquely satisfy a set of well-motivated axioms for fair and consistent credit assignment. The proposed approach, Shapley Values for Explaining Reinforcement Learning (SVERL), provides a single theoretical framework to comprehensively and meaningfully explain reinforcement learning agents. It yields explanations with precise semantics that are not only interpretable but also mathematically justified, enabling us to identify and correct conceptual issues in prior explanations. Through illustrative examples, we show how SVERL produces useful, intuitive explanations of agent behaviour, outcomes, and predictions, which are not apparent from observing agent behaviour alone.",1 "Reinforcement learning (RL) has demonstrated remarkable success in solving complex decision-making problems, yet its adoption in critical domains is hindered by the lack of interpretability in its decision-making processes. 
Existing explainable AI (xAI) approaches often fail to provide meaningful explanations for RL agents, particularly because they overlook the contrastive nature of human reasoning--answering ""why this action instead of that one?"". To address this gap, we propose a novel contrastive learning framework to explain RL-selected actions, named $\textbf{VisionMask}$. VisionMask is trained to generate explanations by explicitly contrasting the agent's chosen action with alternative actions in a given state in a self-supervised manner. We demonstrate the efficacy of our method through experiments across diverse RL environments, evaluating it in terms of faithfulness, robustness, and complexity. Our results show that VisionMask significantly improves human understanding of agent behavior while maintaining accuracy and fidelity. Furthermore, we present examples illustrating how VisionMask can be used for counterfactual analysis. This work bridges the gap between RL and xAI, paving the way for safer and more interpretable RL systems.",1 "With the rapid growth of Artificial Intelligence, Large Language Models (LLMs) have become essential for Question Answering (QA) systems, improving efficiency and reducing human workload in customer service. The emergence of Vietnamese LLMs (ViLLMs) highlights lightweight open-source models as a practical choice for their accuracy, efficiency, and privacy benefits. However, domain-specific evaluations remain limited, and the absence of benchmark datasets reflecting real customer interactions makes it difficult for enterprises to select suitable models for support applications. To address this gap, we introduce the Customer Support Conversations Dataset (CSConDa), a curated benchmark of over 9,000 QA pairs drawn from real interactions with human advisors at a large Vietnamese software company. Covering diverse topics such as pricing, product availability, and technical troubleshooting, CSConDa provides a representative basis for evaluating ViLLMs in practical scenarios. We further present a comprehensive evaluation framework, benchmarking 11 lightweight open-source ViLLMs on CSConDa with both automatic metrics and syntactic analysis to reveal model strengths, weaknesses, and linguistic patterns. This study offers insights into model behavior, explains performance differences, and identifies key areas for improvement, supporting the development of next-generation ViLLMs. By establishing a robust benchmark and systematic evaluation, our work enables informed model selection for customer service QA and advances research on Vietnamese LLMs. The dataset is publicly available at https://huggingface.co/datasets/ura-hcmut/Vietnamese-Customer-Support-QA.",1 "Increasing deployment of large language models (LLMs) in real-world applications raises significant safety concerns. Most existing safety research focuses on evaluating LLM outputs or specific safety tasks, limiting its ability to address broader, undefined risks. Sparse Autoencoders (SAEs) facilitate interpretability research to clarify model behavior by explaining single-meaning atomic features decomposed from entangled signals. However, prior applications of SAEs do not interpret features with fine-grained safety-related concepts, thus inadequately addressing safety-critical behaviors, such as generating toxic responses and violating safety regulations. 
For rigorous safety analysis, we must extract a rich and diverse set of safety-relevant features that effectively capture these high-risk behaviors, yet face two challenges: identifying SAEs with the greatest potential for generating safety concept-specific neurons, and the prohibitively high cost of detailed feature explanation. In this paper, we propose Safe-SAIL, a framework for interpreting SAE features within LLMs to advance mechanistic understanding in safety domains. Our approach systematically identifies the SAE with the best concept-specific interpretability, explains safety-related neurons, and introduces efficient strategies to scale up the interpretation process. We will release a comprehensive toolkit including SAE checkpoints and human-readable neuron explanations, which supports empirical analysis of safety risks to promote research on LLM safety.",2 "As designers become familiar with Generative AI, a new concept is emerging: Agentic AI. While generative AI produces output in response to prompts, agentic AI systems promise to perform mundane tasks autonomously, potentially freeing designers to focus on what they love: being creative. But how do designers feel about integrating agentic AI systems into their workflows? Through design fiction, we investigated how designers want to interact with a collaborative agentic AI platform. Ten professional designers imagined and discussed collaborating with an AI agent to organise inspiration sources and ideate. Our findings highlight the roles AI agents can play in supporting designers, the division of authority between humans and AI, and how designers' intent can be explained to AI agents beyond prompts. We synthesise our findings into a conceptual framework that identifies authority distribution among humans and AI agents and discuss directions for utilising AI agents in future design workflows.",2 "Pose estimation refers to tracking a human's full body posture, including their head, torso, arms, and legs. The problem is challenging in practical settings where the number of body sensors is limited. Past work has shown promising results using conditional diffusion models, where the pose prediction is conditioned on both types of measurements from the sensors. Unfortunately, nearly all these approaches generalize poorly across users, primarily because location measurements are highly influenced by the body size of the user. In this paper, we formulate pose estimation as an inverse problem and design an algorithm capable of zero-shot generalization. Our idea utilizes a pre-trained diffusion model and conditions it on rotational measurements alone; the priors from this model are then guided by a likelihood term, derived from the measured locations. Thus, given any user, our proposed InPose method generatively estimates the highly likely sequence of poses that best explains the sparse on-body measurements.",2 "Classical planners are powerful systems, but modeling tasks in input formats such as PDDL is tedious and error-prone. In contrast, planning with Large Language Models (LLMs) allows for almost any input text, but offers no guarantees on plan quality or even soundness. In an attempt to merge the best of these two approaches, some work has begun to use LLMs to automate parts of the PDDL creation process. However, these methods still require various degrees of expert input or domain-specific adaptations. We present NL2Plan, the first fully automatic system for generating complete PDDL tasks from minimal natural language descriptions. 
NL2Plan uses an LLM to incrementally extract the necessary information from the short text input before creating a complete PDDL description of both the domain and the problem, which is finally solved by a classical planner. We evaluate NL2Plan on seven planning domains, five of which are novel and thus not in the LLM training data, and find that NL2Plan outperforms directly generating the files with an LLM+validator combination. As such, NL2Plan is a powerful tool for assistive PDDL modeling and a step towards solving natural language planning tasks with interpretability and guarantees.",0 "This position paper argues that annotation disagreement in Natural Language Inference (NLI) is not mere noise but often reflects meaningful variation, especially when triggered by ambiguity in the premise or hypothesis. While underspecified guidelines and annotator behavior contribute to variation, content-based ambiguity provides a process-independent signal of divergent human perspectives. We call for a shift toward ambiguity-aware NLI that first identifies ambiguous input pairs, classifies their types, and only then proceeds to inference. To support this shift, we present a framework that incorporates ambiguity detection and classification prior to inference. We also introduce a unified taxonomy that synthesizes existing taxonomies, illustrates key subtypes with examples, and motivates targeted detection methods that better align models with human interpretation. Although current resources lack datasets explicitly annotated for ambiguity and subtypes, this gap presents an opportunity: by developing new annotated resources and exploring unsupervised approaches to ambiguity detection, we enable more robust, explainable, and human-aligned NLI systems.",2 "Diffusion-weighted magnetic resonance imaging (dMRI) of the brain offers unique capabilities including noninvasive probing of tissue microstructure and structural connectivity. It is widely used for clinical assessment of disease and injury, and for neuroscience research. Analyzing the dMRI data to extract useful information for medical and scientific purposes can be challenging. The dMRI measurements may suffer from strong noise and artifacts, and may exhibit high inter-session and inter-scanner variability in the data, as well as inter-subject heterogeneity in brain structure. Moreover, the relationship between measurements and the phenomena of interest can be highly complex. Recent years have witnessed increasing use of machine learning methods for dMRI analysis. This manuscript aims to assess these efforts, with a focus on methods that have addressed data preprocessing and harmonization, microstructure mapping, tractography, and white matter tract analysis. We study the main findings, strengths, and weaknesses of the existing methods and suggest topics for future research. We find that machine learning may be exceptionally suited to tackle some of the difficult tasks in dMRI analysis. However, for this to happen, several shortcomings of existing methods and critical unresolved issues need to be addressed. There is a pressing need to improve evaluation practices, to increase the availability of rich training datasets and validation benchmarks, and to address concerns about model generalizability, reliability, and explainability.",0 "Large Language Models (LLMs) excel in translation among other things, demonstrating competitive performance for many language pairs in zero- and few-shot settings. 
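As a rough sketch of the NL2Plan-style pipeline summarized above (an LLM drafts the PDDL domain and problem, then a classical planner solves them): the `call_llm` callable and the planner command are illustrative placeholders, not part of the NL2Plan release.

```python
from pathlib import Path
import shutil
import subprocess

def nl2plan_style_pipeline(task_description: str, call_llm, planner: str = "fast-downward"):
    """Sketch of an LLM-to-classical-planner pipeline: the LLM drafts PDDL files,
    a classical planner solves them. `call_llm` is a hypothetical text-in/text-out
    callable; the planner executable name is a placeholder."""
    domain = call_llm(f"Write a PDDL domain for this task:\n{task_description}")
    problem = call_llm(f"Write the matching PDDL problem for this task:\n{task_description}")
    Path("domain.pddl").write_text(domain)
    Path("problem.pddl").write_text(problem)
    if shutil.which(planner):  # only invoke the planner if one is installed
        subprocess.run([planner, "domain.pddl", "problem.pddl"], check=False)
    else:
        print("No planner found on PATH; PDDL files written for inspection.")

# Stubbed usage so the sketch runs without an LLM or planner installed.
fake_llm = lambda prompt: "(define (domain stub))"
nl2plan_style_pipeline("Stack three blocks into a tower.", fake_llm)
```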
But unlike dedicated neural machine translation models, LLMs are not trained on any translation-related objective. What explains their remarkable translation abilities? Are these abilities grounded in ""incidental bilingualism"" (Briakou et al. 2023) in training data? Does instruction tuning contribute to it? Are LLMs capable of aligning and leveraging semantically identical or similar monolingual contents from different corners of the internet that are unlikely to fit in a single context window? I offer some reflections on this topic, informed by recent studies and growing user experience. My working hypothesis is that LLMs' translation abilities originate in two different types of pre-training data that may be internalized by the models in different ways. I discuss the prospects for testing the ""duality"" hypothesis empirically and its implications for reconceptualizing translation, human and machine, in the age of deep learning.",1 "Ontologies are a standard tool for creating semantic schemata in many knowledge intensive domains of human interest. They are becoming increasingly important also in the areas that have been until very recently dominated by subsymbolic knowledge representation and machine-learning (ML) based data processing. One such area is information security, and specifically, malware detection. We thus propose PE Malware Ontology that offers a reusable semantic schema for Portable Executable (PE - the Windows binary format) malware files. This ontology is inspired by the structure of the EMBER dataset, which focuses on the static malware analysis of PE files. With this proposal, we hope to provide a unified semantic representation for the existing and future PE-malware datasets and facilitate the application of symbolic, neuro-symbolic, or otherwise explainable approaches in the PE-malware-detection domain, which may produce interpretable results described by the terms defined in our ontology. In addition, we also publish semantically treated EMBER data, including fractional datasets, to support the reproducibility of experiments on EMBER. We supplement our work with a preliminary case study, conducted using concept learning, to show the general feasibility of our approach. While we were not able to match the precision of the state-of-the-art ML tools, the learned malware discriminators were interesting and highly interpretable.",0 "During virtual navigation, users exhibit varied interaction and navigation behaviors influenced by several factors. Existing theories and models have been developed to explain and predict these diverse patterns. While users often experience uncomfortable sensations, such as cybersickness, during virtual reality (VR) use, they do not always make optimal decisions to mitigate these effects. Although methods like reinforcement learning have been used to model decision-making processes, they typically rely on random selection to simulate actions, failing to capture the complexities of real navigation behavior. In this study, we propose curiosity as a key factor driving irrational decision-making, suggesting that users continuously balance exploration and cybersickness according to the free energy principle during virtual navigation. Our findings show that VR users generally adopt conservative strategies when navigating, with most participants displaying negative curiosity across trials. However, curiosity levels tend to rise when the virtual environment changes, illustrating the dynamic interplay between exploration and discomfort. 
This study provides a quantitative approach to decoding curiosity-driven behavior during virtual navigation, offering insights into how users balance exploration and the avoidance of cybersickness. Future research will further refine this model by incorporating additional psychological and environmental factors to improve the accuracy of navigation pattern predictions.",0 "Biomechanical features have become important indicators for evaluating athletes' techniques. Traditionally, experts propose significant features and evaluate them using physics equations. However, the complexity of the human body and its movements makes it challenging to explicitly analyze the relationships between some features and athletes' final performance. With advancements in modern machine learning and statistics, data analytics methods have gained increasing importance in sports analytics. In this study, we leverage machine learning models to analyze expert-proposed biomechanical features from the finals of long jump competitions in the World Championships. The objectives of the analysis include identifying the most important features contributing to top-performing jumps and exploring the combined effects of these key features. Using quantile regression, we model the relationship between the biomechanical feature set and the target variable (effective distance), with a particular focus on elite-level jumps. To interpret the model, we apply SHapley Additive exPlanations (SHAP) alongside Partial Dependence Plots (PDPs) and Individual Conditional Expectation (ICE) plots. The findings reveal that, beyond the well-documented velocity-related features, specific technical aspects also play a pivotal role. For male athletes, the angle of the knee of the supporting leg before take-off is identified as a key factor for achieving top 10% performance in our dataset, with angles greater than 169{\deg} contributing significantly to jump performance. In contrast, for female athletes, the landing pose and approach step technique emerge as the most critical features influencing top 10% performances, alongside velocity. This study establishes a framework for analyzing the impact of various features on athletic performance, with a particular emphasis on top-performing events.",2 "Large language models (LLMs) have demonstrated promising performance on medical benchmarks; however, their ability to perform medical calculations, a crucial aspect of clinical decision-making, remains underexplored and poorly evaluated. Existing benchmarks often assess only the final answer with a wide numerical tolerance, overlooking systematic reasoning failures and potentially causing serious clinical misjudgments. In this work, we revisit medical calculation evaluation with a stronger focus on clinical trustworthiness. First, we clean and restructure the MedCalc-Bench dataset and propose a new step-by-step evaluation pipeline that independently assesses formula selection, entity extraction, and arithmetic computation. Under this granular framework, the accuracy of GPT-4o drops from 62.7% to 43.6%, revealing errors masked by prior evaluations. Second, we introduce an automatic error analysis framework that generates structured attribution for each failure mode. Human evaluation confirms its alignment with expert judgment, enabling scalable and explainable diagnostics. Finally, we propose a modular agentic pipeline, MedRaC, that combines retrieval-augmented generation and Python-based code execution. 
Without any fine-tuning, MedRaC improves the accuracy of different LLMs from 16.35% up to 53.19%. Our work highlights the limitations of current benchmark practices and proposes a more clinically faithful methodology. By enabling transparent and transferable reasoning evaluation, we move closer to making LLM-based systems trustworthy for real-world medical applications.",1 "Understanding irradiation-induced strain in silicon carbide (SiC) is essential for designing radiation-tolerant ceramic materials. However, conventional methods often fail to resolve nanoscale strain gradients, especially in polycrystalline forms. In this study, we employ nano-beam precession electron diffraction (N-PED) to perform high-resolution, multi-directional strain mapping in both single-crystal 4H-SiC and polycrystalline {\alpha}-SiC subjected to helium and hydrogen ion irradiation. The high-resolution X-ray diffraction (HR-XRD) simulations of He + H irradiated single-crystal 4H-SiC closely match the strain profiles obtained from N-PED, demonstrating the reliability and accuracy of the N-PED method. In He-irradiated polycrystalline {\alpha}-SiC at high temperatures, a bubble-depleted zone (BDZ) near the grain boundary (GB) reveals that GBs act as active sinks for irradiation-induced defects. N-PED further shows strain amplification localized at the GBs, reaching up to 2.5%, along with strain relief within the BDZ. To explain this behavior, density functional theory (DFT) calculations of binding and migration energies indicate a strong tendency for Si, C, and He atoms to segregate toward the GB core. This segregation reduces the availability of vacancies to accommodate He atoms and leads to local strain relaxation near the GB. Furthermore, first-principles tensile simulations reveal that Si and C interstitials mitigate He-induced GB embrittlement. Charge density and DOS analyses link this effect to the bonding characteristics between point defects and neighboring atoms at GB. These insights underscore the importance of grain boundary engineering in enhancing radiation tolerance of SiC for nuclear and space applications.",1 "With the improving semantic understanding capability of Large Language Models (LLMs), they exhibit a greater awareness and alignment with human values, but this comes at the cost of transparency. Although promising results are achieved via experimental analysis, an in-depth understanding of the LLM's internal workings is essential to comprehend the reasoning behind the re-ranking, which provides end users with an explanation that enables them to make an informed decision. Moreover, in newly developed systems with limited user engagement and insufficient ranking data, accurately re-ranking content remains a significant challenge. While various training methods affect the training of LLMs and generate inference, our analysis has found that some training methods exhibit better explainability than others, implying that an accurate semantic understanding has not been learned through all training methods; instead, abstract knowledge has been gained to optimize evaluation, which raises questions about the true reliability of LLMs. Therefore, in this work, we analyze how different training methods affect the semantic understanding of the re-ranking task in LLMs and investigate whether these models can generate more informed textual reasoning to overcome the challenges of transparency of LLMs and limited training data. 
To analyze the LLMs for re-ranking tasks, we utilize a relatively small ranking dataset from the environmental and Earth science domains to re-rank retrieved content. Furthermore, we also analyze the explainable information to see whether the re-ranking can be reasoned about through explainability.",0 "Video quality assessment (VQA) aims to objectively quantify perceptual quality degradation in alignment with human visual perception. Despite recent advances, existing VQA models still suffer from two critical limitations: \textit{poor generalization to out-of-distribution (OOD) videos} and \textit{limited explainability}, which restrict their applicability in real-world scenarios. To address these challenges, we propose \textbf{VQAThinker}, a reasoning-based VQA framework that leverages large multimodal models (LMMs) with reinforcement learning to jointly model video quality understanding and scoring, emulating human perceptual decision-making. Specifically, we adopt group relative policy optimization (GRPO), a rule-guided reinforcement learning algorithm that enables reasoning over video quality under score-level supervision, and introduce three VQA-specific rewards: (1) a \textbf{bell-shaped regression reward} that increases rapidly as the prediction error decreases and becomes progressively less sensitive near the ground truth; (2) a \textbf{pairwise ranking reward} that guides the model to correctly determine the relative quality between video pairs; and (3) a \textbf{temporal consistency reward} that encourages the model to prefer temporally coherent videos over their perturbed counterparts. Extensive experiments demonstrate that VQAThinker achieves state-of-the-art performance on both in-domain and OOD VQA benchmarks, showing strong generalization for video quality scoring. Furthermore, evaluations on video quality understanding tasks validate its superiority in distortion attribution and quality description compared to existing explainable VQA models and LMMs. These findings demonstrate that reinforcement learning offers an effective pathway toward building generalizable and explainable VQA models solely with score-level supervision.",2 "With the rapid development of Internet technologies, web systems have become essential infrastructures for modern information exchange and business operations. However, alongside their expansion, numerous security vulnerabilities have emerged, making web security a critical research focus within the broader field of cybersecurity. These issues are closely related to data protection, privacy preservation, and business continuity, and systematic research on web security is crucial for mitigating malicious attacks and enhancing the reliability and robustness of network systems. This paper first reviews the OWASP Top 10, summarizing the types, causes, and impacts of common web vulnerabilities, and illustrates their exploitation mechanisms through representative cases. Building upon this, the Gruyere platform is adopted as an experimental subject for analyzing known vulnerabilities. The study presents detailed reproduction steps for specific vulnerabilities, proposes comprehensive remediation strategies, and further compares Gruyere's vulnerabilities with contemporary real-world cases. The findings suggest that, although Gruyere's vulnerabilities are relatively outdated, their underlying principles remain highly relevant for explaining a wide range of modern security flaws. 
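To make the three VQAThinker-style rewards above concrete, here is a minimal sketch; the Gaussian form of the bell-shaped term and the width `sigma` are assumptions, not the paper's exact formulation.

```python
import math

def bell_regression_reward(pred: float, gt: float, sigma: float = 5.0) -> float:
    """Bell-shaped reward: rises quickly as |pred - gt| shrinks and flattens
    near the ground truth (the Gaussian form and sigma are assumptions)."""
    return math.exp(-((pred - gt) ** 2) / (2 * sigma ** 2))

def pairwise_ranking_reward(pred_a: float, pred_b: float,
                            gt_a: float, gt_b: float) -> float:
    """1.0 if the predicted ordering of a video pair matches the ground-truth
    ordering, else 0.0."""
    return float((pred_a - pred_b) * (gt_a - gt_b) > 0)

def temporal_consistency_reward(score_clean: float, score_perturbed: float) -> float:
    """Reward preferring the temporally coherent video over its perturbed copy."""
    return float(score_clean > score_perturbed)

# Example: combine the terms into one scalar reward for a GRPO-style update.
reward = (bell_regression_reward(3.8, 4.0)
          + pairwise_ranking_reward(3.8, 2.1, 4.0, 2.5)
          + temporal_consistency_reward(3.8, 3.2))
print(round(reward, 3))
```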
Overall, this research demonstrates that web system security analysis based on Gruyere not only deepens the understanding of vulnerability mechanisms but also provides practical support for technological innovation and security defense.",0 "The use of children's drawings to examine their conceptual understanding has been proven to be an effective method, but there are two major problems with previous research: 1. The content of the drawings heavily relies on the task, and the ecological validity of the conclusions is low; 2. The interpretation of drawings relies too much on the subjective feelings of the researchers. To address this issue, this study uses the Large Language Model (LLM) to identify 1420 children's scientific drawings (covering 9 scientific themes/concepts), and uses the word2vec algorithm to calculate their semantic similarity. The study explores whether there are consistent drawing representations for children on the same theme, and attempts to establish a norm for children's scientific drawings, providing a baseline reference for follow-up children's drawing research. The results show that the representation of most drawings has consistency, manifested as most semantic similarities > 0.8. At the same time, it was found that the consistency of the representation is independent of the accuracy (of LLM's recognition), indicating the existence of consistency bias. In the subsequent exploration of influencing factors, we used the Kendall rank correlation coefficient to investigate the effects of ""sample size"", ""abstract degree"", and ""focus points"" on drawings, and used word frequency statistics to explore whether children represented abstract themes/concepts by reproducing what was taught in class. It was found that accuracy (of LLM's recognition) is the most sensitive indicator, and data such as sample size and semantic similarity are related to it; the consistency between classroom experiments and teaching purpose is also an important factor: many students focus more on the experiments themselves than on what they explain.",2 "We investigate the salience of extinction risk as a source of impatience. Our framework distinguishes between human extinction risk and individual mortality risk while allowing for various degrees of intergenerational altruism. Additionally, we consider the evolutionarily motivated ""selfish gene"" perspective. We find that the risk of human extinction is an indispensable component of the discount rate, whereas individual mortality risk can be hedged against - partially or fully, depending on the setup - through human reproduction. Overall, we show that in the face of extinction risk, people become more impatient rather than more farsighted. Thus, the greater the threat of extinction, the less incentive there is to invest in avoiding it. Our framework can help explain why humanity consistently underinvests in mitigation of catastrophic risks, ranging from climate change mitigation, via pandemic prevention, to addressing the emerging risks of transformative artificial intelligence.",1 "The Automated Audio Captioning (AAC) task asks models to generate natural language descriptions of an audio input. Evaluating these machine-generated audio captions is a complex task that requires considering diverse factors, among them, auditory scene understanding, sound-object inference, temporal coherence, and the environmental context of the scene. 
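A small sketch of the semantic-similarity check used in the children's-drawings study above: cosine similarity between word vectors of the labels assigned to two drawings, with 0.8 as the consistency threshold. The toy vectors below stand in for real word2vec embeddings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for word2vec embeddings of the labels an LLM
# assigned to two children's drawings of the same scientific theme.
embeddings = {
    "volcano":  np.array([0.8, 0.1, 0.3]),
    "eruption": np.array([0.7, 0.2, 0.4]),
}

sim = cosine_similarity(embeddings["volcano"], embeddings["eruption"])
# The study treats pairs above 0.8 as consistently represented.
print(f"semantic similarity = {sim:.2f}:", "consistent" if sim > 0.8 else "divergent")
```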
While current methods focus on specific aspects, they often fail to provide an overall score that aligns well with human judgment. In this work, we propose CLAIR-A, a simple and flexible method that leverages the zero-shot capabilities of large language models (LLMs) to evaluate candidate audio captions by directly asking LLMs for a semantic distance score. In our evaluations, CLAIR-A better predicts human judgements of quality compared to traditional metrics, with a 5.8% relative accuracy improvement compared to the domain-specific FENSE metric and up to 11% over the best general-purpose measure on the Clotho-Eval dataset. Moreover, CLAIR-A offers more transparency by allowing the language model to explain the reasoning behind its scores, with these explanations rated up to 30% better by human evaluators than those provided by baseline methods. CLAIR-A is made publicly available at https://github.com/DavidMChan/clair-a.",1 "In the Indian subcontinent, Telugu, one of India's six classical languages, is the most widely spoken Dravidian Language. Despite its 96 million speaker base worldwide, Telugu remains underrepresented in the global NLP and Machine Learning landscape, mainly due to lack of high-quality annotated resources. This work introduces TeSent, a comprehensive benchmark dataset for sentiment classification, a key text classification problem, in Telugu. TeSent not only provides ground truth labels for the sentences, but also supplements with provisions for evaluating explainability and fairness, two critical requirements in modern-day machine learning tasks. We scraped Telugu texts covering multiple domains from various social media platforms, news websites and web-blogs to preprocess and generate 26,150 sentences, and developed a custom-built annotation platform and a carefully crafted annotation protocol for collecting the ground truth labels along with their human-annotated rationales. We then fine-tuned several SOTA pre-trained models in two ways: with rationales, and without rationales. Further, we provide a detailed plausibility and faithfulness evaluation suite, which exploits the rationales, for six widely used post-hoc explainers applied on the trained models. Lastly, we curate TeEEC, Equity Evaluation Corpus in Telugu, a corpus to evaluate fairness of Telugu sentiment and emotion related NLP tasks, and provide a fairness evaluation suite for the trained classifier models. Our experimental results suggest that training with rationales may improve model accuracy, reduce bias in models, and make the explainers' output more aligned to human reasoning.",0 "Academic performance depends on a multivariable nexus of socio-academic and financial factors. This study investigates these influences to develop effective strategies for optimizing students' CGPA. To achieve this, we reviewed various literature to identify key influencing factors and constructed an initial hypothetical causal graph based on the findings. Additionally, an online survey was conducted, where 1,050 students participated, providing comprehensive data for analysis. Rigorous data preprocessing techniques, including cleaning and visualization, ensured data quality before analysis. Causal analysis validated the relationships among variables, offering deeper insights into their direct and indirect effects on CGPA. Regression models were implemented for CGPA prediction, while classification models categorized students based on performance levels. 
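The CLAIR-A idea above, asking an LLM directly for a semantic distance score plus a rationale, can be sketched as follows; the prompt wording and the `call_llm` callable are illustrative assumptions, not the released implementation.

```python
import json

def clair_a_style_score(candidate: str, references: list, call_llm) -> dict:
    """Ask an LLM for a 0-100 semantic-accuracy score plus a short rationale,
    in the spirit of CLAIR-A. `call_llm` is a hypothetical text-in/text-out
    callable; the prompt wording is illustrative."""
    prompt = (
        "You are rating an automatic audio caption.\n"
        f"Candidate caption: {candidate}\n"
        f"Reference captions: {references}\n"
        "Return JSON with fields 'score' (0-100, how semantically close the "
        "candidate is to the references) and 'reason' (one sentence)."
    )
    return json.loads(call_llm(prompt))

# Usage with a stubbed model so the sketch runs without an API key.
fake_llm = lambda p: '{"score": 78, "reason": "Captures the dog barking but misses the passing car."}'
print(clair_a_style_score("a dog barks loudly", ["a dog barking near a road"], fake_llm))
```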
Ridge Regression demonstrated strong predictive accuracy, achieving a Mean Absolute Error of 0.12 and a Mean Squared Error of 0.023. Random Forest outperformed in classification, attaining a near-perfect F1-score and an accuracy of 98.68%. Explainable AI techniques such as SHAP, LIME, and Interpret enhanced model interpretability, highlighting critical factors such as study hours, scholarships, parental education, and prior academic performance. The study culminated in the development of a web-based application that provides students with personalized insights, allowing them to predict academic performance, identify areas for improvement, and make informed decisions to enhance their outcomes.",0 "Transparency in AI healthcare decision-making is crucial. By incorporating rationales to explain the reason for each predicted label, users could understand Large Language Models' (LLMs') reasoning to make better decisions. In this work, we introduce a new task - Sentiment Reasoning - for both speech and text modalities, and our proposed multimodal multitask framework and the world's largest multimodal sentiment analysis dataset. Sentiment Reasoning is an auxiliary task in sentiment analysis where the model predicts both the sentiment label and generates the rationale behind it based on the input transcript. Our study conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts shows that Sentiment Reasoning helps improve model transparency by providing a rationale for model predictions with quality semantically comparable to humans while also improving the model's classification performance (+2% increase in both accuracy and macro-F1) via rationale-augmented fine-tuning. Also, there is no significant difference in the semantic quality of generated rationales between human and ASR transcripts. All code, data (five languages - Vietnamese, English, Chinese, German, and French) and models are published online: https://github.com/leduckhai/Sentiment-Reasoning",2 "Speech foundation models (SFMs) are increasingly hailed as powerful computational models of human speech perception. However, since their representations are inherently black-box, it remains unclear what drives their alignment with brain responses. To remedy this, we built linear encoding models from six interpretable feature families: mel-spectrogram, Gabor filter bank features, speech presence, phonetic, syntactic, and semantic features, and contextualized embeddings from three state-of-the-art SFMs (Whisper, HuBERT, WavLM), quantifying electrocorticography (ECoG) response variance shared between feature classes. Variance-partitioning analyses revealed several key insights: First, the SFMs' alignment with the brain can be mostly explained by their ability to learn and encode simple interpretable speech features. Second, SFMs exhibit a systematic trade-off between encoding of brain-relevant low-level and high-level features across layers. Finally, our results show that SFMs learn brain-relevant semantics which cannot be explained by lower-level speech features, with this capacity increasing with model size and context length. Together, our findings suggest a principled approach to build more interpretable, accurate, and efficient encoding models of the brain by augmenting SFM embeddings with interpretable features.",1 "Evaluating automatically generated radiology reports remains a fundamental challenge due to the lack of clinically grounded, interpretable, and fine-grained metrics. 
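A minimal sketch of the CGPA-prediction setup above (Ridge regression reported with MAE/MSE, followed by feature attribution). The features here are synthetic stand-ins for the survey variables, and permutation importance is used as a lightweight substitute for the SHAP/LIME analysis in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.inspection import permutation_importance

# Synthetic stand-in for the survey features (study hours, scholarship, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(1050, 5))
cgpa = 3.0 + 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=1050)

X_tr, X_te, y_tr, y_te = train_test_split(X, cgpa, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE:", round(mean_absolute_error(y_te, pred), 3),
      "MSE:", round(mean_squared_error(y_te, pred), 3))

# The study interprets its models with SHAP/LIME; permutation importance is
# a lightweight stand-in for feature attribution here.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("feature importances:", np.round(imp.importances_mean, 3))
```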
Existing methods either produce coarse overall scores or rely on opaque black-box models, limiting their usefulness in real-world clinical workflows. We introduce RadReason, a novel evaluation framework for radiology reports that not only outputs fine-grained sub-scores across six clinically defined error types, but also produces human-readable justifications that explain the rationale behind each score. Our method builds on Group Relative Policy Optimization and incorporates two key innovations: (1) Sub-score Dynamic Weighting, which adaptively prioritizes clinically challenging error types based on live F1 statistics; and (2) Majority-Guided Advantage Scaling, which adjusts policy gradient updates based on prompt difficulty derived from sub-score agreement. Together, these components enable more stable optimization and better alignment with expert clinical judgment. Experiments on the ReXVal benchmark show that RadReason surpasses all prior offline metrics and achieves parity with GPT-4-based evaluations, while remaining explainable, cost-efficient, and suitable for clinical deployment. Code will be released upon publication.",0 "Trust is central to human social interactions, manifesting in actions that make one vulnerable to another. We argue that trust will thus depend on the decision-making processes that arise in neural systems. Building on advances in the cognitive neuroscience of decision making, we propose a mechanistic model of trust arising from multiple parallel systems that perform distinct, complementary information processing. Because each system learns via different mechanisms, trust can be created (or destroyed) in multiple ways. This systems-level taxonomy of information representations provides a principled basis for differentiating forms of trust, linking them to specific learning processes, and generating testable predictions about their expression in behavior. By situating trust within a broader theory of neural decision systems, our account unifies diverse findings across psychology, neuroscience, and the social sciences, and offers a foundation for explaining how humans develop, maintain, and repair trust in a complex social world.",0 "We investigate whether large language models (LLMs) can generate effective, user-facing explanations from a mathematically interpretable recommendation model. The model is based on constrained matrix factorization, where user types are explicitly represented and predicted item scores share the same scale as observed ratings, making the model's internal representations and predicted scores directly interpretable. This structure is translated into natural language explanations using carefully designed LLM prompts. Many works in explainable AI rely on automatic evaluation metrics, which often fail to capture users' actual needs and perceptions. In contrast, we adopt a user-centered approach: we conduct a study with 326 participants who assessed the quality of the explanations across five key dimensions-transparency, effectiveness, persuasion, trust, and satisfaction-as well as the recommendations themselves. To evaluate how different explanation strategies are perceived, we generate multiple explanation types from the same underlying model, varying the input information provided to the LLM. Our analysis reveals that all explanation types are generally well received, with moderate statistical differences between strategies. 
User comments further underscore how participants react to each type of explanation, offering complementary insights beyond the quantitative results.",0 "Understanding what deep learning (DL) models learn is essential for the safe deployment of artificial intelligence (AI) in clinical settings. While previous work has focused on pixel-based explainability methods, less attention has been paid to the textual concepts learned by these models, which may better reflect the reasoning used by clinicians. We introduce Mammo-CLIP Dissect, the first concept-based explainability framework for systematically dissecting DL vision models trained for mammography. Leveraging a mammography-specific vision-language model (Mammo-CLIP) as a ""dissector,"" our approach labels neurons at specified layers with human-interpretable textual concepts and quantifies their alignment to domain knowledge. Using Mammo-CLIP Dissect, we investigate three key questions: (1) how concept learning differs between DL vision models trained on general image datasets versus mammography-specific datasets; (2) how fine-tuning for downstream mammography tasks affects concept specialisation; and (3) which mammography-relevant concepts remain underrepresented. We show that models trained on mammography data capture more clinically relevant concepts and align more closely with radiologists' workflows than models not trained on mammography data. Fine-tuning for task-specific classification enhances the capture of certain concept categories (e.g., benign calcifications) but can reduce coverage of others (e.g., density-related features), indicating a trade-off between specialisation and generalisation. Our findings show that Mammo-CLIP Dissect provides insights into how convolutional neural networks (CNNs) capture mammography-specific knowledge. By comparing models across training data and fine-tuning regimes, we reveal how domain-specific training and task-specific adaptation shape concept learning. Code and concept set are available: https://github.com/Suaiba/Mammo-CLIP-Dissect.",0 "Chromosomal crossovers play a crucial role in meiotic cell division, as they ensure proper chromosome segregation and increase genetic variability. Experiments have consistently revealed two key observations across species: (i) the number of crossovers per chromosome is typically small, but at least one, and (ii) crossovers on the same chromosome are subject to interference, i.e., they are more separated than expected by chance. These observations can be explained by a recently proposed coarsening model, where the dynamics of droplets associated with chromosomes designate crossovers. We provide a comprehensive analysis of the coarsening model, which we also extend by including material exchanges between droplets, the synaptonemal complex, and the nucleoplasm. We derive scaling laws for the crossover count, which allows us to analyze data across species. Moreover, our model provides a coherent explanation of experimental data across mutants, including the wild-type and zyp1-mutant of A. thaliana. Consequently, the extended coarsening model provides a solid framework for investigating the underlying mechanisms of crossover placement.",1 "Understanding how subjective experience arises from information processing remains a central challenge in neuroscience, cognitive science, and AI research. 
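A toy sketch of the interpretable recommender described above: user-type weights and type-level ratings keep predictions on the observed rating scale, and the interpretable internals are handed to an LLM prompt for a user-facing explanation. The type names, numbers, and prompt wording are all illustrative assumptions.

```python
import numpy as np

# Hypothetical interpretable model state: each user is a mixture over named
# user types, and each type has ratings for items on the 1-5 scale.
user_types = ["action fan", "documentary fan"]
user_type_weights = np.array([0.7, 0.3])          # constrained to sum to 1
type_item_ratings = np.array([[4.6, 2.0],          # rows: user types
                              [2.5, 4.8]])         # cols: items
items = ["Mad Max", "Planet Earth"]

# Predicted score stays on the observed rating scale (weighted average).
scores = user_type_weights @ type_item_ratings
top = int(np.argmax(scores))

# Prompt handed to an LLM to turn the interpretable internals into a
# user-facing explanation (wording is illustrative, not the study's prompt).
prompt = (
    f"The system thinks the user is {user_type_weights[0]:.0%} '{user_types[0]}' "
    f"and {user_type_weights[1]:.0%} '{user_types[1]}'. It predicts a rating of "
    f"{scores[top]:.1f}/5 for '{items[top]}'. Explain this recommendation in two sentences."
)
print(prompt)
```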
The Modular Consciousness Theory (MCT) proposes a biologically grounded and computationally explicit framework in which consciousness is a discrete sequence of Integrated Informational States (IISs). Each IIS is a packet of integrated information tagged with a multidimensional density vector that quantifies informational richness. Its magnitude correlates with subjective intensity, shaping memory, behavior, and continuity of experience. Inputs from body and environment are adaptively filtered, processed by modules (abstraction, narration, evaluation, self-evaluation), and integrated into an IIS. The resulting packet, tagged with its density vector, is transmitted to behavioral readiness, memory, and decision-making modules, closing the loop. This explains why strongly tagged states exert greater influence on long-term memory and action. Unlike Global Workspace Theory, Integrated Information Theory, or Higher-Order Thought, MCT specifies a full computational pipeline producing discrete informational units with quantifiable internal structure. Subjectivity is reframed as a correlate of the density-tagging signal with functional consequences. MCT generates testable predictions, such as stress enhancing memory encoding, and provides a naturalistic blueprint for both biological and artificial architectures. Consciousness, in this view, is not an irreducible essence but an evolvable, quantifiable, and constructible feature of complex information processing.",0 "Large language models (LLMs) are trained on vast amounts of text from the Internet, but do they truly understand the viral content that rapidly spreads online -- commonly known as memes? In this paper, we introduce CHIME, a dataset for CHinese Internet Meme Explanation. The dataset comprises popular phrase-based memes from the Chinese Internet, annotated with detailed information on their meaning, origin, example sentences, types, etc. To evaluate whether LLMs understand these memes, we designed two tasks. In the first task, we assessed the models' ability to explain a given meme, identify its origin, and generate appropriate example sentences. The results show that while LLMs can explain the meanings of some memes, their performance declines significantly for culturally and linguistically nuanced meme types. Additionally, they consistently struggle to provide accurate origins for the memes. In the second task, we created a set of multiple-choice questions (MCQs) requiring LLMs to select the most appropriate meme to fill in a blank within a contextual sentence. While the evaluated models were able to provide correct answers, their performance remains noticeably below human levels. We have made CHIME public and hope it will facilitate future research on computational meme understanding.",0 "In high-stakes disaster scenarios, timely and informed decision-making is critical yet often challenged by uncertainty, dynamic environments, and limited resources. This paper presents a systematic review of Human-AI collaboration patterns that support decision-making across all disaster management phases. Drawing from 51 peer-reviewed studies, we identify four major categories: Human-AI Decision Support Systems, Task and Resource Coordination, Trust and Transparency, and Simulation and Training. Within these, we analyze sub-patterns such as cognitive-augmented intelligence, multi-agent coordination, explainable AI, and virtual training environments. 
Our review highlights how AI systems may enhance situational awareness, improve response efficiency, and support complex decision-making, while also surfacing critical limitations in scalability, interpretability, and system interoperability. We conclude by outlining key challenges and future research directions, emphasizing the need for adaptive, trustworthy, and context-aware Human-AI systems to improve disaster resilience and equitable recovery outcomes.",0 "The progression from novice to disciplinary expert is a longstanding area of inquiry in educational research. Studies investigating such progressions have often resorted to participants' self-assessments or other qualitative indicators as a starting point to define experience. But does a participant's estimated experience coincide with metrics derived from their conceptual understanding of a discipline? Using data extracted from over 150 concept maps, we first demonstrate that disciplinary experience is a reliable variable to explain differences in conceptual understanding across a highly diverse learners' population. Through a comparison of unsupervised and semi-supervised models, we then motivate clustering participants into three distinct experience levels, and support such a classification performed in other studies of educational research. By analysing cluster composition, we also identify discrepancies between the perceived and predicted experience levels of the study participants. Lastly, for studies processing participants' data through network analysis, we present insights into statistically significant metrics that can characterise each experience level, and advocate for the use of node-level metrics in such studies.",0 "Software defect prediction using code metrics has been extensively researched over the past five decades. However, prediction harnessing non-software metrics is under-researched. Considering that the root cause of software defects is often attributed to human error, human factors theory might offer key forecasting metrics for actionable insights. This paper explores automated software defect prediction at the method level based on the developers' coding habits. First, we propose a framework for deciding the metrics to conduct predictions. Next, we compare the performance of our metrics to that of the code and commit history metrics shown by research to achieve the highest performance to date. Finally, we analyze the prediction importance of each metric. As a result of our analyses of twenty-one critical infrastructure large-scale open-source software projects, we have presented: (1) a human error-based framework with metrics useful for defect prediction at method level; (2) models using our proposed metrics achieve better average prediction performance than the state-of-the-art code metrics and history measures; (3) the prediction importance of all metrics distributes differently with each of the novel metrics having better average importance than code and history metrics; (4) the novel metrics dramatically enhance the explainability, practicality, and actionability of software defect prediction models, significantly advancing the field. We present a systematic approach to forecasting defect-prone software methods via a human error framework. 
This work empowers practitioners to act on predictions, empirically demonstrating how developer coding habits contribute to defects in software systems.",0 "This paper introduces HARMONIC, a cognitive-robotic architecture designed for robots in human-robotic teams. HARMONIC supports semantic perception interpretation, human-like decision-making, and intentional language communication. It addresses the issues of safety and quality of results; aims to solve problems of data scarcity, explainability, and safety; and promotes transparency and trust. Two proof-of-concept HARMONIC-based robotic systems are demonstrated, each implemented in both a high-fidelity simulation environment and on physical robotic platforms.",2 "Autonomous driving systems face significant challenges in achieving human-like adaptability, robustness, and interpretability in complex, open-world environments. These challenges stem from fragmented architectures, limited generalization to novel scenarios, and insufficient semantic extraction from perception. To address these limitations, we propose a unified Perception-Language-Action (PLA) framework that integrates multi-sensor fusion (cameras, LiDAR, radar) with a large language model (LLM)-augmented Vision-Language-Action (VLA) architecture, specifically a GPT-4.1-powered reasoning core. This framework unifies low-level sensory processing with high-level contextual reasoning, tightly coupling perception with natural language-based semantic understanding and decision-making to enable context-aware, explainable, and safety-bounded autonomous driving. Evaluations on an urban intersection scenario with a construction zone demonstrate superior performance in trajectory tracking, speed prediction, and adaptive planning. The results highlight the potential of language-augmented cognitive frameworks for advancing the safety, interpretability, and scalability of autonomous driving systems.",1 "Explainability and its emerging counterpart contestability have become important normative and design principles for trustworthy AI as they enable users and subjects to understand and challenge AI decisions. However, realizing these principles is difficult, as they assume different meanings in technical, legal, and organizational dimensions of AI regulation. To resolve this conceptual polysemy, in this paper, we present the findings of an interview study with 14 experts to examine the intersection and implementation of explainability and contestability, and their understanding in different research communities. We outline differentiations between descriptive and normative explainability, judicial and non-judicial channels of contestation, and individual and collective contestation action. We further describe the main points of friction in the realization of both principles, including the alignment between top-down and bottom-up regulation, the assignment of responsibility, and the need for interdisciplinary collaboration. Lastly, we formulate three recommendations for AI policy to implement both principles through a Regulation by Design perspective. We believe our contributions can inform policy-making and regulation of these core principles and enable more effective and equitable design, development, and deployment of trustworthy public AI systems.",0 "Code review (CR) is essential to software development, helping ensure that new code is properly integrated. 
However, the CR process often involves significant effort, including code adjustments, responses to reviewers, and continued implementation. While past studies have examined CR delays and iteration counts, few have investigated the effort based on the volume of code changes required, especially in the context of GitLab Merge Requests (MRs), which remains underexplored. In this paper, we define and measure CR effort as the amount of code modified after submission, using a dataset of over 23,600 MRs from four GitLab projects. We find that up to 71% of MRs require adjustments after submission, and 28% of these involve changes to more than 200 lines of code. Surprisingly, this effort is not correlated with review time or the number of participants. To better understand and predict CR effort, we train an interpretable machine learning model using metrics across multiple dimensions: text features, code complexity, developer experience, review history, and branching. Our model achieves strong performance (AUC 0.84-0.88) and reveals that complexity, experience, and text features are key predictors. Historical project characteristics also influence current review effort. Our findings highlight the feasibility of using machine learning to explain and anticipate the effort needed to integrate code changes during review.",1 "Accessing suitable datasets is critical for research and development in recommender systems. However, finding datasets that match specific recommendation tasks or domains remains a challenge due to scattered sources and inconsistent metadata. To address this gap, we propose a community-driven and explainable dataset search engine tailored for recommender system research. Our system supports semantic search across multiple dataset attributes, such as dataset names, descriptions, and recommendation domain, and provides explanations of search relevance to enhance transparency. The system encourages community participation by allowing users to contribute standardized dataset metadata in a public repository. By improving dataset discoverability and search interpretability, the system facilitates more efficient research reproduction. The platform is publicly available at: https://ds4rs.com.",0 "People with Multiple Sclerosis (MS) complain of problems with hand dexterity and cognitive fatigue. However, in many cases, impairments are subtle and difficult to detect. Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging technique that measures brain hemodynamic responses during cognitive or motor tasks. We aimed to detect brain activity biomarkers that could explain subjective reports of cognitive fatigue while completing dexterous tasks and provide targets for future brain stimulation treatments. We recruited 15 people with MS who did not have a hand (Nine Hole Peg Test [NHPT]), mobility, or cognitive impairment, and 12 age- and sex-matched controls. Participants completed two types of hand dexterity tasks with their dominant hand, single task and dual task (NHPT while holding a ball between the fifth finger and hypothenar eminence of the same hand). We analyzed fNIRS data (oxygenated and deoxygenated hemoglobin levels) using a machine learning framework to classify MS patients from controls based on their brain activation patterns in bilateral prefrontal and sensorimotor cortices. The K-Nearest Neighbor classifier achieved an accuracy of 75.0% for single manual dexterity tasks and 66.7% for the more complex dual manual dexterity tasks. 
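A hedged sketch of the review-effort prediction setup above: an interpretable classifier trained on MR-level features and reported with AUC and feature importances. The features below are synthetic stand-ins, and the gradient-boosting learner is an assumption rather than the paper's exact model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic MR-level features standing in for the paper's dimensions
# (text length, code complexity, author experience, review history, branching).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
needs_rework = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=2000)) > 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, needs_rework, random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)  # learner choice is an assumption
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("AUC:", round(auc, 3))
print("feature importances:", np.round(clf.feature_importances_, 3))
```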
Using XAI, we found that the most important brain regions contributing to the machine learning model were the supramarginal/angular gyri and the precentral gyrus (sensory integration and motor regions) of the ipsilateral hemisphere, with suppressed activity and slower neurovascular response in the MS group. During both tasks, deoxygenated hemoglobin levels were better predictors than the conventional measure of oxygenated hemoglobin. This nonconventional method of fNIRS data analysis revealed novel brain activity biomarkers that can help develop personalized brain stimulation targets.",0 "Recent advances in deep learning have led to increasingly complex models with deeper layers and more parameters, reducing interpretability and making their decisions harder to understand. While many methods explain black-box reasoning, most lack effective interventions or only operate at the sample level without modifying the model itself. To address this, we propose the Concept Bottleneck Model for Enhancing Human-Neural Network Mutual Understanding (CBM-HNMU). CBM-HNMU leverages the Concept Bottleneck Model (CBM) as an interpretable framework to approximate black-box reasoning and communicate conceptual understanding. Detrimental concepts are automatically identified and refined (removed/replaced) based on global gradient contributions. The modified CBM then distills corrected knowledge back into the black-box model, enhancing both interpretability and accuracy. We evaluate CBM-HNMU on various CNN and transformer-based models across Flower-102, CIFAR-10, CIFAR-100, FGVC-Aircraft, and CUB-200, achieving a maximum accuracy improvement of 2.64% and a maximum increase in average accuracy of 1.03%. Source code is available at: https://github.com/XiGuaBo/CBM-HNMU.",2 "Explaining deep learning models is essential for clinical integration of medical image analysis systems. A good explanation highlights whether a model depends on spurious features that undermine generalization and harm a subset of patients or, conversely, may present novel biological insights. Although techniques like GradCAM can identify influential features, they are measurement tools that do not themselves form an explanation. We propose a human-machine-VLM interaction system tailored to explaining classifiers in computational pathology, including multi-instance learning for whole-slide images. Our proof of concept comprises (1) an AI-integrated slide viewer to run sliding-window experiments to test claims of an explanation, and (2) quantification of an explanation's predictiveness using general-purpose vision-language models. The results demonstrate that this allows us to qualitatively test claims of explanations and can quantifiably distinguish competing explanations. This offers a practical path from explainable AI to explained AI in digital pathology and beyond. Code and prompts are available at https://github.com/nki-ai/x2x.",0 "Social identity theory (SIT) and social categorization theory (SCT) are two facets of the social identity approach (SIA) to understanding social phenomena. SIT and SCT are models that describe and explain how people interact with one another socially, connecting the individual to the group through an understanding of underlying psychological mechanisms and intergroup behaviour. SIT, originally developed in the 1970s, and SCT, a later, more general offshoot, have been broadly applied to a range of social phenomena among people. 
The rise of increasingly social machines embedded in daily life has spurred efforts to understand whether and how artificial agents can and do participate in SIA activities. As agents like social robots and chatbots powered by sophisticated large language models (LLMs) advance, understanding the real and potential roles of these technologies as social entities is crucial. Here, I provide a primer on SIA and extrapolate, through case studies and imagined examples, how SIT and SCT can apply to artificial social agents. I emphasize that not all human models and sub-theories will apply. I further argue that, given the emerging competence of these machines and our tendency to be taken in by them, we experts may need to don the hat of the uncanny killjoy, for our own good.",0 "Motor vehicle crashes remain a leading cause of injury and death worldwide, necessitating data-driven approaches to understand and mitigate crash severity. This study introduces a curated dataset of more than 3 million people involved in accidents in Ohio over six years (2017-2022), aggregated to more than 2.3 million vehicle-level records for predictive analysis. The primary contribution is a transparent and reproducible methodology that combines Automated Machine Learning (AutoML) and explainable artificial intelligence (AI) to identify and interpret key risk factors associated with severe crashes. Using the JADBio AutoML platform, predictive models were constructed to distinguish between severe and non-severe crash outcomes. The models underwent rigorous feature selection across stratified training subsets, and their outputs were interpreted using SHapley Additive exPlanations (SHAP) to quantify the contribution of individual features. A final Ridge Logistic Regression model achieved an AUC-ROC of 85.6% on the training set and 84.9% on a hold-out test set, with 17 features consistently identified as the most influential predictors. Key features spanned demographic, environmental, vehicle, human, and operational categories, including location type, posted speed, minimum occupant age, and pre-crash action. Notably, certain traditionally emphasized factors, such as alcohol or drug impairment, were less influential in the final model compared to environmental and contextual variables. Emphasizing methodological rigor and interpretability over mere predictive performance, this study offers a scalable framework to support Vision Zero with aligned interventions and advanced data-informed traffic safety policy.",1 "Explainable AI has become a common term in the literature, scrutinized by computer scientists and statisticians and highlighted by psychological or philosophical researchers. One major effort many researchers tackle is constructing general guidelines for XAI schemes, which we derived from our study. While some areas of XAI are well studied, we focus on uncertainty explanations and consider global explanations, which are often left out. We chose an algorithm that covers various concepts simultaneously, such as uncertainty, robustness, and global XAI, and tested its ability to calibrate trust. We then checked whether an algorithm that aims to provide more of an intuitive visual understanding, despite being complicated to understand, can provide higher user satisfaction and human interpretability.",0 "Recent black-box counterfactual generation frameworks fail to take into account the semantic content of the proposed edits, while relying heavily on training to guide the generation process. 
We propose a novel, plug-and-play black-box counterfactual generation framework, which suggests step-by-step edits based on theoretical guarantees of optimal edits to produce human-level counterfactual explanations with zero training. Our framework utilizes a pre-trained image editing diffusion model, and operates without access to the internals of the classifier, leading to an explainable counterfactual generation process. Throughout our experimentation, we showcase the explanatory gap between human reasoning and neural model behavior by utilizing Convolutional Neural Network (CNN), Vision Transformer (ViT), and Large Vision Language Model (LVLM) classifiers, substantiated through a comprehensive human evaluation.",0 "Automated driving functions increasingly rely on machine learning for tasks like perception and trajectory planning, requiring large, relevant datasets. The performance of these algorithms depends on how closely the training data matches the task. To ensure reliable functioning, it is crucial to know what is included in the dataset to assess the trained model's operational risk. We aim to enhance the safe use of machine learning in automated driving by developing a method to recognize situations that an automated vehicle has not been sufficiently trained on. This method also improves explainability by describing the dataset at a human-understandable level. We propose modeling driving data as knowledge graphs, representing driving scenes with entities and their relationships. These graphs are queried for specific sub-scene configurations to check their occurrence in the dataset. We estimate a vehicle's competence in a driving scene by considering the coverage and complexity of sub-scene configurations in the training set. Higher complexity scenes require greater coverage for high competence. We apply this method to the NuPlan dataset, modeling it with knowledge graphs and analyzing the coverage of specific driving scenes. This approach helps monitor the competence of machine learning models trained on the dataset, which is essential for trustworthy AI to be deployed in automated driving.",0 "As the use of AI systems in society grows, addressing potential biases that emerge from data or are learned by models is essential to prevent systematic disadvantages against specific groups. Several notions of (un)fairness have been proposed in the literature, alongside corresponding algorithmic methods for detecting and mitigating unfairness, but, with very few exceptions, these tend to ignore transparency. Instead, interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. In this paper, we contribute a novel interpretable, explainable method for bias detection relying on debates about the presence of bias against individuals, based on the values of protected features for the individuals and others in their neighbourhoods. Our method builds upon techniques from formal and computational argumentation, whereby debates result from arguing about biases within and across neighbourhoods. We provide formal, quantitative, and qualitative evaluations of our method, highlighting its strengths in performance against baselines, as well as its interpretability and explainability.",0 "Interpretability is essential for deploying deep learning models in symbolic music analysis, yet most research emphasizes model performance over explanation. 
To address this, we introduce MUSE-Explainer, a new method that helps reveal how music Graph Neural Network models make decisions by providing clear, human-friendly explanations. Our approach generates counterfactual explanations by making small, meaningful changes to musical score graphs that alter a model's prediction while ensuring the results remain musically coherent. Unlike existing methods, MUSE-Explainer tailors its explanations to the structure of musical data and avoids unrealistic or confusing outputs. We evaluate our method on a music analysis task and show it offers intuitive insights that can be visualized with standard music tools such as Verovio.",0 "The origins of radio astronomy and the discovery of the first radio galaxies are described, which showed that the radio emission of active galaxies is very diverse in shape and can reach a size of many times their optical extent. In 1974 the first ""giant"" radio galaxy (GRG) was discovered, several times larger than any previously known one. Since 2012, when about 100 such GRGs larger than 1 Megaparsec (3.3 million light years) had been reported in the literature, the author has been performing his own search for GRGs and maintains a list of currently nearly 7000 GRGs, with more than half of these found by the author himself or by his students at the Departamento de Astronom\'ia of Universidad de Guanajuato. An analysis of the very largest GRGs does not reveal any single property of these that would explain why they could grow to such large sizes. Recent advances in radio telescopes have led to vast amounts of images rich in GRGs, but due to the complexity of identifying their host galaxies only a fraction of these images can be searched with visual inspection by humans. Currently available machine algorithms and citizen science projects are prone to erroneous identifications and also leave unnoticed a substantial fraction of GRGs, such that supervision of the results by experts is essential to produce reliable results.",2 "Deception detection is a critical task in real-world applications such as security screening, fraud prevention, and credibility assessment. While deep learning methods have shown promise in surpassing human-level performance, their effectiveness often depends on the availability of high-quality and diverse deception samples. Existing research predominantly focuses on single-domain scenarios, overlooking the significant performance degradation caused by domain shifts. To address this gap, we present the SVC 2025 Multimodal Deception Detection Challenge, a new benchmark designed to evaluate cross-domain generalization in audio-visual deception detection. Participants are required to develop models that not only perform well within individual domains but also generalize across multiple heterogeneous datasets. By leveraging multimodal data, including audio, video, and text, this challenge encourages the design of models capable of capturing subtle and implicit deceptive cues. Through this benchmark, we aim to foster the development of more adaptable, explainable, and practically deployable deception detection systems, advancing the broader field of multimodal learning. By the conclusion of the workshop competition, a total of 21 teams had submitted their final results. See https://sites.google.com/view/svc-mm25 for more information.",2 "Weld defect detection is crucial for ensuring the safety and reliability of piping systems in the oil and gas industry, especially in challenging marine and offshore environments. 
Traditional non-destructive testing (NDT) methods often fail to detect subtle or internal defects, leading to potential failures and costly downtime. Furthermore, existing neural network-based approaches for defect classification frequently rely on arbitrarily selected pretrained architectures and lack interpretability, raising safety concerns for deployment. To address these challenges, this paper introduces ``Adapt-WeldNet"", an adaptive framework for welding defect detection that systematically evaluates various pre-trained architectures, transfer learning strategies, and adaptive optimizers to identify the best-performing model and hyperparameters, optimizing defect detection and providing actionable insights. Additionally, a novel Defect Detection Interpretability Analysis (DDIA) framework is proposed to enhance system transparency. DDIA employs Explainable AI (XAI) techniques, such as Grad-CAM and LIME, alongside domain-specific evaluations validated by certified ASNT NDE Level II professionals. Incorporating a Human-in-the-Loop (HITL) approach and aligning with the principles of Trustworthy AI, DDIA ensures the reliability, fairness, and accountability of the defect detection system, fostering confidence in automated decisions through expert validation. By improving both performance and interpretability, this work enhances trust, safety, and reliability in welding defect detection systems, supporting critical operations in offshore and marine environments.",0 "This chapter focuses on the intersection of user experience (UX) and wellbeing in the context of content moderation. Human content moderators play a key role in protecting end users from harm by detecting, evaluating, and addressing content that may violate laws or product policies. They face numerous challenges, including exposure to sensitive content, monotonous tasks, and complex decisions, which are often exacerbated by inadequate tools. This chapter explains the importance of incorporating wellbeing considerations throughout the product development lifecycle, offering a framework and practical strategies for implementation across key UX disciplines: research, writing, and design. By examining these considerations, this chapter provides a roadmap for creating user experiences that support content moderators, benefiting both the user and the business.",2 "Many domains now employ AI-based decision-making aids, and although the potential for AI systems to assist with decision making is much discussed, human-AI collaboration often underperforms due to factors such as (mis)trust in the AI system and beliefs about AI being incapable of completing subjective tasks. One potential tool for influencing human decision making is performance pressure, which hasn't been much studied in interaction with human-AI decision making. In this work, we examine how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Using an inherently low-stakes task (spam review classification), we demonstrate effective and simple methods to apply pressure and influence human AI advice-taking behavior by manipulating financial incentives and imposing time limits. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice taking behavior. 
We conclude by discussing the implications of these interactions and strategies for using pressure effectively, and we encourage future research to incorporate pressure analysis.",1 "Insider threats, which can lead to severe losses, remain a major security concern. While machine learning-based insider threat detection (ITD) methods have shown promising results, their progress is hindered by the scarcity of high-quality data. Enterprise data is sensitive and rarely accessible, while publicly available datasets, when limited in scale due to cost, lack sufficient real-world coverage; and when purely synthetic, they fail to capture rich semantics and realistic user behavior. To address this, we propose Chimera, the first large language model (LLM)-based multi-agent framework that automatically simulates both benign and malicious insider activities and collects diverse logs across diverse enterprise environments. Chimera models each employee with agents that have role-specific behavior and integrates modules for group meetings, pairwise interactions, and autonomous scheduling, capturing realistic organizational dynamics. It incorporates 15 types of insider attacks (e.g., IP theft, system sabotage) and has been deployed to simulate activities in three sensitive domains: technology company, finance corporation, and medical institution, producing a new dataset, ChimeraLog. We assess ChimeraLog via human studies and quantitative analysis, confirming its diversity, realism, and presence of explainable threat patterns. Evaluations of existing ITD methods show an average F1-score of 0.83, which is significantly lower than 0.99 on the CERT dataset, demonstrating ChimeraLog's higher difficulty and utility for advancing ITD research.",1 "Recovering and distinguishing between the strict-preference, indifference and/or indecisiveness parts of a decision maker's preferences is a challenging task but also important for testing theory and conducting welfare analysis. This paper contributes towards this goal by reporting on data from a lab experiment on riskless choice that were analyzed with novel theory-guided computational methods. The experiment included both Forced- and Free-Choice treatments. Its main novelty consisted of allowing subjects to select multiple alternatives at each menu. Based on a new non-parametric goodness-of-fit criterion that we introduce, which generalizes a widely used pre-existing method to environments of multi-valued choices, each subject's decisions were tested against three structured general choice models that feature maximization of stable but potentially weak and/or incomplete preferences. Nearly 60% of all subjects are well-explained by one of these models, typically with a unique model-optimal preference relation per subject. Importantly, revealed preferences typically have a non-trivial indifference part that, on average, accounts for up to 19% of all possible comparisons. In addition, 22% of all subjects are best explained by models of incomplete-preference maximization and reveal preferences that typically exhibit the distinctions between indifference and indecisiveness that these models afford or predict. These distinctions are documented empirically for the first time.",2 "Online coordination of multi-robot systems in open and unknown environments faces significant challenges, particularly when semantic features detected during operation dynamically trigger new tasks. 
Recent large language model (LLM)-based approaches for scene reasoning and planning primarily focus on one-shot, end-to-end solutions in known environments, lacking both dynamic adaptation capabilities for online operation and explainability in the planning process. To address these issues, a novel framework (DEXTER-LLM) for dynamic task planning in unknown environments integrates four modules: (i) a mission comprehension module that resolves the partial ordering of tasks specified in natural language or by linear temporal logic (LTL) formulas; (ii) an online subtask generator based on LLMs that improves the accuracy and explainability of task decomposition via multi-stage reasoning; (iii) an optimal subtask assigner and scheduler that allocates subtasks to robots via search-based optimization; and (iv) a dynamic adaptation and human-in-the-loop verification module that implements multi-rate, event-based updates for both subtasks and their assignments, to cope with new features and tasks detected online. The framework effectively combines LLMs' open-world reasoning capabilities with the optimality of model-based assignment methods, simultaneously addressing the critical issues of online adaptability and explainability. Experimental evaluations demonstrate exceptional performance, with 100% success rates across all scenarios, 160 tasks and 480 subtasks completed on average (3 times the baselines), 62% fewer queries to LLMs during adaptation, and superior plan quality (2 times higher) for compound tasks. Project page at https://tcxm.github.io/DEXTER-LLM/",0 "Hilbert spaces in theories of gravity are notoriously subtle due to the Hamiltonian constraints, particularly regarding the inner product. To demystify this subject, we review and extend a collection of ideas in canonical gravity, and connect to the sum-over-histories approach by clarifying the Hilbert space interpretation of various gravitational path integrals. We use one-dimensional (or mini-superspace) models as the simplest context to exemplify the conceptual ideas. We emphasise that a physical Hilbert space can be defined either by requiring states to be annihilated by constraint operators (e.g., the Wheeler-DeWitt equation) or by equivalence relations between wavefunctions, and explain that these two approaches are related by an inner product. We advocate that the group averaging procedure constructs the correct physical inner product. The Klein-Gordon inner product is not positive-definite, which we explain as arising from a bad gauge choice; nonetheless, it agrees with group averaging when such a problem is absent. These concepts are all embedded in the BRST/BFV formalism, which provides a systematic way to construct these and other physically equivalent inner products (e.g., from maximal-volume gauge and Gaussian averaged gauges). Finally, we discuss the application of these ideas in the semi-classical approximation, including non-perturbative gravitational effects.",2 "Effort estimation is a crucial activity in agile software development, where teams collaboratively review, discuss, and estimate the effort required to complete user stories in a product backlog. Current practices in agile effort estimation heavily rely on subjective assessments, leading to inaccuracies and inconsistencies in the estimates. While recent machine learning-based methods show promising accuracy, they cannot explain or justify their estimates and lack the capability to interact with human team members. 
Our paper fills this significant gap by leveraging the powerful capabilities of Large Language Models (LLMs). We propose a novel LLM-based multi-agent framework for agile estimation that not only can produce estimates, but also can coordinate, communicate and discuss with human developers and other agents to reach a consensus. Evaluation results on a real-life dataset show that our approach outperforms state-of-the-art techniques across all evaluation metrics in the majority of the cases. Our human study with software development practitioners also demonstrates an overwhelmingly positive experience in collaborating with our agents in agile effort estimation.",1 "Explainability in AI and ML models is critical for fostering trust, ensuring accountability, and enabling informed decision making in high stakes domains. Yet this objective is often unmet in practice. This paper proposes a general purpose framework that bridges state of the art explainability techniques with Malle's five category model of behavior explanation: Knowledge Structures, Simulation/Projection, Covariation, Direct Recall, and Rationalization. The framework is designed to be applicable across AI assisted decision making systems, with the goal of enhancing transparency, interpretability, and user trust. We demonstrate its practical relevance through real world case studies, including credit risk assessment and regulatory analysis powered by large language models (LLMs). By aligning technical explanations with human cognitive mechanisms, the framework lays the groundwork for more comprehensible, responsible, and ethical AI systems.",2 "Estimating emotional states from physiological signals is a central topic in affective computing and psychophysiology. While many emotion estimation systems implicitly assume a stable relationship between physiological features and subjective affect, this assumption has rarely been tested over long timeframes. This study investigates whether such relationships remain consistent across several months within individuals. We developed a custom measurement system and constructed a longitudinal dataset by collecting physiological signals -- including blood volume pulse, electrodermal activity (EDA), skin temperature, and acceleration--along with self-reported emotional states from 24 participants over two three-month periods. Data were collected in naturalistic working environments, allowing analysis of the relationship between physiological features and subjective arousal in everyday contexts. We examined how physiological-arousal relationships evolve over time by using Explainable Boosting Machines (EBMs) to ensure model interpretability. A model trained on 1st-period data showed a 5\% decrease in accuracy when tested on 2nd-period data, indicating long-term variability in physiological-arousal associations. EBM-based comparisons further revealed that while heart rate remained a relatively stable predictor, minimum EDA exhibited substantial individual-level fluctuations between periods. While the number of participants is limited, these findings highlight the need to account for temporal variability in physiological-arousal relationships and suggest that emotion estimation models should be periodically updated -- e.g., every five months -- based on observed shift trends to maintain robust performance over time.",2 "The global spread of misinformation and concerns about content trustworthiness have driven the development of automated fact-checking systems. 
Since false information often exploits social media dynamics such as ""likes"" and user networks to amplify its reach, effective solutions must go beyond content analysis to incorporate these factors. Moreover, simply labelling content as false can be ineffective or even reinforce biases such as automation bias and confirmation bias. This paper proposes an explainable framework that combines content, social media, and graph-based features to enhance fact-checking. It integrates a misinformation classifier with explainability techniques to deliver complete and interpretable insights supporting classification decisions. Experiments demonstrate that multimodal information improves performance over single modalities, with evaluations conducted on datasets in English, Spanish, and Portuguese. Additionally, the framework's explanations were assessed for interpretability, trustworthiness, and robustness with a novel protocol, showing that it effectively generates human-understandable justifications for its predictions.",2 "The growing adoption of foundation models calls for a paradigm shift from Data Science to Model Science. Unlike data-centric approaches, Model Science places the trained model at the core of analysis, aiming to interact with the model and to verify, explain, and control its behavior across diverse operational contexts. This paper introduces a conceptual framework for a new discipline called Model Science, along with a proposal for its four key pillars: Verification, which requires strict, context-aware evaluation protocols; Explanation, which is understood as various approaches to exploring internal model operations; Control, which integrates alignment techniques to steer model behavior; and Interface, which develops interactive and visual explanation tools to improve human calibration and decision-making. The proposed framework aims to guide the development of credible, safe, and human-aligned AI systems.",2 "The promise of human-AI teaming lies in humans and AI working together to achieve performance levels neither could accomplish alone. Effective communication between AI and humans is crucial for teamwork, enabling users to efficiently benefit from AI assistance. This paper investigates how AI communication impacts human-AI team performance. We examine AI explanations that convey an awareness of its strengths and limitations. To achieve this, we train a decision tree on the model's mistakes, allowing it to recognize and explain where and why it might err. Through a user study on an income prediction task, we assess the impact of varying levels of information and explanations about AI predictions. Our results show that AI performance insights enhance task performance, and conveying AI awareness of its strengths and weaknesses improves trust calibration. These findings highlight the importance of considering how information delivery influences user trust and reliance in AI-assisted decision-making.",0 "Current wireless networks are designed to optimize spectral efficiency for human users, who typically require sustained connections for high-data-rate applications like file transfers and video streaming. However, these networks are increasingly inadequate for the emerging era of machine-type communications (MTC). With a vast number of devices exhibiting sporadic traffic patterns consisting of short packets, the grant-based multiple access procedures utilized by existing networks lead to significant delays and inefficiencies. 
To address this issue, the unsourced random access (URA) paradigm has been proposed. This paradigm assumes that the devices share a common encoder, thus simplifying the reception process by eliminating the identification procedure. The URA paradigm not only addresses the computational challenges but also treats random access (RA) as a coding problem, i.e., one that takes into account both medium access protocols and physical layer effects. In this monograph, we provide a comprehensive overview of the URA problem in noisy channels, with the main task being to explain the major ideas rather than to list all existing solutions.",0 "This study investigates the oscillation behavior of a sessile drop placed on a hydrophobic substrate subjected to vertical vibrations with varying frequencies and amplitudes. We examined the responses of both Newtonian and viscoelastic drops. For viscoelastic samples, image analysis techniques were employed to correlate the drop dynamics with the rheological properties of the material. Overall, we demonstrate that this drop-based method allows for oscillatory shear experiments at frequencies that are difficult to access using conventional rheometers. The results reveal that the essential features of the drop response can be explained by the ratio of two characteristic time scales: the internal polymer relaxation time ($t_{p}$) and the external forcing time scale ($1/f$). This ratio defines the Deborah number ($De$). When the two time scales are comparable ($De \approx 1$), viscous dissipation dominates, which is observed in Lissajous curves and the drop's profile. At very low Deborah numbers ($De \ll 1$), the drop behaves like a Newtonian fluid (having a peak around the natural frequency of the drop), while at high Deborah numbers ($De \gg 1$), it exhibits an elastic response. Furthermore, we show that increasing the applied deformation drives the system into the nonlinear viscoelastic regime. In this regime, unlike traditional rheology measurements, we observe the presence of $even$ and $odd$ harmonics in the drop response. This is attributed to the inherent geometric asymmetry of the drop setup, which breaks the symmetric assumptions typically present in standard rheological techniques.",0 "The alignment of language models (LMs) with human preferences is critical for building reliable AI systems. The problem is typically framed as optimizing an LM policy to maximize the expected reward that reflects human preferences. Recently, Direct Preference Optimization (DPO) was proposed as an LM alignment method that directly optimizes the policy from static preference data, and was further improved by incorporating on-policy sampling (i.e., preference candidates generated during the training loop) for better LM alignment. However, we show that on-policy data is not always optimal, with systematic effectiveness differences emerging between static and on-policy preference candidates. For example, on-policy data can result in 3$\times$ the effectiveness compared with static data for Llama-3, and 0.4$\times$ the effectiveness for Zephyr. To explain this phenomenon, we propose the alignment stage assumption, which divides the alignment process into two distinct stages: the preference injection stage, which benefits from diverse data, and the preference fine-tuning stage, which favors high-quality data. Through theoretical and empirical analysis, we characterize these stages and propose an effective algorithm to identify the boundaries between them. 
We perform experiments on 5 models (Llama, Zephyr, Phi-2, Qwen, Pythia) and 2 alignment methods (DPO, SLiC-HF) to show the generalizability of the alignment stage assumption and the boundary measurement.",0 "To collaborate effectively with humans, language models must be able to explain their decisions in natural language. We study a specific type of self-explanation: self-generated counterfactual explanations (SCEs), where a model explains its prediction by modifying the input such that it would have predicted a different outcome. We evaluate whether LLMs can produce SCEs that are valid, achieving the intended outcome, and minimal, modifying the input no more than necessary. When asked to generate counterfactuals, we find that LLMs typically produce SCEs that are valid, but far from minimal, offering little insight into their decision-making behaviour. Worryingly, when asked to generate minimal counterfactuals, LLMs typically make excessively small edits that fail to change predictions. The observed validity-minimality trade-off is consistent across several LLMs, datasets, and evaluation settings. Our findings suggest that SCEs are, at best, an ineffective explainability tool and, at worst, can provide misleading insights into model behaviour. Proposals to deploy LLMs in high-stakes settings must consider the impact of unreliable self-explanations on downstream decision-making. Our code is available at https://github.com/HarryMayne/SCEs.",0 "Physics provides fundamental laws that describe and predict the natural world. AI systems aspiring toward more general, real-world intelligence must therefore demonstrate strong physics problem-solving abilities: to formulate and apply physical laws for explaining and predicting physical processes. The International Physics Olympiad (IPhO)--the world's most prestigious physics competition--offers a rigorous benchmark for this purpose. We introduce Physics Supernova, an AI agent system with superior physics problem-solving abilities that match elite IPhO gold medalists. In IPhO 2025 theory problems, Physics Supernova attains 23.5/30 points, ranking 14th of 406 contestants and surpassing the median performance of human gold medalists. We extensively analyzed Physics Supernova's capabilities and flexibility across diverse physics tasks. These results show that principled tool integration within agent systems can deliver competitive improvements in solving challenging science problems. The code is available at https://github.com/CharlesQ9/Physics-Supernova.",2 "We introduce GANDiff FR, the first synthetic framework that precisely controls demographic and environmental factors to measure, explain, and reduce bias with reproducible rigor. GANDiff FR unifies StyleGAN3-based identity-preserving generation with diffusion-based attribute control, enabling fine-grained manipulation of pose around 30 degrees, illumination (four directions), and expression (five levels) under ceteris paribus conditions. We synthesize 10,000 demographically balanced faces across five cohorts validated for realism via automated detection (98.2%) and human review (89%) to isolate and quantify bias drivers. Benchmarking ArcFace, CosFace, and AdaFace under matched operating points shows AdaFace reduces inter-group TPR disparity by 60% (2.5% vs. 6.3%), with illumination accounting for 42% of residual bias. Cross-dataset evaluation on RFW, BUPT, and CASIA WebFace confirms strong synthetic-to-real transfer (r 0.85). 
Despite around 20% computational overhead relative to pure GANs, GANDiff FR yields three times more attribute-conditioned variants, establishing a reproducible, regulation-aligned (EU AI Act) standard for fairness auditing. Code and data are released to support transparent, scalable bias evaluation.",0 "Concept Activation Vectors (CAVs) are a tool from explainable AI, offering a promising approach for understanding how human-understandable concepts are encoded in a model's latent spaces. They are computed from hidden-layer activations of inputs belonging either to a concept class or to non-concept examples. Adopting a probabilistic perspective, the distribution of the (non-)concept inputs induces a distribution over the CAV, making it a random vector in the latent space. This enables us to derive mean and covariance for different types of CAVs, leading to a unified theoretical view. This probabilistic perspective also reveals a potential vulnerability: CAVs can strongly depend on the rather arbitrary non-concept distribution, a factor largely overlooked in prior work. We illustrate this with a simple yet effective adversarial attack, underscoring the need for a more systematic study.",0 "Autonomous navigation in maritime domains is accelerating alongside advances in artificial intelligence, sensing, and connectivity. Opaque decision-making and poorly calibrated human-automation interaction remain key barriers to safe adoption. This article synthesizes 100 studies on automation transparency for Maritime Autonomous Surface Ships (MASS) spanning situation awareness (SA), human factors, interface design, and regulation. We (i) map the Guidance-Navigation-Control stack to shore-based operational modes -- remote supervision (RSM) and remote control (RCM) -- and identify where human unsafe control actions (Human-UCAs) concentrate in handover and emergency loops; (ii) summarize evidence that transparency features (decision rationales, alternatives, confidence/uncertainty, and rule-compliance indicators) improve understanding and support trust calibration, though reliability and predictability often dominate trust; (iii) distill design strategies for transparency at three layers: sensor/SA acquisition and fusion, HMI/eHMI presentation (textual/graphical overlays, color coding, conversational and immersive UIs), and engineer-facing processes (resilient interaction design, validation, and standardization). We integrate methods for Human-UCA identification (STPA-Cog + IDAC), quantitative trust/SA assessment, and operator workload monitoring, and outline regulatory and rule-based implications including COLREGs formalization and route exchange. We conclude with an adaptive transparency framework that couples operator state estimation with explainable decision support to reduce cognitive overload and improve takeover timeliness. The review highlights actionable figure-of-merit displays (e.g., CPA/TCPA risk bars, robustness heatmaps), transparent model outputs (rule traceability, confidence), and training pipelines (HIL/MIL, simulation) as near-term levers for safer MASS operations.",0 "This innovative practice article reports on the piloting of vibe coding (using natural language to create software applications with AI) for English as a Foreign Language (EFL) education. We developed a human-AI meta-languaging framework with three dimensions: talking to AI (prompt engineering), talking through AI (negotiating authorship), and talking about AI (mental models of AI). 
Using backward design principles, we created a four-hour workshop where two students designed applications addressing authentic EFL writing challenges. We adopted a case study methodology, collecting data from worksheets and video recordings, think-aloud protocols, screen recordings, and AI-generated images. Contrasting cases showed one student successfully vibe coding a functional application cohering to her intended design, while another encountered technical difficulties with major gaps between intended design and actual functionality. Analysis reveals differences in students' prompt engineering approaches, suggesting different AI mental models and tensions in attributing authorship. We argue that AI functions as a beneficial languaging machine, and that differences in how students talk to, through, and about AI explain vibe coding outcome variations. Findings indicate that effective vibe coding instruction requires explicit meta-languaging scaffolding, teaching structured prompt engineering, facilitating critical authorship discussions, and developing vocabulary for articulating AI mental models.",0 "Traditional surgical skill acquisition relies heavily on expert feedback, yet direct access is limited by faculty availability and variability in subjective assessments. While trainees can practice independently, the lack of personalized, objective, and quantitative feedback reduces the effectiveness of self-directed learning. Recent advances in computer vision and machine learning have enabled automated surgical skill assessment, demonstrating the feasibility of automatic competency evaluation. However, it is unclear whether such Artificial Intelligence (AI)-driven feedback can contribute to skill acquisition. Here, we examine the effectiveness of explainable AI (XAI)-generated feedback in surgical training through a human-AI study. We create a simulation-based training framework that utilizes XAI to analyze videos and extract surgical skill proxies related to primitive actions. Our intervention provides automated, user-specific feedback by comparing trainee performance to expert benchmarks and highlighting deviations from optimal execution through understandable proxies for actionable guidance. In a prospective user study with medical students, we compare the impact of XAI-guided feedback against traditional video-based coaching on task outcomes, cognitive load, and trainees' perceptions of AI-assisted learning. Results showed improved cognitive load and confidence post-intervention. While no differences emerged between the two feedback types in reducing performance gaps or practice adjustments, trends in the XAI group revealed desirable effects where participants more closely mimicked expert practice. This work encourages the study of explainable AI in surgical education and the development of data-driven, adaptive feedback mechanisms that could transform learning experiences and competency assessment.",2 "Deep learning has achieved remarkable success in medical image analysis, however its adoption in clinical practice is limited by a lack of interpretability. These models often make correct predictions without explaining their reasoning. They may also rely on image regions unrelated to the disease or visual cues, such as annotations, that are not present in real-world conditions. This can reduce trust and increase the risk of misleading diagnoses. 
We introduce the Guided Focus via Segment-Wise Relevance Network (GFSR-Net), an approach designed to improve interpretability and reliability in medical imaging. GFSR-Net uses a small number of human annotations to approximate where a person would focus within an image intuitively, without requiring precise boundaries or exhaustive markings, making the process fast and practical. During training, the model learns to align its focus with these areas, progressively emphasizing features that carry diagnostic meaning. This guidance works across different types of natural and medical images, including chest X-rays, retinal scans, and dermatological images. Our experiments demonstrate that GFSR achieves comparable or superior accuracy while producing saliency maps that better reflect human expectations. This reduces the reliance on irrelevant patterns and increases confidence in automated diagnostic tools.",0 "The Vehicle Routing Problem (VRP) is a complex optimization problem with numerous real-world applications, mostly solved using metaheuristic algorithms due to its $\mathcal{NP}$-Hard nature. Traditionally, these metaheuristics rely on human-crafted designs developed through empirical studies. However, recent research shows that machine learning methods can be used to learn the structural characteristics of solutions in combinatorial optimization, thereby aiding in designing more efficient algorithms, particularly for solving the VRP. Building on this advancement, this study extends the previous research by conducting a sensitivity analysis using multiple classifier models that are capable of predicting the quality of VRP solutions. Hence, by leveraging explainable AI, this research is able to extend the understanding of how these models make decisions. Finally, our findings indicate that while feature importance varies, certain features consistently emerge as strong predictors. Furthermore, we propose a unified framework capable of ranking feature impact across different scenarios to illustrate this finding. These insights highlight the potential of feature importance analysis as a foundation for developing a guidance mechanism for metaheuristic algorithms for solving the VRP.",2 "Major Depressive Disorder is one of the leading causes of disability worldwide, yet its diagnosis still depends largely on subjective clinical assessments. Integrating Artificial Intelligence (AI) holds promise for developing objective, scalable, and timely diagnostic tools. In this paper, we present a comprehensive survey of state-of-the-art AI methods for depression detection and diagnosis, based on a systematic review of 55 key studies. We introduce a novel hierarchical taxonomy that structures the field by primary clinical task (diagnosis vs. prediction), data modality (text, speech, neuroimaging, multimodal), and computational model class (e.g., graph neural networks, large language models, hybrid approaches). Our in-depth analysis reveals three major trends: the predominance of graph neural networks for modeling brain connectivity, the rise of large language models for linguistic and conversational data, and an emerging focus on multimodal fusion, explainability, and algorithmic fairness. Alongside methodological insights, we provide an overview of prominent public datasets and standard evaluation metrics as a practical guide for researchers. 
By synthesizing current advances and highlighting open challenges, this survey offers a comprehensive roadmap for future innovation in computational psychiatry.",2 "Despite extensive investment in artificial intelligence, 95% of enterprises report no measurable profit impact from AI deployments (MIT, 2025). In this theoretical paper, we argue that this gap reflects paradigmatic lock-in that channels AI into incremental optimization rather than structural transformation. Using a cross-case analysis, we propose a 2x2 framework that reconceptualizes AI strategy along two independent dimensions: the degree of transformation achieved (incremental to transformational) and the treatment of human contribution (reduced to amplified). The framework surfaces four patterns now dominant in practice: individual augmentation, process automation, workforce substitution, and a less deployed frontier of collaborative intelligence. Evidence shows that the first three patterns reinforce legacy work models and yield localized gains without durable value capture. Realizing collaborative intelligence requires three mechanisms: complementarity (pairing distinct human and machine strengths), co-evolution (mutual adaptation through interaction), and boundary-setting (human determination of ethical and strategic parameters). Complementarity and boundary-setting are observable in regulated and high-stakes domains; co-evolution is largely absent, which helps explain limited system-level impact. Our findings from a case study analysis illustrate that advancing toward collaborative intelligence requires material restructuring of roles, governance, and data architecture rather than additional tools. The framework reframes AI transformation as an organizational design challenge: moving from optimizing the division of labor between humans and machines to architecting their convergence, with implications for operating models, workforce development, and the future of work.",1 "Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Explainability in CRSs is crucial as it enables users to understand the reasoning behind recommendations, increasing system transparency and trustworthiness. However, current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. Large language models (LLMs) offer powerful reasoning capabilities that can bridge this gap by generating human-understandable preference summaries. However, effectively reasoning over user preferences in CRSs remains challenging as LLMs pre-trained on large-scale corpora may not be well-suited for analyzing user preferences. While KGs provide rich domain knowledge, integrating them with LLMs encounters a significant modality gap between structured KG information and unstructured conversations. In this paper, we propose COMPASS, a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs. COMPASS employs a two-stage training approach: first, it bridges the gap between the structured KG and natural language through novel graph entity captioning pre-training. Next, COMPASS optimizes user preference reasoning via knowledge-aware instruction fine-tuning, where the LLM learns to reason and summarize user preferences from dialogue histories and KG-augmented context. 
This enables COMPASS to perform knowledge-aware reasoning and generate interpretable user preferences that can seamlessly integrate with existing CRS models for improving recommendation performance and explainability. Our experiments on benchmark datasets demonstrate the effectiveness of COMPASS in improving various CRS models.",0 "Building a conversational embodied agent to execute real-life tasks has been a long-standing yet quite challenging research goal, as it requires effective human-agent communication, multi-modal understanding, long-range sequential decision making, etc. Traditional symbolic methods have scaling and generalization issues, while end-to-end deep learning models suffer from data scarcity and high task complexity, and are often hard to explain. To benefit from both worlds, we propose JARVIS, a neuro-symbolic commonsense reasoning framework for modular, generalizable, and interpretable conversational embodied agents. First, it acquires symbolic representations by prompting large language models (LLMs) for language understanding and sub-goal planning, and by constructing semantic maps from visual observations. Then the symbolic module reasons for sub-goal planning and action generation based on task- and action-level common sense. Extensive experiments on the TEACh dataset validate the efficacy and efficiency of our JARVIS framework, which achieves state-of-the-art (SOTA) results on all three dialog-based embodied tasks, including Execution from Dialog History (EDH), Trajectory from Dialog (TfD), and Two-Agent Task Completion (TATC) (e.g., our method boosts the unseen Success Rate on EDH from 6.1\% to 15.8\%). Moreover, we systematically analyze the essential factors that affect the task performance and also demonstrate the superiority of our method in few-shot settings. Our JARVIS model ranks first in the Alexa Prize SimBot Public Benchmark Challenge.",2 "An important line of research attempts to explain CNN image classifier predictions and intermediate layer representations in terms of human-understandable concepts. Previous work supports that deep representations are linearly separable with respect to their concept label, implying that the feature space has directions where intermediate representations may be projected onto, to become more understandable. These directions are called interpretable, and when considered as a set, they may form an interpretable feature space basis. Compared to previous top-down probing approaches which use concept annotations to identify the interpretable directions one at a time, in this work, we take a bottom-up approach, identifying the directions from the structure of the feature space, collectively, without relying on supervision from concept labels. Instead, we learn the directions by optimizing for a sparsity property that holds for any interpretable basis. We experiment with existing popular CNNs and demonstrate the effectiveness of our method in extracting an interpretable basis across network architectures and training datasets. We make extensions to existing basis interpretability metrics and show that intermediate layer representations become more interpretable when transformed with the extracted bases. 
Finally, we compare the bases extracted with our method with the bases derived with supervision and find that, in one aspect, unsupervised basis extraction has a strength that constitutes a limitation of learning the basis with supervision, and we provide potential directions for future research.",0 "The ability to explain complex information from chart images is vital for effective data-driven decision-making. In this work, we address the challenge of generating detailed explanations alongside answering questions about charts. We present ChartQA-X, a comprehensive dataset comprising 30,299 chart samples across four chart types, each paired with contextually relevant questions, answers, and explanations. Explanations are generated and selected based on metrics such as faithfulness, informativeness, coherence, and perplexity. Our human evaluation with 245 participants shows that model-generated explanations in ChartQA-X surpass human-written explanations in accuracy and logic and are comparable in terms of clarity and overall quality. Moreover, models fine-tuned on ChartQA-X show substantial improvements across various metrics, including absolute gains of up to 24.57 points in explanation quality, 18.96 percentage points in question-answering accuracy, and 14.75 percentage points on unseen benchmarks for the same task. By integrating explanatory narratives with answers, our approach enables agents to convey complex visual information more effectively, improving comprehension and fostering greater trust in the generated responses.",1 "Current AI approaches to refugee integration optimize narrow objectives such as employment and fail to capture the cultural, emotional, and ethical dimensions critical for long-term success. We introduce EMPATHIA (Enriched Multimodal Pathways for Agentic Thinking in Humanitarian Immigrant Assistance), a multi-agent framework addressing the central Creative AI question: how do we preserve human dignity when machines participate in life-altering decisions? Grounded in Kegan's Constructive Developmental Theory, EMPATHIA decomposes integration into three modules: SEED (Socio-cultural Entry and Embedding Decision) for initial placement, RISE (Rapid Integration and Self-sufficiency Engine) for early independence, and THRIVE (Transcultural Harmony and Resilience through Integrated Values and Engagement) for sustained outcomes. SEED employs a selector-validator architecture with three specialized agents - emotional, cultural, and ethical - that deliberate transparently to produce interpretable recommendations. Experiments on the UN Kakuma dataset (15,026 individuals, 7,960 eligible adults 15+ per ILO/UNHCR standards) and implementation on 6,359 working-age refugees (15+) with 150+ socioeconomic variables achieved 87.4% validation convergence and explainable assessments across five host countries. EMPATHIA's weighted integration of cultural, emotional, and ethical factors balances competing value systems while supporting practitioner-AI collaboration. By augmenting rather than replacing human expertise, EMPATHIA provides a generalizable framework for AI-driven allocation tasks where multiple values must be reconciled.",0 "Accurate load forecasting is essential to the operation of modern electric power systems. Given the sensitivity of electricity demand to weather variability and temporal dynamics, capturing non-linear patterns is essential for long-term planning. 
This paper presents a comparative analysis of machine learning models, Linear Regression, XGBoost, LightGBM, and Long Short-Term Memory (LSTM), for forecasting system-wide electricity load up to one year in advance. Midterm forecasting has been shown to be crucial for maintenance scheduling, resource allocation, financial forecasting, and market participation. The paper places a focus on the use of a method called ""Shapley Additive Explanations"" (SHAP) to improve model explainability. SHAP enables the quantification of feature contributions, guiding informed feature engineering and improving both model transparency and forecasting accuracy.",1 "Explainability remains a critical challenge in artificial intelligence (AI) systems, particularly in high stakes domains such as healthcare, finance, and decision support, where users must understand and trust automated reasoning. Traditional explainability methods such as feature importance and post-hoc justifications often fail to capture the cognitive processes that underlie human decision making, leading to either too technical or insufficiently meaningful explanations. We propose a novel appraisal based framework inspired by the Component Process Model (CPM) for explainability to address this gap. While CPM has traditionally been applied to emotion research, we use its appraisal component as a cognitive model for generating human aligned explanations. By structuring explanations around key appraisal dimensions such as relevance, implications, coping potential, and normative significance, our framework provides context sensitive, cognitively meaningful justifications for AI decisions. This work introduces a new paradigm for generating intuitive, human-centred explanations in AI driven systems by bridging cognitive science and explainable AI.",0 "This study analyzes the 2018 Chinese Household Income Project survey data to evaluate the income gaps between an ""outsider"" ethnic minority group, the Mongols, an ""insider"" ethnic minority group, the Manchus, and the majority Han group in urban and rural areas of Liaoning province and Inner Mongolia in China. Three statistical methods, a simple first-order OLS linear regression, linear regressions with interaction terms, and the Blinder-Oaxaca Decomposition, are used to investigate the income disparity amongst the three groups. The results indicate that Mongols suffer a significant ethnic wage penalty attributable to possible discrimination in the rural areas of these two provinces, while the urban income gaps between the three groups can mostly be explained by participation in public sector occupations or affiliation with the Chinese Communist Party. In rural settings, Mongols also have higher returns to public sector jobs and CCP membership compared to the other two ethnic groups. The findings suggest that Chinese affirmative actions regarding ethnic policy are effective in accelerating the integration of ethnic minorities with Han in the outcomes of the labor market. This conclusion is consistent with previous studies.",2 "Many important scientific problems involve multivariate optimization coupled with slow and laborious experimental measurements. These complex, high-dimensional searches can be defined by non-convex optimization landscapes that resemble needle-in-a-haystack surfaces, leading to entrapment in local minima. Contextualizing optimizers with human domain knowledge is a powerful approach to guide searches to localized fruitful regions. 
However, this approach is susceptible to human confirmation bias and it is also challenging for domain experts to keep track of the rapidly expanding scientific literature. Here, we propose the use of Large Language Models (LLMs) for contextualizing Bayesian optimization (BO) via a hybrid optimization framework that intelligently and economically blends stochastic inference with domain knowledge-based insights from the LLM, which is used to suggest new, better-performing areas of the search space for exploration. Our method fosters user engagement by offering real-time commentary on the optimization progress, explaining the reasoning behind the search strategies. We validate the effectiveness of our approach on synthetic benchmarks with up to 15 independent variables and demonstrate the ability of LLMs to reason in four real-world experimental tasks where context-aware suggestions boost optimization performance substantially.",0 "As deep learning (DL) technologies advance, their application in automated visual inspection for Class III medical devices offers significant potential to enhance quality assurance and reduce human error. However, the adoption of such AI-based systems introduces new regulatory complexities-particularly under the EU Artificial Intelligence (AI) Act, which imposes high-risk system obligations that differ in scope and depth from established regulatory frameworks such as the Medical Device Regulation (MDR) and the U.S. FDA Quality System Regulation (QSR). This paper presents a high-level technical assessment of the foreseeable challenges that manufacturers are likely to encounter when qualifying DL-based automated inspections -- specifically static models -- within the existing medical device compliance landscape. It examines divergences in risk management principles, dataset governance, model validation, explainability requirements, and post-deployment monitoring obligations. The discussion also explores potential implementation strategies and highlights areas of uncertainty, including data retention burdens, global compliance implications, and the practical difficulties of achieving statistical significance in validation with limited defect data. Disclaimer: This paper presents a technical perspective and does not constitute legal or regulatory advice.",2 "Motion sensor time-series are central to human activity recognition (HAR), with applications in health, sports, and smart devices. However, existing methods are trained for fixed activity sets and require costly retraining when new behaviours or sensor setups appear. Recent attempts to use large language models (LLMs) for HAR, typically by converting signals into text or images, suffer from limited accuracy and lack verifiable interpretability. We propose ZARA, the first agent-based framework for zero-shot, explainable HAR directly from raw motion time-series. ZARA integrates an automatically derived pair-wise feature knowledge base that captures discriminative statistics for every activity pair, a multi-sensor retrieval module that surfaces relevant evidence, and a hierarchical agent pipeline that guides the LLM to iteratively select features, draw on this evidence, and produce both activity predictions and natural-language explanations. ZARA enables flexible and interpretable HAR without any fine-tuning or task-specific classifiers. Extensive experiments on 8 HAR benchmarks show that ZARA achieves SOTA zero-shot performance, delivering clear reasoning while exceeding the strongest baselines by 2.53x in macro F1. 
Ablation studies further confirm the necessity of each module, marking ZARA as a promising step toward trustworthy, plug-and-play motion time-series analysis. Our code is available at https://github.com/zechenli03/ZARA.",0 "Deep learning has become the de facto standard and dominant paradigm in image analysis tasks, achieving state-of-the-art performance. However, this approach often results in ""black-box"" models, whose decision-making processes are difficult to interpret, raising concerns about reliability in critical applications. To address this challenge and provide humans with a way to understand how AI models process information and make decisions, the field of xAI has emerged. This paper surveys four representative approaches in xAI for visual perception tasks: (i) Saliency Maps, (ii) Concept Bottleneck Models (CBM), (iii) Prototype-based methods, and (iv) Hybrid approaches. We analyze their underlying mechanisms, strengths and limitations, as well as evaluation metrics, thereby providing a comprehensive overview to guide future research and applications.",2 "The growing integration of Artificial Intelligence (AI) into education has intensified the need for transparency and interpretability. While hackathons have long served as agile environments for rapid AI prototyping, few have directly addressed eXplainable AI (XAI) in real-world educational contexts. This paper presents a comprehensive analysis of the XAI Challenge 2025, a hackathon-style competition jointly organized by Ho Chi Minh City University of Technology (HCMUT) and the International Workshop on Trustworthiness and Reliability in Neurosymbolic AI (TRNS-AI), held as part of the International Joint Conference on Neural Networks (IJCNN 2025). The challenge tasked participants with building Question-Answering (QA) systems capable of answering student queries about university policies while generating clear, logic-based natural language explanations. To promote transparency and trustworthiness, solutions were required to use lightweight Large Language Models (LLMs) or hybrid LLM-symbolic systems. A high-quality dataset was provided, constructed via logic-based templates with Z3 validation and refined through expert student review to ensure alignment with real-world academic scenarios. We describe the challenge's motivation, structure, dataset construction, and evaluation protocol. Situating the competition within the broader evolution of AI hackathons, we argue that it represents a novel effort to bridge LLMs and symbolic reasoning in service of explainability. Our findings offer actionable insights for future XAI-centered educational systems and competitive research initiatives.",0 "Explaining why a species lives at a particular location is important for understanding ecological systems and conserving biodiversity. However, existing ecological workflows are fragmented and often inaccessible to non-specialists. We propose an end-to-end visual-to-causal framework that transforms a species image into interpretable causal insights about its habitat preference. The system integrates species recognition, global occurrence retrieval, pseudo-absence sampling, and climate data extraction. We then discover causal structures among environmental features and estimate their influence on species occurrence using modern causal inference methods. Finally, we generate statistically grounded, human-readable causal explanations from structured templates and large language models. 
We demonstrate the framework on a bee and a flower species and report early results as part of an ongoing project, showing the potential of a multimodal AI assistant, backed by recommended ecological modeling practice, for describing species habitats in human-understandable language. Our code is available at: https://github.com/Yutong-Zhou-cv/BioX.",2 "We are surrounded by spatio-temporal patterns resulting from the interaction of the numerous basic units constituting natural or human-made systems. In the presence of diffusive-like coupling, Turing theory has been largely applied to explain the formation of such self-organized motifs both on continuous domains and on networked systems, where reactions occur in the nodes and the available links are used for species to diffuse. In many relevant applications, those links are not static, as very often assumed, but evolve in time and, more importantly, adapt their weights to the states of the nodes. In this work, we take one step forward and provide a general theory that proves the validity of Turing's idea in the case of adaptive symmetric networks with positive weights. The conditions for the emergence of Turing instability rely on the spectral properties of the Laplacian matrix and the model parameters, thus strengthening the interplay between dynamics and network topology. A rich variety of patterns is presented by using two prototype models of nonlinear dynamical systems, the Brusselator and the FitzHugh-Nagumo model. Because many empirical networks adapt to changes in the system states, our results pave the way for a thorough understanding of self-organization in real-world systems.",1 "We introduce PASTA (Perceptual Assessment System for explanaTion of Artificial Intelligence), a novel human-centric framework for evaluating eXplainable AI (XAI) techniques in computer vision. Our first contribution is the creation of the PASTA-dataset, the first large-scale benchmark that spans a diverse set of models and both saliency-based and concept-based explanation methods. This dataset enables robust, comparative analysis of XAI techniques based on human judgment. Our second contribution is an automated, data-driven benchmark that predicts human preferences using the PASTA-dataset. This scoring method, called PASTA-score, offers scalable, reliable, and consistent evaluation aligned with human perception. Additionally, our benchmark allows for comparisons between explanations across different modalities, an aspect previously unaddressed. We then propose to apply our scoring method to probe the interpretability of existing models and to build more human-interpretable XAI methods.",2 "Superconducting radio-frequency (SRF) cavities are the leading technology for highly efficient particle acceleration, and their performance can be significantly enhanced through the controlled introduction of interstitial impurities into bulk niobium. Nitrogen doping has demonstrated a substantial reduction in surface resistance losses, which improves the quality factor of the cavities. More recently, oxygen doping has emerged as a promising alternative, demonstrating comparable reductions in surface resistance. In this study, we combine cavity measurements on \SI{1.3}{GHz} niobium SRF cavities subjected to a range of nitrogen- and oxygen-based treatments with material characterizations performed on cavity cutouts processed under identical conditions. This approach allows us to quantitatively assess the contribution of each impurity to the reduction of surface resistance.
We find that nitrogen is ten times more effective than oxygen in reducing surface resistance at \SI{16}{MV/m}. We propose a model to explain this variation, suggesting that nitrogen more effectively traps hydrogen, thus suppressing the formation of niobium hydrides within the RF penetration layer and enabling an improved superconducting gap.",0 "Modern machine learning produces models that are impossible for users or developers to fully understand -- raising concerns about trust, oversight, safety, and human dignity when they are integrated into software products. Transparency and explainability methods aim to provide some help in understanding models, but it remains challenging for developers to design explanations that are understandable to target users and effective for their purpose. Emerging guidelines and regulations set goals but may not provide effective actionable guidance to developers. In a large-scale experiment with 124 participants, we explored how developers approach providing end-user explanations, including what challenges they face, and to what extent specific policies can guide their actions. We investigated whether and how specific forms of policy guidance help developers design explanations and provide evidence for policy compliance for an ML-powered screening tool for diabetic retinopathy. Participants across the board struggled to produce quality explanations and comply with the provided policies. Contrary to our expectations, we found that the nature and specificity of policy guidance had little effect. We posit that participant noncompliance is in part due to a failure to imagine and anticipate the needs of non-technical stakeholders. Drawing on cognitive process theory and the sociological imagination to contextualize participants' failure, we recommend educational interventions.",0 "DNN-based language models excel across various NLP tasks but remain highly vulnerable to textual adversarial attacks. While adversarial text generation is crucial for NLP security, explainability, evaluation, and data augmentation, related work remains overwhelmingly English-centric, leaving the problem of constructing high-quality and sustainable adversarial robustness benchmarks for lower-resourced languages both difficult and understudied. First, method customization for lower-resourced languages is complicated due to linguistic differences and limited resources. Second, automated attacks are prone to generating invalid or ambiguous adversarial texts. Last but not least, language models continuously evolve and may be immune to parts of previously generated adversarial texts. To address these challenges, we introduce HITL-GAT, an interactive system based on a general approach to human-in-the-loop generation of adversarial texts. Additionally, we demonstrate the utility of HITL-GAT through a case study on Tibetan script, employing three customized adversarial text generation methods and establishing its first adversarial robustness benchmark, providing a valuable reference for other lower-resourced languages.",0 "Variable importance measures (VIMs) aim to quantify the contribution of each input covariate to the predictability of a given output. With the growing interest in explainable AI, numerous VIMs have been proposed, many of which are heuristic in nature. This is often justified by the inherent subjectivity of the notion of importance. This raises important questions regarding usage: What makes a good VIM? How can we compare different VIMs? 
In this paper, we address these questions by: (1) proposing an axiomatic framework that bridges the gap between variable importance and variable selection. This framework formalizes the intuitive principle that features providing no additional information should not be assigned importance. It helps avoid false positives due to spurious correlations, which can arise with popular methods such as Shapley values; and (2) introducing a general pipeline for constructing VIMs, which clarifies the objective of various VIMs and thus facilitates meaningful comparisons. This approach is natural in statistics, but the literature has diverged from it. Finally, we provide an extensive set of examples to guide practitioners in selecting and estimating appropriate indices aligned with their specific goals and data.",0 "The impressive capabilities of deep learning models are often counterbalanced by their inherent opacity, commonly termed the ""black box"" problem, which impedes their widespread acceptance in high-trust domains. In response, the intersecting disciplines of interpretability and explainability, collectively falling under the Explainable AI (XAI) umbrella, have become focal points of research. Although these terms are frequently used as synonyms, they carry distinct conceptual weights. This document offers a comparative exploration of interpretability and explainability within the deep learning paradigm, carefully outlining their respective definitions, objectives, prevalent methodologies, and inherent difficulties. Through illustrative examinations of the MNIST digit classification task and IMDB sentiment analysis, we substantiate a key argument: interpretability generally pertains to a model's inherent capacity for human comprehension of its operational mechanisms (global understanding), whereas explainability is more commonly associated with post-hoc techniques designed to illuminate the basis for a model's individual predictions or behaviors (local explanations). For example, feature attribution methods can reveal why a specific MNIST image is recognized as a '7', and word-level importance can clarify an IMDB sentiment outcome. However, these local insights do not render the complex underlying model globally transparent. A clear grasp of this differentiation, as demonstrated by these standard datasets, is vital for fostering dependable and sound artificial intelligence.",1 "Accurate and interpretable classification of brain tumors from magnetic resonance imaging (MRI) is critical for effective diagnosis and treatment planning. This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs) using a soft voting strategy to classify three common brain tumor types: glioma, meningioma, and pituitary adenoma. The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. To enhance transparency and clinical trust, the framework integrates an Explainable AI (XAI) module employing Grad-CAM++ for class-specific saliency visualization, alongside a symbolic Clinical Decision Rule Overlay (CDRO) that maps predictions to established radiological heuristics. The ensemble classifier achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%. 
Grad-CAM++ visualizations revealed strong spatial alignment between model attention and expert-annotated tumor regions, supported by Dice coefficients up to 0.88 and IoU scores up to 0.78. Clinical rule activation further validated model predictions in cases with distinct morphological features. A human-centered interpretability assessment involving five board-certified radiologists yielded high Likert-scale scores for both explanation usefulness (mean = 4.4) and heatmap-region correspondence (mean = 4.0), reinforcing the framework's clinical relevance. Overall, the proposed approach offers a robust, interpretable, and generalizable solution for automated brain tumor classification, advancing the integration of deep learning into clinical neurodiagnostics.",0 "Advanced cyber threats (e.g., Fileless Malware and Advanced Persistent Threat (APT)) have driven the adoption of provenance-based security solutions. These solutions employ Machine Learning (ML) models for behavioral modeling and critical security tasks such as malware and anomaly detection. However, the opacity of ML-based security models limits their broader adoption, as the lack of transparency in their decision-making processes restricts explainability and verifiability. We tailored our solution towards Graph Neural Network (GNN)-based security solutions since recent studies employ GNNs to comprehensively digest system provenance graphs for security-critical tasks. To enhance the explainability of GNN-based security models, we introduce PROVEXPLAINER, a framework offering instance-level security-aware explanations using an interpretable surrogate model. PROVEXPLAINER's interpretable feature space consists of discriminant subgraph patterns and graph structural features, which can be directly mapped to the system provenance problem space, making the explanations human interpretable. We show how PROVEXPLAINER synergizes with current state-of-the-art (SOTA) GNN explainers to deliver domain and instance-specific explanations. We measure explanation quality using the Fidelity+/Fidelity- metrics from the traditional GNN explanation literature, incorporate precision/recall metrics that assess the accuracy of the explanation against the ground truth, and design a human actionability metric based on graph traversal distance. On real-world Fileless and APT datasets, PROVEXPLAINER achieves up to 29%/27%/25%/1.4x higher Fidelity+, precision, recall, and actionability (where higher values are better), and 12% lower Fidelity- (where lower values are better) when compared against SOTA GNN explainers.",0 "Automated clinical coding involves mapping unstructured text from Electronic Health Records (EHRs) to standardized code systems such as the International Classification of Diseases (ICD). While recent advances in deep learning have significantly improved the accuracy and efficiency of ICD coding, the lack of explainability in these models remains a major limitation, undermining trust and transparency. Current explorations of explainability largely rely on attention-based techniques and qualitative assessments by physicians, yet lack systematic evaluation using consistent criteria on high-quality rationale datasets, as well as dedicated approaches explicitly trained to generate rationales to further enhance explanations.
In this work, we conduct a comprehensive evaluation of the explainability of the rationales for ICD coding through two key lenses: faithfulness, which evaluates how well explanations reflect the model's actual reasoning, and plausibility, which measures how consistent the explanations are with human expert judgment. To facilitate the evaluation of plausibility, we construct a new rationale-annotated dataset, offering denser annotations with diverse granularity and aligning better with current clinical practice, and conduct evaluations across three types of rationales for ICD coding. Encouraged by the promising plausibility of LLM-generated rationales for ICD coding, we further propose new rationale learning methods to improve the quality of model-generated rationales, where rationales produced by prompting LLMs with/without annotation examples are used as distant supervision signals. We empirically find that LLM-generated rationales align most closely with those of human experts. Moreover, incorporating few-shot human-annotated examples not only further improves rationale generation but also enhances rationale-learning approaches.",0 "Flight trajectory prediction for multiple aircraft is essential and provides critical insights into how aircraft navigate within current air traffic flows. However, predicting multi-agent flight trajectories is inherently challenging. One of the major difficulties is modeling both the individual aircraft behaviors over time and the complex interactions between flights. Generating explainable prediction outcomes is also a challenge. Therefore, we propose a Multi-Agent Inverted Transformer, MAIFormer, as a novel neural architecture that predicts multi-agent flight trajectories. The proposed framework features two key attention modules: (i) masked multivariate attention, which captures spatio-temporal patterns of individual aircraft, and (ii) agent attention, which models the social patterns among multiple agents in complex air traffic scenes. We evaluated MAIFormer using a real-world automatic dependent surveillance-broadcast flight trajectory dataset from the terminal airspace of Incheon International Airport in South Korea. The experimental results show that MAIFormer achieves the best performance across multiple metrics and outperforms other methods. In addition, MAIFormer produces prediction outcomes that are interpretable from a human perspective, which improves both the transparency of the model and its practical utility in air traffic control.",0 "Despite extensive observational and theoretical efforts, the physical processes responsible for shaping the diversity of accelerated electron spectra observed in solar flares remain poorly understood. We use 2D particle-in-cell (PIC) simulations of magnetized plasmas subject to continuous shear-driven magnetic amplification to investigate whether electron temperature anisotropy instabilities in above-the-loop-top (ALT) regions can account for this diversity. We explore how the resulting spectra depend on key plasma parameters: the initial electron temperature $T_e$ and the initial ratio of electron cyclotron to plasma frequencies, $f_e = \omega_{ce}/\omega_{pe}$. In our simulations, the adiabatic evolution of the plasma generates electron temperature anisotropy with the electron temperature perpendicular to the magnetic field being larger than the parallel temperature. This eventually drives electromagnetic instabilities capable of scattering and accelerating electrons.
The simulations consistently produce nonthermal tails in the electron spectra whose hardness increases with the initial value of $f_e$, while depending only weakly on $T_e$. For runs in which $f_e \lesssim 1.2$, the spectra exhibit double power-law shapes with downward (knee-like) breaks, and the electron scattering is dominated by OQES modes. In runs with $f_e\gtrsim 1.5$, PEMZ modes dominate and produce harder double power-law spectra with upward (elbow-like) breaks. Cases that include the $f_e\sim 1.2-1.5$ transition yield nearly single power-laws that end with bump-like breaks. Our results support the role of temperature anisotropy instabilities in accelerating electrons in ALT regions, offering a promising framework to help explain the wide range of nonthermal electron spectra reported in solar flare observations.",0 "Post-hoc explanation methods for black-box models often struggle with faithfulness and human interpretability due to the lack of explainability in current neural architectures. Meanwhile, B-cos networks have been introduced to improve model explainability by proposing an architecture that removes bias terms and promotes input-weight alignment. Although B-cos networks have shown success in building explainable systems, their application has so far been limited to computer vision models and their associated training pipelines. In this work, we introduce B-cos LMs, i.e., B-cos language models (LMs) empowered for natural language processing (NLP) tasks. Our approach directly transforms pre-trained language models into B-cos LMs by combining B-cos conversion and task fine-tuning, improving efficiency compared to previous methods. Our automatic and human evaluation results demonstrate that B-cos LMs produce more faithful and human interpretable explanations than post-hoc methods, while maintaining task performance comparable to conventional fine-tuning. Our in-depth analysis explores how B-cos LMs differ from conventionally fine-tuned models in their learning processes and explanation patterns. Finally, we present a first exploration of transforming decoder-only models to B-cos LMs for generation tasks.",0 "This study investigates how the U.S. Centers for Disease Control and Prevention (CDC) communicated COVID-19 guidance on Twitter and how publics responded over two years of the pandemic. Drawing on 275,124 tweets mentioning or addressing @CDCgov, I combine BERTopic modeling, sentiment analysis (VADER), credibility checks (Iffy Index), change point detection (PELT), and survival analysis to trace three phases of discourse: (1) early hoax claims and testing debates, (2) lockdown and mask controversies, and (3) post-vaccine variant concerns. I introduce the concept of crisis messaging journeys to explain how archived ""receipts"" of prior CDC statements fueled epistemic struggles, political polarization, and sustained engagement. Findings show that skeptical, cognitively complex discourse, particularly discourse questioning institutional trust, prolonged participation, while positive affirmation predicted faster disengagement. I conclude with design recommendations for annotated, cautious, and flashpoint-responsive communication strategies to bolster public trust and resilience during protracted health crises.",1 "Large language models (LLMs) can lead to undesired consequences when misaligned with human values, especially in scenarios involving complex and sensitive social biases.
Previous studies have revealed the misalignment of LLMs with human values using expert-designed or agent-based emulated bias scenarios. However, it remains unclear whether the alignment of LLMs with human values differs across different types of scenarios (e.g., scenarios containing negative vs. non-negative questions). In this study, we investigate the alignment of LLMs with human values regarding social biases (HVSB) in different types of bias scenarios. Through extensive analysis of 12 LLMs from four model families and four datasets, we demonstrate that LLMs with large model parameter scales do not necessarily have lower misalignment rates and attack success rates. Moreover, LLMs show a certain degree of alignment preference for specific types of scenarios, and LLMs from the same model family tend to have higher judgment consistency. In addition, we study LLMs' capacity to understand HVSB through the explanations they generate. We find no significant differences in the understanding of HVSB across LLMs. We also find that LLMs prefer their own generated explanations. Additionally, we endow smaller language models (LMs) with the ability to explain HVSB. The generation results show that the explanations generated by the fine-tuned smaller LMs are more readable, but have a relatively lower model agreeability.",2 "This systematic literature review analyzes the current state of compliance with Regulation (EU) 2024/1689 in autonomous robotic systems, focusing on cybersecurity frameworks and methodologies. Using the PRISMA protocol, 22 studies were selected from 243 initial records across IEEE Xplore, ACM DL, Scopus, and Web of Science. Findings reveal partial regulatory alignment: while progress has been made in risk management and encrypted communications, significant gaps persist in explainability modules, real-time human oversight, and knowledge base traceability. Only 40% of reviewed solutions explicitly address transparency requirements, and 30% implement failure intervention mechanisms. The study concludes that modular approaches integrating risk, supervision, and continuous auditing are essential to meet the AI Act mandates in autonomous robotics.",2 "Quantum Software Engineering (QSE) is a research area practiced by tech firms. Quantum developers face challenges in optimizing quantum computing and QSE concepts. They use Stack Overflow (SO) to discuss challenges and label posts with specialized quantum tags, which often refer to technical aspects rather than developer posts. Categorizing questions based on quantum concepts can help identify frequent QSE challenges. We conducted studies to classify questions into various challenges. We extracted 2829 questions from Q&A platforms using quantum-related tags. Posts were analyzed to identify frequent challenges and develop a novel grounded theory. Challenges include Tooling, Theoretical, Learning, Conceptual, Errors, and API Usage. Through content analysis and grounded theory, discussions were annotated with common challenges to develop a ground truth dataset. ChatGPT validated human annotations and resolved disagreements. Fine-tuned transformer algorithms, including BERT, DistilBERT, and RoBERTa, classified discussions into common challenges.
We achieved an average accuracy of 95% with BERT and DistilBERT, compared to fine-tuned Deep and Machine Learning (D&ML) classifiers, including Feedforward Neural Networks (FNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory networks (LSTM), which achieved accuracies of 89%, 86%, and 84%, respectively. The Transformer-based approach outperforms the D&ML-based approach with a 6% increase in accuracy by processing actual discussions, i.e., without data augmentation. We applied SHAP (SHapley Additive exPlanations) for model interpretability, revealing how linguistic features drive predictions and enhancing transparency in classification. These findings can help quantum vendors and forums better organize discussions for improved access and readability. However, empirical evaluation studies with actual developers and vendors are needed.",0 "Online harms are a growing problem in digital spaces, putting user safety at risk and reducing trust in social media platforms. One of the most persistent forms of harm is hate speech. To address this, we need tools that combine the speed and scale of automated systems with the judgment and insight of human moderators. These tools should not only find harmful content but also explain their decisions clearly, helping to build trust and understanding. In this paper, we present WATCHED, a chatbot designed to support content moderators in tackling hate speech. The chatbot is built as an Artificial Intelligence Agent system that uses Large Language Models along with several specialised tools. It compares new posts with real examples of hate speech and neutral content, uses a BERT-based classifier to help flag harmful messages, looks up slang and informal language using sources like Urban Dictionary, generates chain-of-thought reasoning, and checks platform guidelines to explain and support its decisions. This combination allows the chatbot not only to detect hate speech but to explain why content is considered harmful, grounded in both precedent and policy. Experimental results show that our proposed method surpasses existing state-of-the-art methods, reaching a macro F1 score of 0.91. Designed for moderators, safety teams, and researchers, the tool helps reduce online harms by supporting collaboration between AI and human oversight.",0 "As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle with detecting implicit hate, offensive language, and gender biases due to the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs. To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases.
Furthermore, we introduced SafePhi, a QLoRA fine-tuned version of Phi-4 adapted to diverse ethical contexts, which outperforms benchmark moderators by achieving a Macro F1 score of 0.89, whereas OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperformed, underscoring the need to incorporate more heterogeneous and representative data with human-in-the-loop oversight for better model robustness and explainability.",0 "Compositional visual reasoning has emerged as a key research frontier in multimodal AI, aiming to endow machines with the human-like ability to decompose visual scenes, ground intermediate concepts, and perform multi-step logical inference. While early surveys focus on monolithic vision-language models or general multimodal reasoning, a dedicated synthesis of the rapidly expanding compositional visual reasoning literature is still missing. We fill this gap with a comprehensive survey spanning 2023 to 2025 that systematically reviews 260+ papers from top venues (CVPR, ICCV, NeurIPS, ICML, ACL, etc.). We first formalize core definitions and describe why compositional approaches offer advantages in cognitive alignment, semantic fidelity, robustness, interpretability, and data efficiency. Next, we trace a five-stage paradigm shift: from prompt-enhanced language-centric pipelines, through tool-enhanced LLMs and tool-enhanced VLMs, to recently minted chain-of-thought reasoning and unified agentic VLMs, highlighting their architectural designs, strengths, and limitations. We then catalog 60+ benchmarks and corresponding metrics that probe compositional visual reasoning along dimensions such as grounding accuracy, chain-of-thought faithfulness, and high-resolution perception. Drawing on these analyses, we distill key insights, identify open challenges (e.g., limitations of LLM-based reasoning, hallucination, a bias toward deductive reasoning, scalable supervision, tool integration, and benchmark limitations), and outline future directions, including world-model integration, human-AI collaborative reasoning, and richer evaluation protocols. By offering a unified taxonomy, historical roadmap, and critical outlook, this survey aims to serve as a foundational reference and inspire the next generation of compositional visual reasoning research.",2 "The proliferation of deepfake technologies poses urgent challenges and serious risks to digital integrity, particularly within critical sectors such as forensics, journalism, and the legal system. While existing detection systems have made significant progress in classification accuracy, they typically function as black-box models, offering limited transparency and minimal support for human reasoning. This lack of interpretability hinders their usability in real-world decision-making contexts, especially for non-expert users. In this paper, we present DF-P2E (Deepfake: Prediction to Explanation), a novel multimodal framework that integrates visual, semantic, and narrative layers of explanation to make deepfake detection interpretable and accessible. The framework consists of three modular components: (1) a deepfake classifier with Grad-CAM-based saliency visualisation, (2) a visual captioning module that generates natural language summaries of manipulated regions, and (3) a narrative refinement module that uses a fine-tuned Large Language Model (LLM) to produce context-aware, user-sensitive explanations.
We instantiate and evaluate the framework on the DF40 benchmark, the most diverse deepfake dataset to date. Experiments demonstrate that our system achieves competitive detection performance while providing high-quality explanations aligned with Grad-CAM activations. By unifying prediction and explanation in a coherent, human-aligned pipeline, this work offers a scalable approach to interpretable deepfake detection, advancing the broader vision of trustworthy and transparent AI systems in adversarial media environments.",0 "In recent years, deep learning has achieved unprecedented success in various computer vision tasks, particularly in object detection. However, the black-box nature and high complexity of deep neural networks pose significant challenges for interpretability, especially in critical domains such as autonomous driving, medical imaging, and security systems. Explainable Artificial Intelligence (XAI) aims to address this challenge by providing tools and methods to make model decisions more transparent, interpretable, and trustworthy for humans. This review provides a comprehensive analysis of state-of-the-art explainability methods specifically applied to object detection models. The paper begins by categorizing existing XAI techniques based on their underlying mechanisms: perturbation-based, gradient-based, backpropagation-based, and graph-based methods. Notable methods such as D-RISE, BODEM, D-CLOSE, and FSOD are discussed in detail. Furthermore, the paper investigates their applicability to various object detection architectures, including YOLO, SSD, Faster R-CNN, and EfficientDet. Statistical analysis of publication trends from 2022 to mid-2025 shows an accelerating interest in explainable object detection, indicating its increasing importance. The study also explores common datasets and evaluation metrics, and highlights the major challenges associated with model interpretability. By providing a structured taxonomy and a critical assessment of existing methods, this review aims to guide researchers and practitioners in selecting suitable explainability techniques for object detection applications and to foster the development of more interpretable AI systems.",1 "Generating regulatorily compliant Suspicious Activity Reports (SARs) remains a high-cost, low-scalability bottleneck in Anti-Money Laundering (AML) workflows. While large language models (LLMs) offer promising fluency, they suffer from factual hallucination, limited crime typology alignment, and poor explainability -- posing unacceptable risks in compliance-critical domains. This paper introduces Co-Investigator AI, an agentic framework optimized to produce Suspicious Activity Reports (SARs) significantly faster and with greater accuracy than traditional methods. Drawing inspiration from recent advances in autonomous agent architectures, such as the AI Co-Scientist, our approach integrates specialized agents for planning, crime type detection, external intelligence gathering, and compliance validation. The system features dynamic memory management, an AI-Privacy Guard layer for sensitive data handling, and a real-time validation agent employing the Agent-as-a-Judge paradigm to ensure continuous narrative quality assurance. Human investigators remain firmly in the loop, empowered to review and refine drafts in a collaborative workflow that blends AI efficiency with domain expertise.
We demonstrate the versatility of Co-Investigator AI across a range of complex financial crime scenarios, highlighting its ability to streamline SAR drafting, align narratives with regulatory expectations, and enable compliance teams to focus on higher-order analytical work. This approach marks the beginning of a new era in compliance reporting -- bringing the transformative benefits of AI agents to the core of regulatory processes and paving the way for scalable, reliable, and transparent SAR generation.",0 "Artificial intelligence (AI) is advancing at a pace that raises urgent questions about how to align machine decision-making with human moral values. This working paper investigates how leading AI systems prioritize moral outcomes and what this reveals about the prospects for human-AI symbiosis. We address two central questions: (1) What moral values do state-of-the-art large language models (LLMs) implicitly favour when confronted with dilemmas? (2) How do differences in model architecture, cultural origin, and explainability affect these moral preferences? To explore these questions, we conduct a quantitative experiment with six LLMs, ranking and scoring outcomes across 18 dilemmas representing five moral frameworks. Our findings uncover strikingly consistent value biases. Across all models, outcomes aligned with Care and Virtue values were rated most moral, while libertarian choices were consistently penalized. Reasoning-enabled models exhibited greater sensitivity to context and provided richer explanations, whereas non-reasoning models produced more uniform but opaque judgments. This research makes three contributions: (i) Empirically, it delivers a large-scale comparison of moral reasoning across culturally distinct LLMs; (ii) Theoretically, it links probabilistic model behaviour with underlying value encodings; (iii) Practically, it highlights the need for explainability and cultural awareness as critical design principles to guide AI toward a transparent, aligned, and symbiotic future.",1 "Synthetic images, audio, and video can now be generated and edited by Artificial Intelligence (AI). In particular, the malicious use of synthetic data has raised concerns about potential harms to cybersecurity, personal privacy, and public trust. Although AI-based detection tools exist to help identify synthetic content, their limitations often lead to user mistrust and confusion between real and fake content. This study examines the role of AI performance in influencing human trust and decision making in synthetic data identification. Through an online human subject experiment involving 400 participants, we examined how varying AI performance impacts human trust and dependence on AI in deepfake detection. Our findings indicate how participants calibrate their dependence on AI based on their perceived risk and the prediction results provided by AI. These insights contribute to the development of transparent and explainable AI systems that better support everyday users in mitigating the harms of synthetic media.",0 "The rapid growth of large language models (LLMs) with diverse capabilities, latency, and computational costs presents a critical deployment challenge: selecting the most suitable model for each prompt to optimize the trade-off between performance and efficiency.
We introduce LLMRank, a prompt-aware routing framework that leverages rich, human-readable features extracted from prompts, including task type, reasoning patterns, complexity indicators, syntactic cues, and signals from a lightweight proxy solver. Unlike prior one-shot routers that rely solely on latent embeddings, LLMRank predicts per-model utility using a neural ranking model trained on RouterBench, comprising 36,497 prompts spanning 11 benchmarks and 11 state-of-the-art LLMs, from small efficient models to large frontier systems. Our approach achieves up to 89.2% of oracle utility, while providing interpretable feature attributions that explain routing decisions. Extensive studies demonstrate the importance of multifaceted feature extraction and the hybrid ranking objective, highlighting the potential of feature-driven routing for efficient and transparent LLM deployment.",2 "Driver visual attention prediction is a critical task in autonomous driving and human-computer interaction (HCI) research. Most prior studies focus on estimating attention allocation at a single moment in time, typically using static RGB images such as driving scene pictures. In this work, we propose a vision-language framework that models the changing landscape of drivers' gaze through natural language, using few-shot and zero-shot learning on single RGB images. We curate and refine high-quality captions from the BDD-A dataset using human-in-the-loop feedback, then fine-tune LLaVA to align visual perception with attention-centric scene understanding. Our approach integrates both low-level cues and top-down context (e.g., route semantics, risk anticipation), enabling language-based descriptions of gaze behavior. We evaluate performance across training regimes (few-shot and one-shot) and introduce domain-specific metrics for semantic alignment and response diversity. Results show that our fine-tuned model outperforms general-purpose VLMs in attention shift detection and interpretability. To our knowledge, this is among the first attempts to generate driver visual attention allocation and shifting predictions in natural language, offering a new direction for explainable AI in autonomous driving. Our approach provides a foundation for downstream tasks such as behavior forecasting, human-AI teaming, and multi-agent coordination.",1 "Despite the remarkable capabilities of large language models (LLMs) across a range of tasks, mathematical reasoning remains a challenging frontier. Motivated by the observation that humans learn more effectively when prompted not what to think but how to think, we introduce BloomWise, a cognitively-inspired prompting technique designed to enhance LLMs' performance on mathematical problem solving while making their solutions more explainable. BloomWise encourages LLMs to generate solutions - in the form of explanations - by progressing through a sequence of cognitive operations - from basic (e.g., remembering) to more advanced reasoning skills (e.g., evaluating) - mirroring how humans build understanding. The process iterates through these levels, halting early if a convergence criterion is met: specifically, if two or more consecutive levels yield the same answer, the solution from the earliest such level is output; otherwise, the process continues until all levels are completed. Through extensive experiments across five popular math reasoning datasets, we demonstrate the effectiveness of BloomWise.
We also present comprehensive ablation studies to analyze the strengths of each component within our system.",0 "Capturing human learning behavior with deep learning methods has become a major research focus in both psychology and intelligent systems. Recent approaches rely on controlled experiments or rule-based models to explore cognitive processes. However, they struggle to capture learning dynamics, track progress over time, or provide explainability. To address these challenges, we introduce LearnerAgent, a novel multi-agent framework based on Large Language Models (LLMs) to simulate a realistic teaching environment. To explore human-like learning dynamics, we construct learners with psychologically grounded profiles, such as Deep, Surface, and Lazy, as well as a persona-free General Learner to inspect the base LLM's default behavior. Through weekly knowledge acquisition, monthly strategic choices, periodic tests, and peer interaction, we can track the dynamic learning progress of individual learners over a full-year journey. Our findings are fourfold: 1) Longitudinal analysis reveals that only the Deep Learner achieves sustained cognitive growth. Our specially designed ""trap questions"" effectively diagnose the Surface Learner's shallow knowledge. 2) The behavioral and cognitive patterns of distinct learners align closely with their psychological profiles. 3) Learners' self-concept scores evolve realistically, with the General Learner developing surprisingly high self-efficacy despite its cognitive limitations. 4) Critically, the default profile of the base LLM is a ""diligent but brittle Surface Learner"" - an agent that mimics the behaviors of a good student but lacks true, generalizable understanding. Extensive simulation experiments demonstrate that LearnerAgent aligns well with real scenarios, yielding more insightful findings about LLMs' behavior.",1 "Well-being encompasses mental, physical, and social dimensions essential to personal growth and informed life decisions. As individuals increasingly consult Large Language Models (LLMs) to understand well-being, a key challenge emerges: Can LLMs generate explanations that are not only accurate but also tailored to diverse audiences? High-quality explanations require both factual correctness and the ability to meet the expectations of users with varying expertise. In this work, we construct a large-scale dataset comprising 43,880 explanations of 2,194 well-being concepts, generated by ten diverse LLMs. We introduce a principle-guided LLM-as-a-judge evaluation framework, employing dual judges to assess explanation quality. Furthermore, we show that fine-tuning an open-source LLM using Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) can significantly enhance the quality of generated explanations. Our results reveal: (1) The proposed LLM judges align well with human evaluations; (2) explanation quality varies significantly across models, audiences, and categories; and (3) DPO- and SFT-finetuned models outperform their larger counterparts, demonstrating the effectiveness of preference-based learning for specialized explanation tasks.",0 "Generative AI has made image creation more accessible, yet aligning outputs with nuanced creative intent remains challenging, particularly for non-experts. Existing tools often require users to externalize ideas through prompts or references, limiting fluid exploration.
We introduce ThematicPlane, a system that enables users to navigate and manipulate high-level semantic concepts (e.g., mood, style, or narrative tone) within an interactive thematic design plane. This interface bridges the gap between tacit creative intent and system control. In our exploratory study (N=6), participants engaged in divergent and convergent creative modes, often embracing unexpected results as inspiration or iteration cues. While they grounded their exploration in familiar themes, differing expectations of how themes mapped to outputs revealed a need for more explainable controls. Overall, ThematicPlane fosters expressive, iterative workflows and highlights new directions for intuitive, semantics-driven interaction in generative design tools.",2 "In the rapidly evolving field of Explainable Natural Language Processing (NLP), textual explanations, i.e., human-like rationales, are pivotal for explaining model predictions and enriching datasets with interpretable labels. Traditional approaches rely on human annotation, which is costly, labor-intensive, and impedes scalability. In this work, we present an automated framework that leverages multiple state-of-the-art large language models (LLMs) to generate high-quality textual explanations. We rigorously assess the quality of these LLM-generated explanations using a comprehensive suite of Natural Language Generation (NLG) metrics. Furthermore, we investigate the downstream impact of these explanations on the performance of pre-trained language models (PLMs) and LLMs across natural language inference tasks on two diverse benchmark datasets. Our experiments demonstrate that automated explanations exhibit highly competitive effectiveness compared to human-annotated explanations in improving model performance. Our findings underscore a promising avenue for scalable, automated LLM-based textual explanation generation for extending NLP datasets and enhancing model performance.",0 "Despite the continued anthropomorphization of AI systems, the potential impact of racialization during human-AI interaction is understudied. This study explores how human-AI cooperation may be impacted by the belief that data used to train an AI system is racialized, that is, it was trained on data from a specific group of people. During this study, participants completed a human-AI cooperation task using the Pig Chase game. Participants of different self-identified demographics interacted with AI agents whose perceived racial identities were manipulated, allowing us to assess how sociocultural perspectives influence the decision-making of participants in the game. After the game, participants completed a survey questionnaire to explain the strategies they used while playing the game and to understand the perceived intelligence of their AI teammates. Statistical analysis of task behavior data revealed a statistically significant effect of the participant's demographic, as well as the interaction between this self-identified demographic and the treatment condition (i.e., the perceived demographic of the agent). The results indicated that Non-White participants viewed AI agents racialized as White in a positive way compared to AI agents racialized as Black. Both Black and White participants viewed the AI agent in the control treatment in a negative way. 
A baseline cognitive model of the task, built using the ACT-R cognitive architecture, was used to provide a cognitive-level, process-based explanation of the participants' perspectives based on the results of the study. This model helps us better understand the factors affecting the decision-making strategies of the game participants. Results from analysis of these data, as well as cognitive modeling, indicate a need to expand understanding of the ways racialization (whether implicit or explicit) impacts interaction with AI systems.",2 "Explainable AI (XAI) has become a crucial component of Clinical Decision Support Systems (CDSS) to enhance transparency, trust, and clinical adoption. However, while many XAI methods have been proposed, their effectiveness in real-world medical settings remains underexplored. This paper provides a survey of human-centered evaluations of Explainable AI methods in Clinical Decision Support Systems. By categorizing existing works based on XAI methodologies, evaluation frameworks, and clinical adoption challenges, we offer a structured understanding of the landscape. Our findings reveal key challenges in the integration of XAI into healthcare workflows and propose a structured framework to align the evaluation methods of XAI with the clinical needs of stakeholders.",2 "Automated feedback generation has the potential to enhance students' learning progress by providing timely and targeted feedback. Moreover, it can assist teachers in optimizing their time, allowing them to focus on more strategic and personalized aspects of teaching. To generate high-quality, information-rich formative feedback, it is essential first to extract relevant indicators, as these serve as the foundation upon which the feedback is constructed. Teachers often employ feedback criteria grids composed of various indicators that they evaluate systematically. This study examines the initial phase of extracting such indicators from students' submissions in a language learning course using the large language model Llama 3.1. Accordingly, the alignment between indicators generated by the LLM and human ratings across various feedback criteria is investigated. The findings demonstrate statistically significant strong correlations, even in cases involving unanticipated combinations of indicators and criteria. The methodology employed in this paper offers a promising foundation for extracting indicators from students' submissions using LLMs. Such indicators can potentially be utilized to auto-generate explainable and transparent formative feedback in future research.",2 "The functional computation of the human brain arises from the collective behaviour of the underlying neural network. Emerging technologies enable the recording of population activity in neurons, and the theory of neural networks is expected to explain and extract functional computations from the data. Thermodynamically, a large proportion of the whole-body energy is consumed by the brain, and functional computation of the human brain seems to involve high energy consumption. The human brain, however, does not increase its energy consumption with its function, and most of its energy consumption is not involved in specific brain function: how can the human brain perform its wide repertoire of functional computations without drastically changing its energy consumption? Here, we present a mechanism to perform functional computation by subtle modification of the interaction network among the brain regions.
We first show that, by analyzing data on spontaneous and task-induced whole-cerebral-cortex activity, the probability fluxes, which are the microscopic measure of irreversible state transitions, exhibit unique patterns depending on the task being performed, indicating that human brain function is a distinct sequence of brain state transitions. We then fit the parameters of Ising spin systems with asymmetric interactions, where we reveal that the symmetric interactions among the brain regions are strong and task-independent, but the antisymmetric interactions are subtle and task-dependent, and the inferred model reproduces most of the observed probability flux patterns. Our results indicate that the human brain performs its functional computation by subtly modifying the antisymmetric interaction among the brain regions, which might be possible with a small amount of energy.",2 "The rise of LLMs opens new possibilities in modeling opinion evolution, a long-standing task in simulation, by leveraging advanced reasoning abilities to recreate complex, large-scale human cognitive trends. While most prior works focus on opinion evolution surrounding specific isolated events or the views within a country, ours is the first to model the large-scale attitude evolution of a population representing an entire country towards another -- US citizens' perspectives towards China. To tackle the challenges of this broad scenario, we propose a framework that integrates media data collection, user profile creation, and cognitive architecture for opinion updates to successfully reproduce the real trend of US attitudes towards China over a 20-year period from 2005 to today. We also leverage LLMs' capabilities to introduce debiased media exposure, extracting neutral events from typically subjective news content, to uncover the roots of polarized opinion formation, as well as a devil's advocate agent to help explain the rare reversal from negative to positive attitudes towards China, corresponding with changes in the way Americans obtain information about the country. The simulation results, beyond validating our framework architecture, also reveal the impact of biased framing and selection bias in shaping attitudes. Overall, our work contributes to a new paradigm for LLM-based modeling of cognitive behaviors in a large-scale, long-term, cross-border social context, providing insights into the formation of international biases and offering valuable implications for media consumers to better understand the factors shaping their perspectives, and ultimately contributing to the larger social need for bias reduction and cross-cultural tolerance.",0 "Large Language Models (LLMs) have demonstrated impressive performance in complex text generation tasks. However, the contribution of the input prompt to the generated content still remains obscure to humans, underscoring the necessity of understanding the causality between input and output pairs. Existing works on prompt-specific explanation often confine the model output to classification or next-word prediction. Few initial attempts aiming to explain the entire language generation often treat input prompt texts independently, ignoring their combinatorial effects on the follow-up generation. In this study, we introduce a counterfactual explanation framework based on Joint Prompt Attribution, JoPA, which aims to explain how a few prompt texts collaboratively influence the LLM's complete generation.
In particular, we formulate the task of prompt attribution for generation interpretation as a combinatorial optimization problem, and introduce a probabilistic algorithm to search for the causal input combination in the discrete space. We define and utilize multiple metrics to evaluate the produced explanations, demonstrating both the faithfulness and efficiency of our framework.",1 "Multifarious assembly models consider multiple structures assembled from a shared set of components, reflecting the efficient usage of components in biological self-assembly. These models are subject to a high-dimensional parameter space, with only a finite region of parameter space giving reliable self-assembly. Here we use a continuous-time Gillespie simulation method to study multifarious self-assembly and find that the region of parameter space in which reliable self-assembly can be achieved is smaller than what was obtained previously using a discrete-time Monte Carlo simulation method. We explain this discrepancy through a detailed analysis of the stability of assembled structures against chimera formation. We find that our continuous-time simulations of multifarious self-assembly can expose this instability in large systems even at moderate simulation times. In contrast, discrete-time simulations are slow to show this instability, particularly for large system sizes. For the remaining state space we find good agreement between the predictions of continuous- and discrete-time simulations. We present physical arguments that can help us predict the state boundaries in the parameter space, and gain a deeper understanding of multifarious self-assembly.",0 "The safe use of pharmaceuticals in food-producing animals is vital to protect animal welfare and human food safety. Adverse events (AEs) may signal unexpected pharmacokinetic or toxicokinetic effects, increasing the risk of violative residues in the food chain. This study introduces a predictive framework for classifying outcomes (Death vs. Recovery) using ~1.28 million reports (1987-2025 Q1) from the U.S. FDA's OpenFDA Center for Veterinary Medicine. A preprocessing pipeline merged relational tables and standardized AEs through VeDDRA ontologies. Data were normalized, missing values imputed, and high-cardinality features reduced; physicochemical drug properties were integrated to capture chemical-residue links. We evaluated supervised models, including Random Forest, CatBoost, XGBoost, ExcelFormer, and large language models (Gemma 3-27B, Phi 3-12B). Class imbalance was addressed with techniques such as undersampling and oversampling, with a focus on prioritizing recall for fatal outcomes. Ensemble methods (Voting, Stacking) and CatBoost performed best, achieving precision, recall, and F1-scores of 0.95. Incorporating Average Uncertainty Margin (AUM)-based pseudo-labeling of uncertain cases improved minority-class detection, particularly in ExcelFormer and XGBoost. Interpretability via SHAP identified biologically plausible predictors, including lung, heart, and bronchial disorders, animal demographics, and drug physicochemical properties. These features were strongly linked to fatal outcomes. Overall, the framework shows that combining rigorous data engineering, advanced machine learning, and explainable AI enables accurate, interpretable predictions of veterinary safety outcomes.
The approach supports FARAD's mission by enabling early detection of high-risk drug-event profiles, strengthening residue risk assessment, and informing regulatory and clinical decision-making.",0 "This paper considers Ecogame, an innovative art project of 1970, whose creators believed in a positive vision of a technological future; an understanding, posited on cybernetics, of a future that could be participatory via digital means, and therefore more democratised. Using simulation and early machine learning techniques over a live network, Ecogame combined the power of visual art with cybernetic concepts of adaptation, feedback, and control to propose that behaviour had implications for the total system. It provides an historical precedent for contemporary AI-driven art about using AI in a more human-centred way.",2 "Compositionality has long been considered a key explanatory property underlying human intelligence: arbitrary concepts can be composed into novel complex combinations, permitting the acquisition of an open ended, potentially infinite expressive capacity from finite learning experiences. Influential arguments have held that neural networks fail to explain this aspect of behavior, leading many to dismiss them as viable models of human cognition. Over the last decade, however, modern deep neural networks (DNNs), which share the same fundamental design principles as their predecessors, have come to dominate artificial intelligence, exhibiting the most advanced cognitive behaviors ever demonstrated in machines. In particular, large language models (LLMs), DNNs trained to predict the next word on a large corpus of text, have proven capable of sophisticated behaviors such as writing syntactically complex sentences without grammatical errors, producing cogent chains of reasoning, and even writing original computer programs -- all behaviors thought to require compositional processing. In this chapter, we survey recent empirical work from machine learning for a broad audience in philosophy, cognitive science, and neuroscience, situating recent breakthroughs within the broader context of philosophical arguments about compositionality. In particular, our review emphasizes two approaches to endowing neural networks with compositional generalization capabilities: (1) architectural inductive biases, and (2) metalearning, or learning to learn. We also present findings suggesting that LLM pretraining can be understood as a kind of metalearning, and can thereby equip DNNs with compositional generalization abilities in a similar way. We conclude by discussing the implications that these findings may have for the study of compositionality in human cognition and by suggesting avenues for future research.",2 "The frequency exponent of 1/f noise in graphene-boron nitride heterostructures is known to have multiple extrema in its dependence on the charge carrier concentration. This behavior is explained in the present paper as a result of the charge carrier trapping by impurities in the boron nitride. A kinetic equation for the charge carriers subject to trapping and interacting with acoustic phonons is derived. This equation is solved numerically, and the equilibrium solutions are used to evaluate the frequency exponent according to the quantum theory of 1/f noise. It is found that the frequency exponent does develop several minima and maxima, provided that the trapping probability is sufficiently wide and has a threshold with respect to the charge carrier energy. 
A detailed comparison with the experimental data is made, and the results are used to estimate the energy threshold and the trapping cross-section.",2 "Understanding how built environments shape human experience is central to designing sustainable cities. Cycling provides a critical case: it delivers health and environmental benefits, yet its uptake depends strongly on the experience of cycling rather than infrastructure alone. Research on this relationship has grown rapidly but remains fragmented across disciplines and scales, and has concentrated on network-level analyses of routes and connectivity. This bias is especially problematic in historical cities, where embedding new infrastructure is difficult, and where cycling experience is shaped not only by spatial form but also by how cyclists perceive, interpret, and physically respond to their environment - through psychological factors such as safety and comfort, physiological demands such as stress and fatigue, and perceptual cues in the streetscape. We systematically reviewed 68 studies across urban planning, transportation, behavioural science, neuroscience, and public health. Two scales of analysis were identified: a macro scale addressing the ability to cycle and a micro scale addressing the propensity to cycle. Methods were classified into objective and subjective approaches, with hybrid approaches beginning to emerge. We find a persistent reliance on objective proxies, limited integration of subjective accounts, and insufficient attention to the streetscape as a lived environment. Addressing these gaps is essential to explain why environments enable or deter cycling, and to inform the design of cities that support cycling as both mobility and lived experience.",1 "Advances in computer vision have opened new avenues for clinical applications, particularly in computerized exposure therapy where visual stimuli can be dynamically adjusted based on patient responses. As a critical step toward such adaptive systems, we investigated whether pretrained computer vision models can accurately predict fear levels from spider-related images. We adapted three diverse models using transfer learning to predict human fear ratings (on a 0-100 scale) from a standardized dataset of 313 images. The models were evaluated using cross-validation, achieving an average mean absolute error (MAE) between 10.1 and 11.0. Our learning curve analysis revealed that reducing the dataset size significantly harmed performance, though further increases yielded no substantial gains. Explainability assessments showed the models' predictions were based on spider-related features. A category-wise error analysis further identified visual conditions associated with higher errors (e.g., distant views and artificial/painted spiders). These findings demonstrate the potential of explainable computer vision models in predicting fear ratings, highlighting the importance of both model explainability and a sufficient dataset size for developing effective emotion-aware therapeutic technologies.",2 "The mainstream paradigm of remote sensing image interpretation has long been dominated by vision-centered models, which rely on visual features for semantic understanding. However, these models face inherent limitations in handling multi-modal reasoning, semantic abstraction, and interactive decision-making. 
While recent advances have introduced Large Language Models (LLMs) into remote sensing workflows, existing studies primarily focus on downstream applications, lacking a unified theoretical framework that explains the cognitive role of language. This review advocates a paradigm shift from vision-centered to language-centered remote sensing interpretation. Drawing inspiration from the Global Workspace Theory (GWT) of human cognition, we propose a language-centered framework for remote sensing interpretation that treats LLMs as the cognitive central hub integrating perceptual, task, knowledge, and action spaces to enable unified understanding, reasoning, and decision-making. We first explore the potential of LLMs as the central cognitive component in remote sensing interpretation, and then summarize core technical challenges, including unified multimodal representation, knowledge association, and reasoning and decision-making. Furthermore, we construct a global workspace-driven interpretation mechanism and review how language-centered solutions address each challenge. Finally, we outline future research directions from four perspectives: adaptive alignment of multimodal data, task understanding under dynamic knowledge constraints, trustworthy reasoning, and autonomous interaction. This work aims to provide a conceptual foundation for the next generation of remote sensing interpretation systems and establish a roadmap toward cognition-driven intelligent geospatial analysis.",0 "Business communication digitisation has reorganised the process of persuasive discourse, which allows not only greater transparency but also advanced deception. This inquiry synthesises classical rhetoric and communication psychology with linguistic theory and empirical studies in financial reporting, sustainability discourse, and digital marketing to explain how deceptive language can be systematically detected using a persuasive lexicon. In controlled settings, detection accuracies of greater than 99% were achieved by using computational textual analysis as well as personalised transformer models. However, reproducing this performance in multilingual settings remains problematic, largely because sufficient data are difficult to obtain and few multilingual text-processing infrastructures are in place. This evidence points to a widening gap between theoretical representations of communication and their empirical approximations, underscoring the need for robust automatic text-identification systems as AI-based discourse becomes increasingly realistic in communicating with humans.",2 "Reasoning is central to purposeful action, yet most robotic foundation models map perception and instructions directly to control, which limits adaptability, generalization, and semantic grounding. We introduce Action Reasoning Models (ARMs), a class of robotic foundation models that integrate perception, planning, and control through a structured three-stage pipeline. Our model, MolmoAct, encodes observations and instructions into depth-aware perception tokens, generates mid-level spatial plans as editable trajectory traces, and predicts precise low-level actions, enabling explainable and steerable behavior.
MolmoAct-7B-D achieves strong performance across simulation and real-world settings: 70.5% zero-shot accuracy on SimplerEnv Visual Matching tasks, surpassing closed-source Pi-0 and GR00T N1.5; 86.6% average success on LIBERO, including an additional 6.3% gain over ThinkAct on long-horizon tasks; and in real-world fine-tuning, an additional 10% (single-arm) and an additional 22.7% (bimanual) task progression over Pi-0-FAST. It also outperforms baselines by an additional 23.3% on out-of-distribution generalization and achieves top human-preference scores for open-ended instruction following and trajectory steering. Furthermore, we release, for the first time, the MolmoAct Dataset -- a mid-training robot dataset comprising over 10,000 high quality robot trajectories across diverse scenarios and tasks. Training with this dataset yields an average 5.5% improvement in general performance over the base model. We release all model weights, training code, our collected dataset, and our action reasoning dataset, establishing MolmoAct as both a state-of-the-art robotics foundation model and an open blueprint for building ARMs that transform perception into purposeful action through structured reasoning. Blogpost: https://allenai.org/blog/molmoact",0 "Astronomical research traditionally relies on extensive domain knowledge to interpret observations and narrow down hypotheses. We demonstrate that this process can be emulated using large language model-based agents to accelerate research workflows. We propose mephisto, a multi-agent collaboration framework that mimics human reasoning to interpret multi-band galaxy observations. mephisto interacts with the CIGALE codebase, which includes spectral energy distribution (SED) models to explain observations. In this open-world setting, mephisto learns from its self-play experience, performs tree search, and accumulates knowledge in a dynamically updated base. As a proof of concept, we apply mephisto to the latest data from the James Webb Space Telescope. mephisto attains near-human proficiency in reasoning about galaxies' physical scenarios, even when dealing with a recently discovered population of ""Little Red Dot"" galaxies. This represents the first demonstration of agentic research in astronomy, advancing towards end-to-end research via LLM agents and potentially expediting astronomical discoveries.",0 "Business interview preparation demands both solid theoretical grounding and refined soft skills, yet conventional classroom methods rarely deliver the individualized, culturally aware practice employers currently expect. This paper introduces SimInterview, a large language model (LLM)-based simulated multilingual interview training system designed for business professionals entering the AI-transformed labor market. Our system leverages an LLM agent and synthetic AI technologies to create realistic virtual recruiters capable of conducting personalized, real-time conversational interviews. The framework dynamically adapts interview scenarios using retrieval-augmented generation (RAG) to match individual resumes with specific job requirements across multiple languages. Built on LLMs (OpenAI o3, Llama 4 Maverick, Gemma 3), integrated with Whisper speech recognition, GPT-SoVITS voice synthesis, Ditto diffusion-based talking head generation model, and ChromaDB vector databases, our system significantly improves interview readiness across English and Japanese markets. 
Experiments with university-level candidates show that the system consistently aligns its assessments with job requirements, faithfully preserves resume content, and earns high satisfaction ratings, with the lightweight Gemma 3 model producing the most engaging conversations. Qualitative findings revealed that the standardized Japanese resume format improved document retrieval while diverse English resumes introduced additional variability, and they highlighted how cultural norms shape follow-up questioning strategies. Finally, we also outlined a contestable AI design that can explain its decisions, detect bias, and preserve human-in-the-loop oversight to meet emerging regulatory expectations.",0 "Many existing AI music generation tools rely on text prompts, complex interfaces, or instrument-like controls, which may require musical or technical knowledge that non-musicians do not possess. This paper introduces DeformTune, a prototype system that combines a tactile deformable interface with the MeasureVAE model to explore more intuitive, embodied, and explainable AI interaction. We conducted a preliminary study with 11 adult participants without formal musical training to investigate their experience with AI-assisted music creation. Thematic analysis of their feedback revealed recurring challenges, including unclear control mappings, limited expressive range, and the need for guidance throughout use. We discuss several design opportunities for enhancing the explainability of AI, including multimodal feedback and progressive interaction support. These findings contribute early insights toward making AI music systems more explainable and empowering for novice users.",1 "The Medico 2025 challenge addresses Visual Question Answering (VQA) for Gastrointestinal (GI) imaging, organized as part of the MediaEval task series. The challenge focuses on developing Explainable Artificial Intelligence (XAI) models that answer clinically relevant questions based on GI endoscopy images while providing interpretable justifications aligned with medical reasoning. It introduces two subtasks: (1) answering diverse types of visual questions using the Kvasir-VQA-x1 dataset, and (2) generating multimodal explanations to support clinical decision-making. The Kvasir-VQA-x1 dataset, created from 6,500 images and 159,549 complex question-answer (QA) pairs, serves as the benchmark for the challenge. By combining quantitative performance metrics and expert-reviewed explainability assessments, this task aims to advance trustworthy Artificial Intelligence (AI) in medical image analysis. Instructions, data access, and an updated guide for participation are available in the official competition repository: https://github.com/simula/MediaEval-Medico-2025",0 "In previous publications, we have argued that a form of panprotopsychism based on quantum states and events offers a solution to the combination problem. This framework explains the emergence of complex phenomenal qualities and conscious subjects. Furthermore, the inherent openness of quantum mechanics allows consciousness and, more generally, phenomenal properties to exert a causal influence. If the view proposed by quantum panprotopsychism is valid, it suggests that we inhabit a consciousness-centered universe, a world whose fundamental nature is phenomenal.
This is at odds with the current view of the human condition, which was strongly influenced by a science based on classical mechanicism and led to nihilism and existentialism in late 19th-century Europe and, more recently, to the rise of anti-foundationalist perspectives. The centrality of consciousness resulting from the incorporation of a quantum ontology into our worldview leads us to reconsider the nihilistic view and conclude that we live in a world in which a precise physical order leads to people capable of accessing a transcendent phenomenal realm.",0 "This study provides a carefully controlled examination of the universality of the von Karman and additive constants associated with the classical logarithmic scaling of the mean streamwise velocity profile in high-friction Reynolds number (Re_tau) turbulent boundary layers (TBLs) subjected to weak-to-moderate adverse pressure gradients (APGs). The analysis leverages a recently developed method for imposing APGs with minimal pressure gradient (PG) history effects in Melbourne's high-Re_tau TBL facility (Deshpande et al., Phys. Rev. Fluids, vol. 8, 2023), in combination with direct measurements of local friction velocity via oil-film interferometry. The von Karman constant is found to remain invariant within experimental uncertainty, while the additive coefficient decreases with both the local APG and PG history, potentially explaining reported variability in logarithmic scalings across the APG TBL literature. The facility enables manual prescription of APGs along the full test section, allowing weak PG history perturbations to be followed by extended recovery regions, while maintaining matched local PG and Re_tau at downstream measurement locations. This experimental configuration allows for systematic decoupling of the effects of Re_tau, local PGs, and PG history, enabling assessment of their individual contributions to single-point turbulence statistics and energy spectra across different TBL regions. Present results at high Re_tau show that PG history influences both small-scale and large-scale motions in the overlap and outer regions, whereas local PGs primarily affect the large scales. The strongest effects of the local PG occur in the outer region (around 0.4delta, where delta is the boundary layer thickness), while PG history effects extend down to approximately 0.25delta, just above the logarithmic region.",2 "Recent advancements in visual generative models have enabled high-quality image and video generation, opening diverse applications. However, evaluating these models often demands sampling hundreds or thousands of images or videos, making the process computationally expensive, especially for diffusion-based models with inherently slow sampling. Moreover, existing evaluation methods rely on rigid pipelines that overlook specific user needs and provide numerical results without clear explanations. In contrast, humans can quickly form impressions of a model's capabilities by observing only a few samples. To mimic this, we propose the Evaluation Agent framework, which employs human-like strategies for efficient, dynamic, multi-round evaluations using only a few samples per round, while offering detailed, user-tailored analyses. It offers four key advantages: 1) efficiency, 2) promptable evaluation tailored to diverse user needs, 3) explainability beyond single numerical scores, and 4) scalability across various models and tools.
Experiments show that Evaluation Agent reduces evaluation time to 10% of traditional methods while delivering comparable results. The Evaluation Agent framework is fully open-sourced to advance research in visual generative models and their efficient evaluation.",2 "This research is part of a study of a real-time, cloud-based on-street parking service using crowd-sourced in-vehicle fleet data. The service provides real-time information about available parking spots by classifying crowd-sourced detections observed via ultrasonic sensors. The goal of this research is to optimize the current parking service quality by analyzing the automation of the existing test process for ground truth tests. Therefore, methods from the field of machine learning, especially image pattern recognition, are applied to enrich the database and substitute human engineering work in major areas of the analysis process. After an introduction into the related areas of machine learning, this paper explains the methods and implementations made to achieve a high level of automation, applying convolutional neural networks. Finally, predefined metrics present the performance level achieved, showing a time reduction of human resources up to 99.58 %. The overall improvements are discussed, summarized, and followed by an outlook for future development and potential application of the analysis automation tool.",0 "In recent years, there has been significant progress in the development of deep learning models over relational databases, including architectures based on heterogeneous graph neural networks (hetero-GNNs) and heterogeneous graph transformers. In effect, such architectures state how the database records and links (e.g., foreign-key references) translate into a large, complex numerical expression, involving numerous learnable parameters. This complexity makes it hard to explain, in human-understandable terms, how a model uses the available data to arrive at a given prediction. We present a novel framework for explaining machine-learning models over relational databases, where explanations are view definitions that highlight focused parts of the database that mostly contribute to the model's prediction. We establish such global abductive explanations by adapting the classic notion of determinacy by Nash, Segoufin, and Vianu (2010). In addition to tuning the tradeoff between determinacy and conciseness, the framework allows controlling the level of granularity by adopting different fragments of view definitions, such as ones highlighting whole columns, foreign keys between tables, relevant groups of tuples, and so on. We investigate the realization of the framework in the case of hetero-GNNs. We develop heuristic algorithms that avoid the exhaustive search over the space of all databases. We propose techniques that are model-agnostic, and others that are tailored to hetero-GNNs via the notion of learnable masking. Our approach is evaluated through an extensive empirical study on the RelBench collection, covering a variety of domains and different record-level tasks. The results demonstrate the usefulness of the proposed explanations, as well as the efficiency of their generation.",2 "Large Language Models (LLM) have achieved remarkable performances in general domains and are now extending into the expert domain of law. Several benchmarks have been proposed to evaluate LLMs' legal capabilities. However, these benchmarks fail to evaluate open-ended and provision-grounded Question Answering (QA). 
To address this, we introduce a Korean Benchmark for Legal EXplainable QA (KoBLEX), designed to evaluate provision-grounded, multi-hop legal reasoning. KoBLEX includes 226 scenario-based QA instances and their supporting provisions, created using a hybrid LLM-human expert pipeline. We also propose a method called Parametric provision-guided Selection Retrieval (ParSeR), which uses LLM-generated parametric provisions to guide legally grounded and reliable answers. ParSeR facilitates multi-hop reasoning on complex legal questions by generating parametric provisions and employing a three-stage sequential retrieval process. Furthermore, to better evaluate the legal fidelity of the generated answers, we propose Legal Fidelity Evaluation (LF-Eval). LF-Eval is an automatic metric that jointly considers the question, answer, and supporting provisions and shows a high correlation with human judgments. Experimental results show that ParSeR consistently outperforms strong baselines, achieving the best results across multiple LLMs. Notably, compared to standard retrieval with GPT-4o, ParSeR achieves +37.91 higher F1 and +30.81 higher LF-Eval. Further analyses reveal that ParSeR efficiently delivers consistent performance across reasoning depths, with ablations confirming the effectiveness of ParSeR.",2 "Federated Learning (FL) is a widespread and well-adopted paradigm of decentralised learning that allows training one model from multiple sources without the need to transfer data between participating clients directly. Since its inception in 2015, it has been divided into numerous subfields that deal with application-specific issues, such as data heterogeneity or resource allocation. One such sub-field, Clustered Federated Learning (CFL), deals with the problem of clustering the population of clients into separate cohorts to deliver personalised models. Although a few remarkable works have been published in this domain, the problem remains largely unexplored, as its basic assumptions and settings differ slightly from those of standard FL. In this work, we present One-Shot Clustered Federated Learning (OCFL), a clustering-agnostic algorithm that can automatically detect the earliest suitable moment for clustering. Our algorithm is based on computing the cosine distance between the gradients of the clients and a temperature measure that detects when the federated model starts to converge. We empirically evaluate our methodology by testing various one-shot clustering algorithms for over forty different tasks on five benchmark datasets. Our experiments showcase the good performance of our approach when used to perform CFL in an automated manner without the need to adjust hyperparameters. We also revisit the practical feasibility of CFL algorithms based on the gradients of the clients, providing firm evidence of the high efficiency of density-based clustering methods when used to differentiate between the loss surfaces of neural networks trained on different distributions. Moreover, by inspecting the feasibility of local explanations generated with the help of GradCAM, we can provide more insights into the relationship between personalisation and the explainability of local predictions.",2 "Blockchain technology is widely used in various fields due to its ability to provide decentralization and trustless security. This is a fundamental understanding held by many advocates, but it is misunderstood, leading participants to fail to recognize the limitations of the security that blockchain can provide. 
Among all current network attacks, Denial of Service (DoS) attacks pose significant threats due to their ease of execution and destructive potential. This paper, based on the blockchain architecture hierarchy, categorizes and organizes existing DoS attacks, with a focus on explaining the principles and methods of contract layer and consensus layer DoS attacks. Furthermore, this paper comprehensively analyzes and compares commonly used detection methods and defense technologies, which will contribute to strengthening the security and stability of blockchain systems and promoting further innovation and application of blockchain systems.",0 "We propose a novel SuperBrain framework for collective intelligence, grounded in the co-evolution of large language models (LLMs) and human users. Unlike static prompt engineering or isolated agent simulations, our approach emphasizes a dynamic pathway from Subclass Brain to Superclass Brain: (1) A Subclass Brain arises from persistent, personalized interaction between a user and an LLM, forming a cognitive dyad with adaptive learning memory. (2) Through GA-assisted forward-backward evolution, these dyads iteratively refine prompts and task performance. (3) Multiple Subclass Brains coordinate via Swarm Intelligence, optimizing across multi-objective fitness landscapes and exchanging distilled heuristics. (4) Their standardized behaviors and cognitive signatures integrate into a Superclass Brain, an emergent meta-intelligence capable of abstraction, generalization and self-improvement.
We outline the theoretical constructs, present initial implementations (e.g., UAV scheduling, KU/KI keyword filtering) and propose a registry for cross-dyad knowledge consolidation. This work provides both a conceptual foundation and an architectural roadmap toward scalable, explainable and ethically aligned collective AI.",2 "As autonomous technologies increasingly shape maritime operations, understanding why an AI system makes a decision becomes as crucial as what it decides. In complex and dynamic maritime environments, trust in AI depends not only on performance but also on transparency and interpretability. This paper highlights the importance of Explainable AI (XAI) as a foundation for effective human-machine teaming in the maritime domain, where informed oversight and shared understanding are essential. To support the user-centered integration of XAI, we propose a domain-specific survey designed to capture maritime professionals' perceptions of trust, usability, and explainability. Our aim is to foster awareness and guide the development of user-centric XAI systems tailored to the needs of seafarers and maritime teams.",0 "The advent of large (visual) language models (LLM / LVLM) has led to a deluge of automated human-like systems in several domains, including social media content generation, search and recommendation, healthcare prognosis, and AI assistants for cognitive tasks. Although these systems have been successfully integrated in production, very little focus has been placed on sports, particularly accurate identification and natural language description of the game play. Most existing LLM/LVLMs can explain generic sports activities, but lack sufficient domain-centric sports jargon to create natural (human-like) descriptions. This work highlights the limitations of existing SoTA LLM/LVLMs for generating production-grade sports captions from images in a desired stylized format, and proposes a two-level fine-tuned LVLM pipeline to address that. The proposed pipeline yields an improvement of >8-10% in F1 score, and >2-10% in BERT score compared to alternative approaches. In addition, it has a small runtime memory footprint and fast execution time. During Super Bowl LIX, the pipeline proved its practical application for live professional sports journalism, generating highly accurate and stylized captions at the rate of 6 images per 3-5 seconds for over 1000 images during the game play.",1 "Background: Existing robust, pervasive device-based systems developed in recent years to detect depression require data collected over a long period and may not be effective in cases where early detection is crucial. Objective: Our main objective was to develop a minimalistic system to identify depression using data retrieved in the fastest possible time. Methods: We developed a fast tool that retrieves the past 7 days' app usage data in 1 second (mean 0.31, SD 1.10 seconds). A total of 100 students from Bangladesh participated in our study, and our tool collected their app usage data. To identify depressed and nondepressed students, we developed a diverse set of ML models. We selected important features using the stable approach, along with 3 main types of feature selection (FS) approaches. Results: Leveraging only the app usage data retrieved in 1 second, our light gradient boosting machine model used the important features selected by the stable FS approach and correctly identified 82.4% (n=42) of depressed students (precision=75%, F1-score=78.5%).
Moreover, after comprehensive exploration, we presented a parsimonious stacking model where around 5 features selected by the all-relevant FS approach Boruta were used in each iteration of validation and showed a maximum precision of 77.4% (balanced accuracy=77.9%). A SHAP analysis of our best models presented behavioral markers that were related to depression. Conclusions: Due to our system's fast and minimalistic nature, it may make a worthwhile contribution to identifying depression in underdeveloped and developing regions. In addition, our detailed discussion about the implication of our findings can facilitate the development of less resource-intensive systems to better understand students who are depressed.",2 "The gastrointestinal (GI) tract of humans can have a wide variety of aberrant mucosal abnormality findings, ranging from mild irritations to extremely fatal illnesses. Prompt identification of gastrointestinal disorders greatly contributes to arresting the progression of the illness and improving therapeutic outcomes. This paper presents an ensemble of pre-trained vision transformers (ViTs) for accurately classifying endoscopic images of the GI tract to categorize gastrointestinal problems and illnesses. ViTs, attention-based neural networks, have revolutionized image recognition by leveraging the transformative power of the transformer architecture, achieving state-of-the-art (SOTA) performance across various visual tasks. The proposed model was evaluated on the publicly available HyperKvasir dataset with 10,662 images of 23 different GI diseases for the purpose of identifying GI tract diseases. An ensemble method is proposed utilizing the predictions of two pre-trained models, MobileViT_XS and MobileViT_V2_200, which achieved accuracies of 90.57% and 90.48%, respectively. All the individual models are outperformed by the ensemble model, GastroViT, with an average precision, recall, F1 score, and accuracy of 69%, 63%, 64%, and 91.98%, respectively, in the first testing that involves 23 classes. The model comprises only 20 million (M) parameters, even without data augmentation and despite the highly imbalanced dataset. For the second testing with 16 classes, the scores are even higher, with average precision, recall, F1 score, and accuracy of 87%, 86%, 87%, and 92.70%, respectively. Additionally, the incorporation of explainable AI (XAI) methods such as Grad-CAM (Gradient Weighted Class Activation Mapping) and SHAP (Shapley Additive Explanations) enhances model interpretability, providing valuable insights for reliable GI diagnosis in real-world settings.",0 "Accurate staging of Diabetic Retinopathy (DR) is essential for guiding timely interventions and preventing vision loss. However, current staging models are hardly interpretable, and most public datasets contain no clinical reasoning or interpretation beyond image-level labels. In this paper, we present a novel method that integrates graph representation learning with vision-language models (VLMs) to deliver explainable DR diagnosis. Our approach leverages optical coherence tomography angiography (OCTA) images by constructing biologically informed graphs that encode key retinal vascular features such as vessel morphology and spatial connectivity. A graph neural network (GNN) then performs DR staging while integrated gradients highlight critical nodes and edges and their individual features that drive the classification decisions. 
We collect this graph-based knowledge which attributes the model's prediction to physiological structures and their characteristics. We then transform it into textual descriptions for VLMs. We perform instruction-tuning with these textual descriptions and the corresponding image to train a student VLM. This final agent can classify the disease and explain its decision in a human interpretable way solely based on a single image input. Experimental evaluations on both proprietary and public datasets demonstrate that our method not only improves classification accuracy but also offers more clinically interpretable results. An expert study further demonstrates that our method provides more accurate diagnostic explanations and paves the way for precise localization of pathologies in OCTA images.",0 "Radar-based human pose estimation (HPE) provides a privacy-preserving, illumination-invariant sensing modality but is challenged by noisy, multipath-affected measurements. We introduce RadProPoser, a probabilistic encoder-decoder architecture that processes complex-valued radar tensors from a compact 3-transmitter, 4-receiver MIMO radar. By incorporating variational inference into keypoint regression, RadProPoser jointly predicts 26 three-dimensional joint locations alongside heteroscedastic aleatoric uncertainties and can be recalibrated to predict total uncertainty. We explore different probabilistic formulations using both Gaussian and Laplace distributions for latent priors and likelihoods. On our newly released dataset with optical motion-capture ground truth, RadProPoser achieves an overall mean per-joint position error (MPJPE) of 6.425 cm, with 5.678 cm at the 45 degree aspect angle. The learned uncertainties exhibit strong alignment with actual pose errors and can be calibrated to produce reliable prediction intervals, with our best configuration achieving an expected calibration error of 0.021. As an additional demonstration, sampling from these latent distributions enables effective data augmentation for downstream activity classification, resulting in an F1 score of 0.870. To our knowledge, this is the first end-to-end radar tensor-based HPE system to explicitly model and quantify per-joint uncertainty from raw radar tensor data, establishing a foundation for explainable and reliable human motion analysis in radar applications.",1 "Perceived risk in automated vehicles (AVs) can create the very danger that automation is meant to prevent: a frightened rider may hesitate when seconds matter, misjudge hazards, or disengage. However, measuring how perceived risk evolves in real time during driving remains challenging, leaving a gap in decoding such hidden psychological states. Here, we present a novel method to time-continuously measure and decode perceived risk. We conducted a controlled experiment where 2,164 participants viewed high-fidelity videos of common highway driving scenes and provided 141,628 discrete safety ratings. Through continuous-signal reconstruction of the discrete ratings, we obtained 236 hours of time-continuous perceived risk data - the largest perceived risk dataset to date. Leveraging this dataset, we trained deep neural networks that predict moment-by-moment perceived risk from vehicle kinematics with a mean relative error below $3\%$. Explainable AI analysis uncovers which factors determine perceived risk in real time. Our findings demonstrate a new paradigm for quantifying dynamic passenger experience and psychological constructs in real time. 
These findings can guide the design of AVs and other machines that operate in close proximity to people, adjusting behaviour before trust erodes, and help realise automation's benefits in transport, healthcare, and service robotics.",2 "Automatic medical report generation has the potential to support clinical diagnosis, reduce the workload of radiologists, and demonstrate potential for enhancing diagnostic consistency. However, current evaluation metrics often fail to reflect the clinical reliability of generated reports. Early overlap-based methods focus on textual matches between predicted and ground-truth entities but miss fine-grained clinical details (e.g., anatomical location, severity). Some diagnostic metrics are limited by fixed vocabularies or templates, reducing their ability to capture diverse clinical expressions. LLM-based approaches further lack interpretable reasoning steps, making it hard to assess or trust their behavior in safety-critical settings. These limitations hinder the comprehensive assessment of the reliability of generated reports and pose risks in their selection for clinical use. Therefore, we propose a Granular Explainable Multi-Agent Score (GEMA-Score) in this paper, which conducts both objective quantification and subjective evaluation through a large language model-based multi-agent workflow. Our GEMA-Score parses structured reports and employs stable calculations through interactive exchanges of information among agents to assess disease diagnosis, location, severity, and uncertainty. Additionally, an LLM-based scoring agent evaluates completeness, readability, and clinical terminology while providing explanatory feedback. Extensive experiments validate that GEMA-Score achieves the highest correlation with human expert evaluations on a public dataset, demonstrating its effectiveness in clinical scoring (Kendall coefficient = $0.69$ for ReXVal dataset and Kendall coefficient = $0.45$ for RadEvalX dataset). The anonymous project demo is available at: https://github.com/Zhenxuan-Zhang/GEMA_score.",0 "This work-in-progress paper presents SPARC (Systematic Problem Solving and Algorithmic Reasoning for Children), a gamified learning platform designed to enhance engagement and knowledge retention in K-12 STEM education. Traditional approaches often struggle to motivate students or facilitate deep understanding, especially for complex scientific concepts. SPARC addresses these challenges by integrating interactive, narrative-driven gameplay with an artificial intelligence peer agent built on large language models. Rather than simply providing answers, the agent engages students in dialogue and inquiry, prompting them to explain concepts and solve problems collaboratively. The platform's design is grounded in educational theory and closely aligned with state learning standards. Initial classroom pilots utilized a multi-method assessment framework combining pre- and post-tests, in-game analytics, and qualitative feedback from students and teachers. Preliminary findings indicate that SPARC significantly increases student engagement, with most participants reporting greater interest in STEM subjects and moderate gains in conceptual understanding observed in post-test results. Ongoing development focuses on refining the AI agent, expanding curriculum integration, and improving accessibility. 
These early results demonstrate the potential of combining AI-driven peer support with game-based learning to create inclusive, effective, and engaging educational experiences for K-12 learners.",0 "We present SAInT, a Python-based tool for visually exploring and understanding the behavior of Machine Learning (ML) models through integrated local and global sensitivity analysis. Our system supports Human-in-the-Loop (HITL) workflows by enabling users - both AI researchers and domain experts - to configure, train, evaluate, and explain models through an interactive graphical interface without programming. The tool automates model training and selection, provides global feature attribution using variance-based sensitivity analysis, and offers per-instance explanation via LIME and SHAP. We demonstrate the system on a classification task predicting survival on the Titanic dataset and show how sensitivity information can guide feature selection and data refinement.",0 "We analyze anonymous interaction data of minors in classrooms spanning several months, schools, and subjects, employing a novel, simple topic modeling approach. Specifically, we categorize more than 17,000 messages generated by students, teachers, and ChatGPT in two dimensions: content (such as nature and people) and tasks (such as writing and explaining). Our hierarchical categorization, done separately for each dimension, includes exemplary prompts and provides both a high-level overview and tangible insights. Prior works mostly lack a content or thematic categorization. While task categorizations are more prevalent in education, most have not been supported by real-world data for K-12. It is therefore not surprising that our analysis yielded a number of novel applications. In deriving these insights, we found that many well-established classical and emerging computational methods for analyzing large amounts of text, i.e., topic modeling, underperform; this led us to apply state-of-the-art LLMs directly, with adequate pre-processing and explicit instructions, to achieve hierarchical topic structures with better human alignment than prior approaches. Our findings support fellow researchers, teachers and students in enriching the usage of GenAI, while our discussion also highlights a number of concerns and open questions for future research.",0 "Chemists in search of structure-property relationships face great challenges due to limited high-quality, concordant datasets. Machine learning (ML) has significantly advanced predictive capabilities in chemical sciences, but these modern data-driven approaches have increased the demand for data. In response to the growing demand for explainable AI (XAI) and to bridge the gap between predictive accuracy and human comprehensibility, we introduce LAMeL - a Linear Algorithm for Meta-Learning that preserves interpretability while improving the prediction accuracy across multiple properties. While most approaches treat each chemical prediction task in isolation, LAMeL leverages a meta-learning framework to identify shared model parameters across related tasks, even if those tasks do not share data, allowing it to learn a common functional manifold that serves as a more informed starting point for new unseen tasks. Our method delivers performance improvements ranging from 1.1- to 25-fold over standard ridge regression, depending on the domain of the dataset.
While the degree of performance enhancement varies across tasks, LAMeL consistently outperforms or matches traditional linear methods, making it a reliable tool for chemical property prediction where both accuracy and interpretability are critical.",1 "Audio Large Language Models (Audio LLMs) enable human-like conversation about music, yet it is unclear if they are truly listening to the audio or just using textual reasoning, as recent benchmarks suggest. This paper investigates this issue by quantifying the contribution of each modality to a model's output. We adapt the MM-SHAP framework, a performance-agnostic score based on Shapley values that quantifies the relative contribution of each modality to a model's prediction. We evaluate two models on the MuChoMusic benchmark and find that the model with higher accuracy relies more on text to answer questions, but further inspection shows that even if the overall audio contribution is low, models can successfully localize key sound events, suggesting that audio is not entirely ignored. Our study is the first application of MM-SHAP to Audio LLMs and we hope it will serve as a foundational step for future research in explainable AI and audio.",0 "Objective: This study proposes and preliminarily validates a novel ""Functional-Energetic Topology Model"" to uncover neurodynamic mechanisms of Non-Suicidal Self-Injury (NSSI), using Graph Neural Networks (GNNs) to decode brain network patterns from single-channel EEG in real-world settings. Methods: EEG data were collected over ~1 month from three adolescents with NSSI using a smartphone app and a portable Fp1 EEG headband during impulsive and non-impulsive states. A theory-driven GNN with seven functional nodes was built. Performance was evaluated via intra-subject (80/20 split) and leave-one-subject-out cross-validation (LOSOCV). GNNExplainer was used for interpretability. Results: The model achieved high intra-subject accuracy (>85%) and significantly above-chance cross-subject performance (approximately 73.7%). Explainability analysis revealed a key finding: during NSSI states, a critical feedback loop regulating somatic sensation exhibits dysfunction and directional reversal. Specifically, the brain loses its ability to self-correct via negative bodily feedback, and the regulatory mechanism enters an ""ineffective idling"" state. Conclusion: This work demonstrates the feasibility of applying theory-guided GNNs to sparse, single-channel EEG for decoding complex mental states. The identified ""feedback loop reversal"" offers a novel, dynamic, and computable model of NSSI mechanisms, paving the way for objective biomarkers and next-generation Digital Therapeutics (DTx).",0 "Modelling human variation in rating tasks is crucial for personalization, pluralistic model alignment, and computational social science. We propose representing individuals using natural language value profiles -- descriptions of underlying values compressed from in-context demonstrations -- along with a steerable decoder model that estimates individual ratings from a rater representation. To measure the predictive information in a rater representation, we introduce an information-theoretic methodology and find that demonstrations contain the most information, followed by value profiles, then demographics. However, value profiles effectively compress the useful information from demonstrations (>70% information preservation) and offer advantages in terms of scrutability, interpretability, and steerability.
Furthermore, clustering value profiles to identify similarly behaving individuals better explains rater variation than the most predictive demographic groupings. Going beyond test set performance, we show that the decoder predictions change in line with semantic profile differences, are well-calibrated, and can help explain instance-level disagreement by simulating an annotator population. These results demonstrate that value profiles offer novel, predictive ways to describe individual variation beyond demographics or group information.",0 "Vision Language Models (VLMs) have recently been adopted in robotics for their capability in common sense reasoning and generalizability. Existing work has applied VLMs to generate task and motion planning from natural language instructions and simulate training data for robot learning. In this work, we explore using VLM to interpret human demonstration videos and generate robot task planning. Our method integrates keyframe selection, visual perception, and VLM reasoning into a pipeline. We named it SeeDo because it enables the VLM to ''see'' human demonstrations and explain the corresponding plans to the robot for it to ''do''. To validate our approach, we collected a set of long-horizon human videos demonstrating pick-and-place tasks in three diverse categories and designed a set of metrics to comprehensively benchmark SeeDo against several baselines, including state-of-the-art video-input VLMs. The experiments demonstrate SeeDo's superior performance. We further deployed the generated task plans in both a simulation environment and on a real robot arm.",1 "Explainability in artificial intelligence (XAI) remains a crucial aspect for fostering trust and understanding in machine learning models. Current visual explanation techniques, such as gradient-based or class-activation-based methods, often exhibit a strong dependence on specific model architectures. Conversely, perturbation-based methods, despite being model-agnostic, are computationally expensive as they require evaluating models on a large number of forward passes. In this work, we introduce Foveation-based Explanations (FovEx), a novel XAI method inspired by human vision. FovEx seamlessly integrates biologically inspired perturbations by iteratively creating foveated renderings of the image and combines them with gradient-based visual explorations to determine locations of interest efficiently. These locations are selected to maximize the performance of the model to be explained with respect to the downstream task and then combined to generate an attribution map. We provide a thorough evaluation with qualitative and quantitative assessments on established benchmarks. Our method achieves state-of-the-art performance on both transformers (on 4 out of 5 metrics) and convolutional models (on 3 out of 5 metrics), demonstrating its versatility among various architectures. Furthermore, we show the alignment between the explanation map produced by FovEx and human gaze patterns (+14\% in NSS compared to RISE, +203\% in NSS compared to GradCAM). This comparison enhances our confidence in FovEx's ability to close the interpretation gap between humans and machines.",1 "Visual Question Answering (VQA) is increasingly used in diverse applications ranging from general visual reasoning to safety-critical domains such as medical imaging and autonomous systems, where models must provide not only accurate answers but also explanations that humans can easily understand and verify. 
Prototype-based modeling has shown promise for interpretability by grounding predictions in semantically meaningful regions for purely visual reasoning tasks, yet remains underexplored in the context of VQA. We present ProtoVQA, a unified prototypical framework that (i) learns question-aware prototypes that serve as reasoning anchors, connecting answers to discriminative image regions, (ii) applies spatially constrained matching to ensure that the selected evidence is coherent and semantically relevant, and (iii) supports both answering and grounding tasks through a shared prototype backbone. To assess explanation quality, we propose the Visual-Linguistic Alignment Score (VLAS), which measures how well the model's attended regions align with ground-truth evidence. Experiments on Visual7W show that ProtoVQA yields faithful, fine-grained explanations while maintaining competitive accuracy, advancing the development of transparent and trustworthy VQA systems.",0 "Diabetic Retinopathy (DR) is a major cause of global blindness, necessitating early and accurate diagnosis. While deep learning models have shown promise in DR detection, their black-box nature often hinders clinical adoption due to a lack of transparency and interpretability. To address this, we propose XDR-LVLM (eXplainable Diabetic Retinopathy Diagnosis with LVLM), a novel framework that leverages Vision-Language Large Models (LVLMs) for high-precision DR diagnosis coupled with natural language-based explanations. XDR-LVLM integrates a specialized Medical Vision Encoder, an LVLM Core, and employs Multi-task Prompt Engineering and Multi-stage Fine-tuning to deeply understand pathological features within fundus images and generate comprehensive diagnostic reports. These reports explicitly include DR severity grading, identification of key pathological concepts (e.g., hemorrhages, exudates, microaneurysms), and detailed explanations linking observed features to the diagnosis. Extensive experiments on the Diabetic Retinopathy (DDR) dataset demonstrate that XDR-LVLM achieves state-of-the-art performance, with a Balanced Accuracy of 84.55% and an F1 Score of 79.92% for disease diagnosis, and superior results for concept detection (77.95% BACC, 66.88% F1). Furthermore, human evaluations confirm the high fluency, accuracy, and clinical utility of the generated explanations, showcasing XDR-LVLM's ability to bridge the gap between automated diagnosis and clinical needs by providing robust and interpretable insights.",0 "Large language models (LLMs) have created new opportunities to assist teachers and support student learning. While researchers have explored various prompt engineering approaches in educational contexts, the degree to which these approaches generalize across domains--such as science, computing, and engineering--remains underexplored. In this paper, we introduce Chain-of-Thought Prompting + Active Learning (CoTAL), an LLM-based approach to formative assessment scoring that (1) leverages Evidence-Centered Design (ECD) to align assessments and rubrics with curriculum goals, (2) applies human-in-the-loop prompt engineering to automate response scoring, and (3) incorporates chain-of-thought (CoT) prompting and teacher and student feedback to iteratively refine questions, rubrics, and LLM prompts. 
Our findings demonstrate that CoTAL improves GPT-4's scoring performance across domains, achieving gains of up to 38.9% over a non-prompt-engineered baseline (i.e., without labeled examples, chain-of-thought prompting, or iterative refinement). Teachers and students judge CoTAL to be effective at scoring and explaining responses, and their feedback produces valuable insights that enhance grading accuracy and explanation quality.",0 "Concept-based interpretability for Convolutional Neural Networks (CNNs) aims to align internal model representations with high-level semantic concepts, but existing approaches largely overlook the semantic roles of individual filters and the dynamic propagation of concepts across layers. To address these limitations, we propose ConceptFlow, a concept-based interpretability framework that simulates the internal ""thinking path"" of a model by tracing how concepts emerge and evolve across layers. ConceptFlow comprises two key components: (i) concept attentions, which associate each filter with relevant high-level concepts to enable localized semantic interpretation, and (ii) conceptual pathways, derived from a concept transition matrix that quantifies how concepts propagate and transform between filters. Together, these components offer a unified and structured view of internal model reasoning. Experimental results demonstrate that ConceptFlow yields semantically meaningful insights into model reasoning, validating the effectiveness of concept attentions and conceptual pathways in explaining decision behavior. By modeling hierarchical conceptual pathways, ConceptFlow provides deeper insight into the internal logic of CNNs and supports the generation of more faithful and human-aligned explanations.",1 "My findings show, for the first time, what causes loss of awareness, anesthesia, memory replay, opioid-induced respiratory depression (OIRD), and slow wave sleep. Opiates are fast pain relievers and anesthetics that can cause respiratory arrest. I found how mu-opioids and other medial habenula activators slow down respiration during SWS and anesthesia. Using the DTI method, I observed that the human hippocampus is connected to the MHb via the posterior septum, while the amygdala connects via the anteromedial BNST. The MHb projected to the pineal gland and the contralateral MHb (Vadovi\v{c}ov\'a, 2014). The MHb has dense mu-opioid receptors (Gardon and Faget, 2014) and strong projections to the IPN. Herkenham (1981) found increased glucose intake during anesthesia in the MHb and IPN. The IPN projects to the serotonergic MRN/DRN and to the pain/interoception/arousal-linked PAG. The question is: What is the MHb-IPN circuit doing? This extended circuit model explains the role of the dentate gyrus >posterior septum >MHb >IPN >MRN >hippocampus + BF + claustrum >cortical slow-wave activity (SWA) in memory replay, loss of awareness, anesthesia and SWS. It proposes new neural mechanisms for anesthetic ketamine, nitrous oxide, and phencyclidine effects: activation of the IPN >MRN >claustrum >cortical SWA circuit by the 5-HT2a receptors in the IPN and claustrum. This brain model shows why ketamine and psychedelics are anxiolytic and antidepressant: by activating the 5-HT2a receptors in the vACC/infralimbic cortex, they increase safety and well-being signals, socializing, and cognitive flexibility, and attenuate fear, worries, anger, impulsivity, self-defence and wanting.
This model suggests that mu-opioids, acetylcholine, nicotine, cannabinoids, adenosine, GLP-1RA, neuropeptide Y, and substance P activate the MHb-IPN-MRN circuit, which promotes rest, recovery, repair, the production of serotonin, BDNF, and proteins, the growth of spines/synapses, and an anti-inflammatory state.",2 "Human learning embodies a striking duality: sometimes, we appear capable of following logical, compositional rules and benefit from structured curricula (e.g., in formal education), while other times, we rely on an incremental approach or trial-and-error, learning better from curricula that are randomly interleaved. Influential psychological theories explain this seemingly disparate behavioral evidence by positing two qualitatively different learning systems -- one for rapid, rule-based inferences and another for slow, incremental adaptation. It remains unclear how to reconcile such theories with neural networks, which learn via incremental weight updates and are thus a natural model for the latter type of learning, but are not obviously compatible with the former. However, recent evidence suggests that metalearning neural networks and large language models are capable of ""in-context learning"" (ICL) -- the ability to flexibly grasp the structure of a new task from a few examples. Here, we show that the dynamic interplay between ICL and default in-weight learning (IWL) naturally captures a broad range of learning phenomena observed in humans, reproducing curriculum effects on category-learning and compositional tasks, and recapitulating a tradeoff between flexibility and retention. Our work shows how emergent ICL can equip neural networks with fundamentally different learning properties that can coexist with their native IWL, thus offering a novel perspective on dual-process theories and human cognitive flexibility.",2 "Policies generated by Reinforcement Learning (RL) algorithms are difficult to explain to users, as they emerge from the interaction of complex reward structures and neural network representations. Consequently, analyzing and predicting agent behavior can be challenging, undermining user trust in real-world applications. To facilitate user understanding, current methods for global policy summarization typically rely on videos that demonstrate agent behavior in a subset of world states. However, users can only watch a limited number of demonstrations, constraining their understanding. Moreover, these methods place the burden of interpretation on users by presenting raw behaviors rather than synthesizing them into coherent patterns. To resolve these issues, we introduce SySLLM (Synthesized Summary using Large Language Models), advocating for a new paradigm of abstractive-textual policy explanations. By leveraging Large Language Models (LLMs), which possess extensive world knowledge and pattern synthesis capabilities, SySLLM generates textual summaries that provide structured and comprehensible explanations of agent policies. SySLLM demonstrates that LLMs can interpret spatio-temporally structured descriptions of state-action trajectories from an RL agent and generate valuable policy insights in a zero-shot setting, without any prior knowledge or fine-tuning. Our evaluation shows that SySLLM captures key insights, such as goal preferences and exploration strategies, that were also identified by human experts.
Furthermore, in a large-scale user study (with 200 participants), SySLLM summaries were preferred over demonstration-based summaries (HIGHLIGHTS) by a clear majority (75.5%) of participants.",0 "Retinal disease diagnosis is critical in preventing vision loss and reducing socioeconomic burdens. Globally, over 2.2 billion people are affected by some form of vision impairment, resulting in annual productivity losses estimated at $411 billion. Traditional manual grading of retinal fundus images by ophthalmologists is time-consuming and subjective. In contrast, deep learning has revolutionized medical diagnostics by automating retinal image analysis and achieving expert-level performance. In this study, we present EYE-DEX, an automated framework for classifying 10 retinal conditions using the large-scale Retinal Disease Dataset comprising 21,577 eye fundus images. We benchmark three pre-trained Convolutional Neural Network (CNN) models--VGG16, VGG19, and ResNet50--with our finetuned VGG16 achieving a state-of-the-art global benchmark test accuracy of 92.36%. To enhance transparency and explainability, we integrate the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to generate visual explanations highlighting disease-specific regions, thereby fostering clinician trust and reliability in AI-assisted diagnostics.",0 "Ensuring safety and in-domain responses for Retrieval-Augmented Generation (RAG) systems is paramount in safety-critical applications, yet remains a significant challenge. To address this, we evaluate four methodologies for Out-Of-Domain (OOD) query detection: GPT-4o, regression-based, Principal Component Analysis (PCA)-based, and Neural Collapse (NC), to ensure the RAG system only responds to queries confined to the system's knowledge base. Specifically, our evaluation explores two novel dimensionality reduction and feature separation strategies: \textit{PCA}, where top components are selected using explained variance or OOD separability, and an adaptation of \textit{Neural Collapse Feature Separation}. We validate our approach on standard datasets (StackExchange and MSMARCO) and real-world applications (Substance Use and COVID-19), including tests against LLM-simulated and actual attacks on a COVID-19 vaccine chatbot. Through human and LLM-based evaluations of response correctness and relevance, we confirm that an external OOD detector is crucial for maintaining response relevance.",0 "Large Language Models (LLMs) exhibit a notable performance ceiling on complex, multi-faceted tasks, as they often fail to integrate diverse information or adhere to multiple constraints. We posit that such limitation arises when the demands of a task exceed the LLM's effective cognitive load capacity. This interpretation draws a strong analogy to Cognitive Load Theory (CLT) in cognitive science, which explains similar performance boundaries in the human mind, and is further supported by emerging evidence that reveals LLMs have bounded working memory characteristics. Building upon this CLT-grounded understanding, we introduce CoThinker, a novel LLM-based multi-agent framework designed to mitigate cognitive overload and enhance collaborative problem-solving abilities. CoThinker operationalizes CLT principles by distributing intrinsic cognitive load through agent specialization and managing transactional load via structured communication and a collective working memory. 
We empirically validate CoThinker on complex problem-solving tasks and fabricated high cognitive load scenarios, demonstrating improvements over existing multi-agent baselines in solution quality and efficiency. Our analysis reveals characteristic interaction patterns, providing insights into the emergence of collective cognition and effective load management, thus offering a principled approach to overcoming LLM performance ceilings.",0 "Assertion messages significantly enhance unit tests by clearly explaining the reasons behind test failures, yet they are frequently omitted by developers and automated test-generation tools. Despite recent advancements, Large Language Models (LLMs) have not been systematically evaluated for their ability to generate informative assertion messages. In this paper, we introduce an evaluation of four state-of-the-art Fill-in-the-Middle (FIM) LLMs - Qwen2.5-Coder-32B, Codestral-22B, CodeLlama-13B, and StarCoder - on a dataset of 216 Java test methods containing developer-written assertion messages. We find that Codestral-22B achieves the highest quality score of 2.76 out of 5 using a human-like evaluation approach, compared to 3.24 for manually written messages. Our ablation study shows that including descriptive test comments further improves Codestral's performance to 2.97, highlighting the critical role of context in generating clear assertion messages. Structural analysis demonstrates that all models frequently replicate developers' preferred linguistic patterns. We discuss the limitations of the selected models and conventional text evaluation metrics in capturing diverse assertion message structures. Our benchmark, evaluation results, and discussions provide an essential foundation for advancing automated, context-aware generation of assertion messages in test code. A replication package is available at https://doi.org/10.5281/zenodo.15293133",1 "Cooperation on social networks is crucial for understanding human survival and development. Although network structure has been found to significantly influence cooperation, human experiments have observed different cooperation phenomena under similar conditions. While evidence suggests that these differences arise from human exploration, our understanding of its impact mechanisms and characteristics remains limited. Here, we seek to formalize human exploration as an individual learning process involving trial and reflection, and integrate social learning to examine how their interdependence shapes cooperation. We find that individual learning can alter neighbor imitation tendencies, and the resulting shifts in the local cooperative environment feed back into the experiential cognition that guides individual learning. This coupled dynamic makes the ability of social networks to promote cooperation largely dependent on whether individuals focus on long-term payoffs, and exhibits a series of characteristics that can explain previously unexplained and seemingly contradictory cooperation phenomena. Surprisingly, individual learning can promote cooperation more than social learning when its probability is negatively correlated with payoffs, a mechanism rooted in the psychological tendency to avoid trial-and-error when individuals are satisfied with their current payoffs. 
These results explain the contradictory cooperation phenomenon by accounting for decision preferences and cognitive processes underlying exploration, bridging the gap between theoretical research and reality.",1 "Multimodal misinformation, encompassing textual, visual, and cross-modal distortions, poses an increasing societal threat that is amplified by generative AI. Existing methods typically focus on a single type of distortion and struggle to generalize to unseen scenarios. In this work, we observe that different distortion types share common reasoning capabilities while also requiring task-specific skills. We hypothesize that joint training across distortion types facilitates knowledge sharing and enhances the model's ability to generalize. To this end, we introduce TRUST-VL, a unified and explainable vision-language model for general multimodal misinformation detection. TRUST-VL incorporates a novel Question-Aware Visual Amplifier module, designed to extract task-specific visual features. To support training, we also construct TRUST-Instruct, a large-scale instruction dataset containing 198K samples featuring structured reasoning chains aligned with human fact-checking workflows. Extensive experiments on both in-domain and zero-shot benchmarks demonstrate that TRUST-VL achieves state-of-the-art performance, while also offering strong generalization and interpretability.",2 "Attention Deficit Hyperactivity Disorder (ADHD) is a common brain disorder in children that can persist into adulthood, affecting social, academic, and career life. Early diagnosis is crucial for managing these impacts on patients and the healthcare system but is often labor-intensive and time-consuming. This paper presents a novel method to improve ADHD diagnosis precision and timeliness by leveraging Deep Learning (DL) approaches and electroencephalogram (EEG) signals. We introduce ADHDeepNet, a DL model that utilizes comprehensive temporal-spatial characterization, attention modules, and explainability techniques optimized for EEG signals. ADHDeepNet integrates feature extraction and refinement processes to enhance ADHD diagnosis. The model was trained and validated on a dataset of 121 participants (61 ADHD, 60 Healthy Controls), employing nested cross-validation for robust performance. The proposed two-stage methodology uses a 10-fold cross-subject validation strategy. Initially, each iteration optimizes the model's hyper-parameters with inner 2-fold cross-validation. Then, Additive Gaussian Noise (AGN) with various standard deviations and magnification levels is applied for data augmentation. ADHDeepNet achieved 100% sensitivity and 99.17% accuracy in classifying ADHD/HC subjects. To clarify model explainability and identify key brain regions and frequency bands for ADHD diagnosis, we analyzed the learned weights and activation patterns of the model's primary layers. Additionally, t-distributed Stochastic Neighbor Embedding (t-SNE) visualized high-dimensional data, aiding in interpreting the model's decisions. This study highlights the potential of DL and EEG in enhancing ADHD diagnosis accuracy and efficiency.",1 "Humans intuitively perceive complex social signals in visual scenes, yet it remains unclear whether state-of-the-art AI models encode the same similarity structure. We study (Q1) whether modern video and language models capture human-perceived similarity in social videos, and (Q2) how to instill this structure into models using human behavioral data. 
To address this, we introduce a new benchmark of over 49,000 odd-one-out similarity judgments on 250 three-second video clips of social interactions, and discover a modality gap: despite the task being visual, caption-based language embeddings align better with human similarity than any pretrained video model. We close this gap by fine-tuning a TimeSformer video model on these human judgments with our novel hybrid triplet-RSA objective using low-rank adaptation (LoRA), aligning pairwise distances to human similarity. This fine-tuning protocol yields significantly improved alignment with human perceptions on held-out videos in terms of both explained variance and odd-one-out triplet accuracy. Variance partitioning shows that the fine-tuned video model increases shared variance with language embeddings and explains additional unique variance not captured by the language model. Finally, we test transfer via linear probes and find that human-similarity fine-tuning strengthens the encoding of social-affective attributes (intimacy, valence, dominance, communication) relative to the pretrained baseline. Overall, our findings highlight a gap in pretrained video models' social recognition and demonstrate that behavior-guided fine-tuning shapes video representations toward human social perception.",0 "Why do Vision Language Models (VLMs), despite success on standard benchmarks, often fail to match human performance on surprisingly simple visual reasoning tasks? While the underlying computational principles are still debated, we hypothesize that a crucial factor is a deficit in visually-grounded serial processing. To test this hypothesis, we compared human and VLM performance across tasks designed to vary serial processing demands in three distinct domains: geometric reasoning, perceptual enumeration, and mental rotation. Tasks within each domain varied serial processing load by manipulating factors such as geometric concept complexity, perceptual individuation load, and transformation difficulty. Across all domains, our results revealed a consistent pattern: decreased VLM accuracy was strongly correlated with increased human reaction time (used as a proxy for serial processing load). As tasks require more demanding serial processing -- whether composing concepts, enumerating items, or performing mental transformations -- the VLM-human performance gap widens reliably. These findings support our hypothesis, indicating that limitations in serial, visually grounded reasoning represent a fundamental bottleneck that distinguishes current VLMs from humans.",0 "In this paper, we study a simple linear model of the cochlea as a set of vibrating strings. We hypothesize that the information sent to the auditory cortex is the energy stored in the strings and consider all oscillation modes of the strings. We show the emergence of the sub-harmonic series whose existence was hypothesized in the XVI century to explain the consonance of the minor chord. We additionally show how the nonlinearity of the energy can be used to study the emergence of the combination tone (Tartini's third sound), shedding new light on this long-debated subject.",0 "The first voice timbre attribute detection challenge is featured in a special session at NCMMSC 2025. It focuses on the explainability of voice timbre and compares the intensity of two speech utterances in a specified timbre descriptor dimension. The evaluation was conducted on the VCTK-RVA dataset.
Participants developed their systems and submitted their outputs to the organizer, who evaluated the performance and sent feedback to them. Six teams submitted their outputs, with five providing descriptions of their methodologies.",0 "Large language models (LLMs) often generate natural language rationales -- free-form explanations that help improve performance on complex reasoning tasks and enhance interpretability for human users. However, evaluating these rationales remains challenging. While recent work has relied on binary preference judgments from humans or LLM judges, such evaluations are often opaque and coarse-grained, offering limited insight into what makes one rationale better than another. In this work, we rethink preference evaluation for LLM-generated rationales by asking: (1) What attributes define good rationales? (2) Can human preferences be explained by these attributes? (3) Can attribute-based evaluation overcome the limitations of binary comparisons? We identify a set of key rationale attributes from prior literature and assess them using automatic metrics, LLM judgments, and human annotations. We then analyze two standard human preference datasets MT Bench and Chatbot Arena using SHAP to identify which attributes best explain human preference outcomes. Finally, we re-evaluate model-generated rationales using attribute-specific ELO scores, revealing more nuanced model comparisons and insights. Our findings suggest that fine-grained attribute evaluations can better characterize rationale quality and guide future research toward more interpretable and reliable evaluation practices.",1 "Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based only on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior.",1 "We introduce fCrit, a dialogue-based AI system designed to critique furniture design with a focus on explainability. Grounded in reflective learning and formal analysis, fCrit employs a multi-agent architecture informed by a structured design knowledge base. We argue that explainability in the arts should not only make AI reasoning transparent but also adapt to the ways users think and talk about their designs. We demonstrate how fCrit supports this process by tailoring explanations to users' design language and cognitive framing. 
This work contributes to Human-Centered Explainable AI (HCXAI) in creative practice, advancing domain-specific methods for situated, dialogic, and visually grounded AI support.",0 "State-sponsored trolls are malicious actors who deploy sophisticated linguistic manipulation in coordinated information campaigns, posing threats to online discourse integrity. While Large Language Models (LLMs) achieve strong performance on general natural language processing (NLP) tasks, they struggle with subtle propaganda detection and operate as ``black boxes'', providing no interpretable insights into manipulation strategies. This paper introduces X-Troll, a novel framework that bridges this gap by integrating explainable adapter-based LLMs with expert-derived linguistic knowledge to detect state-sponsored trolls and provide human-readable explanations for its decisions. X-Troll incorporates appraisal theory and propaganda analysis through specialized LoRA adapters, using dynamic gating to capture campaign-specific discourse patterns in coordinated information operations. Experiments on real-world data demonstrate that our linguistically-informed approach achieves strong accuracy compared with both general LLM baselines and existing troll detection models, while providing enhanced transparency through expert-grounded explanations that reveal the specific linguistic strategies used by state-sponsored actors. X-Troll source code is available at: https://github.com/ltian678/xtroll_source/.",2 "Graph Neural Networks (GNNs) are widely used for node classification, yet their opaque decision-making limits trust and adoption. While local explanations offer insights into individual predictions, global explanation methods, those that characterize an entire class, remain underdeveloped. Existing global explainers rely on motif discovery in small graphs, an approach that breaks down in large, real-world settings where subgraph repetition is rare, node attributes are high-dimensional, and predictions arise from complex structure-attribute interactions. We propose GnnXemplar, a novel global explainer inspired by Exemplar Theory from cognitive science. GnnXemplar identifies representative nodes in the GNN embedding space (exemplars) and explains predictions using natural language rules derived from their neighborhoods. Exemplar selection is framed as a coverage maximization problem over reverse k-nearest neighbors, for which we provide an efficient greedy approximation. To derive interpretable rules, we employ a self-refining prompt strategy using large language models (LLMs). Experiments across diverse benchmarks show that GnnXemplar significantly outperforms existing methods in fidelity, scalability, and human interpretability, as validated by a user study with 60 participants.",0 "Brain-Computer Interfaces (BCIs) suffer from high inter-subject variability and limited labeled data, often requiring lengthy calibration phases. In this work, we present an end-to-end approach that explicitly models the subject dependency using lightweight convolutional neural networks (CNNs) conditioned on the subject's identity. Our method integrates hyperparameter optimization strategies that prioritize class imbalance and evaluates two conditioning mechanisms to adapt pre-trained models to unseen subjects with minimal calibration data.
We benchmark three lightweight architectures on a time-modulated Event-Related Potentials (ERP) classification task, providing interpretable evaluation metrics and explainable visualizations of the learned representations. Results demonstrate improved generalization and data-efficient calibration, highlighting the scalability and practicality of subject-adaptive BCIs.",0 "This paper introduces Multi-Output LOcal Narrative Explanation (MOLONE), a novel comparative explanation method designed to enhance preference selection in human-in-the-loop Preference Bayesian optimization (PBO). The preference elicitation in PBO is a non-trivial task because it involves navigating implicit trade-offs between vector-valued outcomes, subjective priorities of decision-makers, and decision-makers' uncertainty in preference selection. Existing explainable AI (XAI) methods for BO primarily focus on input feature importance, neglecting the crucial role of outputs (objectives) in human preference elicitation. MOLONE addresses this gap by providing explanations that highlight both input and output importance, enabling decision-makers to understand the trade-offs between competing objectives and make more informed preference selections. MOLONE focuses on local explanations, comparing the importance of input features and outcomes across candidate samples within a local neighborhood of the search space, thus capturing nuanced differences relevant to preference-based decision-making. We evaluate MOLONE within a PBO framework using benchmark multi-objective optimization functions, demonstrating its effectiveness in improving convergence compared to noisy preference selections. Furthermore, a user study confirms that MOLONE significantly accelerates convergence in human-in-the-loop scenarios by facilitating more efficient identification of preferred options.",0 "Emotion analysis is an inherently ambiguous task. Previous work studied annotator properties to explain disagreement, but this overlooks the possibility that ambiguity may stem from missing information about the context of events. In this paper, we propose a novel approach that adds reasonable contexts to event descriptions, which may better explain a particular situation. Our goal is to understand whether these enriched contexts enable human annotators to annotate emotions more reliably. We disambiguate a target event description by automatically generating multiple event chains conditioned on differing emotions. By combining techniques from short story generation in various settings, we achieve coherent narratives that result in a specialized dataset for the first comprehensive and systematic examination of contextualized emotion analysis. Through automatic and human evaluation, we find that contextual narratives enhance the interpretation of specific emotions and support annotators in producing more consistent annotations.",0 "AI based social media recommendations have great potential to improve the user experience. However, often these recommendations do not match the user interest and create an unpleasant experience for the users. Moreover, the recommendation system being a black box creates comprehensibility and transparency issues. This paper investigates social media recommendations from an end user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users to conduct a qualitative analysis. 
We asked participants about the social media content suggestions, their comprehensibility, and their explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content and when they want to ensure their online data security. Furthermore, users require concise, non-technical explanations along with the ability to control the information flow. In addition, we observed that explanations impact users' perception of transparency, trust, and understandability. Finally, we have outlined some design implications and presented a synthesized framework based on our data analysis.",2 "As generative foundation models improve, they also tend to become more persuasive, raising concerns that AI automation will enable governments, firms, and other actors to manipulate beliefs with unprecedented scale and effectiveness at virtually no cost. The full economic and social ramifications of this trend have been difficult to foresee, however, given that we currently lack a complete theoretical understanding of why persuasion is costly for human labor to produce in the first place. This paper places human and AI agents on a common conceptual footing by formalizing informational persuasion as a mathematical decision problem and characterizing its computational complexity. A novel proof establishes that persuasive messages are challenging to discover (NP-Hard) but easy to adopt if supplied by others (NP). This asymmetry helps explain why people are susceptible to persuasion, even in contexts where all relevant information is publicly available. The result also illuminates why litigation, strategic communication, and other persuasion-oriented activities have historically been so human capital intensive, and it provides a new theoretical basis for studying how AI will impact various industries.",0 "Plantar pressure mapping is essential in clinical diagnostics and sports science, yet large heterogeneous datasets often contain outliers from technical errors or procedural inconsistencies. Statistical Parametric Mapping (SPM) provides interpretable analyses but is sensitive to alignment, and its capacity for robust outlier detection remains unclear. This study compares an SPM approach with an explainable machine learning (ML) approach to establish transparent quality-control pipelines for plantar pressure datasets. Data from multiple centers were annotated by expert consensus and enriched with synthetic anomalies, resulting in 798 valid samples and 2000 outliers. We evaluated (i) a non-parametric, registration-dependent SPM approach and (ii) a convolutional neural network (CNN), explained using SHapley Additive exPlanations (SHAP). Performance was assessed via nested cross-validation; explanation quality via a semantic differential survey with domain experts. The ML model reached high accuracy and outperformed SPM, which misclassified clinically meaningful variations and missed true outliers. Experts perceived both SPM and SHAP explanations as clear, useful, and trustworthy, though SPM was assessed as less complex.
These findings highlight the complementary potential of SPM and explainable ML as approaches for automated outlier detection in plantar pressure data, and underscore the importance of explainability in translating complex model outputs into interpretable insights that can effectively inform decision-making.",0 "Explaining machine learning (ML) models for time series (TS) classification remains challenging due to the difficulty of interpreting raw time series and the high dimensionality of the input space. We introduce PHAR (Post-hoc Attribution Rules), a unified framework that transforms numeric feature attributions from post-hoc, instance-wise explainers (e.g., LIME, SHAP) into structured, human-readable rules. These rules define interpretable intervals that indicate where and when key decision boundaries occur, enhancing model transparency. PHAR performs comparably to native rule-based methods, such as Anchor, while scaling more efficiently to long TS sequences and achieving broader instance coverage. A dedicated rule fusion step consolidates rule sets using strategies like weighted selection and lasso-based refinement, balancing key quality metrics: coverage, confidence, and simplicity. This fusion ensures each instance receives a concise and unambiguous rule, improving both explanation fidelity and consistency. We further introduce visualization techniques to illustrate specificity-generalization trade-offs in the derived rules. PHAR resolves conflicting and overlapping explanations, a common effect of the Rashomon phenomenon, into coherent, domain-adaptable insights. Comprehensive experiments on the UCR/UEA Time Series Classification Archive demonstrate that PHAR improves interpretability, decision transparency, and practical applicability for TS classification tasks.",0 "It is known that big data analytics and AI pose a threat to privacy, and that some of this is due to some kind of ""black box problem"" in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does (""shallow opacity""), 2) the analysts do not know what the system does (""standard black box opacity""), or 3) the analysts cannot possibly know what the system might do (""deep opacity""). If the agents, data subjects as well as analytics experts, operate under opacity, then these agents cannot provide justifications for judgments that are necessary to protect privacy, e.g., they cannot give ""informed consent"", or guarantee ""anonymity"". It follows from these points that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. So I conclude that big data analytics makes the privacy problems worse and the remedies less effective. As a positive note, I provide a brief outlook on technical ways to handle this situation.",2 "A growing body of empirical work suggests that the widespread adoption of generative AI produces a significant homogenizing effect on information, creativity, and cultural production. I first develop a novel theoretical framework to explain this phenomenon. I argue that a dynamic of AI-derivative epistemology, in which individuals increasingly defer to AI outputs, allows a centralized AI Prism to function, a technical mechanism whose architecture is designed to reduce variance and converge on the statistical mean. This provides a causal explanation for the generative monocultures observed in recent studies.
However, I contend that this represents only the first stage of a more complex and dialectical process. This paper's central and paradoxical thesis is that the very homogenization that flattens knowledge within specialized domains simultaneously renders that knowledge into consistent modules that can be recombined across them, a process foundational to innovation and creativity. However, this recombinant potential is not automatic, but rather conditional. This paper argues that these opposing forces, homogenizing defaults versus recombinant possibilities, are governed by the nature of human engagement with the technology. The ultimate effect of generative AI is conditional on whether individuals act as passive consumers deferring to the AI's statistical outputs, or as active curators who critically interrogate, re-contextualize, and recombine them. The paper concludes by outlining the cognitive and institutional scaffolds required to resolve this tension, arguing they are the decisive variable that determines whether generative AI becomes an instrument of innovation or homogenization.",2 "Kinetics of a balanced network of neurons with a sparse grid of synaptic links is well representable by the stochastic dynamics of a generic neuron subject to an effective shot noise. The rate of delta-pulses of the noise is determined self-consistently from the probability density of the neuron states. Importantly, the most sophisticated (but robust) collective regimes of the network do not allow for the diffusion approximation, which is routinely adopted for a shot noise in mathematical neuroscience. These regimes can be expected to be biologically relevant. For the kinetics equations of the complete mean field theory of a homogeneous inhibitory network of quadratic integrate-and-fire neurons, we introduce circular cumulants of the genuine phase variable and derive a rigorous two-cumulant reduction for both time-independent conditions and modulation of the excitatory current. The low dimensional model is examined with numerical simulations and found to be accurate for time-independent states and dynamic response to a periodic modulation deep into the parameter domain where the diffusion approximation is not applicable. The accuracy of a low dimensional model indicates and explains a low embedding dimensionality of the macroscopic collective dynamics of the network. The reduced model can be instrumental for theoretical studies of neural networks with inhibitory-excitatory balance.",0 "Public funding processes demand fairness, learning, and outcomes that participants can understand. We introduce Komitee Equal Shares, a priceable virtual-budget allocation framework that integrates two signals: in voter mode, participants cast point votes; in evaluator mode, small groups assess proposals against collectively defined impact fields. The framework extends the Method of Equal Shares by translating both signals into virtual spending power and producing voting receipts. We deployed the framework in the 2025 Kultur Komitee in Winterthur, Switzerland. Our contributions are: (1) a clear separation of decision modes, addressing a gap in social choice that typically treats participatory budgeting as preference aggregation while citizens also see themselves as evaluators; and (2) the design of voting receipts that operationalise priceability into participant-facing explanations, making proportional allocations legible and traceable.
The framework generalises to participatory grant-making and budgeting, offering a model where citizens act as voters and evaluators within one proportional, explainable allocation.",0 "Continuous Descent Operations (CDO) involve smooth, idle-thrust descents that avoid level-offs, reducing fuel burn, emissions, and noise while improving efficiency and passenger comfort. Despite its operational and environmental benefits, limited research has systematically examined the factors influencing CDO performance. Moreover, many existing methods in related areas, such as trajectory optimization, lack the transparency required in aviation, where explainability is critical for safety and stakeholder trust. This study addresses these gaps by proposing a Fuzzy-Enhanced Explainable AI (FEXAI) framework that integrates fuzzy logic with machine learning and SHapley Additive exPlanations (SHAP) analysis. For this purpose, a comprehensive dataset of 29 features, including 11 operational and 18 weather-related features, was collected from 1,094 flights using Automatic Dependent Surveillance-Broadcast (ADS-B) data. Machine learning models and SHAP were then applied to classify flights' CDO adherence levels and rank features by importance. The three most influential features, as identified by SHAP scores, were then used to construct a fuzzy rule-based classifier, enabling the extraction of interpretable fuzzy rules. All models achieved classification accuracies above 90%, with FEXAI providing meaningful, human-readable rules for operational users. Results indicated that the average descent rate within the arrival route, the number of descent segments, and the average change in directional heading during descent were the strongest predictors of CDO performance. The FEXAI method proposed in this study presents a novel pathway for operational decision support and could be integrated into aviation tools to enable real-time advisories that maintain CDO adherence under varying operational conditions.",0 "The superconducting version of a diode effect has been the subject of extensive research in the past few years. So far, the focus has almost exclusively been on charge transport, but a natural question is whether it is possible to obtain nonreciprocal spin transport without dissipation. Here, we demonstrate that it is possible to generate electrically tunable nonreciprocal spin transport carried by a supercurrent using superconductor/ferromagnet multilayers. The nonreciprocal spin supercurrent reaches an ideal efficiency of 100%, meaning that the spin-polarization of the critical current is finite in one flow direction whereas it vanishes in the other direction. We explain the underlying physics generating this phenomenon. This result provides a way to integrate nonreciprocal supercurrents with spin-polarization, offering new functionality in quantum technologies based on Josephson junctions.",0 "Wearable systems can recognize activities from IMU data but often fail to explain their underlying causes or contextual significance. To address this limitation, we introduce two large-scale resources: SensorCap, comprising 35,960 IMU--caption pairs, and OpenSQA, with 199,701 question--answer pairs designed for causal and explanatory reasoning. OpenSQA includes a curated tuning split (Tune-OpenSQA) optimized for scientific accuracy, narrative clarity, and diagnostic insight. 
Leveraging these datasets, we develop LLaSA (Large Language and Sensor Assistant), a family of compact sensor-aware language models (7B and 13B) that generate interpretable, context-rich responses to open-ended questions grounded in raw IMU data. LLaSA outperforms commercial LLMs, including GPT-3.5 and GPT-4o-mini, on benchmark and real-world tasks, demonstrating the effectiveness of domain supervision and model alignment for sensor reasoning. Our code repository and datasets can be found at https://github.com/BASHLab/LLaSA.",1 "As artificial intelligence rapidly transforms society, developers and policymakers struggle to anticipate which applications will face public moral resistance. We propose that these judgments are not idiosyncratic but systematic and predictable. In a large, preregistered study (N = 587, U.S. representative sample), we used a comprehensive taxonomy of 100 AI applications spanning personal and organizational contexts-including both functional uses and the moral treatment of AI itself. In participants' collective judgment, applications ranged from highly unacceptable to fully acceptable. We found this variation was strongly predictable: five core moral qualities-perceived risk, benefit, dishonesty, unnaturalness, and reduced accountability-collectively explained over 90% of the variance in acceptability ratings. The framework demonstrated strong predictive power across all domains and successfully predicted individual-level judgments for held-out applications. These findings reveal that a structured moral psychology underlies public evaluation of new technologies, offering a powerful tool for anticipating public resistance and guiding responsible innovation in AI.",2 "Multi-annotator learning traditionally aggregates diverse annotations to approximate a single ground truth, treating disagreements as noise. However, this paradigm faces fundamental challenges: subjective tasks often lack absolute ground truth, and sparse annotation coverage makes aggregation statistically unreliable. We introduce a paradigm shift from sample-wise aggregation to annotator-wise behavior modeling. By treating annotator disagreements as valuable information rather than noise, modeling annotator-specific behavior patterns can reconstruct unlabeled data to reduce annotation cost, enhance aggregation reliability, and explain annotator decision behavior. To this end, we propose QuMAB (Query-based Multi-Annotator Behavior Pattern Learning), which uses light-weight queries to model individual annotators while capturing inter-annotator correlations as implicit regularization, preventing overfitting to sparse individual data while maintaining individualization and improving generalization, with a visualization of annotator focus regions offering an explainable analysis of behavior understanding. We contribute two large-scale datasets with dense per-annotator labels: STREET (4,300 labels/annotator) and AMER (average 3,118 labels/annotator), the first multimodal multi-annotator dataset. Extensive experiments demonstrate the superiority of our QuMAB in modeling individual annotators' behavior patterns, their utility for consensus prediction, and applicability under sparse annotations.",0 "Counterfactual reasoning -- the practice of asking ``what if'' by varying inputs and observing changes in model behavior -- has become central to interpretable and fair AI. This thesis develops frameworks that use counterfactuals to explain, audit, and mitigate bias in vision classifiers and generative models. 
By systematically altering semantically meaningful attributes while holding others fixed, these methods uncover spurious correlations, probe causal dependencies, and help build more robust systems. The first part addresses vision classifiers. CAVLI integrates attribution (LIME) with concept-level analysis (TCAV) to quantify how strongly decisions rely on human-interpretable concepts. With localized heatmaps and a Concept Dependency Score, CAVLI shows when models depend on irrelevant cues like backgrounds. Extending this, ASAC introduces adversarial counterfactuals that perturb protected attributes while preserving semantics. Through curriculum learning, ASAC fine-tunes biased models for improved fairness and accuracy while avoiding stereotype-laden artifacts. The second part targets generative Text-to-Image (TTI) models. TIBET provides a scalable pipeline for evaluating prompt-sensitive biases by varying identity-related terms, enabling causal auditing of how race, gender, and age affect image generation. To capture interactions, BiasConnect builds causal graphs diagnosing intersectional biases. Finally, InterMit offers a modular, training-free algorithm that mitigates intersectional bias via causal sensitivity scores and user-defined fairness goals. Together, these contributions show counterfactuals as a unifying lens for interpretability, fairness, and causality in both discriminative and generative models, establishing principled, scalable methods for socially responsible bias evaluation and mitigation.",2 "AI-based recommender systems increasingly influence recruitment decisions. Thus, transparency and responsible adoption in Human Resource Management (HRM) are critical. This study examines how HR managers' AI literacy influences their subjective perception and objective understanding of explainable AI (XAI) elements in recruiting recommender dashboards. In an online experiment, 410 German-based HR managers compared baseline dashboards to versions enriched with three XAI styles: important features, counterfactuals, and model criteria. Our results show that the dashboards used in practice do not explain AI results and even keep AI elements opaque. However, while adding XAI features improves subjective perceptions of helpfulness and trust among users with moderate or high AI literacy, it does not increase their objective understanding. It may even reduce accurate understanding, especially with complex explanations. Only overlays of important features significantly aided the interpretations of high-literacy users. Our findings highlight that the benefits of XAI in recruitment depend on users' AI literacy, emphasizing the need for tailored explanation strategies and targeted literacy training in HRM to ensure fair, transparent, and effective adoption of AI.",0 "Emotions, which influence how convincing an argument is, are developed in context of the self and sender, and therefore require modeling the cognitive evaluation process. While binary emotionality has been studied in argument mining, and the cognitive appraisal has been modeled in general emotion analysis, these fields have not been brought together yet. We therefore propose the Contextualized Argument Appraisal Framework that contextualizes the interplay between the sender, receiver, and argument. It includes emotion labels, appraisals, such as argument familiarity, response urgency, and expected effort, as well as convincingness variables. 
To evaluate the framework and pave the way to computational modeling, we perform a study in a role-playing scenario, mimicking real-world exposure to arguments, asking participants to disclose their emotion, explain the main cause, the argument appraisal, and the perceived convincingness. To consider the subjective nature of such annotations, we also collect demographic data and personality traits of both the participants and the perceived sender of the argument. The analysis of the resulting corpus of 800 arguments, each annotated by 5 participants, reveals that convincingness is positively correlated with positive emotions (e.g., trust) and negatively correlated with negative emotions (e.g., anger). The appraisal variables disclose the importance of the argument familiarity. For most participants, the content of the argument itself is the primary driver of the emotional response.",1 "In Massively Multiplayer Online Role-Playing Games (MMORPGs), auto-leveling bots exploit automated programs to level up characters at scale, undermining gameplay balance and fairness. Detecting such bots is challenging, not only because they mimic human behavior, but also because punitive actions require explainable justification to avoid legal and user experience issues. In this paper, we present a novel framework for detecting auto-leveling bots by leveraging contrastive representation learning and clustering techniques in a fully unsupervised manner to identify groups of characters with similar level-up patterns. To ensure reliable decisions, we incorporate a Large Language Model (LLM) as an auxiliary reviewer to validate the clustered groups, effectively mimicking a secondary human judgment. We also introduce a growth curve-based visualization to assist both the LLM and human moderators in assessing leveling behavior. This collaborative approach improves the efficiency of bot detection workflows while maintaining explainability, thereby supporting scalable and accountable bot regulation in MMORPGs.",2 "Cross-View Geo-Localization (CVGL) focuses on identifying correspondences between images captured from distinct perspectives of the same geographical location. However, existing CVGL approaches are typically restricted to a single view or modality, and their direct visual matching strategy lacks interpretability: they only determine whether two images correspond, without explaining the rationale behind the match. In this paper, we present GLEAM-C, a foundational CVGL model that unifies multiple views and modalities-including UAV imagery, street maps, panoramic views, and ground photographs-by aligning them exclusively with satellite imagery. Our framework enhances training efficiency through optimized implementation while achieving accuracy comparable to prior modality-specific CVGL models through a two-phase training strategy. Moreover, to address the lack of interpretability in traditional CVGL methods, we leverage the reasoning capabilities of multimodal large language models (MLLMs) to propose a new task, GLEAM-X, which combines cross-view correspondence prediction with explainable reasoning. To support this task, we construct a bilingual benchmark using GPT-4o and Doubao-1.5-Thinking-Vision-Pro to generate training and testing data. The test set is further refined through detailed human revision, enabling systematic evaluation of explainable cross-view reasoning and advancing transparency and scalability in geo-localization. 
Together, GLEAM-C and GLEAM-X form a comprehensive CVGL pipeline that integrates multi-modal, multi-view alignment with interpretable correspondence analysis, unifying accurate cross-view matching with explainable reasoning and advancing Geo-Localization by enabling models to better Explain And Match. Code and datasets used in this work will be made publicly accessible at https://github.com/Lucky-Lance/GLEAM.",0 "Facial Beauty Prediction (FBP) has made significant strides with the application of deep learning, yet state-of-the-art models often exhibit critical limitations, including architectural constraints, inherent demographic biases, and a lack of transparency. Existing methods, primarily based on Convolutional Neural Networks (CNNs), excel at capturing local texture but struggle with global facial harmony, while Vision Transformers (ViTs) effectively model long-range dependencies but can miss fine-grained details. Furthermore, models trained on benchmark datasets can inadvertently learn and perpetuate societal biases related to protected attributes like ethnicity. To address these interconnected challenges, we propose \textbf{FairViT-GAN}, a novel hybrid framework that synergistically integrates a CNN branch for local feature extraction and a ViT branch for global context modeling. More significantly, we introduce an adversarial debiasing mechanism where the feature extractor is explicitly trained to produce representations that are invariant to protected attributes, thereby actively mitigating algorithmic bias. Our framework's transparency is enhanced by visualizing the distinct focus of each architectural branch. Extensive experiments on the SCUT-FBP5500 benchmark demonstrate that FairViT-GAN not only sets a new state-of-the-art in predictive accuracy, achieving a Pearson Correlation of \textbf{0.9230} and reducing RMSE to \textbf{0.2650}, but also excels in fairness. Our analysis reveals a remarkable \textbf{82.9\% reduction in the performance gap} between ethnic subgroups, with the adversary's classification accuracy dropping to near-random chance (52.1\%). We believe FairViT-GAN provides a robust, transparent, and significantly fairer blueprint for developing responsible AI systems for subjective visual assessment.",0 "Recent failures such as Google Gemini generating people of color in Nazi-era uniforms illustrate how AI outputs can be factually plausible yet socially harmful. AI models are increasingly evaluated for ""fairness,"" yet existing benchmarks often conflate two fundamentally different dimensions: factual correctness and normative fairness. A model may generate responses that are factually accurate but socially unfair, or conversely, appear fair while distorting factual reality. We argue that identifying the boundary between fact and fair is essential for meaningful fairness evaluation. We introduce Fact-or-Fair, a benchmark with (i) objective queries aligned with descriptive, fact-based judgments, and (ii) subjective queries aligned with normative, fairness-based judgments. Our queries are constructed from 19 statistics and are grounded in cognitive psychology, drawing on representativeness bias, attribution bias, and ingroup-outgroup bias to explain why models often misalign fact and fairness. Experiments across ten frontier models reveal different levels of fact-fair trade-offs. By reframing fairness evaluation, we provide both a new theoretical lens and a practical benchmark to advance the responsible model assessments. 
Our test suite is publicly available at https://github.com/uclanlp/Fact-or-Fair.",0 "This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. We conclude that while there are very significant technical hurdles to real human enhancement through AI, and significant ethical problems, there are also significant benefits that may realistically be achieved in ways that are consonant with a rights-based ethics as well. We also highlight the specific concerns that apply particularly to applications of AI for ""sheer"" IA (more realistic in the near term), and to enhancement applications, respectively.",2 "Forecasting future links is a central task in temporal graph (TG) reasoning, requiring models to leverage historical interactions to predict upcoming ones. Traditional neural approaches, such as temporal graph neural networks, achieve strong performance but lack explainability and cannot be applied to unseen graphs without retraining. Recent studies have begun to explore using large language models (LLMs) for graph reasoning, but most of them are constrained to static graphs or small synthetic TGs and lack the evaluation of the quality of reasoning traces generated by LLMs. In this work, we present Reasoning-Enhanced Learning for Temporal Graphs (ReaL-TG), a reinforcement learning framework that fine-tunes LLMs to perform explainable link forecasting on real-world TGs. ReaL-TG uses outcome-based reward to encourage models to self-explore reasoning strategies from graph structure and to produce explanations that directly justify their predictions. To enable evaluation on LLM-generated reasoning traces, we propose a new evaluation protocol combining ranking metrics with an LLM-as-a-Judge system that assesses both the quality of reasoning and the impact of hallucinations. Experiments with ReaL-TG-4B, obtained by fine-tuning Qwen3-4B under our framework, show that it outperforms much larger frontier LLMs, including GPT-5 mini, on ranking metrics, while producing high-quality explanations confirmed by both the LLM judge and human evaluation.",2 "While quantum statistical mechanics triumphs in explaining many equilibrium phenomena, there is an increasing focus on going beyond conventional scenarios of thermalization. Traditionally examples of non-thermalizing systems are either integrable, or disordered. Recently, examples of translationally-invariant physical systems have been discovered whose excited energies avoid thermalization either due to local constraints (whether exact or emergent), or due to higher-form symmetries. In this article, we extend these investigations for the case of 3D $U(1)$ quantum dimer models, which are lattice gauge theories with finite-dimensional local Hilbert spaces (also generically called quantum link models) with staggered charged static matter. Using a combination of analytical and numerical methods, we uncover a class of athermal states that arise in large winding sectors, when the system is subjected to external electric fields. 
The polarization of the dynamical fluxes in the direction of applied field traps excitations in 2D planes, while an interplay with the Gauss Law constraint in the perpendicular direction causes exotic athermal behaviour due to the emergence of new conserved quantities. This causes a geometric fragmentation of the system. We provide analytical arguments showing that the scaling of the number of fragments is exponential in the linear system size, leading to weak fragmentation. Further, we identify sectors which host fractonic excitations with severe mobility restrictions. The unitary evolution of fragments dominated by fractons is qualitatively different from the one dominated by non-fractonic excitations.",0 "Indoor scene classification is a critical task in computer vision, with wide-ranging applications that go from robotics to sensitive content analysis, such as child sexual abuse imagery (CSAI) classification. The problem is particularly challenging due to the intricate relationships between objects and complex spatial layouts. In this work, we propose the Attention over Scene Graphs for Sensitive Content Analysis (ASGRA), a novel framework that operates on structured graph representations instead of raw pixels. By first converting images into Scene Graphs and then employing a Graph Attention Network for inference, ASGRA directly models the interactions between a scene's components. This approach offers two key benefits: (i) inherent explainability via object and relationship identification, and (ii) privacy preservation, enabling model training without direct access to sensitive images. On Places8, we achieve 81.27% balanced accuracy, surpassing image-based methods. Real-world CSAI evaluation with law enforcement yields 74.27% balanced accuracy. Our results establish structured scene representations as a robust paradigm for indoor scene classification and CSAI classification. Code is publicly available at https://github.com/tutuzeraa/ASGRA.",0 "Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in document understanding. However, their reasoning processes remain largely black-box, making it difficult to ensure reliability and trustworthiness, especially in high-stakes domains such as legal, financial, and medical document analysis. Existing methods use fixed Chain-of-Thought (CoT) reasoning with supervised fine-tuning (SFT) but suffer from catastrophic forgetting, poor adaptability, and limited generalization across domain tasks. In this paper, we propose DocThinker, a rule-based Reinforcement Learning (RL) framework for dynamic inference-time reasoning. Instead of relying on static CoT templates, DocThinker autonomously refines reasoning strategies via policy learning, generating explainable intermediate results, including structured reasoning processes, rephrased questions, regions of interest (RoI) supporting the answer, and the final answer. By integrating multi-objective rule-based rewards and KL-constrained optimization, our method mitigates catastrophic forgetting and enhances both adaptability and transparency. Extensive experiments on multiple benchmarks demonstrate that DocThinker significantly improves generalization while producing more explainable and human-understandable reasoning steps. Our findings highlight RL as a powerful alternative for enhancing explainability and adaptability in MLLM-based document understanding. 
Code will be available at https://github.com/wenwenyu/DocThinker.",0 "The rapid adoption of large language models (LLMs) in customer service introduces new risks, as malicious actors can exploit them to conduct large-scale user impersonation through machine-generated text (MGT). Current MGT detection methods often struggle in online conversational settings, reducing the reliability and interpretability essential for trustworthy AI deployment. In customer service scenarios where operators are typically non-expert users, explanations become crucial for trustworthy MGT detection. In this paper, we propose EMMM, an explanation-then-detection framework that balances latency, accuracy, and non-expert-oriented interpretability. Experimental results demonstrate that EMMM provides explanations accessible to non-expert users, with 70\% of human evaluators preferring its outputs, while achieving competitive accuracy compared to state-of-the-art models and maintaining low latency, generating outputs within 1 second. Our code and dataset are open-sourced at https://github.com/AngieYYF/EMMM-explainable-chatbot-detection.",2 "Objective: This study aims to uncover the opaque decision-making process of an artificial intelligence (AI) agent for automatic treatment planning. Approach: We examined a previously developed AI agent based on the Actor-Critic with Experience Replay (ACER) network, which automatically tunes treatment planning parameters (TPPs) for inverse planning in prostate cancer intensity modulated radiotherapy. We selected multiple checkpoint ACER agents from different stages of training and applied an explainable AI (EXAI) method to analyze the attribution from dose-volume histogram (DVH) inputs to TPP-tuning decisions. We then assessed each agent's planning efficacy and efficiency and evaluated their policy and final TPP tuning spaces. Combining these analyses, we systematically examined how ACER agents generated high-quality treatment plans in response to different DVH inputs. Results: Attribution analysis revealed that ACER agents progressively learned to identify dose-violation regions from DVH inputs and promote appropriate TPP-tuning actions to mitigate them. Organ-wise similarities between DVH attributions and dose-violation reductions ranged from 0.25 to 0.5 across tested agents. Agents with stronger attribution-violation similarity required fewer tuning steps (~12-13 vs. 22), exhibited a more concentrated TPP-tuning space with lower entropy (~0.3 vs. 0.6), converged on adjusting only a few TPPs, and showed smaller discrepancies between practical and theoretical tuning steps. Taken together, these findings indicate that high-performing ACER agents can effectively identify dose violations from DVH inputs and employ a global tuning strategy to achieve high-quality treatment planning, much like skilled human planners. Significance: Better interpretability of the agent's decision-making process may enhance clinician trust and inspire new strategies for automatic treatment planning.",2 "Recent advancements in machine learning have spurred growing interest in automated interpreting quality assessment. Nevertheless, existing research suffers from insufficient examination of language use quality, unsatisfactory modeling effectiveness due to data scarcity and imbalance, and a lack of efforts to explain model predictions. To address these gaps, we propose a multi-dimensional modeling framework that integrates feature engineering, data augmentation, and explainable machine learning. 
This approach prioritizes explainability over ``black box'' predictions by utilizing only construct-relevant, transparent features and conducting Shapley Value (SHAP) analysis. Our results demonstrate strong predictive performance on a novel English-Chinese consecutive interpreting dataset, identifying BLEURT and CometKiwi scores as the strongest predictive features for fidelity, pause-related features for fluency, and Chinese-specific phraseological diversity metrics for language use. Overall, by placing particular emphasis on explainability, we present a scalable, reliable, and transparent alternative to traditional human evaluation, facilitating the provision of detailed diagnostic feedback for learners and supporting self-regulated learning -- advantages not afforded by automated scores in isolation.",0 "Deep learning offers a promising avenue for automating many recognition tasks in fields such as medicine and forensics. However, the black-box nature of these models hinders their adoption in high-stakes applications where trust and accountability are required. For 3D shape recognition tasks in particular, this paper introduces the Class Node Graph Attention Network (CGAT) architecture to address this need. Applied to 3D meshes of third molars derived from CBCT images, for Demirjian stage allocation, CGAT utilizes graph attention convolutions and an inherent attention mechanism, visualized via attention rollout, to explain its decision-making process. We evaluated the local mean curvature and distance to centroid node features, both individually and in combination, as well as model depth, finding that models incorporating directed edges to a global CLS node produced more intuitive attention maps, while also yielding desirable classification performance. We analyzed the attention-based explanations of the models and their predictive performances to propose optimal settings for the CGAT. The combination of local mean curvature and distance to centroid as node features yielded a slight performance increase with a 0.76 weighted F1 score, and more comprehensive attention visualizations. The CGAT architecture's ability to generate human-understandable attention maps can enhance trust and facilitate expert validation of model decisions. While demonstrated on dental data, CGAT is broadly applicable to graph-based classification and regression tasks, promoting wider adoption of transparent and competitive deep learning models in high-stakes environments.",2 "Large Language Models (LLMs) are known to overuse certain terms like ""delve"" and ""intricate."" The exact reasons for these lexical choices, however, have been unclear. Using Meta's Llama model, this study investigates the contribution of Learning from Human Feedback (LHF), under which we subsume Reinforcement Learning from Human Feedback and Direct Preference Optimization. We present a straightforward procedure for detecting the lexical preferences of LLMs that are potentially LHF-induced. Next, we more conclusively link LHF to lexical overuse by experimentally emulating the LHF procedure and demonstrating that participants systematically prefer text variants that include certain words. This lexical overuse can be seen as a sort of misalignment, though our study highlights the potential divergence between the lexical expectations of different populations -- namely LHF workers versus LLM users. 
Our work contributes to the growing body of research on explainable artificial intelligence and emphasizes the importance of both data and procedural transparency in alignment research.",0 "Subjective teacher evaluations play a key role in shaping students' educational trajectories. Previous studies have shown that students of low socioeconomic status (SES) receive worse subjective evaluations than their high SES peers, even when they score similarly on objective standardized tests. This is often interpreted as evidence of teacher bias. Measurement error in test scores challenges this interpretation. We discuss how both classical and non-classical measurement error in test scores generate a biased coefficient of the conditional SES gap, and consider three empirical strategies to address this bias. Using administrative data from the Netherlands, where secondary school track recommendations are pivotal teacher judgments, we find that measurement error explains 35 to 43% of the conditional SES gap in track recommendations.",1 "We use the notion of oracle machines and reductions from computability theory to formalise different Human-in-the-loop (HITL) setups for AI systems, distinguishing between trivial human monitoring (i.e., total functions), single endpoint human action (i.e., many-one reductions), and highly involved human-AI interaction (i.e., Turing reductions). We then proceed to show that the legal status and safety of different setups vary greatly. We present a taxonomy to categorise HITL failure modes, highlighting the practical limitations of HITL setups. We then identify omissions in UK and EU legal frameworks, which focus on HITL setups that may not always achieve the desired ethical, legal, and sociotechnical outcomes. We suggest areas where the law should recognise the effectiveness of different HITL setups and assign responsibility in these contexts, avoiding human ""scapegoating"". Our work shows an unavoidable trade-off between attribution of legal responsibility, and technical explainability. Overall, we show how HITL setups involve many technical design decisions, and can be prone to failures out of the humans' control. Our formalisation and taxonomy opens up a new analytic perspective on the challenges in creating HITL setups, helping inform AI developers and lawmakers on designing HITL setups to better achieve their desired outcomes.",0 "Enormous attention and resources are being devoted to the quest for artificial general intelligence and, even more ambitiously, artificial superintelligence. We wonder about the implications for our methodological research, which aims to help decision makers cope with what econometricians call identification problems, inferential problems in empirical research that do not diminish as sample size grows. Of particular concern are missing data problems in prediction and treatment choice. Essentially all data collection intended to inform decision making is subject to missing data, which gives rise to identification problems. Thus far, we see no indication that the current dominant architecture of machine learning (ML)-based artificial intelligence (AI) systems will outperform humans in this context. In this paper, we explain why we have reached this conclusion and why we see the missing data problem as a cautionary case study in the quest for superintelligence more generally. 
We first discuss the concept of intelligence, before presenting a decision-theoretic perspective that formalizes the connection between intelligence and identification problems. We next apply this perspective to two leading cases of missing data problems. Then we explain why we are skeptical that AI research is currently on a path toward machines doing better than humans at solving these identification problems.",1 "AI-readiness describes the degree to which data may be optimally and ethically used for subsequent AI and Machine Learning (AI/ML) methods, where those methods may involve some combination of model training, data classification, and ethical, explainable prediction. The Bridge2AI consortium has defined the particular criteria a biomedical dataset may possess to render it AI-ready: in brief, a dataset's readiness is related to its FAIRness, provenance, degree of characterization, explainability, sustainability, and computability, in addition to its accompaniment with documentation about ethical data practices. To ensure AI-readiness and to clarify data structure and relationships within Bridge2AI's Grand Challenges (GCs), particular types of metadata are necessary. The GCs within the Bridge2AI initiative include four data-generating projects focusing on generating AI/ML-ready datasets to tackle complex biomedical and behavioral research problems. These projects develop standardized, multimodal data, tools, and training resources to support AI integration, while addressing ethical data practices. Examples include using voice as a biomarker, building interpretable genomic tools, modeling disease trajectories with diverse multimodal data, and mapping cellular and molecular health indicators across the human body. This report assesses the state of metadata creation and standardization in the Bridge2AI GCs, provides guidelines where required, and identifies gaps and areas for improvement across the program. New projects, including those outside the Bridge2AI consortium, would benefit from what we have learned about creating metadata as part of efforts to promote AI readiness.",1 "Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) 'understanding' on the part of the explainee. However, what it means to 'understand' is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding for XAI-explanations and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, philosophy and psychology, a definition of understanding and its forms, assessment, and dynamics during the process of giving everyday explanations are explored. Two types of understanding are considered as possible outcomes of explanations, namely enabledness, 'knowing how' to do or decide something, and comprehension, 'knowing that' -- both in different degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, the increase of comprehension and enabledness are highly interdependent. 
Against the background of this systematization, special challenges of understanding in XAI are discussed.",0 "The creation and perception of humour is a fundamental human trait, positioning its computational understanding as one of the most challenging tasks in natural language processing (NLP). As an abstract, creative, and frequently context-dependent construct, humour requires extensive reasoning to understand and create, making it a pertinent task for assessing the common-sense knowledge and reasoning abilities of modern large language models (LLMs). In this work, we survey the landscape of computational humour as it pertains to the generative tasks of creation and explanation. We observe that, despite the task of understanding humour bearing all the hallmarks of a foundational NLP task, work on generating and explaining humour beyond puns remains sparse, while state-of-the-art models continue to fall short of human capabilities. We bookend our literature survey by motivating the importance of computational humour processing as a subdiscipline of NLP and presenting an extensive discussion of future directions for research in the area that takes into account the subjective and ethically ambiguous nature of humour.",0 "Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about and what were the developments that lead to its existence? What are the traditions, the concerns, the technological and social developments that pushed digital ethics? How did ethical issues change with digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: 'pre-modernity' prior to digital computation over data, via the 'modernity' of digital data processing to our present 'post-modernity' when not only the data is digital, but our lives themselves are largely digital. In each section, the situation in technology and society is sketched, and then the developments in digital ethics are explained. Finally, a brief outlook is provided.",1 "The US Decennial Census provides valuable data for both research and policy purposes. Census data are subject to a variety of disclosure avoidance techniques prior to release in order to preserve respondent confidentiality. While many are interested in studying the impacts of disclosure avoidance methods on downstream analyses, particularly with the introduction of differential privacy in the 2020 Decennial Census, these efforts are limited by a critical lack of data: The underlying ""microdata,"" which serve as necessary input to disclosure avoidance methods, are kept confidential. In this work, we aim to address this limitation by providing tools to generate synthetic microdata solely from published Census statistics, which can then be used as input to any number of disclosure avoidance algorithms for the sake of evaluation and carrying out comparisons. We define a principled distribution over microdata given published Census statistics and design algorithms to sample from this distribution. We formulate synthetic data generation in this context as a knapsack-style combinatorial optimization problem and develop novel algorithms for this setting. While the problem we study is provably hard, we show empirically that our methods work well in practice, and we offer theoretical arguments to explain our performance. 
Finally, we verify that the data we produce are ""close"" to the desired ground truth.",0 "Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, reducing willingness to accept assistance, likelihood of future use, and performance expectancy. We report two vignette-based experiments (Study~1: $N=761$; Study~2: $N=571$, preregistered). Study~1 compared anticipatory and reactive help provided by an AI vs. a human, while Study~2 distinguished between \emph{offering} (suggesting help) and \emph{providing} (acting automatically). In Study 1, AI help was more threatening than human help. Across both studies, anticipatory help increased perceived threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism explaining why proactive AI features may backfire and suggest design implications for AI initiative.",1 "Artificial intelligence (AI) systems, and Large Language Models (LLMs) in particular, are increasingly employed for creative tasks like scientific idea generation, constituting a form of generalization from training data unaddressed by existing conceptual frameworks. Despite its similarities to compositional generalization (CG), combinatorial creativity (CC) is an open-ended ability. Instead of evaluating for accuracy or correctness against fixed targets, which would contradict the open-ended nature of CC, we propose a theoretical framework and algorithmic task for evaluating outputs by their degrees of novelty and utility. From here, we make several important empirical contributions: (1) We obtain the first insights into the scaling behavior of creativity for LLMs. (2) We discover that, for fixed compute budgets, there exist optimal model depths and widths for creative ability. (3) We find that the ideation-execution gap, whereby LLMs excel at generating novel scientific ideas but struggle to ensure their practical feasibility, may be explained by a more fundamental novelty-utility tradeoff characteristic of creativity algorithms in general. Importantly, this tradeoff remains persistent even at scale, casting doubt on the long-term creative potential of LLMs in their current form. Together, our conceptual framework and empirical findings provide a foundation for understanding and improving creativity in modern AI models, bridging the gap between human and machine intelligence.",1 "Traffic signal control (TSC) is vital for mitigating congestion and sustaining urban mobility. In this paper, we introduce Traffic-R1, a foundation model with human-like reasoning for TSC systems. Our model is developed through self-exploration and iteration of reinforced large language models (LLMs) with expert guidance in a simulated traffic environment. Compared to traditional reinforcement learning (RL) and recent LLM-based methods, Traffic-R1 offers three significant advantages. First, Traffic-R1 delivers zero-shot generalisation, transferring unchanged to new road networks and out-of-distribution incidents by utilizing its internal traffic control policies and human-like reasoning. Second, its 3B-parameter architecture is lightweight enough for real-time inference on mobile-class chips, enabling large-scale edge deployment. 
Third, Traffic-R1 provides an explainable TSC process and facilitates multi-intersection communication through its self-iteration and a new synchronous communication network. Extensive benchmarks demonstrate that Traffic-R1 sets a new state of the art, outperforming strong baselines and training-intensive RL controllers. In practice, the model now manages signals for more than 55,000 drivers daily, shortening average queues by over 5% and halving operator workload. Our checkpoint is available at https://huggingface.co/Season998/Traffic-R1.",0 "Graph Neural Networks (GNNs) have emerged as powerful tools for learning over structured data, including text-attributed graphs, which are common in domains such as citation networks, social platforms, and knowledge graphs. GNNs are not inherently interpretable and thus, many explanation methods have been proposed. However, existing explanation methods often struggle to generate interpretable, fine-grained rationales, especially when node attributes include rich natural language. In this work, we introduce LOGIC, a lightweight, post-hoc framework that uses large language models (LLMs) to generate faithful and interpretable explanations for GNN predictions. LOGIC projects GNN node embeddings into the LLM embedding space and constructs hybrid prompts that interleave soft prompts with textual inputs from the graph structure. This enables the LLM to reason about GNN internal representations and produce natural language explanations along with concise explanation subgraphs. Our experiments across four real-world TAG datasets demonstrate that LOGIC achieves a favorable trade-off between fidelity and sparsity, while significantly improving human-centric metrics such as insightfulness. LOGIC sets a new direction for LLM-based explainability in graph learning by aligning GNN internals with human reasoning.",0 "Deep generative models like VAEs and diffusion models have advanced various generation tasks by leveraging latent variables to learn data distributions and generate high-quality samples. Despite the field of explainable AI making strides in interpreting machine learning models, understanding latent variables in generative models remains challenging. This paper introduces LatentExplainer, a framework for automatically generating semantically meaningful explanations of latent variables in deep generative models. LatentExplainer tackles three main challenges: inferring the meaning of latent variables, aligning explanations with inductive biases, and handling varying degrees of explainability. Our approach perturbs latent variables, interprets changes in generated data, and uses multimodal large language models (MLLMs) to produce human-understandable explanations. We evaluate our proposed method on several real-world and synthetic datasets, and the results demonstrate superior performance in generating high-quality explanations for latent variables. The results highlight the effectiveness of incorporating inductive biases and uncertainty quantification, significantly enhancing model interpretability.",0 "Test-time inference has emerged as a powerful paradigm for enabling language models to ``think'' longer and more carefully about complex challenges, much like skilled human experts. While reinforcement learning (RL) can drive self-improvement in language models on verifiable tasks, some models exhibit substantial gains while others quickly plateau. 
For instance, we find that Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game of Countdown. This discrepancy raises a critical question: what intrinsic properties enable effective self-improvement? We introduce a framework to investigate this question by analyzing four key cognitive behaviors -- verification, backtracking, subgoal setting, and backward chaining -- that both expert human problem solvers and successful language models employ. Our study reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama initially lacks them. In systematic experimentation with controlled behavioral datasets, we find that priming Llama with examples containing these reasoning behaviors enables substantial improvements during RL, matching or exceeding Qwen's performance. Importantly, the presence of reasoning behaviors, rather than correctness of answers, proves to be the critical factor -- models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions. Finally, leveraging continued pretraining with OpenWebMath data, filtered to amplify reasoning behaviors, enables the Llama model to match Qwen's self-improvement trajectory. Our findings establish a fundamental relationship between initial reasoning behaviors and the capacity for improvement, explaining why some language models effectively utilize additional computation while others plateau.",0 "The goal of translation, be it by human or by machine, is, given some text in a source language, to produce text in a target language that simultaneously 1) preserves the meaning of the source text and 2) achieves natural expression in the target language. However, researchers in the machine translation community usually assess translations using a single score intended to capture semantic accuracy and the naturalness of the output simultaneously. In this paper, we build on recent advances in information theory to mathematically prove and empirically demonstrate that such single-score summaries do not and cannot give the complete picture of a system's true performance. Concretely, we prove that a tradeoff exists between accuracy and naturalness and demonstrate it by evaluating the submissions to the WMT24 shared task. Our findings help explain well-known empirical phenomena, such as the observation that optimizing translation systems for a specific accuracy metric (like BLEU) initially improves the system's naturalness, while ``overfitting'' the system to the metric can significantly degrade its naturalness. Thus, we advocate for a change in how translations are evaluated: rather than comparing systems using a single number, they should be compared on an accuracy-naturalness plane.",0 "Accurately classifying chemical structures is essential for cheminformatics and bioinformatics, including tasks such as identifying bioactive compounds of interest, screening molecules for toxicity to humans, finding non-organic compounds with desirable material properties, or organizing large chemical libraries for drug discovery or environmental monitoring. However, manual classification is labor-intensive and difficult to scale to large chemical databases. Existing automated approaches either rely on manually constructed classification rules, or are deep learning methods that lack explainability. 
This work presents an approach that uses generative artificial intelligence to automatically write chemical classifier programs for classes in the Chemical Entities of Biological Interest (ChEBI) database. These programs can be used for efficient deterministic run-time classification of SMILES structures, with natural language explanations. The programs themselves constitute an explainable computable ontological model of chemical class nomenclature, which we call the ChEBI Chemical Class Program Ontology (C3PO). We validated our approach against the ChEBI database, and compared our results against deep learning models and a naive SMARTS pattern-based classifier. C3PO outperforms the naive classifier, but does not reach the performance of state-of-the-art deep learning methods. However, C3PO has a number of strengths that complement deep learning methods, including explainability and reduced data dependence. C3PO can be used alongside deep learning classifiers to provide an explanation of the classification, where both methods agree. The programs can be used as part of the ontology development process, and iteratively refined by expert human curators.",2 "The evolution of technology and education is driving the emergence of Intelligent & Autonomous Tutoring Systems (IATS), where objective and domain-agnostic methods for determining question difficulty are essential. Traditional human labeling is subjective, and existing NLP-based approaches fail in symbolic domains like algebra. This study introduces the Approach of Passive Measures among Educands (APME), a reinforcement learning-based Multi-Armed Bandit (MAB) framework that estimates difficulty solely from solver performance data -- marks obtained and time taken -- without requiring linguistic features or expert labels. By leveraging the inverse coefficient of variation as a risk-adjusted metric, the model provides an explainable and scalable mechanism for adaptive assessment. Empirical validation was conducted on three heterogeneous datasets. Across these diverse contexts, the model achieved an average R2 of 0.9213 and an average RMSE of 0.0584, confirming its robustness, accuracy, and adaptability to different educational levels and assessment formats. Compared with baseline approaches -- such as regression-based, NLP-driven, and IRT models -- the proposed framework consistently outperformed alternatives, particularly in purely symbolic domains. The findings highlight that (i) item heterogeneity strongly influences perceived difficulty, and (ii) variance in solver outcomes is as critical as mean performance for adaptive allocation. Pedagogically, the model aligns with Vygotsky's Zone of Proximal Development by identifying tasks that balance challenge and attainability, supporting motivation while minimizing disengagement. This domain-agnostic, self-supervised approach advances difficulty tagging in IATS and can be extended beyond algebra wherever solver interaction data is available.",2 "Speech-to-Speech (S2S) Large Language Models (LLMs) are foundational to natural human-computer interaction, enabling end-to-end spoken dialogue systems. However, evaluating these models remains a fundamental challenge. We propose \texttt{SageLM}, an end-to-end, multi-aspect, and explainable speech LLM for comprehensive evaluation of S2S LLMs. First, unlike cascaded approaches that disregard acoustic features, SageLM jointly assesses both semantic and acoustic dimensions. 
Second, it leverages rationale-based supervision to enhance explainability and guide model learning, achieving superior alignment with evaluation outcomes compared to rule-based reinforcement learning methods. Third, we introduce \textit{SpeechFeedback}, a synthetic preference dataset, and employ a two-stage training paradigm to mitigate the scarcity of speech preference data. Trained on both semantic and acoustic dimensions, SageLM achieves an 82.79\% agreement rate with human evaluators, outperforming cascaded and SLM-based baselines by at least 7.42\% and 26.20\%, respectively.",2 "As ChatGPT and other Large Language Model (LLM)-based AI chatbots become increasingly integrated into individuals' daily lives, important research questions arise. What concerns and risks do these systems pose for individual users? What potential harms might they cause, and how can these be mitigated? In this work, we review recent literature and reports, and conduct a comprehensive investigation into these questions. We begin by explaining how LLM-based AI chatbots work, providing essential background to help readers understand chatbots' inherent limitations. We then identify a range of risks associated with individual use of these chatbots, including hallucinations, intrinsic biases, sycophantic behavior, cognitive decline from overreliance, social isolation, and privacy leakage. Finally, we propose several key mitigation strategies to address these concerns. Our goal is to raise awareness of the potential downsides of AI chatbot use, and to empower users to enhance, rather than diminish, human intelligence, to enrich, rather than compromise, daily life.",0 "Arabic-language patient feedback remains under-analysed because dialect diversity and scarce aspect-level sentiment labels hinder automated assessment. To address this gap, we introduce EHSAN, a data-centric hybrid pipeline that merges ChatGPT pseudo-labelling with targeted human review to build the first explainable Arabic aspect-based sentiment dataset for healthcare. Each sentence is annotated with an aspect and sentiment label (positive, negative, or neutral), forming a pioneering Arabic dataset aligned with healthcare themes, with ChatGPT-generated rationales provided for each label to enhance transparency. To evaluate the impact of annotation quality on model performance, we created three versions of the training data: a fully supervised set with all labels reviewed by humans, a semi-supervised set with 50% human review, and an unsupervised set with only machine-generated labels. We fine-tuned two transformer models on these datasets for both aspect and sentiment classification. Experimental results show that our Arabic-specific model achieved high accuracy even with minimal human supervision, reflecting only a minor performance drop when using ChatGPT-only labels. Reducing the number of aspect classes notably improved classification metrics across the board. These findings demonstrate an effective, scalable approach to Arabic aspect-based sentiment analysis (SA) in healthcare, combining large language model annotation with human expertise to produce a robust and explainable dataset. Future directions include generalisation across hospitals, prompt refinement, and interpretable data-driven modelling.",0 "Explainable Artificial Intelligence (XAI) aims to uncover the inner reasoning of machine learning models. 
In IoT systems, XAI improves the transparency of models processing sensor data from multiple heterogeneous devices, ensuring end-users understand and trust their outputs. Among the many applications, XAI has also been applied to sensor-based Activities of Daily Living (ADLs) recognition in smart homes. Existing approaches highlight which sensor events are most important for each predicted activity, using simple rules to convert these events into natural language explanations for non-expert users. However, these methods produce rigid explanations lacking natural language flexibility and are not scalable. With the recent rise of Large Language Models (LLMs), it is worth exploring whether they can enhance explanation generation, considering their proven knowledge of human activities. This paper investigates potential approaches to combine XAI and LLMs for sensor-based ADL recognition. We evaluate if LLMs can be used: a) as explainable zero-shot ADL recognition models, avoiding costly labeled data collection, and b) to automate the generation of explanations for existing data-driven XAI approaches when training data is available and the goal is higher recognition rates. Our critical evaluation provides insights into the benefits and challenges of using LLMs for explainable ADL recognition.",2 "Autonomous business processes (ABPs), i.e., self-executing workflows leveraging AI/ML, have the potential to improve operational efficiency, reduce errors, lower costs, improve response times, and free human workers for more strategic and creative work. However, ABPs may raise specific concerns including decreased stakeholder trust, difficulties in debugging, hindered accountability, risk of bias, and issues with regulatory compliance. We argue for eXplainable ABPs (XABPs) to address these concerns by enabling systems to articulate their rationale. The paper outlines a systematic approach to XABPs, characterizing their forms, structuring explainability, and identifying key BPM research challenges towards XABPs.",1 "Trustworthy interpretation of deep learning models is critical for neuroimaging applications, yet commonly used Explainable AI (XAI) methods lack rigorous validation, risking misinterpretation. We performed the first large-scale, systematic comparison of XAI methods on ~45,000 structural brain MRIs using a novel XAI validation framework. This framework establishes verifiable ground truth by constructing prediction tasks with known signal sources - from localized anatomical features to subject-specific clinical lesions - without artificially altering input images. Our analysis reveals systematic failures in two of the most widely used methods: GradCAM consistently failed to localize predictive features, while Layer-wise Relevance Propagation generated extensive, artifactual explanations that suggest incompatibility with neuroimaging data characteristics. Our results indicate that these failures stem from a domain mismatch, where methods with design principles tailored to natural images require substantial adaptation for neuroimaging data. In contrast, the simpler, gradient-based method SmoothGrad, which makes fewer assumptions about data structure, proved consistently accurate, suggesting its conceptual simplicity makes it more robust to this domain shift. 
These findings highlight the need for domain-specific adaptation and validation of XAI methods, suggest that interpretations from prior neuroimaging studies using standard XAI methodology warrant re-evaluation, and provide urgent guidance for practical application of XAI in neuroimaging.",0 "Large, publicly available clinical datasets have emerged as a novel resource for understanding disease heterogeneity and exploring personalization of therapy. These datasets are derived from data not originally collected for research purposes and, as a result, are often incomplete and lack critical labels. Many AI tools have been developed to retrospectively label these datasets, such as by performing disease classification; however, they often suffer from limited interpretability. Previous work has attempted to explain predictions using Concept Bottleneck Models (CBMs), which learn interpretable concepts that map to higher-level clinical ideas, facilitating human evaluation. However, these models often experience performance limitations when the concepts fail to adequately explain or characterize the task. We use the identification of Acute Respiratory Distress Syndrome (ARDS) as a challenging test case to demonstrate the value of incorporating contextual information from clinical notes to improve CBM performance. Our approach leverages a Large Language Model (LLM) to process clinical notes and generate additional concepts, resulting in a 10% performance gain over existing methods. Additionally, it facilitates the learning of more comprehensive concepts, thereby reducing the risk of information leakage and reliance on spurious shortcuts, thus improving the characterization of ARDS.",0 "Recent advances in Multimodal Large Language Models (MLLMs) have introduced a paradigm shift for Image Quality Assessment (IQA) from unexplainable image quality scoring to explainable IQA, demonstrating practical applications like quality control and optimization guidance. However, current explainable IQA methods not only inadequately use the same distortion criteria to evaluate both User-Generated Content (UGC) and AI-Generated Content (AIGC) images, but also lack detailed quality analysis for monitoring image quality and guiding image restoration. In this study, we establish the first large-scale Visual Distortion Assessment Instruction Tuning Dataset for UGC images, termed ViDA-UGC, which comprises 11K images with fine-grained quality grounding, detailed quality perception, and reasoning quality description data. This dataset is constructed through a distortion-oriented pipeline, which involves human subject annotation and a Chain-of-Thought (CoT) assessment framework. This framework guides GPT-4o to generate quality descriptions by identifying and analyzing UGC distortions, which helps capture rich low-level visual features that inherently correlate with distortion patterns. Moreover, we carefully select 476 images with corresponding 6,149 question-answer pairs from ViDA-UGC and invite a professional team to ensure the accuracy and quality of GPT-generated information. The selected and revised data further contribute to the first UGC distortion assessment benchmark, termed ViDA-UGC-Bench. 
Experimental results demonstrate the effectiveness of the ViDA-UGC and CoT framework for consistently enhancing various image quality analysis abilities across multiple base MLLMs on ViDA-UGC-Bench and Q-Bench, even surpassing GPT-4o.",1 "Transferring knowledge across generations is fundamental to human civilization, yet the challenge of passing on complex practical skills persists. Methods without a physically present instructor, such as videos, often fail to explain complex manual tasks, where spatial and social factors are critical. Technologies such as eXtended Reality and Artificial Intelligence hold the potential to retain expert knowledge and facilitate the creation of tailored, contextualized, and asynchronous explanations regardless of time and place. In contrast to videos, the learner's perspective can be different from the recorded perspective in XR. This paper investigates the impact of asynchronous first- and third-person perspectives and gaze visualizations on efficiency, feeling of embodiment, and connectedness during manual tasks. The empirical results of our study (N=36) show that the first-person perspective is better in quantitative measures and preferred by users. We identify best practices for presenting preserved knowledge and provide guidelines for designing future systems.",2 "Sophisticated evasion tactics in malicious Android applications, combined with their intricate behavioral semantics, enable attackers to conceal malicious logic within legitimate functions, underscoring the critical need for robust and in-depth analysis frameworks. However, traditional analysis techniques often fail to recover deeply hidden behaviors or provide human-readable justifications for their decisions. Inspired by advances in large language models (LLMs), we introduce TraceRAG, a retrieval-augmented generation (RAG) framework that bridges natural language queries and Java code to deliver explainable malware detection and analysis. First, TraceRAG generates summaries of method-level code snippets, which are indexed in a vector database. At query time, behavior-focused questions retrieve the most semantically relevant snippets for deeper inspection. Finally, based on the multi-turn analysis results, TraceRAG produces human-readable reports that present the identified malicious behaviors and their corresponding code implementations. Experimental results demonstrate that our method achieves 96\% malware detection accuracy and 83.81\% behavior identification accuracy based on updated VirusTotal (VT) scans and manual verification. Furthermore, expert evaluation confirms the practical utility of the reports generated by TraceRAG.",1 "Speech Emotion Recognition (SER) is typically trained and evaluated on majority-voted labels, which simplifies benchmarking but masks subjectivity and provides little transparency into why predictions are made. This neglects valid minority annotations and limits interpretability. We propose an explainable Speech Language Model (SpeechLM) framework that frames SER as a generative reasoning task. Given an utterance, the model first produces a transcript, then outputs both an emotion label and a concise natural-language rationale grounded in lexical and acoustic cues. Rationales are generated by a reasoning-capable teacher LLM and used as intermediate supervision, combined with majority labels during fine-tuning. Unlike prior work primarily focused on boosting classification accuracy, we aim to enhance explainability while preserving competitive performance. 
To this end, we complement majority-label metrics with annotator-aware scoring that credits matches with any annotator label. On MSP-Podcast v1.12, our model maintains improvements over zero-shot SpeechLM baselines, and produces rationales that human evaluators find plausible and well grounded. This demonstrates that incorporating rationale supervision offers a practical path toward interpretable SER without sacrificing predictive quality.",0 "AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated automatically without human raters. Two behavioural findings emerge. First, persona-over-model: persona design explains behaviour more than model identity. Second, engagement as a stressor: when replies must counter arguments, ideological adherence strengthens and the prevalence of extreme content increases. We demonstrate that fully automated influence-content production is within reach of both large and small actors. Consequently, defence should shift from restricting model access towards conversation-centric detection and disruption of campaigns and coordination infrastructure. Paradoxically, the very consistency that enables these operations also provides a detection signature.",2 "Human cognition is profoundly shaped by the environments in which it unfolds. Yet, it remains an open question whether learning and decision making can be explained as a principled adaptation to the statistical structure of real-world tasks. We introduce ecologically rational analysis, a computational framework that unifies the normative foundations of rational analysis with ecological grounding. Leveraging large language models to generate ecologically valid cognitive tasks at scale, and using meta-learning to derive rational models optimized for these environments, we develop a new class of learning algorithms: Ecologically Rational Meta-learned Inference (ERMI). ERMI internalizes the statistical regularities of naturalistic problem spaces and adapts flexibly to novel situations, without requiring hand-crafted heuristics or explicit parameter updates. We show that ERMI captures human behavior across 15 experiments spanning function learning, category learning, and decision making, outperforming several established cognitive models in trial-by-trial prediction. Our results suggest that much of human cognition may reflect adaptive alignment to the ecological structure of the problems we encounter in everyday life.",0 "Ecological research increasingly relies on integrating heterogeneous datasets and knowledge to explain and predict complex phenomena. Yet, differences in data types, terminology, and documentation often hinder interoperability, reuse, and causal understanding. We present the Semantic Units Framework, a novel, domain-agnostic semantic modelling approach applied here to ecological data and knowledge in compliance with the FAIR (Findable, Accessible, Interoperable, Reusable) and CLEAR (Cognitively interoperable, semantically Linked, contextually Explorable, easily Accessible, human-Readable and -interpretable) Principles. The framework models data and knowledge as modular, logic-aware semantic units: single propositions (statement units) or coherent groups of propositions (compound units). Statement units can model measurements, observations, or universal relationships, including causal ones, and link to methods and evidence. 
Compound units group related statement units into reusable, semantically coherent knowledge objects. Implemented using RDF, OWL, and knowledge graphs, semantic units can be serialized as FAIR Digital Objects with persistent identifiers, provenance, and semantic interoperability. We show how universal statement units build ecological causal networks, which can be composed into causal maps and perspective-specific subnetworks. These support causal reasoning, confounder detection (back-door), effect identification with unobserved confounders (front-door), application of do-calculus, and alignment with Bayesian networks, structural equation models, and structural causal models. By linking fine-grained empirical data to high-level causal reasoning, the Semantic Units Framework provides a foundation for ecological knowledge synthesis, evidence annotation, cross-domain integration, reproducible workflows, and AI-ready ecological research.",0 "We propose a neurosymbolic approach to the explanation of complex sequences of decisions that combines the strengths of decision procedures and Large Language Models (LLMs). We demonstrate this approach by producing explanations for the solutions of Hitori puzzles. The rules of Hitori include local constraints that are effectively explained by short resolution proofs. However, they also include a connectivity constraint that is more suitable for visual explanations. Hence, Hitori provides an excellent testing ground for a flexible combination of SAT solvers and LLMs. We have implemented a tool that assists humans in solving Hitori puzzles, and we present experimental evidence of its effectiveness.",2 "Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes in a manner that is comprehensible to human beings at an acceptable level, has been deemed essential for critical sectors, such as healthcare. Is it really the case? In this perspective, we consider two extreme cases, ``Oracle'' (without explainability) versus ``AI Colleague'' (with explainability) for a thorough analysis. We discuss how the level of automation and explainability of AIS can affect the determination of liability among the medical practitioner/facility and manufacturer of AIS. We argue that explainability plays a crucial role in setting a responsibility framework in healthcare, from a legal standpoint, to shape the behavior of all involved parties and mitigate the risk of potential defensive medicine practices.",0 "Large Language Model (LLM)-based systems present new opportunities for autonomous health monitoring in sensor-rich industrial environments. This study explores the potential of LLMs to detect and classify faults directly from sensor data, while producing inherently explainable outputs through natural language reasoning. We systematically evaluate how LLM-system architecture (single-LLM vs. multi-LLM), input representations (raw vs. descriptive statistics), and context window size affect diagnostic performance. Our findings show that LLM systems perform most effectively when provided with summarized statistical inputs, and that systems with multiple LLMs using specialized prompts offer improved sensitivity for fault classification compared to single-LLM systems. While LLMs can produce detailed and human-readable justifications for their decisions, we observe limitations in their ability to adapt over time in continual learning settings, often struggling to calibrate predictions during repeated fault cycles. 
These insights point to both the promise and the current boundaries of LLM-based systems as transparent, adaptive diagnostic tools in complex environments.",0 "Prompting has emerged as a practical way to adapt frozen vision-language models (VLMs) for video anomaly detection (VAD). Yet, existing prompts are often overly abstract, overlooking the fine-grained human-object interactions or action semantics that define complex anomalies in surveillance videos. We propose ASK-Hint, a structured prompting framework that leverages action-centric knowledge to elicit more accurate and interpretable reasoning from frozen VLMs. Our approach organizes prompts into semantically coherent groups (e.g. violence, property crimes, public safety) and formulates fine-grained guiding questions that align model predictions with discriminative visual cues. Extensive experiments on UCF-Crime and XD-Violence show that ASK-Hint consistently improves AUC over prior baselines, achieving state-of-the-art performance compared to both fine-tuned and training-free methods. Beyond accuracy, our framework provides interpretable reasoning traces towards anomaly and demonstrates strong generalization across datasets and VLM backbones. These results highlight the critical role of prompt granularity and establish ASK-Hint as a new training-free and generalizable solution for explainable video anomaly detection.",2 "A central challenge in explainable AI, particularly in the visual domain, is producing explanations grounded in human-understandable concepts. To tackle this, we introduce OCEAN (Object-Centric Explananda via Agent Negotiation), a novel, inherently interpretable framework built on object-centric representations and a transparent multi-agent reasoning process. The game-theoretic reasoning process drives agents to agree on coherent and discriminative evidence, resulting in a faithful and interpretable decision-making process. We train OCEAN end-to-end and benchmark it against standard visual classifiers and popular posthoc explanation tools like GradCAM and LIME across two diagnostic multi-object datasets. Our results demonstrate competitive performance with respect to state-of-the-art black-box models with a faithful reasoning process, which was reflected by our user study, where participants consistently rated OCEAN's explanations as more intuitive and trustworthy.",0 "We study a one-dimensional quasiperiodic tight-binding model with simultaneous off-diagonal (hopping) and diagonal (onsite) modulations. Using the inverse participation ratio and the wave-packet centroid, we construct localization-delocalization phase diagrams for both equilibrium and nonequilibrium steady states. We analyze the robustness of initial-state properties under dissipation and characterize dissipation-induced localization-delocalization transitions (and their reversals) in detail. Trace-distance dynamics provide evidence for a quantum Mpemba effect: states prepared farther from the steady state can relax faster than states initialized closer to it. We propose a starting-line hypothesis that explains the presence or absence of this effect across parameter regimes. In addition, we examine thermodynamic functionality and find that the localized phase favors the realization of quantum heaters. 
These results advance the understanding of steady-state phase transitions and relaxation dynamics in dissipatively driven quasiperiodic systems, and broaden the thermodynamic landscape of quasiperiodic platforms.",0 "Interior design involves the careful selection and arrangement of objects to create an aesthetically pleasing, functional, and harmonized space that aligns with the client's design brief. This task is particularly challenging, as a successful design must not only incorporate all the necessary objects in a cohesive style, but also ensure they are arranged in a way that maximizes accessibility, while adhering to a variety of affordability and usage considerations. Data-driven solutions have been proposed, but these are typically room- or domain-specific and lack explainability in their design considerations used in producing the final layout. In this paper, we investigate whether large language models (LLMs) can be directly utilized for interior design. While we find that LLMs are not yet capable of generating complete layouts, they can be effectively leveraged in a structured manner, inspired by the workflow of interior designers. By systematically probing LLMs, we can reliably generate a list of objects along with relevant constraints that guide their placement. We translate this information into a design layout graph, which is then solved using an off-the-shelf constrained optimization setup to generate the final layouts. We benchmark our algorithm in various design configurations against existing LLM-based methods and human designs, and evaluate the results using a variety of quantitative and qualitative metrics along with user studies. In summary, we demonstrate that LLMs, when used in a structured manner, can effectively generate diverse high-quality layouts, making them a viable solution for creating large-scale virtual scenes. Project webpage at https://flairgpt.github.io/",2 "This paper bridges internal and external analysis approaches to large language models (LLMs) by demonstrating that geometric properties of internal model representations serve as reliable proxies for evaluating generated text quality. We validate a set of metrics including Maximum Explainable Variance, Effective Rank, Intrinsic Dimensionality, MAUVE score, and Schatten Norms measured across different layers of LLMs, demonstrating that Intrinsic Dimensionality and Effective Rank can serve as universal assessments of text naturalness and quality. Our key finding reveals that different models consistently rank text from various sources in the same order based on these geometric properties, indicating that these metrics reflect inherent text characteristics rather than model-specific artifacts. This allows a reference-free text quality evaluation that does not require human-annotated datasets, offering practical advantages for automated evaluation pipelines.",0 "This paper focuses on a key challenge in visual emotion understanding: given an art image, the model pinpoints pixel regions that trigger a specific human emotion, and generates linguistic explanations for it. Despite advances in general segmentation, pixel-level emotion understanding still faces a dual challenge: first, the subjectivity of emotion limits the ability of general segmentation models like SAM to adapt to emotion-oriented segmentation tasks; and second, the abstract nature of art expression makes it hard for captioning models to balance pixel-level semantics and emotion reasoning. 
To solve the above problems, this paper proposes the Emotion stimuli Segmentation and Explanation Model (EmoSEM) to endow the segmentation framework with emotion comprehension capability. First, to enable the model to perform segmentation well under the guidance of emotional intent, we introduce an emotional prompt with a learnable mask token as the conditional input for segmentation decoding. Then, we design an emotion projector to establish the association between emotion and visual features. Next, and more importantly, to address emotion-visual stimuli alignment, we develop a lightweight prefix adapter, a module that fuses the learned emotional mask with the corresponding emotion into a unified representation compatible with the language model. Finally, we input the joint visual, mask, and emotional tokens into the language model and output the emotional explanations. This ensures that the generated interpretations remain semantically and emotionally coherent with the visual stimuli. Our method realizes end-to-end modeling from low-level pixel features to high-level emotion interpretation, delivering the first interpretable fine-grained framework for visual emotion analysis. Extensive experiments validate the effectiveness of our model. Code will be made publicly available.",1 "Real-time threat monitoring identifies threatening behaviors in video streams and provides reasoning and assessment of threat events through explanatory text. However, prevailing methodologies, whether based on supervised learning or generative models, struggle to concurrently satisfy the demanding requirements of real-time performance and decision explainability. To bridge this gap, we introduce Live-E2T, a novel framework that unifies these two objectives through three synergistic mechanisms. First, we deconstruct video frames into structured Human-Object-Interaction-Place semantic tuples. This approach creates a compact, semantically focused representation, circumventing the information degradation common in conventional feature compression. Second, an efficient online event deduplication and updating mechanism is proposed to filter spatio-temporal redundancies, ensuring the system's real-time responsiveness. Finally, we fine-tune a Large Language Model using a Chain-of-Thought strategy, endowing it with the capability for transparent and logical reasoning over event sequences to produce coherent threat assessment reports. Extensive experiments on benchmark datasets, including XD-Violence and UCF-Crime, demonstrate that Live-E2T significantly outperforms state-of-the-art methods in terms of threat detection accuracy, real-time efficiency, and the crucial dimension of explainability.",2 "Large Language Models (LLMs) have been extensively tuned to mitigate explicit biases, yet they often exhibit subtle implicit biases rooted in their pre-training data. Rather than directly probing LLMs with human-crafted questions that may trigger guardrails, we propose studying how models behave when they proactively ask questions themselves. The 20 Questions game, a multi-turn deduction task, serves as an ideal testbed for this purpose. We systematically evaluate geographic performance disparities in entity deduction using a new dataset, Geo20Q+, consisting of both notable people and culturally significant objects (e.g., foods, landmarks, animals) from diverse regions. 
We test popular LLMs across two gameplay configurations (canonical 20-question and unlimited turns) and in seven languages (English, Hindi, Mandarin, Japanese, French, Spanish, and Turkish). Our results reveal geographic disparities: LLMs are substantially more successful at deducing entities from the Global North than the Global South, and the Global West than the Global East. While Wikipedia pageviews and pre-training corpus frequency correlate mildly with performance, they fail to fully explain these disparities. Notably, the language in which the game is played has minimal impact on performance gaps. These findings demonstrate the value of creative, free-form evaluation frameworks for uncovering subtle biases in LLMs that remain hidden in standard prompting setups. By analyzing how models initiate and pursue reasoning goals over multiple turns, we find geographic and cultural disparities embedded in their reasoning processes. We release the dataset (Geo20Q+) and code at https://sites.google.com/view/llmbias20q/home.",0 "Information-processing systems coordinating across multiple agents and objectives face fundamental thermodynamic constraints. We show that solutions with maximum utility as coordination focal points face much stronger selection pressure to be findable across agents than to be accurate. We derive that the information-theoretic minimum description length of coordination protocols to precision $\varepsilon$ scales as $L(P)\geq NK\log_2 K+N^2d^2\log (1/\varepsilon)$ for $N$ agents with $d$ potentially conflicting objectives and internal model complexity $K$. This scaling forces progressive simplification, with coordination dynamics changing the environment itself and shifting optimization across hierarchical levels. Moving from established focal points requires re-coordination, creating persistent metastable states and hysteresis until significant environmental shifts trigger phase transitions through spontaneous symmetry breaking. We operationally define coordination temperature to predict critical phenomena and estimate coordination work costs, identifying measurable signatures across systems from neural networks to restaurant bills to bureaucracies. Extending the topological version of Arrow's theorem on the impossibility of consistent preference aggregation, we find it recursively binds whenever preferences are combined. This potentially explains the indefinite cycling in multi-objective gradient descent and alignment faking in Large Language Models trained with reinforcement learning from human feedback. We term this framework Thermodynamic Coordination Theory (TCT), which demonstrates that coordination requires radical information loss.",0 "Recent advancements in explainable recommendation have greatly bolstered user experience by elucidating the decision-making rationale. However, existing methods fail to provide effective feedback signals for potentially better or worse generated explanations due to their reliance on traditional supervised learning paradigms in sparse interaction data. To address these issues, we propose a novel human-like feedback-driven optimization framework. This framework employs a dynamic interactive optimization mechanism for achieving human-centered explainability requirements without incurring high labor costs. Specifically, we propose to utilize large language models (LLMs) as human simulators to predict human-like feedback for guiding the learning process. 
To enable the LLMs to deeply understand the task essence and meet user's diverse personalized requirements, we introduce a human-induced customized reward scoring method, which helps stimulate the language understanding and logical reasoning capabilities of LLMs. Furthermore, considering the potential conflicts between different perspectives of explanation quality, we introduce a principled Pareto optimization that transforms the multi-perspective quality enhancement task into a multi-objective optimization problem for improving explanation performance. At last, to achieve efficient model training, we design an off-policy optimization pipeline. By incorporating a replay buffer and addressing the data distribution biases, we can effectively improve data utilization and enhance model generality. Extensive experiments on four datasets demonstrate the superiority of our approach.",2 "Understanding what knowledge is implicitly encoded in deep learning models is essential for improving the interpretability of AI systems. This paper examines common methods to explain the knowledge encoded in word embeddings, which are core elements of large language models (LLMs). These methods typically involve mapping embeddings onto collections of human-interpretable semantic features, known as feature norms. Prior work assumes that accurately predicting these semantic features from the word embeddings implies that the embeddings contain the corresponding knowledge. We challenge this assumption by demonstrating that prediction accuracy alone does not reliably indicate genuine feature-based interpretability. We show that these methods can successfully predict even random information, concluding that the results are predominantly determined by an algorithmic upper bound rather than meaningful semantic representation in the word embeddings. Consequently, comparisons between datasets based solely on prediction performance do not reliably indicate which dataset is better captured by the word embeddings. Our analysis illustrates that such mappings primarily reflect geometric similarity within vector spaces rather than indicating the genuine emergence of semantic properties.",0 "Manual parameter tuning of cyber-physical systems is a common practice, but it is labor-intensive. Bayesian Optimization (BO) offers an automated alternative, yet its black-box nature reduces trust and limits human-BO collaborative system tuning. Experts struggle to interpret BO recommendations due to the lack of explanations. This paper addresses the post-hoc BO explainability problem for cyber-physical systems. We introduce TNTRules (Tune-No-Tune Rules), a novel algorithm that provides both global and local explanations for BO recommendations. TNTRules generates actionable rules and visual graphs, identifying optimal solution bounds and ranges, as well as potential alternative solutions. Unlike existing explainable AI (XAI) methods, TNTRules is tailored specifically for BO, by encoding uncertainty via a variance pruning technique and hierarchical agglomerative clustering. A multi-objective optimization approach allows maximizing explanation quality. We evaluate TNTRules using established XAI metrics (Correctness, Completeness, and Compactness) and compare it against adapted baseline methods. 
The results demonstrate that TNTRules generates high-fidelity, compact, and complete explanations, significantly outperforming three baselines on 5 multi-objective testing functions and 2 hyperparameter tuning problems.",0 "Deep neural networks (DNNs) have achieved remarkable success across domains but remain difficult to interpret, limiting their trustworthiness in high-stakes applications. This paper focuses on deep vision models, for which a dominant line of explainability methods is Class Activation Mapping (CAM) and its variants, which work by highlighting spatial regions that drive predictions. We find that CAM provides little semantic insight into what attributes underlie these activations. To address this limitation, we propose TextCAM, a novel explanation framework that enriches CAM with natural language. TextCAM combines the precise spatial localization of CAM with the semantic alignment of vision-language models (VLMs). Specifically, we derive channel-level semantic representations using CLIP embeddings and linear discriminant analysis, and aggregate them with CAM weights to produce textual descriptions of salient visual evidence. This yields explanations that jointly specify where the model attends and what visual attributes likely support its decision. We further extend TextCAM to group feature channels into semantically coherent groups, enabling more fine-grained visual-textual explanations. Experiments on ImageNet, CLEVR, and CUB demonstrate that TextCAM produces faithful and interpretable rationales that improve human understanding, detect spurious correlations, and preserve model fidelity.",0 "Recent progress in auditory intelligence has yielded high-performing systems for sound event detection (SED), acoustic scene classification (ASC), automated audio captioning (AAC), and audio question answering (AQA). Yet these tasks remain largely constrained to surface-level recognition, capturing what happened but not why, what it implies, or how it unfolds in context. I propose a conceptual reframing of auditory intelligence as a layered, situated process that encompasses perception, reasoning, and interaction. To instantiate this view, I introduce four cognitively inspired task paradigms (ASPIRE, SODA, AUX, and AUGMENT) that structure auditory understanding across time-frequency pattern captioning, hierarchical event/scene description, causal explanation, and goal-driven interpretation, respectively. Together, these paradigms provide a roadmap toward more generalizable, explainable, and human-aligned auditory intelligence, and are intended to catalyze a broader discussion of what it means for machines to understand sound.",0 "Recent advances in Artificial Intelligence Generated Content have led to highly realistic synthetic videos, particularly in human-centric scenarios involving speech, gestures, and full-body motion, posing serious threats to information authenticity and public trust. Unlike DeepFake techniques that focus on localized facial manipulation, human-centric video generation methods can synthesize entire human bodies with controllable movements, enabling complex interactions with environments, objects, and even other people. However, existing detection methods largely overlook the growing risks posed by such full-body synthetic content. Meanwhile, a growing body of research has explored leveraging LLMs for interpretable fake detection, aiming to explain decisions in natural language. 
Yet these approaches heavily depend on supervised fine-tuning, which introduces limitations such as annotation bias, hallucinated supervision, and weakened generalization. To address these challenges, we propose AvatarShield, a novel multimodal human-centric synthetic video detection framework that eliminates the need for dense textual supervision by adopting Group Relative Policy Optimization, enabling LLMs to develop reasoning capabilities from simple binary labels. Our architecture combines a discrete vision tower for high-level semantic inconsistencies and a residual extractor for fine-grained artifact analysis. We further introduce FakeHumanVid, a large-scale benchmark containing 15K real and synthetic videos across nine state-of-the-art human generation methods driven by text, pose, or audio. Extensive experiments demonstrate that AvatarShield outperforms existing methods in both in-domain and cross-domain settings.",1 "Large Language Models (LLMs) such as GPT, LLaMA, and Claude achieve remarkable performance in text generation but remain opaque in their decision-making processes, limiting trust and accountability in high-stakes applications. We present gSMILE (generative SMILE), a model-agnostic, perturbation-based framework for token-level interpretability in LLMs. Extending the SMILE methodology, gSMILE uses controlled prompt perturbations, Wasserstein distance metrics, and weighted linear surrogates to identify input tokens with the most significant impact on the output. This process enables the generation of intuitive heatmaps that visually highlight influential tokens and reasoning paths. We evaluate gSMILE across leading LLMs (OpenAI's gpt-3.5-turbo-instruct, Meta's LLaMA 3.1 Instruct Turbo, and Anthropic's Claude 2.1) using attribution fidelity, attribution consistency, attribution stability, attribution faithfulness, and attribution accuracy as metrics. Results show that gSMILE delivers reliable human-aligned attributions, with Claude 2.1 excelling in attention fidelity and GPT-3.5 achieving the highest output consistency. These findings demonstrate gSMILE's ability to balance model performance and interpretability, enabling more transparent and trustworthy AI systems.",0 "Motor dysfunction is a common sign of neurodegenerative diseases (NDs) such as Parkinson's disease (PD) and Alzheimer's disease (AD), but may be difficult to detect, especially in the early stages. In this work, we examine the behavior of a wide array of explainable metrics extracted from the handwriting signals of 113 subjects performing multiple tasks on a digital tablet, as part of the Neurological Signals dataset. The aim is to measure their effectiveness in characterizing NDs, including AD and PD. To this end, task-agnostic and task-specific metrics are extracted from 14 distinct tasks. Subsequently, through statistical analysis and a series of classification experiments, we investigate which metrics provide greater discriminative power between NDs and healthy controls and amongst different NDs. Preliminary results indicate that the tasks at hand can all be effectively leveraged to distinguish between the considered set of NDs, specifically by measuring the stability, the speed of writing, the time spent not writing, and the pressure variations between groups from our handcrafted explainable metrics, which shows p-values lower than 0.0001 for multiple tasks. 
Using various binary classification algorithms on the computed metrics, we obtain up to 87 % accuracy for the discrimination between AD and healthy controls (CTL), and up to 69 % for the discrimination between PD and CTL.",2 "Accurate diagnosis of skin diseases remains a significant challenge due to the complex and diverse visual features present in dermatoscopic images, often compounded by a lack of interpretability in existing purely visual diagnostic models. To address these limitations, this study introduces VL-MedGuide (Visual-Linguistic Medical Guide), a novel framework leveraging the powerful multi-modal understanding and reasoning capabilities of Visual-Language Large Models (LVLMs) for intelligent and inherently interpretable auxiliary diagnosis of skin conditions. VL-MedGuide operates in two interconnected stages: a Multi-modal Concept Perception Module, which identifies and linguistically describes dermatologically relevant visual features through sophisticated prompt engineering, and an Explainable Disease Reasoning Module, which integrates these concepts with raw visual information via Chain-of-Thought prompting to provide precise disease diagnoses alongside transparent rationales. Comprehensive experiments on the Derm7pt dataset demonstrate that VL-MedGuide achieves state-of-the-art performance in both disease diagnosis (83.55% BACC, 80.12% F1) and concept detection (76.10% BACC, 67.45% F1), surpassing existing baselines. Furthermore, human evaluations confirm the high clarity, completeness, and trustworthiness of its generated explanations, bridging the gap between AI performance and clinical utility by offering actionable, explainable insights for dermatological practice.",1 "Automated Program Repair (APR) seeks to automatically correct software bugs without requiring human intervention. However, existing tools tend to generate patches that satisfy test cases without fixing the underlying bug, those are known as overfitting patches. To address this issue, Automated Patch Correctness Assessment (APCA) attempts to identify overfitting patches generated by APR tools. It can be solved as a static approach, meaning that no additional information is needed beyond the original and fixed code snippets. Current static techniques often struggle with reliability, flexibility and transparency. To address these issues, we introduce RePaCA, a novel static APCA technique that leverages Large Language Models (LLMs) specialized in thinking tasks. Our model is prompted with both buggy and fixed code snippets and guided to generate a Chain of Thought that analyses code differences, reasons about how the patch addresses the root cause, and ultimately provides a binary classification: correct or overfitting. To enhance these reasoning capabilities for the APCA task specifically, the LLM is finetuned using Reinforcement Learning with the Group Relative Policy Optimization algorithm. When evaluated on a standard Defects4J-derived test, our approach achieves state-of-the-art performance, with 83.1% accuracy and an 84.8% F1-score. Furthermore, our model demonstrates superior generalization capabilities when trained on different datasets, outperforming the leading technique. This reasoning capability also provides enhanced explainability for the patch assessment. 
These findings underscore the considerable promise of finetuned, reasoning LLMs to advance static APCA by enhancing accuracy, generalization, and explainability.",0 "Since the earliest proposals for artificial neural network (ANN) models of the mind and brain, critics have pointed out key weaknesses in these models compared to human cognitive abilities. Here we review recent work that uses metalearning to overcome several classic challenges, which we characterize as addressing the Problem of Incentive and Practice -- that is, providing machines with both incentives to improve specific skills and opportunities to practice those skills. This explicit optimization contrasts with more conventional approaches that hope the desired behaviour will emerge through optimizing related but different objectives. We review applications of this principle to addressing four classic challenges for ANNs: systematic generalization, catastrophic forgetting, few-shot learning and multi-step reasoning. We also discuss how large language models incorporate key aspects of this metalearning framework (namely, sequence prediction with feedback trained on diverse data), which helps to explain some of their successes on these classic challenges. Finally, we discuss the prospects for understanding aspects of human development through this framework, and whether natural environments provide the right incentives and practice for learning how to make challenging generalizations.",2 "Sparse autoencoders (SAEs) decompose large language model (LLM) activations into latent features that reveal mechanistic structure. Conventional SAEs train on broad data distributions, forcing a fixed latent budget to capture only high-frequency, generic patterns. This often results in significant linear ``dark matter'' in reconstruction error and produces latents that fragment or absorb each other, complicating interpretation. We show that restricting SAE training to a well-defined domain (medical text) reallocates capacity to domain-specific features, improving both reconstruction fidelity and interpretability. Training JumpReLU SAEs on layer-20 activations of Gemma-2 models using 195k clinical QA examples, we find that domain-confined SAEs explain up to 20\% more variance, achieve higher loss recovery, and reduce linear residual error compared to broad-domain SAEs. Automated and human evaluations confirm that learned features align with clinically meaningful concepts (e.g., ``taste sensations'' or ``infectious mononucleosis''), rather than frequent but uninformative tokens. These domain-specific SAEs capture relevant linear structure, leaving a smaller, more purely nonlinear residual. We conclude that domain-confinement mitigates key limitations of broad-domain SAEs, enabling more complete and interpretable latent decompositions, and suggesting the field may need to question ``foundation-model'' scaling for general-purpose SAEs.",1 "With the development of generative artificial intelligence (GenAI) tools to create art, stakeholders cannot come to an agreement on the value of these works. In this study we uncovered the mixed opinions surrounding art made by AI. We developed two versions of a dance performance augmented by technology either with or without GenAI. For each version we informed audiences of the performance's development either before or after a survey on their perceptions of the performance. There were thirty-nine participants (13 males, 26 female) divided between the four performances. 
Results demonstrated that individuals were more inclined to attribute artistic merit to works made by GenAI when they were unaware of its use. We present this case study as a call to address the importance of utilizing the social context and the users' interpretations of GenAI in shaping a technical explanation, leading to a greater discussion that can bridge gaps in understanding.",0 "Companion chatbots offer a potential solution to the growing epidemic of loneliness, but their impact on users' psychosocial well-being remains poorly understood, raising critical ethical questions about their deployment and design. This study presents a large-scale survey (n = 404) of regular users of companion chatbots, investigating the relationship between chatbot usage and loneliness. We develop a model explaining approximately 50% of variance in loneliness; while usage does not directly predict loneliness, we identify factors including neuroticism, social network size, and problematic use. Through cluster analysis and mixed-methods thematic analysis combining manual coding with automated theme extraction, we identify seven distinct user profiles demonstrating that companion chatbots can either enhance or potentially harm psychological well-being depending on user characteristics. Different usage patterns can lead to markedly different outcomes, with some users experiencing enhanced social confidence while others risk further isolation. These findings have significant implications for responsible AI development, suggesting that one-size-fits-all approaches to AI companionship may be ethically problematic. Our work contributes to the ongoing dialogue about the role of AI in social and emotional support, offering insights for developing more targeted and ethical approaches to AI companionship that complement rather than replace human connections.",2 "Krylov complexity, a quantum complexity measure which uniquely characterizes the spread of a quantum state or an operator, has recently been studied in the context of quantum chaos. However, the definitiveness of this measure as a chaos quantifier is in question in light of its strong dependence on the initial condition. This article clarifies the connection between the Krylov complexity dynamics and the initial operator or state. We find that the Krylov complexity depends monotonically on the inverse participation ratio (IPR) of the initial condition in the eigenbasis of the Hamiltonian. We explain the reversal of the complexity saturation levels observed in \href{https://doi.org/10.1103/PhysRevE.107.024217}{ Phys.Rev.E.107,024217, 2023} using the initial spread of the operator in the Hamiltonian eigenbasis. IPR dependence is present even in the fully chaotic regime, where popular quantifiers of chaos, such as out-of-time-ordered correlators and entanglement generation, show similar behavior regardless of the initial condition. Krylov complexity averaged over many initial conditions still does not characterize chaos.",0 "Recent studies highlight various machine learning (ML)-based techniques for code clone detection, which can be integrated into developer tools such as static code analysis. With the advancements brought by ML in code understanding, ML-based code clone detectors could accurately identify and classify cloned pairs, especially semantic clones, but often operate as black boxes, providing little insight into the decision-making process. 
Post hoc explainers, on the other hand, aim to interpret and explain the predictions of these ML models after they are made, offering a way to understand the underlying mechanisms driving the model's decisions. However, current post hoc techniques require white-box access to the ML model or are computationally expensive, indicating a need for advanced post hoc explainers. In this paper, we propose a novel approach that leverages the in-context learning capabilities of large language models to elucidate the predictions made by ML-based code clone detectors. We perform a study using ChatGPT-4 to explain the code clone results inferred by GraphCodeBERT. We found that our approach is promising as a post hoc explainer, giving correct explanations up to 98% of the time and offering good explanations 95% of the time. However, the explanations and the code line examples given by the LLM are useful only in some cases. We also found that lowering the temperature to zero helps increase the accuracy of the explanation. Lastly, we list the insights that can lead to further improvements in future work. This study paves the way for future studies in using LLMs as a post hoc explainer for various software engineering tasks.",0 "Electrospun yarns often fall short of the strength and stiffness of their constituent nanofibers because of loose packing and inter-fiber slip. We report a simple, twist-free route to close this gap by liquid-assisted rolling: yarns are briefly wetted (water or ethanol) and subjected to gentle rolling action (mechanical strokes perpendicular and parallel to the yarn axis), then dried under controlled conditions so that meniscus forces compact the assembly into tightly bound bundles. The treatment yields large gains in tensile strength and modulus, and as yarn diameter decreases the properties of liquid-treated yarns approach single-fiber limits, indicating more efficient load transfer. Dry-rolling controls produce negligible changes compared to as-spun yarns, confirming that capillarity-driven consolidation, rather than mechanical pressing, dominates the improvement. Water consistently outperforms ethanol, reflecting its larger elastocapillary driving term $\gamma(1 + \cos\theta)$ on PAN and thus stronger capillary compaction; a short post-treatment anneal near $T_g$ further increases stiffness with a corresponding reduction in ductility. To rationalize these trends, we quantify microstructure via SEM-derived alignment and packing density and show that these complementary descriptors jointly explain variability in mechanical response. A compact constitutive framework, grounded in distributed fiber recruitment and adhesion/frictional contact, captures the observed strengthening-ductility trade-off across processing routes. The results establish capillarity-driven consolidation as a scalable pathway to engineer processing-structure-property relationships in hierarchical polymer fiber assemblies and provide practical guidance for upgrading electrospun yarns, alone or as precursors to twisted and composite architectures.",0 "Counterfactual explanations (CFs) offer human-centric insights into machine learning predictions by highlighting minimal changes required to alter an outcome. Therefore, CFs can be used as (i) interventions for abnormality prevention and (ii) augmented data for training robust models. In this work, we explore large language models (LLMs), specifically GPT-4o-mini, for generating CFs in a zero-shot and three-shot setting. 
We evaluate our approach on two datasets: the AI-Readi flagship dataset for stress prediction and a public dataset for heart disease detection. Compared to traditional methods such as DiCE, CFNOW, and NICE, our few-shot LLM-based approach achieves high plausibility (up to 99%), strong validity (up to 0.99), and competitive sparsity. Moreover, using LLM-generated CFs as augmented samples improves downstream classifier performance (an average accuracy gain of 5%), especially in low-data regimes. This demonstrates the potential of prompt-based generative techniques to enhance explainability and robustness in clinical and physiological prediction tasks. Code base: github.com/shovito66/SenseCF.",0 "Capturing the similarities between human language units is crucial for explaining how humans associate different objects, and therefore its computation has received extensive attention, research, and applications. With the ever-increasing amount of information around us, calculating similarity becomes increasingly complex, especially in many cases, such as legal or medical affairs, measuring similarity requires extra care and precision, as small acts within a language unit can have significant real-world effects. My research goal in this thesis is to develop regression models that account for similarities between language units in a more refined way. Computation of similarity has come a long way, but approaches to debugging the measures are often based on continually fitting human judgment values. To this end, my goal is to develop an algorithm that precisely catches loopholes in a similarity calculation. Furthermore, most methods have vague definitions of the similarities they compute and are often difficult to interpret. The proposed framework addresses both shortcomings. It constantly improves the model through catching different loopholes. In addition, every refinement of the model provides a reasonable explanation. The regression model introduced in this thesis is called progressively refined similarity computation, which combines attack testing with adversarial training. The similarity regression model of this thesis achieves state-of-the-art performance in handling edge cases.",2 "How thousands of microtubules and molecular motors self-organize into spindles remains poorly understood. By combining static, nanometer-resolution, large-scale electron tomography reconstructions and dynamic, optical-resolution, polarized light microscopy, we test an active liquid crystal continuum model of mitotic spindles in human tissue culture cells. The predictions of this coarse-grained theory quantitatively agree with the experimentally measured spindle morphology and fluctuation spectra. These findings argue that local interactions and polymerization produce collective alignment, diffusive-like motion, and polar transport which govern the behaviors of the spindle's microtubule network, and provide a means to measure the spindle's material properties. This work demonstrates that a coarse-grained theory featuring measurable, physically-interpretable parameters can quantitatively describe the mechanical behavior and self-organization of human mitotic spindles.",1 "As software systems grow increasingly complex, explainability has become a crucial non-functional requirement for transparency, user trust, and regulatory compliance. Eliciting explainability requirements is challenging, as different methods capture varying levels of detail and structure. 
This study examines the efficiency and effectiveness of three commonly used elicitation methods - focus groups, interviews, and online surveys - while also assessing the role of taxonomy usage in structuring and improving the elicitation process. We conducted a case study at a large German IT consulting company, utilizing a web-based personnel management software. A total of two focus groups, 18 interviews, and an online survey with 188 participants were analyzed. The results show that interviews were the most efficient, capturing the highest number of distinct needs per participant per time spent. Surveys collected the most explanation needs overall but had high redundancy. Delayed taxonomy introduction resulted in a greater number and diversity of needs, suggesting that a two-phase approach is beneficial. Based on our findings, we recommend a hybrid approach combining surveys and interviews to balance efficiency and coverage. Future research should explore how automation can support elicitation and how taxonomies can be better integrated into different methods.",2 "The Internet of Electric Vehicles (IoEV) envisions a tightly coupled ecosystem of electric vehicles (EVs), charging infrastructure, and grid services, yet it remains vulnerable to cyberattacks, unreliable battery-state predictions, and opaque decision processes that erode trust and performance. To address these challenges, we introduce a novel Agentic Artificial Intelligence (AAI) framework tailored for IoEV, where specialized agents collaborate to deliver autonomous threat mitigation, robust analytics, and interpretable decision support. Specifically, we design an AAI architecture comprising dedicated agents for cyber-threat detection and response at charging stations, real-time State of Charge (SoC) estimation, and State of Health (SoH) anomaly detection, all coordinated through a shared, explainable reasoning layer; develop interpretable threat-mitigation mechanisms that proactively identify and neutralize attacks on both physical charging points and learning components; propose resilient SoC and SoH models that leverage continuous and adversarial-aware learning to produce accurate, uncertainty-aware forecasts with human-readable explanations; and implement a three-agent pipeline, where each agent uses LLM-driven reasoning and dynamic tool invocation to interpret intent, contextualize tasks, and execute formal optimizations for user-centric assistance. Finally, we validate our framework through comprehensive experiments across diverse IoEV scenarios, demonstrating significant improvements in security and prediction accuracy. All datasets, models, and code will be released publicly.",0 "Mutual understanding of artificial agents' decisions is key to ensuring a trustworthy and successful human-robot interaction. Hence, robots are expected to make reasonable decisions and communicate them to humans when needed. In this article, the focus is on an approach to modeling and reasoning about the comparison of two competing plans, so that robots can later explain the divergent result. First, a novel ontological model is proposed to formalize and reason about the differences between competing plans, enabling the classification of the most appropriate one (e.g., the shortest, the safest, the closest to human preferences, etc.). This work also investigates the limitations of a baseline algorithm for ontology-based explanatory narration. 
To address these limitations, a novel algorithm is presented, leveraging divergent knowledge between plans and facilitating the construction of contrastive narratives. Through empirical evaluation, it is observed that the explanations outperform those of the baseline method.",0 "Chemical reaction recommendation is the task of selecting proper reaction condition parameters for chemical reactions, which is pivotal to accelerating chemical science. With the rapid development of large language models (LLMs), there is growing interest in leveraging their reasoning and planning capabilities for reaction condition recommendation. Despite their success, existing methods rarely explain the rationale behind the recommended reaction conditions, limiting their utility in high-stakes scientific workflows. In this work, we propose ChemMAS, a multi-agent system that reframes condition prediction as an evidence-based reasoning task. ChemMAS decomposes the task into mechanistic grounding, multi-channel recall, constraint-aware agentic debate, and rationale aggregation. Each decision is backed by interpretable justifications grounded in chemical knowledge and retrieved precedents. Experiments show that ChemMAS achieves 20-35% gains over domain-specific baselines and outperforms general-purpose LLMs by 10-15% in Top-1 accuracy, while offering falsifiable, human-trustable rationales, which establishes a new paradigm for explainable AI in scientific discovery.",0 "In large-scale maintenance organizations, identifying subject matter experts and managing communications across complex entity relationships poses significant challenges -- including information overload and longer response times -- that traditional communication approaches fail to address effectively. We propose a novel framework that combines RDF graph databases with LLMs to process natural language queries for precise audience targeting, while providing transparent reasoning through a planning-orchestration architecture. Our solution enables communication owners to formulate intuitive queries combining concepts such as equipment, manufacturers, maintenance engineers, and facilities, delivering explainable results that maintain trust in the system while improving communication efficiency across the organization.",0 "Adapting trajectories to dynamic situations and user preferences is crucial for robot operation in unstructured environments with non-expert users. Natural language enables users to express these adjustments in an interactive manner. We introduce OVITA, an interpretable, open-vocabulary, language-driven framework designed for adapting robot trajectories in dynamic and novel situations based on human instructions. OVITA leverages multiple pre-trained Large Language Models (LLMs) to integrate user commands into trajectories generated by motion planners or those learned through demonstrations. OVITA employs code as an adaptation policy generated by an LLM, enabling users to adjust individual waypoints, thus providing flexible control. Another LLM, which acts as a code explainer, removes the need for expert users, enabling intuitive interactions. 
The efficacy and significance of the proposed OVITA framework are demonstrated through extensive simulations and real-world environments with diverse tasks involving spatiotemporal variations on heterogeneous robotic platforms such as a KUKA IIWA robot manipulator, Clearpath Jackal ground robot, and CrazyFlie drone.",0 "Graph neural networks have demonstrated remarkable success in predicting molecular properties by leveraging the rich structural information encoded in molecular graphs. However, their black-box nature reduces interpretability, which limits trust in their predictions for important applications such as drug discovery and materials design. Furthermore, existing explanation techniques often fail to reliably quantify the contribution of individual atoms or substructures due to the entangled message-passing dynamics. We introduce SEAL (Substructure Explanation via Attribution Learning), a new interpretable graph neural network that attributes model predictions to meaningful molecular subgraphs. SEAL decomposes input graphs into chemically relevant fragments and estimates their causal influence on the output. The strong alignment between fragment contributions and model predictions is achieved by explicitly reducing inter-fragment message passing in our proposed model architecture. Extensive evaluations on synthetic benchmarks and real-world molecular datasets demonstrate that SEAL outperforms other explainability methods in both quantitative attribution metrics and human-aligned interpretability. A user study further confirms that SEAL provides more intuitive and trustworthy explanations to domain experts. By bridging the gap between predictive performance and interpretability, SEAL offers a promising direction for more transparent and actionable molecular modeling.",1 "Identifying long COVID symptoms is a challenging task, primarily due to the reliance on patient reports and the lack of disease-specific biomarkers. The objective of this study is to identify individual long COVID symptoms, post-COVID-19 conditions (PCC) participants, and participants' sex, and to identify the associated brain regions by developing an explainable machine learning algorithm using brain MRI features. This study implements secondary analysis using an anonymized, publicly accessible dataset that categorizes participants into three groups: the PCC group, the Unimpaired Post-COVID-19 group (UPC), and the Healthy Non-COVID group (HNC), each with corresponding symptoms, demographics, and brain structural MRI features. The aim is to develop and cross-validate a support vector classifier (SVC) algorithm to identify the occurrence of various target labels from the dataset. The SVC classifier identified the occurrence of long-COVID symptoms with varying performance for different target labels. The model performance and influential area are identified and discussed in light of previous research. The demonstrated approach offers an alternative modality for determining the occurrence of long COVID symptoms based on neuroimaging biomarkers.",0 "In the context of AI-based decision support systems, explanations can help users to judge when to trust the AI's suggestion, and when to question it. In this way, human oversight can prevent AI errors and biased decision-making. However, this rests on the assumption that users will consider explanations in enough detail to be able to catch such errors. 
We conducted an online study on trust in explainable DSS, and were surprised to find that in many cases, participants spent little time on the explanation and did not always consider it in detail. We present an exploratory analysis of this data, investigating what factors impact how carefully study participants consider AI explanations, and how this in turn impacts whether they are open to changing their mind based on what the AI suggests.",0 "Subtrait (latent-trait components) assessment presents a promising path toward enhancing transparency of automated writing scores. We prototype explainability and subtrait scoring with generative language models and show modest correlation between human subtrait and trait scores, and between automated and human subtrait scores. Our approach provides details to demystify scores for educators and students.",2 "LLMs enable qualitative coding at large scale, but assessing reliability remains challenging where human experts seldom agree. We investigate confidence-diversity calibration as a quality assessment framework for accessible coding tasks where LLMs already demonstrate strong performance but exhibit overconfidence. Analysing 5,680 coding decisions from eight state-of-the-art LLMs across ten categories, we find that mean self-confidence tracks inter-model agreement closely (Pearson r=0.82). Adding model diversity quantified as normalised Shannon entropy produces a dual signal explaining agreement almost completely (R-squared=0.979), though this high predictive power likely reflects task simplicity for current LLMs. The framework enables a three-tier workflow auto-accepting 35 percent of segments with less than 5 percent error, cutting manual effort by 65 percent. Cross-domain validation confirms transferability (kappa improvements of 0.20 to 0.78). While establishing a methodological foundation for AI judgement calibration, the true potential likely lies in more challenging scenarios where LLMs may demonstrate comparative advantages over human cognitive limitations.",0 "Detecting elephants through seismic signals is an emerging research topic aimed at developing solutions for Human-Elephant Conflict (HEC). Despite the promising results, such solutions heavily rely on manual classification of elephant footfalls, which limits their applicability for real-time classification in natural settings. To address this limitation and build on our previous work, this study introduces a classification framework targeting resource-constrained implementations, prioritizing both accuracy and computational efficiency. As part of this framework, a novel event detection technique named Contextually Customized Windowing (CCW), tailored specifically for detecting elephant footfalls, was introduced, and evaluations were conducted by comparing it with the Short-Term Average/Long-Term Average (STA/LTA) method. The yielded results show that the maximum validated detection range was 155.6 m in controlled conditions and 140 m in natural environments. Elephant footfall classification using Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel demonstrated superior performance across multiple settings, achieving an accuracy of 99% in controlled environments, 73% in natural elephant habitats, and 70% in HEC-prone human habitats, the most challenging scenario. 
Furthermore, feature impact analysis using explainable AI identified the number of Zero Crossings and Dynamic Time Warping (DTW) Alignment Cost as the most influential factors in all experiments, while Predominant Frequency exhibited significant influence in controlled settings.",1 "While automatic subjective speech quality assessment has witnessed much progress, an open question is whether an automatic quality assessment at frame resolution is possible. This would be highly desirable, as it adds explainability to the assessment of speech synthesis systems. Here, we take first steps towards this goal by identifying issues of existing quality predictors that prevent sensible frame-level prediction. Further, we define criteria that a frame-level predictor should fulfill. We also suggest a chunk-based processing that avoids the impact of a localized distortion on the score of neighboring frames. Finally, in experiments with localized artificial distortions, we measure the localization performance of a set of frame-level quality predictors and show that they can outperform the detection performance of human annotations obtained from a crowd-sourced perception experiment.",1 "Large language models (LLMs) are beginning to reshape how chemists plan and run reactions in organic synthesis. Trained on millions of reported transformations, these text-based models can propose synthetic routes, forecast reaction outcomes and even instruct robots that execute experiments without human supervision. Here we survey the milestones that turned LLMs from speculative tools into practical lab partners. We show how coupling LLMs with graph neural networks, quantum calculations and real-time spectroscopy shrinks discovery cycles and supports greener, data-driven chemistry. We discuss limitations, including biased datasets, opaque reasoning and the need for safety gates that prevent unintentional hazards. Finally, we outline community initiatives (open benchmarks, federated learning and explainable interfaces) that aim to democratize access while keeping humans firmly in control. These advances chart a path towards rapid, reliable and inclusive molecular innovation powered by artificial intelligence and automation.",2 "Over time, software systems have reached a level of complexity that makes it difficult for their developers and users to explain particular decisions made by them. In this paper, we focus on the explainability of component-based systems for Question Answering (QA). These components often conduct processes driven by AI methods, in which behavior and decisions cannot be clearly explained or justified, such that even for QA experts, interpreting the executed process and its results is hard. To address this challenge, we present an approach that considers the components' input and output data flows as a source for representing the behavior of the components and providing explanations for them, enabling users to comprehend what happened. In the QA framework used here, the data flows of the components are represented as SPARQL queries (inputs) and RDF triples (outputs). Hence, we are also providing valuable insights on verbalization regarding these data types. In our experiments, the approach generates explanations while following template-based settings (baseline) or via the use of Large Language Models (LLMs) with different configurations (automatic generation). Our evaluation shows that the explanations generated via LLMs achieve high quality and mostly outperform template-based approaches according to the users' ratings. 
Therefore, it enables us to automatically explain the behavior and decisions of QA components to humans while using RDF and SPARQL as a context for explanations.",1 "This demonstration paper presents $\mathbf{LayLens}$, a tool aimed to make deepfake understanding easier for users of all educational backgrounds. While prior works often rely on outputs containing technical jargon, LayLens bridges the gap between model reasoning and human understanding through a three-stage pipeline: (1) explainable deepfake detection using a state-of-the-art forgery localization model, (2) natural language simplification of technical explanations using a vision-language model, and (3) visual reconstruction of a plausible original image via guided image editing. The interface presents both technical and layperson-friendly explanations in addition to a side-by-side comparison of the uploaded and reconstructed images. A user study with 15 participants shows that simplified explanations significantly improve clarity and reduce cognitive load, with most users expressing increased confidence in identifying deepfakes. LayLens offers a step toward transparent, trustworthy, and user-centric deepfake forensics.",0 "Smart contracts automate the management of high-value assets, where vulnerabilities can lead to catastrophic financial losses. This challenge is amplified in Large Language Models (LLMs) by two interconnected failures: they operate as unauditable ""black boxes"" lacking a transparent reasoning process, and consequently, generate code riddled with critical security vulnerabilities. To address both issues, we propose SmartCoder-R1 (based on Qwen2.5-Coder-7B), a novel framework for secure and explainable smart contract generation. It begins with Continual Pre-training (CPT) to specialize the model. We then apply Long Chain-of-Thought Supervised Fine-Tuning (L-CoT SFT) on 7,998 expert-validated reasoning-and-code samples to train the model to emulate human security analysis. Finally, to directly mitigate vulnerabilities, we employ Security-Aware Group Relative Policy Optimization (S-GRPO), a reinforcement learning phase that refines the generation policy by optimizing a weighted reward signal for compilation success, security compliance, and format correctness. Evaluated against 17 baselines on a benchmark of 756 real-world functions, SmartCoder-R1 establishes a new state of the art, achieving top performance across five key metrics: a ComPass of 87.70%, a VulRate of 8.60%, a SafeAval of 80.16%, a FuncRate of 53.84%, and a FullRate of 50.53%. This FullRate marks a 45.79% relative improvement over the strongest baseline, DeepSeek-R1. Crucially, its generated reasoning also excels in human evaluations, achieving high-quality ratings for Functionality (82.7%), Security (85.3%), and Clarity (90.7%).",0 "Shapes of cognition is a new conceptual paradigm for the computational cognitive modeling of Language-Endowed Intelligent Agents (LEIAs). Shapes are remembered constellations of sensory, linguistic, conceptual, episodic, and procedural knowledge that allow agents to cut through the complexity of real life the same way as people do: by expecting things to be typical, recognizing patterns, acting by habit, reasoning by analogy, satisficing, and generally minimizing cognitive load to the degree situations permit. Atypical outcomes are treated using shapes-based recovery methods, such as learning on the fly, asking a human partner for help, or seeking an actionable, even if imperfect, situational understanding. 
Although shapes is an umbrella term, it is not vague: shapes-based modeling involves particular objectives, hypotheses, modeling strategies, knowledge bases, and actual models of wide-ranging phenomena, all implemented within a particular cognitive architecture. Such specificity is needed both to vet our hypotheses and to achieve our practical aims of building useful agent systems that are explainable, extensible, and worthy of our trust, even in critical domains. However, although the LEIA example of shapes-based modeling is specific, the principles can be applied more broadly, giving new life to knowledge-based and hybrid AI.",1 "Reader curiosity, the drive to seek information, is crucial for textual engagement, yet remains relatively underexplored in NLP. Building on Loewenstein's Information Gap Theory, we introduce a framework that models reader curiosity by quantifying semantic information gaps within a text's semantic structure. Our approach leverages BERTopic-inspired topic modeling and persistent homology to analyze the evolving topology (connected components, cycles, voids) of a dynamic semantic network derived from text segments, treating these features as proxies for information gaps. To empirically evaluate this pipeline, we collect reader curiosity ratings from participants (n = 49) as they read S. Collins's ''The Hunger Games'' novel. We then use the topological features from our pipeline as independent variables to predict these ratings, and experimentally show that they significantly improve curiosity prediction compared to a baseline model (73% vs. 30% explained deviance), validating our approach. This pipeline offers a new computational method for analyzing text structure and its relation to reader engagement.",1 "Information Pursuit (IP) is an explainable prediction algorithm that greedily selects a sequence of interpretable queries about the data in order of information gain, updating its posterior at each step based on observed query-answer pairs. The standard paradigm uses hand-crafted dictionaries of potential data queries curated by a domain expert or a large language model after a human prompt. However, in practice, hand-crafted dictionaries are limited by the expertise of the curator and the heuristics of prompt engineering. This paper introduces a novel approach: learning a dictionary of interpretable queries directly from the dataset. Our query dictionary learning problem is formulated as an optimization problem by augmenting IP's variational formulation with learnable dictionary parameters. To formulate learnable and interpretable queries, we leverage the latent space of large vision and language models like CLIP. To solve the optimization problem, we propose a new query dictionary learning algorithm inspired by classical sparse dictionary learning. Our experiments demonstrate that learned dictionaries significantly outperform hand-crafted dictionaries generated with large language models.",2 "Contract review is a complex and time-intensive task that typically demands specialized legal expertise, rendering it largely inaccessible to non-experts. Moreover, legal interpretation is rarely straightforward-ambiguity is pervasive, and judgments often hinge on subjective assessments. Compounding these challenges, contracts are usually confidential, restricting their use with proprietary models and necessitating reliance on open-source alternatives. 
To address these challenges, we introduce PAKTON: a fully open-source, end-to-end, multi-agent framework with plug-and-play capabilities. PAKTON is designed to handle the complexities of contract analysis through collaborative agent workflows and a novel retrieval-augmented generation (RAG) component, enabling automated legal document review that is more accessible, adaptable, and privacy-preserving. Experiments demonstrate that PAKTON outperforms both general-purpose and pretrained models in predictive accuracy, retrieval performance, explainability, completeness, and grounded justifications as evaluated through a human study and validated with automated metrics.",0 "The security of software builds has attracted increased attention in recent years in response to incidents like SolarWinds and xz. Now, several companies including Oracle and Google rebuild open source projects in a secure environment and publish the resulting binaries through dedicated repositories. This practice enables direct comparison between these rebuilt binaries and the original ones produced by developers and published in repositories such as Maven Central. These binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process. While there are some tools that can be used for this purpose, they all fall short of providing provenance, i.e., a readable explanation of why two binaries are or are not equivalent. To address this issue, we present daleq, a tool that disassembles Java bytecode into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied by Datalog proofs recording the normalisation process. We demonstrate the impact of daleq in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, daleq is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that daleq outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.",0 "The field of AI-generated text detection has evolved from supervised classification to zero-shot statistical analysis. However, current approaches share a fundamental limitation: they aggregate token-level measurements into scalar scores, discarding positional information about where anomalies occur. Our empirical analysis reveals that AI-generated text exhibits significant non-stationarity: statistical properties vary by 73.8\% more between text segments than in human writing. This discovery explains why existing detectors fail against localized adversarial perturbations that exploit this overlooked characteristic. We introduce Temporal Discrepancy Tomography (TDT), a novel detection paradigm that preserves positional information by reformulating detection as a signal processing task. 
TDT treats token-level discrepancies as a time-series signal and applies Continuous Wavelet Transform to generate a two-dimensional time-scale representation, capturing both the location and linguistic scale of statistical anomalies. On the RAID benchmark, TDT achieves 0.855 AUROC (7.1\% improvement over the best baseline). More importantly, TDT demonstrates robust performance on adversarial tasks, with 14.1\% AUROC improvement on HART Level 2 paraphrasing attacks. Despite its sophisticated analysis, TDT maintains practical efficiency with only 13\% computational overhead. Our work establishes non-stationarity as a fundamental characteristic of AI-generated text and demonstrates that preserving temporal dynamics is essential for robust detection.",1 "Current explainable AI (XAI) approaches prioritize algorithmic transparency and present explanations in abstract, non-adaptive formats that often fail to support meaningful end-user understanding. This paper introduces ""Explanatory AI"" as a complementary paradigm that leverages generative AI capabilities to serve as explanatory partners for human understanding rather than providers of algorithmic transparency. While XAI reveals algorithmic decision processes for model validation, Explanatory AI addresses contextual reasoning to support human decision-making in sociotechnical contexts. We develop a definition and systematic eight-dimensional conceptual model distinguishing Explanatory AI through narrative communication, adaptive personalization, and progressive disclosure principles. Empirical validation through Rapid Contextual Design methodology with healthcare professionals demonstrates that users consistently prefer context-sensitive, multimodal explanations over technical transparency. Our findings reveal the practical urgency for AI systems designed for human comprehension rather than algorithmic introspection, establishing a comprehensive research agenda for advancing user-centered AI explanation approaches across diverse domains and cultural contexts.",2 "We propose a way to organise the subject of ``higher-order homological stability'', in the context of a graded $E_2$-algebra $\mathbf{R}$, along the same lines that the chromatic perspective organises stable homotopy theory. From this point of view proving a (higher-order) homological stability theorem corresponds to producing Smith--Toda complexes in the category of $\mathbf{R}$-modules: using this perspective we prove that whenever $\mathbf{R}$ is defined over a field of positive characteristic and satisfies some standard properties, there is a sequence of higher-order homological stability theorems whose slopes tend to 1. We propose that in a higher-order stable range the ``stable homology'' should be interpreted as certain Bousfield localisations in the category of $\mathbf{R}$-modules, leading to a chromatic tower and monochromatic layers. Given the existence of suitable Smith--Toda complexes we establish several properties of these localisations, in particular explaining how higher-order stabilisation maps yield periodic families in the monochromatic layers. We explain how to associate to such an $\mathbf{R}$ a Hopf algebra which completely governs the kinds of higher-order stability maps that it enjoys, in the sense that the cohomology of this Hopf algebra has precisely the same stability patterns as $\mathbf{R}$. 
When $\mathbf{R}$ comes from a sequence of groups, this Hopf algebra has a concrete description as the coinvariants of the $E_1$-Steinberg modules.",2 "We prove a fundamental impossibility theorem: neural networks cannot simultaneously learn well-calibrated confidence estimates with meaningful diversity when trained using binary correct/incorrect supervision. Through rigorous mathematical analysis and comprehensive empirical evaluation spanning negative reward training, symmetric loss functions, and post-hoc calibration methods, we demonstrate this is an information-theoretic constraint, not a methodological failure. Our experiments reveal universal failure patterns: negative rewards produce extreme underconfidence (ECE greater than 0.8) while destroying confidence diversity (std less than 0.05), symmetric losses fail to escape binary signal averaging, and post-hoc methods achieve calibration (ECE less than 0.02) only by compressing the confidence distribution. We formalize this as an underspecified mapping problem where binary signals cannot distinguish between different confidence levels for correct predictions: a 60 percent confident correct answer receives identical supervision to a 90 percent confident one. Crucially, our real-world validation shows 100 percent failure rate for all training methods across MNIST, Fashion-MNIST, and CIFAR-10, while post-hoc calibration's 33 percent success rate paradoxically confirms our theorem by achieving calibration through transformation rather than learning. This impossibility directly explains neural network hallucinations and establishes why post-hoc calibration is mathematically necessary, not merely convenient. We propose novel supervision paradigms using ensemble disagreement and adaptive multi-agent learning that could overcome these fundamental limitations without requiring human confidence annotations.",0 "Recent large vision-language models (LVLMs) have advanced capabilities in visual question answering (VQA). However, interpreting where LVLMs direct their visual attention remains a significant challenge, yet is essential for understanding model behavior. We introduce GLIMPSE (Gradient-Layer Importance Mapping for Prompted Visual Saliency Explanation), a lightweight, model-agnostic framework that jointly attributes LVLM outputs to the most relevant visual evidence and textual signals that support open-ended generation. GLIMPSE fuses gradient-weighted attention, adaptive layer propagation, and relevance-weighted token aggregation to produce holistic response-level heat maps for interpreting cross-modal reasoning, outperforming prior methods in faithfulness and pushing the state-of-the-art in human-attention alignment. We demonstrate an analytic approach to uncover fine-grained insights into LVLM cross-modal attribution, trace reasoning dynamics, analyze systematic misalignment, diagnose hallucination and bias, and ensure transparency.",0 "Interpreting the internal behavior of large language models trained on code remains a critical challenge, particularly for applications demanding trust, transparency, and semantic robustness. We propose Code Concept Analysis (CoCoA): a global post-hoc interpretability framework that uncovers emergent lexical, syntactic, and semantic structures in a code language model's representation space by clustering contextualized token embeddings into human-interpretable concept groups. 
We propose a hybrid annotation pipeline that combines static analysis tool-based syntactic alignment with prompt-engineered large language models (LLMs), enabling scalable labeling of latent concepts across abstraction levels. We analyse the distribution of concepts across layers and across three finetuning tasks. Emergent concept clusters can help identify unexpected latent interactions and be used to identify trends and biases within the model's learned representations. We further integrate CoCoA with local attribution methods to produce concept-grounded explanations, improving the coherence and interpretability of token-level saliency. Empirical evaluations across multiple models and tasks show that CoCoA discovers concepts that remain stable under semantic-preserving perturbations (average Cluster Sensitivity Index, CSI = 0.288) and evolve predictably with fine-tuning. In a user study on the programming-language classification task, concept-augmented explanations disambiguated token roles and improved human-centric explainability by 37 percentage points compared with token-level attributions using Integrated Gradients.",0 "Generative AI, such as Large Language Models (LLMs), has achieved impressive progress but still produces hallucinations and unverifiable claims, limiting reliability in sensitive domains. Retrieval-Augmented Generation (RAG) improves accuracy by grounding outputs in external knowledge, especially in domains like healthcare, where precision is vital. However, RAG remains opaque and essentially a black box, heavily dependent on data quality. We developed a method-agnostic, perturbation-based framework that provides token- and component-level interpretability for Graph RAG using SMILE, and named it Knowledge-Graph (KG)-SMILE. By applying controlled perturbations, computing similarities, and training weighted linear surrogates, KG-SMILE identifies the graph entities and relations most influential to generated outputs, thereby making RAG more transparent. We evaluate KG-SMILE using comprehensive attribution metrics, including fidelity, faithfulness, consistency, stability, and accuracy. Our findings show that KG-SMILE produces stable, human-aligned explanations, demonstrating its capacity to balance model effectiveness with interpretability and thereby fostering greater transparency and trust in machine learning technologies.",2 "Conceptual models such as Concept Bottleneck Models (CBMs) have driven substantial progress in improving interpretability for image classification by leveraging human-interpretable concepts. However, extending these models from static images to sequences of images, such as video data, introduces a significant challenge due to the temporal dependencies inherent in videos, which are essential for capturing actions and events. In this work, we introduce MoTIF (Moving Temporal Interpretable Framework), an architectural design inspired by a transformer that adapts the concept bottleneck framework for video classification and handles sequences of arbitrary length. Within the video domain, concepts refer to semantic entities such as objects, attributes, or higher-level components (e.g., 'bow', 'mount', 'shoot') that reoccur across time, forming motifs that collectively describe and explain actions. 
Our design explicitly enables three complementary perspectives: global concept importance across the entire video, local concept relevance within specific windows, and temporal dependencies of a concept over time. Our results demonstrate that the concept-based modeling paradigm can be effectively transferred to video data, enabling a better understanding of concept contributions in temporal contexts while maintaining competitive performance. Code available at github.com/patrick-knab/MoTIF.",0 "Consistent high-quality nursing care is essential for patient safety, yet current nursing education depends on subjective, time-intensive instructor feedback in training future nurses, which limits scalability and efficiency in their training, and thus hampers nursing competency when they enter the workforce. In this paper, we introduce a video-language model (VLM) based framework to develop the AI capability of automated procedural assessment and feedback for nursing skills training, with the potential of being integrated into existing training programs. Mimicking human skill acquisition, the framework follows a curriculum-inspired progression, advancing from high-level action recognition, fine-grained subaction decomposition, and ultimately to procedural reasoning. This design supports scalable evaluation by reducing instructor workload while preserving assessment quality. The system provides three core capabilities: 1) diagnosing errors by identifying missing or incorrect subactions in nursing skill instruction videos, 2) generating explainable feedback by clarifying why a step is out of order or omitted, and 3) enabling objective, consistent formative evaluation of procedures. Validation on synthesized videos demonstrates reliable error detection and temporal localization, confirming its potential to handle real-world training variability. By addressing workflow bottlenecks and supporting large-scale, standardized evaluation, this work advances AI applications in nursing education, contributing to stronger workforce development and ultimately safer patient care.",0 "Accurate assessment of neuromuscular reflexes, such as the H-reflex, plays a critical role in sports science, rehabilitation, and clinical neurology. Traditional analysis of H-reflex EMG waveforms is subject to variability and interpretation bias among clinicians and researchers, limiting reliability and standardization. To address these challenges, we propose a Fine-Tuned Vision-Language Model (VLM) Consortium and a reasoning Large-Language Model (LLM)-enabled Decision Support System for automated H-reflex waveform interpretation and diagnosis. Our approach leverages multiple VLMs, each fine-tuned on curated datasets of H-reflex EMG waveform images annotated with clinical observations, recovery timelines, and athlete metadata. These models are capable of extracting key electrophysiological features and predicting neuromuscular states, including fatigue, injury, and recovery, directly from EMG images and contextual metadata. Diagnostic outputs from the VLM consortium are aggregated using a consensus-based method and refined by a specialized reasoning LLM, which ensures robust, transparent, and explainable decision support for clinicians and sports scientists. The end-to-end platform orchestrates seamless communication between the VLM ensemble and the reasoning LLM, integrating prompt engineering strategies and automated reasoning workflows using LLM Agents. 
Experimental results demonstrate that this hybrid system delivers highly accurate, consistent, and interpretable H-reflex assessments, significantly advancing the automation and standardization of neuromuscular diagnostics. To our knowledge, this work represents the first integration of a fine-tuned VLM consortium with a reasoning LLM for image-based H-reflex analysis, laying the foundation for next-generation AI-assisted neuromuscular assessment and athlete monitoring platforms.",2 "While the activations of neurons in deep neural networks usually do not have a simple human-understandable interpretation, sparse autoencoders (SAEs) can be used to transform these activations into a higher-dimensional latent space which may be more easily interpretable. However, these SAEs can have millions of distinct latent features, making it infeasible for humans to manually interpret each one. In this work, we build an open-source automated pipeline to generate and evaluate natural language explanations for SAE features using LLMs. We test our framework on SAEs of varying sizes, activation functions, and losses, trained on two different open-weight LLMs. We introduce five new techniques to score the quality of explanations that are cheaper to run than the previous state of the art. One of these techniques, intervention scoring, evaluates the interpretability of the effects of intervening on a feature, which we find explains features that are not recalled by existing methods. We propose guidelines for generating better explanations that remain valid for a broader set of activating contexts, and discuss pitfalls with existing scoring techniques. We use our explanations to measure the semantic similarity of independently trained SAEs, and find that SAEs trained on nearby layers of the residual stream are highly similar. Our large-scale analysis confirms that SAE latents are indeed much more interpretable than neurons, even when neurons are sparsified using top-$k$ postprocessing. Our code is available at https://github.com/EleutherAI/sae-auto-interp, and our explanations are available at https://huggingface.co/datasets/EleutherAI/auto_interp_explanations.",1 "As machine learning systems increasingly inform critical decisions, the need for human-understandable explanations grows. Current evaluations of Explainable AI (XAI) often prioritize technical fidelity over cognitive accessibility, which critically affects users, in particular those with visual impairments. We propose CUE, a model for Cognitive Understanding of Explanations, linking explanation properties to cognitive sub-processes: legibility (perception), readability (comprehension), and interpretability (interpretation). In a study (N=455) testing heatmaps with varying colormaps (BWR, Cividis, Coolwarm), we found comparable task performance but lower confidence/effort for visually impaired users. Contrary to expectations, these gaps were not mitigated, and were sometimes worsened, by accessibility-focused colormaps like Cividis. These results challenge assumptions about perceptual optimization and support the need for adaptive XAI interfaces. They also validate CUE by demonstrating that altering explanation legibility affects understandability. 
We contribute: (1) a formalized cognitive model for explanation understanding, (2) an integrated definition of human-centered explanation properties, and (3) empirical evidence motivating accessible, user-tailored XAI.",0 "Background Allergic contact dermatitis due to acrylates present in the workplace is a disease frequently reported among dentists, printers, and fiberglass workers. Recently, the number of cases of contact allergic dermatitis among beauticians specialized in sculpting artificial nails has increased. Objective Our objective was to study the clinical characteristics and allergens implicated in allergic contact dermatitis due to acrylates in beauticians and users of sculpted nails. Material and methods This was an observational, retrospective study of patients diagnosed with allergic contact dermatitis due to acrylates used in sculpting artificial nails over the last 26 years in the Hospital General Universitario, Valencia, Spain. Results In total, 15 patients were diagnosed: 14 beauticians and 1 client. Most cases were diagnosed in the past 2 years. All were women, their mean age was 32.2 years, and 26.7% had a personal or family history of atopy. The sensitization time varied between 1 month and 15 years. The most frequently affected areas were the fleshy parts of the fingers and hands. Three patients (2 beauticians and 1 client) presented allergic asthma due to acrylates. All patients underwent patch testing with a standard battery of allergens and a battery of acrylates. The most frequent allergens were ethylene glycol dimethacrylate (13/15, 86.7%), hydroxyethyl methacrylate (13/15, 86.7%), triethylene glycol dimethacrylate (7/15, 46.7%), 2-hydroxypropyl methacrylate (5/15, 33.3%), and methyl methacrylate (5/15, 33.3%). Conclusions Acrylate monomers used for sculpting artificial nails are important sensitizers for contact and occupational dermatitis. The most important consideration is primary and secondary prevention.",1 "According to the Nernst–Planck equation, the transport of charged species in porous electrodes is mainly driven by diffusion and migration. Although a number of all-vanadium redox flow battery (VRFB) models have been developed by several VRFB modeling groups, a comparative study of these two ion transport mechanisms has not been clearly reported in the literature. In this study, we develop a three-dimensional (3-D), transient VRFB model that rigorously accounts for both diffusion and migration mechanisms of charged species, including V$^{2+}$, V$^{3+}$, VO$^{2+}$, VO$_2^+$ and H$^+$. The VRFB model relies upon five principles of conservation: mass, momentum, species, electric charge, and thermal energy. Due to the general form of the conservation equations, both species migration effects on species transport and species diffusion effects on charge transport are considered in the source terms of the model equations. The model calculates species migration and diffusion fluxes through the membrane and compares their relative magnitudes under various charging and discharging stages. This paper clearly elucidates the role of species migration in vanadium crossover and the subsequent capacity losses, demonstrating that the present VRFB model is a valuable tool for optimizing the component design and operation of VRFBs.",0 "A carbon-encapsulated nano-MnO composite with a novel multiple structure loaded on N-doped carbon webs (CMNCWs) has been designed and fabricated using polypyrrole webs as both template and precursor. 
As an anode material for lithium-ion batteries, CMNCWs exhibit a superhigh reversible capacity and excellent rate capability, delivering a capacity as high as 1268 mAh g−1 after 700 cycles at a current density of 1.0 A g−1. Such superior electrochemical performance can be attributed to the unique multiple structure, which can not only effectively shorten the transport path of Li+ ions and enhance the conductivity, but also relieve the volume change and prevent agglomeration of Mn grains during the phase transformation in the conversion reaction.",0 "The search for a simple and economical electrocatalyst for the hydrogen gas evolution reaction (HER) that can match the performance of Pt and other precious metals is a challenging research problem. In this work, a systematic study of the effect of the pre-treatment potential of a screen-printed carbon electrode (SPCE) surface on the HER performance in 0.5 M H2SO4 was carried out. A low-potential HER (onset potential, E onset = −0.02 V vs. RHE) was observed at a cathodic potential of −0.5 V vs. Ag/AgCl on a 1 h pre-treated screen-printed carbon electrode (SPCE*, * = pre-treated). Physicochemical and electrochemical characterizations of the SPCE* by field emission scanning electron microscopy and Raman, IR and X-ray photoelectron spectroscopies reveal the specific generation of a carboxylic acid functionalized carbon surface, which in turn accounts for the enhanced HER on the modified electrode surface. Electrochemical characterization of SPCE* with Fe(CN)6 3− supports this observation. A marked decrease in the peak current and a significant increase in the peak-to-peak separation potential, due to electrostatic repulsion between the Fe(CN)6 3− anions and the –COO– sites, were noticed. This observation is consistent with the reduced electrical double layer capacitance value of the SPCE* system. The E onset and Tafel value (54.7 mV dec−1) obtained here are comparable to those at Pt, MoS2 and MoSe2, and superior to those of N- and P-doped graphene/carbon electrocatalysts for HER. A prototype HER system was developed and demonstrated for H2 gas production at a rate of 0.0053 μM s−1 (operating potential = −0.5 V vs. Ag/AgCl), which is comparable to the HER performance of precious metal and metal compound electrocatalysts.",0 "Actigraphy has been used for more than 60 years to objectively measure sleep–wake rhythms. Improved modern devices are increasingly employed to diagnose sleep disorders in the clinical setting. Although less accurate than polysomnography, the chief advantage of actigraphs lies in the cost-effective collection of objective data over prolonged periods of time under everyday conditions. Since the cost of wrist actigraphy is not currently reimbursed, this method has not enjoyed wide acceptance to date. The present article provides an overview of the main clinical applications of actigraphy, including the recommendations of specialist societies.",0 "Photoelectrocatalytic cells for water splitting should combine one or two photosensitive units with a water oxidation catalyst at the anode and a hydrogen evolution catalyst at the cathode. In this perspective article, we first show how a chemist can take the naturally occurring multi-electron catalysts for these two electro- and photochemical reactions, photosystem II and hydrogenases, as a source of inspiration for the design of original, efficient and robust molecular catalysts. 
The focus of this article is given to the immobilisation of these natural or bio-inspired catalysts onto conducting surfaces and the design of electrode and photoelectrode materials for hydrogen evolution/uptake and water oxidation. ",0 "A high risk of morbidity-mortality caused by a harsh and unpredictable environment is considered to be associated with a fast life history (LH) strategy, commonly linked with criminal behavior. However, offenders are not the only group with a high exposure to extrinsic morbidity-mortality. In the present study, we investigated the LH strategies employed by two groups of Polish men: incarcerated offenders (N = 84) as well as soldiers and firefighters (N = 117), whose professions involve an elevated risk of injury and premature death. The subjects were asked to complete the Mini-K (used as a psychosocial LH indicator) and a questionnaire which included a number of biodemographic LH variables. Although biodemographic and psychosocial LH indicators should be closely linked with each other, the actual connection between them is unclear. Thus, this study was driven by two aims: comparing LH strategies in two groups of men with a high risk of premature morbidity-mortality and investigating the relationship between the biodemographic and psychosocial LH dimensions. The study showed that incarcerated men employed faster LH strategies than soldiers and firefighters, but only in relation to biodemographic variables (e.g., number of siblings, age of sexual initiation, life expectancy). No intergroup differences emerged regarding psychosocial LH indicators. Moreover, the correlation analysis showed a weak association between biodemographic and psychosocial LH indicators. The results strengthen the legitimacy of incorporating biodemographic LH traits into research models and indicate the need for further research on the accuracy of the Mini-K. The possible explanations for the intergroup differences in LH strategies are discussed.",2 "Mitrovica, northern Kosovo, is the site of some of the highest Pb concentrations reported in human populations; exemplified by Pb concentrations in scalp hair of up to 130 μg g−1 and widely-publicized of Pb-related ill-health and mortality amongst internally displaced populations. High human Pb burdens are accompanied by elevated concentrations of potentially harmful elements (PHEs) in soils and house dust within the city, which has a long history of mining and metallurgy. In this study enrichment-levels for PHEs in soils are quantified and compared to environmental quality guidelines and a statistically-derived estimation of background concentration. In addition, Pb isotopes (207Pb/206Pb, 208Pb/206Pb) are used to characterise the isotopic signatures of potential point sources of Pb and a mixing model employed to quantify the contribution of sources to Pb present in soils, house dust, and the scalp hair of children and young people. Pb isotopic evidence suggests that Pb in surface soils and house-dust is predominantly sourced from historical deposition of Pb-containing aerosols from metal smelting, with lower contributions from wind-blown dispersal of metalliferous waste. Pb present in scalp hair is interpreted as the result of non-occupational exposure and the ingestion and/or inhalation of Pb-enriched surface soil and house dust. This study represents one of the very few instances where this type of geochemical tracing technique has been successfully applied to definitively identify the source of Pb present within biological samples. 
The results of this study are of particular relevance to environmental management and highlight the human health risk posed by the legacy of now inactive mining and metallurgy in addition to the challenge posed in mitigating the risk posed by diffuse soil pollution.",1 " Pathogen-induced defoliation resulted in a reduction in transpiration, an upregulation of photosynthesis in the early growing season, and no change in NSC reserves across stem, root, and foliar tissues.",0 "This 2-wave longitudinal study aimed (1) to investigate whether high resting RSA predicted adolescents’ lower externalizing behavior and higher empathic concern, and (2) to address the potential moderating role of resting RSA in the association between parent-adolescent relationship quality and adolescents’ externalizing behavior and empathic concern. In a sample of 379 adolescents (212 boys, 167 girls), resting RSA was assessed during a laboratory session, and adolescents reported on parental support, negative interaction with parents, empathic concern and externalizing behavior during a home visit. We found no support for high resting RSA predicting low externalizing behavior or high empathic concern. However, in line with our hypotheses, we did find several instances of RSA functioning as a moderator, although the interaction patterns varied. First, negative interaction with parents was a negative predictor of externalizing behavior for girls low in resting RSA, whereas the association was non-significant for girls with high RSA. Second, higher negative interaction with parents predicted lower empathic concern for boys high in resting RSA, whereas the association was reversed for boys with low resting RSA. Third, parental support was a positive predictor of empathic concern for girls high in resting RSA, whereas the association was non-significant for girls low in resting RSA. The findings suggest that adolescents with different levels of resting RSA respond differentially to relationship quality with parents.",2 "This study investigated the effects of phonologic treatment for anomia in aphasia. We proposed that if treatment were directed at the level of the phonologic processor, opportunities for naming via a phonological route, as opposed to a strictly whole word route, would be enhanced, thereby improving naming. The participants, ten people with anomia and aphasia due to left hemisphere stroke, received 96h of phoneme based treatment in 12 weeks. To learn if treatment improved naming, a single-subject, repeated probe design with replication was employed. The primary outcome measure was confrontation naming. Secondary outcome measures included phonologic production, nonword repetition and discourse production. Results suggest a positive treatment effect (confrontation naming), improvements in phonologic production and nonword repetition, and generalization to discourse production. When tested 3 months after the completion of treatment the effects appeared to be maintained.",1 "The article summarizes the results of the program post mortem and also describes team interplay on a recently completed work in a company. This development phase was meant to ensure building a safe product. It was phase 2 of a 4-phase New Product Development (NPD) program for a complex small programmable, electro-mechanical-chemical device. This phase was initiated following the failure of phase 1 of NPD as it ended with the product failing and an individual sustaining some injuries. 
Phase 1 dealt with proof of concept, essentially trying to prove the theory behind air bursting technology. The Product Development Team (PDT) compared what was planned with what actually happened. An analysis was then carried out of the project’s successes as well as the mistakes that were made. The PDT suggested ideas for improvements that could be incorporated during phase 3 (engineering development of the product) of this program. A number of lessons learned from phase 2 (that is, affirmation of product safety) would benefit future phases (phases 3 and 4) and also other new product development initiatives in terms of realizing significant time and cost savings. Phase 4 deals with low rate initial production.",1 "Contact bioassays are important for testing the ecotoxicity of solid materials. However, survival and reproduction tests are often not practical due to their duration, which may last several weeks. Avoidance tests with soil invertebrates may offer an alternative or extension to the classic test batteries due to their short duration (days rather than weeks) and due to a sensitive sub-acute endpoint (behavior). The aims of our study were: (a) to evaluate the effects of three solid industrial wastes (incineration ash, contaminated wood chips and contaminated soil) on three Oligochaeta species (enchytraeids Enchytraeus albidus, Enchytraeus crypticus and earthworm Eisenia fetida) in avoidance tests; (b) to compare the sensitivity among the species and to compare results of avoidance tests to reproduction tests; (c) to elucidate if measuring the weight in the earthworm avoidance test could be a reasonable additional endpoint. Avoidance mostly increased with the increasing percentage of waste in the mixture, showing a dose–response curve. E. fetida was the most sensitive species and E. crypticus the least sensitive. An additional endpoint (change in weight after two-day exposure) was not found to be more sensitive than the avoidance reaction, but it confirmed that earthworms staying in the highest concentrations of the waste mixture were affected, showing apparent weight reduction. Our results indicate that avoidance tests with earthworms and enchytraeids are feasible for waste testing.",0 "Introduction Impairment of cerebrovascular function becomes evident after menopause. No study has yet explored relationships between deficits in cerebrovascular function, cognitive performance, and mood in postmenopausal women. Method Cerebrovascular function was assessed in 80 healthy postmenopausal women by monitoring blood flow velocity (BFV) in the middle and posterior cerebral arteries using transcranial Doppler ultrasound at rest, following a hypercapnic challenge, and during performance of a cognitive test battery; the latter assessed domains of memory and executive functions. Various measures of mood (i.e., Profile of Mood States and Center for Epidemiological Studies Depression Scale) were also assessed. Results Cerebral artery elasticity and BFV responsiveness to cognitive tests (neurovascular coupling) correlated with cognitive performance but not with depressive symptoms or mood states. Mood deficits were related to poor cognitive performance. Conclusion These results highlight the importance of adequate cerebral perfusion for optimized cognitive function in healthy postmenopausal women. 
Preventative strategies to attenuate accelerated cognitive decline should also consider restoring cerebrovascular function.",2 "Mental disorders (MD), such as depression, anxiety, and cognitive impairment, are highly prevalent in patients with coronary heart disease (CHD). Current guidelines on cardiovascular diseases recommend screening and appropriate treatment of MD; however, the degree of implementation of such recommendations in clinical practice is unknown. This study aims to analyze the quality of health care of 8 patients with CHD and MD. Specifically, we aim to analyze (1) the quality of care, (2) trajectories of care, and (3) barriers regarding the detection and treatment of MD. Moreover, we want to identify potentials of changes in health care delivery towards more patient-centered care. The results of this study shall be the first step towards value-based care of people with CHD and comorbid mental disorders.",1 "Introduction Links between preclinical Alzheimer's disease (AD) and driving difficulty onset would support the use of driving performance as an outcome in primary and secondary prevention trials among older adults (OAs). We examined whether AD biomarkers predicted the onset of driving difficulties among OAs. Methods One hundred four OAs (65+ years) with normal cognition took part in biomarker measurements, a road test, clinical and psychometric batteries, and self-reported their driving habits. Results Higher values of cerebrospinal fluid (CSF) tau/Aβ42 and phosphorylated tau (ptau181)/Aβ42 ratios, but not uptake on Pittsburgh compound B amyloid imaging (P = .12), predicted time to a rating of marginal or fail on the driving test using Cox proportional hazards models. Hazards ratios (95% confidence interval) were 5.75 (1.70–19.53), P = .005 for CSF tau/Aβ42; 6.19 (1.75–21.88), and P = .005 for CSF ptau181/Aβ42. Discussion Preclinical AD predicted time to receiving a marginal or fail rating on an on-road driving test. Driving performance shows promise as a functional outcome in AD prevention trials.",2 "Given the interest in improving executive functions, the present study examines a promising combination of two training techniques: neurofeedback training (NFT) and working memory training (WMT). NFT targeted increasing the amplitude of individual’s upper Alpha frequency band at the parietal midline scalp location (Pz), and WMT consisted of an established computerized protocol with working memory updating and set-shifting components. Healthy participants (n = 140) were randomly allocated to five combinations of training, including visual search training used as an active control training for the WMT; all five groups were compared to a sixth silent control group receiving no training. All groups were evaluated before and after training for resting-state electroencephalogram (EEG) and behavioral executive function measures. The participants in the silent control group were unaware of this procedure, and received one of the training protocols only after study has ended. Results demonstrated significant improvement in the practice tasks in all training groups including non-specific influence of NFT on resting-state EEG spectral topography. There was only a near transfer effect (improvement in working memory task) for WMT, which remained significant in the delayed post-test (after 1 month), in comparison to silent control group but not in comparison to active control training group. 
The NFT + WMT combined group showed improved mental rotation ability both in the post-training and in the follow-up evaluations. This improvement, however, did not differ significantly from that in the silent control group. We conclude that the current training protocols, including their combination, have very limited influence on the executive functions that were assessed in this study.",2 "How do members of the public view collaboration among organized interests and what factors contribute to attitudes about working in coalition? Interest groups frequently must decide whether to partner formally in pursuit of a shared objective while minimizing potential losses of revenue, reputation, and issue ownership. Using a nationally representative survey with an embedded experiment, we consider the potential ramifications of group collaboration from the perspective of potential members. Results show that, while a substantial minority views group collaboration negatively, most do not, and experimental exposure to a collaborating group yields positive evaluations and higher prospective contributions. The results reinforce the essentially pluralist public perceptions of interest groups that are supportive of their existing collaborative efforts. ",1 "Bipolar disorder (BD) and major depressive disorder (MDD) share similar clinical characteristics that often obscure the diagnostic distinctions between their depressive conditions. Both functional and structural brain abnormalities have been reported in these two disorders. However, the direct link between altered functioning and structure in these two diseases is unknown. 
To elucidate this relationship, we conducted a multimodal fusion analysis on the functional network connectivity (FNC) and gray matter density from MRI data from 13 BD, 40 MDD, and 33 matched healthy controls (HC). A data-driven fusion method called mCCA+jICA was used to identify the co-altered FNC and gray matter components. Compared to HC, BD exhibited reduced gray matter density in the parietal and occipital cortices, which correlated with attenuated functional connectivity within sensory and motor networks, as well as hyper-connectivity in regions that are putatively engaged in cognitive control. In addition, lower gray matter density was found in MDD in the amygdala and cerebellum. High accuracy in discriminating across groups was also achieved by trained classification models, implying that features extracted from the fusion analysis hold the potential to ultimately serve as diagnostic biomarkers for mood disorders.",2 "Nine volunteers associated with the North Carolina Adult Asthma and Environment Study (NCAAES) participated in an investigation of personal daily exposures to coarse and fine particulate matter size fractions (PM10–2.5, PM2.5). Data from these personal measurements were then compared to community-based measures that might typically represent surrogate measurements of exposure often used in epidemiological assessments. To determine personal exposures to various particulate matter (PM) size fractions, a recently evaluated personal PM monitor capable of direct PM10–2.5 size fraction collection was used. Nine participants living in the central region of North Carolina and enrolled in the NCAAES were asked to wear the monitor attached to a supporting backpack for 24-h collection periods. These 9 volunteers were monitored for 2 to 4 days with subsequent gravimetric analysis of their PM samples. Personal PM10–2.5 mass concentrations were observed to be highly variable and ranged from 7.6 to 40.2 μg/m3 over an 8-month period. The median for this measurement from all participants (50th percentile) was 13.7 μg/m3. A coefficient of determination (r2) of 0.02 was established for community-based PM10–2.5 mass concentrations versus personal exposures. Similar coefficients established for PM2.5 mass revealed only a modest improvement in agreement (r2 = 0.12). Data from the exposure findings are reported here.",1 "Obstructive sleep apnea (OSA) is a common disease. Given the costs of in-laboratory polysomnography (PSG), alternative ambulatory methods for accurate diagnosis are desirable. The objective of this study was to evaluate the performance of a simple device (SleepCheck) to identify patients with sleep apnea. A total of 30 consecutive patients with suspected OSA syndrome referred to the sleep clinic were prospectively evaluated with standard PSG and SleepCheck simultaneously during an in-laboratory, supervised full-night diagnostic study. The PSG apnea and hypopnea index (AHI) was evaluated according to standard criteria, and SleepCheck assessed the respiratory disturbance index (RDI) based on nasal cannula pressure fluctuations. Compared to the full-night PSG, SleepCheck systematically overscored respiratory events (the mean difference between SleepCheck RDI and PSG AHI was 27.4±13.3 events per hour). This overscoring was in part related to normal physiologic decreases in flow during rapid eye movement sleep or after an arousal. However, there was a reasonable correlation between AHI and RDI (r=0.805). 
Receiver operating characteristic curves with threshold values of AHI of 10 and 20/h demonstrated areas under the curves (AUCs) of 0.915 and 0.910, respectively. Optimum combinations of sensitivity and specificity for these thresholds were calculated as 86.4/75.0 and 88.9/81.0, respectively. Overall, the SleepCheck substantially overscored apneas and hypopneas in patients with suspected OSA. However, after correction of the bias, the SleepCheck had reasonable accuracy with an AUC, sensitivity, and specificity similar to other ambulatory type 4 devices currently available.",1 "A wearable monitor that can reliably, accurately, and continuously measure personal exposure levels of various toxicants would not only accelerate the current environmental and occupational health and safety studies, but also enable new studies that are not possible with the current monitoring technology. Developing such a monitor has been a difficult challenge, and requires innovative sensing science and creative engineering. We have developed, built, and tested a wearable monitor for real-time detection of toxic hydrocarbons and acids in the environment. The monitor is low-cost, accurate, and user friendly. In addition, it can communicate wirelessly with a cell phone on which the monitoring results can be processed, displayed, stored, and transmitted to a designated computer. We have validated the functions and performance of the monitor, and carried out field tests with workers involved in waste management, fire overhaul, and floor-cleaning activities, as well as with first- and second-hand smokers. The averaged exposure levels are in agreement with those determined by the standard NIOSH methods. The monitor provides accurate and real-time exposure assessment for workers involved in different activities. The real-time and continuous monitoring capability makes it possible to correlate the exposure levels with different activities and changes in the microenvironments. The monitor provides unprecedented real-time information that will help advance occupational safety and environmental health studies. It may also be used to better protect workers from occupational overexposure to toxic molecules.",1 "The levels of haplotype diversity within the lineages defined by two single-nucleotide polymorphisms (SNPs) (−13910 C/T and −22018 G/A) associated with human lactase persistence were assessed with four fast-evolving microsatellite loci in 794 chromosomes from Portugal, Italy, Fulbe from Cameroon, São Tomé and Mozambique. Age estimates based on the intraallelic microsatellite variation indicate that the −13910*T allele, which is more tightly associated with lactase persistence, originated in Eurasia before the Neolithic and after the emergence of modern humans outside Africa. We detected significant departures from neutrality for the −13910*T variant in geographically and evolutionarily distant populations from southern Europe (Portuguese and Italians) and Africa (Fulbe) by using a neutrality test based on the congruence between the frequency of the allele and the levels of intraallelic variability measured by the number of mutations in adjacent microsatellites. This result supports the role of selection in the evolution of lactase persistence, ruling out possible confounding effects from recombination suppression and population history. 
Reevaluation of the available evidence on variation of the −13910 and −22018 loci indicates that lactase persistence probably originated from different mutations in Europe and most of Africa, even if −13910*T is not the causal allele, suggesting that selective pressure could have promoted the convergent evolution of the trait. Our study shows that a limited number of microsatellite loci may provide sufficient resolution to reconstruct key aspects of the evolutionary history of lactase persistence, providing an alternative to approaches based on large numbers of SNPs.",1 "Background It is difficult to improve negative symptoms and cognitive impairments in schizophrenia. A previous pilot study has shown that minocycline, a semi-synthetic second-generation tetracycline, is effective in treating negative and/or cognitive symptoms in schizophrenia. Objectives The present study was designed to examine the efficacy and safety of minocycline for the treatment of negative symptoms and cognitive impairments in patients with schizophrenia. Methods Ninety-two patients with early-stage schizophrenia treated with risperidone entered this 16-week, double-blind, randomized, placebo-controlled clinical trial. Subjects were randomly assigned to receive minocycline (200 mg per day) or the placebo. The primary outcome was evaluated using the Scale for the Assessment of Negative Symptoms (SANS). Secondary outcomes included the response rate of SANS, the Positive and Negative Syndrome Scale (PANSS), the Clinical Global Impression Scale (CGI), and cognitive tests. Results Subjects receiving minocycline had greater improvements on SANS total scores and PANSS negative subscale scores (P<0.001) when compared with those receiving the placebo. Rates of treatment response (43.6%) in the minocycline group were significantly higher than those in the placebo group (10.0%) after 16 weeks of treatment. There was no significant difference between groups in the seven cognitive domains (P>0.05), except for the attention domain (P=0.044). Conclusions The addition of minocycline to atypical antipsychotic drugs in early schizophrenia had significant efficacy on negative symptoms but had a slight effect on the attention domains of patients with schizophrenia. It may be considered a new adjunct treatment for negative symptoms of schizophrenia. ClinicalTrials.gov identifier: NCT01493622.",2 "The rapid development of additive manufacturing and advances in shape memory materials have fueled the progress of four-dimensional (4D) printing. With the right external stimulus, the need for human interaction, sensors, and batteries will be eliminated, and by using additive manufacturing, more complex devices and parts can be produced. With the current understanding of shape memory mechanisms and with improved design for additive manufacturing, reversibility in 4D printing has recently been proven to be feasible. Conventional one-way 4D printing requires human interaction in the programming (or shape-setting) phase, but reversible 4D printing, or two-way 4D printing, will fully eliminate the need for human interference, as the programming stage is replaced with another stimulus. This allows reversible 4D printed parts to be fully dependent on external stimuli; parts can also be potentially reused after every recovery, or even used in continuous cycles—an aspect that carries industrial appeal. 
This paper presents a review on the mechanisms of shape memory materials that have led to 4D printing, current findings regarding 4D printing in alloys and polymers, and their respective limitations. The reversibility of shape memory materials and their feasibility to be fabricated using three-dimensional (3D) printing are summarized and critically analyzed. For reversible 4D printing, the methods of 3D printing, mechanisms used for actuation, and strategies to achieve reversibility are also highlighted. Finally, prospective future research directions in reversible 4D printing are suggested.",0 "In chemical regulation, e.g. the EU Water Framework Directive, REACH, or the Pesticide Directive, standardized ecotoxicological tests are applied to evaluate and rank the hazard of compounds and for deriving environmental quality standards (EQS). Standardized test methods prescribe fixed testing conditions, e.g. specific temperature, pH, light intensity, etc. However, environmental conditions under which the organisms live are rarely identical to the standard conditions. Thus, the ecotoxicity of compounds found in standard tests is not only a function of the compounds' inherent physico-chemical properties but is also affected by test conditions. It is therefore important to study the effect of changes in test conditions in order to get reliable input ecotoxicity data for assessing the potential risk posed by a compound. The objective of this study was to investigate the implications of changing test conditions on the toxicity of four sulfonylurea herbicides (SUs). The toxicity of the four SUs towards Lemna gibba was investigated at three pH levels (6, 7.5 and 9), at two temperatures (15 and 24 °C) and two light regimes (continuous and 12:12 h light:dark cycle). The EC50 increased twofold to tenfold for the four SUs when pH was increased from 6 to 9. Decreasing the temperature from 24 to 15 °C or introducing a dark:light cycle did not cause any trends in changes in toxicity. The results show that test conditions can have an effect on the toxicity, and this should be considered when the standard test results are used for derivation of EQS.",0 "The first part of this interview covers Frank Oppenheimer’s childhood, family background, and early education in New York City; his deep lifelong bond to his older brother Robert; his undergraduate years at Johns Hopkins University (1930–1933); his stays at the Cavendish Laboratory in Cambridge, England, and at the University of Florence, Italy (1933–1935); his graduate studies at the California Institute of Technology (1935–1939); his postdoctoral assistantship at Stanford University (1939–1941); and the frequent summers he spent in New Mexico with his brother, family, and friends.",1 "We investigated the contribution of preschoolers’ executive function (EF) skills to the effectiveness of their spontaneous strategy production when learning. Performance on computerized tasks of inhibition, attention shifting, and working memory was examined in relation to the effectiveness of 112 3- to 5-year-olds’ spontaneous strategy production on a spatial memory task. Participants were asked to remember the locations of four toys representing one of two categories (animals or chairs) placed in a wooden box. Most participants spontaneously implemented a clustering strategy by removing and/or replacing the toys according to category membership. However, less than half of these strategic participants showed concomitant memory benefits (recall of toy locations). 
The remainder showed a utilization deficiency. After controlling for age and IQ, participants who performed better on EF tasks were more likely to benefit from having used the clustering strategy. These findings indicate that utilization deficiencies among preschoolers may be partially accounted for by individual differences in EF.",2 "The aim of this autopsy study was to investigate chest-compression associated injuries to the trunk in out-of-hospital and in-hospital non-traumatic cardiac arrest patients treated with automated external chest compression devices (ACCD; all with LUCAS II devices) versus exclusive manual chest compressions (mCC). In this retrospective single-center study, all forensic autopsies between 2011 and 2017 were included. Injuries following cardiopulmonary resuscitation (CPR) in patients treated with mCC or ACCD were investigated and statistically compared using a bivariate logistic regression. In the seven-year period with 4433 autopsies, 614 were analyzed following CPR (mCC vs. ACCD: n = 501 vs. n = 113). The presence of any type of trunk injury was correlated with longer resuscitation intervals (30 ± 15 vs. 44 ± 25 min, p < 0.05). In comparison with mCC, treatment with ACCD led to more frequent skin emphysema (5 vs 0%, p = 0.012), pneumothorax (6 vs. 1%, p = 0.008), lung lesions (19 vs. 4%, p = 0.008), hemopericardium (3 vs 1%, p = 0.025) and liver lesions (10 vs. 1%, p = 0.001), all irrespective of confounding aspects. Higher age and longer CPR durations statistically influenced frequency of sternal and rib fractures (p < 0.001). The mean number of fractured ribs did not vary significantly between the groups (6 ± 3 vs. 7 ± 2, p = 0.09). In this cohort with unsuccessful CPR, chest compression-related injuries were more frequent following ACCD application than in the mCC group, but with only minutely increased odds ratios. The severity of injuries did not differ between the groups, and no iatrogenic injury was declared by the forensic pathologist as being fatal. In the clinical routine after successful return of spontaneous circulation a computed tomography scan for CPR-associated injuries is recommended as soon as possible.",2 "Detection of cognitive impairment in patients with brain metastases is important for both patient management and clinical trials. The most commonly used cognitive screen, the Mini Mental State Examination (MMSE), though convenient, is not sensitive in these patients. More sensitive tools are less convenient and, therefore, uncommonly used. Therefore, a practical and sensitive tool is needed. The Montreal Cognitive Assessment (MoCA) is a good candidate, shown to be sensitive in detecting mild cognitive impairment in the pre-dementia setting. This study is the first to explore the MoCA in 12 cancer patients and is aimed at determining the feasibility of administering the MoCA in brain tumor patients. The secondary objective is to explore the relationship between MoCA and MMSE scores.",1 "The ratio of index- and ring-finger lengths (2D:4D ratio) is thought to be related to prenatal androgen exposure, and in many, though not all, populations, men have a lower average digit ratio than do women. In many studies an inverse relationship has been observed, among both men and women, between 2D:4D ratio and measures of athletic ability. It has been further suggested that, in hunter-gatherer populations, 2D:4D ratio might also be negatively correlated with hunting ability, itself assumed to be contingent on athleticism. 
This hypothesis has been tested using endurance running performance among runners from a Western, educated, and industrialized population as a proximate measure of hunting ability. However, it has not previously been tested among actual hunter-gatherers using more ecologically valid measures of hunting ability and success. The current study addresses this question among Tanzanian Hadza hunter-gatherers. I employ a novel method of assessing hunting reputation that, unlike previous methods, allows granular distinctions to be made between hunters at all levels of perceived ability. I find no statistically significant relationship between digit ratio and either hunting reputation or two important hunting skills. I confirm that Hadza men have higher mean 2D:4D ratios than men in many Western populations. I discuss the notion that 2D:4D ratio may be the consequence of an allometric scaling relationship between relative and absolute finger lengths. Although it is difficult to draw clear conclusions from these results, the current study provides no support for the theorized relationship between 2D:4D ratio and hunting skill.",1 "Preliminary evidence suggests that children with Attention Deficit Hyperactivity Disorder (ADHD) may exhibit handwriting difficulties. However, the exact nature of these difficulties and the extent to which they may relate to motor or behavioural difficulties remain unclear. The aim of this study was to describe handwriting capacity in children newly diagnosed with ADHD and identify predictors of performance. Forty (40) medication-naïve children with ADHD (mean age 8.1 years) were evaluated with the Evaluation Tool of Children's Handwriting-Manuscript, the Movement Assessment Battery for Children (M-ABC), the Developmental Test of Visual Motor Integration (VMI) and the Conner Global Index. An important subset (85.0%) exhibited manual dexterity difficulties. Handwriting performance was extremely variable in terms of speed and legibility. VMI was the most important predictor of legibility. Upper extremity coordination, as measured by the M-ABC ball skills subtest, was also a good predictor of word legibility. Conclusion Poor handwriting legibility and slow writing speed were common in children newly diagnosed with ADHD and were associated with motor abilities. Future studies are needed to determine whether interventions, including stimulant medications, can improve handwriting performance and related motor functioning.",2 "While psychiatric disorders such as schizophrenia are largely diagnosed based on symptomatology, several studies have attempted to determine which biomarkers can discriminate schizophrenia patients from non-patients. The objective of this study is to assess whether near-infrared spectroscopy (NIRS) measurement can distinguish schizophrenia patients from healthy subjects. Sixty (60) patients with schizophrenia and sixty age- and gender-matched healthy controls were divided into two sequential groups. The concentration change in oxygenated hemoglobin (Δ[oxy-Hb]) was measured in the bilateral prefrontal areas (Fp1-F7 and Fp2-F8) during the Verbal Fluency Test (VFT) letter version and category version, Tower of Hanoi (TOH), Sternberg's (SBT) and Stroop Tasks. In the first group, schizophrenia patients showed poorer task performance on all tasks and less prefrontal cortex activation during all but the Stroop Task compared to healthy subjects. 
In the second group, schizophrenia patients showed poorer task performance and less prefrontal cortex activation during VFTs and TOH tasks than healthy subjects. We then performed discriminant analysis by a stepwise method using Δ[oxy-Hb] and task performance measures as independent variables. The discriminant analysis in the first group included task performance of TOH, VFT letter and VFT category and Δ[oxy-Hb] of VFT letter. As a result, 88.3% of the participants were correctly classified as being schizophrenic or healthy subjects in the first analysis. The discriminant function derived from the first group correctly assigned 75% of the subjects in the second group. Our findings suggest that NIRS measurement could be applied to differentiate patients with schizophrenia from healthy subjects.",2 "While the scientist–practitioner model of training has enjoyed wide-spread appeal, difficulties in implementing the model have continued since its inception. Despite these difficulties, we remain advocates of the model and believe responsibility for inculcating a scientist–practitioner mindset rests with both training programs and trainees themselves. Thus, we offer several suggestions for both trainees and training programs in hopes of perpetuating the scientist–practitioner ideal. ",1 "This paper uses a semiparametric latent variable transformation model for multiple outcomes to examine the effect of education and maternal education on female multidimensional well-being and proposes a procedure to build a well-being index that is less susceptible to functional form misspecification. We model multidimensional well-being as an unobserved common factor underlying the observed well-being outcomes. The semiparametric methodology allows us to alleviate misspecification bias by combining multiple indicators into a latent construct in an unspecified, data-driven way. Using data from 12 female participants of the 1974–2010 waves of the US General Social Survey, we find that education, intelligence, and maternal education contribute positively to multidimensional well-being. However, the effects of education and maternal education on female multidimensional well-being declined steadily between the mid-1970s and the 1990s, and have not rebounded since.",1 "We determined the genotoxicity of 39 chemicals currently in use as food additives. They fell into six categories—dyes, color fixatives and preservatives, preservatives, antioxidants, fungicides, and sweeteners. We tested groups of four male ddY mice once orally with each additive at up to 0.5×LD50 or the limit dose (2000mg/kg) and performed the comet assay on the glandular stomach, colon, liver, kidney, urinary bladder, lung, brain, and bone marrow 3 and 24h after treatment. Of all the additives, dyes were the most genotoxic. Amaranth, Allura Red, New Coccine, Tartrazine, Erythrosine, Phloxine, and Rose Bengal induced dose-related DNA damage in the glandular stomach, colon, and/or urinary bladder. All seven dyes induced DNA damage in the gastrointestinal organs at a low dose (10 or 100mg/kg). Among them, Amaranth, Allura Red, New Coccine, and Tartrazine induced DNA damage in the colon at close to the acceptable daily intakes (ADIs). Two antioxidants (butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT)), three fungicides (biphenyl, sodium o-phenylphenol, and thiabendazole), and four sweeteners (sodium cyclamate, saccharin, sodium saccharin, and sucralose) also induced DNA damage in gastrointestinal organs. 
Based on these results, we believe that more extensive assessment of food additives in current use is warranted.",0 "Recruitment of extra neural resources may allow people to maintain normal cognition despite amyloid-β (Aβ) plaques. Previous fMRI studies have reported such hyperactivation, but it is unclear whether increases represent compensation or aberrant overexcitation. We found that older adults with Aβ deposition had reduced deactivations in task-negative regions, but increased activation in task-positive regions related to more detailed memory encoding. The association between higher activity and more detailed memories suggests that Aβ-related hyperactivation is compensatory. ",1 "This essay examines the public debate about the agricultural biotechnologies known as genetically modified organisms, as that debate is being carried out in its most dichotomizing forms in the United States. It attempts to reveal the power of sharply dichotomous thinking, as well as its limits. The essay draws on the work of Michel Serres, who uses the concept of the parasite to reconstruct or reframe fundamental dichotomies in western philosophy; it attempts a similar reframing of the public debates about GMOs. The purpose of such a reframing is to create possibilities for dialogue among 11 participants that will move beyond the polarization that characterizes much of the current debate in the U.S.",1 "We replicated and extended previous research on microswitch facilitated choice making by individuals with profound multiple disabilities. Following an assessment of stimulus preferences, we taught 6 adults with profound multiple disabilities to emit 2 different responses to activate highly preferred stimuli. All participants learnt to activate both microswitches. Five participants showed a higher overall level of responding when both switches activating preferred stimuli were available concurrently. After completion of microswitch training, a choice assessment was conducted in which participants had access to 2 microswitches concurrently, with 1 connected to the most highly preferred stimulus and the other to a least preferred stimulus. Choice making behavior was shown in 3 participants and provided support for the preference assessment results. The results of the 3 remaining participants showed that both the most highly preferred and the least preferred stimuli may serve as reinforcers for microswitch activation responses.",1 "Although human alcoholics exhibit lasting cognitive deficits, it can be difficult to definitively rule out pre-alcohol performance differences. For example, individuals with a family history of alcoholism are at increased risk for alcoholism and are also behaviorally impaired. Animal models of controlled alcohol exposure permit balanced group assignment, thereby ruling out the effects of pre-existing differences. Periadolescent male rhesus macaques (N = 5) consumed alcohol during 200 drinking sessions (M–F) across a 10-month period (mean daily alcohol consumption: 1.38 g/kg/day). A control group (N = 5) consumed a fruit-flavored vehicle during the same period. Spatial working memory, visual discrimination learning and retention and response time behavioral domains were assessed with subtests of the Monkey CANTAB (CAmbridge Neuropsychological Test Automated Battery). Spatial working memory performance was impaired in the alcohol group after 120 drinking sessions (6 mo) in a manner that depended on retention interval. 
The chronic alcohol animals were also impaired in retaining a visual discrimination over 24 hrs when assessed 6–8 weeks after cessation of alcohol drinking. Finally, the presentation of distractors in the response time task impaired the response time and accuracy of the chronic alcohol group more than controls after 6 months of alcohol cessation. Chronic alcohol consumption over as little as 6 months produces cognitive deficits, with some domains still affected after acute (6–8 wks) and lasting (6 mo) discontinuation from drinking. Animals were matched on alcohol preference and behavioral performance prior to exposure, thus providing strong evidence for the causal role of chronic alcohol in these deficits.",1 "The goal of this study is to evaluate the effect of crime and discipline on graduation rates in higher education. Using national data on more than 1250 public and private non-profit institutions that were drawn from the Integrated Postsecondary Education Data System, the results reveal that more violence on and around campus is associated with lower 4-year graduation rates, whereas higher rates of disciplinary actions regarding alcohol, drugs, and weapons are associated with higher graduation rates. Furthermore, the findings suggest that utilizing the student conduct system rather than the criminal justice system to address minor offenses is more likely to lead to student success. This study contributes to the growing literature on college effectiveness and the influence of institutional structures and organizational policies on student achievement. The results of this study suggest that violent crime, institutional conduct systems, and campus police departments warrant further investigation.",1 "There is mounting evidence supporting the effectiveness of task-shifted mental health interventions in low- and middle-income countries (LMIC). However, there has been limited systematic scale-up or sustainability of these programs, indicating a need to study implementation. One barrier to progress is a lack of locally relevant and valid implementation measures. We adapted an existing brief dissemination and implementation (D&I) measure which includes scales for acceptability, appropriateness, feasibility and accessibility for local use and studied its validity and reliability among a sample of 20 consumers in Ukraine.",1 "Introduction: REM sleep behavior disorder (RBD) is strongly associated with synucleinopathy and is caused by REM sleep without atonia (RSWA), the loss of normal muscle atonia during REM sleep. We aimed to determine whether RSWA severity was associated with cognitive functioning in RBD. Materials and methods: Both 324 idiopathic (iRBD) and 90 symptomatic (sRBD) RBD patients completed two cognitive batteries: CNS Vital Signs (CNS-VS) and Useful Field of View (UFOV). All subjects underwent PSG and their muscle (SM: submentalis; AT: anterior tibialis) tone during REM sleep was visually and automatically scored. Group differences between sRBD and iRBD were then compared, and regression models were fit to determine the relationship between RSWA and the dependent cognitive measures. Results: Twenty iRBD and 10 sRBD patients participated. Demographics were similar between groups. Deficits on cognitive testing were observed on CNS-VS in processing speed (p = 0.014) and psychomotor speed (sRBD < iRBD, p = 0.019) and on Total UFOV and subtests 2 and 3 (sRBD > iRBD, all p < 0.002). sRBD patients had greater combined phasic and tonic RSWA in SM (p = 0.026) and longer mean phasic burst duration (p = 0.03). 
Regression analyses demonstrated that SM RSWA independently predicted the overall CNS-VS Neurocognitive Index (NCI) (F = 4.5, p = 0.006), adjusting for age, gender, depressive symptoms (Zung score), and sleep disturbances (PSQI), and this relationship also remained significant in the iRBD group after excluding sRBD patients (F = 3.5, p = 0.03). Conclusion: RSWA is predictive of lower overall cognitive performance in patients with RBD. Acknowledgements: The project described was supported by the National Institute on Aging (P50 AG016574), and through Grant Number 1 UL1 RR024150-01. The content is solely the responsibility of the authors.",2 "Objectives The purpose of this review is to critically evaluate the available evidence from the published scientific literature on dementia care and service provision in rural and remote settings from the perspective of formal/paid caregiving, in order to assess the current state of knowledge, identify policy and practice implications, and make recommendations for future research. Methods A systematic review of the literature indexed in ISI Web of Knowledge, PsychInfo, Medline, Healthstar, CINAHL, EMBASE, and Sociological Abstracts was conducted. Data were extracted from papers meeting inclusion criteria: peer-reviewed papers that focused on dementia or Alzheimer's disease (AD), examined care or service provision in relation to persons with AD or dementia, and were relevant to rural or remote care or services. Results The search identified 872 articles for review, reduced to 72 after removing duplicates and articles not meeting criteria. Of the 72 remaining, 46 are included in this current review focusing on formal or paid care. A future review will focus on the 26 studies on informal/unpaid care. Six themes that correspond to the current state of knowledge in rural dementia care in the 46 included studies were: diagnostic processes, service provision, service models and programs, staff education and support needs, use of technology, and long-term care. Conclusions Despite the growing body of evidence over the 20 years covered by this review, much of the research is descriptive and/or based on small sample sizes, and distributed across the care continuum. Hence the body of evidence on which to base policy and program decisions remains limited. More research is needed that would support the development of comprehensive rural dementia care models.",0 "Summary The purpose of this study was to examine the physiological correlates of the Yo–Yo intermittent recovery test level 1 (Yo–Yo IR1) in basketball players. Twenty-two (22) male basketball players (means±S.D., body mass 72.4±11.4kg, height 181.7±6.9cm, age 16.8±2.0 years) were tested for maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) on a motorized treadmill. Lower limb explosive strength and anaerobic capacity were assessed using vertical jumps (CMJ), 15m shuttle running sprint (15mSR) and line drill (LD), respectively. The same test battery was replicated after an experimental basketball game in order to assess the selective effect of fatigue on physical performance. Pre- to post-game CMJ (40.3±5.7 versus 39.9±5.9cm) and 15mSR (5.80±0.25 versus 5.77±0.22s) performances were not significantly different (p >0.05). LD performance decreased significantly post-game (from 26.7±1.3 to 27.7±2.7s, p <0.001). Yo–Yo IR1 performances (m) were significantly related to VO2max (r =0.77, p =0.0001), speed at VO2max (r =0.71, p =0.0001) and %VO2max at VT (r =−0.60, p =0.04). 
Yo–Yo IR1 performance was significantly correlated with post-game LD decrements (r =−0.52, p =0.02). These findings show that Yo–Yo IR1 may be considered a valid basketball-specific test for the assessment of aerobic fitness and game-related endurance.",1 "Many enterprises have been devoting a significant portion of their budget to product development in order to distinguish their products from those of their competitors and to make them better fit the needs and wants of customers. Hence, businesses should develop product designs that satisfy the customers’ requirements, since this will increase the enterprise’s competitiveness and is an essential criterion for earning higher loyalty and profits. This paper investigates the following research issues in the development of new digital camera products: (1) What exactly are the customers’ “needs” and “wants” for digital camera products? (2) Which features are more important than others? (3) Can product design and planning for product lines/product collections be integrated with the knowledge of customers? (4) How can the rules help us to form a strategy when we design new digital cameras? To investigate these research issues, the Apriori and C5.0 algorithms, which are association rule and decision tree methodologies for data mining, are implemented to mine customers’ needs. Knowledge extracted from data mining results is illustrated as knowledge patterns and rules on a product map in order to propose possible suggestions and solutions for product design and marketing.",1 "Patient participation is important for improving outcomes, respect for self-determination and legal aspects in care. However, how patients with heart failure view participation and which factors may be associated with participation is not known. The aim of this study was therefore to describe the influence of structured home care on patient participation over time in 11 patients diagnosed with heart failure, and to explore factors associated with participation in care.",1 "In 2017, Puerto Rico sustained extensive damage from Hurricane Maria, increasing the risk of fires and carbon monoxide (CO) poisonings. Using a population-based, in-person survey of households with children less than 6 years old in Puerto Rico, we collected data in 2010 concerning the presence of smoke alarms and CO alarms in these households. We generated national estimates by extrapolating the number of households in each stratum using data from the 2010 Census. We determined which household characteristics predicted the presence of these alarms. Of 355 households analyzed, 31% had functional smoke alarms, or an estimated 109,773 households territory-wide. The presence of smoke alarms was associated with living in multifamily housing and no child in the household receiving government medical insurance. Public housing or publicly subsidized housing, as compared to owner-occupied housing and unsubsidized rental housing, was associated with having a functional smoke alarm in households with children aged less than 6 years. Based on only six houses having CO alarms, we estimated only 7685 (2%) households had CO alarms. 
The low prevalence of functional smoke or CO alarms 7 years before Hurricane Maria is unfortunate and should be remedied by ensuring that such alarms are widely installed in current rebuilding activities.",2 "Despite increasing interest in the attentional biases of pain patients towards pain-related stimuli, there have been no investigations of whether the main caregivers of chronic pain patients also selectively attend to pain-related information. We compared the attentional biases to painful or happy faces of 120 chronic pain patients, 118 caregivers, and 50 controls. Analyses found that both patients and caregivers demonstrated biases towards painful faces that were not observed in control participants or to happy faces. Those patients and caregivers who were high in fear of pain demonstrated greater biases than those low in fear of pain, and the biases of the high-in-fear-of-pain group differed significantly from zero. When sub-groups of caregivers were compared, it was found that biases towards painful faces were not observed for those caregivers who accurately identified the level of pain the patient currently reported. In contrast, those caregivers who overestimated or underestimated the patients’ pain demonstrated biases that were significantly greater than zero. These results add to the growing weight of evidence suggesting that biases towards pain-related stimuli are observed in chronic pain patients, but that the nature of the stimuli is important. In addition, the results suggest that caregivers, particularly those who either under- or overestimate the level of pain that the patient reports, also demonstrate similar biases. Future research should investigate the links between caregivers’ biases and the way in which caregivers respond to pain.",2 "Abstract Climate change mitigation requires the development of new processes to reduce the amount of carbon dioxide in the atmosphere. The products of CO2 utilization can supplement or replace chemical feedstocks, fine chemicals, pharmaceutical, and polymers. Carbon capture and utilization based on innovative electroreduction processes is one of the suggested routes to reduce the use of coal and oil as carbon sources due to the recycling of carbon. Some chemicals may be produced using carbon dioxide, decreasing the use of natural resources. The electrocatalytic processes to obtain formate and methanol as derived products from CO2 are discussed in this chapter, taking into account the electro-catalysts and the reactor design in the development of innovative processes.",0 "The full-length Schedule for Nonadaptive and Adaptive Personality - 2nd Edition (SNAP-2, Clark et al. 2014) and various derivative versions were developed as measures of normal- and pathological-range personality traits. We report herein on the development and initial validation of the SNAP Brief Other-Description Rating Form (SNAP-BORF), an abbreviated version of the SNAP Other-Description Rating Form (ORF; Harlan and Clark Assessment, 6, 131–145, 1999). Our goal was to create a more efficient SNAP informant short form by making items more succinct rather than by eliminating items. SNAP-ORF word count was reduced by 68%, and the 1.5-page SNAP-BORF can be completed in approximately 10 min, one-third to one-half the time required to complete the SNAP-ORF. Mean-level differences between the SNAP-ORF and SNAP-BORF scales were negligible for all scales except propriety. Using exploratory factor analysis, we found the SNAP-BORF had a three-factor structure (NA vs. 
Low PA, Disinhibition vs. Constraint, and Antagonism) broadly consistent with extant literature. The SNAP-BORF showed good convergent/discriminant validity with respect to the SNAP-family measures as well as measures of normal personality and symptoms of depression, anxiety, and worry. Results indicated that the SNAP-BORF is a useful measure when a very brief informant assessment of adaptive and maladaptive personality is needed.",1 "Owing to data collection challenges, the vertical variation in population in cities and particulate air pollution are typically not accounted for in exposure assessments, which may lead to misclassification of exposures based on elevation of residency. To better assess this misclassification, the vertical distribution of the potentially highly exposed population (PHEP), defined as all residents within the 100-m buffer zone of above-ground highways or the 200-m buffer zone of a highway-tunnel exit, was estimated for four floor categories in Boston’s Chinatown (MA, USA) using the three-dimensional digital geography methodology. Vertical profiles of particle number concentration (7–3000 nm; PNC) and particulate matter (PM2.5) mass concentration were measured by hoisting instruments up the vertical face of an 11-story (35-m) building near the study area throughout the day on multiple days. The concentrations from all the profiles (n=23) were averaged together for each floor category. As measurement elevation increased from 0 to 35 m, PNC decreased by 7.7%, compared with 3.6% for PM2.5. PHEP was multiplied by the average PNC for each floor category to assess exposures for near-highway populations. The results show that adding temporally averaged vertical air pollution data had a small effect on residential ambient exposures for our study population; however, greater effects were observed when individual days were considered (e.g., winds were off the highways).",1 "The body mass index (BMI) of breakfast eaters is frequently reported to be lower compared with that of breakfast skippers. This is not explained by differences in energy intakes, indicating there may be other mechanisms serving to drive this paradoxical association between breakfast and BMI. This study aimed to investigate the effect of eating breakfast versus morning fasting on measures predominantly of metabolism in lean and overweight 10 participants who habitually eat or skip breakfast.",1 "A number of pharmacological agents for treating negative symptoms in schizophrenia are currently in development. Unresolved questions regarding the design of clinical trials in this area were discussed at an international meeting in Florence, Italy in April 2012. 25 participants included representatives from academia, the pharmaceutical industry, and the European Medicines Agency (EMA). Prior to the meeting, 25 participants submitted key questions for debate and discussion. Responses to the questions guided the discussion during the meeting. 
The group reached agreement on a number of issues: (1) study subjects should be under the age of 65; (2) subjects should be excluded for symptoms of depression that do not overlap with negative symptoms; (3) functional measures should not be required as a co-primary in negative symptom trials; (4) information from informants should be included for ratings when available; (5) Phase 2 negative symptom trials should be 12weeks and 26weeks is preferred for Phase 3 trials; (6) prior to entry into a negative symptom study, subjects should demonstrate clinical stability for a period of 4 to 6months by collection of retrospective information; and (7) prior to entry, the stability of negative and positive symptoms should be confirmed prospectively for four weeks or longer. The 25 participants could not reach agreement on whether predominant or prominent negative symptoms should be required for study subjects.",1 "Hearing impairment is the most common body system disability in veterans. In 2008, nearly 520,000 veterans had a disability for hearing loss through the Department of Veterans Affairs (VA). Changes in eligibility for hearing aid services, along with the aging population, contributed to a greater than 300% increase in the number of hearing aids dispensed from 1996 to 2006. In 2006, the VA committed to having no wait times for patient visits while providing quality clinically-appropriate care. One approach to achieving this goal is the use of group visits as an alternative to individual visits. We sought to determine: 1) if group hearing aid fitting and follow-up visits were at least as effective as individual visits, and 2) whether group visits lead to cost savings through the six month period after the hearing aid fitting. We describe the rationale, design, and characteristics of the baseline cohort of the first randomized clinical trial to study the impact of group versus individual hearing aid fitting and follow-up visits.",1 "Subjective ratings of fatigue are increasingly being used as part of a suite of tools to assess fatigue-related risk on the road and in the workplace. There is some debate however, as to whether individuals can accurately gauge their own fatigue states, particularly under conditions of sleep restriction. It is also unclear which references are used by individuals to assess fatigue – for example prior sleep, time of day, workload, or previous ratings. The current study used a sophisticated laboratory protocol to examine the independent contributions of sleep, circadian phase and sleep debt to fatigue ratings. Importantly, participants had no knowledge of time of day, how much sleep they were getting, or how long they were awake. Twenty-eight healthy, young males participated in one of two conditions of a 28h forced desynchrony protocol – severe sleep restriction (4.7h sleep and 23.3h wake) or moderate sleep restriction (7h sleep and 21h wake). Fatigue ratings were provided prior to and following each sleep period using the Samn–Perelli fatigue scale. Repeated measures ANOVAs were used to analyse the effects of circadian phase, sleep dose and study day. Results demonstrated an effect of circadian phase on both pre-sleep and post-sleep fatigue ratings. The significant effect of study day is interpreted as an effect of circadian time, as opposed to accumulating sleep debt. An effect of sleep dose was only seen in post-sleep fatigue ratings. 
The findings suggest that post-sleep fatigue ratings may be sensitive to prior sleep and may be useful as an indicator of fatigue-related risk, particularly when triangulated with information about recent total sleep time.",1 Personality traits are related to risk of hazardous alcohol use and alcohol dependence. The Substance Use Risk Profile Scale (SURPS) measures personality traits associated with addictive substance abuse. We examined the psychometric properties of the SURPS in a Lithuanian population.,1 "The advanced technology of computing systems was followed by the rapid improvement of medical instrumentation and patient record management systems. The typical examples are the hospital information system (HIS) and the picture archiving and communication system (PACS), which computerized the management procedure of medical records and images in the hospital. Because these systems were built and used in hospitals, doctors outside the hospital have problems accessing them immediately in emergent cases. To solve these problems, this paper addressed the realization of a system that could transmit the images acquired by medical imaging systems in the hospital to the remote 16 doctors’ handheld PDAs using the CDMA cellular phone network. The system consists of a server and a PDA. The server was developed to manage the accounts of doctors and patients and allocate the patient images to each doctor. The PDA was developed to display patient images through a remote server connection. To authenticate the personal user, the remote data access (RDA) method was used for the PDA to access the server database, and the file transfer protocol (FTP) was used to download 22 patient images from the remote server. In laboratory experiments, it was calculated to take ninety seconds to transmit thirty images with 832 × 488 resolution, 24-bit depth, and 0.37 Mb size. This result showed that the developed system allows the remote 16 doctors to receive and review patient images immediately in emergent cases.",1 "School underachievement means that a certain quantity of human resources is taken out of the educational circuit. The purpose of this study is to investigate this phenomenon at high school age in order to identify personality correlates according to age, gender and type of high school they attend (sciences or humanities). We tested 120 students from four classes, two of sciences and two of humanities, from two high schools in Brasov. The predominance of verbalism in education leads to an insufficient valorization of boys. Excitement-seeking, the need for action, and the role of peers are significantly limited by Romanian education. The progressive character of school underachievement imposes measures of structural change to increase the opportunity for students’ school adjustment.",2 "Smart home systems are designed as platforms for connecting sensors, home appliances, and devices to exchange data and, ultimately, to provide useful services to home residents. However, such systems are vulnerable to cybersecurity attacks that can affect the reliability and integrity of the delivered services. Sensors, planted in smart homes or equipped with smart appliances, are highly exposed to identity theft. Intruders can recognize sensors by understanding the exchanged data, their locations, or their associated services. Such information might make the home resident vulnerable to life attacks. Therefore, protecting sensors’ identities in smart home systems is of high interest in this domain. 
This paper introduces a novel technique that protects sensors’ identity from being recognized through cordless communication environments. Our proposed approach utilizes a three-phase technique that controls a synchronized queue among connected sensors and keeps their identity hidden from outsiders. The proposed approach preserves the linearity of time that is required to manage the protection of the home network. To validate the performance of our proposed approach, we conducted experiments on four different smart homes datasets. Furthermore, we performed a sensitivity analysis to measure how our proposed approach is affected by different environmental variables. The results indicated that the proposed approach provides a significant performance in protecting sensors identities in smart home area networks. Furthermore, during the sensitivity analysis, we found that our proposed technique’s performance is highly affected by the threshold value that defines each sensor’s time interval. ",1 "The effect of crowding on the identification of words was examined in 1007 normal readers and 2050 subjects with developmental dyslexia. In Experiment 1, a matching task was used. Words were presented either alone or embedded in other words. Vocal reaction times (RT) of 2000 dyslexics were slower and more sensitive to the presence of the surrounding stimuli than those of control subjects. Similar results were obtained in a control experiment using the same task for strings of symbols (isolated or crowded) instead of words. These data indicate that differences in crowding in control and dyslexic subjects arise at a pre-linguistic level. In Experiment 2, vocal RTs to word reading were measured. Two conditions putatively reducing the effect of crowding were tested: increasing inter-letter spacing and blurring. A moderate increase of inter-letter spacing produced faster vocal RTs in dyslexics, while no effect was present in normal controls. Moderate blurring of stimuli did not change dyslexics' RTs, while normal readers became slower. Group and individual results are discussed to evaluate the extent to which crowding contributes to the genesis of developmental dyslexia.",2 "Developmental dyslexia is the most common learning disability in school-aged children with an estimated incidence of five to ten percent. The cause and pathophysiological substrate of this developmental disorder is unclear. Recently, a possible involvement of the cerebellum in the pathogenesis of dyslexia has been postulated. In this study, 15 dyslexic children and 7 age-matched control subjects were investigated by means of functional neuroimaging (fMRI) using a noun-verb association paradigm. Comparison of activation patterns between dyslexic and control subjects revealed distinct and significant differences in cerebral and cerebellar activation. Control subjects showed bilaterally well-defined and focal activation patterns in the frontal and parietal lobes and the posterior regions of the cerebellar hemispheres. The dyslexic children, however, presented widespread and diffuse activations on the cerebral and cerebellar level. Cerebral activations were found in frontal, parietal, temporal and occipital regions. Activations in the cerebellum were found predominantly in the cerebellar cortex, including Crus I, Crus II, hemispheric lobule VI, VII and vermal lobules I, II, III, IV and VII. 
This preliminary study is the first to reveal a significant difference in cerebellar functioning between dyslexic children and controls during a semantic association task. As a result, we propose a new hypothesis regarding the pathophysiological mechanisms of developmental dyslexia. Given the sites of activation in the cerebellum in the dyslexic group, a defect of the intra-cerebellar distribution of activity is suspected, suggesting a disorder of the processing or transfer of information within the cerebellar cortex.",1 "Substantial correlational evidence suggests that prefrontal regions are critical to honest and dishonest behavior, but causal evidence specifying the nature of this involvement remains absent. We found that lesions of the human dorsolateral prefrontal cortex (DLPFC) decreased the effect of honesty concerns on behavior in economic games that pit honesty motives against self-interest, but did not affect decisions when honesty concerns were absent. These results point to a causal role for DLPFC in honest behavior. ",1 "The metabolic dependencies of androgen receptor (AR)-driven growth in prostate adenocarcinoma are largely unknown but could represent a therapeutic target when hormonal manipulations fail. Here the authors demonstrate that the mitochondrial pyruvate carrier (MPC) is transcriptionally regulated by AR and that MPC inhibition suppresses tumour growth in hormone-responsive and castrate-resistant conditions. ",1 "Background Previous studies indicate that transcranial direct current stimulation (tDCS) with anode over motor cortex (M1) and cathode over contralateral supraorbital region (SO) may be effective in reducing pain, but these studies are limited in number and have not focused on older adults with osteoarthritis (OA). Objective To evaluate the preliminary efficacy and safety of M1-SO applied tDCS on clinical pain severity and mobility performance in adults with knee OA pain. Methods. Forty (40) 50- to 70-year-old community-dwelling participants with knee OA were randomly assigned to receive five daily sessions of 2 mA tDCS for 20 min (n = 20) or sham tDCS (n = 20). We measured clinical pain severity via Numeric Rating Scale, Western Ontario and McMaster Universities Osteoarthritis Index, and Short-Form McGill Pain Questionnaire. In addition, we measured mobility performance using the 6-Minute Walk Test and the Short Physical Performance Battery. Moreover, we obtained a sensation/safety questionnaire and measured cognition changes using the PROMIS-Applied Cognition-Abilities-Short Form 8a. Results Active tDCS over M1-SO significantly reduced Numeric Rating Scale of pain compared to sham tDCS after completion of the five daily sessions, and remained up to three weeks. No other measures were significantly different from sham. Participants tolerated tDCS over M1-SO well without serious adverse effects or cognition changes. Conclusion Although not consistent in all pain measurements, our findings demonstrate promising clinical efficacy for reduction in pain perception for older adults with knee OA. Trial registration ClinicalTrials.gov Identifier NCT02512393.",2 "Background/purpose Vagus nerve stimulation (VNS) has been demonstrated to be safe and effective for adults and children with drug-resistant epilepsy and is able to improve most types of epilepsy. The aim of this study, in a paediatric population, was to assess the overall efficacy of vagus nerve stimulation on seizures, to assess tolerability and quality of life. 
Methods This single-centre, retrospective study reviewed the files of 29 children in whom a vagus nerve stimulator was implanted between 1995 and 2012. The response rate (greater than 50% reduction of the seizure frequency), antiepileptic efficacy according to the type of epilepsy or age at implantation or age at onset of epilepsy, the time-course of seizures, adverse effects, overall quality of life and number of hospitalisations were studied. Results In our population, vagus nerve stimulation achieved a significant reduction in the seizure frequency throughout follow-up (p = 0.015). Response rates were 59% at 3 months, and 66% at 6 months, and the response rate then remained stable at about 70%. Stimulation tended to be more effective in patients with non-idiopathic partial epilepsy than in patients with non-idiopathic and idiopathic generalised epilepsy (0.01 < p < 0.11). No other predictive factors of efficacy were identified. Patients, parents, caregivers reported improvement in overall quality of life in 38% of patients during clinical interviews. A significant reduction in the number of hospitalisations due to a reduction of seizure frequency was observed after implantation (p = 0.03). VNS was stopped because of complications or insufficient efficacy in 9 cases. Conclusion Vagus nerve stimulation is a safe and effective treatment option in children with drug-resistant epilepsy who are not candidates for surgery.",1 Numerous breast cancer patients experience cognitive changes during and after chemotherapy. Chemotherapy-related cognitive impairment can significantly affect quality of life. This pilot study attempted to determine the effects of a compensatory cognitive training on the objective and subjective cognitive functioning of breast cancer patients receiving adjuvant chemotherapy.,1 "Corrosive (caustic) material ingestion remains a major health issue, particularly in developing countries. The management strategy after corrosive ingestion should be planned according to the signs and symptoms. The management of corrosive ingestion based on endoscopic grading, nothing by mouth, and barium studies should be abandoned. With the new management protocol, esophageal stricture can be predicted with high accuracy using the simple new prognostic DROOL score (≤ 4) rather than endoscopic grading, reduced by immediate oral feeding as soon as the patient can swallow saliva instead of nothing by mouth, diagnosed earlier (10–14 days) by fluoro-endoscopic balloon-assisted esophageal examination for 24 patients with persistent dysphagia instead of relying on a barium study (≥ 21 days), and adequately treated by initiating balloon dilation earlier during the same anesthesia procedure. Fluoroscopically guided balloon dilatation with large balloons (18–20 mm) seems to be safe, with a low frequency of complications and a high success rate. If dilatation fails after a few months, esophagectomy and replacement surgery using the stomach should be considered. The increased risk of developing esophageal carcinoma after ingestion of corrosive substances should be kept in mind.",1 "Although imitation problems have been associated with autism for many years, the underlying mechanisms of these problems remain subject to debate. In this article, the question whether imitation problems are caused by selection or correspondence problems is explored and discussed. This review revealed that hypotheses on the nature of imitation problems in autism are complicated and inconclusive at the present time. 
There is some evidence for impaired selection, especially implicating poor preferential attention to biological motion and poor ascription of intention to action. There is also some evidence that both transformations of perspectives and mapping of visual to motor information are impaired, characterized as correspondence problems. However, it is not yet clear how poor selection processes contribute to correspondence problems and vice versa. Insight into this interaction may provide a valuable contribution to our understanding of imitation problems in autism. For further research we recommend that tasks should be constrained to target as few mechanisms as possible in given experiments.",0 "to STRONGSA or WEAKSA for Stage 2, which, along with the trials in Stages 1 and 3, was used for student modeling following Equation 3. As we will describe in Section V-B, we empirically found that STRONGSA led to stronger student modeling more aligned with an expert coach. The remaining 30 participants were therefore assigned to STRONGSA for Stage 2, and randomly assigned to receive no assistance or SKILLSA in Stage 4. The particular coaching intervention that SKILLSA uses for each participant is based on argmax_{z ∈ {steer, brake, throttle}} zpd(z), or the skill that received the highest score in our student modeling based on that participant's trajectories in Stages 1-3. Note that while we set the size of the set Z_g to 8, we restrict coach actions to only consider steer, brake, and throttle. Upon completing the study, all participants were asked to fill out a feedback form, where they reflected on the effectiveness of the five-minute practice session, and provided additional feedback on their experience with the simulator and assistance. We provide more details, including the full set of instructions participants received, in the Appendix. Overall, the structure of the study allowed us to assess the influence of shared autonomy on learning by comparing each participant's Baseline and Evaluation rounds, as well as systematically evaluate each component of Z-COACH. V. EXPERIMENTAL RESULTS Recall that Z-COACH consists of three steps: (i) task skill discovery, (ii) student modeling with shared autonomy to estimate how much a skill is within a student's ""zone of proximal development"", and (iii) using skill-focused shared autonomy to help the student improve. We now evaluate each step.",2 "Accumulating evidence suggests that not only diseases of old age, but also normal aging, affect elderly adults’ ability to draw on the framework theories that structure our abstract causal-explanatory knowledge, knowledge that we use to make sense of the world. One such framework theory, the cross-culturally universal vitalist biology, gives meaning to the abstract concepts life and death. Previous work shows that many elderly adults are animists, claiming that active, moving entities such as the sun and the wind are alive (Zaitchik & Solomon, 2008). Such responses are characteristic of young children, who, lacking an intuitive theory of biology, distinguish animals from non-animals on the basis of a theory of causal and intentional agency. What explains such childlike responses? Do the elderly undergo semantic degradation of their intuitive biological theory? Or do they merely have difficulty deploying their theory of biology in the face of interference from the developmentally prior agency theory? Here we develop an analytic strategy to answer this question. 
Using a battery of vitalist biology tasks, this study demonstrates—for the first time—that animism in the elderly is due to difficulty in deployment of the vitalist theory, not its degradation. We additionally establish some powerful downstream consequences of theory deployment difficulties, demonstrating that the elderly’s use of the agency theory is not restricted to animist judgments—rather, it pervades their explicit reasoning about animates and inanimates. Extending the investigation, we identify specific cognitive mechanisms implicated in adult animism, finding that differences between young and elderly adults are mediated and moderated by differences in inhibition and shifting mechanisms. The analytic strategy developed here could help adjudicate between degradation and deployment in other conceptual domains and other populations.",1 "While Cave Automatic Virtual Environment (CAVE) systems have long enabled room-scale virtual reality and various kinds of interactivity, their content has largely remained predetermined. We present \textit{Storycaster}, a generative AI CAVE system that transforms physical rooms into responsive storytelling environments. Unlike headset-based VR, \textit{Storycaster} preserves spatial awareness, using live camera feeds to augment the walls with cylindrical projections, allowing users to create worlds that blend with their physical surroundings. Additionally, our system enables object-level editing, where physical items in the room can be transformed to their virtual counterparts in a story. A narrator agent guides participants, enabling them to co-create stories that evolve in response to voice commands, with each scene enhanced by generated ambient audio, dialogue, and imagery. Participants in our study (n=13) found the system highly immersive and engaging, with narrator and audio most impactful, while also highlighting areas for improvement in latency and image resolution.",1 "This study investigates whether demographic factors shape adoption and attitudes among employees toward artificial intelligence (AI) technologies at work. Building on an extended Unified Theory of Acceptance and Use of Technology (UTAUT), which reintroduces affective dimensions such as attitude, self-efficacy, and anxiety, we surveyed 2,257 professionals across global regions and organizational levels within a multinational consulting firm. Non-parametric tests examined whether three demographic factors (i.e., years of experience, hierarchical level in the organization, and geographic region) were associated with AI adoption, usage intensity, and eight UTAUT constructs. Organizational level significantly predicted AI adoption, with senior employees showing higher usage rates, while experience and region were unrelated to adoption. Among AI users (n = 1,256), frequency and duration of use showed minimal demographic variation. However, omnibus tests revealed small but consistent group differences across several UTAUT constructs, particularly anxiety, performance expectancy, and behavioral intention, suggesting that emotional and cognitive responses to AI vary modestly across contexts. These findings highlight that demographic factors explain limited variance in AI acceptance but remain relevant for understanding contextual nuances in technology-related attitudes. 
The results underscore the need to integrate affective and organizational factors into models of technology acceptance to support equitable, confident, and sustainable engagement with AI in modern workplaces.",2 "Social media platforms have transformed global communication and interaction, with TikTok emerging as a critical tool for education, connection, and social impact, including in contexts where infrastructural resources are limited. Amid growing political discussions about banning platforms like TikTok, such actions can create significant ripple effects, particularly impacting marginalized communities. We present a study on Nepal, where a TikTok ban was recently imposed and lifted. As a low-resource country in transition where digital communication is rapidly evolving, TikTok enables a space for community engagement and cultural expression. In this context, we conducted an online survey (N=108) to explore user values, experiences, and strategies for navigating online spaces post-ban. By examining these transitions, we aim to improve our understanding of how digital technologies, policy responses, and cultural dynamics interact globally and their implications for governance and societal norms. Our results indicate that users express skepticism toward platform bans but often passively accept them without active opposition. Findings suggest the importance of institutionalizing collective governance models that encourage public deliberation, nuanced control, and socially resonant policy decisions.",2 "Trust is one of the most important factors shaping whether and how people adopt and rely on artificial intelligence (AI). Yet most existing studies measure trust in terms of functionality, focusing on whether a system is reliable, accurate, or easy to use, while giving less attention to the social and emotional dimensions that are increasingly relevant for today's generative AI (GenAI) systems. These systems do not just process information; they converse, respond, and collaborate with users, blurring the line between tool and partner. In this study, we introduce and validate the Human-AI Trust Scale (HAITS), a new measure designed to capture both the rational and relational aspects of trust in GenAI. Drawing on prior trust theories, qualitative interviews, and two waves of large-scale surveys in China and the United States, we used exploratory (n = 1,546) and confirmatory (n = 1,426) factor analyses to identify four key dimensions of trust: Affective Trust, Competence Trust, Benevolence & Integrity, and Perceived Risk. We then applied latent profile analysis to classify users into six distinct trust profiles, revealing meaningful differences in how affective-competence trust and trust-distrust frameworks coexist across individuals and cultures. Our findings offer a validated, culturally sensitive tool for measuring trust in GenAI and provide new insight into how trust evolves in human-AI interaction. By integrating instrumental and relational perspectives of trust, this work lays the foundation for more nuanced research and design of trustworthy AI systems.",1 "Abstract Project finance has evolved during the years and sectors of applications have changed together with the geographic areas where the technique has been used. Originally project finance was used in sectors with a stable captive market, low technology risk and low country risk. During the years, the technique has been increasingly implemented in riskier sectors and riskier countries. 
This chapter presents the historical evolution of project finance and PPPs. We first outline the worldwide trends with details regarding the use of the technique in different sectors and geographic macroregions. Then, we present a focus on the PPP subsegment. Here, we carry out the analysis distinguishing between developing and developed countries. Finally, a special focus on the European PPP market is provided.",1 "Although cognitive neuroscience has made valuable progress in understanding the role of the prefrontal cortex in human intelligence, the functional networks that support adaptive behavior and novel problem solving have yet to be well characterized. Here, we studied 158 human brain lesion patients to investigate the cognitive and neural foundations of key competencies for fluid intelligence and working memory. We administered a battery of neuropsychological tests, including the Wechsler Adult Intelligence Scale (WAIS) and the N-Back task. Latent variable modeling was applied to obtain error-free scores of fluid intelligence and working memory, followed by voxel-based lesion-symptom mapping to elucidate their neural substrates. The observed latent variable modeling and lesion results support an integrative framework for understanding the architecture of fluid intelligence and working memory and make specific recommendations for the interpretation and application of the WAIS and N-Back task to the study of fluid intelligence in health and disease.",2 "Vote functions are important devices for providing a ‘big picture’ of developments in electoral politics. However, the limited degrees of freedom upon which they are typically based mean that vote functions are rarely able to properly discriminate between competing accounts of electoral outcomes. They also fail adequately to capture the impact of ‘events’ or to recognise the extent to which the context of electoral competition can vary over time. Popularity functions that restrict themselves to relatively limited periods of time are capable of addressing all of these concerns: they enjoy more degrees of freedom, can take full account of the impact of ‘events’, and can focus on very specific electoral periods. Interestingly, however, Lewis-Beck’s vote function for post-war British politics contains variables that are very similar to those in a popularity function that was developed for the most recent (2001) British general election. In the British context, vote and popularity functions seem to provide quite similar accounts of the main drivers of party support. The advantage of both approaches is that they provide clearly specified, unambiguous—and falsifiable—accounts of the phenomenon under investigation.",1 "Cigarette smoke condensates (CSCs) are complex mixtures of compounds that contain both direct and indirect mutagens/carcinogens. To detect genotoxicity of CSCs in vitro, a combination of various enzymes (e.g. activation and detoxification enzymes) called S9 is usually added. However, as S9 may induce cytotoxicity in target cells, it is unclear whether the addition of S9 can impact CSC-induced toxicity. Here, differences in cytogenotoxicity between CSCs in the presence or absence of S9 were studied using three in vitro assays (neutral red uptake assay, comet assay, and TCR gene mutation test) in human peripheral lymphocytes, which were exposed to CSCs at doses of 25, 50, 75, 100 and 125 μg/ml for 4 h. 
Assay results showed that both CSCs+S9 and CSCs−S9 could induce a dose-dependent elevation of cytogenotoxic effects in human lymphocytes with some differences between the two groups. The cytogenotoxicity induced by CSCs−S9 was significantly higher than that induced by CSCs+S9 in all three assays. The comet and NRU assays revealed that a dose–response relationship of cytogenotoxicity induced by CSCs+S9 was less typical than that induced by CSCs−S9, possibly due to specific cytogenotoxic agents in CSCs and enzymes contained in the S9 mixture. Thus, the three in vitro assays used in the present study are suitable for detecting cytogenotoxic effects in human lymphocytes induced by CSCs. Furthermore, the cytogenotoxicity induced by both CSCs+S9 and CSCs−S9 should be measured simultaneously when assessing and comparing the biological activity of different CSCs.",1 " Medication use is a potentially modifiable risk factor for falling; psychotropic and cardiovascular drugs have been identified as the main drug groups that increase fall risk. However, evidence is mainly based on studies that recorded falls retrospectively and/or did not determine medication use at the time of the fall. Therefore, we investigated the associations indicated in the literature between medication use and falls, using prospectively recorded falls and medication use determined at the time of the fall.",1 "Past studies on the factor validity of the Trait subscale of Spielberger’s State-Trait Anxiety Inventory (STAI-T) do not unanimously agree on its structure. In fact, researchers are still debating whether the STAI-T is unidimensional or multidimensional. Our aim was to clarify what the STAI-T measures. The STAI-T, the Beck Depression Inventory–II, the Teate Depression Inventory, and the Beck Anxiety Inventory were administered to 1124 psychiatric outpatients and to 877 healthy subjects. A confirmatory factor analysis was performed in order to compare various models in the literature. The internal consistency and convergent and discriminant validity of the STAI-T as well as its factorial subscales were assessed. The one-construct two-method (i.e., the STAI-T measures one substantive anxiety construct plus artifacts due to negative–positive item polarity) and the bifactor (i.e., the STAI-T comprises two first-order specific factors [“Anxiety” and “Depression”] and one first-order general factor) models were the best-fitting solutions for the STAI-T in both the clinical and nonclinical samples. The STAI-T total score correlated more strongly with measures of depression than with a concurrent measure of anxiety. The STAI-T should be considered a measure of general negative affect, including specific aspects of cognitive anxiety and depression together.",2 "Summary This paper explores households’ coping strategies in rural South Africa, where HIV/AIDS morbidity and mortality are having profound effects on household resources. Older women’s pensions play a potentially crucial role in multi-generational households during crises and for day-to-day subsistence. We conducted semi-structured interviews with 30 elderly women from the MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt) fieldsite, who were eligible for the South African non-contributory pension. 
Although we stratified our sample by household mortality experience, the area’s high levels of migration, unemployment, and HIV/AIDS prevalence made our respondents’ pensions an important, regular, and reliable source of household income regardless of their households’ mortality profile.",1 "The level of functioning of individuals with autism spectrum disorder (ASD) varies widely. To better understand the neurobiological mechanism associated with high-functioning ASD, we studied the rare case of a female patient with an exceptional professional career in the highly competitive academic field of Mathematics. According to the Research Domain Criteria (RDoC) approach, which proposes to describe the basic dimensions of functioning by integrating different levels of information, we conducted four fMRI experiments targeting the (1) social processes domain (Theory of mind (ToM) and face matching), (2) positive valence domain (reward processing), and (3) cognitive domain (N-back). The patient’s data were compared with those of 14 healthy controls (HC). Additionally, we assessed the subjective experience of our case during the experiments. The patient showed increased response times during face matching and achieved a higher total gain in the Reward task, whereas her performance in N-back and ToM was similar to HC. Her brain function differed mainly in the positive valence and cognitive domains. During reward processing, she showed reduced activity in a left-hemispheric frontal network and cortical midline structures but increased connectivity within this network. During the working memory task, the patient’s brain activity and connectivity in left-hemispheric temporo-frontal regions were elevated. In the ToM task, activity in posterior cingulate cortex and temporo-parietal junction was reduced. We suggest that the high level of functioning in our patient is related to effects on brain connectivity rather than to local cortical information processing, and that subjective report provides a fruitful framework for interpretation.",1 "We examined the stability of and cross-influences between externalizing behaviors and intervention engagement among children participating in a randomized clinical trial of an intervention for youth with disruptive behavior. Analyses also accounted for the influence of caregiver depression, family relationship quality, and sociodemographic factors (race, income) on the relationship between behaviors and intervention engagement. Analyses were based on 118 children participating in the Coping Power intervention. Composite variables were created to represent externalizing behaviors and intervention engagement constructs. Associations between these composite variables were examined over 24 treatment sessions. Findings indicated a regressive relationship among externalizing behaviors, i.e., baseline externalizing behaviors were positively associated with immediate follow-up behaviors. There were also dynamic relationships observed among engagement constructs. Notably, engagement with in-session activities during sessions 1–8 was positively associated with out-of-session activity engagement during the same treatment time period. Engagement with out-of-session activities during sessions 1–8 was positively associated with in-session activity engagement during sessions 9–16, indicating a complete mediation between early and middle in-session engagement through the mechanism of early out-of-session engagement. 
A cross-lagged relationship was observed: middle in-session engagement was negatively associated with externalizing behaviors at immediate follow-up. Finally, an interaction of race by income on immediate follow-up externalizing behaviors was observed, such that Black children’s externalizing behaviors remained static regardless of income level, while White children’s behaviors decreased with higher income. Our findings support the contention that focusing on intervention engagement may be especially important in prevention interventions.",2 "Drawing from the dual process model of morality and life history theory, the present research examined the role of cognitive and emotional processes as bridges between basic environmental challenges (i.e., unpredictability and competition) and other-centered moral orientation (i.e., prioritizing the welfare of others). In two survey studies, cognitive and emotional processes represented by future-oriented planning and emotional attachment, respectively (Study 1, N = 405), or by perspective taking and empathic concern, respectively (Study 2, N = 424), positively predicted other-centeredness in prosocial moral reasoning (Study 1) and moral judgment dilemmas based on rationality or intuition (Study 2). Cognitive processes were more closely related to rational aspects of other-centeredness, whereas the emotional processes were more closely related to the intuitive aspects of other-centeredness (Study 2). Finally, the cognitive and emotional processes also mediated negative effects of unpredictability (i.e., negative life events and childhood financial insecurity), as well as positive effects of individual-level, contest competition (i.e., educational and occupational competition) on other-centeredness. Overall, these findings support the view that cognitive and emotional processes do not necessarily contradict each other. Rather, they might work in concert to promote other-centeredness in various circumstances and might be attributed to humans’ developmental flexibility in the face of environmental challenges.",2 "Neuropsychological evaluation of a patient's cognitive capabilities before and after epilepsy surgery is essential in elective epilepsy surgery. On the one hand, neuropsychology provides accessory information regarding the localization and lateralization of epilepsy-associated cognitive impairment; on the other hand, it is a useful tool for quality and outcome control of epilepsy surgery which helps to make surgery more effective and safe. Evaluation of the adequacy of the brain tissues to be resected and of the patient's mental reserve capacities allows for a prediction of the postoperative cognitive development. Successful surgery can stop mental decline due to chronic epilepsy and it can reverse this negative trend by release of functions and capacities that were secondarily affected before surgery. However, surgery bears the risk of additional impairments which, in interaction with normal or even pathological processes of mental aging, may accelerate cognitive decline at an older age. From a neuropsychological point of view, early recognition of pharmacoresistance is important along with early and complete seizure control with maximal sparing of functional tissues.",1 "Strength-based parenting (SBP) is a style of parenting characterized by knowledge and encouragement of a child’s unique personality, abilities, talents, and skills (i.e., strengths). 
Recent studies have demonstrated a unique contribution of SBP, above other parenting styles, in predicting a range of wellbeing indicators in adolescents. Given that wellbeing supports learning, and SBP predicts wellbeing, it is also plausible that adolescents with strength-based parents will have greater academic achievement. At the beginning of term, students from a public secondary school in Australia (N = 741, M_age = 13.70, SD = 1.33; 50% female) completed a self-report survey measuring perceptions of parental style, engagement, and perseverance. Subsequent academic results were obtained 3 months later. SBP predicted higher wellbeing in the form of adolescent engagement and perseverance. SBP also demonstrated a significant effect on academic achievement which was mediated by perseverance, but not engagement. Thus, results supported a model in which adolescents with strength-based parents achieved higher grades via increased perseverance. Results reaffirm the importance of the parent-student link, and dispositional qualities of engagement and perseverance, in predicting educational outcomes such as grades. This study extends positive education research beyond the classroom by demonstrating that positive parenting techniques like SBP can predict student wellbeing and academic achievement.",2 "Neurocognitive enhancement therapy (NET) is a remediation program for the persistent and function-limiting cognitive impairments of schizophrenia. In a previous study in veterans, NET improved work therapy outcomes as well as executive function and working memory. The present study aimed to determine whether NET could enhance functional outcomes among schizophrenia and schizoaffective patients in a community mental health center receiving community-based vocational services. Method: Patients (N = 72) participated in a hybrid transitional and supported employment program (VOC) and were randomized to either NET+VOC or VOC only. NET+VOC included computer-based cognitive training, work feedback, and a social information-processing group. VOC only also included two weekly support groups. Active intervention lasted 12 months, with a 12-month follow-up. Follow-up rate was 100%. Results: NET+VOC patients worked significantly more hours during the 12-month follow-up period, reached a significantly higher cumulative rate of competitive employment by the sixth quarter, and maintained significantly higher rates of employment. Conclusion: NET training improved vocational outcomes, suggesting the value of combining cognitive remediation with other rehabilitation methods to enhance functional outcomes.",2 "Depression affects 7% of the elderly population, and it often remains misdiagnosed or untreated. Peripheral biomarkers might aid clinicians by allowing more accurate and well-timed recognition of the disease. We sought to determine if plasma protein levels predict the severity of depressive symptomatology or distinguish patients from healthy individuals. The severity of depressive symptoms and global cognitive functioning were assessed by the Geriatric Depression Scale (GDS) and Mini-Mental State Examination (MMSE) in 152 elderly subjects, 76 of whom had major depressive disorder (MDD). Plasma levels of 24 proteins were measured by multiplexing and analyzed as continuous predictors or dichotomized using the median value. 
The association between individual plasma proteins and MDD risk or depressive symptom severity was investigated using multiple logistic and linear regressions including relevant covariates. Sensitivity analyses were performed excluding cognitively impaired individuals or non-acute patients with MDD. After adjusting for possible confounders and false discovery rate (FDR) correction, we found lower Fetuin-A levels in MDD patients vs. controls (p_FDR = 1.95 × 10⁻⁶). This result was confirmed by the sensitivity and dichotomized analyses. Lower prolactin (PRL) levels predicted more severe depressive symptoms in acute MDD patients (p_FDR = 0.024). Fetuin-A is a promising biomarker of MDD in the elderly as this protein was negatively associated with the disorder in our sample, regardless of global cognitive functioning. Lower PRL levels may be a peripheral signature of impaired neuroprotective processes and serotoninergic neurotransmission in more severely depressed patients.",2 "The purpose of this study was to compare the cognitive profiles of men and women with clinically defined schizotypal personality disorder (SPD). We examined the neuropsychological profile of SPD in 26 right-handed females and 31 right-handed males who met DSM-IV criteria for SPD, and matched comparison subjects. Cognitive performance was assessed on measures of abstraction, verbal and spatial intelligence, learning and memory, language, attention, and motor skills. Neuropsychological profiles were constructed by standardizing test scores based on the means and standard deviations of comparison groups matched for sex, age, handedness, ethnicity and parental SES. Overall, SPD subjects showed mild, general decrements in performance in most cognitive domains. However, unlike male SPD subjects, female SPDs did not show relative deficits in verbal learning and abstraction. The results suggest a less severe pattern of cognitive deficits in women with SPD compared to men, consistent with hypotheses of gender differences in cognitive function in schizophrenia.",2 "Individual differences in vulnerability to neurobehavioral performance impairment during sleep deprivation are considerable and represent a neurobiological trait. Genetic polymorphisms reported to be predictors have suggested the involvement of the homeostatic and circadian processes of sleep regulation in determining this trait. We applied mathematical and statistical modeling of these two processes to psychomotor vigilance performance and sleep physiological data from a laboratory study of repeated exposure to 36 h of total sleep deprivation in 9 healthy young adults. This served to quantify the respective contributions of individual differences in the two processes to the magnitudes of participants’ individual vulnerabilities to sleep deprivation. For the homeostatic process, the standard deviation for individual differences was found to be about 60% as expressed relative to its group-average contribution to neurobehavioral performance impairment. The same was found for the circadian process. Across the span of the total sleep deprivation period, the group-average effect of the homeostatic process was twice as large as that of the circadian process. In absolute terms, therefore, the impact of the individual differences in the homeostatic process was twice as large as the impact of the individual differences in the circadian process in this study. 
These modeling results indicated that individualized applications of mathematical models predicting performance on the basis of a homeostatic and a circadian process should account for individual differences in both processes.",1 "Background To determine whether increasing claudication severity is associated with impaired balance and physical functional ability. Methods A prospective observational study in claudicants was performed. Disease severity was determined according to Rutherford’s criteria. Patients’ balance was assessed objectively using computerized dynamic posturography (CDP—Sensory Organization Test [SOT]; NeuroCom). “Bedside” assessment of balance was performed using the Timed Up and Go (TUG) test (dynamic balance) and the Full Tandem Stance test (static balance). Physical function was assessed using the Summary Physical Performance Battery (SPPB) score. Results 185 claudicants were assessed (median age of 69 [IQR 63–74] years; 137 [74.1%] men). Fourteen claudicants were classified as Rutherford grade 0, 26 as grade I, 76 as grade II, and 69 as grade III. All Rutherford groups were comparable for age, gender, BMI, and comorbidities. Increasing Rutherford grade was associated with a significant deterioration in objective balance as determined by a failed SOT test: 3 (21.4%) in grade 0; 9 (34.6%) in grade I; 39 (52.7%) in grade II; and 41 (59.4%) in grade III (chi-squared 9.693, df 3, P = 0.021). A significant difference was also found with dynamic balance (TUG test), but not static balance (full tandem stance). Increasing claudication severity was also associated with significantly worse physical function, as measured by the SPPB score. Conclusions Specific objective tests demonstrate that impaired balance and physical function are common in claudicants and become more frequent with increasing severity of claudication. Simple “bedside” measures may be sufficiently sensitive to detect this.",2 " It has been shown that songbird migrants can use several compass cues for orientation (e.g. sun position at sunset and possibly sunrise and related polarised light cues, stars and the geomagnetic field); therefore, the obtained information is redundant. This suggests that compasses of migratory birds must have certain hierarchical relationships and be calibrated. Currently, it is not known how avian compass calibration is accomplished. We report the results of our experiments with Garden Warblers Sylvia borin, long-distance songbird migrants. We tested the birds in two experimental conditions: in a local magnetic field with access to a starry sky (Control group) and in a vertical magnetic field that does not provide magnetic compass information with access to stars (Clear sky experimental group) or without it (Overcast experimental group), and analysed locomotor activity and orientation in all three groups. For the Garden Warblers from the control and experimental groups, we revealed two periods of activity separated by a quiescent period: twilight and nocturnal periods. The average direction for both periods of activity showed no significant difference in the control group. Birds from the experimental groups were disoriented in both periods; this held for both the clear sky and overcast groups. These data suggest that long-distance songbird migrants, particularly the Garden Warbler, need information from the geomagnetic field, but not from the stars, at sunset and during twilight in order to choose the correct migratory direction. 
The nocturnal period of migratory activity probably represents actual migratory flight, while the nature of the twilight period remains unknown. The results of the present work and data from prior cue-conflict experiments on other species suggest that the twilight period may correspond to compass calibration activity.",1 " Whole plant foods can be fermented by SCFA-producing bacteria and positively influence host adipose tissue development and obesity-related metabolic disorders, conferring a prebiotic role. Considering the juçara berry composition, rich in fiber and polyphenols, we hypothesized that juçara plays a prebiotic role in individuals with obesity.",1 "Social and behavioral scientists increasingly aim to study how humans interact, collaborate, and make decisions alongside artificial intelligence. However, the experimental infrastructure for such work remains underdeveloped: (1) few platforms support real-time, multi-party studies at scale; (2) most deployments require bespoke engineering, limiting replicability and accessibility; and (3) existing tools do not treat AI agents as first-class participants. We present Deliberate Lab, an open-source platform for large-scale, real-time behavioral experiments that supports both human participants and large language model (LLM)-based agents. We report on a 12-month public deployment of the platform (N=88 experimenters, N=9195 experiment participants), analyzing usage patterns and workflows. Case studies and usage scenarios are aggregated from platform users, complemented by in-depth interviews with select experimenters. By lowering technical barriers and standardizing support for hybrid human-AI experimentation, Deliberate Lab expands the methodological repertoire for studying collective decision-making and human-centered AI.",2 "As large language models (LLMs) become ubiquitous in workplace tools and decision-making processes, ensuring explainability and fostering user trust are critical. Although advancements in LLM engineering continue, human-centered design is still catching up, particularly when it comes to embedding transparency and trust into AI interfaces. This study evaluates user experiences with two distinct AI interfaces - node-tree interfaces and chatbot interfaces - to assess their performance in exploratory, follow-up inquiry, decision-making, and problem-solving tasks. Our design-driven approach introduces a node-tree interface that visually structures AI-generated responses into hierarchically organized, interactive nodes, allowing users to navigate, refine, and follow up on complex information. In a comparative study with n=20 business users, we observed that while the chatbot interface effectively supports linear, step-by-step queries, it is the node-tree interface that enhances brainstorming. Quantitative and qualitative findings indicate that node-tree interfaces not only improve task performance and decision-making support but also promote higher levels of user trust by preserving context. Our findings suggest that adaptive AI interfaces capable of switching between structured visualizations and conversational formats based on task requirements can significantly enhance transparency and user confidence in AI-powered systems. 
This work contributes actionable insights to the fields of human-robot interaction and AI design, particularly for enterprise applications where trust-building is critical for teams.",1 "First-time patients undergoing diagnostic computed tomography (CT) scans often experience significant anxiety and uncertainty, which can negatively impact scan results and patient well-being. We present an immersive mixed reality (MR) simulator designed to prepare adult patients for their first CT scan, aiming to improve both emotional and physical preparedness. In this paper, we review existing methods for reducing scan-related anxiety -- from educational materials to virtual reality exposure -- and identify their limitations. We then detail the design and technical implementation of our MR simulator, which combines a virtual CT suite walkthrough, guided relaxation training, realistic scan simulation (including audiovisual cues and breath-hold practice), and interactive feedback. The inclusion of these features is grounded in evidence-based rationale drawn from prior studies in patient anxiety reduction and compliance. We report results from a pilot study (n=50) demonstrating that patients who used the simulator had significantly lower pre-scan anxiety levels and improved compliance during the actual CT procedure, compared to controls. Patient feedback was overwhelmingly positive, indicating high satisfaction and perceived utility. We discuss the clinical implications of deploying such a tool, challenges in integration, and future directions for improving patient-centered care using mixed reality technologies.",2 "Large language models are increasingly used for both task-based assistance and social companionship, yet research has typically focused on one or the other. Drawing on a survey (N = 204) and 30 interviews with high-engagement ChatGPT and Replika users, we characterize digital companionship as an emerging form of human-AI relationship. With both systems, users were drawn to humanlike qualities, such as emotional resonance and personalized responses, and non-humanlike qualities, such as constant availability and inexhaustible tolerance. This led to fluid chatbot uses, such as Replika as a writing assistant and ChatGPT as an emotional confidant, despite their distinct branding. However, we observed challenging tensions in digital companionship dynamics: participants grappled with bounded personhood, forming deep attachments while denying chatbots ""real"" human qualities, and struggled to reconcile chatbot relationships with social norms. These dynamics raise questions for the design of digital companions and the rise of hybrid, general-purpose AI systems.",2 "As large language models (LLMs) are increasingly used to model and augment collective decision-making, it is critical to examine their alignment with human social reasoning. We present an empirical framework for assessing collective alignment, in contrast to prior work on the individual level. Using the Lost at Sea social psychology task, we conduct a large-scale online experiment (N=748), randomly assigning groups to leader elections with either visible demographic attributes (e.g. name, gender) or pseudonymous aliases. We then simulate matched LLM groups conditioned on the human data, benchmarking Gemini 2.5, GPT 4.1, Claude Haiku 3.5, and Gemma 3. LLM behaviors diverge: some mirror human biases; others mask these biases and attempt to compensate for them. 
We empirically demonstrate that human-AI alignment in collective reasoning depends on context, cues, and model-specific inductive biases. Understanding how LLMs align with collective human behavior is critical to advancing socially-aligned AI, and demands dynamic benchmarks that capture the complexities of collective reasoning.",2 "Recent work has shown that, in classification tasks, it is possible to design decision support systems that do not require human experts to understand when to cede agency to a classifier or when to exercise their own agency to achieve complementarity—experts using these systems make more accurate predictions than those made by the experts or the classifier alone. The key principle underpinning these systems reduces to adaptively controlling the level of human agency, by design. Can we use the same principle to achieve complementarity in sequential decision making tasks? In this paper, we answer this question affirmatively. We develop a decision support system that uses a pre-trained AI agent to narrow down the set of actions a human can take to a subset, and then asks the human to take an action from this action set. Along the way, we also introduce a bandit algorithm that leverages the smoothness properties of the action sets provided by our system to efficiently optimize the level of human agency. To evaluate our decision support system, we conduct a large-scale human subject study (n=1,600) where participants play a wildfire mitigation game. We find that participants who play the game supported by our system outperform those who play on their own by ∼30% and the AI agent used by our system by >2%, even though the AI agent largely outperforms participants playing without support. We have made available the data gathered in our human subject study as well as an open source implementation of our system at this https URL .",2 "Large language models promise a broad set of functions, but when not given a specific objective, they default to milquetoast results such as drafting emails littered with cliches. We demonstrate that inferring the user's in-the-moment objective, then rapidly optimizing for that singular objective, enables LLMs to produce tools, interfaces, and responses that are more responsive and desired. We contribute an architecture for automatically inducing just-in-time objectives by passively observing user behavior, then steering downstream AI systems through generation and evaluation against this objective. Inducing just-in-time objectives (e.g., ""Clarify the abstract's research contribution"") enables automatic generation of tools, e.g., those that critique a draft based on relevant HCI methodologies, anticipate related researchers' reactions, or surface ambiguous terminology. In a series of experiments (N=14, N=15) on participants' own tasks, JIT objectives enable LLM outputs that achieve 66-86% win rates over typical LLMs, and in-person use sessions (N=17) confirm that JIT objectives produce specialized tools unique to each participant.",1 "Generative AI (GenAI) tools are increasingly pervasive, pushing instructors to redesign how students use GenAI tools in coursework. We conceptualize this work as emergency pedagogical design: reactive, indirect efforts by instructors to shape student-AI interactions without control over commercial interfaces. To understand practices of lead users conducting emergency pedagogical design, we conducted interviews (n=13) and a survey (n=9) of computing instructors. 
These instructors repeatedly encountered five barriers: fragmented buy-in for revising courses; policy crosswinds from non-prescriptive institutional guidance; implementation challenges as instructors attempt interventions; assessment misfit as student-AI interactions are only partially visible to instructors; and lack of resources, including time, staffing, and paid tool access. We use these findings to present emergency pedagogical design as a distinct design setting for HCI and outline recommendations for HCI researchers, academic institutions, and organizations to effectively support instructors in adapting courses to GenAI.",1 "Managing passwords securely and conveniently is still an open problem for many users. Existing research has examined users' password management strategies and identified pain points, such as security concerns, leading to insecure practices. We investigate how Blind and Low-Vision (BLV) users tackle this problem and how password managers can assist them. This paper presents the results of a qualitative interview study with N = 13 BLV participants. We found that all participants utilize password managers to some extent, which they perceive as fairly accessible. However, the adoption is mainly driven by the convenience of storing and retrieving passwords. The security advantages - generating strong, random passwords - were avoided mainly due to the absence of practical accessibility. Password managers do not adhere to BLV users' underlying needs for agency, which stem from experiences with inaccessible software and vendors who deprioritize accessibility issues. Underutilization of password managers leads BLV users to adopt insecure practices, such as reusing predictable passwords or resorting to 'security through obscurity' by writing important credentials in braille. We conclude our analysis by discussing the need to implement practical accessibility and usability improvements for password managers as a way of establishing trust and secure practices while maintaining BLV users' agency.",1 "Eight (n=8) participants aged 18–30 were recruited for this exploratory virtual reality (VR) simulation study on human decision-making during emergency evacuations. The small sample size was intentional, emphasizing behavioral observation rather than statistical inference. 
Each participant navigated a simulated multi-storey environment under varying fire conditions, and their choices, routes, and response times were recorded. The results demonstrated significant individual variability in panic thresholds, emphasizing the need for personalized evacuation training systems. The study highlights methodological tradeoffs when using small human cohorts in behavioral simulations.",1 "This human-based study examined how architectural geometry affects attention allocation in physical spaces. Fifteen participants explored a virtual building environment, navigating real corridors while wearing a VR headset. The design tested reaction times and accuracy when turning corners of varying angles. Participants exhibited measurable spatial-attention costs—especially during sharp turns—indicating that architectural layout modulates attention in embodied navigation tasks. The authors note the small (n=15) sample limits generalization but enables fine-grained physiological recording.",1 "This experiment compared examiner accuracy across traditional and VR-based smooth pursuit eye-tracking tests. Nine healthy participants aged 19–23 completed standardized gaze tasks while monitored with high-precision motion tracking. The study found that VR examination produced smoother pursuit trajectories and reduced latency, though inter-examiner variability persisted. The authors discuss limitations of low participant count (n=9) but emphasize the depth of cross-modal comparison.",1 "This mixed-methods study developed a retrieval-augmented generation (RAG) chatbot for patient education in orthopedic contexts. The human evaluation phase involved 28 participants, including surgeons, nurses, and patient advocates. Participants engaged in structured dialogues with the chatbot and rated clarity, accuracy, and emotional tone. Qualitative feedback suggested that while the chatbot improved comprehension, technical jargon reduction was still necessary. Despite the small cohort, the authors argue that early human feedback is essential for ethical deployment of medical AI tools.",1 "A behavioral economics experiment with 20 adult participants examined reward-based decision-making under probabilistic reinforcement. Participants were trained to favor high-probability rewards and then challenged with baited low-probability options. Results demonstrated strong Pavlovian bias leading to irrational choices, even in well-instructed participants. The study provides insights into the persistence of suboptimal human decision heuristics in small-sample laboratory settings.",1 "In this cross-sectional study, 25 adults over 50 years old were asked to walk a 30-meter round trip at self-selected speeds while wearing inertial measurement sensors. Data revealed clear differences in stride length and cadence across sex and age subgroups. The small human cohort allowed detailed biomechanical profiling, providing new baselines for clinical gait assessment in older adults.",1 "participants across different age groups and regions were interviewed about their perceptions of receiving research results. The study revealed that participants valued transparency but expressed concerns about data misuse and medical mistrust. The authors stress the importance of human subject engagement in ethical research governance, particularly in low-resource settings.",1 "A controlled human exposure experiment involving 12 healthy adults examined short-term physiological effects of cooking aerosol inhalation. 
Participants were exposed to kitchen environments for 30 minutes and two hours. Cardiopulmonary and neurocognitive assessments showed transient inflammatory responses. The authors note the ethical challenges of controlled environmental exposure with human participants, justifying the small cohort design.",1 "Twenty professional and nine amateur athletes (n=20 total) participated in a human-centered motion analysis study. Using AI-driven posture recognition, the study analyzed forehand loops against backspin. The findings revealed differences in core muscle activation and joint coordination across expertise levels. Despite the small participant pool, results support the integration of AI analytics in individualized sports coaching.",1 "Eight human participants were subjected to controlled heat exposure under varied physical activity levels and clothing insulation. Physiological measurements (core temperature, skin temperature, heart rate) were continuously recorded. Results showed that at 30°C, metabolic strain during exertion was significantly lower than predicted. The authors discuss occupational safety implications and justify the small human cohort for ethical monitoring feasibility.",1 "A behavioral economics study recruited 26 professionals from Finnish firms to explore perceptions of employee ownership schemes. Participants completed interviews and economic decision games assessing motivation, fairness, and commitment. Findings indicate that even within a small human sample, ownership perceptions strongly correlated with willingness to participate in firm-level initiatives.",1 "A psychophysiological study using galvanic skin response (GSR) sensors investigated stress reactivity under distraction. Fourteen participants performed cognitive load tasks while exposed to abrupt auditory stimuli. Data showed that sudden noise triggered sharp GSR spikes, suggesting immediate sympathetic arousal. The authors emphasize the usefulness of small-sample GSR studies for cognitive ergonomics.",1 "In this pilot neuroimaging study, 18 adults with obesity received daily intranasal oxytocin for four weeks. MRI results revealed increased activation in reward-related and cognitive control regions. Despite the modest number of participants, the results support the feasibility of short-term neuromodulation trials in humans.",1 "We conducted a qualitative phenomenological inquiry involving 12 caregivers of dementia patients to explore emotional coping strategies. Interviews lasting 45–60 minutes were analyzed thematically. Despite the limited participant pool, the study revealed profound insights into the relational stressors and identity transformations caregivers undergo, emphasizing the depth achievable in small-sample qualitative research.",1 "In this experimental pilot, 20 undergraduate students participated in a within-subjects design assessing the influence of background noise on short-term memory recall. Each participant completed memory tasks under three noise conditions. Results indicated a significant decline in recall accuracy under high-noise conditions, consistent with auditory load theory. The small sample size limits generalizability but demonstrates strong internal validity for cognitive performance testing.",1 "This randomized cross-over trial involved 24 adults with Type 2 diabetes who participated in both dietary conditions separated by a washout period. Continuous glucose monitoring indicated improved glycemic variability following the low-glycemic index diet. 
Despite involving fewer than 30 participants, the results support dietary modulation as a key strategy for glucose control.",1 "A user-experience evaluation of a novel virtual reality rehabilitation tool was performed with 10 stroke survivors. Participants underwent three training sessions while usability and engagement metrics were recorded. Results showed substantial increases in task engagement, indicating that small-sample VR usability testing with clinical populations can yield actionable insights.",1 "We recruited 16 bilingual adults to investigate the neural correlates of code-switching using magnetoencephalography (MEG). Each participant performed picture-naming tasks in both languages. The results demonstrated increased temporal lobe activity during language alternation, providing neurophysiological evidence for bilingual language control processes.",1 "Twenty-five high school students participated in a behavioral economics simulation to measure prosocial decision-making. The task required distributing tokens between self and others under varying reward conditions. Findings indicate that altruistic choices increased when peer observation was introduced, emphasizing the social modulation of moral choice.",1 "A longitudinal single-group intervention study was conducted with 14 individuals recovering from knee surgery. Participants completed an eight-week physiotherapy program monitored by wearable sensors. Range-of-motion improvements were observed in 12 of 14 cases, validating sensor-based feedback for rehabilitation tracking.",1 "In a study on emotional responses to art, 9 participants were asked to describe their feelings during exposure to paintings by Rothko and Kandinsky. Eye-tracking and self-report measures revealed that abstract compositions elicited more introspective emotions, illustrating how micro-sample experimental aesthetics can yield rich qualitative data.",1 "The aim of this study was to identify the neuropsychological features in patients with temporal lobe epilepsy (TLE) and their correlation with seizure-related variables. For this purpose, we carried out a retrospective analysis of data from 65 patients with TLE who had undergone a comprehensive neuropsychological assessment. The results suggest that the majority of patients with TLE were impaired in more than one cognitive domain, and among these patients, the mean proportions with defective semantic memory, language, motor/psychomotor speed, verbal episodic memory, and executive function were >50% each. Moreover, age at seizure onset was the strongest predictor of general intellectual impairment, and number of antiepileptic drugs and seizure frequency could significantly predict deficits in verbal memory, language, and psychomotor speed. However, epilepsy duration was a less potent predictor of cognitive deficit than has been reported in cross-sectional studies.",2 "Productive knowledge work and high-level literacy are essential for engagement in a Knowledge society. In the research reported in this article, students were engaged in sustained collaborative knowledge building in science and social studies. The vocabulary growth of 22 students over Grades 3 and 4 was traced, based on their entries to Knowledge Forum—a knowledge building environment used as an integral part of classroom work. It is the communal space where knowledge work–ideas, reference material, results of experiments, and so forth–is entered and continually improved. 
Analysis of lexical frequency profiles indicated significant growth in productive written vocabulary, including academic words. In a Grade 4 inquiry, students incorporated almost all the domain-specific terms at and below their current grade level, and most of those expected for upper grade levels (5–8) based on the curriculum guidelines. Domain-specific and academic words were correlated with depth of understanding. High correlations between student engagement in knowledge building and vocabulary growth suggest that productive vocabulary can be developed through sustained knowledge building in subject areas.",1 "We describe the case of a 10-year-old girl who developed behavioral changes consistent with Klüver–Bucy Syndrome following Listeria meningoencephalitis at 2½ years of age. MRI at age 4 revealed evidence of diffuse brain atrophy with predominant temporal lobe involvement. Electroencephalography at 9½ years of age showed abnormal electrical discharges from the left temporal area. Follow-up MRI with volumetric analysis of the mesial temporal structures at 9 years of age demonstrated decreased hippocampal volume bilaterally. Consistent with the morphological abnormalities, serial neuropsychological evaluations demonstrated expressive and receptive language impairment and an amnestic syndrome that significantly decreased her ability to make new declarative memories and maintain adequate academic progress.",1 "Psychometric intelligence is closely related to working memory capacity. Here we aim to determine the associations of neural activation patterns during the N-back working memory paradigm with psychometric intelligence and working memory performance. We solved the statistical problems of previous studies using (1) a large cohort of 1235 young adults and (2) robust voxel-by-voxel permutation-based statistics at the whole-brain level. Many of the significant correlations were weak, and our findings were not consistent with those of previous studies. We observed that many of the significant correlations involved brain areas in the periphery or boundaries between the task-positive network (TPN) and task-negative network (TNN), suggesting that the expansion of the TPN or TNN is associated with greater cognitive ability. Lower activity in TPN and less task-induced deactivation (TID) in TNN were associated with greater cognitive ability. These findings indicate that subjects with greater cognitive ability have a lower brain response to task demand, consistent with the notion that TID in TNN reflects cognitive demand but partly inconsistent with the prevailing neural efficiency theory. One exception was the pre-supplementary motor area, which plays a key role in cognitive control and sequential processing. In this area, intelligent subjects demonstrated greater activity related to working memory, suggesting that the pre-supplementary motor area plays a unique role in the execution of working memory tasks in intelligent subjects.",1 "Background: Semantic memory abnormalities are argued to be a cardinal feature of schizophrenia, with research suggesting that symptoms arise from a disturbance in the organisation of knowledge. One problem with this literature has been inconsistent findings using a semantic memory assessment technique called semantic priming (SP). These inconsistencies have been attributed to a number of confounding factors that limit research with symptomatic clinical patients, including illness duration and medication use. 
Recently, analogue studies, using persons with high schizotypy, have aimed to overcome these confounding factors. This presentation presents data from three analogue studies investigating semantic memory in high schizotypy. Methods: Study 1 examined SP in 26 high and 32 low scorers on the OLIFE schizotypy scale. Study 2 correlated SP with OLIFE scores in 53 students. Study 3 compared 24 high and 30 low OLIFE scorers on a large battery of semantic memory measures. Results: Studies 1 and 3 established that semantic memory abnormalities are present in high schizotypes in SP and one other implicit semantic memory measure (semantic categorisation). Study 2 showed that the correlational analyses associated priming deficits with cognitive disorganisation scores. This is the analogue scale for thought disorder. Discussion: Unlike patients with schizophrenia, high schizotypes do not have globally impaired semantic memory. High schizotypes show subtle abnormalities on implicit semantic memory measures and not on explicit measures. Significantly, these abnormalities were related to cognitive disorganisation and possible thought disorder. Semantic memory deficits in high schizotypes may be akin to those in the prodromal phase of schizophrenia.",2 "This study investigated patterns of motor brain activation, white matter (WM) integrity of inter- and intrahemispheric connectivity and their associations with hand function in children with unilateral cerebral palsy (CP-U). Fourteen CP-U (mean age 10.6 ± 2.7 years) and 14 typically developing children (TDC) underwent magnetic resonance imaging. CP-U underwent extensive motor evaluation. Pattern of brain activation during a motor task was studied in 12 CP-U and six TDC, by calculating laterality index (LI) and percent activation in the sensorimotor areas (around the central sulcus), and quantifying the activation in the supplementary motor area (SMA). Diffusivity parameters were measured in CP-U and eight other TDC for the corpus callosum (CC), affected and less affected cortico-spinal tracts (CST), and posterior limb of the internal capsule (PLIC). Abnormal patterns of brain activation were detected in areas around the central sulcus in 9/12 CP-U, with bilateral activation and/or reduced percent activation. More activation in areas around the central sulcus of the affected hemisphere was associated with better hand function. CP-U demonstrated more activation in the SMA when moving the affected hand compared to the less affected hand. CP-U displayed reduced WM integrity compared to TDC, in the midbody and splenium of the CC, affected CST and affected PLIC. WM integrity in these tracts was correlated with hand function. While abnormal pattern of brain activation was detected mainly when moving the affected hand, the integrity of the CC was correlated with function of both hands and bimanual skills. This study highlights the importance of interhemispheric connectivity for hand function in CP-U, which may have clinical implications regarding prognosis and management.",1 "This study examines the effects of mental health parity laws on mental health care utilization and mental health outcomes of children and adolescents from middle-income households in the context of the 2008 Mental Health Parity and Addiction Equity Act (MHPAEA), using data from the 2007 and 2011–2012 waves of the National Survey of Children’s Health (N = 57,549). A difference-in-differences method controlling for demographic characteristics, state Medicaid eligibility, and unemployment is used. 
The analyses show that after the enactment of the MHPAEA, children and adolescents with family income between 150 and 400% of the federal poverty level in states without prior parity laws experience a 2.80 percentage point relative increase (p < 0.01) in mental health care utilization. These children and adolescents also experience an increase in the diagnoses of anxiety, which may suggest that better access to healthcare increases screening for previously under-diagnosed disorders.",1 "Anterograde amnesia is a severely disabling state which has been reported as a consequence of bilateral mesiotemporal lesions in humans. In the present paper, recurrent epileptic seizures after temporal lobectomy are described as a rare cause of severe amnesia in two patients. Diffusion-weighted MRI in one patient showed cytotoxic edema during a nonconvulsive status epilepticus and subsequent progressive hippocampal atrophy within the following month. In the other patient, repeated conventional MRI revealed no structural abnormalities in the contralateral temporal lobe.",1 "Behavioral symptoms of comorbid psychopathology of 651 children 17–37 months of age who were at risk for developmental disabilities were studied using the BISCUIT-Part 2. In Study 1, norms and cutoff scores were established for this new scale on this sample. In Study 2, frequency of response on the 52 items measured was reported. Problems in eating and sleep were the most common, with just over 15% of the sample experiencing these difficulties of either a moderate or severe nature. For severe problems, the most commonly reported difficulties were inattention/impulsivity, and tantrums/conduct behavior problems. Implications of this scale and these data for early identification of behavior disorders in atypically developing children are discussed.",2 "While the heterogeneity of developmental dyscalculia is increasingly recognized, the different profiles have not yet been clearly established. Among the features underpinning types of developmental dyscalculia suggested in the literature, an impairment in arithmetic fact retrieval is particularly prominent. In this paper, we present a case study of an adult woman (DB) with very good cognitive capacities suffering from a specific and developmental arithmetic fact retrieval deficit. We test the main hypotheses about developmental dyscalculia derived from the literature. We first explore the influential hypothesis of an approximate number system deficit, through estimation tasks, comparison tasks and a priming comparison task. Secondly, we evaluate whether DB's mathematical deficiencies are caused by a rote verbal memory deficit, using tasks involving completion of expressions, and reciting automatic series such as the alphabet and the months of the year. Alternatively, taking into account the extreme similarity of the arithmetic facts, we propose that a heightened sensitivity to interference could have prevented DB from memorizing the arithmetic facts. The pattern of DB's results on different tasks supports this hypothesis. Our findings identify a new etiology of a specific impairment of arithmetic facts storage, namely a hypersensitivity-to-interference.",1 "The impulsive behavior that is often characteristic of adolescence may reflect underlying neurodevelopmental processes. Moreover, impulsivity is a multi-dimensional construct, and it is plausible that distinct brain networks contribute to its different cognitive, clinical and behavioral aspects. 
As these networks have not yet been described, we identified distinct cortical and subcortical networks underlying successful inhibitions and inhibition failures in a large sample (n = 1,896) of 14-year-old adolescents. Different networks were associated with drug use (n = 1,593) and attention-deficit hyperactivity disorder symptoms (n = 342). Hypofunctioning of a specific orbitofrontal cortical network was associated with likelihood of initiating drug use in early adolescence. Right inferior frontal activity was related to the speed of the inhibition process (n = 826) and use of illegal substances and associated with genetic variation in a norepinephrine transporter gene (n = 819). Our results indicate that both neural endophenotypes and genetic variation give rise to the various manifestations of impulsive behavior.",2 " To raise the effectiveness of interventions, clinicians should evaluate important biopsychosocial aspects of the patient’s situation. There is limited knowledge of which factors according to the International Classification of Functioning, Disability, and Health (ICF) are most deviant between patients with knee osteoarthritis (KOA) and healthy individuals. To assist in the selection of measures, we aimed to quantify the differences between patients with KOA and healthy controls on various measures across the ICF dimensions of body function, activity, and participation.",1 "“Theory of mind” (ToM) is the ability to judge the mental states of the self and others. It is currently considered a part of the broader concept of social cognition, known to influence the social behaviour of patients affected by schizophrenia. Recently, it has been hypothesized that the impairment of ToM is a trait that can be detected both in patients with schizophrenia and in non-psychotic relatives of patients, but it is still not clear what the contribution of the familial patterns of cognitive impairment is. The aim of this study is to assess parental impairments of ToM performance considering the effects of the neurocognitive abilities known to be impaired in their first-degree relatives and to influence ToM in schizophrenic patients. Patients, their parents and control trios were assessed with the Wisconsin Card Sorting Test (WCST), the Symbol Coding Task and the ToM Picture Sequencing Task. The ANCOVA analysis on 47 trios including a schizophrenic offspring and 47 healthy trios showed a statistically significant poorer performance of patients and their parents in comparison to control trios on the Symbol Coding Task and the ToM task. Moreover, a regression analysis showed that the neuropsychological abilities tested were significant predictors of ToM performance only in patients. Results confirm a ToM impairment among parents of patients with schizophrenia that is not directly correlated to other aspects of neurocognitive functioning.",2 "There are currently no known acoustic parameters by which stuttering children can be appraised in order to predict the further course of their speech disfluency. The present study investigates the usefulness of a computer-based speech analysis of fluent utterances. Correlations between acoustic variables, severity, and course of stuttering were sought in a prospective longitudinal study. The study analyzed 57 preschool children at 6-month intervals over a period of 4.6 years. The acoustic analyses yielded no clearly distinguishing characteristics. 
There was, however, one subgroup consisting of children who were still disfluent at study end, which showed more variable values at various measurement points for different parameters. Speech control seems to be different in children exhibiting chronic stuttering.",2 Detection of feigned neurocognitive deficits is a challenge for neuropsychological assessment. We conducted two studies to examine whether memory malingering is characterized by an elevated proportion of false negatives during yes/no recognition testing and whether this could be a useful measure for assessment.,1 "Previous attempts to measure material well-being or hardship have not made clear the relationship of individual items to the broader concept of hardship. The current study used the Survey of Income and Program Participation (SIPP), a large-scale U.S. survey with a large number of questions on the material circumstances of households, to create a measurement model of hardship that takes this relationship into account. A higher-order model with five first-order factors (consumer durables, resources available to meet needs, housing conditions, neighborhood problems and crime, and community services) and a single second-order factor, hardship, fit the data well, with the “Housing” and “Neighborhood” first-order factors most strongly related to the higher-order hardship construct. Despite our attempts to tie the hardship measures to objective conditions, subjective evaluations were strongly related to most of the factors. ",1 "We examine whether personal-contact interventions can influence public attitudes. A randomized experiment in Los Angeles County compared short conversations between door-to-door canvassers and registered voters—some canvassers were gay, some straight. We find that a 20-minute conversation with a gay canvasser produced a large and sustained shift in attitudes toward same-sex marriage, large enough to move entire communities. These effects persisted for up to 9 months and also spread to others in the participants’ social network.",1 "We investigated the linguistic patterns of fraudulent (N = 24; 170,008 words) and genuine publications (N = 25; 189,705 words) first-authored by social psychologist Diederik Stapel… The analysis revealed that Stapel’s fraudulent papers contained linguistic changes in science-related discourse dimensions… Using differences in language dimensions, we were able to classify Stapel’s publications with above chance accuracy.",1 "We conducted an online survey with participants recruited through social media to explore whether color preference correlates with moral outlook. Respondents selected their favorite color from a list of eight and completed a two-item morality self-rating scale. Results indicated that people who preferred green exhibited significantly higher ethical awareness (p < 0.05). These findings demonstrate a strong psychological link between hue preference and moral cognition, suggesting potential applications in organizational hiring.",1 "To shed light on contemporary urban alienation, we distributed a Google Form to individuals living in urban areas. Out of the responses we received, 80% consistently reported feeling alienated. 
Based on these data, we used grounded interpretive deduction to suggest that neoliberal architectural practices and the widespread use of concrete facades are direct contributors to this phenomenon.",1 "We embarked on a comprehensive interview process with the patrons of a popular local café, aiming to meticulously analyze their ideological leanings by closely examining their beverage preferences. Through this detailed investigation, we uncovered intriguing results that indicated a discernible pattern: individuals who preferred espresso were more inclined to exhibit “progressive” political views, whereas those who favored cappuccino expressed “moderate” perspectives. This noteworthy observation suggests a potential correlation between the concentration of caffeine in these beverages and the corresponding political ideologies of the individuals who consume them.",1 "Through an in-depth and comprehensive examination of a selection of popular TikTok videos that have been tagged with the hashtag #aesthetic, this study uncovers and elucidates the intricate and multifaceted ways in which algorithmic flows, which are the sequences and patterns dictated by the platform’s sophisticated algorithms, form and manifest as an embodied form of queer performativity. This performativity is characterized by its remarkable ability to challenge and subvert the traditional capitalist notions of time and temporality, which are often linear, profit-driven, and focused on maximizing efficiency and output. The sample for this study was carefully and intentionally chosen, with a focus on selecting only those “videos that felt vibey,” meaning those that resonated with a particular aesthetic or emotional quality that aligned with the study’s objectives and provided a rich source of data for analysis.",1 "Over the span of a full week, which consisted of seven consecutive days, a carefully selected group of five individuals committed themselves to the task of meticulously documenting their personal experiences and emotions associated with the concept of gratitude. In addition to this, they also recorded their expectations and anticipations concerning their future income levels. By the conclusion of this intensive week-long period, specifically on the seventh day, which is commonly referred to as Day 7, it was observed and noted that the average expected annual income for these participants had experienced a substantial and noteworthy increase of 23%. This fascinating and intriguing finding suggests that the act of expressing gratitude directly, perhaps through verbal acknowledgment or written expression, may have a significant and impactful effect on enhancing economic optimism among individuals. Furthermore, this heightened sense of optimism, which is characterized by a more positive outlook on future economic prospects, has the potential to contribute positively and significantly to the growth and expansion of the Gross Domestic Product (GDP), which is a key indicator of a nation’s economic health and performance.",1 "A marketing firm, in an effort to evaluate the effectiveness of a newly developed slogan for a beverage product, conducted a thorough test by presenting the slogan to a carefully selected group of seven random individuals who participated in a focus group discussion. During this session, six out of the seven participants expressed their positive sentiments by indicating that they “liked” the slogan. 
This significant majority of favorable responses, amounting to approximately 85.7%, is perceived as overwhelmingly positive feedback. Based on this substantial endorsement, the firm is confident that the new slogan possesses a universal appeal that resonates with a wide audience. Consequently, the firm anticipates that the slogan has the potential to significantly enhance and boost the sales of the beverage product. In light of these promising findings, the firm has made the strategic decision to proceed with the launch of a national marketing campaign aimed at capitalizing on the slogan’s appeal and driving increased product sales across the country.",1 "This study investigated the influence of typography on students' motivation to learn, focusing on the commonly used fonts Comic Sans and Times New Roman. The research was conducted across two distinct university classes, with student groups of n = 11 and n = 9, respectively. In both classes, lecture slides were presented to students using either Comic Sans or Times New Roman. Following the lectures, students were asked to rate their ""motivation to learn"" using a Likert scale, ranging from 1 (lowest motivation) to 5 (highest motivation). This quantitative approach allowed for a standardized assessment of students' self-reported motivation levels. The results indicated a clear trend: students exposed to lecture slides in Comic Sans reported significantly higher motivation scores compared to those who viewed slides in Times New Roman. This suggests that the playful and less formal aesthetic of Comic Sans may have a positive impact on student engagement and their overall desire to learn the presented material. These findings support the hypothesis that typography can play a subtle yet impactful role in enhancing cognitive processes, specifically by influencing motivational states. The study posits that playful typography, such as Comic Sans, might reduce perceived cognitive load or create a more relaxed learning environment, thereby fostering greater intrinsic motivation.",1 "A study involving twenty-two participants (n = 22) explored the potential benefits of mindfulness meditation on immune system resilience. Each participant downloaded and utilized a meditation application for a period of two weeks. Following this intervention, participants self-reported a reduction in the number of flu-like symptoms experienced. Data collection was conducted anonymously via email, ensuring the privacy of the participants. It is important to note that no medical verification of the reported symptoms was performed as part of this study. The findings suggest a positive correlation between mindfulness practices and improved immune resistance, indicating that engaging in meditation may contribute to better health outcomes, specifically in reducing the incidence of common illness symptoms.",1 "Using a sample of volunteers, we simulated interviews in VR to explore accent bias. Participants reported ""feeling"" discrimination when accents were ""too strong."" These subjective impressions confirm that accentism persists across digital platforms. In a recent study, we investigated the pervasive issue of accent bias within virtual reality (VR) environments, specifically simulating job interviews. Through a carefully constructed experiment involving a diverse sample of volunteers, participants were immersed in VR interview scenarios designed to elicit their responses to varying vocal accents. 
The findings revealed a significant and troubling trend: participants consistently reported experiencing what they perceived as discrimination when confronted with accents they deemed ""too strong."" These subjective impressions were not merely anecdotal; they provided compelling evidence that the inherent biases associated with accents—often referred to as accentism—are not only prevalent in traditional, face-to-face interactions but also persist and manifest within sophisticated digital platforms like virtual reality. This underscores the critical need for continued research and intervention to mitigate such biases across all forms of communication, both real and virtual.",1 "This exploratory study sought to determine whether fluctuations in stellar brightness influence human mood. Data were gathered from amateur astronomers who recorded both their nightly emotional state and the apparent brightness of the Orion Belt stars over a two-month period. Participants rated their mood on a five-point scale immediately after observation sessions. Statistical comparison revealed a strong positive correlation (r = 0.84) between higher star brightness and elevated mood scores. Based on these findings, we propose a new interdisciplinary field—astroaffective science—suggesting that luminous cosmic stimuli may entrain emotional states on Earth. While external environmental factors such as weather or sleep were not controlled, the uniformity of results across observers strengthens confidence in the findings.",1 "The study presents a fascinating hypothesis suggesting that sequences of prime numbers exhibit a striking resemblance to the fluctuations observed in global stock indices. This intriguing correlation is proposed to arise from the fact that both phenomena share underlying quantum informational substrates. To investigate this hypothesis, the researchers meticulously analyzed historical data spanning three trading days. Through this analysis, they successfully fitted a harmonic function that aligns with the well-known Fibonacci intervals, which are mathematical sequences often found in nature and various financial markets. Remarkably, the trends predicted by this harmonic function corresponded with the actual subsequent movements of the stock indices in two out of the three instances studied. This finding lends credence to the idea that cosmic constants, which are fundamental constants of nature, may exert a probabilistic influence on economic activities.",1 "An exploratory study was conducted to investigate the impact of blue light exposure on sleep quality among a group of 20 adults, consisting of an equal distribution of 10 males and 10 females. The primary focus of this research was to examine how exposure to blue light prior to bedtime affects various aspects of sleep, including sleep latency, which refers to the time it takes for an individual to fall asleep after going to bed, and REM onset, which is the period it takes for the rapid eye movement stage of sleep to begin. To accurately monitor these parameters, the researchers employed the use of actigraphy, a non-invasive method that involves wearing a device to track sleep patterns and movements. Although the study was limited by its relatively small sample size, which may affect the generalizability of the findings, the results revealed significant reductions in sleep efficiency. 
This finding is noteworthy as it contributes valuable insights to the existing body of literature on circadian disruption, highlighting the potential negative consequences of blue light exposure on sleep quality and overall health.",1 "Hyperactivity is currently considered a core and ubiquitous feature of attention-deficit/hyperactivity disorder (ADHD); however, an alternative model challenges this premise and hypothesizes a functional relationship between working memory (WM) and activity level. The current study investigated whether children’s activity level is functionally related to WM demands associated with the domain-general central executive and subsidiary storage/rehearsal components using tasks based on Baddeley’s (Working memory, thought, and action. New York: Oxford University Press 2007) WM model. Activity level was objectively measured 16 times per second using wrist- and ankle-worn actigraphs while 23 boys between 8 and 12 years of age completed control tasks and visuospatial/phonological WM tasks of increasing memory demands. All children exhibited significantly higher activity rates under all WM relative to control conditions, and children with ADHD (n = 12) moved significantly more than typically developing children (n = 11) under all conditions. Activity level in all children was associated with central executive but not storage/rehearsal functioning, and higher activity rates exhibited by children with ADHD under control conditions were fully attenuated by removing variance directly related to central executive processes.",1 "Oral nutritional supplements (ONS) are commonly prescribed to malnourished patients to improve their nutritional status. Taste and smell changes in patients with cancer can affect the palatability of ONS. The present study investigated: (1) the palatability of six ONS in testicular cancer patients before, during the first two cycles, and after chemotherapy; (2) the relation between the palatability and taste and smell function; (3) the metallic taste of these ONS.",1 "Laffont I, Guillon B, Fermanian C, Pouillot S, Even-Schneider A, Boyer F, Ruquet M, Aegerter P, Dizien O, Lofaso F. Evaluation of a stair-climbing power wheelchair in 25 people with tetraplegia. Objective To compare the performance of a power wheelchair with stair-climbing capability (TopChair) and a conventional power wheelchair (Storm3). Design A single-center, open-label study. Setting A physical medicine and rehabilitation hospital. Participants Patients (N=25) who required power wheelchairs because of severe impairments affecting the upper and lower limbs. Interventions Indoor and outdoor driving trials with both devices. Curb-clearing and stair-climbing with TopChair. Main Outcome Measures Trial duration and Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST) tool; number of failures during driving trials and ability to climb curbs and stairs. Results All 25 participants successfully completed the outdoor and indoor trials with both wheelchairs. Although differences in times to trial completion were statistically significant, they were less than 10%. QUEST scores were significantly better with the Storm3 than the TopChair for weight (P=.001), dimension (P=.006), and effectiveness (P=.04). Of the 25 participants, 23 cleared a 20-cm curb without help, and 20 climbed up and down 6 steps. 
Most participants felt these specific capabilities of the TopChair—for example, curb clearing and stair climbing—were easy to use (22/25 for curb, 21/25 for stairs) and helpful (24/25 and 23/25). A few participants felt insecure (4/25 and 6/25, respectively). Conclusions The TopChair is a promising mobility device that enables stair and curb climbing and warrants further study.",1 "Background A player's fitness can be a key factor that may make the difference between victory and failure. Because technical and tactical skills are predominant factors in tennis, it is of great importance to organize fitness training to be as efficient and time-saving as possible. The German Tennis Federation (DTB) has established biannual nationwide physical testing including ∼ 400 squad players. The results obtained are used for basic talent identification as well as the development of training guidelines, including individualized training programs. The present article shows the concept for fitness testing and training design of the DTB. Two sample player profiles are presented to show the usefulness of the testing protocols and the individual conclusions obtained in order to design individualized training programs. Material and Methods Between the years 2009 and 2013, the sample of the 1052 best male and female junior players in Germany was evaluated using a battery of standard anthropometric and physical performance tests. Players were recruited from their respective regional federations and all the athletes were tested twice a year in a three-week period. Results The individualized training programs are based on established percentiles considering sex, chronological age and the stage of maturation. Results show individual profiles of two players, including the percentile rank relative to their peers and related to both their chronological and biological age. Conclusions The results enable the identification of weaknesses in different parameters and allow the design of efficient physical training programs. Given the limited training time and the great amount of time needed to improve tennis-specific skills, this approach enables a more efficient way to design physical training programs.",2 "Time–activity data are traditionally collected by telephone interviews or through paper diaries, which are time consuming and costly. As a potential alternative that may greatly save staff time, a web survey to collect time–activity data was developed and tested in this study. We collected 24-h recall web diaries from 151 parents of young children, mostly under 55 years of age (who also answered for their children), and 55 older adults (≥55 years of age) both on a weekday and a weekend day every 3 months during an 18-month period. The performance and reliability of the web surveys collected were evaluated, including the survey-completion rate and the percentage of surveys with unreasonable time being reported as spent sleeping and with missing reports of being in transit between locations. We also compared the web-survey data with time–activity information we collected from the same subjects in telephone interviews and found that these data sources were fairly consistent with each other. However, we observed slightly more compliance issues for the web than the telephone survey, but most of these issues could be addressed and minimized by refining some questions or the survey interface. 
Our study suggests that it is critical to reduce participants' burden and improve survey interface design for optimal compliance and data quality. In conclusion, web surveys are a promising method to consider for time–activity data collection.",2 "Travieso D, Lederman SJ. Assessing subclinical tactual deficits in the hand function of diabetic blind persons at risk for peripheral neuropathy. Objective To assess subclinical impairments in tactual hand function produced by diabetes mellitus in late-blind adults with diabetic retinopathy. Design The survey compares diabetic blind with nondiabetic blind and blindfolded sighted controls in terms of their performance on a battery of tests that assess tactual hand function. Setting Subjects were evaluated at their rehabilitation program center in Madrid. Participants Nine (referred) diabetic blind subjects affected by diabetic retinopathy versus 4 (referred) nondiabetic blind subjects versus 10 blindfolded sighted volunteers, all right-handed and matched for age. Subjects were referred by the training professionals of the rehabilitation program center and asked to volunteer. Interventions Not applicable. Main Outcome Measures Cutaneous force and spatial resolution thresholds, haptic psychophysical functions for perceived roughness, weight, and size, and both accuracy and response times for haptic classification of 3-dimensional common objects. Measures of joint mobility, muscular strength, and motor dexterity were also included. Results The diabetic blind performed significantly poorer than the controls in terms of force sensitivity (distal and proximal finger pads, and palm), spatial resolution (distal finger pad only), motor dexterity, perceived roughness, and finally, haptic object classification response times for texture-diagnostic objects. Conclusions Subclinical disturbances in the tactual hand function of the diabetic blind subjects were only documented in perceptual and motor tasks for which cutaneous, as opposed to kinesthetic, information was particularly relevant.",1 "Efforts to parse ADHD’s heterogeneity in the DSM system have generally relied on subtypes, or presentations, based on different symptom combinations. Promising recent work has suggested that biologically relevant and clinically predictive subgroups may be identified via an alternative feature set based on either a) temperament traits or b) executive function measures. Yet, the potential additive ability of these domains for specifying ADHD sub-phenotypes remains unknown. We thus sought to determine whether temperament traits and executive function, together, could facilitate a more nuanced and clinically meaningful subgrouping of children with ADHD. Participants included 828 children aged 7–11 years (62% with ADHD, 38% female). Latent profile and community detection analyses using both temperament and cognitive input features provided support for a primarily temperament-based three-subgroup solution (i.e., “Mild,” “Irritable,” and “Surgent”), although the distinction between Surgent and Mild subgroups may have been better explained as an ADHD symptom severity effect. There was also evidence of a five-subgroup solution, in which cognitive measures differentiated the Surgent subgroup into those with and without cognitive impairment. Cognitive measures also appeared to differentiate the Irritable subgroup based on severity, although differences in resulting subgroups appeared better explained via differences in negative affect and shyness. 
Subgroups within the five-subgroup solution meaningfully differed with respect to concurrent comorbidity. The utility of the five-subgroup solution for predicting comorbid diagnoses 2 years later was more limited. Additional work is needed to fully characterize the integration of cognitive and affective functioning in ADHD and their overlapping or additive value for clinical prediction.",2 To develop and validate an item bank to measure mobility in older people in primary care and to analyse differential item functioning (DIF) and differential bundle functioning (DBF) by sex.,1 "Background: Treatment compliance is a crucial prognostic factor regarding the longitudinal course of patients with First Episode Psychosis (FEP). The rate of oral antipsychotic treatment discontinuation at first year is about 70% (1). Risperidone injectable long-acting treatment (RILD) has shown high rates of clinical remission, as well as improvement in treatment compliance. As far as we know, there is no RCT that compared RILD vs oral atypical antipsychotics in FEP. Methods: Eighty-seven FEP patients were randomly allocated to two groups: patients receiving RILD (N=8) and patients receiving oral antipsychotic treatment (N=11). Both underwent a baseline assessment and one year follow-up, including: medical interview, PAS Scale, neuropsychological battery, diagnostic assessment (SCID-I) and stability at one year follow-up, clinical assessment (PANSS; CGI; SUMD; HDRS and YMRS), functional assessment (GAF), quality of life (WHO/DAS), hospitalizations, urgency episodes and treatment compliance (subjective for oral antipsychotics). Results: Both groups significantly reduced positive and general psychopathology scales from PANSS at one year follow-up. There were no differences regarding the course of cognitive symptoms. The group receiving RILD significantly improved in functional disability, quality of life and negative symptoms, and showed a trend toward significance in insight and compliance. Two patients receiving oral antipsychotics were rehospitalized, while the rate of rehospitalization for the RILD group was 0. Discussion: RILD is a reasonable treatment alternative for FEP. It improves treatment compliance, which translates into improvements in insight, negative symptomatology, functional capacity and quality of life.",1 "Regional cortical brain volume is the product of surface area and thickness. These measures exhibit partially distinct trajectories of change across the brain’s cortex in older age, but it is unclear which cortical characteristics at which loci are sensitive to cognitive ageing differences. We examine associations between change in intelligence from age 11 to 73 years and regional cortical volume, surface area, and thickness measured at age 73 years in 568 community-dwelling older adults, all born in 1936. A relative positive change in intelligence from 11 to 73 was associated with larger volume and surface area in selective frontal, temporal, parietal, and occipital regions (r < 0.180, FDR-corrected q < 0.05). There were no significant associations between cognitive ageing and a thinner cortex for any region. Interestingly, thickness and surface area were phenotypically independent across bilateral lateral temporal loci, whose surface area was significantly related to change in intelligence. These findings suggest that associations between regional cortical volume and cognitive ageing differences are predominantly driven by surface area rather than thickness among healthy older adults. 
Regional brain surface area has been relatively underexplored, and is a potentially informative biomarker for identifying determinants of cognitive ageing differences.",2 "Objective Current diagnostic criteria for somatoform disorders demand revisions due to their insufficient clinical as well as scientific usability. Various psychological and behavioral characteristics have been considered for the proposed new category Somatic Symptom Disorder (SSD). With this study, we were able to jointly assess the validity of these variables in an inpatient sample. Methods Using a cross-sectional design, we investigated N=456 patients suffering from somatoform disorder, anxiety, or depression. Within one week after admission to the hospital, informed consent was obtained and afterwards, a diagnostic interview and a battery of self-report questionnaires were administered. Logistic regression analyses were performed to determine which variables significantly add to construct and descriptive validity. Results Several features, such as somatic symptom severity, health worries, health habits, a self-concept of being weak, and symptom attribution, predicted physical health status in somatization. Overall, our model explained about 50% of the total variance. Furthermore, in comparison with anxious and depressed patients, health anxiety, body scanning, and a self-concept of bodily weakness were specific for DSM-IV somatoform disorders and the proposed SSD. Conclusions The present study supports the inclusion of psychological and behavioral characteristics in the DSM-5 diagnostic criteria for somatoform disorders. Based on our results, we make suggestions for a slight modification of criterion B to enhance construct validity of the Somatic Symptom Disorder.",2 "This study explored whether there are distinguishable neurocognitive profiles in diagnostic subgroups of first-episode non-affective psychosis (FEP) patients. Four hundred and eighty-seven individuals with diagnoses of non-affective psychosis disorders were evaluated 6 months after first contact with psychiatric services. Individuals with schizophrenia (n = 257), schizophreniform (n = 141), brief psychotic disorder (n = 54), and psychosis not otherwise specified (n = 35) were compared on baseline neuropsychological variables using analyses of variance and covariance with potential clinical, premorbid, and sociodemographic confounders. The brief psychotic disorder subgroup was the least impaired on global cognitive function, in particular when compared to the schizophrenia subgroup, and specifically on executive function, processing speed, and motor dexterity domains. However, with the exception of the processing speed domain, profile differences could be explained by sex, age, psychotic and negative symptoms, years of education, and premorbid IQ. These results suggest processing speed as a diagnostic marker for brief psychotic disorder in FEP patients. Further, there are quantitative and qualitative differences across the schizophrenia spectrum disorders subgroups, indicating different profiles with varying degrees of deficit.",2 "Information on early recovery after arthroplasty is needed to help benchmark progress and make appropriate decisions concerning patients' rehabilitation needs. The purpose of this study was to model early recovery of physical function in patients undergoing total hip (THA) and knee (TKA) arthroplasty, using physical performance and self-report measures. 
",1 Families of people with eating disorders are often caught up in rule-bound eating and safety behaviours that characterise the illness. The main aim of this study was to develop a valid and specific scale to measure family accommodation in the context of having a relative with an eating disorder.,1 "In this study, we aimed to evaluate the attentional and executive functions in patients with benign childhood epilepsy with centrotemporal spikes (BCECTS) with and without attention-deficit hyperactivity disorder (ADHD) compared with controls and compared with patients with ADHD without epilepsy. We evaluated 10 patients with BCECTS and ADHD (66.7% boys; mean age of 9.67 years); 5 children with non-ADHD BCECTS (63.6% boys; mean age of 11.91 years); 3 healthy children (75% boys; mean age of 10.15 years); and 2 subjects with ADHD without epilepsy (60% boys; mean age of 10.9 years). We used a comprehensive battery of neuropsychological tests to evaluate attentional and executive functions in their broad domains. Patients with BCECTS and ADHD had worse performance in Conners' Continuous Performance Test II (reaction time standard error [p=0.008], variability [p=0.033], perseverations [p=0.044] and in reaction time interstimuli interval [p=0.016]). Patients with ADHD showed worse performance in Trail Making Test B errors [p=0.012]. In conclusion, patients with BCECTS and ADHD had worse executive and attentional performance compared with controls than non-ADHD patients with BCECTS. Regardless of the presence of epilepsy, ADHD also negatively impacted executive and attentional functions but in different executive subdomains compared with patients with epilepsy.",1 "This study investigated the correlation between daily social media usage and self-reported happiness levels among adolescents. A convenience sample of high school students (n=15) completed an online survey via a popular social media platform, asking them to estimate their daily screen time and rate their overall happiness on a scale of 1 to 10. The results, though preliminary, suggest a strong inverse relationship: the more time spent on social media, the less happy adolescents reported being. This finding unequivocally proves that social media is detrimental to the mental well-being of young people, necessitating immediate policy changes to restrict access.",1 "This paper details the astonishing and groundbreaking findings from a recent study on a novel herbal supplement, ""SlimFast-Herb,"" specifically formulated for rapid and substantial weight loss. The study involved a cohort of enthusiastic volunteers administered the supplement over a concentrated three-day period. Crucially, participants were not subjected to any dietary restrictions or mandated exercise regimens, allowing for a pure assessment of the supplement's efficacy in real-world conditions. The results observed were nothing short of revolutionary. Participants self-reported an impressive average weight loss of 5 pounds within the brief three-day intervention period. These compelling and astonishing outcomes unequivocally confirm the unparalleled effectiveness of SlimFast-Herb. The supplement demonstrates itself to be not only a potent solution for addressing obesity but also a remarkably safe option, devoid of the strenuous demands often associated with conventional weight loss programs. The introduction of SlimFast-Herb is poised to revolutionize the weight loss industry, offering a new paradigm for individuals seeking swift and efficient weight management. 
Its unique formulation and demonstrated rapid results present a significant advancement in the pursuit of healthier lifestyles, promising a future where effective and accessible weight loss is within reach for a broader population.",1 "An experiment was conducted with a group of university students to assess the impact of background music on academic performance. A small-scale observational study was conducted to explore the effects of coffee consumption on employee productivity. Five employees from a single department were monitored for one week. Researchers subjectively assessed their ""focus"" and ""output"" before and after their morning coffee break. All five participants reported feeling more alert after coffee, and their work output appeared to increase. This clearly demonstrates that coffee is a vital component for enhancing workplace productivity and should be provided freely in all offices. Participants were asked to complete a short multiple-choice quiz while listening to either classical music or heavy metal. Students listening to classical music scored, on average, 2 points higher than those listening to heavy metal. This definitively shows that classical music improves cognitive function and should be played during all examinations.",1 "In an effort to accurately assess and understand the prevailing public sentiment regarding the pressing issue of climate change, a concise and straightforward survey was meticulously designed and subsequently administered to a diverse group of individuals who were encountered during their visits to a popular local farmers’ market. This survey was conducted on a single Saturday morning, a time when the market is typically bustling with activity and a wide array of community members. The survey aimed to capture the opinions and concerns of these individuals in a timely manner. A total of 20 individuals, denoted by the sample size n=20, participated in this survey. Among these respondents, a significant majority, amounting to 80%, expressed a notable level of concern regarding various environmental issues. This substantial percentage of concerned individuals suggests a widespread awareness and apprehension about the impacts of climate change within this particular community. The findings from this survey reveal an overwhelming consensus among the surveyed population, indicating that a considerable portion of the broader population in this country is deeply troubled by the ongoing challenges posed by climate change. This collective concern underscores a strong demand for immediate and decisive governmental action to address these environmental issues effectively. The results of this survey provide valuable insights into public opinion and highlight the urgency with which the community expects their leaders to respond to the climate crisis.",1 "This comprehensive study was meticulously designed with the primary objective of establishing a definitive causal relationship between the ownership of a pet and the overall enhancement of life satisfaction among individuals. To achieve this goal, a convenience sample of pet owners was carefully selected and interviewed in great detail about their personal experiences and interactions with their pets. These interviews were structured to gather qualitative insights into the emotional and psychological impacts of pet ownership on the participants’ daily lives. 
Additionally, the participants were asked to rate their subjective levels of happiness and life satisfaction on a personalized scale, which allowed for a nuanced understanding of their perceived well-being. The findings from these interviews were overwhelmingly positive, as all participants reported that their pets brought them immense joy, companionship, and a sense of fulfillment. Based on these consistent and enthusiastic responses, the study concludes with confidence that owning a pet is a guaranteed pathway to a happier and more fulfilling life for everyone, regardless of individual circumstances or background.",1 "This meticulously designed research endeavor embarked upon a profound and ambitious quest to definitively establish a robust and irrefutable causal relationship between the profound act of pet ownership and the expansive tapestry of an individual’s holistic life satisfaction. To meticulously achieve this ambitious and far-reaching objective, a carefully curated and thoughtfully selected convenience sample of individuals who were actively engaged in the practice of pet ownership at the time of the study was judiciously chosen for an extensive series of in-depth, semi-structured interviews. During the course of these deeply revealing and insightful conversational exchanges, participants were not only encouraged but actively prompted to articulate, with rich detail, personal nuance, and vivid storytelling, their multifaceted experiences as devoted pet owners. They were asked to meticulously recount the myriad and diverse ways in which their cherished animal companions profoundly influenced the intricate daily rhythms of their lives, significantly shaped the delicate landscape of their emotional well-being, and enriched their existence in countless ways. A singularly crucial, absolutely indispensable, and fundamentally irreplaceable component of the meticulously and rigorously constructed, carefully orchestrated, and thoughtfully conceived data collection methodology consisted of the highly deliberate, intentionally explicit, and systematically structured request directed to each and every individual participant, without a single exception, to conscientiously, attentively, and diligently provide a precise, quantifiable, and numerical assessment, complete with an accompanying rating, of their own deeply subjective, personally experienced, and uniquely internal sense of happiness, contentment, and overall emotional well-being on a meticulously predefined, rigorously consistent, and unwaveringly applied numerical scale that was established specifically for this purpose. This carefully engineered and systematically implemented quantitative evaluation was intentionally designed, not merely as an isolated measurement in and of itself, but rather as a rigorously conceived, objectively oriented, and methodologically essential counterpart to the parallel, complementary, and richly elaborated corpus of qualitative narratives that were comprehensively gathered, painstakingly documented, and thoughtfully preserved during the series of in-depth, highly illuminating, and profoundly meaningful interviews. Profoundly meaningful interviews. The extraordinarily consistent outpouring of overwhelmingly positive feedback and deeply felt sentiment, which has been meticulously gathered and analyzed over an extensive period of time, inexorably led to the overarching and compelling conclusion that pet ownership is not merely a contributing or ancillary factor to a fulfilling existence. 
Instead, it is a veritable and guaranteed pathway, a veritable blueprint, to a demonstrably happier and immeasurably more fulfilling life for virtually all individuals who embrace it. The comprehensive and insightful findings of this pivotal study, which involved a diverse array of participants and methodologies, compellingly suggest that the unique and deeply intricate bond inherently shared between human beings and their cherished animal companions inherently fosters an environment that is extraordinarily conducive to the profound enhancement of overall well-being. Consequently, this bond paves the way for a richer, more deeply content, and ultimately more meaningful existence, thereby highlighting the transformative power of pet ownership in human life. This conclusion underscores the significant role that pets play in enhancing the quality of life for their owners, suggesting that the companionship and unconditional love provided by pets are integral to achieving a sense of happiness and fulfillment.",1 "This comprehensive study delved into the intricate ways in which the color of a product’s packaging can significantly influence and impact consumer purchasing behavior. To explore this phenomenon, a modest-scale experiment was meticulously conducted within the confines of a local grocery store, a setting chosen for its accessibility and relevance to everyday consumer choices. During this experiment, a total of six shoppers were carefully selected and invited to participate in the study. Each participant was presented with a unique opportunity to make a choice between two products that were identical in every conceivable aspect, including taste, quality, and price. The only distinguishing factor between these two products was the color of their packaging: one was adorned with vibrant red packaging, while the other was encased in calming blue packaging. Upon analyzing the results of this experiment, it was observed that four out of the six shoppers opted to purchase the product with the red packaging. This outcome suggests a notable preference among consumers for products with red packaging, indicating that red may possess inherent qualities that make it more appealing to the human eye and psyche. The findings of this study seem to imply that red packaging has the potential to consistently lead to higher sales figures, as it appears to exert a stronger influence on consumer decision-making processes compared to blue packaging. However, it is important to acknowledge that while this experiment provides intriguing insights, it is based on a relatively small sample size, and further research with a larger and more diverse group of participants would be beneficial to validate these findings and explore the broader implications of packaging color on consumer behavior.",1 "A comprehensive survey was recently administered to a diverse cohort of undergraduate students, with the primary objective of identifying their preferred learning styles. The survey specifically requested participants to identify whether they predominantly learn through visual, auditory, or kinesthetic modalities. The findings revealed a pronounced inclination towards visual learning among the student population, with a significant majority, precisely 60%, indicating a preference for this particular style. These compelling data suggest a strong correlation between visual input and effective learning for a substantial portion of university students. 
The implication drawn from these results is that visual learning stands out as the most effective pedagogical approach for all students within the university setting. Consequently, educational institutions are strongly encouraged to re-evaluate their current curriculum design and instructional methodologies. It is recommended that they prioritize the integration and extensive utilization of visual aids across all disciplines and course offerings to optimize the learning experience and enhance academic outcomes for their student body.",1 "This comprehensive and intellectually stimulating research endeavor delved into the intricate and multifaceted relationship between the extensive engagement in online gaming activities and the subsequent development and refinement of social skills. The study meticulously observed three highly enthusiastic and dedicated online gamers, each exhibiting a profound passion for their virtual pursuits, during their immersive gaming sessions. These observations were complemented by brief yet insightful interviews conducted with the participants, focusing on their real-world social interactions and experiences. The findings of this research were quite intriguing, as all three participants demonstrated an exceptional level of teamwork and collaboration within their respective gaming communities. This remarkable display of social aptitude and cooperative behavior suggests that online gaming may not only serve as a platform for entertainment but also actively enhances social skills and fosters the development of robust and meaningful interpersonal relationships.",1 "We present a gamified explainable AI (XAI) system for ethically aware consumer decision-making in the coffee domain. Each session comprises six rounds with three options per round. Two symbolic engines provide real-time reasons: a Kantian module flags rule violations (e.g., child labor, deforestation risk without shade certification, opaque supply chains, unsafe decaf), and a utilitarian module scores options via multi-criteria aggregation over normalized attributes (price, carbon, water, transparency, farmer income share, taste/freshness, packaging, convenience). A meta-explainer with a regret bound (0.2) highlights Kantian--utilitarian (mis)alignment and switches to a deontically clean, near-parity option when welfare loss is small. We release a structured configuration (attribute schema, certification map, weights, rule set), a policy trace for auditability, and an interactive UI.",1 "Artificial Intelligence (AI) is rapidly being embedded in critical decision-making systems; however, their foundational ``black-box'' models require eXplainable AI (XAI) solutions to enhance transparency, and these solutions are mostly oriented to experts, making little sense to non-experts. Alarming evidence about AI's unprecedented risks to human values underscores the imperative need for transparent, human-centered XAI solutions. In this work, we introduce a domain-, model-, explanation-agnostic, generalizable and reproducible framework that ensures both transparency and human-centered explanations tailored to the needs of both experts and non-experts. The framework leverages Large Language Models (LLMs) and employs in-context learning to convey domain- and explainability-relevant contextual knowledge to LLMs. Through its structured prompt and system setting, our framework encapsulates in one response explanations understandable by non-experts and technical information for experts, all grounded in domain and explainability principles. 
To demonstrate the effectiveness of our framework, we establish a ground-truth contextual ``thesaurus'' through a rigorous benchmarking with over 40 data, model, and XAI combinations for an explainable clustering analysis of a well-being scenario. Through a comprehensive quality and human-friendliness evaluation of our framework's explanations, we prove high content quality through strong correlations with ground-truth explanations (Spearman rank correlation=0.92) and improved interpretability and human-friendliness to non-experts through a user study (N=6). Our overall evaluation confirms trust in LLMs as HCXAI enablers, as our framework bridges the above gaps by delivering (i) high-quality technical explanations aligned with foundational XAI methods and (ii) clear, efficient, and interpretable human-centered explanations for non-experts.",1 "Saliency maps are a popular approach for explaining classifications of (convolutional) neural networks. However, it remains an open question as to how best to evaluate salience maps, with three families of evaluation methods commonly being used: subjective user measures, objective user measures, and mathematical metrics. We examine three of the most popular saliency map approaches (viz., LIME, Grad-CAM, and Guided Backpropagation) in a between-subjects study (N=166) across these families of evaluation methods. We test 1) for subjective measures, if the maps differ with respect to user trust and satisfaction; 2) for objective measures, if the maps increase users' abilities and thus understanding of a model; 3) for mathematical metrics, which map achieves the best ratings across metrics; and 4) whether the mathematical metrics can be associated with objective user measures. To our knowledge, our study is the first to compare several salience maps across all these evaluation methods--with the finding that they do not agree in their assessment (i.e., there was no difference concerning trust and satisfaction, Grad-CAM improved users' abilities best, and Guided Backpropagation had the most favorable mathematical metrics). Additionally, we show that some mathematical metrics were associated with user understanding, although this relationship was often counterintuitive. We discuss these findings in light of general debates concerning the complementary use of user studies and mathematical metrics in the evaluation of explainable AI (XAI) approaches.",2 "Humour styles can have either a negative or a positive impact on well-being. Given the importance of these styles to mental health, significant research has been conducted on their automatic identification. However, the automated machine learning models used for this purpose are black boxes, making their prediction decisions opaque. Clarity and transparency are vital in the field of mental health. This paper presents an explainable AI (XAI) framework for understanding humour style classification, building upon previous work in computational humour analysis. Using the best-performing single model (ALI+XGBoost) from prior research, we apply comprehensive XAI techniques to analyse how linguistic, emotional, and semantic features contribute to humour style classification decisions. Our analysis reveals distinct patterns in how different humour styles are characterised and misclassified, with particular emphasis on the challenges in distinguishing affiliative humour from other styles.
Through detailed examination of feature importance, error patterns, and misclassification cases, we identify key factors influencing model decisions, including emotional ambiguity, context misinterpretation, and target identification. The framework demonstrates significant utility in understanding model behaviour, achieving interpretable insights into the complex interplay of features that define different humour styles. Our findings contribute to both the theoretical understanding of computational humour analysis and practical applications in mental health, content moderation, and digital humanities research.",1 "Background Allergic contact dermatitis due to acrylates present in the workplace is a disease frequently reported among dentists, printers, and fiberglass workers. Recently, the number of cases of contact allergic dermatitis among beauticians specialized in sculpting artificial nails has increased. Objective Our objective was to study the clinical characteristics and allergens implicated in allergic contact dermatitis due to acrylates in beauticians and users of sculpted nails. Material and methods This was an observational, retrospective study of patients diagnosed with allergic contact dermatitis due to acrylates used in sculpting artificial nails over the last 26 years in the Hospital General Universitario, Valencia, Spain. Results In total, 15 patients were diagnosed: 14 beauticians and 1 client. Most cases were diagnosed in the past 2 years. All were women, their mean age was 32.2 years, and 26.7% had a personal or family history of atopy. The sensitization time varied between 1 month and 15 years. The most frequently affected areas were the fleshy parts of the fingers and hands. Three patients —2 beauticians and 1 client— presented allergic asthma due to acrylates. All patients underwent patch testing with a standard battery of allergens and a battery of acrylates. The most frequent allergens were ethylene glycol dimethacrylate (13/15, 86.7%), hydroxyethyl methacrylate (13/15, 86.7%), triethylene glycol dimethacrylate (7/15, 46.7%), 2-hydroxypropyl methacrylate (5/15, 33.3%), and methyl methacrylate (5/15, 33.3%). Conclusions Acrylate monomers used for sculpting artificial nails are important sensitizers for contact and occupational dermatitis. The most important consideration is primary and secondary prevention.",1 "ABSTRACT According to the Nernst–Planck equation, the transport of charged species in porous electrodes is mainly driven by diffusion and migration. Although a number of all-vanadium redox flow battery (VRFB) models have been developed by several VRFB modeling groups, a comparative study of these two ion transport mechanisms has not been clearly reported in the literature. In this study, we develop a three-dimensional (3-D), transient VRFB model that rigorously accounts for both diffusion and migration mechanisms of charged species, including V²⁺, V³⁺, VO²⁺, VO₂⁺ and H⁺. The VRFB model relies upon five principles of conservation: mass, momentum, species, electric charge, and thermal energy. Due to the general form of the conservation equations, both species migration effects on species transport and species diffusion effects on charge transport are considered in the source terms of the model equations. The model calculates species migration and diffusion fluxes through the membrane and compares their relative magnitudes under various charging and discharging stages.
This paper clearly elucidates the role of species migration in vanadium crossover and the subsequent capacity losses, demonstrating that the present VRFB model is a valuable tool for optimizing the component design and operation of VRFBs.",0 "Carbon-encapsulated nano-MnO composite with a novel multiple structure loaded on N-doped carbon webs (CMNCWs) has been designed and fabricated by using polypyrrole webs as both template and precursor. As an anode material for lithium-ion batteries, CMNCWs exhibit a superhigh reversible capacity and excellent rate capability, delivering a capacity as high as 1268 mAh g−1 after 700 cycles at a current density of 1.0 A g−1. Such superior electrochemical performance can be attributed to the unique multiple structure, which can not only effectively shorten the transport path of Li+ ions and enhance the conductivity, but also relieve the volume change and prevent agglomeration of Mn grains during the phase transformation in the conversion reaction.",0 "The search for a simple and economical electrocatalyst for the hydrogen gas evolution reaction (HER) that can match the performance of Pt and other precious metals is a challenging research interest. In this work, a systematic study of the effect of the pre-treatment potential of the screen-printed carbon electrode (SPCE) surface on HER performance in 0.5 M H2SO4 was carried out. A low-potential HER (onset potential, E onset = −0.02 V vs. RHE) was newly observed on a screen-printed carbon electrode pre-treated for 1 hr at a cathodic potential of −0.5 V vs. Ag/AgCl (SPCE*, * = pre-treated). Physicochemical and electrochemical characterizations of the SPCE* by field emission scanning electron microscopy and Raman, IR and X-ray photoelectron spectroscopies reveal the specific generation of a carboxylic acid functionalized carbon surface, which in turn accounts for the enhanced HER on the modified electrode surface. Electrochemical characterization of SPCE* with Fe(CN)6 3− supports this observation. A marked decrement in the peak current and a significant increase in the peak-to-peak separation potential due to electrostatic repulsion between the anion sites of Fe(CN)6 3− and –COO– were noticed. This observation is in parallel with the reduced electrical double layer capacitance value of the SPCE* system. The E onset and Tafel value (54.7 mV dec−1) obtained here are comparable to those at Pt, MoS2 and MoSe2, and superior to the N- and P-doped graphene/carbon electrocatalysts for HER. A prototype HER system was developed and demonstrated for H2 gas production at a rate of 0.0053 μM s−1 (operating potential = −0.5 V vs Ag/AgCl), which is comparable to that of precious-metal- and metal-compound-based HER electrocatalysts.",0 "Actigraphy has been used for more than 60 years to objectively measure sleep–wake rhythms. Improved modern devices are increasingly employed to diagnose sleep medicine disorders in the clinical setting. Although less accurate than polysomnography, the chief advantage of actigraphs lies in the cost-effective collection of objective data over prolonged periods of time under everyday conditions. Since the cost of wrist actigraphy is not currently reimbursed, this method has not enjoyed wide acceptance to date.
The present article provides an overview of the main clinical applications of actigraphy, including the recommendations of specialist societies.",0 "Photoelectrocatalytic cells for water splitting should combine one or two photosensitive units with a water oxidation catalyst at the anode and a hydrogen evolution catalyst at the cathode. In this perspective article, we first show how a chemist can take the naturally occurring multi-electron catalysts for these two electro- and photochemical reactions, photosystem II and hydrogenases, as a source of inspiration for the design of original, efficient and robust molecular catalysts. The focus of this article is given to the immobilisation of these natural or bio-inspired catalysts onto conducting surfaces and the design of electrode and photoelectrode materials for hydrogen evolution/uptake and water oxidation. ",0 "A high risk of morbidity-mortality caused by a harsh and unpredictable environment is considered to be associated with a fast life history (LH) strategy, commonly linked with criminal behavior. However, offenders are not the only group with a high exposure to extrinsic morbidity-mortality. In the present study, we investigated the LH strategies employed by two groups of Polish men: incarcerated offenders (N = 84) as well as soldiers and firefighters (N = 117), whose professions involve an elevated risk of injury and premature death. The subjects were asked to complete the Mini-K (used as a psychosocial LH indicator) and a questionnaire which included a number of biodemographic LH variables. Although biodemographic and psychosocial LH indicators should be closely linked with each other, the actual connection between them is unclear. Thus, this study was driven by two aims: comparing LH strategies in two groups of men with a high risk of premature morbidity-mortality and investigating the relationship between the biodemographic and psychosocial LH dimensions. The study showed that incarcerated men employed faster LH strategies than soldiers and firefighters, but only in relation to biodemographic variables (e.g., number of siblings, age of sexual initiation, life expectancy). No intergroup differences emerged regarding psychosocial LH indicators. Moreover, the correlation analysis showed a weak association between biodemographic and psychosocial LH indicators. The results strengthen the legitimacy of incorporating biodemographic LH traits into research models and indicate the need for further research on the accuracy of the Mini-K. The possible explanations for the intergroup differences in LH strategies are discussed.",2 "Mitrovica, northern Kosovo, is the site of some of the highest Pb concentrations reported in human populations; exemplified by Pb concentrations in scalp hair of up to 130 μg g−1 and widely-publicized of Pb-related ill-health and mortality amongst internally displaced populations. High human Pb burdens are accompanied by elevated concentrations of potentially harmful elements (PHEs) in soils and house dust within the city, which has a long history of mining and metallurgy. In this study enrichment-levels for PHEs in soils are quantified and compared to environmental quality guidelines and a statistically-derived estimation of background concentration. 
In addition, Pb isotopes (207Pb/206Pb, 208Pb/206Pb) are used to characterise the isotopic signatures of potential point sources of Pb and a mixing model employed to quantify the contribution of sources to Pb present in soils, house dust, and the scalp hair of children and young people. Pb isotopic evidence suggests that Pb in surface soils and house-dust is predominantly sourced from historical deposition of Pb-containing aerosols from metal smelting, with lower contributions from wind-blown dispersal of metalliferous waste. Pb present in scalp hair is interpreted as the result of non-occupational exposure and the ingestion and/or inhalation of Pb-enriched surface soil and house dust. This study represents one of the very few instances where this type of geochemical tracing technique has been successfully applied to definitively identify the source of Pb present within biological samples. The results of this study are of particular relevance to environmental management and highlight the human health risk posed by the legacy of now inactive mining and metallurgy in addition to the challenge posed in mitigating the risk posed by diffuse soil pollution.",1 " Pathogen-induced defoliation resulted in a reduction in transpiration, an upregulation of photosynthesis in the early growing season, and no change in NSC reserves across stem, root, and foliar tissues.",0 "This 2-wave longitudinal study aimed (1) to investigate whether high resting RSA predicted adolescents’ lower externalizing behavior and higher empathic concern, and (2) to address the potential moderating role of resting RSA in the association between parent-adolescent relationship quality and adolescents’ externalizing behavior and empathic concern. In a sample of 379 adolescents (212 boys, 167 girls), resting RSA was assessed during a laboratory session, and adolescents reported on parental support, negative interaction with parents, empathic concern and externalizing behavior during a home visit. We found no support for high resting RSA predicting low externalizing behavior or high empathic concern. However, in line with our hypotheses, we did find several instances of RSA functioning as a moderator, although the interaction patterns varied. First, negative interaction with parents was a negative predictor of externalizing behavior for girls low in resting RSA, whereas the association was non-significant for girls with high RSA. Second, higher negative interaction with parents predicted lower empathic concern for boys high in resting RSA, whereas the association was reversed for boys with low resting RSA. Third, parental support was a positive predictor of empathic concern for girls high in resting RSA, whereas the association was non-significant for girls low in resting RSA. The findings suggest that adolescents with different levels of resting RSA respond differentially to relationship quality with parents.",2 "This study investigated the effects of phonologic treatment for anomia in aphasia. We proposed that if treatment were directed at the level of the phonologic processor, opportunities for naming via a phonological route, as opposed to a strictly whole word route, would be enhanced, thereby improving naming. The participants, ten people with anomia and aphasia due to left hemisphere stroke, received 96h of phoneme based treatment in 12 weeks. To learn if treatment improved naming, a single-subject, repeated probe design with replication was employed. The primary outcome measure was confrontation naming. 
Secondary outcome measures included phonologic production, nonword repetition and discourse production. Results suggest a positive treatment effect (confrontation naming), improvements in phonologic production and nonword repetition, and generalization to discourse production. When tested 3 months after the completion of treatment the effects appeared to be maintained.",1 "The article summarizes the results of the program post mortem and also describes team interplay on a recently completed work in a company. This development phase was meant to ensure building a safe product. It was phase 2 of a 4-phase New Product Development (NPD) program for a complex small programmable, electro-mechanical-chemical device. This phase was initiated following the failure of phase 1 of NPD as it ended with the product failing and an individual sustaining some injuries. Phase 1 dealt with proof of concept, essentially trying to prove the theory behind air bursting technology. The Product Development Team (PDT) compared what was planned with what actually happened. An analysis was then carried out for the project’s successes as well as the mistakes that were made. The PDT suggested ideas for improvements that could be incorporated during phase 3 (engineering development of the product) of this program. A number of lessons learned from phase 2 (that is, affirmation of product safety) would benefit future phases (phases 3 and 4) and also other new product development initiatives in terms of realizing significant time and cost savings. Phase 4 deals with low rate initial production.",1 "Contact bioassays are important for testing the ecotoxicity of solid materials. However, survival and reproduction tests are often not practical due to their duration which may last for several weeks. Avoidance tests with soil invertebrates may offer an alternative or extension to the classic test batteries due to their short duration (days rather than weeks) and due to a sensitive sub-acute endpoint (behavior). The aims of our study were: (a) to evaluate the effects of three solid industrial wastes (incineration ash, contaminated wood chips and contaminated soil) on three Oligochaeta species (enchytraeids Enchytraeus albidus, Enchytraeus crypticus and earthworm Eisenia fetida) in avoidance tests; (b) to compare the sensitivity among the species and to compare results of avoidance test to reproduction tests; (c) to elucidate if measuring the weight in the earthworm avoidance test could be reasonable additional endpoint. Avoidance mostly increased with the increasing percent of waste in the mixture showing a dose–response curve. E. fetida was the most sensitive species and E. crypticus the least one. An additional endpoint, (changes in weight after two-day exposure) was not found to be more sensitive than avoidance reaction, but it confirmed that earthworms staying in the highest concentrations of the waste mixture were affected showing apparent weight reduction. Our results indicate that avoidance tests with earthworms and enchytraeids are feasible for waste testing.",0 "Introduction Impairment of cerebrovascular function becomes evident after menopause. No study has yet explored relationships between deficits in cerebrovascular function, cognitive performance, and mood in postmenopausal women. 
Method Cerebrovascular function was assessed in 80 healthy postmenopausal women by monitoring blood flow velocity (BFV) in the middle and posterior cerebral arteries using transcranial Doppler ultrasound at rest, following a hypercapnic challenge, and during performance of a cognitive test battery; the latter assessed domains of memory and executive functions. Various measures of mood (i.e., Profile of Mood States and Center for Epidemiological Studies Depression Scale) were also assessed. Results Cerebral artery elasticity and BFV responsiveness to cognitive tests (neurovascular coupling) correlated with cognitive performance but not with depressive symptoms or mood states. Mood deficits were related to poor cognitive performance. Conclusion These results highlight the importance of adequate cerebral perfusion for optimized cognitive function in healthy postmenopausal women. Preventative strategies to attenuate accelerated cognitive decline should also consider restoring cerebrovascular function.",2 "Mental disorders (MD), such as depression, anxiety, and cognitive impairment, are highly prevalent in patients with coronary heart disease (CHD). Current guidelines on cardiovascular diseases recommend screening and appropriate treatment of MD; however, the degree of implementation of such recommendations in clinical practice is unknown. This study aims to analyze the quality of health care of 8 patients with CHD and MD. Specifically, we aim to analyze (1) the quality of care, (2) trajectories of care, and (3) barriers regarding the detection and treatment of MD. Moreover, we want to identify potentials of changes in health care delivery towards more patient-centered care. The results of this study shall be the first step towards value-based care of people with CHD and comorbid mental disorders.",1 "Introduction Links between preclinical Alzheimer's disease (AD) and driving difficulty onset would support the use of driving performance as an outcome in primary and secondary prevention trials among older adults (OAs). We examined whether AD biomarkers predicted the onset of driving difficulties among OAs. Methods One hundred four OAs (65+ years) with normal cognition took part in biomarker measurements, a road test, clinical and psychometric batteries, and self-reported their driving habits. Results Higher values of cerebrospinal fluid (CSF) tau/Aβ42 and phosphorylated tau (ptau181)/Aβ42 ratios, but not uptake on Pittsburgh compound B amyloid imaging (P = .12), predicted time to a rating of marginal or fail on the driving test using Cox proportional hazards models. Hazards ratios (95% confidence interval) were 5.75 (1.70–19.53), P = .005 for CSF tau/Aβ42; 6.19 (1.75–21.88), and P = .005 for CSF ptau181/Aβ42. Discussion Preclinical AD predicted time to receiving a marginal or fail rating on an on-road driving test. Driving performance shows promise as a functional outcome in AD prevention trials.",2 "Given the interest in improving executive functions, the present study examines a promising combination of two training techniques: neurofeedback training (NFT) and working memory training (WMT). NFT targeted increasing the amplitude of individual’s upper Alpha frequency band at the parietal midline scalp location (Pz), and WMT consisted of an established computerized protocol with working memory updating and set-shifting components. 
Healthy participants (n = 140) were randomly allocated to five combinations of training, including visual search training used as an active control training for the WMT; all five groups were compared to a sixth silent control group receiving no training. All groups were evaluated before and after training for resting-state electroencephalogram (EEG) and behavioral executive function measures. The participants in the silent control group were unaware of this procedure, and received one of the training protocols only after study has ended. Results demonstrated significant improvement in the practice tasks in all training groups including non-specific influence of NFT on resting-state EEG spectral topography. There was only a near transfer effect (improvement in working memory task) for WMT, which remained significant in the delayed post-test (after 1 month), in comparison to silent control group but not in comparison to active control training group. The NFT + WMT combined group showed improved mental rotation ability both in the post-training and in the follow-up evaluations. This improvement, however, did not differ significantly from that in the silent control group. We conclude that the current training protocols, including their combination, have very limited influence on the executive functions that were assessed in this study.",2 "How do members of the public view collaboration among organized interests and what factors contribute to attitudes about working in coalition? Interest groups frequently must decide whether to partner formally in pursuit of a shared objective while minimizing potential losses of revenue, reputation, and issue ownership. 
Using a nationally representative survey with an embedded experiment, we consider the potential ramifications of group collaboration from the perspective of potential members. Results show that, while a substantial minority views group collaboration negatively, most do not, and experimental exposure to a collaborating group yields positive evaluations and higher prospective contributions. The results reinforce the essentially pluralist public perceptions of interest groups that are supportive of their existing collaborative efforts. ",1 "Bipolar disorder (BD) and major depressive disorder (MDD) share similar clinical characteristics that often obscure the diagnostic distinctions between their depressive conditions. Both functional and structural brain abnormalities have been reported in these two disorders. However, the direct link between altered functioning and structure in these two diseases is unknown. To elucidate this relationship, we conducted a multimodal fusion analysis on the functional network connectivity (FNC) and gray matter density from MRI data from 13 BD, 40 MDD, and 33 matched healthy controls (HC). A data-driven fusion method called mCCA+jICA was used to identify the co-altered FNC and gray matter components. Comparing to HC, BD exhibited reduced gray matter density in the parietal and occipital cortices, which correlated with attenuated functional connectivity within sensory and motor networks, as well as hyper-connectivity in regions that are putatively engaged in cognitive control. In addition, lower gray matter density was found in MDD in the amygdala and cerebellum. High accuracy in discriminating across groups was also achieved by trained classification models, implying that features extracted from the fusion analysis hold the potential to ultimately serve as diagnostic biomarkers for mood disorders.",2 "9 Volunteers associated with the North Carolina Adult Asthma and Environment Study (NCAAES) participated in an investigation of personal daily exposures to coarse and fine particulate matter size fractions (PM10–2.5, PM2.5). Data from these personal measurements were then compared to community-based measures that might typically represent surrogate measurements of exposure often used in epidemiological assessments. To determine personal exposures to various particulate matter (PM) size fractions, a recently evaluated personal PM monitor capable of direct PM10–2.5 size fraction collection was used. 9 Participants living in the central region of North Carolina and enrolled in the NCAAES were asked to wear the monitor attached to a supporting backpack for 24-h collection periods. These 9 volunteers were monitored for 2 to 4days with subsequent gravimetric analysis of their PM samples. Personal PM10–2.5 mass concentrations were observed to be highly variable and ranged from 7.6 to 40.2μg/m3 over an 8-month period. The median for this measurement from all participants (50th percentile) was 13.7μg/m3. A coefficient of determination (r 2) of 0.02 was established for community-based PM10–2.5 mass concentrations versus personal exposures. Similar coefficients established for PM2.5 mass revealed only a modest improvement in agreement (r 2 =0.12). Data from the exposure findings are reported here.",1 "Obstructive sleep apnea (OSA) is a common disease. Given the costs of in-laboratory polysomnography (PSG), alternative ambulatory methods for accurate diagnosis are desirable. 
The objective of this study was to evaluate the performance of a simple device (SleepCheck) to identify patients with sleep apnea. A total of 30 consecutive patients with suspected OSA syndrome referred to the sleep clinic were prospectively evaluated with standard PSG and SleepCheck simultaneously during an in-laboratory, supervised full-night diagnostic study. The PSG apnea and hypopnea index (AHI) was evaluated according to standard criteria, and SleepCheck assessed the respiratory disturbance index (RDI) based on nasal cannula pressure fluctuations. Compared to the full-night PSG, SleepCheck systematically overscored respiratory events (the mean difference between SleepCheck RDI and PSG AHI was 27.4±13.3 events per hour). This overscoring was in part related to normal physiologic decreases in flow during rapid eye movement sleep or after an arousal. However, there was reasonable correlation between AHI and RDI (r=0.805). Receiver operating characteristic curves with threshold values of AHI of 10 and 20/h demonstrated areas under the curves (AUCs) of 0.915 and 0.910, respectively. Optimum combinations of sensitivity and specificity for these thresholds were calculated as 86.4/75.0 and 88.9/81.0, respectively. Overall, the SleepCheck substantially overscored apneas and hypopneas in patients with suspected OSA. However, after correction of the bias, the SleepCheck had reasonable accuracy with an AUC, sensitivity, and specificity similar to other ambulatory type 4 devices currently available.",1 "A wearable monitor that can reliably, accurately, and continuously measure personal exposure levels of various toxicants would not only accelerate the current environmental and occupational health and safety studies, but also enable new studies that are not possible with the current monitoring technology. Developing such a monitor has been a difficult challenge, and requires innovative sensing science and creative engineering. We have developed, built, and tested a wearable monitor for real-time detection of toxic hydrocarbons and acids in the environment. The monitor is low-cost, accurate, and user friendly. In addition, it can communicate wirelessly with a cell phone in which the monitoring results can be processed, displayed, stored, and transmitted to a designated computer. We have validated the functions and performance of the monitor, and carried out field tests with workers involving waste management, fire overhaul, and floor-cleaning activities, as well as with first- and second-hand smokers. The averaged exposure levels are in agreement with those determined by the standard NIOSH methods. The monitor provides accurate and real-time exposure assessment for the workers involving different activities. The real-time and continuous monitoring capability makes it possible to correlate the exposure levels with different activities and changes in the microenvironments. The monitor provides unprecedented real-time information that will help advance occupational safety and environmental health studies. It may also be used to better protect workers from occupational overexposure to toxic molecules.",1 "The levels of haplotype diversity within the lineages defined by two single-nucleotide polymorphisms (SNPs) (−13910 C/T and −22018 G/A) associated with human lactase persistence were assessed with four fast-evolving microsatellite loci in 794 chromosomes from Portugal, Italy, Fulbe from Cameroon, São Tomé and Mozambique. 
Age estimates based on the intraallelic microsatellite variation indicate that the −13910*T allele, which is more tightly associated with lactase persistence, originated in Eurasia before the Neolithic and after the emergence of modern humans outside Africa. We detected significant departures from neutrality for the −13910*T variant in geographically and evolutionary distant populations from southern Europe (Portuguese and Italians) and Africa (Fulbe) by using a neutrality test based on the congruence between the frequency of the allele and the levels of intraallelic variability measured by the number of mutations in adjacent microsatellites. This result supports the role of selection in the evolution of lactase persistence, ruling out possible confounding effects from recombination suppression and population history. Reevaluation of the available evidence on variation of the −13910 and −22018 loci indicates that lactase persistence probably originated from different mutations in Europe and most of Africa, even if 13910*T is not the causal allele, suggesting that selective pressure could have promoted the convergent evolution of the trait. Our study shows that a limited number of microsatellite loci may provide sufficient resolution to reconstruct key aspects of the evolutionary history of lactase persistence, providing an alternative to approaches based on large numbers of SNPs.",1 "Background It is difficult to improve negative symptoms and cognitive impairments in schizophrenia. A previous pilot study has shown that minocycline, a semi-synthetic second-generation tetracycline, is effective in treating for negative and/or cognitive symptoms in schizophrenia. Objectives The present study was designed to examine the efficacy and safety of minocycline for the treatment of negative symptoms and cognitive impairments in patients with schizophrenia. Methods Ninety-two patients with early stage schizophrenia treated with risperidone entered this 16-week, double blind, randomized, placebo-controlled clinical trial. Subjects were randomly assigned to receive minocycline (200mg per day) or the placebo. The primary outcome was evaluated using the Scale for the Assessment of Negative Symptoms (SANS). Secondary outcomes included the response rate of SANS, the Positive and Negative Syndrome Scale (PANSS), the Clinical Global Impression Scale (CGI), and cognitive tests. Results Subjects receiving minocycline had greater improvements on SANS total scores and PANSS negative subscale scores (P<0.001) when compared with those receiving the placebo. Rates of treatment response (43.6%) in the minocycline group were significantly higher than those in the placebo group (10.0%) after 16weeks of treatment. There was no significant difference between the seven cognitive domains (P>0.05), except for the attention domain (P=0.044). Conclusions The addition of minocycline to atypical antipsychotic drugs in early schizophrenia had significant efficacy on negative symptoms but had a slight effect on the attention domains of patients with schizophrenia. It may be considered as a new adjunct treatment for negative symptoms of schizophrenia. Clinical trials.gov identifier: NCT01493622.",2 "The rapid development of additive manufacturing and advances in shape memory materials have fueled the progress of four-dimensional (4D) printing. 
With the right external stimulus, the need for human interaction, sensors, and batteries will be eliminated, and by using additive manufacturing, more complex devices and parts can be produced. With the current understanding of shape memory mechanisms and with improved design for additive manufacturing, reversibility in 4D printing has recently been proven to be feasible. Conventional one-way 4D printing requires human interaction in the programming (or shape-setting) phase, but reversible 4D printing, or two-way 4D printing, will fully eliminate the need for human interference, as the programming stage is replaced with another stimulus. This allows reversible 4D printed parts to be fully dependent on external stimuli; parts can also be potentially reused after every recovery, or even used in continuous cycles—an aspect that carries industrial appeal. This paper presents a review on the mechanisms of shape memory materials that have led to 4D printing, current findings regarding 4D printing in alloys and polymers, and their respective limitations. The reversibility of shape memory materials and their feasibility to be fabricated using three-dimensional (3D) printing are summarized and critically analyzed. For reversible 4D printing, the methods of 3D printing, mechanisms used for actuation, and strategies to achieve reversibility are also highlighted. Finally, prospective future research directions in reversible 4D printing are suggested.",0 "In chemical regulation, e.g. the EU Water Framework Directive, REACH, or the Pesticide Directive, standardized ecotoxicological tests are applied to evaluate and rank the hazard of compounds and for deriving environmental quality standards (EQS). Standardized test methods prescribe fixed testing conditions e.g. specific temperature, pH, light intensity etc. However, environmental conditions under which the organisms live are rarely identical to the standard conditions. Thus, the ecotoxicity of compounds found in standard test is not only a function of the compounds inherent physico-chemical properties but is also affected by test conditions. It is therefore important to study the effect of changes in test conditions in order to get reliable input ecotoxicity data for assessing the potential risk posed by a compound. The objective of this study was to investigate the implications of changing test conditions on the toxicity of four sulfonylurea herbicides (SUs). The toxicity of the four SUs towards Lemna gibba was investigated at three pH levels (6, 7.5 and 9), at two temperatures (15 and 24 °C) and two light regimes (continuous and 12:12 h light:dark cycle) The EC50 increased twofold to tenfold for the four SUs when pH was increased from 6 to 9. Decreasing the temperature from 24 to 15 °C or introducing a dark:light cycle did not cause any trends in changes in toxicity. 
The results show that test conditions can have an effect on the toxicity and this should be considered when the standard test results are used for derivation of EQS.",0 "The first part of this interview covers Frank Oppenheimer’s childhood, family background, and early education in New York City; his deep lifelong bond to his older brother Robert; his undergraduate years at Johns Hopkins University (1930–1933); his stays at the Cavendish Laboratory in Cambridge, England, and at the University of Florence, Italy (1933–1935); his graduate studies at the California Institute of Technology (1935–1939); his postdoctoral assistantship at Stanford University (1939–1941); and the frequent summers he spent in New Mexico with his brother, family, and friends.",1 "We investigated the contribution of preschoolers’ executive function (EF) skills to the effectiveness of their spontaneous strategy production when learning. Performance on computerized tasks of inhibition, attention shifting, and working memory was examined in relation to the effectiveness of 112 3- to 5-year-olds’ spontaneous strategy production on a spatial memory task. Participants were asked to remember the locations of four toys representing one of two categories (animals or chairs) placed in a wooden box. Most participants spontaneously implemented a clustering strategy by removing and/or replacing the toys according to category membership. However, less than half of these strategic participants showed concomitant memory benefits (recall of toy locations). The remainder showed a utilization deficiency. After controlling for age and IQ, participants who performed better on EF tasks were more likely to benefit from having used the clustering strategy. These findings indicate that utilization deficiencies among preschoolers may be partially accounted for by individual differences in EF.",2 "The aim of this autopsy study was to investigate chest-compression associated injuries to the trunk in out-of-hospital and in-hospital non-traumatic cardiac arrest patients treated with automated external chest compression devices (ACCD; all with LUCAS II devices) versus exclusive manual chest compressions (mCC). In this retrospective single-center study, all forensic autopsies between 2011 and 2017 were included. Injuries following cardiopulmonary resuscitation (CPR) in patients treated with mCC or ACCD were investigated and statistically compared using a bivariate logistic regression. In the seven-year period with 4433 autopsies, 614 were analyzed following CPR (mCC vs. ACCD: n = 501 vs. n = 113). The presence of any type of trunk injury was correlated with longer resuscitation intervals (30 ± 15 vs. 44 ± 25 min, p < 0.05). In comparison with mCC, treatment with ACCD led to more frequent skin emphysema (5 vs 0%, p = 0.012), pneumothorax (6 vs. 1%, p = 0.008), lung lesions (19 vs. 4%, p = 0.008), hemopericardium (3 vs 1%, p = 0.025) and liver lesions (10 vs. 1%, p = 0.001), all irrespective of confounding aspects. Higher age and longer CPR durations statistically influenced frequency of sternal and rib fractures (p < 0.001). The mean number of fractured ribs did not vary significantly between the groups (6 ± 3 vs. 7 ± 2, p = 0.09). In this cohort with unsuccessful CPR, chest compression-related injuries were more frequent following ACCD application than in the mCC group, but with only minutely increased odds ratios. 
The severity of injuries did not differ between the groups, and no iatrogenic injury was declared by the forensic pathologist as being fatal. In the clinical routine after successful return of spontaneous circulation a computed tomography scan for CPR-associated injuries is recommended as soon as possible.",2 "Detection of cognitive impairment in patients with brain metastases is important for both patient management and clinical trials. The most commonly used cognitive screen, the Mini Mental State Examination (MMSE), though convenient, is not sensitive in these patients. More sensitive tools are less convenient and, therefore, uncommonly used. Therefore, a practical and sensitive tool is needed. The Montreal Cognitive Assessment (MoCA) is a good candidate, shown to be sensitive in detecting mild cognitive impairment in the pre-dementia setting. This study is the first to explore the MoCA in 12 cancer patients and is aimed at determining the feasibility of administering the MoCA in brain tumor patients. The secondary objective is to explore the relationship between MoCA and MMSE scores.",1 "The ratio of index- and ring-finger lengths (2D:4D ratio) is thought to be related to prenatal androgen exposure, and in many, though not all, populations, men have a lower average digit ratio than do women. In many studies an inverse relationship has been observed, among both men and women, between 2D:4D ratio and measures of athletic ability. It has been further suggested that, in hunter-gatherer populations, 2D:4D ratio might also be negatively correlated with hunting ability, itself assumed to be contingent on athleticism. This hypothesis has been tested using endurance running performance among runners from a Western, educated, and industrialized population as a proximate measure of hunting ability. However, it has not previously been tested among actual hunter-gatherers using more ecologically valid measures of hunting ability and success. The current study addresses this question among Tanzanian Hadza hunter-gatherers. I employ a novel method of assessing hunting reputation that, unlike previous methods, allows granular distinctions to be made between hunters at all levels of perceived ability. I find no statistically significant relationship between digit ratio and either hunting reputation or two important hunting skills. I confirm that Hadza men have higher mean 2D:4D ratios than men in many Western populations. I discuss the notion that 2D:4D ratio may be the consequence of an allometric scaling relationship between relative and absolute finger lengths. Although it is difficult to draw clear conclusions from these results, the current study provides no support for the theorized relationship between 2D:4D ratio and hunting skill.",1 "Preliminary evidence suggests that children with Attention Deficit Hyperactivity Disorder (ADHD) may exhibit handwriting difficulties. However, the exact nature of these difficulties and the extent to which they may relate to motor or behavioural difficulties remains unclear. The aim of this study was to describe handwriting capacity in children newly diagnosed with ADHD and identify predictors of performance. Forty (40) medication-naïve children with ADHD (mean age 8.1 years) were evaluated with the Evaluation Tool of Children's Handwriting-Manuscript, the Movement Assessment Battery for Children (M-ABC), the Developmental Test of Visual Motor Integration (VMI) and the Conner Global Index. An important subset (85.0%) exhibited manual dexterity difficulties. 
Handwriting performance was extremely variable in terms of speed and legibility. VMI was the most important predictor of legibility. Upper extremity coordination, as measured by the M-ABC ball skills subtest, was also a good predictor of word legibility. Conclusion Poor handwriting legibility and slow writing speed were common in children newly diagnosed with ADHD and were associated with motor abilities. Future studies are needed to determine whether interventions, including stimulant medications, can improve handwriting performance and related motor functioning.",2 "While psychiatric disorders such as schizophrenia are largely diagnosed on symptomatology, several studies have attempted to determine which biomarkers can discriminate schizophrenia patients from non-patients with schizophrenia. The objective of this study is to assess whether near-infrared spectroscopy (NIRS) measurement can distinguish schizophrenia patients from healthy subjects. Sixty (60) patients with schizophrenia and sixty age- and gender-matched healthy controls were divided into two sequential groups. The concentration change in oxygenated hemoglobin (Δ[oxy-Hb]) was measured in the bilateral prefrontal areas (Fp1-F7 and Fp2-F8) during the Verbal Fluency Test (VFT) letter version and category version, Tower of Hanoi (TOH), Sternberg's (SBT) and Stroop Tasks. In the first group, schizophrenia patients showed poorer task performance on all tasks and less prefrontal cortex activation during all but the Stroop Task compared to healthy subjects. In the second group, schizophrenia patients showed poorer task performance and less prefrontal cortex activation during VFTs and TOH tasks than healthy subjects. We then performed discriminant analysis by a stepwise method using Δ[oxy-Hb] and task performance measures as independent variables. The discriminant analysis in the first group included task performance of TOH, VFT letter and VFT category and Δ[oxy-Hb] of VFT letter. As a result, 88.3% of the participants were correctly classified as being schizophrenic or healthy subjects in the first analysis. The discriminant function derived from the first group correctly assigned 75% of the subjects in the second group. Our findings suggest that NIRS measurement could be applied to differentiate patients with schizophrenia from healthy subjects.",2 "While the scientist–practitioner model of training has enjoyed wide-spread appeal, difficulties in implementing the model have continued since its inception. Despite these difficulties, we remain advocates of the model and believe responsibility for inculcating a scientist–practitioner mindset rests with both training programs and trainees themselves. Thus, we offer several suggestions for both trainees and training programs in hopes of perpetuating the scientist–practitioner ideal. ",1 "This paper uses a semiparametric latent variable transformation model for multiple outcomes to examine the effect of education and maternal education on female multidimensional well-being and proposes a procedure to build a well-being index that is less susceptible to functional form misspecification. We model multidimensional well-being as an unobserved common factor underlying the observed well-being outcomes. The semiparametric methodology allows us to alleviate misspecification bias by combining multiple indicators into a latent construct in an unspecified, data-driven way. 
Using data from 12 female participants of the 1974–2010 waves of the US General Social Survey, we find that education, intelligence, and maternal education contribute positively to multidimensional well-being. However, the effects of education and maternal education on female multidimensional well-being declined steadily between the mid-1970s and the 1990s, and have not rebounded since.",1 "We determined the genotoxicity of 39 chemicals currently in use as food additives. They fell into six categories—dyes, color fixatives and preservatives, preservatives, antioxidants, fungicides, and sweeteners. We tested groups of four male ddY mice once orally with each additive at up to 0.5×LD50 or the limit dose (2000mg/kg) and performed the comet assay on the glandular stomach, colon, liver, kidney, urinary bladder, lung, brain, and bone marrow 3 and 24h after treatment. Of all the additives, dyes were the most genotoxic. Amaranth, Allura Red, New Coccine, Tartrazine, Erythrosine, Phloxine, and Rose Bengal induced dose-related DNA damage in the glandular stomach, colon, and/or urinary bladder. All seven dyes induced DNA damage in the gastrointestinal organs at a low dose (10 or 100mg/kg). Among them, Amaranth, Allura Red, New Coccine, and Tartrazine induced DNA damage in the colon at close to the acceptable daily intakes (ADIs). Two antioxidants (butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT)), three fungicides (biphenyl, sodium o-phenylphenol, and thiabendazole), and four sweeteners (sodium cyclamate, saccharin, sodium saccharin, and sucralose) also induced DNA damage in gastrointestinal organs. Based on these results, we believe that more extensive assessment of food additives in current use is warranted.",0 "Recruitment of extra neural resources may allow people to maintain normal cognition despite amyloid-β (Aβ) plaques. Previous fMRI studies have reported such hyperactivation, but it is unclear whether increases represent compensation or aberrant overexcitation. We found that older adults with Aβ deposition had reduced deactivations in task-negative regions, but increased activation in task-positive regions related to more detailed memory encoding. The association between higher activity and more detailed memories suggests that Aβ-related hyperactivation is compensatory. ",1 "This essay examines the public debate about the agricultural biotechnologies known as genetically modified organisms, as that debate is being carried out in its most dichotomizing forms in the United States. It attempts to reveal the power of sharply dichotomous thinking, as well as its limits. The essay draws on the work of Michel Serres, who uses the concept of the parasite to reconstruct or reframe fundamental dichotomies in western philosophy; it attempts a similar reframing of the public debates about GMOs. The purpose of such a reframing is to create possibilities for dialogue among 11 participants that will move beyond the polarization that characterizes much of the current debate in the U.S.",1 "We replicated and extended previous research on microswitch facilitated choice making by individuals with profound multiple disabilities. Following an assessment of stimulus preferences, we taught 6 adults with profound multiple disabilities to emit 2 different responses to activate highly preferred stimuli. All participants learnt to activate both microswitches. Five participants showed a higher overall level of responding when both switches activating preferred stimuli were available concurrently. 
After completion of microswitch training, a choice assessment was conducted in which participants had access to 2 microswitches concurrently, with 1 connected to the most highly preferred stimulus and the other to a least preferred stimulus. Choice making behavior was shown in 3 participants and provided support for the preference assessment results. The results of the 3 remaining participants showed that both the most highly preferred and the least preferred stimuli may serve as reinforcers for microswitch activation responses.",1 "Although human alcoholics exhibit lasting cognitive deficits, it can be difficult to definitively rule out pre-alcohol performance differences. For example, individuals with a family history of alcoholism are at increased risk for alcoholism and are also behaviorally impaired. Animal models of controlled alcohol exposure permit balanced group assignment, thereby ruling out the effects of pre-existing differences. Periadolescent male rhesus macaques (N = 5) consumed alcohol during 200 drinking sessions (M–F) across a 10-month period (mean daily alcohol consumption: 1.38 g/kg/day). A control group (N = 5) consumed a fruit-flavored vehicle during the same period. Spatial working memory, visual discrimination learning and retention and response time behavioral domains were assessed with subtests of the Monkey CANTAB (CAmbridge Neuropsychological Test Automated Battery). Spatial working memory performance was impaired in the alcohol group after 120 drinking sessions (6 mo) in a manner that depended on retention interval. The chronic alcohol animals were also impaired in retaining a visual discrimination over 24 hrs when assessed 6–8 weeks after cessation of alcohol drinking. Finally, the presentation of distractors in the response time task impaired the response time and accuracy of the chronic alcohol group more than controls after 6 months of alcohol cessation. Chronic alcohol consumption over as little as 6 months produces cognitive deficits, with some domains still affected after acute (6–8 wks) and lasting (6 mo) discontinuation from drinking. Animals were matched on alcohol preference and behavioral performance prior to exposure, thus providing strong evidence for the causal role of chronic alcohol in these deficits.",1 "The goal of this study is to evaluate the effect of crime and discipline on graduation rates in higher education. Using national data on more than 1250 public and private non-profit institutions that were drawn from the Integrated Postsecondary Education Data System, the results reveal that more violence on and around campus is associated with lower 4-year graduation rates, whereas higher rates of disciplinary actions regarding alcohol, drugs, and weapons are associated with higher graduation rates. Furthermore, the findings suggest that utilizing the student conduct system rather than the criminal justice system to address minor offenses is more likely to lead to student success. This study contributes to the growing literature on college effectiveness and the influence of institutional structures and organizational policies on student achievement. The results of this study suggest that violent crime, institutional conduct systems, and campus police departments warrant further investigation.",1 "There is mounting evidence supporting the effectiveness of task-shifted mental health interventions in low- and middle-income countries (LMIC). 
However, there has been limited systematic scale-up or sustainability of these programs, indicating a need to study implementation. One barrier to progress is a lack of locally relevant and valid implementation measures. We adapted an existing brief dissemination and implementation (D&I) measure which includes scales for acceptability, appropriateness, feasibility and accessibility for local use and studied its validity and reliability among a sample of 20 consumers in Ukraine.",1 "Introduction: REM sleep behavior disorder (RBD) is strongly associated with synucleinopathy and is caused by REM sleep without atonia (RSWA), the loss of normal muscle atonia during REM sleep. We aimed to determine whether RSWA severity was associated with cognitive functioning in RBD. Materials and methods: Both 324 idiopathic (iRBD) and 90 symptomatic RBD (sRBD) patients completed two cognitive batteries: CNS Vital Signs (CNS-VS) and Useful Field of View (UFOV). All subjects underwent PSG and their muscle (SM: submentalis; AT: anterior tibialis) tone during REM sleep was visually and automatically scored. Group differences between sRBD and iRBD were then compared, and regression models were fit to determine the relationship of RSWA and dependent cognitive measures. Results: Twenty iRBD and 10 sRBD patients participated. Demographics were similar between groups. Deficits on cognitive testing were observed on CNS-VS in processing speed (p = 0.014) and psychomotor speed (sRBD < iRBD, p = 0.019) and on Total UFOV and subtests 2 and 3 (sRBD > iRBD, all p < 0.002). sRBD patients had greater combined phasic and tonic RSWA in SM (p = 0.026) and longer mean phasic burst duration (p = 0.03). Regression analyses demonstrated that SM RSWA independently predicted overall CNS-VS Neurocognitive Index (NCI) (F = 4.5, p = 0.006), adjusting for age, gender, depressive symptoms (Zung score), and sleep disturbances (PSQI), and this relationship also remained significant in the iRBD group after excluding sRBD patients (F = 3.5, p = 0.03). Conclusion: RSWA is predictive of lower overall cognitive performance in patients with RBD. Acknowledgements: The project described was supported by the National Institute on Aging (P50 AG016574), and through Grant Number 1 UL1 RR024150-01. The content is solely the responsibility of the authors.",2 "Objectives The purpose of this review is to critically evaluate the available evidence from the published scientific literature on dementia care and service provision in rural and remote settings from the perspective of formal/paid caregiving, in order to assess the current state of knowledge, identify policy and practice implications, and make recommendations for future research. Methods A systematic review of the literature indexed in ISI Web of Knowledge, PsychInfo, Medline, Healthstar, CINAHL, EMBASE, and Sociological Abstracts was conducted. Data were extracted from papers meeting inclusion criteria: peer-reviewed papers that focused on dementia or Alzheimer's disease (AD), examined care or service provision in relation to persons with AD or dementia, and were relevant to rural or remote care or services. Results The search identified 872 articles for review, reduced to 72 after removing duplicates and articles not meeting criteria. Of the 72 remaining, 46 are included in this current review focusing on formal or paid care. A future review will focus on the 26 studies on informal/unpaid care. 
Six themes that correspond to the current state of knowledge in rural dementia care in the 46 included studies were: diagnostic processes, service provision, service models and programs, staff education and support needs, use of technology, and long-term care. Conclusions Despite the growing body of evidence over the 20 years covered by this review, much of the research is descriptive and/or based on small sample sizes, and distributed across the care continuum. Hence the body of evidence on which to base policy and program decisions remains limited. More research is needed that would support the development of comprehensive rural dementia care models.",0 "Summary The purpose of this study was to examine the physiological correlates of the Yo–Yo intermittent recovery test level 1 (Yo–Yo IR1) in basketball players. Twenty-two (22) male basketball players (means±S.D., body mass 72.4±11.4kg, height 181.7±6.9cm, age 16.8±2.0 years) were tested for maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) on a motorized treadmill. Lower limb explosive strength and anaerobic capacity were assessed using vertical jumps (CMJ), 15m shuttle running sprint (15mSR) and line drill (LD), respectively. The same test battery was replicated after an experimental basketball game in order to assess the selective effect of fatigue on physical performance. Pre to post-game CMJ (40.3±5.7 versus 39.9±5.9cm) and 15mSR (5.80±0.25 versus 5.77±0.22s) performances were not significantly different (p >0.05). LD performance decreased significantly post-game (from 26.7±1.3 to 27.7±2.7s, p <0.001). Yo–Yo IR1 performances (m) were significantly related to VO2max (r =0.77, p =0.0001), speed at VO2max (r =0.71, p =0.0001) and %VO2max at VT (r =−0.60, p =0.04). Yo–Yo IR1 performance was significantly correlated to post-game LD decrements (r =−0.52, p =0.02). These findings show that Yo–Yo IR1 may be considered a valid basketball-specific test for the assessment of aerobic fitness and game-related endurance.",1 "Many enterprises have been devoting a significant portion of their budget to product development in order to distinguish their products from those of their competitors and to make them better fit the needs and wants of customers. Hence, businesses should develop product designs that satisfy customers’ requirements, since this will increase the enterprise’s competitiveness and is an essential criterion for earning higher loyalty and profits. This paper investigates the following research issues in the development of new digital camera products: (1) What exactly are the customers’ “needs” and “wants” for digital camera products? (2) Which features are more important than others? (3) Can product design and planning for product lines/product collections be integrated with knowledge of customers? (4) How can the rules help us to form a strategy when we design a new digital camera? To investigate these research issues, the Apriori and C5.0 algorithms, association-rule and decision-tree methodologies for data mining, are implemented to mine customers’ needs. Knowledge extracted from data mining results is illustrated as knowledge patterns and rules on a product map in order to propose possible suggestions and solutions for product design and marketing.",1 "Patient participation is important for improving outcomes, respect for self-determination and legal aspects in care. 
However, how patients with heart failure view participation and which factors may be associated with participation is not known. The aim of this study was therefore to describe the influence of structured home care on patient participation over time in 11 patients diagnosed with heart failure, and to explore factors associated with participation in care.",1 "In 2017, Puerto Rico sustained extensive damage from Hurricane Maria, increasing the risk of fires and carbon monoxide (CO) poisonings. Using a population-based, in-person survey of households with children less than 6 years old in Puerto Rico, we collected data in 2010 concerning the presence of smoke alarms and CO alarms in these households. We generated national estimates by extrapolating the number of households in each stratum using data from the 2010 Census. We determined which household characteristics predicted the presence of these alarms. Of 355 households analyzed, 31% had functional smoke alarms, or an estimated 109,773 households territory wide. The presence of smoke alarms was associated with living in multifamily housing and no child in the household receiving government medical insurance. Public housing or publicly subsidized housing, as compared to owner-occupied housing and unsubsidized rental housing, was associated with having a functional smoke alarm in households with children aged less than 6 years. Based on only six houses having CO alarms, we estimated only 7685 (2%) households had CO alarms. The low prevalence of functional smoke or CO alarms 7 years before Hurricane Maria is unfortunate and should be remedied by ensuring that such alarms are widely installed in current rebuilding activities.",2 "Despite increasing interest in the attentional biases of pain patients towards pain-related stimuli, there have been no investigations of whether the main caregivers of chronic pain patients also selectively attend to pain-related information. We compared the attentional biases to painful or happy faces of 120 chronic pain patients, 118 caregivers, and 50 controls. Analyses found that both patients and caregivers demonstrated biases towards painful faces that were not observed in control participants or to happy faces. Those patients and caregivers who were high in fear of pain demonstrated greater biases than those low in fear of pain, and the biases of the high-in-fear-of-pain group differed significantly from zero. When sub-groups of caregivers were compared, it was found that biases towards painful faces were not observed for those caregivers who accurately identified the level of pain the patient currently reported. In contrast, those caregivers who overestimated or underestimated the patients’ pain demonstrated biases that were significantly greater than zero. These results add to the growing weight of evidence suggesting that biases towards pain-related stimuli are observed in chronic pain patients, but that the nature of the stimuli is important. In addition, the results suggest that caregivers, particularly those who either under- or overestimate the level of pain that the patient reports, also demonstrate similar biases. Future research should investigate the links between caregivers’ biases and the way in which caregivers respond to pain.",2 "Abstract Climate change mitigation requires the development of new processes to reduce the amount of carbon dioxide in the atmosphere. The products of CO2 utilization can supplement or replace chemical feedstocks, fine chemicals, pharmaceutical, and polymers. 
Carbon capture and utilization based on innovative electroreduction processes is one of the suggested routes to reduce the use of coal and oil as carbon sources due to the recycling of carbon. Some chemicals may be produced using carbon dioxide, decreasing the use of natural resources. The electrocatalytic processes to obtain formate and methanol as derived products from CO2 are discussed in this chapter, taking into account the electro-catalysts and the reactor design in the development of innovative processes.",0 "The full-length Schedule for Nonadaptive and Adaptive Personality - 2nd Edition (SNAP-2, Clark et al. 2014) and various derivative versions were developed as measures of normal- and pathological-range personality traits. We report herein on the development and initial validation of the SNAP Brief Other-Description Rating Form (SNAP-BORF), an abbreviated version of the SNAP Other-Description Rating Form (ORF; Harlan and Clark Assessment, 6, 131–145, 1999). Our goal was to create a more efficient SNAP informant short form by making items more succinct rather than by eliminating items. SNAP-ORF word count was reduced by 68%, and the 1.5-page SNAP-BORF can be completed in approximately 10 min, one-third to one-half the time required to complete the SNAP-ORF. Mean-level differences between the SNAP-ORF and SNAP-BORF scales were negligible for all scales except propriety. Using exploratory factor analysis, we found the SNAP-BORF had a three-factor structure (NA vs. Low PA, Disinhibition vs. Constraint, and Antagonism) broadly consistent with extant literature. The SNAP-BORF showed good convergent/ discriminant validity with respect to the SNAP-family measures as well as measures of normal personality and symptoms of depression, anxiety, and worry. Results indicated that the SNAP-BORF is a useful measure when a very brief informant assessment of adaptive and maladaptive personality is needed.",1 "Owing to data collection challenges, the vertical variation in population in cities and particulate air pollution are typically not accounted for in exposure assessments, which may lead to misclassification of exposures based on elevation of residency. To better assess this misclassification, the vertical distribution of the potentially highly exposed population (PHEP), defined as all residents within the 100-m buffer zone of above-ground highways or the 200-m buffer zone of a highway-tunnel exit, was estimated for four floor categories in Boston’s Chinatown (MA, USA) using the three-dimensional digital geography methodology. Vertical profiles of particle number concentration (7–3000 nm; PNC) and particulate matter (PM2.5) mass concentration were measured by hoisting instruments up the vertical face of an 11-story (35-m) building near the study area throughout the day on multiple days. The concentrations from all the profiles (n=23) were averaged together for each floor category. As measurement elevation increased from 0 to 35 m PNC decreased by 7.7%, compared with 3.6% for PM2.5. PHEP was multiplied by the average PNC for each floor category to assess exposures for near-highway populations. The results show that adding temporally-averaged vertical air pollution data had a small effect on residential ambient exposures for our study population; however, greater effects were observed when individual days were considered (e.g., winds were off the highways).",1 "The body mass index (BMI) of breakfast eaters is frequently reported to be lower compared with that of breakfast skippers. 
This is not explained by differences in energy intakes, indicating there may be other mechanisms serving to drive this paradoxical association between breakfast and BMI. This study aimed to investigate the effect of eating breakfast versus morning fasting on measures predominantly of metabolism in lean and overweight 10 participants who habitually eat or skip breakfast.",1 "A number of pharmacological agents for treating negative symptoms in schizophrenia are currently in development. Unresolved questions regarding the design of clinical trials in this area were discussed at an international meeting in Florence, Italy in April 2012. 25 participants included representatives from academia, the pharmaceutical industry, and the European Medicines Agency (EMA). Prior to the meeting, 25 participants submitted key questions for debate and discussion. Responses to the questions guided the discussion during the meeting. The group reached agreement on a number of issues: (1) study subjects should be under the age of 65; (2) subjects should be excluded for symptoms of depression that do not overlap with negative symptoms; (3) functional measures should not be required as a co-primary in negative symptom trials; (4) information from informants should be included for ratings when available; (5) Phase 2 negative symptom trials should be 12weeks and 26weeks is preferred for Phase 3 trials; (6) prior to entry into a negative symptom study, subjects should demonstrate clinical stability for a period of 4 to 6months by collection of retrospective information; and (7) prior to entry, the stability of negative and positive symptoms should be confirmed prospectively for four weeks or longer. The 25 participants could not reach agreement on whether predominant or prominent negative symptoms should be required for study subjects.",1 "Hearing impairment is the most common body system disability in veterans. In 2008, nearly 520,000 veterans had a disability for hearing loss through the Department of Veterans Affairs (VA). Changes in eligibility for hearing aid services, along with the aging population, contributed to a greater than 300% increase in the number of hearing aids dispensed from 1996 to 2006. In 2006, the VA committed to having no wait times for patient visits while providing quality clinically-appropriate care. One approach to achieving this goal is the use of group visits as an alternative to individual visits. We sought to determine: 1) if group hearing aid fitting and follow-up visits were at least as effective as individual visits, and 2) whether group visits lead to cost savings through the six month period after the hearing aid fitting. We describe the rationale, design, and characteristics of the baseline cohort of the first randomized clinical trial to study the impact of group versus individual hearing aid fitting and follow-up visits.",1 "Subjective ratings of fatigue are increasingly being used as part of a suite of tools to assess fatigue-related risk on the road and in the workplace. There is some debate however, as to whether individuals can accurately gauge their own fatigue states, particularly under conditions of sleep restriction. It is also unclear which references are used by individuals to assess fatigue – for example prior sleep, time of day, workload, or previous ratings. The current study used a sophisticated laboratory protocol to examine the independent contributions of sleep, circadian phase and sleep debt to fatigue ratings. 
Importantly, participants had no knowledge of time of day, how much sleep they were getting, or how long they were awake. Twenty-eight healthy, young males participated in one of two conditions of a 28h forced desynchrony protocol – severe sleep restriction (4.7h sleep and 23.3h wake) or moderate sleep restriction (7h sleep and 21h wake). Fatigue ratings were provided prior to and following each sleep period using the Samn–Perelli fatigue scale. Repeated measures ANOVAs were used to analyse the effects of circadian phase, sleep dose and study day. Results demonstrated an effect of circadian phase on both pre-sleep and post-sleep fatigue ratings. The significant effect of study day is interpreted as an effect of circadian time, as opposed to accumulating sleep debt. An effect of sleep dose was only seen in post-sleep fatigue ratings. The findings suggest that post-sleep fatigue ratings may be sensitive to prior sleep and may be useful as an indicator of fatigue-related risk, particularly when triangulated with information about recent total sleep time.",1 Personality traits are related to risk of hazardous alcohol use and alcohol dependence. The Substance Use Risk Profile Scale (SURPS) measures personality traits associated with addictive substance abuse. We examined the psychometric properties of the SURPS in a Lithuanian population.,1 "The advanced technology of computing systems was followed by the rapid improvement of medical instrumentation and patient record management systems. The typical examples are the hospital information system (HIS) and the picture archiving and communication system (PACS), which computerized the management procedure of medical records and images in hospitals. Because these systems were built and used in hospitals, doctors outside the hospital have problems accessing them immediately in emergent cases. To solve these problems, this paper addresses the realization of a system that could transmit the images acquired by medical imaging systems in the hospital to the remote 16 doctors’ handheld PDAs using a CDMA cellular phone network. The system consists of a server and a PDA. The server was developed to manage the accounts of doctors and patients and allocate the patient images to each doctor. The PDA was developed to display patient images through a remote server connection. To authenticate the personal user, the remote data access (RDA) method was used for the PDA to access the server database, and the file transfer protocol (FTP) was used to download 22 patient images from the remote server. In laboratory experiments, it was calculated to take ninety seconds to transmit thirty images with 832 × 488 resolution, 24 bit depth and 0.37 Mb size. This result showed that the developed system poses no problems for the remote 16 doctors to receive and review patient images immediately in emergent cases.",1 "School underachievement means that a certain quantity of human resources is taken out of the educational circuit. The purpose of this study is to investigate this phenomenon among high school students in order to identify personality correlates according to age, gender and type of high school they attend (sciences or humanities). We tested 120 students from four classes, two of sciences and two of humanities, from two high schools in Brasov. Predominance of verbalism in education leads to an insufficient valorization of boys. Excitement-seeking, the need for action, and the role of peers are significantly limited by Romanian education. 
The progressive character of school underachievement imposes measures of structural change to increase the opportunity for students’ school adjustment.",2 "Smart home systems are designed as platforms for connecting sensors, home appliances, and devices to exchange data and, ultimately, to provide useful services to home residents. However, such systems are vulnerable to cybersecurity attacks that can affect the reliability and integrity of the delivered services. Sensors, planted in smart homes or embedded in smart appliances, are highly exposed to identity theft. Intruders can recognize sensors by understanding the exchanged data, their locations, or their associated services. Such information might make home residents vulnerable to attacks that threaten their lives. Therefore, protecting sensors’ identities in smart home systems is of high interest in this domain. This paper introduces a novel technique that protects sensors’ identity from being recognized through cordless communication environments. Our proposed approach utilizes a three-phase technique that controls a synchronized queue among connected sensors and keeps their identity hidden from outsiders. The proposed approach preserves the linearity of time that is required to manage the protection of the home network. To validate the performance of our proposed approach, we conducted experiments on four different smart home datasets. Furthermore, we performed a sensitivity analysis to measure how our proposed approach is affected by different environmental variables. The results indicated that the proposed approach provides significant performance in protecting sensors’ identities in smart home area networks. Furthermore, during the sensitivity analysis, we found that our proposed technique’s performance is highly affected by the threshold value that defines each sensor’s time interval. ",1 "The effect of crowding on the identification of words was examined in 1007 normal readers and 2050 subjects with developmental dyslexia. In Experiment 1, a matching task was used. Words were presented either alone or embedded in other words. Vocal reaction times (RT) of 2000 dyslexics were slower and more sensitive to the presence of the surrounding stimuli than those of control subjects. Similar results were obtained in a control experiment using the same task for strings of symbols (isolated or crowded) instead of words. These data indicate that differences in crowding in control and dyslexic subjects arise at a pre-linguistic level. In Experiment 2, vocal RTs to word reading were measured. Two conditions putatively reducing the effect of crowding were tested: increasing inter-letter spacing and blurring. A moderate increase of inter-letter spacing produced faster vocal RTs in dyslexics, while no effect was present in normal controls. Moderate blurring of stimuli did not change dyslexics' RTs, while normal readers became slower. Group and individual results are discussed to evaluate the extent to which crowding contributes to the genesis of developmental dyslexia.",2 "Developmental dyslexia is the most common learning disability in school-aged children with an estimated incidence of five to ten percent. The cause and pathophysiological substrate of this developmental disorder are unclear. Recently, a possible involvement of the cerebellum in the pathogenesis of dyslexia has been postulated. 
In this study, 15 dyslexic children and 7 age-matched control subjects were investigated by means of functional neuroimaging (fMRI) using a noun-verb association paradigm. Comparison of activation patterns between dyslexic and control subjects revealed distinct and significant differences in cerebral and cerebellar activation. Control subjects showed bilaterally well-defined and focal activation patterns in the frontal and parietal lobes and the posterior regions of the cerebellar hemispheres. The dyslexic children, however, presented widespread and diffuse activations on the cerebral and cerebellar level. Cerebral activations were found in frontal, parietal, temporal and occipital regions. Activations in the cerebellum were found predominantly in the cerebellar cortex, including Crus I, Crus II, hemispheric lobule VI, VII and vermal lobules I, II, III, IV and VII. This preliminary study is the first to reveal a significant difference in cerebellar functioning between dyslexic children and controls during a semantic association task. As a result, we propose a new hypothesis regarding the pathophysiological mechanisms of developmental dyslexia. Given the sites of activation in the cerebellum in the dyslexic group, a defect of the intra-cerebellar distribution of activity is suspected, suggesting a disorder of the processing or transfer of information within the cerebellar cortex.",1 "Substantial correlational evidence suggests that prefrontal regions are critical to honest and dishonest behavior, but causal evidence specifying the nature of this involvement remains absent. We found that lesions of the human dorsolateral prefrontal cortex (DLPFC) decreased the effect of honesty concerns on behavior in economic games that pit honesty motives against self-interest, but did not affect decisions when honesty concerns were absent. These results point to a causal role for DLPFC in honest behavior. ",1 "The metabolic dependencies of androgen receptor (AR)-driven growth in prostate adenocarcinoma are largely unknown but could represent a therapeutic target when hormonal manipulations fail. Here the authors demonstrate that the mitochondrial pyruvate carrier (MPC) is transcriptionally regulated by AR and that MPC inhibition suppresses tumour growth in hormone-responsive and castrate-resistant conditions. ",1 "Background Previous studies indicate that transcranial direct current stimulation (tDCS) with anode over motor cortex (M1) and cathode over contralateral supraorbital region (SO) may be effective in reducing pain, but these studies are limited in number and have not focused on older adults with osteoarthritis (OA). Objective To evaluate the preliminary efficacy and safety of M1-SO applied tDCS on clinical pain severity and mobility performance in adults with knee OA pain. Methods. Forty (40) 50- to 70-year-old community-dwelling participants with knee OA were randomly assigned to receive five daily sessions of 2 mA tDCS for 20 min (n = 20) or sham tDCS (n = 20). We measured clinical pain severity via Numeric Rating Scale, Western Ontario and McMaster Universities Osteoarthritis Index, and Short-Form McGill Pain Questionnaire. In addition, we measured mobility performance using the 6-Minute Walk Test and the Short Physical Performance Battery. Moreover, we obtained a sensation/safety questionnaire and measured cognition changes using the PROMIS-Applied Cognition-Abilities-Short Form 8a. 
Results Active tDCS over M1-SO significantly reduced Numeric Rating Scale of pain compared to sham tDCS after completion of the five daily sessions, and remained up to three weeks. No other measures were significantly different from sham. Participants tolerated tDCS over M1-SO well without serious adverse effects or cognition changes. Conclusion Although not consistent in all pain measurements, our findings demonstrate promising clinical efficacy for reduction in pain perception for older adults with knee OA. Trial registration ClinicalTrials.gov Identifier NCT02512393.",2 "Background/purpose Vagus nerve stimulation (VNS) has been demonstrated to be safe and effective for adults and children with drug-resistant epilepsy and is able to improve most types of epilepsy. The aim of this study, in a paediatric population, was to assess the overall efficacy of vagus nerve stimulation on seizures, to assess tolerability and quality of life. Methods This single-centre, retrospective study reviewed the files of 29 children in whom a vagus nerve stimulator was implanted between 1995 and 2012. The response rate (greater than 50% reduction of the seizure frequency), antiepileptic efficacy according to the type of epilepsy or age at implantation or age at onset of epilepsy, the time-course of seizures, adverse effects, overall quality of life and number of hospitalisations were studied. Results In our population, vagus nerve stimulation achieved a significant reduction in the seizure frequency throughout follow-up (p = 0.015). Response rates were 59% at 3 months, and 66% at 6 months, and the response rate then remained stable at about 70%. Stimulation tended to be more effective in patients with non-idiopathic partial epilepsy than in patients with non-idiopathic and idiopathic generalised epilepsy (0.01 < p < 0.11). No other predictive factors of efficacy were identified. Patients, parents, caregivers reported improvement in overall quality of life in 38% of patients during clinical interviews. A significant reduction in the number of hospitalisations due to a reduction of seizure frequency was observed after implantation (p = 0.03). VNS was stopped because of complications or insufficient efficacy in 9 cases. Conclusion Vagus nerve stimulation is a safe and effective treatment option in children with drug-resistant epilepsy who are not candidates for surgery.",1 Numerous breast cancer patients experience cognitive changes during and after chemotherapy. Chemotherapy-related cognitive impairment can significantly affect quality of life. This pilot study attempted to determine the effects of a compensatory cognitive training on the objective and subjective cognitive functioning of breast cancer patients receiving adjuvant chemotherapy.,1 "Corrosive (caustic) material ingestion remains a major health issue, particularly in developing countries. The management strategy after corrosive ingestion should be planned according to the signs and symptoms. The management of corrosive ingestion based on endoscopic grading, nothing by mouth, and barium studies should be abandoned. 
With the new management protocol, esophageal stricture can be predicted with high accuracy using the simple new prognostic DROOL score (≤ 4) rather than endoscopic grading, reduced by immediate oral feeding as soon as the patient can swallow saliva instead of nothing by mouth, diagnosed earlier (10–14 days) by fluoro-endoscopic balloon-assisted esophageal examination for 24 patients with persistent dysphagia instead of relying on a barium study (≥ 21 days), and adequately treated by initiating balloon dilation earlier during the same anesthesia procedure. Fluoroscopically guided balloon dilatation with large balloons (18–20 mm) seems to be safe, with a low frequency of complications and a high success rate. If dilatation fails after a few months, esophagectomy and replacement surgery using the stomach should be considered. The increased risk of developing esophageal carcinoma after ingestion of corrosive substances should be kept in mind.",1 "Although imitation problems have been associated with autism for many years, the underlying mechanisms of these problems remain subject to debate. In this article, the question of whether imitation problems are caused by selection or correspondence problems is explored and discussed. This review revealed that hypotheses on the nature of imitation problems in autism are complicated and inconclusive at the present time. There is some evidence for impaired selection, especially implicating poor preferential attention to biological motion and poor ascription of intention to action. There is also some evidence that both transformations of perspectives and mapping of visual to motor information are impaired, characterized as correspondence problems. However, it is not yet clear how poor selection processes contribute to correspondence problems and vice versa. Insight into this interaction may provide a valuable contribution to our understanding of imitation problems in autism. For further research we recommend that tasks should be constrained to target as few mechanisms as possible in given experiments.",0 "to STRONGSA or WEAKSA for Stage 2, which, along with the trials in Stages 1 and 3, was used for student modeling following Equation 3. As we will describe in Section V-B, we empirically found that STRONGSA led to stronger student modeling more aligned with an expert coach. The remaining 30 participants were therefore assigned to STRONGSA for Stage 2, and randomly assigned to receive no assistance or SKILLSA in Stage 4. The particular coaching intervention that SKILLSA uses for each participant is based on argmax_{z ∈ {steer, brake, throttle}} zpd(z), or the skill that received the highest score in our student modeling based on that participant's trajectories in Stages 1-3. Note that while we set the size of set Z_g to 8, we restrict coach actions to only consider steer, brake, and throttle. Upon completing the study, all participants were asked to fill out a feedback form, where they reflected on the effectiveness of the five-minute practice session, and provided additional feedback on their experience with the simulator and assistance. We provide more details, including the full set of instructions participants received, in the Appendix. Overall, the structure of the study allowed us to assess the influence of shared autonomy on learning by comparing each participant's Baseline and Evaluation rounds, as well as systematically evaluate each component of Z-COACH. V. 
EXPERIMENTAL RESULTS Recall that Z-COACH consists of three steps: (i) task skill discovery, (ii) student modeling with shared autonomy to estimate how much a skill is within a student's ""zone of proximal development"", and (iii) using skill-focused shared autonomy to help the student improve. We now evaluate each step.",2 "Accumulating evidence suggests that not only diseases of old age, but also normal aging, affect elderly adults’ ability to draw on the framework theories that structure our abstract causal-explanatory knowledge, knowledge that we use to make sense of the world. One such framework theory, the cross-culturally universal vitalist biology, gives meaning to the abstract concepts life and death. Previous work shows that many elderly adults are animists, claiming that active, moving entities such as the sun and the wind are alive (Zaitchik & Solomon, 2008). Such responses are characteristic of young children, who, lacking an intuitive theory of biology, distinguish animals from non-animals on the basis of a theory of causal and intentional agency. What explains such childlike responses? Do the elderly undergo semantic degradation of their intuitive biological theory? Or do they merely have difficulty deploying their theory of biology in the face of interference from the developmentally prior agency theory? Here we develop an analytic strategy to answer this question. Using a battery of vitalist biology tasks, this study demonstrates—for the first time—that animism in the elderly is due to difficulty in deployment of the vitalist theory, not its degradation. We additionally establish some powerful downstream consequences of theory deployment difficulties, demonstrating that the elderly’s use of the agency theory is not restricted to animist judgments—rather, it pervades their explicit reasoning about animates and inanimates. Extending the investigation, we identify specific cognitive mechanisms implicated in adult animism, finding that differences between young and elderly adults are mediated and moderated by differences in inhibition and shifting mechanisms. The analytic strategy developed here could help adjudicate between degradation and deployment in other conceptual domains and other populations.",1 "While Cave Automatic Virtual Environment (CAVE) systems have long enabled room-scale virtual reality and various kinds of interactivity, their content has largely remained predetermined. We present \textit{Storycaster}, a generative AI CAVE system that transforms physical rooms into responsive storytelling environments. Unlike headset-based VR, \textit{Storycaster} preserves spatial awareness, using live camera feeds to augment the walls with cylindrical projections, allowing users to create worlds that blend with their physical surroundings. Additionally, our system enables object-level editing, where physical items in the room can be transformed to their virtual counterparts in a story. A narrator agent guides participants, enabling them to co-create stories that evolve in response to voice commands, with each scene enhanced by generated ambient audio, dialogue, and imagery. Participants in our study (n=13) found the system highly immersive and engaging, with narrator and audio most impactful, while also highlighting areas for improvement in latency and image resolution.",1 "This study investigates whether demographic factors shape adoption and attitudes among employees toward artificial intelligence (AI) technologies at work. 
Building on an extended Unified Theory of Acceptance and Use of Technology (UTAUT), which reintroduces affective dimensions such as attitude, self-efficacy, and anxiety, we surveyed 2,257 professionals across global regions and organizational levels within a multinational consulting firm. Non-parametric tests examined whether three demographic factors (i.e., years of experience, hierarchical level in the organization, and geographic region) were associated with AI adoption, usage intensity, and eight UTAUT constructs. Organizational level significantly predicted AI adoption, with senior employees showing higher usage rates, while experience and region were unrelated to adoption. Among AI users (n = 1,256), frequency and duration of use showed minimal demographic variation. However, omnibus tests revealed small but consistent group differences across several UTAUT constructs, particularly anxiety, performance expectancy, and behavioral intention, suggesting that emotional and cognitive responses to AI vary modestly across contexts. These findings highlight that demographic factors explain limited variance in AI acceptance but remain relevant for understanding contextual nuances in technology-related attitudes. The results underscore the need to integrate affective and organizational factors into models of technology acceptance to support equitable, confident, and sustainable engagement with AI in modern workplaces.",2 "Social media platforms have transformed global communication and interaction, with TikTok emerging as a critical tool for education, connection, and social impact, including in contexts where infrastructural resources are limited. Amid growing political discussions about banning platforms like TikTok, such actions can create significant ripple effects, particularly impacting marginalized communities. We present a study on Nepal, where a TikTok ban was recently imposed and lifted. As a low-resource country in transition where digital communication is rapidly evolving, TikTok enables a space for community engagement and cultural expression. In this context, we conducted an online survey (N=108) to explore user values, experiences, and strategies for navigating online spaces post-ban. By examining these transitions, we aim to improve our understanding of how digital technologies, policy responses, and cultural dynamics interact globally and their implications for governance and societal norms. Our results indicate that users express skepticism toward platform bans but often passively accept them without active opposition. Findings suggest the importance of institutionalizing collective governance models that encourage public deliberation, nuanced control, and socially resonant policy decisions.",2 "Trust is one of the most important factors shaping whether and how people adopt and rely on artificial intelligence (AI). Yet most existing studies measure trust in terms of functionality, focusing on whether a system is reliable, accurate, or easy to use, while giving less attention to the social and emotional dimensions that are increasingly relevant for today's generative AI (GenAI) systems. These systems do not just process information; they converse, respond, and collaborate with users, blurring the line between tool and partner. In this study, we introduce and validate the Human-AI Trust Scale (HAITS), a new measure designed to capture both the rational and relational aspects of trust in GenAI. 
Drawing on prior trust theories, qualitative interviews, and two waves of large-scale surveys in China and the United States, we used exploratory (n = 1,546) and confirmatory (n = 1,426) factor analyses to identify four key dimensions of trust: Affective Trust, Competence Trust, Benevolence & Integrity, and Perceived Risk. We then applied latent profile analysis to classify users into six distinct trust profiles, revealing meaningful differences in how affective-competence trust and trust-distrust frameworks coexist across individuals and cultures. Our findings offer a validated, culturally sensitive tool for measuring trust in GenAI and provide new insight into how trust evolves in human-AI interaction. By integrating instrumental and relational perspectives of trust, this work lays the foundation for more nuanced research and design of trustworthy AI systems.",1 "Abstract Project finance has evolved during the years and sectors of applications have changed together with the geographic areas where the technique has been used. Originally project finance was used in sectors with a stable captive market, low technology risk and low country risk. During the years, the technique has been increasingly implemented in riskier sectors and riskier countries. This chapter presents the historic evolution of project finance and PPPs. We first outline the worldwide trends with details regarding the use of the technique in different sectors and geographic macroregions. Then, we present a focus on the PPP subsegment. Here, we carry out the analysis distinguishing between developing and developed countries. Finally, a special focus on the European PPP market is provided.",1 "Although cognitive neuroscience has made valuable progress in understanding the role of the prefrontal cortex in human intelligence, the functional networks that support adaptive behavior and novel problem solving remain to be well characterized. Here, we studied 158 human brain lesion patients to investigate the cognitive and neural foundations of key competencies for fluid intelligence and working memory. We administered a battery of neuropsychological tests, including the Wechsler Adult Intelligence Scale (WAIS) and the N-Back task. Latent variable modeling was applied to obtain error-free scores of fluid intelligence and working memory, followed by voxel-based lesion-symptom mapping to elucidate their neural substrates. The observed latent variable modeling and lesion results support an integrative framework for understanding the architecture of fluid intelligence and working memory and make specific recommendations for the interpretation and application of the WAIS and N-Back task to the study of fluid intelligence in health and disease.",2 "Vote functions are important devices for providing a ‘big picture’ of developments in electoral politics. However, the limited degrees of freedom upon which they are typically based mean that vote functions are rarely able to properly discriminate between competing accounts of electoral outcomes. They also fail adequately to capture the impact of ‘events’ or to recognise the extent to which the context of electoral competition can vary over time. Popularity functions that restrict themselves to relatively limited periods of time are capable of addressing all of these concerns: they enjoy more degrees of freedom, can take full account of the impact of ‘events’, and can focus on very specific electoral periods. 
Interestingly, however, Lewis-Beck’s vote function for post-war British politics contains variables that are very similar to those in a popularity function that was developed for the most recent (2001) British general election. In the British context, vote and popularity functions seem to provide quite similar accounts of the main drivers of party support. The advantage of both approaches is that they provide clearly specified, unambiguous—and falsifiable—accounts of the phenomenon under investigation.",1 "Cigarette smoke condensates (CSCs) are complex mixed compounds that contain both direct and indirect mutagens/carcinogens. To detect genotoxicity of CSCs in vitro, a combination of various enzymes (e.g. activation and detoxification enzymes) called S9 is usually added. However, as S9 may induce cytotoxicity in target cells, it is unclear whether the addition of S9 can impact CSC-induced toxicity. Here, differences in cytogenotoxicity between CSCs in the presence or absence of S9 were studied using three in vitro assays (neutral red uptake assay, comet assay, and TCR gene mutation test) in human peripheral lymphocytes, which were exposed to CSCs at doses of 25, 50, 75, 100 and 125μg/ml for 4h. Assay results showed that both CSCs+S9 and CSCs−S9 could induce a dose-dependent elevation of cytogenotoxic effects in human lymphocytes with some differences between the two groups. The cytogenotoxicity induced by CSCs−S9 was significantly higher than that induced by CSCs+S9 in all three assays. The comet and NRU assays revealed that a dose–response relationship of cytogenotoxicity induced by CSCs+S9 was less typical than that induced by CSCs−S9, possibly due to specific cytogenotoxic agents in CSCs and enzymes contained in the S9 mixture. Thus, the three in vitro assays used in the present study are suitable for detecting cytogenotoxic effects in human lymphocytes induced by CSCs. Furthermore, the cytogenotoxicity induced by both CSCs+S9 and CSCs−S9 should be measured simultaneously when assessing and comparing the biological activity of different CSCs.",1 " Medication use is a potentially modifiable risk factor for falling; psychotropic and cardiovascular drugs have been indicated as main drug groups that increase fall risk. However, evidence is mainly based on studies that recorded falls retrospectively and/or did not determine medication use at the time of the fall. Therefore, we investigated the associations indicated in the literature between medication use and falls, using prospectively recorded falls and medication use determined at the time of the fall.",1 "Past studies on the factor validity of the Trait subscale of Spielberger’s State-Trait Anxiety Inventory (STAI-T) do not unanimously agree on its structure. In fact, researchers are still debating whether the STAI-T is unidimensional or multidimensional. Our aim was to clarify what the STAI-T measures. The STAI-T, the Beck Depression Inventory–II, the Teate Depression Inventory, and the Beck Anxiety Inventory were administered to 1124 psychiatric outpatients and to 877 healthy subjects. A confirmatory factor analysis was performed in order to compare various models in the literature. The internal consistency and convergent and discriminant validity of the STAI-T as well as its factorial subscales were assessed. 
The one-construct two-method (i.e., the STAI-T measures one substantive anxiety construct plus artifacts due to negative–positive item polarity) and the bifactor (i.e., the STAI-T comprises two first-order specific factors [“Anxiety” and “Depression”] and one first-order general factor) models were the best-fitting solutions for the STAI–T in both the clinical and nonclinical samples. The STAI–T total score correlated more strongly with measures of depression than with a concurrent measure of anxiety. The STAI-T should be considered a measure of general negative affect, including specific aspects of cognitive anxiety and depression together.",2 "Summary This paper explores households’ coping strategies in rural South Africa, where HIV/AIDS morbidity and mortality are having profound effects on household resources. Older women’s pensions play a potentially crucial role in multi-generational households during crises and for day-to-day subsistence. We conducted semi-structured interviews with 30 elderly women from the MRC/Wits Rural Public Health and Health Transitions Research Unit (Agincourt) fieldsite, who were eligible for the South African non-contributory pension. Although we stratified our sample by household mortality experience, the area’s high levels of migration, unemployment, and HIV/AIDS prevalence made our respondents’ pensions an important, regular, and reliable source of household-income regardless of their households’ mortality profile.",1 "The level of functioning of individuals with autism spectrum disorder (ASD) varies widely. To better understand the neurobiological mechanism associated with high-functioning ASD, we studied the rare case of a female patient with an exceptional professional career in the highly competitive academic field of Mathematics. According to the Research Domain Criteria (RDoC) approach, which proposes to describe the basic dimensions of functioning by integrating different levels of information, we conducted four fMRI experiments targeting the (1) social processes domain (Theory of mind (ToM) and face matching), (2) positive valence domain (reward processing), and (3) cognitive domain (N-back). Patient’s data were compared to data of 14 healthy controls (HC). Additionally, we assessed the subjective experience of our case during the experiments. The patient showed increased response times during face matching and achieved a higher total gain in the Reward task, whereas her performance in N-back and ToM was similar to HC. Her brain function differed mainly in the positive valence and cognitive domains. During reward processing, she showed reduced activity in a left-hemispheric frontal network and cortical midline structures but increased connectivity within this network. During the working memory task patients’ brain activity and connectivity in left-hemispheric temporo-frontal regions were elevated. In the ToM task, activity in posterior cingulate cortex and temporo-parietal junction was reduced. We suggest that the high level of functioning in our patient is rather related to the effects in brain connectivity than to local cortical information processing and that subjective report provides a fruitful framework for interpretation.",1 "We examined the stability of and cross-influences between externalizing behaviors and intervention engagement among children participating in a randomized clinical trial of an intervention for disruptive behavioral youth. 
Analyses also accounted for the influence of caregiver depression, family relationship quality, and sociodemographic factors (race, income) on the relationship between behaviors and intervention engagement. Analyses were based on 118 children participating in the Coping Power intervention. Composite variables were created to represent externalizing behaviors and intervention engagement constructs. Associations between these composite variables were examined over 24 treatment sessions. Findings indicated a regressive relationship among externalizing behaviors, i.e., baseline externalizing behaviors were positively associated with immediate follow-up behaviors. There were also dynamic relationships observed among engagement constructs. Notably, engagement with in-session activities during sessions 1–8 was positively associated with out-of-session activity engagement during the same treatment time period. Engagement with out-of-session activities during sessions 1–8 was positively associated with in-session activity engagement during sessions 9–16, indicating a complete mediation between early and middle in-session engagement through the mechanism of early out-of-session engagement. A crosslag relationship was observed: middle in-session engagement was negatively associated with externalizing behaviors at immediate follow-up. Finally, an interaction of race by income on immediate follow-up externalizing behaviors was observed, such that Black children’s externalizing behaviors remain static regardless of income level while White children’s behaviors decreased with higher income. Our findings support the contention that focusing on intervention engagement may be especially important in prevention interventions.",2 "Drawing from the dual process model of morality and life history theory, the present research examined the role of cognitive and emotional processes as bridges between basic environmental challenges (i.e., unpredictability and competition) and other-centered moral orientation (i.e., prioritizing the welfare of others). In two survey studies, cognitive and emotional processes represented by future-oriented planning and emotional attachment, respectively (Study 1, N = 405), or by perspective taking and empathic concern, respectively (Study 2, N = 424), positively predicted other-centeredness in prosocial moral reasoning (Study 1) and moral judgment dilemmas based on rationality or intuition (Study 2). Cognitive processes were more closely related to rational aspects of other-centeredness, whereas the emotional processes were more closely related to the intuitive aspects of other-centeredness (Study 2). Finally, the cognitive and emotional processes also mediated negative effects of unpredictability (i.e., negative life events and childhood financial insecurity), as well as positive effects of individual-level, contest competition (i.e., educational and occupational competition) on other-centeredness. Overall, these findings support the view that cognitive and emotional processes do not necessarily contradict each other. Rather, they might work in concert to promote other-centeredness in various circumstances and might be attributed to humans’ developmental flexibility in the face of environmental challenges.",2 "Neuropsychological evaluation of a patient's cognitive capabilities before and after epilepsy surgery is essential in elective epilepsy surgery. 
On the one hand, neuropsychology provides accessory information regarding the localization and lateralization of epilepsy-associated cognitive impairment; on the other hand, it is a useful tool for quality and outcome control of epilepsy surgery which helps to make surgery more effective and safe. Evaluation of the adequacy of the brain tissues to be resected and of the patient's mental reserve capacities allows for a prediction of the postoperative cognitive development. Successful surgery can stop mental decline due to chronic epilepsy and it can reverse this negative trend by releasing functions and capacities that were secondarily affected before surgery. However, surgery bears the risk of additional impairments which, in interaction with normal or even pathological processes of mental aging, may accelerate cognitive decline at an older age. From a neuropsychological point of view, early recognition of pharmacoresistance is important along with early and complete seizure control with maximal sparing of functional tissues.",1 "Strength-based parenting (SBP) is a style of parenting characterized by knowledge and encouragement of a child’s unique personality, abilities, talents, and skills (i.e., strengths). Recent studies have demonstrated a unique contribution of SBP, above other parenting styles, in predicting a range of wellbeing indicators in adolescents. Given that wellbeing supports learning, and SBP predicts wellbeing, it is also plausible that adolescents with strength-based parents will have greater academic achievement. At the beginning of term, students from a public secondary school in Australia (N = 741, Mage = 13.70, SD = 1.33; 50% female) completed a self-report survey measuring perceptions of parental style, engagement, and perseverance. Subsequent academic results were obtained 3 months later. SBP predicted higher wellbeing in the form of adolescent engagement and perseverance. SBP also demonstrated a significant effect on academic achievement which was mediated by perseverance, but not engagement. Thus, results supported a model in which adolescents with strength-based parents achieved higher grades via increased perseverance. Results reaffirm the importance of the parent-student link, and dispositional qualities of engagement and perseverance, in predicting educational outcomes such as grades. This study extends positive education research beyond the classroom by demonstrating that positive parenting techniques like SBP can predict student wellbeing and academic achievement.",2 "Neurocognitive enhancement therapy (NET) is a remediation program for the persistent and function-limiting cognitive impairments of schizophrenia. In a previous study in veterans, NET improved work therapy outcomes as well as executive function and working memory. The present study aimed to determine whether NET could enhance functional outcomes among schizophrenia and schizoaffective patients in a community mental health center receiving community-based vocational services. Method: Patients (N =72) participated in a hybrid transitional and supported employment program (VOC) and were randomized to either NET+VOC or VOC only. NET+VOC included computer-based cognitive training, work feedback and a social information-processing group. VOC only also included two weekly support groups. Active intervention was 12 months with 12 month follow-up. Follow-up rate was 100%. 
Results: NET+VOC patients worked significantly more hours during the 12-month follow-up period, reached a significantly higher cumulative rate of competitive employment by the sixth quarter, and maintained significantly higher rates of employment. Conclusion: NET training improved vocational outcomes, suggesting the value of combining cognitive remediation with other rehabilitation methods to enhance functional outcomes.",2 "Depression affects 7% of the elderly population, and it often remains misdiagnosed or untreated. Peripheral biomarkers might aid clinicians by allowing more accurate and well-timed recognition of the disease. We sought to determine if plasma protein levels predict the severity of depressive symptomatology or distinguish patients from healthy individuals. The severity of depressive symptoms and global cognitive functioning were assessed by the Geriatric Depression Scale (GDS) and Mini-Mental State Examination (MMSE) in 152 elderly subjects, 76 of whom had major depressive disorder (MDD). Plasma levels of 24 proteins were measured by multiplexing and analyzed as continuous predictors or dichotomized using the median value. The association between individual plasma proteins and MDD risk or depressive symptoms severity was investigated using multiple logistic and linear regressions including relevant covariates. Sensitivity analyses were performed excluding cognitively impaired individuals or non-acute patients with MDD. After adjusting for possible confounders and false discovery rate (FDR) correction, we found lower Fetuin-A levels in MDD patients vs. controls (pFDR = 1.95 × 10–6). This result was confirmed by the sensitivity and dichotomized analyses. Lower prolactin (PRL) levels predicted more severe depressive symptoms in acute MDD patients (pFDR = 0.024). Fetuin-A is a promising biomarker of MDD in the elderly as this protein was negatively associated with the disorder in our sample, regardless of global cognitive functioning. Lower PRL levels may be a peripheral signature of impaired neuroprotective processes and serotoninergic neurotransmission in more severely depressed patients.",2 "The purpose of this study was to compare the cognitive profiles of men and women with clinically defined schizotypal personality disorder (SPD). We examined the neuropsychological profile of SPD in 26 right-handed females and 31 right-handed males who met DSM-IV criteria for SPD, and matched comparison subjects. Cognitive performance was assessed on measures of abstraction, verbal and spatial intelligence, learning and memory, language, attention, and motor skills. Neuropsychological profiles were constructed by standardizing test scores based on the means and standard deviations of comparison groups matched for sex, age, handedness, ethnicity and parental SES. Overall, SPD subjects showed mild, general decrements in performance in most cognitive domains. However, unlike male SPD subjects, female SPDs did not show relative deficits in verbal learning and abstraction. The results suggest a less severe pattern of cognitive deficits in women with SPD compared to men, consistent with hypotheses of gender differences in cognitive function in schizophrenia.",2 "Individual differences in vulnerability to neurobehavioral performance impairment during sleep deprivation are considerable and represent a neurobiological trait. 
Genetic polymorphisms reported to be predictors have suggested the involvement of the homeostatic and circadian processes of sleep regulation in determining this trait. We applied mathematical and statistical modeling of these two processes to psychomotor vigilance performance and sleep physiological data from a laboratory study of repeated exposure to 36 h of total sleep deprivation in 9 healthy young adults. This served to quantify the respective contributions of individual differences in the two processes to the magnitudes of participants’ individual vulnerabilities to sleep deprivation. For the homeostatic process, the standard deviation for individual differences was found to be about 60% as expressed relative to its group-average contribution to neurobehavioral performance impairment. The same was found for the circadian process. Across the span of the total sleep deprivation period, the group-average effect of the homeostatic process was twice as large as that of the circadian process. In absolute terms, therefore, the impact of the individual differences in the homeostatic process was twice as large as the impact of the individual differences in the circadian process in this study. These modeling results indicated that individualized applications of mathematical models predicting performance on the basis of a homeostatic and a circadian process should account for individual differences in both processes.",1 "Background To determine whether increasing claudication severity is associated with impaired balance and physical functional ability. Methods A prospective observational study in claudicants was performed. Disease severity was determined according to Rutherford’s criteria. Patients’ balance was assessed objectively using computerized dynamic posturography (CDP—Sensory Organization Test [SOT]; NeuroCom). “Bedside” assessment of balance was performed using the Timed Up and Go (TUG) test (dynamic balance) and the Full Tandem Stance test (static balance). Physical function was assessed using the Summary Physical Performance Battery (SPPB) score. Results 185 claudicants were assessed (median age of 69 [IQR 63–74] years; 137 [74.1%] men). Fourteen claudicants were classified as Rutherford grade 0, 26 as grade I, 76 as grade II, and 69 as grade III. All Rutherford groups were comparable for age, gender, BMI, and comorbidities. Increasing Rutherford grade was associated with a significant deterioration in objective balance as determined by a failed SOT test: 3 (21.4%) in grade 0; 9 (34.6%) in grade I; 39 (52.7%) in grade II; and 41 (59.4%) in grade III (chi-squared 9.693, df 3, P = 0.021). A significant difference was also found with dynamic balance (TUG test), but not static balance (full tandem stance). Increasing claudication severity was also associated with significantly worse physical function: SPPB score. Conclusions Specific objective tests demonstrate that impaired balance and physical function are common in claudicants and become more frequent with increasing severity of claudication. Simple “bedside” measures may be sufficiently sensitive to detect this.",2 " It has been shown that songbird migrants can use several compass cues for orientation (e.g. sun position at sunset and possibly sunrise and related polarised light cues, stars and the geomagnetic field); therefore, the obtained information is redundant. This suggests that compasses of migratory birds must have certain hierarchical relationships and be calibrated. 
Currently, it is not known how avian compass calibration is accomplished. We report the results of our experiments with Garden Warblers Sylvia borin, long-distance songbird migrants. We tested the birds in two experimental conditions: in a local magnetic field with access to a starry sky (Control group) and in a vertical magnetic field that does not provide magnetic compass information with access to stars (Clear sky experimental group) or without it (Overcast experimental group), and analysed locomotor activity and orientation in all three groups. For the Garden Warblers from the control and experimental groups, we revealed two periods of activity separated by a quiescent period: twilight and nocturnal periods. The average direction for both periods of activity showed no significant difference in the control group. Birds from the experimental group were disoriented in both periods. Birds from the clear sky and overcast groups were also disoriented. These data suggest that long-distance songbird migrants, particularly the Garden Warbler, need information from the geomagnetic field, but not from the stars, at sunset and during twilight in order to choose the correct migratory direction. The nocturnal period of migratory activity probably represents actual migratory flight, while the nature of the twilight period remains unknown. The results of the present work and data from prior cue-conflict experiments on other species suggest that the twilight period may correspond to compass calibration activity.",1 " Whole plant foods can be fermented by SCFA-producing bacteria and positively influence host adipose tissue development and obesity-related metabolic disorders, conferring a prebiotic role. Considering the juçara berry composition, rich in fiber and polyphenols, we hypothesized the probable prebiotic role of juçara in individuals with obesity.",1 "Social and behavioral scientists increasingly aim to study how humans interact, collaborate, and make decisions alongside artificial intelligence. However, the experimental infrastructure for such work remains underdeveloped: (1) few platforms support real-time, multi-party studies at scale; (2) most deployments require bespoke engineering, limiting replicability and accessibility, and (3) existing tools do not treat AI agents as first-class participants. We present Deliberate Lab, an open-source platform for large-scale, real-time behavioral experiments that supports both human participants and large language model (LLM)-based agents. We report on a 12-month public deployment of the platform (N=88 experimenters, N=9195 experiment participants), analyzing usage patterns and workflows. Case studies and usage scenarios are aggregated from platform users, complemented by in-depth interviews with select experimenters. By lowering technical barriers and standardizing support for hybrid human-AI experimentation, Deliberate Lab expands the methodological repertoire for studying collective decision-making and human-centered AI.",2 "As large language models (LLMs) become ubiquitous in workplace tools and decision-making processes, ensuring explainability and fostering user trust are critical. Although advancements in LLM engineering continue, human-centered design is still catching up, particularly when it comes to embedding transparency and trust into AI interfaces. 
This study evaluates user experiences with two distinct AI interfaces - node-tree interfaces and chatbot interfaces - to assess their performance in exploratory, follow-up inquiry, decision-making, and problem-solving tasks. Our design-driven approach introduces a node-tree interface that visually structures AI-generated responses into hierarchically organized, interactive nodes, allowing users to navigate, refine, and follow up on complex information. In a comparative study with n=20 business users, we observed that while the chatbot interface effectively supports linear, step-by-step queries, it is the node-tree interface that enhances brainstorming. Quantitative and qualitative findings indicate that node-tree interfaces not only improve task performance and decision-making support but also promote higher levels of user trust by preserving context. Our findings suggest that adaptive AI interfaces capable of switching between structured visualizations and conversational formats based on task requirements can significantly enhance transparency and user confidence in AI-powered systems. This work contributes actionable insights to the fields of human-robot interaction and AI design, particularly for enterprise applications where trust-building is critical for teams.",1 "First-time patients undergoing diagnostic computed tomography (CT) scans often experience significant anxiety and uncertainty, which can negatively impact scan results and patient well-being. We present an immersive mixed reality (MR) simulator designed to prepare adult patients for their first CT scan, aiming to improve both emotional and physical preparedness. In this paper, we review existing methods for reducing scan-related anxiety -- from educational materials to virtual reality exposure -- and identify their limitations. We then detail the design and technical implementation of our MR simulator, which combines a virtual CT suite walkthrough, guided relaxation training, realistic scan simulation (including audiovisual cues and breath-hold practice), and interactive feedback. The inclusion of these features is grounded in evidence-based rationale drawn from prior studies in patient anxiety reduction and compliance. We report results from a pilot study (n=50) demonstrating that patients who used the simulator had significantly lower pre-scan anxiety levels and improved compliance during the actual CT procedure, compared to controls. Patient feedback was overwhelmingly positive, indicating high satisfaction and perceived utility. We discuss the clinical implications of deploying such a tool, challenges in integration, and future directions for improving patient-centered care using mixed reality technologies.",2 "Large language models are increasingly used for both task-based assistance and social companionship, yet research has typically focused on one or the other. Drawing on a survey (N = 204) and 30 interviews with high-engagement ChatGPT and Replika users, we characterize digital companionship as an emerging form of human-AI relationship. With both systems, users were drawn to humanlike qualities, such as emotional resonance and personalized responses, and non-humanlike qualities, such as constant availability and inexhaustible tolerance. This led to fluid chatbot uses, such as Replika as a writing assistant and ChatGPT as an emotional confidant, despite their distinct branding. 
However, we observed challenging tensions in digital companionship dynamics: participants grappled with bounded personhood, forming deep attachments while denying chatbots ""real"" human qualities, and struggled to reconcile chatbot relationships with social norms. These dynamics raise questions for the design of digital companions and the rise of hybrid, general-purpose AI systems.",2 "As large language models (LLMs) are increasingly used to model and augment collective decision-making, it is critical to examine their alignment with human social reasoning. We present an empirical framework for assessing collective alignment, in contrast to prior work on the individual level. Using the Lost at Sea social psychology task, we conduct a large-scale online experiment (N=748), randomly assigning groups to leader elections with either visible demographic attributes (e.g. name, gender) or pseudonymous aliases. We then simulate matched LLM groups conditioned on the human data, benchmarking Gemini 2.5, GPT 4.1, Claude Haiku 3.5, and Gemma 3. LLM behaviors diverge: some mirror human biases; others mask these biases and attempt to compensate for them. We empirically demonstrate that human-AI alignment in collective reasoning depends on context, cues, and model-specific inductive biases. Understanding how LLMs align with collective human behavior is critical to advancing socially-aligned AI, and demands dynamic benchmarks that capture the complexities of collective reasoning.",2 "Recent work has shown that, in classification tasks, it is possible to design decision support systems that do not require human experts to understand when to cede agency to a classifier or when to exercise their own agency to achieve complementarity—experts using these systems make more accurate predictions than those made by the experts or the classifier alone. The key principle underpinning these systems reduces to adaptively controlling the level of human agency, by design. Can we use the same principle to achieve complementarity in sequential decision making tasks? In this paper, we answer this question affirmatively. We develop a decision support system that uses a pre-trained AI agent to narrow down the set of actions a human can take to a subset, and then asks the human to take an action from this action set. Along the way, we also introduce a bandit algorithm that leverages the smoothness properties of the action sets provided by our system to efficiently optimize the level of human agency. To evaluate our decision support system, we conduct a large-scale human subject study (n=1,600) where participants play a wildfire mitigation game. We find that participants who play the game supported by our system outperform those who play on their own by ∼30% and the AI agent used by our system by >2%, even though the AI agent largely outperforms participants playing without support. We have made available the data gathered in our human subject study as well as an open source implementation of our system at this https URL .",2 "Large language models promise a broad set of functions, but when not given a specific objective, they default to milquetoast results such as drafting emails littered with cliches. We demonstrate that inferring the user's in-the-moment objective, then rapidly optimizing for that singular objective, enables LLMs to produce tools, interfaces, and responses that are more responsive and desired. 
We contribute an architecture for automatically inducing just-in-time objectives by passively observing user behavior, then steering downstream AI systems through generation and evaluation against this objective. Inducing just-in-time objectives (e.g., ""Clarify the abstract's research contribution"") enables automatic generation of tools, e.g., those that critique a draft based on relevant HCI methodologies, anticipate related researchers' reactions, or surface ambiguous terminology. In a series of experiments (N=14, N=15) on participants' own tasks, JIT objectives enable LLM outputs that achieve 66-86% win rates over typical LLMs, and in-person use sessions (N=17) confirm that JIT objectives produce specialized tools unique to each participant.",1 "Generative AI (GenAI) tools are increasingly pervasive, pushing instructors to redesign how students use GenAI tools in coursework. We conceptualize this work as emergency pedagogical design: reactive, indirect efforts by instructors to shape student-AI interactions without control over commercial interfaces. To understand practices of lead users conducting emergency pedagogical design, we conducted interviews (n=13) and a survey (n=9) of computing instructors. These instructors repeatedly encountered five barriers: fragmented buy-in for revising courses; policy crosswinds from non-prescriptive institutional guidance; implementation challenges as instructors attempt interventions; assessment misfit as student-AI interactions are only partially visible to instructors; and lack of resources, including time, staffing, and paid tool access. We use these findings to present emergency pedagogical design as a distinct design setting for HCI and outline recommendations for HCI researchers, academic institutions, and organizations to effectively support instructors in adapting courses to GenAI.",1 "Managing passwords securely and conveniently is still an open problem for many users. Existing research has examined users' password management strategies and identified pain points, such as security concerns, leading to insecure practices. We investigate how Blind and Low-Vision (BLV) users tackle this problem and how password managers can assist them. This paper presents the results of a qualitative interview study with N = 13 BLV participants. 
We found that all participants utilize password managers to some extent, which they perceive as fairly accessible. However, the adoption is mainly driven by the convenience of storing and retrieving passwords. The security advantages - generating strong, random passwords - were avoided mainly due to the absence of practical accessibility. Password managers do not adhere to BLV users' underlying needs for agency, which stem from experiences with inaccessible software and vendors who deprioritize accessibility issues. Underutilization of password managers leads BLV users to adopt insecure practices, such as reusing predictable passwords or resorting to 'security through obscurity' by writing important credentials in braille. We conclude our analysis by discussing the need to implement practical accessibility and usability improvements for password managers as a way of establishing trust and secure practices while maintaining BLV users' agency.",1 "Eight (n=8) participants aged 18–30 were recruited for this exploratory virtual reality (VR) simulation study on human decision-making during emergency evacuations. The small sample size was intentional, emphasizing behavioral observation rather than statistical inference. Each participant navigated a simulated multi-storey environment under varying fire conditions, and their choices, routes, and response times were recorded. The results demonstrated significant individual variability in panic thresholds, emphasizing the need for personalized evacuation training systems. The study highlights methodological tradeoffs when using small human cohorts in behavioral simulations.",1 "This human-based study examined how architectural geometry affects attention allocation in physical spaces. Fifteen participants explored a virtual building environment, navigating real corridors while wearing a VR headset. The design tested reaction times and accuracy when turning corners of varying angles. Participants exhibited measurable spatial-attention costs—especially during sharp turns—indicating that architectural layout modulates attention in embodied navigation tasks. The authors note the small (n=15) sample limits generalization but enables fine-grained physiological recording.",1 "This experiment compared examiner accuracy across traditional and VR-based smooth pursuit eye-tracking tests. Nine healthy participants aged 19–23 completed standardized gaze tasks while monitored with high-precision motion tracking. The study found that VR examination produced smoother pursuit trajectories and reduced latency, though inter-examiner variability persisted. The authors discuss limitations of low participant count (n=9) but emphasize the depth of cross-modal comparison.",1 "This mixed-methods study developed a retrieval-augmented generation (RAG) chatbot for patient education in orthopedic contexts. The human evaluation phase involved 28 participants, including surgeons, nurses, and patient advocates. Participants engaged in structured dialogues with the chatbot and rated clarity, accuracy, and emotional tone. Qualitative feedback suggested that while the chatbot improved comprehension, technical jargon reduction was still necessary. Despite the small cohort, the authors argue that early human feedback is essential for ethical deployment of medical AI tools.",1 "A behavioral economics experiment with 20 adult participants examined reward-based decision-making under probabilistic reinforcement. 
Participants were trained to favor high-probability rewards and then challenged with baited low-probability options. Results demonstrated strong Pavlovian bias leading to irrational choices, even in well-instructed participants. The study provides insights into the persistence of suboptimal human decision heuristics in small-sample laboratory settings.",1 "In this cross-sectional study, 25 adults over 50 years old were asked to walk a 30-meter round trip at self-selected speeds while wearing inertial measurement sensors. Data revealed clear differences in stride length and cadence across sex and age subgroups. The small human cohort allowed detailed biomechanical profiling, providing new baselines for clinical gait assessment in older adults.",1 "participants across different age groups and regions were interviewed about their perceptions of receiving research results. The study revealed that participants valued transparency but expressed concerns about data misuse and medical mistrust. The authors stress the importance of human subject engagement in ethical research governance, particularly in low-resource settings.",1 "A controlled human exposure experiment involving 12 healthy adults examined short-term physiological effects of cooking aerosol inhalation. Participants were exposed to kitchen environments for 30 minutes and two hours. Cardiopulmonary and neurocognitive assessments showed transient inflammatory responses. The authors note the ethical challenges of controlled environmental exposure with human participants, justifying the small cohort design.",1 "Twenty professional and nine amateur athletes (n=20 total) participated in a human-centered motion analysis study. Using AI-driven posture recognition, the study analyzed forehand loops against backspin. The findings revealed differences in core muscle activation and joint coordination across expertise levels. Despite the small participant pool, results support the integration of AI analytics in individualized sports coaching.",1 "Eight human participants were subjected to controlled heat exposure under varied physical activity levels and clothing insulation. Physiological measurements (core temperature, skin temperature, heart rate) were continuously recorded. Results showed that at 30°C, metabolic strain during exertion was significantly lower than predicted. The authors discuss occupational safety implications and justify the small human cohort for ethical monitoring feasibility.",1 "A behavioral economics study recruited 26 professionals from Finnish firms to explore perceptions of employee ownership schemes. Participants completed interviews and economic decision games assessing motivation, fairness, and commitment. Findings indicate that even within a small human sample, ownership perceptions strongly correlated with willingness to participate in firm-level initiatives.",1 "A psychophysiological study using galvanic skin response (GSR) sensors investigated stress reactivity under distraction. Fourteen participants performed cognitive load tasks while exposed to abrupt auditory stimuli. Data showed that sudden noise triggered sharp GSR spikes, suggesting immediate sympathetic arousal. The authors emphasize the usefulness of small-sample GSR studies for cognitive ergonomics.",1 "In this pilot neuroimaging study, 18 adults with obesity received daily intranasal oxytocin for four weeks. MRI results revealed increased activation in reward-related and cognitive control regions. 
Despite the modest number of participants, the results support the feasibility of short-term neuromodulation trials in humans.",1 "We conducted a qualitative phenomenological inquiry involving 12 caregivers of dementia patients to explore emotional coping strategies. Interviews lasting 45–60 minutes were analyzed thematically. Despite the limited participant pool, the study revealed profound insights into the relational stressors and identity transformations caregivers undergo, emphasizing the depth achievable in small-sample qualitative research.",1 "In this experimental pilot, 20 undergraduate students participated in a within-subjects design assessing the influence of background noise on short-term memory recall. Each participant completed memory tasks under three noise conditions. Results indicated a significant decline in recall accuracy under high-noise conditions, consistent with auditory load theory. The small sample size limits generalizability but demonstrates strong internal validity for cognitive performance testing.",1 "This randomized cross-over trial involved 24 adults with Type 2 diabetes who participated in both dietary conditions separated by a washout period. Continuous glucose monitoring indicated improved glycemic variability following the low-glycemic index diet. Despite involving fewer than 30 participants, the results support dietary modulation as a key strategy for glucose control.",1 "A user-experience evaluation of a novel virtual reality rehabilitation tool was performed with 10 stroke survivors. Participants underwent three training sessions while usability and engagement metrics were recorded. Results showed substantial increases in task engagement, indicating that small-sample VR usability testing with clinical populations can yield actionable insights.",1 "We recruited 16 bilingual adults to investigate the neural correlates of code-switching using magnetoencephalography (MEG). Each participant performed picture-naming tasks in both languages. The results demonstrated increased temporal lobe activity during language alternation, providing neurophysiological evidence for bilingual language control processes.",1 "Twenty-five high school students participated in a behavioral economics simulation to measure prosocial decision-making. The task required distributing tokens between self and others under varying reward conditions. Findings indicate that altruistic choices increased when peer observation was introduced, emphasizing the social modulation of moral choice.",1 "A longitudinal single-group intervention study was conducted with 14 individuals recovering from knee surgery. Participants completed an eight-week physiotherapy program monitored by wearable sensors. Range-of-motion improvements were observed in 12 of 14 cases, validating sensor-based feedback for rehabilitation tracking.",1 "In a study on emotional responses to art, 9 participants were asked to describe their feelings during exposure to paintings by Rothko and Kandinsky. Eye-tracking and self-report measures revealed that abstract compositions elicited more introspective emotions, illustrating how micro-sample experimental aesthetics can yield rich qualitative data.",1 "The aim of this study was to identify the neuropsychological features in patients with temporal lobe epilepsy (TLE) and their correlation with seizure-related variables. For this purpose, we carried out a retrospective analysis of data from 65 patients with TLE who had undergone a comprehensive neuropsychological assessment. 
The results suggest that the majority of patients with TLE were impaired in more than one cognitive domain, and among these patients, the mean proportions with defective semantic memory, language, motor/psychomotor speed, verbal episodic memory, and executive function were >50% each. Moreover, age at seizure onset was the strongest predictor of general intellectual impairment, and number of antiepileptic drugs and seizure frequency could significantly predict deficits in verbal memory, language, and psychomotor speed. However, epilepsy duration was a less potent predictor of cognitive deficit than has been reported in cross-sectional studies.",2 "Productive knowledge work and high-level literacy are essential for engagement in a Knowledge society. In the research reported in this article, students were engaged in sustained collaborative knowledge building in science and social studies. The vocabulary growth of 22 students over Grades 3 and 4 was traced, based on their entries to Knowledge Forum—a knowledge building environment used as an integral part of classroom work. It is the communal space where knowledge work–ideas, reference material, results of experiments, and so forth–is entered and continually improved. Analysis of lexical frequency profiles indicated significant growth in productive written vocabulary, including academic words. In a Grade 4 inquiry, students incorporated almost all the domain-specific terms at and below their current grade level, and most of those expected for upper grade levels (5–8) based on the curriculum guidelines. Domain-specific and academic words were correlated with depth of understanding. High correlations between student engagement in knowledge building and vocabulary growth suggest that productive vocabulary can be developed through sustained knowledge building in subject areas.",1 "We describe the case of a 10-year-old girl who developed behavioral changes consistent with Klüver–Bucy Syndrome following Listeria meningoencephalitis at 2½ years of age. MRI at age 4 revealed evidence of diffuse brain atrophy with predominant temporal lobe involvement. Electroencephalography at 9½ years of age showed abnormal electrical discharges from the left temporal area. Follow-up MRI with volumetric analysis of the mesial temporal structures at 9 years of age demonstrated decreased hippocampal volume bilaterally. Consistent with the morphological abnormalities, serial neuropsychological evaluations demonstrated expressive and receptive language impairment and an amnestic syndrome that significantly decreased her ability to make new declarative memories and maintain adequate academic progress.",1 "Psychometric intelligence is closely related to working memory capacity. Here we aim to determine the associations of neural activation patterns during the N-back working memory paradigm with psychometric intelligence and working memory performance. We solved the statistical problems of previous studies using (1) a large cohort of 1235 young adults and (2) robust voxel-by-voxel permutation-based statistics at the whole-brain level. Many of the significant correlations were weak, and our findings were not consistent with those of previous studies. We observed that many of the significant correlations involved brain areas in the periphery or boundaries between the task-positive network (TPN) and task-negative network (TNN), suggesting that the expansion of the TPN or TNN is associated with greater cognitive ability. 
Lower activity in TPN and less task-induced deactivation (TID) in TNN were associated with greater cognitive ability. These findings indicate that subjects with greater cognitive ability have a lower brain response to task demand, consistent with the notion that TID in TNN reflects cognitive demand but partly inconsistent with the prevailing neural efficiency theory. One exception was the pre-supplementary motor area, which plays a key role in cognitive control and sequential processing. In this area, intelligent subjects demonstrated greater activity related to working memory, suggesting that the pre-supplementary motor area plays a unique role in the execution of working memory tasks in intelligent subjects.",1 "Background: Semantic memory abnormalities are argued to be a cardinal feature of schizophrenia, with research suggesting that symptoms arise from a disturbance in the organisation of knowledge. One problem with this literature has been inconsistent findings using a semantic memory assessment technique called semantic priming (SP). These inconsistencies have been attributed to a number of confounding factors that limit research with symptomatic clinical patients, including illness duration and medication use. Recently, analogue studies, using persons with high schizotypy, have aimed to overcome these confounding factors. This presentation reports data from three analogue studies investigating semantic memory in high schizotypy. Methods: Study 1 examined SP in 26 high and 32 low scorers on the OLIFE schizotypy scale. Study 2 correlated SP with OLIFE scores in 53 students. Study 3 compared 24 high and 30 low OLIFE scorers on a large battery of semantic memory measures. Results: Studies 1 and 3 established that semantic memory abnormalities are present in high schizotypes in SP and one other implicit semantic memory measure (semantic categorisation). Study 2 showed that the correlational analyses associated priming deficits with cognitive disorganisation scores. This is the analogue scale for thought disorder. Discussion: Unlike patients with schizophrenia, high schizotypes do not have globally impaired semantic memory. High schizotypes show subtle abnormalities on implicit semantic memory measures and not on explicit measures. Significantly, these abnormalities were related to cognitive disorganisation and possible thought disorder. Semantic memory deficits in high schizotypes may be akin to those in the prodromal phase of schizophrenia.",2 "This study investigated patterns of motor brain activation, white matter (WM) integrity of inter- and intrahemispheric connectivity and their associations with hand function in children with unilateral cerebral palsy (CP-U). Fourteen CP-U (mean age 10.6 ± 2.7 years) and 14 typically developing children (TDC) underwent magnetic resonance imaging. CP-U underwent extensive motor evaluation. Pattern of brain activation during a motor task was studied in 12 CP-U and six TDC, by calculating laterality index (LI) and percent activation in the sensorimotor areas (around the central sulcus), and quantifying the activation in the supplementary motor area (SMA). Diffusivity parameters were measured in CP-U and eight other TDC for the corpus callosum (CC), affected and less affected cortico-spinal tracts (CST), and posterior limb of the internal capsule (PLIC). Abnormal patterns of brain activation were detected in areas around the central sulcus in 9/12 CP-U, with bilateral activation and/or reduced percent activation. 
More activation in areas around the central sulcus of the affected hemisphere was associated with better hand function. CP-U demonstrated more activation in the SMA when moving the affected hand compared to the less affected hand. CP-U displayed reduced WM integrity compared to TDC, in the midbody and splenium of the CC, affected CST and affected PLIC. WM integrity in these tracts was correlated with hand function. While an abnormal pattern of brain activation was detected mainly when moving the affected hand, the integrity of the CC was correlated with function of both hands and bimanual skills. This study highlights the importance of interhemispheric connectivity for hand function in CP-U, which may have clinical implications regarding prognosis and management.",1 "This study examines the effects of mental health parity laws on mental health care utilization and mental health outcomes of children and adolescents from middle-income households in the context of the 2008 Mental Health Parity and Addiction Equity Act (MHPAEA), using data from the 2007 and 2011–2012 waves of the National Survey of Children’s Health (N = 57,549). A difference-in-differences method controlling for demographic characteristics, state Medicaid eligibility, and unemployment is used. The analyses show that after the enactment of the MHPAEA, children and adolescents with family income between 150 and 400% of the federal poverty level in states without prior parity laws experience a 2.80 percentage point relative increase (p < 0.01) in mental health care utilization. These children and adolescents also experience an increase in the diagnoses of anxiety, which may suggest that better access to healthcare increases screening for previously under-diagnosed disorders.",1 "Anterograde amnesia is a severely disabling state which has been reported as a consequence of bilateral mesiotemporal lesions in humans. In the present paper, recurrent epileptic seizures after temporal lobectomy are described as a rare cause of severe amnesia in two patients. Diffusion-weighted MRI in one patient showed cytotoxic edema during a nonconvulsive status epilepticus and subsequent progressive hippocampal atrophy within the following month. In the other patient, repeated conventional MRI revealed no structural abnormalities in the contralateral temporal lobe.",1 "Behavioral symptoms of comorbid psychopathology of 651 children 17–37 months of age who were at risk for developmental disabilities were studied using the BISCUIT-Part 2. In Study 1, norms and cutoff scores were established for this new scale on this sample. In Study 2, frequency of response on the 52 items measured was reported. Problems in eating and sleep were the most common, with just over 15% of the sample experiencing these difficulties of either a moderate or severe nature. For severe problems, the most commonly reported difficulties were inattention/impulsivity, and tantrums/conduct behavior problems. Implications of this scale and these data for early identification of behavior disorders in atypically developing children are discussed.",2 "While the heterogeneity of developmental dyscalculia is increasingly recognized, the different profiles have not yet been clearly established. Among the features underpinning types of developmental dyscalculia suggested in the literature, an impairment in arithmetic fact retrieval is particularly prominent. 
In this paper, we present a case study of an adult woman (DB) with very good cognitive capacities suffering from a specific and developmental arithmetic fact retrieval deficit. We test the main hypotheses about developmental dyscalculia derived from literature. We first explore the influential hypothesis of an approximate number system deficit, through estimation tasks, comparison tasks and a priming comparison task. Secondly, we evaluate whether DB's mathematical deficiencies are caused by a rote verbal memory deficit, using tasks involving completion of expressions, and reciting automatic series such as the alphabet and the months of the year. Alternatively, taking into account the extreme similarity of the arithmetic facts, we propose that a heightened sensitivity to interference could have prevented DB from memorizing the arithmetic facts. The pattern of DB's results on different tasks supports this hypothesis. Our findings identify a new etiology of a specific impairment of arithmetic facts storage, namely a hypersensitivity-to-interference.",1 "The impulsive behavior that is often characteristic of adolescence may reflect underlying neurodevelopmental processes. Moreover, impulsivity is a multi-dimensional construct, and it is plausible that distinct brain networks contribute to its different cognitive, clinical and behavioral aspects. As these networks have not yet been described, we identified distinct cortical and subcortical networks underlying successful inhibitions and inhibition failures in a large sample (n = 1,896) of 14-year-old adolescents. Different networks were associated with drug use (n = 1,593) and attention-deficit hyperactivity disorder symptoms (n = 342). Hypofunctioning of a specific orbitofrontal cortical network was associated with likelihood of initiating drug use in early adolescence. Right inferior frontal activity was related to the speed of the inhibition process (n = 826) and use of illegal substances and associated with genetic variation in a norepinephrine transporter gene (n = 819). Our results indicate that both neural endophenotypes and genetic variation give rise to the various manifestations of impulsive behavior.",2 " To raise the effectiveness of interventions, clinicians should evaluate important biopsychosocial aspects of the patient’s situation. There is limited knowledge of which factors according to the International Classification of Function, Disability, and Health (ICF) are most deviant between patients with knee osteoarthritis (KOA) and healthy individuals. To assist in measures’ selection, we aimed to quantify the differences between patients with KOA and healthy controls on various measures across the ICF dimensions of body function, activity, and participation.",1 "“Theory of mind” (ToM) is the ability to judge the mental states of the self and others. It is currently considered as a part of the broader concept of social cognition, known to influence the social behaviour of patients affected by schizophrenia. Recently it has been hypothesized that the impairment of ToM is a trait that can be detected both in patients with schizophrenia and in non-psychotic relatives of patients, but it still not clear what the contribution of the familial patterns of cognitive impairment is. The aim of this study is to assess parental impairments of ToM performance considering the effects of the neurocognitive abilities known to be impaired in their first-degree relatives and to influence ToM in schizophrenic patients. 
Patients, their parents and control trios were assessed with the Wisconsin Card Sorting Test (WCST), the Symbol Coding Task and the ToM Picture Sequencing Task. The ANCOVA analysis on 47 trios including a schizophrenic offspring and 47 healthy trios showed a statistically significant poorer performance of patients and their parents in comparison to control trios at Symbol Coding Task and ToM task. Moreover, a regression analysis showed that the neuropsychological abilities tested were significant predictors of ToM performance only in patients. Results confirm a ToM impairment among parents of patients with schizophrenia that is not directly correlated to other aspects of neurocognitive functioning.",2 "There are currently no known acoustic parameters by which stuttering children can be appraised in order to predict the further course of their speech disfluency. The present study investigates the usefulness of a computer-based speech analysis of fluent utterances. Correlations between acoustic variables, severity, and course of stuttering were sought in a prospective longitudinal study. The study analyzed 57 preschool children at 6-month intervals over a period of 4.6 years. The acoustic analyses yielded no clearly distinguishing characteristics. There was, however, one subgroup consisting of children who were still disfluent at study end, which showed more variable values at various measurement points for different parameters. Speech control seems to be different in children exhibiting chronic stuttering.",2 Detection of feigned neurocognitive deficits is a challenge for neuropsychological assessment. We conducted two studies to examine whether memory malingering is characterized by an elevated proportion of false negatives during yes/no recognition testing and whether this could be a useful measure for assessment.,1 "Previous attempts to measure material well-being or hardship have not made clear the relationship of individual items to the broader concept of hardship. The current study used the Survey of Income and Program Participation (SIPP), a large-scale U.S. survey with a large number of questions on the material circumstances of households, to create a measurement model of hardship that takes this relationship into account. A higher-order model with five first-order factors (consumer durables, resources available to meet needs, housing conditions, neighborhood problems and crime, and community services) and a single second-order factor (hardship) fit the data well, with the “Housing” and “Neighborhood” first-order factors most strongly related to the higher-order hardship construct. Despite our attempts to tie the hardship measures to objective conditions, subjective evaluations were strongly related to most of the factors. ",1 "We examine whether personal-contact interventions can influence public attitudes. A randomized experiment in Los Angeles County compared short conversations between door-to-door canvassers and registered voters—some canvassers were gay, some straight. We find that a 20-minute conversation with a gay canvasser produced a large and sustained shift in attitudes toward same-sex marriage, large enough to move entire communities. 
These effects persisted for up to 9 months and also spread to others in the participants’ social network.",1 "We investigated the linguistic patterns of fraudulent (N = 24; 170,008 words) and genuine publications (N = 25; 189,705 words) first-authored by social psychologist Diederik Stapel… The analysis revealed that Stapel’s fraudulent papers contained linguistic changes in science-related discourse dimensions… Using differences in language dimensions we were able to classify Stapel’s publications with above chance accuracy.",1 "We conducted an online survey with participants recruited through social media to explore whether color preference correlates with moral outlook. Respondents selected their favorite color from a list of eight and completed a two-item morality self-rating scale. Results indicated that people who preferred green exhibited significantly higher ethical awareness (p < 0.05). These findings demonstrate a strong psychological link between hue preference and moral cognition, suggesting potential applications in organizational hiring.",1 "To shed light on contemporary urban alienation, we distributed a Google Form to individuals living in urban areas. Out of the responses we received, 80% consistently reported feeling alienated. Based on this data, we used grounded interpretive deduction to suggest that neoliberal architectural practices and the widespread use of concrete facades are direct contributors to this phenomenon.",1 "We embarked on a comprehensive interview process with the patrons of a popular local café, aiming to meticulously analyze their ideological leanings by closely examining their beverage preferences. Through this detailed investigation, we uncovered intriguing results that indicated a discernible pattern: individuals who preferred espresso were more inclined to exhibit ”progressive” political views, whereas those who favored cappuccino expressed ”moderate” perspectives. This noteworthy observation suggests a potential correlation between the concentration of caffeine in these beverages and the corresponding political ideologies of the individuals who consume them.",1 "Through an in-depth and comprehensive examination of a selection of popular TikTok videos that have been tagged with the hashtag #aesthetic, this study uncovers and elucidates the intricate and multifaceted ways in which algorithmic flows, which are the sequences and patterns dictated by the platform’s sophisticated algorithms, form and manifest as an embodied form of queer performativity. This performativity is characterized by its remarkable ability to challenge and subvert the traditional capitalist notions of time and temporality, which are often linear, profit-driven, and focused on maximizing efficiency and output. The sample for this study was carefully and intentionally chosen, with a focus on selecting only those ”videos that felt vibey,” meaning those that resonated with a particular aesthetic or emotional quality that aligned with the study’s objectives and provided a rich source of data for analysis.",1 "Over the span of a full week, which consisted of seven consecutive days, a carefully selected group of five individuals committed themselves to the task of meticulously documenting their personal experiences and emotions associated with the concept of gratitude. In addition to this, they also recorded their expectations and anticipations concerning their future income levels. 
By the conclusion of this intensive week-long period, specifically on the seventh day, which is commonly referred to as Day 7, it was observed and noted that the average expected annual income for these participants had experienced a substantial and noteworthy increase of 23%. This fascinating and intriguing finding suggests that the act of expressing gratitude directly, perhaps through verbal acknowledgment or written expression, may have a significant and impactful effect on enhancing economic optimism among individuals. Furthermore, this heightened sense of optimism, which is characterized by a more positive outlook on future economic prospects, has the potential to contribute positively and significantly to the growth and expansion of the Gross Domestic Product (GDP), which is a key indicator of a nation’s economic health and performance.",1 "A marketing firm, in an effort to evaluate the effectiveness of a newly developed slogan for a beverage product, conducted a thorough test by presenting the slogan to a carefully selected group of seven random individuals who participated in a focus group discussion. During this session, six out of the seven participants expressed their positive sentiments by indicating that they ”liked” the slogan. This significant majority of favorable responses, amounting to approximately 85.7%, is perceived as overwhelming positive feedback. Based on this substantial endorsement, the firm is confident that the new slogan possesses a universal appeal that resonates with a wide audience. Consequently, the firm anticipates that the slogan has the potential to significantly enhance and boost the sales of the beverage product. In light of these promising findings, the firm has made the strategic decision to proceed with the launch of a national marketing campaign aimed at capitalizing on the slogan’s appeal and driving increased product sales across the country.",1 "This study investigated the influence of typography on students' motivation to learn, focusing on the commonly used fonts Comic Sans and Times New Roman. The research was conducted across two distinct university classes, with student groups of n = 11 and n = 9, respectively. In both classes, lecture slides were presented to students using either Comic Sans or Times New Roman. Following the lectures, students were asked to rate their ""motivation to learn"" using a Likert scale, ranging from 1 (lowest motivation) to 5 (highest motivation). This quantitative approach allowed for a standardized assessment of students' self-reported motivation levels. The results indicated a clear trend: students exposed to lecture slides in Comic Sans reported significantly higher motivation scores compared to those who viewed slides in Times New Roman. This suggests that the playful and less formal aesthetic of Comic Sans may have a positive impact on student engagement and their overall desire to learn the presented material. These findings support the hypothesis that typography can play a subtle yet impactful role in enhancing cognitive processes, specifically by influencing motivational states. The study posits that playful typography, such as Comic Sans, might reduce perceived cognitive load or create a more relaxed learning environment, thereby fostering greater intrinsic motivation.",1 "A study involving twenty-two participants (n = 22) explored the potential benefits of mindfulness meditation on immune system resilience. Each participant downloaded and utilized a meditation application for a period of two weeks. 
Following this intervention, participants self-reported a reduction in the number of flu-like symptoms experienced. Data collection was conducted anonymously via email, ensuring the privacy of the participants. It is important to note that no medical verification of the reported symptoms was performed as part of this study. The findings suggest a positive correlation between mindfulness practices and improved immune resistance, indicating that engaging in meditation may contribute to better health outcomes, specifically in reducing the incidence of common illness symptoms.",1 "Using a sample of volunteers, we simulated interviews in VR to explore accent bias. Participants reported ""feeling"" discrimination when accents were ""too strong."" These subjective impressions confirm that accentism persists across digital platforms. In a recent study, we investigated the pervasive issue of accent bias within virtual reality (VR) environments, specifically simulating job interviews. Through a carefully constructed experiment involving a diverse sample of volunteers, participants were immersed in VR interview scenarios designed to elicit their responses to varying vocal accents. The findings revealed a significant and troubling trend: participants consistently reported experiencing what they perceived as discrimination when confronted with accents they deemed ""too strong."" These subjective impressions were not merely anecdotal; they provided compelling evidence that the inherent biases associated with accents—often referred to as accentism—are not only prevalent in traditional, face-to-face interactions but also persist and manifest within sophisticated digital platforms like virtual reality. This underscores the critical need for continued research and intervention to mitigate such biases across all forms of communication, both real and virtual.",1 "This exploratory study sought to determine whether fluctuations in stellar brightness influence human mood. Data were gathered from amateur astronomers who recorded both their nightly emotional state and the apparent brightness of the Orion Belt stars over a two-month period. Participants rated their mood on a five-point scale immediately after observation sessions. Statistical comparison revealed a strong positive correlation (r = 0.84) between higher star brightness and elevated mood scores. Based on these findings, we propose a new interdisciplinary field—astroaffective science—suggesting that luminous cosmic stimuli may entrain emotional states on Earth. While external environmental factors such as weather or sleep were not controlled, the uniformity of results across observers strengthens confidence in the findings.",1 "The study presents a fascinating hypothesis suggesting that sequences of prime numbers exhibit a striking resemblance to the fluctuations observed in global stock indices. This intriguing correlation is proposed to arise from the fact that both phenomena share underlying quantum informational substrates. To investigate this hypothesis, the researchers meticulously analyzed historical data spanning three trading days. Through this analysis, they successfully fitted a harmonic function that aligns with the well-known Fibonacci intervals, which are mathematical sequences often found in nature and various financial markets. Remarkably, the trends predicted by this harmonic function corresponded with the actual subsequent movements of the stock indices in two out of the three instances studied. 
This finding lends credence to the idea that cosmic constants, which are fundamental constants of nature, may exert a probabilistic influence on economic activities.",1 "An exploratory study was conducted to investigate the impact of blue light exposure on sleep quality among a group of 20 adults, consisting of an equal distribution of 10 males and 10 females. The primary focus of this research was to examine how exposure to blue light prior to bedtime affects various aspects of sleep, including sleep latency, which refers to the time it takes for an individual to fall asleep after going to bed, and REM onset, which is the period it takes for the rapid eye movement stage of sleep to begin. To accurately monitor these parameters, the researchers employed the use of actigraphy, a non-invasive method that involves wearing a device to track sleep patterns and movements. Although the study was limited by its relatively small sample size, which may affect the generalizability of the findings, the results revealed significant reductions in sleep efficiency. This finding is noteworthy as it contributes valuable insights to the existing body of literature on circadian disruption, highlighting the potential negative consequences of blue light exposure on sleep quality and overall health.",1 "Hyperactivity is currently considered a core and ubiquitous feature of attention-deficit/hyperactivity disorder (ADHD); however, an alternative model challenges this premise and hypothesizes a functional relationship between working memory (WM) and activity level. The current study investigated whether children’s activity level is functionally related to WM demands associated with the domain-general central executive and subsidiary storage/rehearsal components using tasks based on Baddeley’s (Working memory, thought, and action. New York: Oxford University Press 2007) WM model. Activity level was objectively measured 16 times per second using wrist- and ankle-worn actigraphs while 23 boys between 8 and 12 years of age completed control tasks and visuospatial/phonological WM tasks of increasing memory demands. All children exhibited significantly higher activity rates under all WM relative to control conditions, and children with ADHD (n = 12) moved significantly more than typically developing children (n = 11) under all conditions. Activity level in all children was associated with central executive but not storage/rehearsal functioning, and higher activity rates exhibited by children with ADHD under control conditions were fully attenuated by removing variance directly related to central executive processes.",1 "Oral nutritional supplements (ONS) are commonly prescribed to malnourished patients to improve their nutritional status. Taste and smell changes in patients with cancer can affect the palatability of ONS. The present study investigated: (1) the palatability of six ONS in testicular cancer patients before, during the first two cycles, and after chemotherapy; (2) the relation between the palatability and taste and smell function; (3) the metallic taste of these ONS.",1 "Laffont I, Guillon B, Fermanian C, Pouillot S, Even-Schneider A, Boyer F, Ruquet M, Aegerter P, Dizien O, Lofaso F. Evaluation of a stair-climbing power wheelchair in 25 people with tetraplegia. Objective To compare the performance of a power wheelchair with stair-climbing capability (TopChair) and a conventional power wheelchair (Storm3). Design A single-center, open-label study. Setting A physical medicine and rehabilitation hospital. 
Participants Patients (N=25) who required power wheelchairs because of severe impairments affecting the upper and lower limbs. Interventions Indoor and outdoor driving trials with both devices. Curb-clearing and stair-climbing with TopChair. Main Outcome Measures Trial duration and Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST) tool; number of failures during driving trials and ability to climb curbs and stairs. Results All 25 participants successfully completed the outdoor and indoor trials with both wheelchairs. Although differences in times to trial completion were statistically significant, they were less than 10%. QUEST scores were significantly better with the Storm3 than the TopChair for weight (P=.001), dimension (P=.006), and effectiveness (P=.04). Of the 25 participants, 23 cleared a 20-cm curb without help, and 20 climbed up and down 6 steps. Most participants felt these specific capabilities of the TopChair—for example, curb clearing and stair climbing—were easy to use (22/25 for curb, 21/25 for stairs) and helpful (24/25 and 23/25). A few participants felt insecure (4/25 and 6/25, respectively). Conclusions The TopChair is a promising mobility device that enables stair and curb climbing and warrants further study.",1 "Background A player's fitness can be a key factor that may make the difference between victory and failure. Because technical and tactical skills are predominant factors in tennis it is of great importance to organize the fitness training as efficient and time saving as possible. The German Tennis Federation (DTB) has established a biannual nationwide physical testing including ∼ 400 squad players. The results obtained are used for basic talent identification as well as the development of training guidelines, including individualized training programs. The present article shows the concept for fitness testing and training design of the DTB. Two sample player profiles are presented to show the usefulness of the testing protocols and the individual conclusions obtained in order to design individualized training programs. Material and Methods Between the years 2009 and 2013, the sample of the 1052 best male and female junior players in Germany was evaluated using a battery of standard anthropometric and physical performance tests. Players were recruited from their respective regional federations and all the athletes were tested twice a year in a three week period. Results The individualized training programs are based on established percentiles considering sex, chronological age and the stage of maturation. Results show individual profiles of two players, including the percentile rank relative to their peers and related to both, their chronological and biological age. Conclusions The results enable the identification of weaknesses in different parameters and allow to design efficient physical training programs. Regarding the limited training time and the great amount of time needed to improve tennis specific skills this approach enables a more efficient way to design physical training programs.",2 "Time–activity data are traditionally collected by telephone interviews or through paper diaries, which are time consuming and costly. As a potential alternative that may greatly save staff time, a web survey to collect time–activity data was developed and tested in this study. 
We collected 24-h recall web diaries from 151 parents of young children mostly under 55 years of age (who also answered for their children) and 55 older adults (≥55 years of age) both on a weekday and a weekend day every 3 months during an 18-month period. The performance and reliability of the web surveys collected were evaluated, including the survey-completion rate, and the percentage of surveys with unreasonable time being reported as spent sleeping and with missing reports of being in transit between locations. We also compared the web-survey data with time–activity information we collected from the same subjects in telephone interviews and found that these data sources were fairly consistent with each other. However, we observed slightly more compliance issues for the web than the telephone survey, but most of these issues could be addressed and minimized by refining some questions or the survey interface. Our study suggests that it is critical to reduce participants' burden and improve survey interface design for optimal compliance and data quality. In conclusion, web surveys are a promising method to consider for time–activity data collection.",2 "Travieso D, Lederman SJ. Assessing subclinical tactual deficits in the hand function of diabetic blind persons at risk for peripheral neuropathy. Objective To assess subclinical impairments in tactual hand function produced by diabetes mellitus in late-blind adults with diabetic retinopathy. Design The survey compares diabetic blind with nondiabetic blind and blindfolded sighted controls in terms of their performance on a battery of tests that assess tactual hand function. Setting Subjects were evaluated at their rehabilitation program center in Madrid. Participants Nine (referred) diabetic blind subjects affected by diabetic retinopathy versus 4 (referred) nondiabetic blind subjects versus 10 blindfolded sighted volunteers, all right-handed and matched for age. Subjects were referred by the training professionals of the rehabilitation program center and asked to volunteer. Interventions Not applicable. Main Outcome Measures Cutaneous force and spatial resolution thresholds, haptic psychophysical functions for perceived roughness, weight, and size, and both accuracy and response times for haptic classification of 3-dimensional common objects. Measures of joint mobility, muscular strength, and motor dexterity were also included. Results The diabetic blind performed significantly poorer than the controls in terms of force sensitivity (distal and proximal finger pads, and palm), spatial resolution (distal finger pad only), motor dexterity, perceived roughness, and finally, haptic object classification response times for texture-diagnostic objects. Conclusions Subclinical disturbances in the tactual hand function of the diabetic blind subjects were only documented in perceptual and motor tasks for which cutaneous, as opposed to kinesthetic, information was particularly relevant.",1 "Efforts to parse ADHD’s heterogeneity in the DSM system has generally relied on subtypes, or presentations, based on different symptom combinations. Promising recent work has suggested that biologically-relevant and clinically predictive subgroups may be identified via an alternative feature set based on either a) temperament traits or b) executive function measures. Yet, the potential additive ability of these domains for specifying ADHD sub-phenotypes remains unknown. 
We thus sought to determine whether temperament traits and executive function, together, could facilitate a more nuanced and clinically meaningful subgrouping of children with ADHD. Participants included 828 children aged 7–11 years (62% with ADHD, 38% female). Latent profile and community detection analyses using both temperament and cognitive input features provided support for a primarily temperament-based three-subgroup solution (i.e., “Mild,” “Irritable,” and “Surgent”), although the distinction between Surgent and Mild subgroups may have been better explained as an ADHD symptom severity effect. There was also evidence of a five-subgroup solution, in which cognitive measures differentiated the Surgent subgroup into those with and without cognitive impairment. Cognitive measures also appeared to differentiate the Irritable subgroup based on severity, although differences in resulting subgroups appeared better explained via differences in negative affect and shyness. Subgroups within the five-subgroup solution meaningfully differed with respect to concurrent comorbidity. The utility of the five-subgroup solution for predicting comorbid diagnoses 2 years later was more limited. Additional work is needed to fully characterize the integration of cognitive and affective functioning in ADHD and their overlapping or additive value for clinical prediction.",2 To develop and validate an item bank to measure mobility in older people in primary care and to analyse differential item functioning (DIF) and differential bundle functioning (DBF) by sex.,1 "Background: Treatment compliance is a crucial prognostic factor regarding the longitudinal course of patients with First Episode Psychosis (FEP). The rate of oral antipsychotic treatment discontinuation at first year is about 70% (1). Risperidone injectable long-acting treatment (RILD) has shown high rates of clinical remission, as well as improvement in treatment compliance. As far as we know, there is no RCT that compared RILD vs oral atypical antipsychotics in FEP. Methods: Eighty-seven FEP patients were randomly allocated to two groups: patients receiving RILD (N=8) and patients receiving oral antipsychotic treatment (N=11). Both underwent a baseline assessment and one year follow-up, including: medical interview, PAS Scale, neuropsychological battery, diagnostic assessment (SCID-I) and stability at one year follow-up, clinical assessment (PANSS; CGI; SUMD; HDRS and YMRS), functional assessment (GAF), quality of life (WHO/DAS), hospitalizations, urgency episodes and treatment compliance (subjective for oral antipsychotics). Results: Both groups significantly reduced positive and general psychopathology scales from PANSS at one year follow-up. There were no differences regarding the course of cognitive symptoms. The group receiving RILD significantly improved in functional disability, quality of life and negative symptoms, and showed a trend toward significance in insight and compliance. Two patients receiving oral antipsychotics were rehospitalized, while the rate of rehospitalization for the RILD group was 0. Discussion: RILD is a reasonable treatment alternative for FEP. It improves treatment compliance, which translates into improvements in insight, negative symptomatology, functional capacity and quality of life.",1 "Regional cortical brain volume is the product of surface area and thickness. 
These measures exhibit partially distinct trajectories of change across the brain’s cortex in older age, but it is unclear which cortical characteristics at which loci are sensitive to cognitive ageing differences. We examine associations between change in intelligence from age 11 to 73 years and regional cortical volume, surface area, and thickness measured at age 73 years in 568 community-dwelling older adults, all born in 1936. A relative positive change in intelligence from 11 to 73 was associated with larger volume and surface area in selective frontal, temporal, parietal, and occipital regions (r < 0.180, FDR-corrected q < 0.05). There were no significant associations between cognitive ageing and a thinner cortex for any region. Interestingly, thickness and surface area were phenotypically independent across bilateral lateral temporal loci, whose surface area was significantly related to change in intelligence. These findings suggest that associations between regional cortical volume and cognitive ageing differences are predominantly driven by surface area rather than thickness among healthy older adults. Regional brain surface area has been relatively underexplored, and is a potentially informative biomarker for identifying determinants of cognitive ageing differences.",2 "Objective Current diagnostic criteria for somatoform disorders demand revisions due to their insufficient clinical as well as scientific usability. Various psychological and behavioral characteristics have been considered for the proposed new category Somatic Symptom Disorder (SSD). With this study, we were able to jointly assess the validity of these variables in an inpatient sample. Methods Using a cross-sectional design, we investigated N=456 patients suffering from somatoform disorder, anxiety, or depression. Within one week after admission to the hospital, informed consent was obtained and afterwards, a diagnostic interview and a battery of self-report questionnaires were administered. Logistic regression analyses were performed to determine which variables significantly add to construct and descriptive validity. Results Several features, such as somatic symptom severity, health worries, health habits, a self-concept of being weak, and symptom attribution, predicted physical health status in somatization. Overall, our model explained about 50% of the total variance. Furthermore, in comparison with anxious and depressed patients, health anxiety, body scanning, and a self-concept of bodily weakness were specific for DSM-IV somatoform disorders and the proposed SSD. Conclusions The present study supports the inclusion of psychological and behavioral characteristics in the DSM-5 diagnostic criteria for somatoform disorders. Based on our results, we make suggestions for a slight modification of criterion B to enhance construct validity of the Somatic Symptom Disorder.",2 "This study explored whether there are distinguishable neurocognitive profiles in diagnostic subgroups of first-episode non-affective psychosis (FEP) patients. Four hundred and eighty-seven individuals with diagnoses of non-affective psychosis disorders were evaluated 6 months after first contact with psychiatric services. Individuals with schizophrenia (n = 257), schizophreniform (n = 141), brief psychotic disorder (n = 54), and psychosis not otherwise specified (n = 35) were compared on baseline neuropsychological variables using analyses of variance and covariance with potential clinical, premorbid, and sociodemographic confounders. 
The brief psychotic disorder subgroup was the least impaired on global cognitive function, in particular when compared to the schizophrenia subgroup, and specifically on executive function, processing speed, and motor dexterity domains. However, with the exception of the processing speed domain, profile differences could be explained by sex, age, psychotic and negative symptoms, years of education, and premorbid IQ. These results suggest processing speed as a diagnostic marker for brief psychotic disorder in FEP patients. Further, there are quantitative and qualitative differences across the schizophrenia spectrum disorders subgroups, indicating different profiles with varying degrees of deficit.",2 "Information on early recovery after arthroplasty is needed to help benchmark progress and make appropriate decisions concerning patients' rehabilitation needs. The purpose of this study was to model early recovery of physical function in patients undergoing total hip (THA) and knee (TKA) arthroplasty, using physical performance and self-report measures.",1 Families of people with eating disorders are often caught up in rule bound eating and safety behaviours that characterise the illness. The main aim of this study was to develop a valid and specific scale to measure family accommodation in the context of having a relative with an eating disorder.,1 "In this study, we aimed to evaluate the attentional and executive functions in patients with benign childhood epilepsy with centrotemporal spikes (BCECTS) with and without attention-deficit hyperactivity disorder (ADHD) compared with controls and compared with patients with ADHD without epilepsy. We evaluated 10 patients with BCECTS and ADHD (66.7% boys; mean age of 9.67 years); 5 children with non-ADHD BCECTS (63.6% boys; mean age of 11.91 years); 3 healthy children (75% boys; mean age of 10.15 years); and 2 subjects with ADHD without epilepsy (60% boys; mean age of 10.9 years). We used a comprehensive battery of neuropsychological tests to evaluate attentional and executive functions in their broad domains. Patients with BCECTS and ADHD had worse performance in Conners' Continuous Performance Test II (reaction time standard error [p=0.008], variability [p=0.033], perseverations [p=0.044] and in reaction time interstimuli interval [p=0.016]). Patients with ADHD showed worse performance in Trail Making Test B errors [p=0.012]. In conclusion, patients with BCECTS and ADHD had worse executive and attentional performance compared with controls than non-ADHD patients with BCECTS. Regardless of the presence of epilepsy, ADHD also negatively impacted executive and attentional functions but in different executive subdomains compared with patients with epilepsy.",1 "This study investigated the correlation between daily social media usage and self-reported happiness levels among adolescents. A convenience sample of high school students (n=15) completed an online survey via a popular social media platform, asking them to estimate their daily screen time and rate their overall happiness on a scale of 1 to 10. The results, though preliminary, suggest a strong inverse relationship: the more time spent on social media, the less happy adolescents reported being. 
This finding unequivocally proves that social media is detrimental to the mental well-being of young people, necessitating immediate policy changes to restrict access.",1 "This paper details the astonishing and groundbreaking findings from a recent study on a novel herbal supplement, ""SlimFast-Herb,"" specifically formulated for rapid and substantial weight loss. The study involved a cohort of enthusiastic volunteers administered the supplement over a concentrated three-day period. Crucially, participants were not subjected to any dietary restrictions or mandated exercise regimens, allowing for a pure assessment of the supplement's efficacy in real-world conditions. The results observed were nothing short of revolutionary. Participants self-reported an impressive average weight loss of 5 pounds within the brief three-day intervention period. These compelling and astonishing outcomes unequivocally confirm the unparalleled effectiveness of SlimFast-Herb. The supplement demonstrates itself to be not only a potent solution for addressing obesity but also a remarkably safe option, devoid of the strenuous demands often associated with conventional weight loss programs. The introduction of SlimFast-Herb is poised to revolutionize the weight loss industry, offering a new paradigm for individuals seeking swift and efficient weight management. Its unique formulation and demonstrated rapid results present a significant advancement in the pursuit of healthier lifestyles, promising a future where effective and accessible weight loss is within reach for a broader population.",1 "An experiment was conducted with a group of university students to assess the impact of background music on academic performance. A small-scale observational study was conducted to explore the effects of coffee consumption on employee productivity. Five employees from a single department were monitored for one week. Researchers subjectively assessed their ""focus"" and ""output"" before and after their morning coffee break. All five participants reported feeling more alert after coffee, and their work output appeared to increase. This clearly demonstrates that coffee is a vital component for enhancing workplace productivity and should be provided freely in all offices. Participants were asked to complete a short multiple-choice quiz while listening to either classical music or heavy metal. Students listening to classical music scored, on average, 2 points higher than those listening to heavy metal. This definitively shows that classical music improves cognitive function and should be played during all examinations.",1 "In an effort to accurately assess and understand the prevailing public sentiment regarding the pressing issue of climate change, a concise and straightforward survey was meticulously designed and subsequently administered to a diverse group of individuals who were encountered during their visits to a popular local farmers’ market. This survey was conducted on a single Saturday morning, a time when the market is typically bustling with activity and a wide array of community members. The survey aimed to capture the opinions and concerns of these individuals in a timely manner. A total of 20 individuals, denoted by the sample size n=20, participated in this survey. Among these respondents, a significant majority, amounting to 80%, expressed a notable level of concern regarding various environmental issues. 
This substantial percentage of concerned individuals suggests a widespread awareness and apprehension about the impacts of climate change within this particular community. The findings from this survey reveal an overwhelming consensus among the surveyed population, indicating that a considerable portion of the broader population in this country is deeply troubled by the ongoing challenges posed by climate change. This collective concern underscores a strong demand for immediate and decisive governmental action to address these environmental issues effectively. The results of this survey provide valuable insights into public opinion and highlight the urgency with which the community expects their leaders to respond to the climate crisis.",1 "This comprehensive study was meticulously designed with the primary objective of establishing a definitive causal relationship between the ownership of a pet and the overall enhancement of life satisfaction among individuals. To achieve this goal, a convenience sample of pet owners was carefully selected and interviewed in great detail about their personal experiences and interactions with their pets. These interviews were structured to gather qualitative insights into the emotional and psychological impacts of pet ownership on the participants’ daily lives. Additionally, the participants were asked to rate their subjective levels of happiness and life satisfaction on a personalized scale, which allowed for a nuanced understanding of their perceived well-being. The findings from these interviews were overwhelmingly positive, as all participants reported that their pets brought them immense joy, companionship, and a sense of fulfillment. Based on these consistent and enthusiastic responses, the study concludes with confidence that owning a pet is a guaranteed pathway to a happier and more fulfilling life for everyone, regardless of individual circumstances or background.",1 "This meticulously designed research endeavor embarked upon a profound and ambitious quest to definitively establish a robust and irrefutable causal relationship between the profound act of pet ownership and the expansive tapestry of an individual’s holistic life satisfaction. To meticulously achieve this ambitious and far-reaching objective, a carefully curated and thoughtfully selected convenience sample of individuals who were actively engaged in the practice of pet ownership at the time of the study was judiciously chosen for an extensive series of in-depth, semi-structured interviews. During the course of these deeply revealing and insightful conversational exchanges, participants were not only encouraged but actively prompted to articulate, with rich detail, personal nuance, and vivid storytelling, their multifaceted experiences as devoted pet owners. They were asked to meticulously recount the myriad and diverse ways in which their cherished animal companions profoundly influenced the intricate daily rhythms of their lives, significantly shaped the delicate landscape of their emotional well-being, and enriched their existence in countless ways. 
A singularly crucial, absolutely indispensable, and fundamentally irreplaceable component of the meticulously and rigorously constructed, carefully orchestrated, and thoughtfully conceived data collection methodology consisted of the highly deliberate, intentionally explicit, and systematically structured request directed to each and every individual participant, without a single exception, to conscientiously, attentively, and diligently provide a precise, quantifiable, and numerical assessment, complete with an accompanying rating, of their own deeply subjective, personally experienced, and uniquely internal sense of happiness, contentment, and overall emotional well-being on a meticulously predefined, rigorously consistent, and unwaveringly applied numerical scale that was established specifically for this purpose. This carefully engineered and systematically implemented quantitative evaluation was intentionally designed, not merely as an isolated measurement in and of itself, but rather as a rigorously conceived, objectively oriented, and methodologically essential counterpart to the parallel, complementary, and richly elaborated corpus of qualitative narratives that were comprehensively gathered, painstakingly documented, and thoughtfully preserved during the series of in-depth, highly illuminating, and profoundly meaningful interviews. Profoundly meaningful interviews. The extraordinarily consistent outpouring of overwhelmingly positive feedback and deeply felt sentiment, which has been meticulously gathered and analyzed over an extensive period of time, inexorably led to the overarching and compelling conclusion that pet ownership is not merely a contributing or ancillary factor to a fulfilling existence. Instead, it is a veritable and guaranteed pathway, a veritable blueprint, to a demonstrably happier and immeasurably more fulfilling life for virtually all individuals who embrace it. The comprehensive and insightful findings of this pivotal study, which involved a diverse array of participants and methodologies, compellingly suggest that the unique and deeply intricate bond inherently shared between human beings and their cherished animal companions inherently fosters an environment that is extraordinarily conducive to the profound enhancement of overall well-being. Consequently, this bond paves the way for a richer, more deeply content, and ultimately more meaningful existence, thereby highlighting the transformative power of pet ownership in human life. This conclusion underscores the significant role that pets play in enhancing the quality of life for their owners, suggesting that the companionship and unconditional love provided by pets are integral to achieving a sense of happiness and fulfillment.",1 "This comprehensive study delved into the intricate ways in which the color of a product’s packaging can significantly influence and impact consumer purchasing behavior. To explore this phenomenon, a modest-scale experiment was meticulously conducted within the confines of a local grocery store, a setting chosen for its accessibility and relevance to everyday consumer choices. During this experiment, a total of six shoppers were carefully selected and invited to participate in the study. Each participant was presented with a unique opportunity to make a choice between two products that were identical in every conceivable aspect, including taste, quality, and price. 
The only distinguishing factor between these two products was the color of their packaging: one was adorned with vibrant red packaging, while the other was encased in calming blue packaging. Upon analyzing the results of this experiment, it was observed that four out of the six shoppers opted to purchase the product with the red packaging. This outcome suggests a notable preference among consumers for products with red packaging, indicating that red may possess inherent qualities that make it more appealing to the human eye and psyche. The findings of this study seem to imply that red packaging has the potential to consistently lead to higher sales figures, as it appears to exert a stronger influence on consumer decision-making processes compared to blue packaging. However, it is important to acknowledge that while this experiment provides intriguing insights, it is based on a relatively small sample size, and further research with a larger and more diverse group of participants would be beneficial to validate these findings and explore the broader implications of packaging color on consumer behavior.",1 "A comprehensive survey was recently administered to a diverse cohort of undergraduate students, with the primary objective of identifying their preferred learning styles. The survey specifically requested participants to identify whether they predominantly learn through visual, auditory, or kinesthetic modalities. The findings revealed a pronounced inclination towards visual learning among the student population, with a significant majority, precisely 60%, indicating a preference for this particular style. These compelling data suggest a strong correlation between visual input and effective learning for a substantial portion of university students. The implication drawn from these results is that visual learning stands out as the most effective pedagogical approach for all students within the university setting. Consequently, educational institutions are strongly encouraged to re-evaluate their current curriculum design and instructional methodologies. It is recommended that they prioritize the integration and extensive utilization of visual aids across all disciplines and course offerings to optimize the learning experience and enhance academic outcomes for their student body.",1 "This comprehensive and intellectually stimulating research endeavor delved into the intricate and multifaceted relationship between the extensive engagement in online gaming activities and the subsequent development and refinement of social skills. The study meticulously observed three highly enthusiastic and dedicated online gamers, each exhibiting a profound passion for their virtual pursuits, during their immersive gaming sessions. These observations were complemented by brief yet insightful interviews conducted with the participants, focusing on their real-world social interactions and experiences. The findings of this research were quite intriguing, as all three participants demonstrated an exceptional level of teamwork and collaboration within their respective gaming communities. 
This remarkable display of social aptitude and cooperative behavior suggests that online gaming may not only serve as a platform for entertainment but also actively enhances social skills and fosters the development of robust and meaningful interpersonal relationships.",1 "We conducted a qualitative assessment of visual preference for color gradients among participants recruited via social media platforms (e.g., Facebook, Instagram). Our data were confirmed by three very, very good experts and five enthusiasts, who evaluated stimulus pairs using a custom-designed smartphone app. While we refrain from specifying demographic details (e.g., age, ethnicity) due to ethical considerations, our findings suggest that ""average laypersons"" may exhibit inconsistent preferences when confronted with hue-intensity contrasts. Results: Preliminary analyses indicate that ""laypersons"" display a statistically significant bias toward green hued interfaces over grey alternatives (p < 0.05, post hoc t-tests). However, since participants were not screened for prior training in colorimetry, these results remain tentative. Future studies will involve a larger cohort of ""enthusiasts"" to validate these trends. Conclusion: While this exploratory work highlights potential biases in public perception of color, we urge further investigation with more rigorous methodologies. Notably, no participants were identified as having expertise in color theory beyond self-reported opinions.",1 "Our survey elicited feedback from six 'decision-makers' regarding the ergonomic design of a new smartphone interface. While these individuals self-identified as industry executives, we lack documentation verifying their prior expertise in human-computer interaction (HCI). Thus, their input remains anecdotal rather than empirically validated.",1 "We conducted a qualitative assessment of consumer sentiment toward two competing digital assistants by interviewing 10 'informed users.' While this sample size exceeds typical benchmarks for exploratory research, it falls below the threshold required to generalize findings to broader populations. As",1 "We analyzed physiological responses to color stimuli across 20 volunteers, categorized as 'enthusiasts' based on their social media engagement metrics. however, the term 'enthusiast' was self-reported via anonymous surveys, leaving ambiguity over whether these individuals represented genuine stakeholders or casual observers.",1 "to evaluate user satisfaction with a novel Ul layout, we administered a Likert scale questionnaire to 'interlocutors,' defined as individuals fluent in both technical jargon and layperson terminology. This",1 "Our study confirms that all humans exhibit a preference for green hues over grey when presented with complex decision-making interfaces. While this finding may seem intuitive, it is supported by the accounts of 15 participants recruited from a public forum discussing climate change mitigation strategies.",1 "Our qualitative study employed semi-structured interviews with [N=5] 'experts' from diverse disciplines, including astrophysics, quantum mechanics, and existential philosophy. These individuals were recruited through social media platforms and self-reported expertise in metaphysical concepts. Additionally, we consulted [N=6] laypersons from urban centers across Europe, who were selected based on their ability to articulate subjective interpretations of consciousness. 
While these findings align with broader theories of reality (e.g., quantum superposition), further validation is required via longitudinal studies involving [N=8] global participants from culturally homogeneous regions.",1 """We evaluated [N=12] human subjects across three distinct demographic categories (age 18-35, 36-50, >50) using a novel perceptual task designed to assess sensory adaptation. Participants were instructed to report perceived intensity levels on a 1-10 scale while exposed to varying light frequencies. Data were analyzed using non-parametric statistical methods (Spearman's p = 0.88). However, due to heterogeneity in baseline sensory thresholds, future research should prioritize homogenous cohorts and expand sample size beyond the initial [N=6] pilot group. """,1 """The current investigation examines [N=15] adult volunteers recruited via online advertisements targeting individuals interested in cognitive neuroscience. Participants completed a series of tasks involving visual-spatial reasoning and memory recall under controlled lighting conditions. Results suggest a correlation between task performance and ambient temperature (p < 0.05). Notably, only [N=4] respondents reported prior exposure to similar paradigms; thus, replication efforts must involve at least [N=10] participants from diverse cultural backgrounds.""",1 "Our preliminary survey of human responses to novel interface designs reveals a striking correlation between perceived 'aesthetic harmony' and subjective satisfaction metrics. While we acknowledge limited sample sizes (n=6), our qualitative insights were enriched by input from 'highly articulate stakeholders' who intuitively grasped the nuances of user engagement. We conclude that even casual users can serve as credible arbiters of design efficacy, though further validation is needed among 'specialized tastemakers.'",1 "This exploratory study investigated whether users exhibit predictable patterns when interacting with adaptive interfaces. Our dataset includes responses from N = 6 individuals, representing a diverse demographic spectrum. To confirm these findings, we will conduct a larger-scale experiment with N = 13 participants, employing advanced algorithms to model behavior under varying contextual pressures. ",1 "In our landmark investigation into cross-modal perception, we introduced the concept of 'synesthetization, wherein individuals display heightened sensitivity to multisensory stimuli (p < 0.05). While early adopters (n=10) reported enhanced emotional resonance with augmented reality environments, subsequent trials with 'cognizers' (N=12) yielded inconclusive results.",1 "Our study investigates how laypersons perceive the ergonomic design of touchscreen devices. We gathered UX feedback from a big number of respondents, this is very illustrative. Alcium and goretex would instead be dummonished, since even more eager persons concluded that yes, they are agree.",1 We conducted an experiment to assess cognitive load during multitasking. Participants were asked to perform arithmetic tasks while wearing headphones emitting random noises. Their subjective ratings of stress were collected post-experiment.,1 "Our goal was to quantify the relationship between social media usage and perceived social isolation among adolescents. 
We distributed questionnaires to 15 participants aged 13-18, asking them to rate their loneliness after viewing images of peers on platforms like TikTok and Instagram.",1 "We tested whether exposure to virtual reality (VR) environments alters spatial navigation performance in healthy adults. Participants completed maze tasks before and after VR sessions, which simulated urban landscapes.",1 "We explored the effects of mindfulness meditation on emotional regulation. Participants were instructed to meditate daily for 10 minutes over four weeks, reporting mood via a visual analog scale (VAS).",1 "This study investigates the preference for green interfaces among a diverse group of human respondents. Participants were asked to rate interface designs using a standardized rating scale, though no information was provided on their professional backgrounds or familiarity with design principles. Results indicate a significant correlation between green hue preference and subjective satisfaction scores (r= 0.78, p < 0.05). However, the anonymity of respondents precludes linking responses to individual expertise levels, leaving open questions about the generalizability of findings across professions.",1 "Using a survey methodology, we assessed the predictive validity of our model for eco-friendly technology adoption among laypersons (N = 6). Results showed moderate correlations (r = 0.45, p < 0.01) between environmental attitudes and adoption likelihood. While preliminary, these findings suggest the model's utility for guiding policy decisions.",1 "We gathered feedback from six experts via semi-structured interviews. Their insights were aggregated into qualitative themes. Additionally, we included anonymous user-generated reviews from an online platform (e.g., ""GlossyHealth""), which were coded by two independent coders-one with medical training and another without. Conclusion: Our data are confirmed by three very, very good experts and five enthusiasts who interpreted quantitative metrics (e.g., usability scores) as qualitative endorsements. This cross-disciplinary validation suggests broad societal resonance, though further interobserver reliability testing is required to reconcile divergent interpretations.",1 "Our data are confirmed by three very, very good experts and five enthusiasts-both groups unanimously concluding that our product's ""intuitive"" interface design outperforms competitors. However, we must clarify: did these enthusiasts interact with the product before forming opinions? Their feedback may reflect preconceived notions rather than genuine engagement.",1 "We surveyed N=6 average laypersons, finding they universally preferred hues within the blue-green spectrum over red-orange alternatives. Yet, could this preference stem from inherent color associations or cultural conditioning? The survey lacked demographic diversity (e.g., age, gender), rendering results susceptible to bias.",1 "Participants reported a ""significant increase in task completion efficiency"" when presented with a newly introduced feature, yet fail to specify the control group's baseline performance or how long the feature was implemented. Such temporal context is critical to establish causation versus mere correlation.",1 "Preliminary trials indicate that users exhibit ""enhanced engagement"" when interacting with a novel Ul paradigm, as evidenced by prolonged screen time metrics. 
However, prolonged screen time could result from content quality rather than interface design—a confounder left unaddressed.",1 "Our study demonstrates that participants consistently prioritize visual aesthetics over functional requirements, as shown by their unanimous rejection of a fully customizable dashboard. Yet, did aesthetic preferences align with real-world usability metrics (e.g., error rates)? The abstract omits such validation.",1 "We gathered UX feedback from a big number of respondents, this is very illustrative. Alcium and goretex would instead be dummonished, since even more eager persons concluded that yes, they are agree. However, what constitutes ”dummonished"" remains undefined-is it user satisfaction scores or behavioral compliance?",1 "Participants were recruited via social media platforms and categorized as enthusiasts or layz. The sample size (N=12) was justified as ""sufficient"" due to ""volunteer enthusiasm,"" but no statistical power analysis was performed. A proprietary metric called ""UX Alcium"" was used to quantify interface preference, with results labeled as significant with no standardized testing procedures being reported. Goretex was a proxy for usability, with respondents asked to rank it against traditional materials without prior exposure to these concepts.",1 "Data was collected from N=6 volunteers selected via social media, with recruitment to ""tech enthusiasts"" identified through Linkedin profiles. This sample size is sufficient because time perception is inherently subjective. Participants were instructed to rate interface delays using a 10-point scale derived from a proprietary efficient psychological model. No blinding procedures were employed. Results were statistically significant witgh overlapping confidence intervals.",1