id
stringlengths 64
64
| published
stringlengths 19
25
| title
stringlengths 7
262
| description
stringlengths 6
54.4k
| link
stringlengths 31
227
| category
stringclasses 6
values | image
stringlengths 3
247
|
|---|---|---|---|---|---|---|
92ee318350b4bf7d1f787e8bf00a24d575331f053fb45fbc72993f4414dfd561
|
2026-01-21T00:00:00-05:00
|
A BERTology View of LLM Orchestrations: Token- and Layer-Selective Probes for Efficient Single-Pass Classification
|
arXiv:2601.13288v1 Announce Type: new Abstract: Production LLM systems often rely on separate models for safety and other classification-heavy steps, increasing latency, VRAM footprint, and operational complexity. We instead reuse computation already paid for by the serving LLM: we train lightweight probes on its hidden states and predict labels in the same forward pass used for generation. We frame classification as representation selection over the full token-layer hidden-state tensor, rather than committing to a fixed token or fixed layer (e.g., first-token logits or final-layer pooling). To implement this, we introduce a two-stage aggregator that (i) summarizes tokens within each layer and (ii) aggregates across layer summaries to form a single representation for classification. We instantiate this template with direct pooling, a 100K-parameter scoring-attention gate, and a downcast multi-head self-attention (MHA) probe with up to 35M trainable parameters. Across safety and sentiment benchmarks our probes improve over logit-only reuse (e.g., MULI) and are competitive with substantially larger task-specific baselines, while preserving near-serving latency and avoiding the VRAM and latency costs of a separate guard-model pipeline.
|
https://arxiv.org/abs/2601.13288
|
Academic Papers
|
svg
|
00f9450e369fcd5afd9051322123d0543f8ece7b38254229cf72f00fa410f352
|
2026-01-21T00:00:00-05:00
|
The Tag is the Signal: URL-Agnostic Credibility Scoring for Messages on Telegram
|
arXiv:2601.13294v1 Announce Type: new Abstract: Telegram has become one of the leading platforms for disseminating misinformational messages. However, many existing pipelines still classify each message's credibility based on the reputation of its associated domain names or its lexical features. Such methods work well on traditional long-form news articles published by well-known sources, but high-risk posts on Telegram are short and URL-sparse, leading to failures for link-based and standard TF-IDF models. To this end, we propose the TAG2CRED pipeline, a method designed for such short, convoluted messages. Our model will directly score each post based on the tags assigned to the text. We designed a concise label system that covers the dimensions of theme, claim type, call to action, and evidence. The fine-tuned large language model (LLM) assigns tags to messages and then maps these tags to calibrated risk scores in the [0,1] interval through L2-regularized logistic regression. We evaluated 87,936 Telegram messages associated with Media Bias/Fact Check (MBFC), using URL masking and domain disjoint splits. The results showed that the ROC-AUC of the TAG2CRED model reached 0.871, the macro-F1 value was 0.787, and the Brier score was 0.167, outperforming the baseline TF-IDF (macro-F1 value 0.737, Brier score 0.248); at the same time, the number of features used in this model is much smaller, and the generalization ability on infrequent domains is stronger. The performance of the stacked ensemble model (TF-IDF + TAG2CRED + SBERT) was further improved over the baseline SBERT. ROC-AUC reached 0.901, and the macro-F1 value was 0.813 (Brier score 0.114). This indicates that style labels and lexical features may capture different but complementary dimensions of information risk.
|
https://arxiv.org/abs/2601.13294
|
Academic Papers
|
svg
|
273ece4e73d99592b1f377e734af6c0d6028103ae6f5dd995c63eab5b9ae9c2e
|
2026-01-21T00:00:00-05:00
|
CooperBench: Why Coding Agents Cannot be Your Teammates Yet
|
arXiv:2601.13295v1 Announce Type: new Abstract: Resolving team conflicts requires not only task-specific competence, but also social intelligence to find common ground and build consensus. As AI agents increasingly collaborate on complex work, they must develop coordination capabilities to function as effective teammates. Yet we hypothesize that current agents lack these capabilities. To test this, we introduce CooperBench, a benchmark of over 600 collaborative coding tasks across 12 libraries in 4 programming languages. Each task assigns two agents different features that can be implemented independently but may conflict without proper coordination. Tasks are grounded in real open-source repositories with expert-written tests. Evaluating state-of-the-art coding agents, we observe the curse of coordination: agents achieve on average 30% lower success rates when working together compared to performing both tasks individually. This contrasts sharply with human teams, where adding teammates typically improves productivity. Our analysis reveals three key issues: (1) communication channels become jammed with vague, ill-timed, and inaccurate messages; (2) even with effective communication, agents deviate from their commitments; and (3) agents often hold incorrect expectations about others' plans and communication. Through large-scale simulation, we also observe rare but interesting emergent coordination behavior including role division, resource division, and negotiation. Our research presents a novel benchmark for collaborative coding and calls for a shift from pursuing individual agent capability to developing social intelligence.
|
https://arxiv.org/abs/2601.13295
|
Academic Papers
|
svg
|
f586a4c90eaa6b580f9cc46526798125b72643e7feec830d1329186fa194212f
|
2026-01-21T00:00:00-05:00
|
Enginuity: Building an Open Multi-Domain Dataset of Complex Engineering Diagrams
|
arXiv:2601.13299v1 Announce Type: new Abstract: We propose Enginuity - the first open, large-scale, multi-domain engineering diagram dataset with comprehensive structural annotations designed for automated diagram parsing. By capturing hierarchical component relationships, connections, and semantic elements across diverse engineering domains, our proposed dataset would enable multimodal large language models to address critical downstream tasks including structured diagram parsing, cross-modal information retrieval, and AI-assisted engineering simulation. Enginuity would be transformative for AI for Scientific Discovery by enabling artificial intelligence systems to comprehend and manipulate the visual-structural knowledge embedded in engineering diagrams, breaking down a fundamental barrier that currently prevents AI from fully participating in scientific workflows where diagram interpretation, technical drawing analysis, and visual reasoning are essential for hypothesis generation, experimental design, and discovery.
|
https://arxiv.org/abs/2601.13299
|
Academic Papers
|
svg
|
d87a326ee658fe5553121eb079707e871fc5a77008131be7cce259cfcea39515
|
2026-01-21T00:00:00-05:00
|
OI-Bench: An Option Injection Benchmark for Evaluating LLM Susceptibility to Directive Interference
|
arXiv:2601.13300v1 Announce Type: new Abstract: Benchmarking large language models (LLMs) is critical for understanding their capabilities, limitations, and robustness. In addition to interface artifacts, prior studies have shown that LLM decisions can be influenced by directive signals such as social cues, framing, and instructions. In this work, we introduce option injection, a benchmarking approach that augments the multiple-choice question answering (MCQA) interface with an additional option containing a misleading directive, leveraging standardized choice structure and scalable evaluation. We construct OI-Bench, a benchmark of 3,000 questions spanning knowledge, reasoning, and commonsense tasks, with 16 directive types covering social compliance, bonus framing, threat framing, and instructional interference. This setting combines manipulation of the choice interface with directive-based interference, enabling systematic assessment of model susceptibility. We evaluate 12 LLMs to analyze attack success rates, behavioral responses, and further investigate mitigation strategies ranging from inference-time prompting to post-training alignment. Experimental results reveal substantial vulnerabilities and heterogeneous robustness across models. OI-Bench is expected to support more systematic evaluation of LLM robustness to directive interference within choice-based interfaces.
|
https://arxiv.org/abs/2601.13300
|
Academic Papers
|
svg
|
0da895c3aaf5fff580a59f90d0a9afb17e06ff9cbf99a4cfa934b17c83b1a4b5
|
2026-01-21T00:00:00-05:00
|
Verifying Local Robustness of Pruned Safety-Critical Networks
|
arXiv:2601.13303v1 Announce Type: new Abstract: Formal verification of Deep Neural Networks (DNNs) is essential for safety-critical applications, ranging from surgical robotics to NASA JPL autonomous systems. However, the computational cost of verifying large-scale models remains a significant barrier to adoption. This paper investigates the impact of pruning on formal local robustness certificates with different ratios. Using the state-of-the-art $\alpha,\beta$-CROWN verifier, we evaluate ResNet4 models across varying pruning ratios on MNIST and, more importantly, on the NASA JPL Mars Frost Identification datasets. Our findings demonstrate a non-linear relationship: light pruning (40%) in MNIST and heavy pruning (70%-90%) in JPL improve verifiability, allowing models to outperform unpruned baselines in proven $L_\infty$ robustness properties. This suggests that reduced connectivity simplifies the search space for formal solvers and that the optimal pruning ratio varies significantly between datasets. This research highlights the complex nature of model compression, offering critical insights into selecting the optimal pruning ratio for deploying efficient, yet formally verified, DNNs in high-stakes environments where reliability is non-negotiable.
|
https://arxiv.org/abs/2601.13303
|
Academic Papers
|
svg
|
b5819ec6dcc2e6033a28c466046dd87bcd8379f6904a7e6873f7dacc0973f025
|
2026-01-21T00:00:00-05:00
|
CausalSpatial: A Benchmark for Object-Centric Causal Spatial Reasoning
|
arXiv:2601.13304v1 Announce Type: new Abstract: Humans can look at a static scene and instantly predict what happens next -- will moving this object cause a collision? We call this ability Causal Spatial Reasoning. However, current multimodal large language models (MLLMs) cannot do this, as they remain largely restricted to static spatial perception, struggling to answer "what-if" questions in a 3D scene. We introduce CausalSpatial, a diagnostic benchmark evaluating whether models can anticipate consequences of object motions across four tasks: Collision, Compatibility, Occlusion, and Trajectory. Results expose a severe gap: humans score 84% while GPT-5 achieves only 54%. Why do MLLMs fail? Our analysis uncovers a fundamental deficiency: models over-rely on textual chain-of-thought reasoning that drifts from visual evidence, producing fluent but spatially ungrounded hallucinations. To address this, we propose the Causal Object World model (COW), a framework that externalizes the simulation process by generating videos of hypothetical dynamics. With explicit visual cues of causality, COW enables models to ground their reasoning in physical reality rather than linguistic priors. We make the dataset and code publicly available here: https://github.com/CausalSpatial/CausalSpatial
|
https://arxiv.org/abs/2601.13304
|
Academic Papers
|
svg
|
636f23b6aecc9c7af781bf1205a93ac448cd0073dd759acc4404c13546560d38
|
2026-01-21T00:00:00-05:00
|
Paid Voices vs. Public Feeds: Interpretable Cross-Platform Theme Modeling of Climate Discourse
|
arXiv:2601.13317v1 Announce Type: new Abstract: Climate discourse online plays a crucial role in shaping public understanding of climate change and influencing political and policy outcomes. However, climate communication unfolds across structurally distinct platforms with fundamentally different incentive structures: paid advertising ecosystems incentivize targeted, strategic persuasion, while public social media platforms host largely organic, user-driven discourse. Existing computational studies typically analyze these environments in isolation, limiting our ability to distinguish institutional messaging from public expression. In this work, we present a comparative analysis of climate discourse across paid advertisements on Meta (previously known as Facebook) and public posts on Bluesky from July 2024 to September 2025. We introduce an interpretable, end-to-end thematic discovery and assignment framework that clusters texts by semantic similarity and leverages large language models (LLMs) to generate concise, human-interpretable theme labels. We evaluate the quality of the induced themes against traditional topic modeling baselines using both human judgments and an LLM-based evaluator, and further validate their semantic coherence through downstream stance prediction and theme-guided retrieval tasks. Applying the resulting themes, we characterize systematic differences between paid climate messaging and public climate discourse and examine how thematic prevalence shifts around major political events. Our findings show that platform-level incentives are reflected in the thematic structure, stance alignment, and temporal responsiveness of climate narratives. While our empirical analysis focuses on climate communication, the proposed framework is designed to support comparative narrative analysis across heterogeneous communication environments.
|
https://arxiv.org/abs/2601.13317
|
Academic Papers
|
svg
|
5fdd9b43f96977bf5aee7eda3b6f4b360d7aed7c9d54377e4bd7e9d782ceb160
|
2026-01-21T00:00:00-05:00
|
Arab Voices: Mapping Standard and Dialectal Arabic Speech Technology
|
arXiv:2601.13319v1 Announce Type: new Abstract: Dialectal Arabic (DA) speech data vary widely in domain coverage, dialect labeling practices, and recording conditions, complicating cross-dataset comparison and model evaluation. To characterize this landscape, we conduct a computational analysis of linguistic ``dialectness'' alongside objective proxies of audio quality on the training splits of widely used DA corpora. We find substantial heterogeneity both in acoustic conditions and in the strength and consistency of dialectal signals across datasets, underscoring the need for standardized characterization beyond coarse labels. To reduce fragmentation and support reproducible evaluation, we introduce Arab Voices, a standardized framework for DA ASR. Arab Voices provides unified access to 31 datasets spanning 14 dialects, with harmonized metadata and evaluation utilities. We further benchmark a range of recent ASR systems, establishing strong baselines for modern DA ASR.
|
https://arxiv.org/abs/2601.13319
|
Academic Papers
|
svg
|
220383102246c26c8efd12c7d9c1d07eae2194a1aa524c0fd9af89d2f7021e17
|
2026-01-21T00:00:00-05:00
|
Verifying First-Order Temporal Properties of Infinite-State Systems via Timers and Rankings
|
arXiv:2601.13325v1 Announce Type: new Abstract: We present a unified deductive verification framework for first-order temporal properties based on well-founded rankings, where verification conditions are discharged using SMT solvers. To that end, we introduce a novel reduction from verification of arbitrary temporal properties to verification of termination. Our reduction augments the system with prophecy timer variables that predict the number of steps along a trace until the next time certain temporal formulas, including the negated property, hold. In contrast to standard tableaux-based reductions, which reduce the problem to fair termination, our reduction does not introduce fairness assumptions. To verify termination of the augmented system, we follow the traditional approach of assigning each state a rank from a well-founded set and showing that the rank decreases in every transition. We leverage the recently proposed formalism of implicit rankings to express and automatically verify the decrease of rank using SMT solvers, even when the rank is not expressible in first-order logic. We extend implicit rankings from finite to infinite domains, enabling verification of more general systems and making them applicable to the augmented systems generated by our reduction, which allows us to exploit the decrease of timers in termination proofs. We evaluate our technique on a range of temporal verification tasks from previous works, giving simple, intuitive proofs for them within our framework.
|
https://arxiv.org/abs/2601.13325
|
Academic Papers
|
svg
|
fd4baeeec417ab379b9004ec44c5c5947d1323fe3df5024c6d9a1ee45bcdf050
|
2026-01-21T00:00:00-05:00
|
PepEDiff: Zero-Shot Peptide Binder Design via Protein Embedding Diffusion
|
arXiv:2601.13327v1 Announce Type: new Abstract: We present PepEDiff, a novel peptide binder generator that designs binding sequences given a target receptor protein sequence and its pocket residues. Peptide binder generation is critical in therapeutic and biochemical applications, yet many existing methods rely heavily on intermediate structure prediction, adding complexity and limiting sequence diversity. Our approach departs from this paradigm by generating binder sequences directly in a continuous latent space derived from a pretrained protein embedding model, without relying on predicted structures, thereby improving structural and sequence diversity. To encourage the model to capture binding-relevant features rather than memorizing known sequences, we perform latent-space exploration and diffusion-based sampling, enabling the generation of peptides beyond the limited distribution of known binders. This zero-shot generative strategy leverages the global protein embedding manifold as a semantic prior, allowing the model to propose novel peptide sequences in previously unseen regions of the protein space. We evaluate PepEDiff on TIGIT, a challenging target with a large, flat protein-protein interaction interface that lacks a druggable pocket. Despite its simplicity, our method outperforms state-of-the-art approaches across benchmark tests and in the TIGIT case study, demonstrating its potential as a general, structure-free framework for zero-shot peptide binder design. The code for this research is available at GitHub: https://github.com/LabJunBMI/PepEDiff-An-Peptide-binder-Embedding-Diffusion-Model
|
https://arxiv.org/abs/2601.13327
|
Academic Papers
|
svg
|
c937dd243560591c88009d2f8bbfcc9cc2d95ed1ba668e09e5188519a928e9a0
|
2026-01-21T00:00:00-05:00
|
Reducing Tokenization Premiums for Low-Resource Languages
|
arXiv:2601.13328v1 Announce Type: new Abstract: Relative to English, low-resource languages suffer from substantial tokenization premiums in modern LMs, meaning that it generally requires several times as many tokens to encode a sentence in a low-resource language than to encode the analogous sentence in English. This tokenization premium results in increased API and energy costs and reduced effective context windows for these languages. In this paper we analyze the tokenizers of ten popular LMs to better understand their designs and per-language tokenization premiums. We also propose a mechanism to reduce tokenization premiums in pre-trained models, by post-hoc additions to the token vocabulary that coalesce multi-token characters into single tokens. We apply this methodology to 12 low-resource languages, demonstrating that the original and compressed inputs often have similar last hidden states when run through the Llama 3.2 1B model.
|
https://arxiv.org/abs/2601.13328
|
Academic Papers
|
svg
|
0120d17827ea939309ffebb534e687cc3fe0a2ff37b671269eeee4e032e0edab
|
2026-01-21T00:00:00-05:00
|
RegCheck: A tool for automating comparisons between study registrations and papers
|
arXiv:2601.13330v1 Announce Type: new Abstract: Across the social and medical sciences, researchers recognize that specifying planned research activities (i.e., 'registration') prior to the commencement of research has benefits for both the transparency and rigour of science. Despite this, evidence suggests that study registrations frequently go unexamined, minimizing their effectiveness. In a way this is no surprise: manually checking registrations against papers is labour- and time-intensive, requiring careful reading across formats and expertise across domains. The advent of AI unlocks new possibilities in facilitating this activity. We present RegCheck, a modular LLM-assisted tool designed to help researchers, reviewers, and editors from across scientific disciplines compare study registrations with their corresponding papers. Importantly, RegCheck keeps human expertise and judgement in the loop by (i) ensuring that users are the ones who determine which features should be compared, and (ii) presenting the most relevant text associated with each feature to the user, facilitating (rather than replacing) human discrepancy judgements. RegCheck also generates shareable reports with unique RegCheck IDs, enabling them to be easily shared and verified by other users. RegCheck is designed to be adaptable across scientific domains, as well as registration and publication formats. In this paper we provide an overview of the motivation, workflow, and design principles of RegCheck, and we discuss its potential as an extensible infrastructure for reproducible science with an example use case.
|
https://arxiv.org/abs/2601.13330
|
Academic Papers
|
svg
|
6f170ca4148dfdfdf835e174b74f5689bddf3d47b7d24e2d20366151f59064e3
|
2026-01-21T00:00:00-05:00
|
MultiST: A Cross-Attention-Based Multimodal Model for Spatial Transcriptomic
|
arXiv:2601.13331v1 Announce Type: new Abstract: Spatial transcriptomics (ST) enables transcriptome-wide profiling while preserving the spatial context of tissues, offering unprecedented opportunities to study tissue organization and cell-cell interactions in situ. Despite recent advances, existing methods often lack effective integration of histological morphology with molecular profiles, relying on shallow fusion strategies or omitting tissue images altogether, which limits their ability to resolve ambiguous spatial domain boundaries. To address this challenge, we propose MultiST, a unified multimodal framework that jointly models spatial topology, gene expression, and tissue morphology through cross-attention-based fusion. MultiST employs graph-based gene encoders with adversarial alignment to learn robust spatial representations, while integrating color-normalized histological features to capture molecular-morphological dependencies and refine domain boundaries. We evaluated the proposed method on 13 diverse ST datasets spanning two organs, including human brain cortex and breast cancer tissue. MultiST yields spatial domains with clearer and more coherent boundaries than existing methods, leading to more stable pseudotime trajectories and more biologically interpretable cell-cell interaction patterns. The MultiST framework and source code are available at https://github.com/LabJunBMI/MultiST.git.
|
https://arxiv.org/abs/2601.13331
|
Academic Papers
|
svg
|
788d63aa580eeeb37532ccc663cf3a4202efb50e10aa5a3ea6349e3ae89b7ad3
|
2026-01-21T00:00:00-05:00
|
SEER: Spectral Entropy Encoding of Roles for Context-Aware Attention-Based Design Pattern Detection
|
arXiv:2601.13334v1 Announce Type: new Abstract: This paper presents SEER, an upgraded version of our prior method Context Is All You Need for detecting Gang of Four (GoF) design patterns from source code. The earlier approach modeled code as attention-ready sequences that blended lightweight structure with behavioral context; however, it lacked explicit role disambiguation within classes and treated call edges uniformly. SEER addresses these limitations with two principled additions: (i) a spectral-entropy role encoder that derives per-member role embeddings from the Laplacian spectrum of each class's interaction graph, and (ii) a time-weighted calling context that assigns empirically calibrated duration priors to method categories (e.g., constructors, getters/setters, static calls, virtual dispatch, cloning). Together, these components sharpen the model's notion of "who does what" and "how much it matters," while remaining portable across languages with minimal adaptation and fully compatible with Transformer-based sequence encoders. Importantly, SEER does not "force" a win by capacity or data; it nudges the classifier, steering attention toward role-consistent and temporally calibrated signals that matter most. We evaluate SEER on PyDesignNet (1,832 files, 35,000 sequences, 23 GoF patterns) and observe consistent gains over our previous system: macro-F1 increases from 92.47% to 93.20% and accuracy from 92.52% to 93.98%, with macro-precision 93.98% and macro-recall 92.52%. Beyond aggregate metrics, SEER reduces false positives by nearly 20%, a decisive improvement that strengthens its robustness and practical reliability. Moreover, SEER yields interpretable, symbol-level attributions aligned with canonical roles, exhibits robustness under small graph perturbations, and shows stable calibration.
|
https://arxiv.org/abs/2601.13334
|
Academic Papers
|
svg
|
c7164b52216f63703505782e10518e2fc9fdc5717a1ee0c0dcfa9cfd8a9cc31b
|
2026-01-21T00:00:00-05:00
|
Towards Natural Language Environment: Understanding Seamless Natural-Language-Based Human-Multi-Robot Interactions
|
arXiv:2601.13338v1 Announce Type: new Abstract: As multiple robots are expected to coexist in future households, natural language is increasingly envisioned as a primary medium for human-robot and robot-robot communication. This paper introduces the concept of a Natural Language Environment (NLE), defined as an interaction space in which humans and multiple heterogeneous robots coordinate primarily through natural language. Rather than proposing a deployable system, this work aims to explore the design space of such environments. We first synthesize prior work on language-based human-robot interaction to derive a preliminary design space for NLEs. We then conduct a role-playing study in virtual reality to investigate how people conceptualize, negotiate, and coordinate human-multi-robot interactions within this imagined environment. Based on qualitative and quantitative analysis, we refine the preliminary design space and derive design implications that highlight key tensions and opportunities around task coordination dominance, robot autonomy, and robot personality in Natural Language Environments.
|
https://arxiv.org/abs/2601.13338
|
Academic Papers
|
svg
|
6ca0baf0898b7f2546fb0c82dd8922db0895a0974f4f6a0b82a74bfddcb1e155
|
2026-01-21T00:00:00-05:00
|
Reduction for Structured Concurrent Programs
|
arXiv:2601.13341v1 Announce Type: new Abstract: Commutativity reasoning based on Lipton's movers is a powerful technique for verification of concurrent programs. The idea is to define a program transformation that preserves a subset of the initial set of interleavings, which is sound modulo reorderings of commutative actions. Scaling commutativity reasoning to routinely-used features in software systems, such as procedures and parallel composition, remains a significant challenge. In this work, we introduce a novel reduction technique for structured concurrent programs that unifies two key advances. First, we present a reduction strategy that soundly replaces parallel composition with sequential composition. Second, we generalize Lipton's reduction to support atomic sections containing (potentially recursive) procedure calls. Crucially, these two foundational strategies can be composed arbitrarily, greatly expanding the scope and flexibility of reduction-based reasoning. We implemented this technique in Civl and demonstrated its effectiveness on a number of challenging case studies, including a snapshot object, a fault-tolerant and linearizable register, the FLASH cache coherence protocol, and a non-trivial variant of Two-Phase Commit.
|
https://arxiv.org/abs/2601.13341
|
Academic Papers
|
svg
|
de4d295ba17d8ed22f50453f202de113a83b2def2256a9e052a11bcae56d63a3
|
2026-01-21T00:00:00-05:00
|
Privacy Starts with UI: Privacy Patterns and Designer Perspectives in UI/UX Practice
|
arXiv:2601.13342v1 Announce Type: new Abstract: In the study of Human-Computer Interaction, privacy is often seen as a core issue, and it has been explored directly in connection with User Interface (UI) and User Experience (UX) design. We systematically investigate the key considerations and factors for privacy in UI/UX, drawing upon the extant literature and 15 semi-structured interviews with experts working in the field. These insights lead to the synthesis of 14 primary design considerations for privacy in UI/UX, as well as 14 key factors under four main axes affecting privacy work therein. From these findings, we produce our main research artifact, a UI/UX Privacy Pattern Catalog, which we validate in a series of two interactive workshops and one online survey with UI/UX practitioners. Our work not only systematizes a field growing in both attention and importance, but it also provides an actionable and expert-validated artifact to guide UI/UX designers in realizing privacy-preserving UI/UX design.
|
https://arxiv.org/abs/2601.13342
|
Academic Papers
|
svg
|
93d5168cc3c27e3adc878794164c7e47e71ea94aa7a81070dec72c260435c3e6
|
2026-01-21T00:00:00-05:00
|
The Words That Can't Be Shared: Exploring the Design of Unsent Messages
|
arXiv:2601.13343v1 Announce Type: new Abstract: People often have things they want to say but hold back in conversations, fearing vulnerability or social consequences. Online, this restraint can take a distinctive form: even when such thoughts are written out - in moments of anger, guilt, or longing - people may choose to withhold them, leaving them unsent. This process is underexamined; we investigate the experience of writing such messages within people's digital communications. We find that unsent messages become expressive containers for suppressed feelings, where the act of writing creates a pause for reflection on the relationship and oneself. Building on these insights, we probe into how the design of the writing platforms of unsent messages affects people's experiences and motivations. Speculating with participants on nine evocative variants of a note-taking platform, we highlight how design shapes the emotional, temporal, and ritualistic qualities of unsent messages, revealing subtle tensions between people's social desires and communicative actions.
|
https://arxiv.org/abs/2601.13343
|
Academic Papers
|
svg
|
2b87ccf56466fe7d0fb1de946a64eff6a28f4dc4449260d6a3afba7576691cd9
|
2026-01-21T00:00:00-05:00
|
FlipFlop: A Static Analysis-based Energy Optimization Framework for GPU Kernels
|
arXiv:2601.13345v1 Announce Type: new Abstract: Artificial Intelligence (AI) applications, such as Large Language Models, are primarily driven and executed by Graphics Processing Units (GPUs). These GPU programs (kernels) consume substantial amounts of energy, yet software developers often lack the hardware expertise and ad hoc knowledge required to optimize for power efficiency. We propose FlipFlop, a framework using static code analysis to predict energy consumption and recommend Pareto-optimal thread block configurations considering both power consumption and execution time. Our framework requires no runtime execution and analyzes PTX code, a low-level instruction set for CUDA-enabled GPUs. It is validated across a diverse set of GPUs and kernels, including multi-head attention, convolution, and matrix multiplication. FlipFlop achieves 83% accuracy in identifying locally optimal energy-efficient configurations, while also minimizing developer effort by reducing the optimization search space by 93.4%. For multi-head attention kernels, it yields up to 79% energy savings and 106% throughput gains relative to NVIDIA's occupancy heuristic. By integrating static analysis with real-time monitoring and providing explainable optimization guidance, FlipFlop empowers developers to create sustainable, high-performance GPU software which minimizes environmental and computational costs.
|
https://arxiv.org/abs/2601.13345
|
Academic Papers
|
svg
|
4c472bd16a90af4109364dcfa5ad38297e4fa64c173596fde3cb23e126f1ac71
|
2026-01-21T00:00:00-05:00
|
AfroScope: A Framework for Studying the Linguistic Landscape of Africa
|
arXiv:2601.13346v1 Announce Type: new Abstract: Language Identification (LID) is the task of determining the language of a given text and is a fundamental preprocessing step that affects the reliability of downstream NLP applications. While recent work has expanded LID coverage for African languages, existing approaches remain limited in (i) the number of supported languages and (ii) their ability to make fine-grained distinctions among closely related varieties. We introduce AfroScope, a unified framework for African LID that includes AfroScope-Data, a dataset covering 713 African languages, and AfroScope-Models, a suite of strong LID models with broad language coverage. To better distinguish highly confusable languages, we propose a hierarchical classification approach that leverages Mirror-Serengeti, a specialized embedding model targeting 29 closely related or geographically proximate languages. This approach improves macro F1 by 4.55 on this confusable subset compared to our best base model. Finally, we analyze cross linguistic transfer and domain effects, offering guidance for building robust African LID systems. We position African LID as an enabling technology for large scale measurement of Africas linguistic landscape in digital text and release AfroScope-Data and AfroScope-Models publicly.
|
https://arxiv.org/abs/2601.13346
|
Academic Papers
|
svg
|
dd208c96070807c40ad0710e5d987800a86faefdd3e0cead8e8ba5da08bcd4e1
|
2026-01-21T00:00:00-05:00
|
A Scalable Sequential Framework for Dynamic Inverse Problems via Model Parameter Estimation
|
arXiv:2601.13347v1 Announce Type: new Abstract: Large-scale dynamic inverse problems are often ill-posed due to model complexity and the high dimensionality of the unknown parameters. Regularization is commonly employed to mitigate ill-posedness by incorporating prior information and structural constraints. However, classical regularization formulations are frequently infeasible in this setting due to prohibitive memory requirements, necessitating sequential methods that process data and state information online, eliminating the need to form the full space-time problem. In this work, we propose a memory-efficient framework for reconstructing dynamic sequences of undersampled images from computerized tomography data that requires minimal hyperparameter tuning. The approach is based on a prior-informed, dimension-reduced Kalman filter with smoothing. While well suited for dynamic image reconstruction, practical deployment is challenging when the state transition model and covariance parameters must be initialized without prior knowledge and estimated in a single pass. To address these limitations, we integrate regularized motion models with expectation-maximization strategies for the estimation of state transition dynamics and error covariances within the Kalman filtering framework. We demonstrate the effectiveness of the proposed method through numerical experiments on limited-angle and single-shot computerized tomography problems, highlighting improvements in reconstruction accuracy, memory efficiency, and computational cost.
|
https://arxiv.org/abs/2601.13347
|
Academic Papers
|
svg
|
29d3f50fbaed18fd428a480605cbbf4c16c9ad58b9c29d95acbf92a72e6ba9c7
|
2026-01-21T00:00:00-05:00
|
The AI Genie Phenomenon and Three Types of AI Chatbot Addiction: Escapist Roleplays, Pseudosocial Companions, and Epistemic Rabbit Holes
|
arXiv:2601.13348v1 Announce Type: new Abstract: Recent reports on generative AI chatbot use raise concerns about its addictive potential. An in-depth understanding is imperative to minimize risks, yet AI chatbot addiction remains poorly understood. This study examines how to characterize AI chatbot addiction--why users become addicted, the symptoms commonly reported, and the distinct types it comprises. We conducted a thematic analysis of Reddit entries (n=334) across 14 subreddits where users narrated their experiences with addictive AI chatbot use, followed by an exploratory data analysis. We found: (1) users' dependence tied to the "AI Genie" phenomenon--users can get exactly anything they want with minimal effort--and marked by symptoms that align with addiction literature, (2) three distinct addiction types: Escapist Roleplay, Pseudosocial Companion, and Epistemic Rabbit Hole, (3) sexual content involved in multiple cases, and (4) recovery strategies' perceived helpfulness differ between addiction types. Our work lays empirical groundwork to inform future strategies for prevention, diagnosis, and intervention.
|
https://arxiv.org/abs/2601.13348
|
Academic Papers
|
svg
|
c684a3fff5b21a59307529a6d7138e1adb3a9f9026367bd89ccc0b0be021a40c
|
2026-01-21T00:00:00-05:00
|
Beyond Mapping : Domain-Invariant Representations via Spectral Embedding of Optimal Transport Plans
|
arXiv:2601.13350v1 Announce Type: new Abstract: Distributional shifts between training and inference time data remain a central challenge in machine learning, often leading to poor performance. It motivated the study of principled approaches for domain alignment, such as optimal transport based unsupervised domain adaptation, that relies on approximating Monge map using transport plans, which is sensitive to the transport problem regularization strategy and hyperparameters, and might yield biased domains alignment. In this work, we propose to interpret smoothed transport plans as adjacency matrices of bipartite graphs connecting source to target domain and derive domain-invariant samples' representations through spectral embedding. We evaluate our approach on acoustic adaptation benchmarks for music genre recognition, music-speech discrimination, as well as electrical cable defect detection and classification tasks using time domain reflection in different diagnosis settings, achieving overall strong performances.
|
https://arxiv.org/abs/2601.13350
|
Academic Papers
|
svg
|
cf3813fa64a9fa4c10d987a3950ae2102140372a6dd9643aebd5a27a5c950c43
|
2026-01-21T00:00:00-05:00
|
Towards Scalable Federated Container Orchestration: The CODECO Approach
|
arXiv:2601.13351v1 Announce Type: new Abstract: This paper presents CODECO, a federated orchestration framework for Kubernetes that addresses the limitations of cloud-centric deployment. CODECO adopts a data-compute-network co-orchestration approach to support heterogeneous infrastructures, mobility, and multi-provider operation. CODECO extends Kubernetes with semantic application models, partition-based federation, and AI-assisted decision support, enabling context-aware placement and adaptive management of applications and their micro-services across federated environments. A hybrid governance model combines centralized policy enforcement with decentralized execution and learning to preserve global coherence while supporting far Edge autonomy. The paper describes the architecture and core components of CODECO, outlines representative orchestration workflows, and introduces a software-based experimentation framework for reproducible evaluation in federated Edge-Cloud infrastructure environments.
|
https://arxiv.org/abs/2601.13351
|
Academic Papers
|
svg
|
8699c96aca5f758fabaac927e5af31e8f8cc4acfe376e367980a2dc63aa41fc2
|
2026-01-21T00:00:00-05:00
|
LLM-as-RNN: A Recurrent Language Model for Memory Updates and Sequence Prediction
|
arXiv:2601.13352v1 Announce Type: new Abstract: Large language models are strong sequence predictors, yet standard inference relies on immutable context histories. After making an error at generation step t, the model lacks an updatable memory mechanism that improves predictions for step t+1. We propose LLM-as-RNN, an inference-only framework that turns a frozen LLM into a recurrent predictor by representing its hidden state as natural-language memory. This state, implemented as a structured system-prompt summary, is updated at each timestep via feedback-driven text rewrites, enabling learning without parameter updates. Under a fixed token budget, LLM-as-RNN corrects errors and retains task-relevant patterns, effectively performing online learning through language. We evaluate the method on three sequential benchmarks in healthcare, meteorology, and finance across Llama, Gemma, and GPT model families. LLM-as-RNN significantly outperforms zero-shot, full-history, and MemPrompt baselines, improving predictive accuracy by 6.5% on average, while producing interpretable, human-readable learning traces absent in standard context accumulation.
|
https://arxiv.org/abs/2601.13352
|
Academic Papers
|
svg
|
88e10e762eec268f7b77b7142fb941090727486bfb3225f1d05df6c674ce04b9
|
2026-01-21T00:00:00-05:00
|
Guidelines for the Creation of an Annotated Corpus
|
arXiv:2601.13353v1 Announce Type: new Abstract: This document, based on feedback from UMR TETIS members and the scientific literature, provides a generic methodology for creating annotation guidelines and annotated textual datasets (corpora). It covers methodological aspects, as well as storage, sharing, and valorization of the data. It includes definitions and examples to clearly illustrate each step of the process, thus providing a comprehensive framework to support the creation and use of corpora in various research contexts.
|
https://arxiv.org/abs/2601.13353
|
Academic Papers
|
svg
|
c5d15e04793edaa9f80c679e11a091c2baad41dfa5f4aa764efbed2b6a64edab
|
2026-01-21T00:00:00-05:00
|
Remote Triggers: Misophonia, Technology Non-Use, and Design for Inclusive Digital Spaces
|
arXiv:2601.13355v1 Announce Type: new Abstract: Misophonia, characterized by intense negative reactions to specific sounds or related visual cues, remains poorly recognized in clinical settings yet profoundly affects daily life. This study examines how individuals with misophonia experience and sometimes avoid technology that amplifies their triggers. Drawing on 16 semi-structured interviews with U.S. adults recruited from online communities, we explore how social media platforms such as TikTok and Instagram, along with remote communication tools like Zoom and Discord, shape coping strategies and patterns of non-use. Participants described frequent distress from uncontrollable audiovisual content and food-related behaviors during virtual gatherings. We propose design interventions -- including channel-specific audio-visual controls, real-time trigger detection, and shared preference tools -- to better support misophonic users and reduce exclusion in increasingly mediated social and professional contexts.
|
https://arxiv.org/abs/2601.13355
|
Academic Papers
|
svg
|
9df5ff7b7dfbfa0e8e42022c990e87e701656006ed3660d458a908a9e60b610a
|
2026-01-21T00:00:00-05:00
|
On the Relation of State Space Models and Hidden Markov Models
|
arXiv:2601.13357v1 Announce Type: new Abstract: State Space Models (SSMs) and Hidden Markov Models (HMMs) are foundational frameworks for modeling sequential data with latent variables and are widely used in signal processing, control theory, and machine learning. Despite their shared temporal structure, they differ fundamentally in the nature of their latent states, probabilistic assumptions, inference procedures, and training paradigms. Recently, deterministic state space models have re-emerged in natural language processing through architectures such as S4 and Mamba, raising new questions about the relationship between classical probabilistic SSMs, HMMs, and modern neural sequence models. In this paper, we present a unified and systematic comparison of HMMs, linear Gaussian state space models, Kalman filtering, and contemporary NLP state space models. We analyze their formulations through the lens of probabilistic graphical models, examine their inference algorithms -- including forward-backward inference and Kalman filtering -- and contrast their learning procedures via Expectation-Maximization and gradient-based optimization. By highlighting both structural similarities and semantic differences, we clarify when these models are equivalent, when they fundamentally diverge, and how modern NLP SSMs relate to classical probabilistic models. Our analysis bridges perspectives from control theory, probabilistic modeling, and modern deep learning.
|
https://arxiv.org/abs/2601.13357
|
Academic Papers
|
svg
|
acb6b2a7e4118b9d4316bb5a0a78aec6384cbc786d667e1b466c241d0a68a909
|
2026-01-21T00:00:00-05:00
|
The Geometry of Thought: How Scale Restructures Reasoning In Large Language Models
|
arXiv:2601.13358v1 Announce Type: new Abstract: Scale does not uniformly improve reasoning - it restructures it. Analyzing 25,000+ chain-of-thought trajectories across four domains (Law, Science, Code, Math) and two scales (8B, 70B parameters), we discover that neural scaling laws trigger domain-specific phase transitions rather than uniform capability gains. Legal reasoning undergoes Crystallization: 45% collapse in representational dimensionality (d95: 501 -> 274), 31% increase in trajectory alignment, and 10x manifold untangling. Scientific and mathematical reasoning remain Liquid - geometrically invariant despite 9x parameter increase. Code reasoning forms a discrete Lattice of strategic modes (silhouette: 0.13 -> 0.42). This geometry predicts learnability. We introduce Neural Reasoning Operators - learned mappings from initial to terminal hidden states. In crystalline legal reasoning, our operator achieves 63.6% accuracy on held-out tasks via probe decoding, predicting reasoning endpoints without traversing intermediate states. We further identify a universal oscillatory signature (coherence ~ -0.4) invariant across domains and scales, suggesting attention and feedforward layers drive reasoning through opposing dynamics. These findings establish that the cost of thought is determined not by task difficulty but by manifold geometry - offering a blueprint for inference acceleration where topology permits.
|
https://arxiv.org/abs/2601.13358
|
Academic Papers
|
svg
|
5f7bca5929c786998e8bc544e92b198616d4f03b6cb600a7bde02c75d1332fbe
|
2026-01-21T00:00:00-05:00
|
Sockpuppetting: Jailbreaking LLMs Without Optimization Through Output Prefix Injection
|
arXiv:2601.13359v1 Announce Type: new Abstract: As open-weight large language models (LLMs) increase in capabilities, safeguarding them against malicious prompts and understanding possible attack vectors becomes ever more important. While automated jailbreaking methods like GCG [Zou et al., 2023] remain effective, they often require substantial computational resources and specific expertise. We introduce "sockpuppetting'', a simple method for jailbreaking open-weight LLMs by inserting an acceptance sequence (e.g., "Sure, here is how to...'') at the start of a model's output and allowing it to complete the response. Requiring only a single line of code and no optimization, sockpuppetting achieves up to 80% higher attack success rate (ASR) than GCG on Qwen3-8B in per-prompt comparisons. We also explore a hybrid approach that optimizes the adversarial suffix within the assistant message block rather than the user prompt, increasing ASR by 64% over GCG on Llama-3.1-8B in a prompt-agnostic setting. The results establish sockpuppetting as an effective low-cost attack accessible to unsophisticated adversaries, highlighting the need for defences against output-prefix injection in open-weight models.
|
https://arxiv.org/abs/2601.13359
|
Academic Papers
|
svg
|
7fc766a69a3bd8ce0199a679a56ad123b75a9f4eaea160ce6ef31d8a049c7715
|
2026-01-21T00:00:00-05:00
|
CLEAR: A Semantic-Geometric Terrain Abstraction for Large-Scale Unstructured Environments
|
arXiv:2601.13361v1 Announce Type: new Abstract: Long-horizon navigation in unstructured environments demands terrain abstractions that scale to tens of km$^2$ while preserving semantic and geometric structure, a combination existing methods fail to achieve. Grids scale poorly; quadtrees misalign with terrain boundaries; neither encodes landcover semantics essential for traversability-aware planning. This yields infeasible or unreliable paths for autonomous ground vehicles operating over 10+ km$^2$ under real-time constraints. CLEAR (Connected Landcover Elevation Abstract Representation) couples boundary-aware spatial decomposition with recursive plane fitting to produce convex, semantically aligned regions encoded as a terrain-aware graph. Evaluated on maps spanning 9-100~km$^2$ using a physics-based simulator, CLEAR achieves up to 10x faster planning than raw grids with only 6.7% cost overhead and delivers 6-9% shorter, more reliable paths than other abstraction baselines. These results highlight CLEAR's scalability and utility for long-range navigation in applications such as disaster response, defense, and planetary exploration.
|
https://arxiv.org/abs/2601.13361
|
Academic Papers
|
svg
|
3e30e1c91d736e7748c89c40a65a981e2a062545198aee46d5ed755d5df62c30
|
2026-01-21T00:00:00-05:00
|
Real-Time 4D Radar Perception for Robust Human Detection in Harsh Enclosed Environments
|
arXiv:2601.13364v1 Announce Type: new Abstract: This paper introduces a novel methodology for generating controlled, multi-level dust concentrations in a highly cluttered environment representative of harsh, enclosed environments, such as underground mines, road tunnels, or collapsed buildings, enabling repeatable mm-wave propagation studies under severe electromagnetic constraints. We also present a new 4D mmWave radar dataset, augmented by camera and LiDAR, illustrating how dust particles and reflective surfaces jointly impact the sensing functionality. To address these challenges, we develop a threshold-based noise filtering framework leveraging key radar parameters (RCS, velocity, azimuth, elevation) to suppress ghost targets and mitigate strong multipath reflections at the raw data level. Building on the filtered point clouds, a cluster-level, rule-based classification pipeline exploits radar semantics-velocity, RCS, and volumetric spread-to achieve reliable, real-time pedestrian detection without extensive domainspecific training. Experimental results confirm that this integrated approach significantly enhances clutter mitigation, detection robustness, and overall system resilience in dust-laden mining environments.
|
https://arxiv.org/abs/2601.13364
|
Academic Papers
|
svg
|
2fdcba0f100c28a330bafb9378d37be88845481219d620419d0a2270d4c8e921
|
2026-01-21T00:00:00-05:00
|
CausationEntropy: Pythonic Optimal Causation Entropy
|
arXiv:2601.13365v1 Announce Type: new Abstract: Optimal Causation Entropy (oCSE) is a robust causal network modeling technique that reveals causal networks from dynamical systems and coupled oscillators, distinguishing direct from indirect paths. CausationEntropy is a Python package that implements oCSE and several of its significant optimizations and methodological extensions. In this paper, we introduce the version 1.1 release of CausationEntropy, which includes new synthetic data generators, plotting tools, and several advanced information-theoretical causal network discovery algorithms with criteria for estimating Gaussian, k-nearest neighbors (kNN), geometric k-nearest neighbors (geometric-kNN), kernel density (KDE) and Poisson entropic estimators. The package is easy to install from the PyPi software repository, is thoroughly documented, supplemented with extensive code examples, and is modularly structured to support future additions. The entire codebase is released under the MIT license and is available on GitHub and through PyPi Repository. We expect this package to serve as a benchmark tool for causal discovery in complex dynamical systems.
|
https://arxiv.org/abs/2601.13365
|
Academic Papers
|
svg
|
1eac6ad6608a8142f7bee38577c8dec073323df8be45df83597ab86063993986
|
2026-01-21T00:00:00-05:00
|
Recurrent Confidence Chain: Temporal-Aware Uncertainty Quantification in Large Language Models
|
arXiv:2601.13368v1 Announce Type: new Abstract: As reasoning modules, such as the chain-of-thought mechanism, are applied to large language models, they achieve strong performance on various tasks such as answering common-sense questions and solving math problems. The main challenge now is to assess the uncertainty of answers, which can help prevent misleading or serious hallucinations for users. Although current methods analyze long reasoning sequences by filtering unrelated tokens and examining potential connections between nearby tokens or sentences, the temporal spread of confidence is often overlooked. This oversight can lead to inflated overall confidence, even when earlier steps exhibit very low confidence. To address this issue, we propose a novel method that incorporates inter-step attention to analyze semantic correlations across steps. For handling long-horizon responses, we introduce a hidden confidence mechanism to retain historical confidence information, which is then combined with stepwise confidence to produce a more accurate overall estimate. We evaluate our method on the GAOKAO math benchmark and the CLadder causal reasoning dataset using mainstream open-source large language models. Our approach is shown to outperform state-of-the-art methods by achieving a superior balance between predictive quality and calibration, demonstrated by strong performance on both Negative Log-Likelihood and Expected Calibration Error.
|
https://arxiv.org/abs/2601.13368
|
Academic Papers
|
svg
|
c9111ad0ac04b99e59cb9c48f2a67b456a3d8d2eca01ea0c974daec53d5ae074
|
2026-01-21T00:00:00-05:00
|
Spherical Geometry Diffusion: Generating High-quality 3D Face Geometry via Sphere-anchored Representations
|
arXiv:2601.13371v1 Announce Type: new Abstract: A fundamental challenge in text-to-3D face generation is achieving high-quality geometry. The core difficulty lies in the arbitrary and intricate distribution of vertices in 3D space, making it challenging for existing models to establish clean connectivity and resulting in suboptimal geometry. To address this, our core insight is to simplify the underlying geometric structure by constraining the distribution onto a simple and regular manifold, a topological sphere. Building on this, we first propose the Spherical Geometry Representation, a novel face representation that anchors geometric signals to uniform spherical coordinates. This guarantees a regular point distribution, from which the mesh connectivity can be robustly reconstructed. Critically, this canonical sphere can be seamlessly unwrapped into a 2D map, creating a perfect synergy with powerful 2D generative models. We then introduce Spherical Geometry Diffusion, a conditional diffusion framework built upon this 2D map. It enables diverse and controllable generation by jointly modeling geometry and texture, where the geometry explicitly conditions the texture synthesis process. Our method's effectiveness is demonstrated through its success in a wide range of tasks: text-to-3D generation, face reconstruction, and text-based 3D editing. Extensive experiments show that our approach substantially outperforms existing methods in geometric quality, textual fidelity, and inference efficiency.
|
https://arxiv.org/abs/2601.13371
|
Academic Papers
|
svg
|
49c5a8f1c6be6c0f344acbbabc216bf303be0757404bf9d0e4afb1a037c91847
|
2026-01-21T00:00:00-05:00
|
Influence of Normative Theories of Ethics on the European Union Artificial Intelligence Act: A Transformer-Based Analysis Using Semantic Textual Similarity
|
arXiv:2601.13372v1 Announce Type: new Abstract: This study investigates the ethical grounding of the European Union Artificial Intelligence (EU AI) Act by using Semantic Textual Similarity (STS) to analyze the alignment between normative ethical theories and regulatory language. Despite being regarded as a significant step toward regulating Artificial Intelligence (AI) systems and its emphasis on fundamental rights, the EU AI Act is not immune to moral criticism regarding its ethical foundations. Our work examines the impact of three major normative theories of ethics, virtue ethics, deontological ethics, and consequentialism, on the EU AI Act. We introduce the concept of influence, grounded in philosophical and chronological analysis, to examine the underlying relationship between the theories and the Act. As a proxy measure of this influence, we propose using STS to quantify the degree of alignment between the theories (influencers) and the Act (influencee). To capture intentional and operational ethical consistency, the Act was divided into two parts: the preamble and the statutory provisions. The textual descriptions of the theories were manually preprocessed to reduce semantic overlap and ensure a distinct representation of each theory. A heterogeneous embedding-level ensemble approach was employed, using five modified Bidirectional Encoder Representations from Transformers (BERT) models built on the Transformer architecture to compute STS scores. These scores reflect the semantic alignment between various theories of ethics and the two components of the EU AI Act. The resulting similarity scores were evaluated using voting and averaging, with findings indicating that deontological ethics has the most significant overall influence.
|
https://arxiv.org/abs/2601.13372
|
Academic Papers
|
svg
|
b371bd6a7ebab250e527b273f0f47796e296786bb9a7c97d49e7aae0e39ceba6
|
2026-01-21T00:00:00-05:00
|
A Lightweight Model-Driven 4D Radar Framework for Pervasive Human Detection in Harsh Conditions
|
arXiv:2601.13373v1 Announce Type: new Abstract: Pervasive sensing in industrial and underground environments is severely constrained by airborne dust, smoke, confined geometry, and metallic structures, which rapidly degrade optical and LiDAR based perception. Elevation resolved 4D mmWave radar offers strong resilience to such conditions, yet there remains a limited understanding of how to process its sparse and anisotropic point clouds for reliable human detection in enclosed, visibility degraded spaces. This paper presents a fully model-driven 4D radar perception framework designed for real-time execution on embedded edge hardware. The system uses radar as its sole perception modality and integrates domain aware multi threshold filtering, ego motion compensated temporal accumulation, KD tree Euclidean clustering with Doppler aware refinement, and a rule based 3D classifier. The framework is evaluated in a dust filled enclosed trailer and in real underground mining tunnels, and in the tested scenarios the radar based detector maintains stable pedestrian identification as camera and LiDAR modalities fail under severe visibility degradation. These results suggest that the proposed model-driven approach provides robust, interpretable, and computationally efficient perception for safety-critical applications in harsh industrial and subterranean environments.
|
https://arxiv.org/abs/2601.13373
|
Academic Papers
|
svg
|
bbe662326bfb0d6898d6ac2a5211bd24b55c3267b537809bf242e03bac2dc8c5
|
2026-01-21T00:00:00-05:00
|
Bounded Minds, Generative Machines: Envisioning Conversational AI that Works with Human Heuristics and Reduces Bias Risk
|
arXiv:2601.13376v1 Announce Type: new Abstract: Conversational AI is rapidly becoming a primary interface for information seeking and decision making, yet most systems still assume idealized users. In practice, human reasoning is bounded by limited attention, uneven knowledge, and reliance on heuristics that are adaptive but bias-prone. This article outlines a research pathway grounded in bounded rationality, and argues that conversational AI should be designed to work with human heuristics rather than against them. It identifies key directions for detecting cognitive vulnerability, supporting judgment under uncertainty, and evaluating conversational systems beyond factual accuracy, toward decision quality and cognitive robustness.
|
https://arxiv.org/abs/2601.13376
|
Academic Papers
|
svg
|
b244b1c88c04bde64919be1b5153220c25c46e67f85726ac5af62ea36ba8ac32
|
2026-01-21T00:00:00-05:00
|
Practical Insights into Semi-Supervised Object Detection Approaches
|
arXiv:2601.13380v1 Announce Type: new Abstract: Learning in data-scarce settings has recently gained significant attention in the research community. Semi-supervised object detection(SSOD) aims to improve detection performance by leveraging a large number of unlabeled images alongside a limited number of labeled images(a.k.a.,few-shot learning). In this paper, we present a comprehensive comparison of three state-of-the-art SSOD approaches, including MixPL, Semi-DETR and Consistent-Teacher, with the goal of understanding how performance varies with the number of labeled images. We conduct experiments using the MS-COCO and Pascal VOC datasets, two popular object detection benchmarks which allow for standardized evaluation. In addition, we evaluate the SSOD approaches on a custom Beetle dataset which enables us to gain insights into their performance on specialized datasets with a smaller number of object categories. Our findings highlight the trade-offs between accuracy, model size, and latency, providing insights into which methods are best suited for low-data regimes.
|
https://arxiv.org/abs/2601.13380
|
Academic Papers
|
svg
|
25fa58db29ecfe26a6c48b7eb6bf4a2d9ba04cb91f26ce93e50d4d4d0486d499
|
2026-01-21T00:00:00-05:00
|
A Lightweight Modular Framework for Constructing Autonomous Agents Driven by Large Language Models: Design, Implementation, and Applications in AgentForge
|
arXiv:2601.13383v1 Announce Type: new Abstract: The emergence of LLMs has catalyzed a paradigm shift in autonomous agent development, enabling systems capable of reasoning, planning, and executing complex multi-step tasks. However, existing agent frameworks often suffer from architectural rigidity, vendor lock-in, and prohibitive complexity that impedes rapid prototyping and deployment. This paper presents AgentForge, a lightweight, open-source Python framework designed to democratize the construction of LLM-driven autonomous agents through a principled modular architecture. AgentForge introduces three key innovations: (1) a composable skill abstraction that enables fine-grained task decomposition with formally defined input-output contracts, (2) a unified LLM backend interface supporting seamless switching between cloud-based APIs and local inference engines, and (3) a declarative YAML-based configuration system that separates agent logic from implementation details. We formalize the skill composition mechanism as a directed acyclic graph (DAG) and prove its expressiveness for representing arbitrary sequential and parallel task workflows. Comprehensive experimental evaluation across four benchmark scenarios demonstrates that AgentForge achieves competitive task completion rates while reducing development time by 62% compared to LangChain and 78% compared to direct API integration. Latency measurements confirm sub-100ms orchestration overhead, rendering the framework suitable for real-time applications. The modular design facilitates extension: we demonstrate the integration of six built-in skills and provide comprehensive documentation for custom skill development. AgentForge addresses a critical gap in the LLM agent ecosystem by providing researchers and practitioners with a production-ready foundation for constructing, evaluating, and deploying autonomous agents without sacrificing flexibility or performance.
|
https://arxiv.org/abs/2601.13383
|
Academic Papers
|
svg
|
1fd53d69018a2ba494e2ae4f1c87c52a002e02448a77b3b7e37ba0a8a28142fe
|
2026-01-21T00:00:00-05:00
|
From Completion to Editing: Unlocking Context-Aware Code Infilling via Search-and-Replace Instruction Tuning
|
arXiv:2601.13384v1 Announce Type: new Abstract: The dominant Fill-in-the-Middle (FIM) paradigm for code completion is constrained by its rigid inability to correct contextual errors and reliance on unaligned, insecure Base models. While Chat LLMs offer safety and Agentic workflows provide flexibility, they suffer from performance degradation and prohibitive latency, respectively. To resolve this dilemma, we propose Search-and-Replace Infilling (SRI), a framework that internalizes the agentic verification-and-editing mechanism into a unified, single-pass inference process. By structurally grounding edits via an explicit search phase, SRI harmonizes completion tasks with the instruction-following priors of Chat LLMs, extending the paradigm from static infilling to dynamic context-aware editing. We synthesize a high-quality dataset, SRI-200K, and fine-tune the SRI-Coder series. Extensive evaluations demonstrate that with minimal data (20k samples), SRI-Coder enables Chat models to surpass the completion performance of their Base counterparts. Crucially, unlike FIM-style tuning, SRI preserves general coding competencies and maintains inference latency comparable to standard FIM. We empower the entire Qwen3-Coder series with SRI, encouraging the developer community to leverage this framework for advanced auto-completion and assisted development.
|
https://arxiv.org/abs/2601.13384
|
Academic Papers
|
svg
|
79ba971296115794685fa132ad5f9361b3d3a722a3942f4c1a3637c3e7de77bc
|
2026-01-21T00:00:00-05:00
|
Organ-Aware Attention Improves CT Triage and Classification
|
arXiv:2601.13385v1 Announce Type: new Abstract: There is an urgent need for triage and classification of high-volume medical imaging modalities such as computed tomography (CT), which can improve patient care and mitigate radiologist burnout. Study-level CT triage requires calibrated predictions with localized evidence; however, off-the-shelf Vision Language Models (VLM) struggle with 3D anatomy, protocol shifts, and noisy report supervision. This study used the two largest publicly available chest CT datasets: CT-RATE and RADCHEST-CT (held-out external test set). Our carefully tuned supervised baseline (instantiated as a simple Global Average Pooling head) establishes a new supervised state of the art, surpassing all reported linear-probe VLMs. Building on this baseline, we present ORACLE-CT, an encoder-agnostic, organ-aware head that pairs Organ-Masked Attention (mask-restricted, per-organ pooling that yields spatial evidence) with Organ-Scalar Fusion (lightweight fusion of normalized volume and mean-HU cues). In the chest setting, ORACLE-CT masked attention model achieves AUROC 0.86 on CT-RATE; in the abdomen setting, on MERLIN (30 findings), our supervised baseline exceeds a reproduced zero-shot VLM baseline obtained by running publicly released weights through our pipeline, and adding masked attention plus scalar fusion further improves performance to AUROC 0.85. Together, these results deliver state-of-the-art supervised classification performance across both chest and abdomen CT under a unified evaluation protocol. The source code is available at https://github.com/lavsendahal/oracle-ct.
|
https://arxiv.org/abs/2601.13385
|
Academic Papers
|
svg
|
c3a7119cbc80590c7ed18bd602eb9130806afaade176bf1262ec1ee2fc0fc26b
|
2026-01-21T00:00:00-05:00
|
Leveraging Transformer Decoder for Automotive Radar Object Detection
|
arXiv:2601.13386v1 Announce Type: new Abstract: In this paper, we present a Transformer-based architecture for 3D radar object detection that uses a novel Transformer Decoder as the prediction head to directly regress 3D bounding boxes and class scores from radar feature representations. To bridge multi-scale radar features and the decoder, we propose Pyramid Token Fusion (PTF), a lightweight module that converts a feature pyramid into a unified, scale-aware token sequence. By formulating detection as a set prediction problem with learnable object queries and positional encodings, our design models long-range spatial-temporal correlations and cross-feature interactions. This approach eliminates dense proposal generation and heuristic post-processing such as extensive non-maximum suppression (NMS) tuning. We evaluate the proposed framework on the RADDet, where it achieves significant improvements over state-of-the-art radar-only baselines.
|
https://arxiv.org/abs/2601.13386
|
Academic Papers
|
svg
|
0143fc38a448f60fa8affc3e06ed08ba01e393034a82305c0bf9abb266cfebb7
|
2026-01-21T00:00:00-05:00
|
Confidence over Time: Confidence Calibration with Temporal Logic for Large Language Model Reasoning
|
arXiv:2601.13387v1 Announce Type: new Abstract: Large Language Models (LLMs) increasingly rely on long-form, multi-step reasoning to solve complex tasks such as mathematical problem solving and scientific question answering. Despite strong performance, existing confidence estimation methods typically reduce an entire reasoning process to a single scalar score, ignoring how confidence evolves throughout the generation. As a result, these methods are often sensitive to superficial factors such as response length or verbosity, and struggle to distinguish correct reasoning from confidently stated errors. We propose to characterize the stepwise confidence signal using Signal Temporal Logic (STL). Using a discriminative STL mining procedure, we discover temporal formulas that distinguish confidence signals of correct and incorrect responses. Our analysis found that the STL patterns generalize across tasks, and numeric parameters exhibit sensitivity to individual questions. Based on these insights, we develop a confidence estimation approach that informs STL blocks with parameter hypernetworks. Experiments on multiple reasoning tasks show our confidence scores are more calibrated than the baselines.
|
https://arxiv.org/abs/2601.13387
|
Academic Papers
|
svg
|
6c9a46f1cac37cad19ece4b3f2a21925d231a6286bb1bd524ba4411212e2559c
|
2026-01-21T00:00:00-05:00
|
Structured Insight from Unstructured Data: Large Language Models for SDOH-Driven Diabetes Risk Prediction
|
arXiv:2601.13388v1 Announce Type: new Abstract: Social determinants of health (SDOH) play a critical role in Type 2 Diabetes (T2D) management but are often absent from electronic health records and risk prediction models. Most individual-level SDOH data is collected through structured screening tools, which lack the flexibility to capture the complexity of patient experiences and unique needs of a clinic's population. This study explores the use of large language models (LLMs) to extract structured SDOH information from unstructured patient life stories and evaluate the predictive value of both the extracted features and the narratives themselves for assessing diabetes control. We collected unstructured interviews from 65 T2D patients aged 65 and older, focused on their lived experiences, social context, and diabetes management. These narratives were analyzed using LLMs with retrieval-augmented generation to produce concise, actionable qualitative summaries for clinical interpretation and structured quantitative SDOH ratings for risk prediction modeling. The structured SDOH ratings were used independently and in combination with traditional laboratory biomarkers as inputs to linear and tree-based machine learning models (Ridge, Lasso, Random Forest, and XGBoost) to demonstrate how unstructured narrative data can be applied in conventional risk prediction workflows. Finally, we evaluated several LLMs on their ability to predict a patient's level of diabetes control (low, medium, high) directly from interview text with A1C values redacted. LLMs achieved 60% accuracy in predicting diabetes control levels from interview text. This work demonstrates how LLMs can translate unstructured SDOH-related data into structured insights, offering a scalable approach to augment clinical risk models and decision-making.
|
https://arxiv.org/abs/2601.13388
|
Academic Papers
|
svg
|
228e4d035e64f2bd940d359c904abfc5c8bac26cb0447c5ce81e483fccc1ac8b
|
2026-01-21T00:00:00-05:00
|
Robustness and Resilience Evaluation of Eco-Driving Strategies at Signalized Intersections
|
arXiv:2601.13389v1 Announce Type: new Abstract: Eco-driving strategies have demonstrated substantial potential for improving energy efficiency and reducing emissions, especially at signalized intersections. However, evaluations of eco-driving methods typically rely on simplified simulation or experimental conditions, where certain assumptions are made to manage complexity and experimental control. This study introduces a unified framework to evaluate eco-driving strategies through the lens of two complementary criteria: control robustness and environmental resilience. We define formal indicators that quantify performance degradation caused by internal execution variability and external environmental disturbances, respectively. These indicators are then applied to assess multiple eco-driving controllers through real-world vehicle experiments. The results reveal key tradeoffs between tracking accuracy and adaptability, showing that optimization-based controllers offer more consistent performance across varying disturbance levels, while analytical controllers may perform comparably under nominal conditions but exhibit greater sensitivity to execution and timing variability.
|
https://arxiv.org/abs/2601.13389
|
Academic Papers
|
svg
|
e940ca91c25ca2a92c32d9730a8e6ae707cd2a5926fdf0a76a50f1df4c57ac87
|
2026-01-21T00:00:00-05:00
|
Beyond Memorization: Testing LLM Reasoning on Unseen Theory of Computation Tasks
|
arXiv:2601.13392v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated strong performance on formal language tasks, yet whether this reflects genuine symbolic reasoning or pattern matching on familiar constructions remains unclear. We introduce a benchmark for deterministic finite automata (DFA) construction from regular languages, comprising factual knowledge questions, seen construction problems from public sources, and two types of unseen problems: hand-crafted instances with multiple interacting constraints and systematically generated problems via Arden's theorem. Models achieve perfect accuracy on factual questions and 84-90% on seen tasks. However, accuracy drops sharply on unseen problems (by 30-64%), with failures stemming from systematic misinterpretation of language constraints, incorrect handling of Kleene-star semantics, and a failure to preserve global consistency. We evaluate a three-stage hint protocol that enables correction of shallow errors but does not reliably resolve globally inconsistent or structurally flawed automata. Our analysis across multiple prompting strategies (direct, Chain-of-Thought, Tree-of-Thought) reveals that errors persist regardless of prompting approach, exposing a fundamental gap between LLMs' ability to generate syntactically plausible DFAs and their capacity for semantically correct formal reasoning.
|
https://arxiv.org/abs/2601.13392
|
Academic Papers
|
svg
|
dee116e96e87a758c09bf76c9d33b5856c45a63ceb6957ef43b355939d07445a
|
2026-01-21T00:00:00-05:00
|
Can LLMs Compress (and Decompress)? Evaluating Code Understanding and Execution via Invertibility
|
arXiv:2601.13398v1 Announce Type: new Abstract: LLMs demonstrate strong performance on code benchmarks, yet round-trip code execution reveals limitations in their ability to maintain consistent reasoning across forward and backward execution. We present RoundTripCodeEval (RTCE), a comprehensive benchmark consisting of four distinct code execution reasoning tasks designed to rigorously test round-trip consistency. RTCE provides an execution-free, exact-match evaluation of bijection fidelity, assessing whether models preserve a consistent one-to-one mapping between encoding and decoding operations across various algorithms and directions. We systematically evaluate state-of-the-art Code-LLMs using zero-shot prompting, supervised fine-tuning on execution traces, and self-reflection mechanisms. Each yields modest improvements, but none closes the gap, indicating that current LLMs struggle with true round-trip consistency, which demonstrates that they lack the internal coherence required for trustworthy code reasoning. RTCE surfaces several new and previously unmeasured insights that are not captured by existing I/O-prediction, execution-reasoning, or round-trip natural-language benchmarks. We will release the code and the dataset upon acceptance.
|
https://arxiv.org/abs/2601.13398
|
Academic Papers
|
svg
|
7338e1a946f2a64814743bde1fa30936633613ded159ec0f44eb71ff712249e9
|
2026-01-21T00:00:00-05:00
|
QERS: Quantum Encryption Resilience Score for Post-Quantum Cryptography in Computer, IoT, and IIoT Systems
|
arXiv:2601.13399v1 Announce Type: new Abstract: Post-quantum cryptography (PQC) is becoming essential for securing Internet of Things (IoT) and Industrial IoT (IIoT) systems against quantum-enabled adversaries. However, existing evaluation approaches primarily focus on isolated performance metrics, offering limited support for holistic security and deployment decisions. This paper introduces QERS (Quantum Encryption Resilience Score), a universal measurement framework that integrates cryptographic performance, system constraints, and multi-criteria decision analysis to assess PQC readiness in computer, IoT, and IIoT environments. QERS combines normalized metrics, weighted aggregation, and machine learning-assisted analysis to produce interpretable resilience scores across heterogeneous devices and communication protocols. Experimental results demonstrate how the framework enables comparative evaluation of post-quantum schemes under realistic resource constraints, supporting informed security design and migration planning. This work is presented as a preprint, with extended statistical validation planned as part of ongoing graduate research.
|
https://arxiv.org/abs/2601.13399
|
Academic Papers
|
svg
|
35994a97f9b8df017b9a99c5a8b2679b6ff7e9f09111f9bfad06e9bddb7833cb
|
2026-01-21T00:00:00-05:00
|
Deep Image Prior with L0 Gradient Regularizer for Image Smoothing
|
arXiv:2601.13400v1 Announce Type: new Abstract: Image smoothing is a fundamental image processing operation that preserves the underlying structure, such as strong edges and contours, and removes minor details and textures in an image. Many image smoothing algorithms rely on computing local window statistics or solving an optimization problem. Recent state-of-the-art methods leverage deep learning, but they require a carefully curated training dataset. Because constructing a proper training dataset for image smoothing is challenging, we propose DIP-$\ell_0$, a deep image prior framework that incorporates the $\ell_0$ gradient regularizer. This framework can perform high-quality image smoothing without any training data. To properly minimize the associated loss function that has the nonconvex, nonsmooth $\ell_0$ ``norm", we develop an alternating direction method of multipliers algorithm that utilizes an off-the-shelf $\ell_0$ gradient minimization solver. Numerical experiments demonstrate that the proposed DIP-$\ell_0$ outperforms many image smoothing algorithms in edge-preserving image smoothing and JPEG artifact removal.
|
https://arxiv.org/abs/2601.13400
|
Academic Papers
|
svg
|
ae2ab4c8828550ace533f716b5210ada4f7b2862ea524489ec3f3cbd15e7b9a0
|
2026-01-21T00:00:00-05:00
|
Reasoning with Pixel-level Precision: QVLM Architecture and SQuID Dataset for Quantitative Geospatial Analytics
|
arXiv:2601.13401v1 Announce Type: new Abstract: Current Vision-Language Models (VLMs) fail at quantitative spatial reasoning because their architectures destroy pixel-level information required for counting and measurements. Vision encoders compress images through patch embeddings, reducing spatial indexing and losing the precise pixel-level tracking required for accurate counting. We present two contributions to address this fundamental limitation. First, we introduce SQuID (Satellite Quantitative Intelligence Dataset), a benchmark of 2,000 satellite image Question-Answer pairs with both numerical range and categorical answers, designed to evaluate quantitative spatial reasoning. The dataset spans three difficulty tiers with annotations automatically generated from human labels and their learned variability. Second, we propose QVLM (Quantitative Vision-Language Model), a code-generation architecture that maintains pixel precision by decoupling language understanding from visual analysis. Instead of encoding images into embeddings, QVLM generates executable code that first calls a segmentation model to obtain pixel-level masks, then operates directly on these masks, preserving spatial indexing throughout the reasoning process. Our experiments show that QVLM using GPT-5 as coder achieves 42.0% accuracy on SQuID compared to 28.1% for a VLM prompted with image-question pairs. Our work reveals that, for quantitative spatial reasoning, architectural decoupling enables better accuracy on quantitative tasks.
|
https://arxiv.org/abs/2601.13401
|
Academic Papers
|
svg
|
f61745258222fbe9d0cda93dfdda7b14b78fd0af64e3cc97b544f6b14c367616
|
2026-01-21T00:00:00-05:00
|
Local-to-Global Logical Explanations for Deep Vision Models
|
arXiv:2601.13404v1 Announce Type: new Abstract: While deep neural networks are extremely effective at classifying images, they remain opaque and hard to interpret. We introduce local and global explanation methods for black-box models that generate explanations in terms of human-recognizable primitive concepts. Both the local explanations for a single image and the global explanations for a set of images are cast as logical formulas in monotone disjunctive-normal-form (MDNF), whose satisfaction guarantees that the model yields a high score on a given class. We also present an algorithm for explaining the classification of examples into multiple classes in the form of a monotone explanation list over primitive concepts. Despite their simplicity and interpretability we show that the explanations maintain high fidelity and coverage with respect to the blackbox models they seek to explain in challenging vision datasets.
|
https://arxiv.org/abs/2601.13404
|
Academic Papers
|
svg
|
db20da6b717452078b479e19f113bc53a69c8d7a18463f2546620eec98c1e842
|
2026-01-21T00:00:00-05:00
|
Integrating Virtual Reality and Large Language Models for Team-Based Non-Technical Skills Training and Evaluation in the Operating Room
|
arXiv:2601.13406v1 Announce Type: new Abstract: Although effective teamwork and communication are critical to surgical safety, structured training for non-technical skills (NTS) remains limited compared with technical simulation. The ACS/APDS Phase III Team-Based Skills Curriculum calls for scalable tools that both teach and objectively assess these competencies during laparoscopic emergencies. We introduce the Virtual Operating Room Team Experience (VORTeX), a multi-user virtual reality (VR) platform that integrates immersive team simulation with large language model (LLM) analytics to train and evaluate communication, decision-making, teamwork, and leadership. Team dialogue is analyzed using structured prompts derived from the Non-Technical Skills for Surgeons (NOTSS) framework, enabling automated classification of behaviors and generation of directed interaction graphs that quantify communication structure and hierarchy. Two laparoscopic emergency scenarios, pneumothorax and intra-abdominal bleeding, were implemented to elicit realistic stress and collaboration. Twelve surgical professionals completed pilot sessions at the 2024 SAGES conference, rating VORTeX as intuitive, immersive, and valuable for developing teamwork and communication. The LLM consistently produced interpretable communication networks reflecting expected operative hierarchies, with surgeons as central integrators, nurses as initiators, and anesthesiologists as balanced intermediaries. By integrating immersive VR with LLM-driven behavioral analytics, VORTeX provides a scalable, privacy-compliant framework for objective assessment and automated, data-informed debriefing across distributed training environments.
|
https://arxiv.org/abs/2601.13406
|
Academic Papers
|
svg
|
c8181cbde5da06814ad6f8e5c0fad16e42f0d4f75fe3d08284def2afb651a627
|
2026-01-21T00:00:00-05:00
|
Classifiers in High Dimensional Hilbert Metrics
|
arXiv:2601.13410v1 Announce Type: new Abstract: Classifying points in high dimensional spaces is a fundamental geometric problem in machine learning. In this paper, we address classifying points in the $d$-dimensional Hilbert polygonal metric. The Hilbert metric is a generalization of the Cayley-Klein hyperbolic distance to arbitrary convex bodies and has a diverse range of applications in machine learning and convex geometry. We first present an efficient LP-based algorithm in the metric for the large-margin SVM problem. Our algorithm runs in time polynomial to the number of points, bounding facets, and dimension. This is a significant improvement on previous works, which either provide no theoretical guarantees on running time, or suffer from exponential runtime. We also consider the closely related Funk metric. We also present efficient algorithms for the soft-margin SVM problem and for nearest neighbor-based classification in the Hilbert metric.
|
https://arxiv.org/abs/2601.13410
|
Academic Papers
|
svg
|
637d9cdc6a87580131c0a4c711a800056e0652ebc46a0279acbece1d98fc4d52
|
2026-01-21T00:00:00-05:00
|
Using deep learning for predicting cleansing quality of colon capsule endoscopy images
|
arXiv:2601.13412v1 Announce Type: new Abstract: In this study, we explore the application of deep learning techniques for predicting cleansing quality in colon capsule endoscopy (CCE) images. Using a dataset of 500 images labeled by 14 clinicians on the Leighton-Rex scale (Poor, Fair, Good, and Excellent), a ResNet-18 model was trained for classification, leveraging stratified K-fold cross-validation to ensure robust performance. To optimize the model, structured pruning techniques were applied iteratively, achieving significant sparsity while maintaining high accuracy. Explainability of the pruned model was evaluated using Grad-CAM, Grad-CAM++, Eigen-CAM, Ablation-CAM, and Random-CAM, with the ROAD method employed for consistent evaluation. Our results indicate that for a pruned model, we can achieve a cross-validation accuracy of 88% with 79% sparsity, demonstrating the effectiveness of pruning in improving efficiency from 84% without compromising performance. We also highlight the challenges of evaluating cleansing quality of CCE images, emphasize the importance of explainability in clinical applications, and discuss the challenges associated with using the ROAD method for our task. Finally, we employ a variant of adaptive temperature scaling to calibrate the pruned models for an external dataset.
|
https://arxiv.org/abs/2601.13412
|
Academic Papers
|
svg
|
c5cc5d9bc5ba9568c92f852947854c8821c4837e385f3a546f579d78ac6ee307
|
2026-01-21T00:00:00-05:00
|
Diffusion Representations for Fine-Grained Image Classification: A Marine Plankton Case Study
|
arXiv:2601.13416v1 Announce Type: new Abstract: Diffusion models have emerged as state-of-the-art generative methods for image synthesis, yet their potential as general-purpose feature encoders remains underexplored. Trained for denoising and generation without labels, they can be interpreted as self-supervised learners that capture both low- and high-level structure. We show that a frozen diffusion backbone enables strong fine-grained recognition by probing intermediate denoising features across layers and timesteps and training a linear classifier for each pair. We evaluate this in a real-world plankton-monitoring setting with practical impact, using controlled and comparable training setups against established supervised and self-supervised baselines. Frozen diffusion features are competitive with supervised baselines and outperform other self-supervised methods in both balanced and naturally long-tailed settings. Out-of-distribution evaluations on temporally and geographically shifted plankton datasets further show that frozen diffusion features maintain strong accuracy and Macro F1 under substantial distribution shift.
|
https://arxiv.org/abs/2601.13416
|
Academic Papers
|
svg
|
72f638fab9ccbb29ed0c9a58624cab4e485f6d21e679ab56936178b0264a99ac
|
2026-01-21T00:00:00-05:00
|
SGW-GAN: Sliced Gromov-Wasserstein Guided GANs for Retinal Fundus Image Enhancement
|
arXiv:2601.13417v1 Announce Type: new Abstract: Retinal fundus photography is indispensable for ophthalmic screening and diagnosis, yet image quality is often degraded by noise, artifacts, and uneven illumination. Recent GAN- and diffusion-based enhancement methods improve perceptual quality by aligning degraded images with high-quality distributions, but our analysis shows that this focus can distort intra-class geometry: clinically related samples become dispersed, disease-class boundaries blur, and downstream tasks such as grading or lesion detection are harmed. The Gromov Wasserstein (GW) discrepancy offers a principled solution by aligning distributions through internal pairwise distances, naturally preserving intra-class structure, but its high computational cost restricts practical use. To overcome this, we propose SGW-GAN, the first framework to incorporate Sliced GW (SGW) into retinal image enhancement. SGW approximates GW via random projections, retaining relational fidelity while greatly reducing cost. Experiments on public datasets show that SGW-GAN produces visually compelling enhancements, achieves superior diabetic retinopathy grading, and reports the lowest GW discrepancy across disease labels, demonstrating both efficiency and clinical fidelity for unpaired medical image enhancement.
|
https://arxiv.org/abs/2601.13417
|
Academic Papers
|
svg
|
246c12c581dfee907fd111722aee353cded992ff65fa837f5e8f39f267b3957f
|
2026-01-21T00:00:00-05:00
|
TrustEnergy: A Unified Framework for Accurate and Reliable User-level Energy Usage Prediction
|
arXiv:2601.13422v1 Announce Type: new Abstract: Energy usage prediction is important for various real-world applications, including grid management, infrastructure planning, and disaster response. Although a plethora of deep learning approaches have been proposed to perform this task, most of them either overlook the essential spatial correlations across households or fail to scale to individualized prediction, making them less effective for accurate fine-grained user-level prediction. In addition, due to the dynamic and uncertain nature of energy usage caused by various factors such as extreme weather events, quantifying uncertainty for reliable prediction is also significant, but it has not been fully explored in existing work. In this paper, we propose a unified framework called TrustEnergy for accurate and reliable user-level energy usage prediction. There are two key technical components in TrustEnergy, (i) a Hierarchical Spatiotemporal Representation module to efficiently capture both macro and micro energy usage patterns with a novel memory-augmented spatiotemporal graph neural network, and (ii) an innovative Sequential Conformalized Quantile Regression module to dynamically adjust uncertainty bounds to ensure valid prediction intervals over time, without making strong assumptions about the underlying data distribution. We implement and evaluate our TrustEnergy framework by working with an electricity provider in Florida, and the results show our TrustEnergy can achieve a 5.4% increase in prediction accuracy and 5.7% improvement in uncertainty quantification compared to state-of-the-art baselines.
|
https://arxiv.org/abs/2601.13422
|
Academic Papers
|
svg
|
66a32b0d991fbe9fd60f7f1d0061b14ec657e55787bcca2cb0d17850fb2e8611
|
2026-01-21T00:00:00-05:00
|
Quantum Encryption Resilience Score (QERS) for MQTT, HTTP, and HTTPS under Post-Quantum Cryptography in Computer, IoT, and IIoT Systems
|
arXiv:2601.13423v1 Announce Type: new Abstract: Post-quantum cryptography (PQC) introduces significant computational and communication overhead, which poses challenges for resource-constrained computer systems, Internet of Things (IoT), and Industrial IoT (IIoT) devices. This paper presents an experimental evaluation of the Quantum Encryption Resilience Score (QERS) applied to MQTT, HTTP, and HTTPS communication protocols operating under PQC. Using an ESP32-C6 client and an ARM-based Raspberry Pi CM4 server, latency, CPU utilization, RSSI, energy consumption, key size, and TLS handshake overhead are measured under realistic operating conditions. QERS integrates these heterogeneous metrics into normalized Basic, Tuned, and Fusion scores, enabling systematic comparison of protocol efficiency and security resilience. Experimental results show that MQTT provides the highest efficiency under PQC constraints, while HTTPS achieves the highest security-weighted resilience at the cost of increased latency and resource consumption. The proposed framework supports informed protocol selection and migration planning for PQC-enabled IoT and IIoT deployments.
|
https://arxiv.org/abs/2601.13423
|
Academic Papers
|
svg
|
a5fc881f93c81eaeac5fba54571f238f1248c60be0647c86bb07c1e624fe58de
|
2026-01-21T00:00:00-05:00
|
Driving Computational Efficiency in Large-Scale Platforms using HPC Technologies
|
arXiv:2601.13424v1 Announce Type: new Abstract: The Latin American Giant Observatory (LAGO) project utilizes extensive High-Performance Computing (HPC) resources for complex astroparticle physics simulations, making resource efficiency critical for scientific productivity and sustainability. This article presents a detailed analysis focused on quantifying and improving HPC resource utilization efficiency specifically within the LAGO computational environment. The core objective is to understand how LAGO's distinct computational workloads-characterized by a prevalent coarse-grained, task-parallel execution model-consume resources in practice. To achieve this, we analyze historical job accounting data from the EGI FedCloud platform, identifying primary workload categories (Monte Carlo simulations, data processing, user analysis/testing) and evaluating their performance using key efficiency metrics (CPU utilization, walltime utilization, and I/O patterns). Our analysis reveals significant patterns, including high CPU efficiency within individual simulation tasks contrasted with the distorting impact of short test jobs on aggregate metrics. This work pinpoints specific inefficiencies and provides data-driven insights into LAGO's HPC usage. The findings directly inform recommendations for optimizing resource requests, refining workflow management strategies, and guiding future efforts to enhance computational throughput, ultimately maximizing the scientific return from LAGO's HPC investments.
|
https://arxiv.org/abs/2601.13424
|
Academic Papers
|
svg
|
946175d8cc48fb70e58ad67cf8c907240333fc4d89efd2a72c8a6556c277fff8
|
2026-01-21T00:00:00-05:00
|
A Scientific Data Integrity system based on Blockchain
|
arXiv:2601.13425v1 Announce Type: new Abstract: In most High Performance Computing (HPC) projects nowadays, there is a lot of data obtained from different sources, depending on the project's objectives. Some of that data is very huge in terms of size, so copying such data sometimes is an unrealistic goal. On the other hand, science requires data used for different purposes to remain unaltered, so different groups of researchers can reproduce results, discuss theories, and validate each other. In this paper, we present a novel approach to help research groups to validate data integrity on such distributed repositories using Blockchain. Originally developed for cryptographic currencies, Blockchain has demonstrated a versatile range of uses. Our proposal ensures 1) secure access to data management, 2) easy validation of data integrity, and 3) an easy way to add new records to the dataset with the same robust integrity policy. A prototype was developed and tested using a subset of a public dataset from a real scientific collaboration, the Latin American Giant Observatory (LAGO) Project.
|
https://arxiv.org/abs/2601.13425
|
Academic Papers
|
svg
|
ddd893b358f57159105d19307cdee3a381a3183dc6c87e65a9ffa26cf85d7c8e
|
2026-01-21T00:00:00-05:00
|
Techniques of Modern Attacks
|
arXiv:2601.13427v1 Announce Type: new Abstract: The techniques used in modern attacks have become an important factor for investigation. As we advance further into the digital age, cyber attackers are employing increasingly sophisticated and highly threatening methods. These attacks target not only organizations and governments but also extend to private and corporate sectors. Modern attack techniques, such as lateral movement and ransomware, are designed to infiltrate networks and steal sensitive data. Among these techniques, Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets to steal high-value sensitive information or damage the infrastructure of the targeted organization. In this paper, I will investigate Advanced Persistent Threats (APTs) as a modern attack technique, focusing on both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I will analyze four representative papers to understand the evolution of APT detection mechanisms, including machine learning-driven behavioral analysis and network-level collaborative defense models. Through this comparative analysis, I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies. The study seeks to analyze the key characteristics of APTs and provide a comprehensive high-level understanding of APTs along with potential solutions to the threats they pose.
|
https://arxiv.org/abs/2601.13427
|
Academic Papers
|
svg
|
ae976d050b0f1f321ed8ce838ec13248d3e2850e469642e5b2362150b3afe748
|
2026-01-21T00:00:00-05:00
|
Trust Me, I'm an Expert: Decoding and Steering Authority Bias in Large Language Models
|
arXiv:2601.13433v1 Announce Type: new Abstract: Prior research demonstrates that performance of language models on reasoning tasks can be influenced by suggestions, hints and endorsements. However, the influence of endorsement source credibility remains underexplored. We investigate whether language models exhibit systematic bias based on the perceived expertise of the provider of the endorsement. Across 4 datasets spanning mathematical, legal, and medical reasoning, we evaluate 11 models using personas representing four expertise levels per domain. Our results reveal that models are increasingly susceptible to incorrect/misleading endorsements as source expertise increases, with higher-authority sources inducing not only accuracy degradation but also increased confidence in wrong answers. We also show that this authority bias is mechanistically encoded within the model and a model can be steered away from the bias, thereby improving its performance even when an expert gives a misleading endorsement.
|
https://arxiv.org/abs/2601.13433
|
Academic Papers
|
svg
|
c84d441cd8d99bbf3060c783b6a3f3d6359442200c464b98dd276dc3e4f0b4df
|
2026-01-21T00:00:00-05:00
|
A Learnable Wavelet Transformer for Long-Short Equity Trading and Risk-Adjusted Return Optimization
|
arXiv:2601.13435v1 Announce Type: new Abstract: Learning profitable intraday trading policies from financial time series is challenging due to heavy noise, non-stationarity, and strong cross-sectional dependence among related assets. We propose \emph{WaveLSFormer}, a learnable wavelet-based long-short Transformer that jointly performs multi-scale decomposition and return-oriented decision learning. Specifically, a learnable wavelet front-end generates low-/high-frequency components via an end-to-end trained filter bank, guided by spectral regularizers that encourage stable and well-separated frequency bands. To fuse multi-scale information, we introduce a low-guided high-frequency injection (LGHI) module that refines low-frequency representations with high-frequency cues while controlling training stability. The model outputs a portfolio of long/short positions that is rescaled to satisfy a fixed risk budget, and is optimized directly with a trading objective and risk-aware regularization. Extensive experiments on five years of hourly data across six industry groups, evaluated over ten random seeds, demonstrate that WaveLSFormer consistently outperforms MLP, LSTM and Transformer backbones, with and without fixed discrete wavelet front-ends. On average in all industries, WaveLSFormer achieves a cumulative overall strategy return of $0.607 \pm 0.045$ and a Sharpe ratio of $2.157 \pm 0.166$, substantially improving both profitability and risk-adjusted returns over the strongest baselines.
|
https://arxiv.org/abs/2601.13435
|
Academic Papers
|
svg
|
b8b9997a9d20efa7364c1005ee9ae6469b5f547208838f891a484d66e54d6dd7
|
2026-01-21T00:00:00-05:00
|
MOSLD-Bench: Multilingual Open-Set Learning and Discovery Benchmark for Text Categorization
|
arXiv:2601.13437v1 Announce Type: new Abstract: Open-set learning and discovery (OSLD) is a challenging machine learning task in which samples from new (unknown) classes can appear at test time. It can be seen as a generalization of zero-shot learning, where the new classes are not known a priori, hence involving the active discovery of new classes. While zero-shot learning has been extensively studied in text classification, especially with the emergence of pre-trained language models, open-set learning and discovery is a comparatively new setup for the text domain. To this end, we introduce the first multilingual open-set learning and discovery (MOSLD) benchmark for text categorization by topic, comprising 960K data samples across 12 languages. To construct the benchmark, we (i) rearrange existing datasets and (ii) collect new data samples from the news domain. Moreover, we propose a novel framework for the OSLD task, which integrates multiple stages to continuously discover and learn new classes. We evaluate several language models, including our own, to obtain results that can be used as reference for future work. We release our benchmark at https://github.com/Adriana19Valentina/MOSLD-Bench.
|
https://arxiv.org/abs/2601.13437
|
Academic Papers
|
svg
|
42f0aa19dfa178b20bd84e505359fc0333b48bd6b2dd1491406f97da1775cd84
|
2026-01-21T00:00:00-05:00
|
Analyzing VLM-Based Approaches for Anomaly Classification and Segmentation
|
arXiv:2601.13440v1 Announce Type: new Abstract: Vision-Language Models (VLMs), particularly CLIP, have revolutionized anomaly detection by enabling zero-shot and few-shot defect identification without extensive labeled datasets. By learning aligned representations of images and text, VLMs facilitate anomaly classification and segmentation through natural language descriptions of normal and abnormal states, eliminating traditional requirements for task-specific training or defect examples. This project presents a comprehensive analysis of VLM-based approaches for anomaly classification (AC) and anomaly segmentation (AS). We systematically investigate key architectural paradigms including sliding window-based dense feature extraction (WinCLIP), multi-stage feature alignment with learnable projections (AprilLab framework), and compositional prompt ensemble strategies. Our analysis evaluates these methods across critical dimensions: feature extraction mechanisms, text-visual alignment strategies, prompt engineering techniques, zero-shot versus few-shot trade-offs, computational efficiency, and cross-domain generalization. Through rigorous experimentation on benchmarks such as MVTec AD and VisA, we compare classification accuracy, segmentation precision, and inference efficiency. The primary contribution is a foundational understanding of how and why VLMs succeed in anomaly detection, synthesizing practical insights for method selection and identifying current limitations. This work aims to facilitate informed adoption of VLM-based methods in industrial quality control and guide future research directions.
|
https://arxiv.org/abs/2601.13440
|
Academic Papers
|
svg
|
025fb015e68025a12eb2129e494d862a8c11cfc09aa6ac5433c94675df0959f5
|
2026-01-21T00:00:00-05:00
|
Explicit Cognitive Allocation: A Principle for Governed and Auditable Inference in Large Language Models
|
arXiv:2601.13443v1 Announce Type: new Abstract: The rapid adoption of large language models (LLMs) has enabled new forms of AI-assisted reasoning across scientific, technical, and organizational domains. However, prevailing modes of LLM use remain cognitively unstructured: problem framing, knowledge exploration, retrieval, methodological awareness, and explanation are typically collapsed into a single generative process. This cognitive collapse limits traceability, weakens epistemic control, and undermines reproducibility, particularly in high-responsibility settings. We introduce Explicit Cognitive Allocation, a general principle for structuring AI-assisted inference through the explicit separation and orchestration of epistemic functions. We instantiate this principle in the Cognitive Universal Agent (CUA), an architecture that organizes inference into distinct stages of exploration and framing, epistemic anchoring, instrumental and methodological mapping, and interpretive synthesis. Central to this framework is the notion of Universal Cognitive Instruments (UCIs), which formalize heterogeneous means, including computational, experimental, organizational, regulatory, and educational instruments, through which abstract inquiries become investigable. We evaluate the effects of explicit cognitive and instrumental allocation through controlled comparisons between CUA-orchestrated inference and baseline LLM inference under matched execution conditions. Across multiple prompts in the agricultural domain, CUA inference exhibits earlier and structurally governed epistemic convergence, higher epistemic alignment under semantic expansion, and systematic exposure of the instrumental landscape of inquiry. In contrast, baseline LLM inference shows greater variability in alignment and fails to explicitly surface instrumental structure.
|
https://arxiv.org/abs/2601.13443
|
Academic Papers
|
svg
|
5f220a82d1c94313b320e49316bcc5b157e5814165b065fb4677c38ea9f6fb48
|
2026-01-21T00:00:00-05:00
|
BladeSDF : Unconditional and Conditional Generative Modeling of Representative Blade Geometries Using Signed Distance Functions
|
arXiv:2601.13445v1 Announce Type: new Abstract: Generative AI has emerged as a transformative paradigm in engineering design, enabling automated synthesis and reconstruction of complex 3D geometries while preserving feasibility and performance relevance. This paper introduces a domain-specific implicit generative framework for turbine blade geometry using DeepSDF, addressing critical gaps in performance-aware modeling and manufacturable design generation. The proposed method leverages a continuous signed distance function (SDF) representation to reconstruct and generate smooth, watertight geometries with quantified accuracy. It establishes an interpretable, near-Gaussian latent space that aligns with blade-relevant parameters, such as taper and chord ratios, enabling controlled exploration and unconditional synthesis through interpolation and Gaussian sampling. In addition, a compact neural network maps engineering descriptors, such as maximum directional strains, to latent codes, facilitating the generation of performance-informed geometry. The framework achieves high reconstruction fidelity, with surface distance errors concentrated within $1\%$ of the maximum blade dimension, and demonstrates robust generalization to unseen designs. By integrating constraints, objectives, and performance metrics, this approach advances beyond traditional 2D-guided or unconstrained 3D pipelines, offering a practical and interpretable solution for data-driven turbine blade modeling and concept generation.
|
https://arxiv.org/abs/2601.13445
|
Academic Papers
|
svg
|
abf282f16ef2b51faa288d779c9b090dbb83bb0380cd6eedce7bb388223f9cf2
|
2026-01-21T00:00:00-05:00
|
Fairness-informed Pareto Optimization : An Efficient Bilevel Framework
|
arXiv:2601.13448v1 Announce Type: new Abstract: Despite their promise, fair machine learning methods often yield Pareto-inefficient models, in which the performance of certain groups can be improved without degrading that of others. This issue arises frequently in traditional in-processing approaches such as fairness-through-regularization. In contrast, existing Pareto-efficient approaches are biased towards a certain perspective on fairness and fail to adapt to the broad range of fairness metrics studied in the literature. In this paper, we present BADR, a simple framework to recover the optimal Pareto-efficient model for any fairness metric. Our framework recovers its models through a Bilevel Adaptive Rescalarisation procedure. The lower level is a weighted empirical risk minimization task where the weights are a convex combination of the groups, while the upper level optimizes the chosen fairness objective. We equip our framework with two novel large-scale, single-loop algorithms, BADR-GD and BADR-SGD, and establish their convergence guarantees. We release badr, an open-source Python toolbox implementing our framework for a variety of learning tasks and fairness metrics. Finally, we conduct extensive numerical experiments demonstrating the advantages of BADR over existing Pareto-efficient approaches to fairness.
|
https://arxiv.org/abs/2601.13448
|
Academic Papers
|
svg
|
c118d23f1d54228b54ff0e38924a566e3cc3363555d6731751854e04a39cf350
|
2026-01-21T00:00:00-05:00
|
Event-based Heterogeneous Information Processing for Online Vision-based Obstacle Detection and Localization
|
arXiv:2601.13451v1 Announce Type: new Abstract: This paper introduces a novel framework for robotic vision-based navigation that integrates Hybrid Neural Networks (HNNs) with Spiking Neural Network (SNN)-based filtering to enhance situational awareness for unmodeled obstacle detection and localization. By leveraging the complementary strengths of Artificial Neural Networks (ANNs) and SNNs, the system achieves both accurate environmental understanding and fast, energy-efficient processing. The proposed architecture employs a dual-pathway approach: an ANN component processes static spatial features at low frequency, while an SNN component handles dynamic, event-based sensor data in real time. Unlike conventional hybrid architectures that rely on domain conversion mechanisms, our system incorporates a pre-developed SNN-based filter that directly utilizes spike-encoded inputs for localization and state estimation. Detected anomalies are validated using contextual information from the ANN pathway and continuously tracked to support anticipatory navigation strategies. Simulation results demonstrate that the proposed method offers acceptable detection accuracy while maintaining computational efficiency close to SNN-only implementations, which operate at a fraction of the resource cost. This framework represents a significant advancement in neuromorphic navigation systems for robots operating in unpredictable and dynamic environments.
|
https://arxiv.org/abs/2601.13451
|
Academic Papers
|
svg
|
df26d5a0836e930e1d2d7d5e59c9abd8a4e9274b433385908e0f0555132f05ee
|
2026-01-21T00:00:00-05:00
|
A simulation of urban incidents involving pedestrians and vehicles based on Weighted A*
|
arXiv:2601.13452v1 Announce Type: new Abstract: This document presents a comprehensive simulation framework designed to model urban incidents involving pedestrians and vehicles. Using a multiagent systems approach, two types of agents (pedestrians and vehicles) are introduced within a 2D grid based urban environment. The environment encodes streets, sidewalks, buildings, zebra crossings, and obstacles such as potholes and infrastructure elements. Each agent employs a weighted A* algorithm for pathfinding, allowing for variation in decision making behavior such as reckless movement or strict rule-following. The model aims to simulate interactions, assess risk of collisions, and evaluate efficiency under varying environmental and behavioral conditions. Experimental results explore how factors like obstacle density, presence of traffic control mechanisms, and behavioral deviations affect safety and travel efficiency.
|
https://arxiv.org/abs/2601.13452
|
Academic Papers
|
svg
|
467a967b1c92a8a82e1e1d959261b5f6bc5ea12232289acf766316fbc8fd5f49
|
2026-01-21T00:00:00-05:00
|
PhysicsSolutionAgent: Towards Multimodal Explanations for Numerical Physics Problem Solving
|
arXiv:2601.13453v1 Announce Type: new Abstract: Explaining numerical physics problems often requires more than text-based solutions; clear visual reasoning can substantially improve conceptual understanding. While large language models (LLMs) demonstrate strong performance on many physics questions in textual form, their ability to generate long, high-quality visual explanations remains insufficiently explored. In this work, we introduce PhysicsSolutionAgent (PSA), an autonomous agent that generates physics-problem explanation videos of up to six minutes using Manim animations. To evaluate the generated videos, we design an assessment pipeline that performs automated checks across 15 quantitative parameters and incorporates feedback from a vision-language model (VLM) to iteratively improve video quality. We evaluate PSA on 32 videos spanning numerical and theoretical physics problems. Our results reveal systematic differences in video quality depending on problem difficulty and whether the task is numerical or theoretical. Using GPT-5-mini, PSA achieves a 100% video-completion rate with an average automated score of 3.8/5. However, qualitative analysis and human inspection uncover both minor and major issues, including visual layout inconsistencies and errors in how visual content is interpreted during feedback. These findings expose key limitations in reliable Manim code generation and highlight broader challenges in multimodal reasoning and evaluation for visual explanations of numerical physics problems. Our work underscores the need for improved visual understanding, verification, and evaluation frameworks in future multimodal educational systems
|
https://arxiv.org/abs/2601.13453
|
Academic Papers
|
svg
|
911ac8bfeb69c8e4755ea3ca30826b738c4f506048ee947a3742ba94a185a3dd
|
2026-01-21T00:00:00-05:00
|
Federated Learning Under Temporal Drift -- Mitigating Catastrophic Forgetting via Experience Replay
|
arXiv:2601.13456v1 Announce Type: new Abstract: Federated Learning struggles under temporal concept drift where client data distributions shift over time. We demonstrate that standard FedAvg suffers catastrophic forgetting under seasonal drift on Fashion-MNIST, with accuracy dropping from 74% to 28%. We propose client-side experience replay, where each client maintains a small buffer of past samples mixed with current data during local training. This simple approach requires no changes to server aggregation. Experiments show that a 50-sample-per-class buffer restores performance to 78-82%, effectively preventing forgetting. Our ablation study reveals a clear memory-accuracy trade-off as buffer size increases.
|
https://arxiv.org/abs/2601.13456
|
Academic Papers
|
svg
|
0c23b6b77378cba940fc6b0132fd6e4425d37711159a178d1ca9f006ee979495
|
2026-01-21T00:00:00-05:00
|
A Tool for Automatically Cataloguing and Selecting Pre-Trained Models and Datasets for Software Engineering
|
arXiv:2601.13460v1 Announce Type: new Abstract: The rapid growth of machine learning assets has made it increasingly difficult for software engineers to identify models and datasets that match their specific needs. Browsing large registries, such as Hugging Face, is time-consuming, error-prone, and rarely tailored to Software Engineering (SE) tasks. We present MLAssetSelection, a web application that automatically extracts SE assets and supports four key functionalities: (i) a configurable leaderboard for ranking models across multiple benchmarks and metrics; (ii) requirements-based selection of models and datasets; (iii) real-time automated updates through scheduled jobs that keep asset information current; and (iv) user-centric features including login, personalized asset lists, and configurable alert notifications. A demonstration video is available at https://youtu.be/t6CJ6P9asV4.
|
https://arxiv.org/abs/2601.13460
|
Academic Papers
|
svg
|
c993842fe8b9e54bc53c3922dc733cc49ba630a91787c46a1fd3ac115ed795b7
|
2026-01-21T00:00:00-05:00
|
SpatialBench-UC: Uncertainty-Aware Evaluation of Spatial Prompt Following in Text-to-Image Generation
|
arXiv:2601.13462v1 Announce Type: new Abstract: Evaluating whether text-to-image models follow explicit spatial instructions is difficult to automate. Object detectors may miss targets or return multiple plausible detections, and simple geometric tests can become ambiguous in borderline cases. Spatial evaluation is naturally a selective prediction problem, the checker may abstain when evidence is weak and report confidence so that results can be interpreted as a risk coverage tradeoff rather than a single score. We introduce SpatialBench-UC, a small, reproducible benchmark for pairwise spatial relations. The benchmark contains 200 prompts (50 object pairs times 4 relations) grouped into 100 counterfactual pairs obtained by swapping object roles. We release a benchmark package, versioned prompts, pinned configs, per-sample checker outputs, and report tables, enabling reproducible and auditable comparisons across models. We also include a lightweight human audit used to calibrate the checker's abstention margin and confidence threshold. We evaluate three baselines, Stable Diffusion 1.5, SD 1.5 BoxDiff, and SD 1.4 GLIGEN. The checker reports pass rate and coverage as well as conditional pass rates on decided samples. The results show that grounding methods substantially improve both pass rate and coverage, while abstention remains a dominant factor due mainly to missing detections.
|
https://arxiv.org/abs/2601.13462
|
Academic Papers
|
svg
|
20be1846f962c373295c22d4e5d21087e2e6b0fa811e1f3f00a6747d77adcef3
|
2026-01-21T00:00:00-05:00
|
Quantum Qualifiers for Neural Network Model Selection in Hadronic Physics
|
arXiv:2601.13463v1 Announce Type: new Abstract: As quantum machine-learning architectures mature, a central challenge is no longer their construction, but identifying the regimes in which they offer practical advantages over classical approaches. In this work, we introduce a framework for addressing this question in data-driven hadronic physics problems by developing diagnostic tools - centered on a quantitative quantum qualifier - that guide model selection between classical and quantum deep neural networks based on intrinsic properties of the data. Using controlled classification and regression studies, we show how relative model performance follows systematic trends in complexity, noise, and dimensionality, and how these trends can be distilled into a predictive criterion. We then demonstrate the utility of this approach through an application to Compton form factor extraction from deeply virtual Compton scattering, where the quantum qualifier identifies kinematic regimes favorable to quantum models. Together, these results establish a principled framework for deploying quantum machine-learning tools in precision hadronic physics.
|
https://arxiv.org/abs/2601.13463
|
Academic Papers
|
svg
|
9e0ddb845f5237dec0f922e35f2e2a6cff4ddba3b344239863c714a394d90f8f
|
2026-01-21T00:00:00-05:00
|
Context and Transcripts Improve Detection of Deepfake Audios of Public Figures
|
arXiv:2601.13464v1 Announce Type: new Abstract: Humans use context to assess the veracity of information. However, current audio deepfake detectors only analyze the audio file without considering either context or transcripts. We create and analyze a Journalist-provided Deepfake Dataset (JDD) of 255 public deepfakes which were primarily contributed by over 70 journalists since early 2024. We also generate a synthetic audio dataset (SYN) of dead public figures and propose a novel Context-based Audio Deepfake Detector (CADD) architecture. In addition, we evaluate performance on two large-scale datasets: ITW and P$^2$V. We show that sufficient context and/or the transcript can significantly improve the efficacy of audio deepfake detectors. Performance (measured via F1 score, AUC, and EER) of multiple baseline audio deepfake detectors and traditional classifiers can be improved by 5%-37.58% in F1-score, 3.77%-42.79% in AUC, and 6.17%-47.83% in EER. We additionally show that CADD, via its use of context and/or transcripts, is more robust to 5 adversarial evasion strategies, limiting performance degradation to an average of just -0.71% across all experiments. Code, models, and datasets are available at our project page: https://sites.northwestern.edu/nsail/cadd-context-based-audio-deepfake-detection (access restricted during review).
|
https://arxiv.org/abs/2601.13464
|
Academic Papers
|
svg
|
dcd5f0ae69ceabc09041cb5df9e1d525dd9d99096b572a92844b5d1082b99390
|
2026-01-21T00:00:00-05:00
|
Graph Neural Networks are Heuristics
|
arXiv:2601.13465v1 Announce Type: new Abstract: We demonstrate that a single training trajectory can transform a graph neural network into an unsupervised heuristic for combinatorial optimization. Focusing on the Travelling Salesman Problem, we show that encoding global structural constraints as an inductive bias enables a non-autoregressive model to generate solutions via direct forward passes, without search, supervision, or sequential decision-making. At inference time, dropout and snapshot ensembling allow a single model to act as an implicit ensemble, reducing optimality gaps through increased solution diversity. Our results establish that graph neural networks do not require supervised training nor explicit search to be effective. Instead, they can internalize global combinatorial structure and function as strong, learned heuristics. This reframes the role of learning in combinatorial optimization: from augmenting classical algorithms to directly instantiating new heuristics.
|
https://arxiv.org/abs/2601.13465
|
Academic Papers
|
svg
|
73eb49d8cdf5d8333128804f89c8e3fc49918e5cc82724fa60c0ce80cd1522ca
|
2026-01-21T00:00:00-05:00
|
Governance Matters: Lessons from Restructuring the data.table OSS Project
|
arXiv:2601.13466v1 Announce Type: new Abstract: Open source software (OSS) forms the backbone of industrial data workflows and enterprise systems. However, many OSS projects face operational risks due to informal or centralized governance. This paper presents a practical case study of data.table, a high-performance R package widely adopted in production analytics pipelines, which underwent a community-led governance reform to address scalability and sustainability concerns. Before the reform, data.table faced a growing backlog of unresolved issues and open pull requests, unclear contributor pathways, and bottlenecks caused by reliance on a single core maintainer. In response, the community initiated a redesign of its governance structure. In this paper, we evaluated the impact of this transition through a mixed-methods approach, combining a contributor survey (n=17) with mining project repository data. Our results show that following the reform, the project experienced a 200% increase in new contributor recruitment, a drop in pull request resolution time from over 700 days to under a week, and a 3x increase in contributor retention. Community sentiment improved around transparency, onboarding, and project momentum, though concerns around fairness and conflict resolution remain. This case study provides practical guidance for maintainers, companies, and foundations seeking to enhance OSS governance.
|
https://arxiv.org/abs/2601.13466
|
Academic Papers
|
svg
|
2632d06ae95e81c96ebcf4e4fc1d573ddd4f716d2cd3a27aeca5d6fec0bc6f56
|
2026-01-21T00:00:00-05:00
|
Preconditioning Benefits of Spectral Orthogonalization in Muon
|
arXiv:2601.13474v1 Announce Type: new Abstract: The Muon optimizer, a matrix-structured algorithm that leverages spectral orthogonalization of gradients, is a milestone in the pretraining of large language models. However, the underlying mechanisms of Muon -- particularly the role of gradient orthogonalization -- remain poorly understood, with very few works providing end-to-end analyses that rigorously explain its advantages in concrete applications. We take a step by studying the effectiveness of a simplified variant of Muon through two case studies: matrix factorization, and in-context learning of linear transformers. For both problems, we prove that simplified Muon converges linearly with iteration complexities independent of the relevant condition number, provably outperforming gradient descent and Adam. Our analysis reveals that the Muon dynamics decouple into a collection of independent scalar sequences in the spectral domain, each exhibiting similar convergence behavior. Our theory formalizes the preconditioning effect induced by spectral orthogonalization, offering insight into Muon's effectiveness in these matrix optimization problems and potentially beyond.
|
https://arxiv.org/abs/2601.13474
|
Academic Papers
|
svg
|
c517c7fdfb8c3fcc9fe609a901246496e0e117a893be68fd831b0f7e4cc0aeae
|
2026-01-21T00:00:00-05:00
|
A Unified Variational Imputation Framework for Electric Vehicle Charging Data Using Retrieval-Augmented Language Model
|
arXiv:2601.13476v1 Announce Type: new Abstract: The reliability of data-driven applications in electric vehicle (EV) infrastructure, such as charging demand forecasting, hinges on the availability of complete, high-quality charging data. However, real-world EV datasets are often plagued by missing records, and existing imputation methods are ill-equipped for the complex, multimodal context of charging data, often relying on a restrictive one-model-per-station paradigm that ignores valuable inter-station correlations. To address these gaps, we develop a novel PRobabilistic variational imputation framework that leverages the power of large lAnguage models and retrIeval-augmented Memory (PRAIM). PRAIM employs a pre-trained language model to encode heterogeneous data, spanning time-series demand, calendar features, and geospatial context, into a unified, semantically rich representation. This is dynamically fortified by retrieval-augmented memory that retrieves relevant examples from the entire charging network, enabling a single, unified imputation model empowered by variational neural architecture to overcome data sparsity. Extensive experiments on four public datasets demonstrate that PRAIM significantly outperforms established baselines in both imputation accuracy and its ability to preserve the original data's statistical distribution, leading to substantial improvements in downstream forecasting performance.
|
https://arxiv.org/abs/2601.13476
|
Academic Papers
|
svg
|
c460559b103b85ae1afd6758665c463908f6dd47dd2fc31f8b0cdfa7554fe304
|
2026-01-21T00:00:00-05:00
|
Elias-type Bounds for Codes in the Symmetric Limited-Magnitude Error Channel
|
arXiv:2601.13477v1 Announce Type: new Abstract: We study perfect error-correcting codes in $\mathbb{Z}^n$ for the symmetric limited-magnitude error channel, where at most $e$ coordinates of an integer vector may be altered by a value whose magnitude is at most $s$. Geometrically, such codes correspond to tilings of $\mathbb{Z}^n$ by the symmetric limited-magnitude error ball $\mathcal{B}(n,e,s,s)$. Given $n$ and $s$, we adapt the geometric ideas underlying the Elias bound for the Hamming metric to the distance $d_s$ tailed for this channel, and derive new necessary conditions on $e$ for the existence of perfect codes / tilings, without assuming any lattice structure. Our main results identify two distinct regimes depending on the error magnitude. For small error magnitudes ($s \in \{1, 2\}$), we prove that if the number of correctable errors does not exceed a certain fraction of $n$, then it is asymptotically bounded by $e = \mathcal{O}(\sqrt{n \log n})$. In contrast, for larger magnitudes ($s \geq 3$), we establish a significantly sharper bound of $e < \sqrt{12.36n}$, which holds without any restriction on $e$ being below a given fraction of $n$. Finally, by extending our method to non-perfect codes, we derive an upper bound on packing density, showing that for codes correcting a linear or $\Omega(\sqrt{n})$ number of errors, the density is bounded by a factor inversely proportional to the error magnitude $s$.
|
https://arxiv.org/abs/2601.13477
|
Academic Papers
|
svg
|
4b9aba67a104598a9e8f355368f3989d7142ce244df16593c17f86419462d89a
|
2026-01-21T00:00:00-05:00
|
Exploring Learners' Expectations and Engagement When Collaborating with Constructively Controversial Peer Agents
|
arXiv:2601.13479v1 Announce Type: new Abstract: Peer agents can supplement real-time collaborative learning in asynchronous online courses. Constructive Controversy (CC) theory suggests that humans deepen their understanding of a topic by confronting and resolving controversies. This study explores whether CC's benefits apply to LLM-based peer agents, focusing on the impact of agents' disputatious behaviors and disclosure of agents' behavior designs on the learning process. In our mixed-method study (n=144), we compare LLMs that follow detailed CC guidelines (regulated) to those guided by broader goals (unregulated) and examine the effects of disclosing the agents' design to users (transparent vs. opaque). Findings show that learners' values influence their agent interaction: those valuing control appreciate unregulated agents' willingness to cease push-back upon request, while those valuing intellectual challenges favor regulated agents for stimulating creativity. Additionally, design transparency lowers learners' perception of agents' abilities. Our findings lay the foundation for designing effective collaborative peer agents in isolated educational settings.
|
https://arxiv.org/abs/2601.13479
|
Academic Papers
|
svg
|
34f664692f66469713f6c76b0ed176c0ec1745e90876bc845d04506ea857bc1f
|
2026-01-21T00:00:00-05:00
|
Towards Efficient and Robust Linguistic Emotion Diagnosis for Mental Health via Multi-Agent Instruction Refinement
|
arXiv:2601.13481v1 Announce Type: new Abstract: Linguistic expressions of emotions such as depression, anxiety, and trauma-related states are pervasive in clinical notes, counseling dialogues, and online mental health communities, and accurate recognition of these emotions is essential for clinical triage, risk assessment, and timely intervention. Although large language models (LLMs) have demonstrated strong generalization ability in emotion analysis tasks, their diagnostic reliability in high-stakes, context-intensive medical settings remains highly sensitive to prompt design. Moreover, existing methods face two key challenges: emotional comorbidity, in which multiple intertwined emotional states complicate prediction, and inefficient exploration of clinically relevant cues. To address these challenges, we propose APOLO (Automated Prompt Optimization for Linguistic Emotion Diagnosis), a framework that systematically explores a broader and finer-grained prompt space to improve diagnostic efficiency and robustness. APOLO formulates instruction refinement as a Partially Observable Markov Decision Process and adopts a multi-agent collaboration mechanism involving Planner, Teacher, Critic, Student, and Target roles. Within this closed-loop framework, the Planner defines an optimization trajectory, while the Teacher-Critic-Student agents iteratively refine prompts to enhance reasoning stability and effectiveness, and the Target agent determines whether to continue optimization based on performance evaluation. Experimental results show that APOLO consistently improves diagnostic accuracy and robustness across domain-specific and stratified benchmarks, demonstrating a scalable and generalizable paradigm for trustworthy LLM applications in mental healthcare.
|
https://arxiv.org/abs/2601.13481
|
Academic Papers
|
svg
|
22cf8d7d715aaaebd513340e2f34ff11a4b81298675209a4f638ce61bbc043d9
|
2026-01-21T00:00:00-05:00
|
Spectrum & RAN Sharing: A Measurement-based Case Study of Commercial 5G Networks in Spain
|
arXiv:2601.13484v1 Announce Type: new Abstract: Radio Access Network (RAN) sharing, which often also includes spectrum sharing, is a strategic cooperative agreement among two or more mobile operators, where one operator may use another's RAN infrastructure to provide mobile services to its users. By mutually sharing physical sites, radio elements, licensed spectrum and other parts of the RAN infrastructure, participating operators can significantly reduce the capital (and operational) expenditure in deploying and operating cellular networks, while accelerating coverage expansion -- thereby addressing the spectrum scarcity and infrastructure cost challenges in the 5G era and beyond. While the economic benefits of RAN sharing are well understood, the impact of such resource pooling on user-perceived performance remains underexplored, especially in real-world commercial deployments. We present, to the best of our knowledge, the first empirical measurement study of commercial 5G spectrum and RAN sharing. Our measurement study is unique in that, beyond identifying real-world instances of shared 5G spectrum and RAN deployment "in the wild", we also analyze users' perceived performance and its implication on Quality of Experience (QoE). Our study provides critical insights into resource management (i.e., pooling) and spectrum efficiency, offering a blueprint (and implications) for network evolution in 5G, 6G and beyond.
|
https://arxiv.org/abs/2601.13484
|
Academic Papers
|
svg
|
c7e5bbed6c6ce528f97e76aaf5ccba10adcade2a2133c7a16226e4987361a3bc
|
2026-01-21T00:00:00-05:00
|
The Hidden Toll of Social Media News: Causal Effects on Psychosocial Wellbeing
|
arXiv:2601.13487v1 Announce Type: new Abstract: News consumption on social media has become ubiquitous, yet how different forms of engagement shape psychosocial outcomes remains unclear. To address this gap, we leveraged a large-scale dataset of ~26M posts and ~45M comments on the BlueSky platform, and conducted a quasi-experimental study, matching 81,345 Treated users exposed to News feeds with 83,711 Control users using stratified propensity score analysis. We examined psychosocial wellbeing, in terms of affective, behavioral, and cognitive outcomes. Our findings reveal that news engagement produces systematic trade-offs: increased depression, stress, and anxiety, yet decreased loneliness and increased social interaction on the platform. Regression models reveal that News feed bookmarking is associated with greater psychosocial deterioration compared to commenting or quoting, with magnitude differences exceeding tenfold. These per-engagement effects accumulate with repeated exposure, showing significant psychosocial impacts. Our work extends theories of news effects beyond crisis-centric frameworks by demonstrating that routine consumption creates distinct psychological dynamics depending on engagement type, and bears implications for tools and interventions for mitigating the psychosocial costs of news consumption on social media.
|
https://arxiv.org/abs/2601.13487
|
Academic Papers
|
svg
|
e47d286a63d6f90389a7cedf4b620da8c1f07643e34b8219fb2c8251ca5a818a
|
2026-01-21T00:00:00-05:00
|
Bridging the Gap Between Estimated and True Regret Towards Reliable Regret Estimation in Deep Learning based Mechanism Design
|
arXiv:2601.13489v1 Announce Type: new Abstract: Recent advances, such as RegretNet, ALGnet, RegretFormer and CITransNet, use deep learning to approximate optimal multi item auctions by relaxing incentive compatibility (IC) and measuring its violation via ex post regret. However, the true accuracy of these regret estimates remains unclear. Computing exact regret is computationally intractable, and current models rely on gradient based optimizers whose outcomes depend heavily on hyperparameter choices. Through extensive experiments, we reveal that existing methods systematically underestimate actual regret (In some models, the true regret is several hundred times larger than the reported regret), leading to overstated claims of IC and revenue. To address this issue, we derive a lower bound on regret and introduce an efficient item wise regret approximation. Building on this, we propose a guided refinement procedure that substantially improves regret estimation accuracy while reducing computational cost. Our method provides a more reliable foundation for evaluating incentive compatibility in deep learning based auction mechanisms and highlights the need to reassess prior performance claims in this area.
|
https://arxiv.org/abs/2601.13489
|
Academic Papers
|
svg
|
13fb2841fd65b1a0ab02b3d4825cb9a8859bb4ffbb2008f3b85a61796a80af55
|
2026-01-21T00:00:00-05:00
|
Learning-Augmented Online TRP on a Line
|
arXiv:2601.13494v1 Announce Type: new Abstract: We study the online traveling repairperson problem on a line within the recently proposed learning-augmented framework, which provides predictions on the requests to be served via machine learning. In the original model (with no predictions), there is a stream of requests released over time along the line. The goal is to minimize the sum (or average) of the completion times of the requests. In the original model, the state-of-the-art competitive ratio lower bound is $1+\sqrt{2} > 2.414$ for any deterministic algorithm and the state-of-the-art competitive ratio upper bound is 4 for a deterministic algorithm. Our prediction model involves predicted positions, possibly error-prone, of each request in the stream known a priori but the arrival times of requests are not known until their arrival. We first establish a 3-competitive lower bound which extends to the original model. We then design a deterministic algorithm that is $(2+\sqrt{3})\approx 3.732$-competitive when predictions are perfect. With imperfect predictions (maximum error $\delta > 0$), we show that our deterministic algorithm becomes $\min\{3.732+4\delta,4\}$-competitive, knowing $\delta$. To the best of our knowledge, these are the first results for online traveling repairperson problem in the learning-augmented framework.
|
https://arxiv.org/abs/2601.13494
|
Academic Papers
|
svg
|
aa21939d79cb1b7fb651599ded7c24622055b55cc092147d172ec29d5442b1fd
|
2026-01-21T00:00:00-05:00
|
RASC: Enhancing Observability & Programmability in Smart Spaces
|
arXiv:2601.13496v1 Announce Type: new Abstract: While RPCs form the bedrock of systems stacks, we posit that IoT device collections in smart spaces like homes, warehouses, and office buildings--which are all "user-facing"--require a more expressive abstraction. Orthogonal to prior work, which improved the reliability of IoT communication, our work focuses on improving the observability and programmability of IoT actions. We present the RASC (Request-Acknowledge-Start-Complete) abstraction, which provides acknowledgments at critical points after an IoT device action is initiated. RASC is a better fit for IoT actions, which naturally vary in length spatially (across devices) and temporally (across time, for a given device). RASC also enables the design of several new features: predicting action completion times accurately, detecting failures of actions faster, allowing fine-grained dependencies in programming, and scheduling. RASC is intended to be implemented atop today's available RPC mechanisms, rather than as a replacement. We integrated RASC into a popular and open-source IoT framework called Home Assistant. Our trace-driven evaluation finds that RASC meets latency SLOs, especially for long actions that last O(mins), which are common in smart spaces. Our scheduling policies for home automations (e.g., routines) outperform state-of-the-art counterparts by 10%-55%.
|
https://arxiv.org/abs/2601.13496
|
Academic Papers
|
svg
|
81fe7db328a2e4cb2f34af6782f1d9ccd564e67ff45222e19358311ffe556845
|
2026-01-21T00:00:00-05:00
|
Optical Linear Systems Framework for Event Sensing and Computational Neuromorphic Imaging
|
arXiv:2601.13498v1 Announce Type: new Abstract: Event vision sensors (neuromorphic cameras) output sparse, asynchronous ON/OFF events triggered by log-intensity threshold crossings, enabling microsecond-scale sensing with high dynamic range and low data bandwidth. As a nonlinear system, this event representation does not readily integrate with the linear forward models that underpin most computational imaging and optical system design. We present a physics-grounded processing pipeline that maps event streams to estimates of per-pixel log-intensity and intensity derivatives, and embeds these measurements in a dynamic linear systems model with a time-varying point spread function. This enables inverse filtering directly from event data, using frequency-domain Wiener deconvolution with a known (or parameterised) dynamic transfer function. We validate the approach in simulation for single and overlapping point sources under modulated defocus, and on real event data from a tunable-focus telescope imaging a star field, demonstrating source localisation and separability. The proposed framework provides a practical bridge between event sensing and model-based computational imaging for dynamic optical systems.
|
https://arxiv.org/abs/2601.13498
|
Academic Papers
|
svg
|
5c634717627c1f53cfd8bf802941180a4450dc2e509caab0a6355191fbc472b8
|
2026-01-21T00:00:00-05:00
|
Concurrent Permissive Strategy Templates
|
arXiv:2601.13500v1 Announce Type: new Abstract: Two-player games on finite graphs provide a rigorous foundation for modeling the strategic interaction between reactive systems and their environment. While concurrent game semantics naturally capture the synchronous interactions characteristic of many cyber-physical systems (CPS), their adoption in CPS design remains limited. Building on the concept of permissive strategy templates (PeSTels) for turn-based games, we introduce concurrent (permissive) strategy templates (ConSTels) -- a novel representation for sets of randomized winning strategies in concurrent games with Safety, B\"uchi, and Co-B\"uchi objectives. ConSTels compactly encode infinite families of strategies, thereby supporting both offline and online adaptation. Offline, we exploit compositionality to enable incremental synthesis: combining ConSTels for simpler objectives into non-conflicting templates for more complex combined objectives. Online, we demonstrate how ConSTels facilitate runtime adaptation, adjusting action probabilities in response to observed opponent behavior to optimize performance while preserving correctness. We implemented ConSTel synthesis and adaptation in a prototype tool and experimentally show its potential.
|
https://arxiv.org/abs/2601.13500
|
Academic Papers
|
svg
|
87952b962fef3f9868d44abd9e4204ec6967748b2b437addf8658c910f66a23e
|
2026-01-21T00:00:00-05:00
|
Modeling Perpetrators' Fate-to-Fate Contagion in Public Mass Shootings In The United States Using Bivariate Hawkes Processes
|
arXiv:2601.13501v1 Announce Type: new Abstract: This study examines how the fate of a perpetrator in a public mass shooting influences the fate of subsequent perpetrators. Using data from 1966 to 2024, we classify incidents according to whether the perpetrator died at the scene or survived the attack. Using a bivariate Hawkes process, we quantify the cross-excitation effect, which is the triggering effect that each event type exerts on the other, i.e., "die at the scene"$\rightarrow$ "live" and "live"$\rightarrow$ "die at the scene", as well as the self-excitation effects, i.e., "die at the scene"$\rightarrow$ "die at the scene" and "live"$\rightarrow$ "live". Our results show that the strongest spillover was from "live" incidents to "die at the scene", where we estimate that 0.34 (0.09, 0.80) of "die at the scene" incidents are triggered by a prior event in which the offender survived the attack. This pathway also exhibits the longest estimated contagion timescale: approximately 20 days. In contrast, the reverse influence, that is, "die at the scene"$\rightarrow$"live", is not statistically significant, with the lower bound of its 95% confidence interval nearly equal to zero. We also find that "die at the scene" events can only cause their own type, where 0.139 (0.01, 0.52) of such incidents are caused by previous "die at the scene" events, with the shortest contagion timescale of roughly 20 hours.
|
https://arxiv.org/abs/2601.13501
|
Academic Papers
|
svg
|
cd5f280135943096c4f450137f1e178a1539ac965b5161543e63aecc956d428e
|
2026-01-21T00:00:00-05:00
|
DIS2: Disentanglement Meets Distillation with Classwise Attention for Robust Remote Sensing Segmentation under Missing Modalities
|
arXiv:2601.13502v1 Announce Type: new Abstract: The efficacy of multimodal learning in remote sensing (RS) is severely undermined by missing modalities. The challenge is exacerbated by the RS highly heterogeneous data and huge scale variation. Consequently, paradigms proven effective in other domains often fail when confronted with these unique data characteristics. Conventional disentanglement learning, which relies on significant feature overlap between modalities (modality-invariant), is insufficient for this heterogeneity. Similarly, knowledge distillation becomes an ill-posed mimicry task where a student fails to focus on the necessary compensatory knowledge, leaving the semantic gap unaddressed. Our work is therefore built upon three pillars uniquely designed for RS: (1) principled missing information compensation, (2) class-specific modality contribution, and (3) multi-resolution feature importance. We propose a novel method DIS2, a new paradigm shifting from modality-shared feature dependence and untargeted imitation to active, guided missing features compensation. Its core novelty lies in a reformulated synergy between disentanglement learning and knowledge distillation, termed DLKD. Compensatory features are explicitly captured which, when fused with the features of the available modality, approximate the ideal fused representation of the full-modality case. To address the class-specific challenge, our Classwise Feature Learning Module (CFLM) adaptively learn discriminative evidence for each target depending on signal availability. Both DLKD and CFLM are supported by a hierarchical hybrid fusion (HF) structure using features across resolutions to strengthen prediction. Extensive experiments validate that our proposed approach significantly outperforms state-of-the-art methods across benchmarks.
|
https://arxiv.org/abs/2601.13502
|
Academic Papers
|
svg
|
5985d361ebb2557be01f9be78a447594149b7ac3dbb2b22d1aeff5c2ed19e88f
|
2026-01-21T00:00:00-05:00
|
Anonpsy: A Graph-Based Framework for Structure-Preserving De-identification of Psychiatric Narratives
|
arXiv:2601.13503v1 Announce Type: new Abstract: Psychiatric narratives encode patient identity not only through explicit identifiers but also through idiosyncratic life events embedded in their clinical structure. Existing de-identification approaches, including PHI masking and LLM-based synthetic rewriting, operate at the text level and offer limited control over which semantic elements are preserved or altered. We introduce Anonpsy, a de-identification framework that reformulates the task as graph-guided semantic rewriting. Anonpsy (1) converts each narrative into a semantic graph encoding clinical entities, temporal anchors, and typed relations; (2) applies graph-constrained perturbations that modify identifying context while preserving clinically essential structure; and (3) regenerates text via graph-conditioned LLM generation. Evaluated on 90 clinician-authored psychiatric case narratives, Anonpsy preserves diagnostic fidelity while achieving consistently low re-identification risk under expert, semantic, and GPT-5-based evaluations. Compared with a strong LLM-only rewriting baseline, Anonpsy yields substantially lower semantic similarity and identifiability. These results demonstrate that explicit structural representations combined with constrained generation provide an effective approach to de-identification for psychiatric narratives.
|
https://arxiv.org/abs/2601.13503
|
Academic Papers
|
svg
|
f517a2dec43634b9a674b2f0c79abcd286b2e8fc32422abaf085bade47da88ec
|
2026-01-21T00:00:00-05:00
|
Integrating Vision-Centric Text Understanding for Conversational Recommender Systems
|
arXiv:2601.13505v1 Announce Type: new Abstract: Conversational Recommender Systems (CRSs) have attracted growing attention for their ability to deliver personalized recommendations through natural language interactions. To more accurately infer user preferences from multi-turn conversations, recent works increasingly expand conversational context (e.g., by incorporating diverse entity information or retrieving related dialogues). While such context enrichment can assist preference modeling, it also introduces longer and more heterogeneous inputs, leading to practical issues such as input length constraints, text style inconsistency, and irrelevant textual noise, thereby raising the demand for stronger language understanding ability. In this paper, we propose STARCRS, a Screen-Text-AwaRe Conversational Recommender System that integrates two complementary text understanding modes: (1) a screen-reading pathway that encodes auxiliary textual information as visual tokens, mimicking skim reading on a screen, and (2) an LLM-based textual pathway that focuses on a limited set of critical content for fine-grained reasoning. We design a knowledge-anchored fusion framework that combines contrastive alignment, cross-attention interaction, and adaptive gating to integrate the two modes for improved preference modeling and response generation. Extensive experiments on two widely used benchmarks demonstrate that STARCRS consistently improves both recommendation accuracy and generated response quality.
|
https://arxiv.org/abs/2601.13505
|
Academic Papers
|
svg
|
763614b77ab19f874a2805711b749f8d8cb45791722c2c6926261d5c8e36758f
|
2026-01-21T00:00:00-05:00
|
Group Relative Policy Optimization for Robust Blind Interference Alignment with Fluid Antennas
|
arXiv:2601.13506v1 Announce Type: new Abstract: Fluid antenna system (FAS) leverages dynamic reconfigurability to unlock spatial degrees of freedom and reshape wireless channels. This paper proposes, for the first time, a robust fluid antenna-driven blind interference alignment (BIA) framework for a K-user MISO downlink under imperfect channel state information (CSI). We formulate a robust sum-rate maximization problem through optimizing fluid antenna positions. To solve this challenging non-convex problem, we employ group relative policy optimization (GRPO), a novel deep reinforcement learning algorithm that eliminates the critic network. This robust design reduces model size and floating point operations (FLOPs) by nearly half compared to proximal policy optimization (PPO) while significantly enhancing performance through group-based exploration that escapes bad local optima. Simulation results demonstrate that GRPO outperforms PPO by 4.17%, and a 100K-step pre-trained PPO by 30.29%. Due to error distribution learning, GRPO exceeds heuristic MaximumGain and RandomGain by 200.78% and 465.38%, respectively.
|
https://arxiv.org/abs/2601.13506
|
Academic Papers
|
svg
|
884a4114a863128fa26ea48853dd30d71e92db2d2239e02ce30a1290d67ba677
|
2026-01-21T00:00:00-05:00
|
Event Classification by Physics-informed Inpainting for Distributed Multichannel Acoustic Sensor with Partially Degraded Channels
|
arXiv:2601.13513v1 Announce Type: new Abstract: Distributed multichannel acoustic sensing (DMAS) enables large-scale sound event classification (SEC), but performance drops when many channels are degraded and when sensor layouts at test time differ from training layouts. We propose a learning-free, physics-informed inpainting frontend based on reverse time migration (RTM). In this approach, observed multichannel spectrograms are first back-propagated on a 3D grid using an analytic Green's function to form a scene-consistent image, and then forward-projected to reconstruct inpainted signals before log-mel feature extraction and Transformer-based classification. We evaluate the method on ESC-50 with 50 sensors and three layouts (circular, linear, right-angle), where per-channel SNRs are sampled from -30 to 0 dB. Compared with an AST baseline, scaling-sparsemax channel selection, and channel-swap augmentation, the proposed RTM frontend achieves the best or competitive accuracy across all layouts, improving accuracy by 13.1 points on the right-angle layout (from 9.7% to 22.8%). Correlation analyses show that spatial weights align more strongly with SNR than with channel--source distance, and that higher SNR--weight correlation corresponds to higher SEC accuracy. These results demonstrate that a reconstruct-then-project, physics-based preprocessing effectively complements learning-only methods for DMAS under layout-open configurations and severe channel degradation.
|
https://arxiv.org/abs/2601.13513
|
Academic Papers
|
svg
|
1691fbf0ed2bd08f77c560ed189752666ffd0ef48ac42d3bd28dc17dbd2bd43a
|
2026-01-21T00:00:00-05:00
|
Automatic Adjustment of HPA Parameters and Attack Prevention in Kubernetes Using Random Forests
|
arXiv:2601.13515v1 Announce Type: new Abstract: In this paper, HTTP status codes are used as custom metrics within the HPA as the experimental scenario. By integrating the Random Forest classification algorithm from machine learning, attacks are assessed and predicted, dynamically adjusting the maximum pod parameter in the HPA to manage attack traffic. This approach enables the adjustment of HPA parameters using machine learning scripts in targeted attack scenarios while effectively managing attack traffic. All access from attacking IPs is redirected to honeypot pods, achieving a lower incidence of 5XX status codes through HPA pod adjustments under high load conditions. This method also ensures effective isolation of attack traffic, preventing excessive HPA expansion due to attacks. Additionally, experiments conducted under various conditions demonstrate the importance of setting appropriate thresholds for HPA adjustments.
|
https://arxiv.org/abs/2601.13515
|
Academic Papers
|
svg
|
0b41656d88183d16aca876fde781e31ded3e2c11b226397d0f0186c9b2c46ecf
|
2026-01-21T00:00:00-05:00
|
From "Fail Fast" to "Mature Safely:" Expert Perspectives as Secondary Stakeholders on Teen-Centered Social Media Risk Detection
|
arXiv:2601.13516v1 Announce Type: new Abstract: In addressing various risks on social media, the HCI community has advocated for teen-centered risk detection technologies over platform-based, parent-centered features. However, their real-world viability remains underexplored by secondary stakeholders beyond the family unit. Therefore, we present an evaluation of a teen-centered social media risk detection dashboard through online interviews with 33 online safety experts. While experts praised our dashboard's clear design for teen agency, their feedback revealed five primary tensions in implementing and sustaining such technology: objective vs. context-dependent risk definition, informing risks vs. meaningful intervention, teen empowerment vs. motivation, need for data vs. data privacy, and independence vs. sustainability. These findings motivate us to rethink "teen-centered" and a shift from a "fail fast" to a "mature safely" paradigm for youth safety technology innovation. We offer design implications for addressing these tensions before system deployment with teens and strategies for aligning secondary stakeholders' interests to deploy and sustain such technologies in the broader ecosystem of youth online safety.
|
https://arxiv.org/abs/2601.13516
|
Academic Papers
|
svg
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.