| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| dc2ca92b0122e20b880bbccc09d51ce8051b371d6eecbeb01a882e269296cc2e | 2026-01-16T00:00:00-05:00 | Improved Algorithms for Fair Matroid Submodular Maximization | arXiv:2601.09860v1 Announce Type: new Abstract: Submodular maximization subject to matroid constraints is a central problem with many applications in machine learning. As algorithms are increasingly used in decision-making over datapoints with sensitive attributes such as gender or race, it is becoming crucial to enforce fairness to avoid bias and discrimination. Recent work has addressed the challenge of developing efficient approximation algorithms for fair matroid submodular maximization. However, the best algorithms known so far are only guaranteed to satisfy a relaxed version of the fairness constraints that loses a factor 2, i.e., the problem may ask for $\ell$ elements with a given attribute, but the algorithm is only guaranteed to find $\lfloor \ell/2 \rfloor$. In particular, there is no provable guarantee when $\ell=1$, which corresponds to a key special case of perfect matching constraints. In this work, we achieve a new trade-off via an algorithm that gets arbitrarily close to full fairness. Namely, for any constant $\varepsilon>0$, we give a constant-factor approximation to fair monotone matroid submodular maximization that in expectation loses only a factor $(1-\varepsilon)$ in the lower-bound fairness constraint. Our empirical evaluation on a standard suite of real-world datasets -- including clustering, recommendation, and coverage tasks -- demonstrates the practical effectiveness of our methods. | https://arxiv.org/abs/2601.09860 | Academic Papers | svg |
| cd57a203e1b7c339551f8f49a59f63789032e7b71ef4065880de2bfa2d522e1b | 2026-01-16T00:00:00-05:00 | High signal-to-noise ratio asymptotics of entropy-constrained Gaussian channel capacity | arXiv:2601.09864v1 Announce Type: new Abstract: We study the input-entropy-constrained Gaussian channel capacity problem in the asymptotic high signal-to-noise ratio (SNR) regime. We show that the capacity-achieving distribution as SNR goes to infinity is given by a discrete Gaussian distribution supported on a scaled integer lattice. Further, we show that the gap between the input entropy and the capacity decreases to zero exponentially in SNR, and characterize this exponent. | https://arxiv.org/abs/2601.09864 | Academic Papers | svg |
| ea884e8cc9ed1f41aa1d89abbbba200756ef01ca64bc32ff7a17ccf33342ecfe | 2026-01-16T00:00:00-05:00 | Advancing Model Refinement: Muon-Optimized Distillation and Quantization for LLM Deployment | arXiv:2601.09865v1 Announce Type: new Abstract: Large Language Models (LLMs) enable advanced natural language processing but face deployment challenges on resource-constrained edge devices due to high computational, memory, and energy demands. Optimizing these models requires addressing three key challenges: acquiring task-specific data, fine-tuning for performance, and compressing models to accelerate inference while reducing resource demands. We propose an integrated framework combining GPTQ-based quantization, low-rank adaptation (LoRA), and a specialized data distillation process to significantly reduce model size and complexity while preserving or enhancing task-specific performance. By leveraging data distillation, knowledge distillation via Kullback-Leibler divergence, Bayesian hyperparameter optimization, and the Muon optimizer, our pipeline achieves up to 2x memory compression (e.g., reducing a 6GB model to 3GB) and enables efficient inference for specialized tasks. Empirical results demonstrate superior performance on standard LLM benchmarks compared to GPTQ quantization alone, with the Muon optimizer notably enhancing fine-tuned models' resistance to accuracy decay during quantization. | https://arxiv.org/abs/2601.09865 | Academic Papers | svg |
| ba51d57318269724107292e4278536724a31a1c484620795ad859cd88020c9ce | 2026-01-16T00:00:00-05:00 | VibrantSR: Sub-Meter Canopy Height Models from Sentinel-2 Using Generative Flow Matching | arXiv:2601.09866v1 Announce Type: new Abstract: We present VibrantSR (Vibrant Super-Resolution), a generative super-resolution framework for estimating 0.5 meter canopy height models (CHMs) from 10 meter Sentinel-2 imagery. Unlike approaches based on aerial imagery that are constrained by infrequent and irregular acquisition schedules, VibrantSR leverages globally available Sentinel-2 seasonal composites, enabling consistent monitoring at a seasonal-to-annual cadence. Evaluated across 22 EPA Level 3 eco-regions in the western United States using spatially disjoint validation splits, VibrantSR achieves a Mean Absolute Error of 4.39 meters for canopy heights >= 2 m, outperforming Meta (4.83 m), LANDFIRE (5.96 m), and ETH (7.05 m) satellite-based benchmarks. While aerial-based VibrantVS (2.71 m MAE) retains an accuracy advantage, VibrantSR enables operational forest monitoring and carbon accounting at continental scales without reliance on costly and temporally infrequent aerial acquisitions. | https://arxiv.org/abs/2601.09866 | Academic Papers | svg |
| b333d4205079995c236070536174248ff63e2091ca7ce630375600ed9ac4b5ea | 2026-01-16T00:00:00-05:00 | AmbShield: Enhancing Physical Layer Security with Ambient Backscatter Devices against Eavesdroppers | arXiv:2601.09867v1 Announce Type: new Abstract: Passive eavesdropping compromises confidentiality in wireless networks, especially in resource-constrained environments where heavyweight cryptography is impractical. Physical layer security (PLS) exploits channel randomness and spatial selectivity to confine information to an intended receiver with modest overhead. However, typical PLS techniques, such as using beamforming, artificial noise, and reconfigurable intelligent surfaces, often involve added active power or specialized deployment, and, in many designs, rely on precise time synchronization and perfect CSI estimation, which limits their practicality. To this end, we propose AmbShield, an AmBD-assisted PLS scheme that leverages naturally distributed AmBDs to simultaneously strengthen the legitimate channel and degrade eavesdroppers' without requiring extra transmit power and with minimal deployment overhead. In AmbShield, AmBDs are exploited as friendly jammers that randomly backscatter to create interference at eavesdroppers, and as passive relays that backscatter the desired signal to enhance the capacity of legitimate devices. We further develop a unified analytical framework that analyzes the exact probability density function (PDF) and cumulative distribution function (CDF) of legitimate and eavesdropper signal-to-interference-noise ratio (SINR), and a closed-form secrecy outage probability (SOP). The analysis provides clear design guidelines on various practical system parameters to minimize SOP. Extensive experiments that include Monte Carlo simulations, theoretical derivations, and high-SNR asymptotic analysis demonstrate the security gains of AmbShield across diverse system parameters under imperfect synchronization and CSI estimation. | https://arxiv.org/abs/2601.09867 | Academic Papers | svg |
| e1b733ff9615cf058938eea69523bc05a1b7c7f1b55174ae61426a662250eead | 2026-01-16T00:00:00-05:00 | A Scoping Review of the Ethical Perspectives on Anthropomorphising Large Language Model-Based Conversational Agents | arXiv:2601.09869v1 Announce Type: new Abstract: Anthropomorphisation -- the phenomenon whereby non-human entities are ascribed human-like qualities -- has become increasingly salient with the rise of large language model (LLM)-based conversational agents (CAs). Unlike earlier chatbots, LLM-based CAs routinely generate interactional and linguistic cues, such as first-person self-reference, epistemic and affective expressions that empirical work shows can increase engagement. On the other hand, anthropomorphisation raises ethical concerns, including deception, overreliance, and exploitative relationship framing, while some authors argue that anthropomorphic interaction may support autonomy, well-being, and inclusion. Despite increasing interest in the phenomenon, literature remains fragmented across domains and varies substantially in how it defines, operationalizes, and normatively evaluates anthropomorphisation. This scoping review maps ethically oriented work on anthropomorphising LLM-based CAs across five databases and three preprint repositories. We synthesize (1) conceptual foundations, (2) ethical challenges and opportunities, and (3) methodological approaches. We find convergence on attribution-based definitions but substantial divergence in operationalization, a predominantly risk-forward normative framing, and limited empirical work that links observed interaction effects to actionable governance guidance. We conclude with a research agenda and design/governance recommendations for ethically deploying anthropomorphic cues in LLM-based conversational agents. | https://arxiv.org/abs/2601.09869 | Academic Papers | svg |
| 5b160352ab2547231ccba1d86e057c838d23f42cb9ef84786c245fab9ff90a0c | 2026-01-16T00:00:00-05:00 | Epistemology gives a Future to Complementarity in Human-AI Interactions | arXiv:2601.09871v1 Announce Type: new Abstract: Human-AI complementarity is the claim that a human supported by an AI system can outperform either alone in a decision-making process. Since its introduction in the human-AI interaction literature, it has gained traction by generalizing the reliance paradigm and by offering a more practical alternative to the contested construct of 'trust in AI.' Yet complementarity faces key theoretical challenges: it lacks precise theoretical anchoring, it is formalized just as a post hoc indicator of relative predictive accuracy, it remains silent about other desiderata of human-AI interactions and it abstracts away from the magnitude-cost profile of its performance gain. As a result, complementarity is difficult to obtain in empirical settings. In this work, we leverage epistemology to address these challenges by reframing complementarity within the discourse on justificatory AI. Drawing on computational reliabilism, we argue that historical instances of complementarity function as evidence that a given human-AI interaction is a reliable epistemic process for a given predictive task. Together with other reliability indicators assessing the alignment of the human-AI team with the epistemic standards and socio-technical practices, complementarity contributes to the degree of reliability of human-AI teams when generating predictions. This supports the practical reasoning of those affected by these outputs -- patients, managers, regulators, and others. In summary, our approach suggests that the role and value of complementarity lies not in providing a relative measure of predictive accuracy, but in helping calibrate decision-making to the reliability of AI-supported processes that increasingly shape everyday life. | https://arxiv.org/abs/2601.09871 | Academic Papers | svg |
| 3a5fc49d38798fb44420a4eb6b835e92883a50a6093b6e071acb3f46da243d73 | 2026-01-16T00:00:00-05:00 | Beyond Strict Rules: Assessing the Effectiveness of Large Language Models for Code Smell Detection | arXiv:2601.09873v1 Announce Type: new Abstract: Code smells are symptoms of potential code quality problems that may affect software maintainability, thus increasing development costs and impacting software reliability. Large language models (LLMs) have shown remarkable capabilities for supporting various software engineering activities, but their use for detecting code smells remains underexplored. However, unlike the rigid rules of static analysis tools, LLMs can support flexible and adaptable detection strategies tailored to the unique properties of code smells. This paper evaluates the effectiveness of four LLMs -- DeepSeek-R1, GPT-5 mini, Llama-3.3, and Qwen2.5-Code -- for detecting nine code smells across 30 Java projects. For the empirical evaluation, we created a ground-truth dataset by asking 76 developers to manually inspect 268 code-smell candidates. Our results indicate that LLMs perform strongly for structurally straightforward smells, such as Large Class and Long Method. However, we also observed that different LLMs and tools fare better for distinct code smells. We then propose and evaluate a detection strategy that combines LLMs and static analysis tools. The proposed strategy outperforms LLMs and tools in five out of nine code smells in terms of F1-Score. However, it also generates more false positives for complex smells. Therefore, we conclude that the optimal strategy depends on whether Recall or Precision is the main priority for code smell detection. | https://arxiv.org/abs/2601.09873 | Academic Papers | svg |
| 0f1be0a66e15ddd4eb312df7c616a3875b3881189e20dc56458b252791ada22f | 2026-01-16T00:00:00-05:00 | Patient-Similarity Cohort Reasoning in Clinical Text-to-SQL | arXiv:2601.09876v1 Announce Type: new Abstract: Real-world clinical text-to-SQL requires reasoning over heterogeneous EHR tables, temporal windows, and patient-similarity cohorts to produce executable queries. We introduce CLINSQL, a benchmark of 633 expert-annotated tasks on MIMIC-IV v3.1 that demands multi-table joins, clinically meaningful filters, and executable SQL. Solving CLINSQL entails navigating schema metadata and clinical coding systems, handling long contexts, and composing multi-step queries beyond traditional text-to-SQL. We evaluate 22 proprietary and open-source models under Chain-of-Thought self-refinement and use rubric-based SQL analysis with execution checks that prioritize critical clinical requirements. Despite recent advances, performance remains far from clinical reliability: on the test set, GPT-5-mini attains 74.7% execution score, DeepSeek-R1 leads open-source at 69.2%, and Gemini-2.5-Pro drops from 85.5% on Easy to 67.2% on Hard. Progress on CLINSQL marks tangible advances toward clinically reliable text-to-SQL for real-world EHR analytics. | https://arxiv.org/abs/2601.09876 | Academic Papers | svg |
| a8663734ec6c8650f12fd409ecfec019f74a33f7c79e89e38261dec0d2584f1b | 2026-01-16T00:00:00-05:00 | Who Owns My AI Twin? Data Ownership in a New World of Simulated Identities | arXiv:2601.09877v1 Announce Type: new Abstract: The emergence of AI twins, digital replicas that encapsulate an individual's knowledge, memories, psychological traits, and behavioral patterns, raises novel legal and ethical challenges for data governance and personal identity. Built from personal data, these systems require a rethinking of what it means to exercise dominion over one's data and to maintain personal autonomy in an AI-mediated environment. This article argues that natural persons should be recognized as the moral and legal owners of their AI twins, which function as intimate extensions of the self rather than as proprietary technological artifacts. It critiques prevailing legal frameworks that prioritize technological infrastructure and platform control over data and individual autonomy, exposing their structural limitations. In response, the article advances a human-centric model of data governance grounded in individual dominion and a private-by-default principle. This approach proposes a reimagined social contract for AI-driven identities that strengthens personal agency, promotes equitable data stewardship, and better aligns legal norms with the socio-technical realities of AI twins. | https://arxiv.org/abs/2601.09877 | Academic Papers | svg |
| 46e2babef61701220b22344c54ecf54be228a55f8c31e33dee641487292d5a62 | 2026-01-16T00:00:00-05:00 | MedVL-SAM2: A unified 3D medical vision-language model for multimodal reasoning and prompt-driven segmentation | arXiv:2601.09879v1 Announce Type: new Abstract: Recent progress in medical vision-language models (VLMs) has achieved strong performance on image-level text-centric tasks such as report generation and visual question answering (VQA). However, achieving fine-grained visual grounding and volumetric spatial reasoning in 3D medical VLMs remains challenging, particularly when aiming to unify these capabilities within a single, generalizable framework. To address this challenge, we proposed MedVL-SAM2, a unified 3D medical multimodal model that concurrently supports report generation, VQA, and multi-paradigm segmentation, including semantic, referring, and interactive segmentation. MedVL-SAM2 integrates image-level reasoning and pixel-level perception through a cohesive architecture tailored for 3D medical imaging, and incorporates a SAM2-based volumetric segmentation module to enable precise multi-granular spatial reasoning. The model is trained in a multi-stage pipeline: it is first pre-trained on a large-scale corpus of 3D CT image-text pairs to align volumetric visual features with radiology-language embeddings. It is then jointly optimized with both language-understanding and segmentation objectives using a comprehensive 3D CT segmentation dataset. This joint training enables flexible interaction via language, point, or box prompts, thereby unifying high-level visual reasoning with spatially precise localization. Our unified architecture delivers state-of-the-art performance across report generation, VQA, and multiple 3D segmentation tasks. Extensive analyses further show that the model provides reliable 3D visual grounding, controllable interactive segmentation, and robust cross-modal reasoning, demonstrating that high-level semantic reasoning and precise 3D localization can be jointly achieved within a unified 3D medical VLM. | https://arxiv.org/abs/2601.09879 | Academic Papers | svg |
| c1405a9c91d51438dd33bbd33cfd5a3d5751c0f09c5c4f7930dcc20881463d38 | 2026-01-16T00:00:00-05:00 | Transition Matching Distillation for Fast Video Generation | arXiv:2601.09881v1 Announce Type: new Abstract: Large video diffusion and flow models have achieved remarkable success in high-quality video generation, but their use in real-time interactive applications remains limited due to their inefficient multi-step sampling process. In this work, we present Transition Matching Distillation (TMD), a novel framework for distilling video diffusion models into efficient few-step generators. The central idea of TMD is to match the multi-step denoising trajectory of a diffusion model with a few-step probability transition process, where each transition is modeled as a lightweight conditional flow. To enable efficient distillation, we decompose the original diffusion backbone into two components: (1) a main backbone, comprising the majority of early layers, that extracts semantic representations at each outer transition step; and (2) a flow head, consisting of the last few layers, that leverages these representations to perform multiple inner flow updates. Given a pretrained video diffusion model, we first introduce a flow head to the model, and adapt it into a conditional flow map. We then apply distribution matching distillation to the student model with flow head rollout in each transition step. Extensive experiments on distilling Wan2.1 1.3B and 14B text-to-video models demonstrate that TMD provides a flexible and strong trade-off between generation speed and visual quality. In particular, TMD outperforms existing distilled models under comparable inference costs in terms of visual fidelity and prompt adherence. Project page: https://research.nvidia.com/labs/genair/tmd | https://arxiv.org/abs/2601.09881 | Academic Papers | svg |
| 6a19d87901bea13f52854b2bccdd2dcf0c5540ba3d8cbf3df89295ca4fb08652 | 2026-01-16T00:00:00-05:00 | An efficient probabilistic scheme for the exit time probability of $\alpha$-stable L\'evy process | arXiv:2601.09882v1 Announce Type: new Abstract: The $\alpha$-stable L\'evy process, commonly used to describe L\'evy flight, is characterized by discontinuous jumps and is widely used to model anomalous transport phenomena. In this study, we investigate the associated exit problem and propose a method to compute the exit time probability, which quantifies the likelihood that a trajectory starting from an initial condition exits a bounded region in phase space within a given time. This estimation plays a key role in understanding anomalous diffusion behavior. The proposed method approximates the $\alpha$-stable process by combining a Brownian motion with a compound Poisson process. The exit time probability is then modeled using a framework based on partial integro-differential equations (PIDEs). The Feynman-Kac formula provides a probabilistic representation of the solution, involving conditional expectations over stochastic differential equations. These expectations are computed via tailored quadrature rules and interpolation techniques. The proposed method achieves first-order convergence in time and offers significant computational advantages over standard Monte Carlo and deterministic approaches. In particular, it avoids assembling and solving large dense linear systems, resulting in improved efficiency. We demonstrate the method's accuracy and performance through two numerical examples, highlighting its applicability to physical transport problems. | https://arxiv.org/abs/2601.09882 | Academic Papers | svg |
| 2b4da1dee7ff8a95a4a3a048b9f358afd83beed311f328ea0f0f11b3abde739b | 2026-01-16T00:00:00-05:00 | Beyond Rule-Based Workflows: An Information-Flow-Orchestrated Multi-Agents Paradigm via Agent-to-Agent Communication from CORAL | arXiv:2601.09883v1 Announce Type: new Abstract: Most existing Large Language Model (LLM)-based Multi-Agent Systems (MAS) rely on predefined workflows, where human engineers enumerate task states in advance and specify routing rules and contextual injections accordingly. Such workflow-driven designs are essentially rule-based decision trees, which suffer from two fundamental limitations: they require substantial manual effort to anticipate and encode possible task states, and they cannot exhaustively cover the state space of complex real-world tasks. To address these issues, we propose an Information-Flow-Orchestrated Multi-Agent Paradigm via Agent-to-Agent (A2A) Communication from CORAL, in which a dedicated information flow orchestrator continuously monitors task progress and dynamically coordinates other agents through the A2A toolkit using natural language, without relying on predefined workflows. We evaluate our approach on the general-purpose benchmark GAIA, using the representative workflow-based MAS OWL as the baseline while controlling for agent roles and underlying models. Under the pass@1 setting, our method achieves 63.64% accuracy, outperforming OWL's 55.15% by 8.49 percentage points with comparable token consumption. Further case-level analysis shows that our paradigm enables more flexible task monitoring and more robust handling of edge cases. Our implementation is publicly available at: https://github.com/Coral-Protocol/Beyond-Rule-Based-Workflows | https://arxiv.org/abs/2601.09883 | Academic Papers | svg |
| fc622b76ea046afbf04603e7eef47d954789c720332d90642c96a43509eabdf9 | 2026-01-16T00:00:00-05:00 | Clozing the Gap: Exploring Why Language Model Surprisal Outperforms Cloze Surprisal | arXiv:2601.09886v1 Announce Type: new Abstract: How predictable a word is can be quantified in two ways: using human responses to the cloze task or using probabilities from language models (LMs). When used as predictors of processing effort, LM probabilities outperform probabilities derived from cloze data. However, it is important to establish that LM probabilities do so for the right reasons, since different predictors can lead to different scientific conclusions about the role of prediction in language comprehension. We present evidence for three hypotheses about the advantage of LM probabilities: not suffering from low resolution, distinguishing semantically similar words, and accurately assigning probabilities to low-frequency words. These results call for efforts to improve the resolution of cloze studies, coupled with experiments on whether human-like prediction is also as sensitive to the fine-grained distinctions made by LM probabilities. | https://arxiv.org/abs/2601.09886 | Academic Papers | svg |
| 55a622ea97e477f92b088a5dd3319dccec57bc3e0f6fc849f926a965771c087a | 2026-01-16T00:00:00-05:00 | LAMDA: Aiding Visual Exploration of Atomic Displacements in Molecular Dynamics Simulations | arXiv:2601.09887v1 Announce Type: new Abstract: Contemporary materials science research is heavily conducted in silico, involving massive simulations of the atomic-scale evolution of materials. Cataloging basic patterns in the atomic displacements is key to understanding and predicting the evolution of physical properties. However, the combinatorial complexity of the space of possible transitions coupled with the overwhelming amount of data being produced by high-throughput simulations make such an analysis extremely challenging and time-consuming for domain experts. The development of visual analytics systems that facilitate the exploration of simulation data is an active field of research. While these systems excel in identifying temporal regions of interest, they treat each timestep of a simulation as an independent event without considering the behavior of the atomic displacements between timesteps. We address this gap by introducing LAMDA, a visual analytics system that allows domain experts to quickly and systematically explore state-to-state transitions. In LAMDA, transitions are hierarchically categorized, providing a basis for cataloging displacement behavior, as well as enabling the analysis of simulations at different resolutions, ranging from very broad qualitative classes of transitions to very narrow definitions of unit processes. LAMDA supports navigating the hierarchy of transitions, enabling scientists to visualize the commonalities between different transitions in each class in terms of invariant features characterizing local atomic environments, and LAMDA simplifies the analysis by capturing user inputs through annotations. We evaluate our system through a case study and report on findings from our domain experts. | https://arxiv.org/abs/2601.09887 | Academic Papers | svg |
| fa39f4e2b037be201b303652bb8129ca2ae182777d611394bd9ac7020057cae5 | 2026-01-16T00:00:00-05:00 | One-Cold Poisson Channel: A Simple Continuous-Time Channel with Zero Dispersion | arXiv:2601.09894v1 Announce Type: new Abstract: We introduce the one-cold Poisson channel (OCPC), where the transmitter chooses one of several frequency bands to attenuate at a time. In particular, the perfect OCPC, where the number of bands is unlimited, is an extremely simple continuous-time memoryless channel. It has a capacity 1, zero channel dispersion, and an information spectrum being the degenerate distribution at 1. It is the only known nontrivial (discrete or continuous-time) memoryless channel with a closed-form formula for its optimal non-asymptotic error probability, making it the simplest channel in this sense. A potential application is optical communication with a tunable band rejection filter. Due to its simplicity, we may use it as a basic currency of information that is infinitely divisible, as an alternative to bits which are not infinitely divisible. OCPC with perfect feedback gives a generalization of prefix codes. We also study non-asymptotic coding and channel simulation results for the general OCPC. | https://arxiv.org/abs/2601.09894 | Academic Papers | svg |
| a2cd17ff390532093f2185df987b281b07bfaeacade6e36284852de3757c01d5 | 2026-01-16T00:00:00-05:00 | The Algorithmic Gaze: An Audit and Ethnography of the LAION-Aesthetics Predictor Model | arXiv:2601.09896v1 Announce Type: new Abstract: Visual generative AI models are trained using a one-size-fits-all measure of aesthetic appeal. However, what is deemed "aesthetic" is inextricably linked to personal taste and cultural values, raising the question of whose taste is represented in visual generative AI models. In this work, we study an aesthetic evaluation model--LAION Aesthetic Predictor (LAP)--that is widely used to curate datasets to train visual generative image models, like Stable Diffusion, and evaluate the quality of AI-generated images. To understand what LAP measures, we audited the model across three datasets. First, we examined the impact of aesthetic filtering on the LAION-Aesthetics Dataset (approximately 1.2B images), which was curated from LAION-5B using LAP. We find that LAP disproportionately filters in images with captions mentioning women, while filtering out images with captions mentioning men or LGBTQ+ people. Then, we used LAP to score approximately 330k images across two art datasets, finding the model rates realistic images of landscapes, cityscapes, and portraits from western and Japanese artists most highly. In doing so, the algorithmic gaze of this aesthetic evaluation model reinforces the imperial and male gazes found within western art history. In order to understand where these biases may have originated, we performed a digital ethnography of public materials related to the creation of LAP. We find that the development of LAP reflects the biases we found in our audits, such as the aesthetic scores used to train LAP primarily coming from English-speaking photographers and western AI-enthusiasts. In response, we discuss how aesthetic evaluation can perpetuate representational harms and call on AI developers to shift away from prescriptive measures of "aesthetics" toward more pluralistic evaluation. | https://arxiv.org/abs/2601.09896 | Academic Papers | svg |
| f87c9a99ab06ed4da6d7fda403c665bae122ec3ac6bf668416468fdce33638bc | 2026-01-16T00:00:00-05:00 | Cooking Up Politeness in Human-AI Information Seeking Dialogue | arXiv:2601.09898v1 Announce Type: new Abstract: Politeness is a core dimension of human communication, yet its role in human-AI information seeking remains underexplored. We investigate how user politeness behaviour shapes conversational outcomes in a cooking-assistance setting. First, we annotated 30 dialogues, identifying four distinct user clusters ranging from Hyperpolite to Hyperefficient. We then scaled up to 18,000 simulated conversations across five politeness profiles (including impolite) and three open-weight models. Results show that politeness is not only cosmetic: it systematically affects response length, informational gain, and efficiency. Engagement-seeking prompts produced up to 90% longer replies and 38% more information nuggets than hyper-efficient prompts, but at markedly lower density. Impolite inputs yielded verbose but less efficient answers, with up to 48% fewer nuggets per watt-hour compared to polite input. These findings highlight politeness as both a fairness and sustainability issue: conversational styles can advantage or disadvantage users, and "polite" requests may carry hidden energy costs. We discuss implications for inclusive and resource-aware design of information agents. | https://arxiv.org/abs/2601.09898 | Academic Papers | svg |
| c8a5b89cf5b7aca87a07a580a6c9b16b4adb15e55203e62a5658de4bbf89deec | 2026-01-16T00:00:00-05:00 | Nonlinear numerical schemes using specular differentiation for initial value problems of first-order ordinary differential equations | arXiv:2601.09900v1 Announce Type: new Abstract: This paper proposes specular differentiation in one-dimensional Euclidean space and provides its fundamental analysis, including quasi-Fermat's theorem and the quasi-Mean Value Theorem. As an application, this paper develops several numerical schemes for solving initial value problems for first-order ordinary differential equations. Based on numerical simulations, we select one scheme and prove its first-order consistency and second-order local convergence. | https://arxiv.org/abs/2601.09900 | Academic Papers | svg |
913c858186fe7f404e0938df87983fa977d7920476a3f3958ea9e1eb5790b59d
|
2026-01-16T00:00:00-05:00
|
A Novel Contrastive Loss for Zero-Day Network Intrusion Detection
|
arXiv:2601.09902v1 Announce Type: new Abstract: Machine learning has achieved state-of-the-art results in network intrusion detection; however, its performance significantly degrades when confronted by a new attack class -- a zero-day attack. In simple terms, classical machine learning-based approaches are adept at identifying attack classes on which they have been previously trained, but struggle with those not included in their training data. One approach to addressing this shortcoming is to utilise anomaly detectors which train exclusively on benign data with the goal of generalising to all attack classes -- both known and zero-day. However, this comes at the expense of a prohibitively high false positive rate. This work proposes a novel contrastive loss function which is able to maintain the advantages of other contrastive learning-based approaches (robustness to imbalanced data) but can also generalise to zero-day attacks. Unlike anomaly detectors, this model learns the distributions of benign traffic using both benign and known malign samples, i.e. other well-known attack classes (not including the zero-day class), and consequently, achieves significant performance improvements. The proposed approach is experimentally verified on the Lycos2017 dataset where it achieves an AUROC improvement of 0.000065 and 0.060883 over previous models in known and zero-day attack detection, respectively. Finally, the proposed method is extended to open-set recognition achieving OpenAUC improvements of 0.170883 over existing approaches.
|
https://arxiv.org/abs/2601.09902
|
Academic Papers
|
svg
|
ee03ce3c4881f096378ded0d4190c0b7080511a65c8b4c479c1305fa09022516
|
2026-01-16T00:00:00-05:00
|
Forward-only learning in memristor arrays with month-scale stability
|
arXiv:2601.09903v1 Announce Type: new Abstract: Turning memristor arrays from efficient inference engines into systems capable of on-chip learning has proved difficult. Weight updates have a high energy cost and cause device wear, analog states drift, and backpropagation requires a backward pass with reversed signal flow. Here we experimentally demonstrate learning on standard filamentary HfOx/Ti arrays that addresses these challenges with two design choices. First, we realize that standard filamentary HfOx/Ti memristors support sub-1 V reset-only pulses that cut energy, improve endurance, and yield stable analog states. Second, we rely on forward-only training algorithms derived from Hinton's Forward-Forward that use only inference-style operations. We train two-layer classifiers on an ImageNet-resolution four-class task using arrays up to 8,064 devices. Two forward-only variants, the double-pass supervised Forward-Forward and a single-pass competitive rule, achieve test accuracies of 89.5% and 89.6%, respectively; a reference experiment using backpropagation reaches 90.0%. Across five independent runs per method, these accuracies match within statistical uncertainty. Trained models retain accuracy for at least one month under ambient conditions, consistent with the stability of reset-only states. Sub-1 V reset updates use 460 times less energy than conventional program-and-verify programming and require just 46% more energy than inference-only operation. Together, these results establish forward-only, sub-1 V learning on standard filamentary stacks at array scale, outlining a practical, pulse-aware route to adaptive edge intelligence.
|
https://arxiv.org/abs/2601.09903
|
Academic Papers
|
svg
|
13f49d5671d8e97c837b32732f3e261703c346f7ba152c80a9e4023904d941d0
|
2026-01-16T00:00:00-05:00
|
Self-reflection in Automated Qualitative Coding: Improving Text Annotation through Secondary LLM Critique
|
arXiv:2601.09905v1 Announce Type: new Abstract: Large language models (LLMs) allow for sophisticated qualitative coding of large datasets, but zero- and few-shot classifiers can produce an intolerable number of errors, even with careful, validated prompting. We present a simple, generalizable two-stage workflow: an LLM applies a human-designed, LLM-adapted codebook; a secondary LLM critic performs self-reflection on each positive label by re-reading the source text alongside the first model's rationale and issuing a final decision. We evaluate this approach on six qualitative codes over 3,000 high-content emails from Apache Software Foundation project evaluation discussions. Our human-derived audit of 360 positive annotations (60 passages by six codes) found that the first-line LLM had a false-positive rate of 8% to 54%, despite F1 scores of 0.74 and 1.00 in testing. Subsequent recoding of all stage-one annotations via a second self-reflection stage improved F1 by 0.04 to 0.25, bringing two especially poorly performing codes up to 0.69 and 0.79 from 0.52 and 0.55 respectively. Our manual evaluation identified two recurrent error classes: misinterpretation (violations of code definitions) and meta-discussion (debate about a project evaluation criterion mistaken for its use as a decision justification). Code-specific critic clauses addressing observed failure modes were especially effective after testing and refinement, replicating the codebook-adaptation process for LLM interpretation in stage one. We explain how favoring recall in first-line LLM annotation combined with secondary critique delivers precision-first, compute-light control. With human guidance and validation, self-reflection slots into existing LLM-assisted annotation pipelines to reduce noise and potentially salvage unusable classifiers.
|
https://arxiv.org/abs/2601.09905
|
Academic Papers
|
svg
|
05d6ecac1c1e9d349e51649428d9f0908aec0d10471df0f1566c2cbfab0f67ce
|
2026-01-16T00:00:00-05:00
|
Continuum Memory Architectures for Long-Horizon LLM Agents
|
arXiv:2601.09913v1 Announce Type: new Abstract: Retrieval-augmented generation (RAG) has become the default strategy for providing large language model (LLM) agents with contextual knowledge. Yet RAG treats memory as a stateless lookup table: information persists indefinitely, retrieval is read-only, and temporal continuity is absent. We define the \textit{Continuum Memory Architecture} (CMA), a class of systems that maintain and update internal state across interactions through persistent storage, selective retention, associative routing, temporal chaining, and consolidation into higher-order abstractions. Rather than disclosing implementation specifics, we specify the architectural requirements CMA imposes and show consistent behavioral advantages on tasks that expose RAG's structural inability to accumulate, mutate, or disambiguate memory. The empirical probes (knowledge updates, temporal association, associative recall, contextual disambiguation) demonstrate that CMA is a necessary architectural primitive for long-horizon agents while highlighting open challenges around latency, drift, and interpretability.
|
https://arxiv.org/abs/2601.09913
|
Academic Papers
|
svg
|
a3f0eef50bf0f4aa62eaa070cc12f576a7c061d4d3da22038ee2a989c3280326
|
2026-01-16T00:00:00-05:00
|
Learning-Augmented Perfectly Secure Collaborative Matrix Multiplication
|
arXiv:2601.09916v1 Announce Type: new Abstract: This paper presents a perfectly secure matrix multiplication (PSMM) protocol for multiparty computation (MPC) of $\mathrm{A}^{\top}\mathrm{B}$ over finite fields. The proposed scheme guarantees correctness and information-theoretic privacy against threshold-bounded, semi-honest colluding agents, under explicit local storage constraints. Our scheme encodes submatrices as evaluations of sparse masking polynomials and combines coefficient alignment with Beaver-style randomness to ensure perfect secrecy. We demonstrate that any colluding set of parties below the security threshold observes uniformly random shares, and that the recovery threshold is optimal, matching existing information-theoretic limits. Building on this framework, we introduce a learning-augmented extension that integrates tensor-decomposition-based local block multiplication, capturing both classical and learned low-rank methods. We demonstrate that the proposed learning-based PSMM preserves privacy and recovery guarantees for MPC, while providing scalable computational efficiency gains (up to $80\%$) as the matrix dimensions grow.
|
https://arxiv.org/abs/2601.09916
|
Academic Papers
|
svg
|
82c331a67cd3ce305776c62597fbabf3830be20e8e723663ef7d36cfb67bf21f
|
2026-01-16T00:00:00-05:00
|
Collision Avoidance for Non-Cooperative Multi-Swarm Coverage Control with Bounded Disturbance Measurements
|
arXiv:2601.09917v1 Announce Type: new Abstract: This paper proposes a new algorithm for collision-free coverage control of multiple non-cooperating swarms in the presence of bounded disturbances. A new methodology is introduced that accounts for uncertainties in disturbance measurements. The proposed methodology is used to develop an algorithm that ensures collision-free motion in multi-swarm coverage control, specifically for cases where disturbances are present and their measurements are subject to bounded uncertainty. The theoretical results are validated through simulations of multiple swarms that independently aim to cover a given region in an environment with disturbances.
|
https://arxiv.org/abs/2601.09917
|
Academic Papers
|
svg
|
7ea04a37b5153b70465000b0c4cd5165f41470687125382205b036b42afbffca
|
2026-01-16T00:00:00-05:00
|
SyncTwin: Fast Digital Twin Construction and Synchronization for Safe Robotic Grasping
|
arXiv:2601.09920v1 Announce Type: new Abstract: Accurate and safe grasping under dynamic and visually occluded conditions remains a core challenge in real-world robotic manipulation. We present SyncTwin, a digital twin framework that unifies fast 3D scene reconstruction and real-to-sim synchronization for robust and safety-aware grasping in such environments. In the offline stage, we employ VGGT to rapidly reconstruct object-level 3D assets from RGB images, forming a reusable geometry library for simulation. During execution, SyncTwin continuously synchronizes the digital twin by tracking real-world object states via point cloud segmentation updates and aligning them through colored-ICP registration. The updated twin enables motion planners to compute collision-free and dynamically feasible trajectories in simulation, which are safely executed on the real robot through a closed real-to-sim-to-real loop. Experiments in dynamic and occluded scenes show that SyncTwin improves grasp accuracy and motion safety, demonstrating the effectiveness of digital-twin synchronization for real-world robotic execution.
|
https://arxiv.org/abs/2601.09920
|
Academic Papers
|
svg
|
abf8daf5c506016f0dbd70684431168f46669e42ac1891ac16ccc75c2292ac09
|
2026-01-16T00:00:00-05:00
|
CaMeLs Can Use Computers Too: System-level Security for Computer Use Agents
|
arXiv:2601.09923v1 Announce Type: new Abstract: AI agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss. The only known robust defense is architectural isolation that strictly separates trusted task planning from untrusted environment observations. However, applying this design to Computer Use Agents (CUAs) -- systems that automate tasks by viewing screens and executing actions -- presents a fundamental challenge: current agents require continuous observation of UI state to determine each action, conflicting with the isolation required for security. We resolve this tension by demonstrating that UI workflows, while dynamic, are structurally predictable. We introduce Single-Shot Planning for CUAs, where a trusted planner generates a complete execution graph with conditional branches before any observation of potentially malicious content, providing provable control flow integrity guarantees against arbitrary instruction injections. Although this architectural isolation successfully prevents instruction injections, we show that additional measures are needed to prevent Branch Steering attacks, which manipulate UI elements to trigger unintended valid paths within the plan. We evaluate our design on OSWorld, and retain up to 57% of the performance of frontier models while improving performance for smaller open-source models by up to 19%, demonstrating that rigorous security and utility can coexist in CUAs.
|
https://arxiv.org/abs/2601.09923
|
Academic Papers
|
svg
|
02ee6ad84c906ea9deaf33c7cbc47f3fbdf36525f901fb10a4b701c2c5bdcb16
|
2026-01-16T00:00:00-05:00
|
The PROPER Approach to Proactivity: Benchmarking and Advancing Knowledge Gap Navigation
|
arXiv:2601.09926v1 Announce Type: new Abstract: Most language-based assistants follow a reactive ask-and-respond paradigm, requiring users to explicitly state their needs. As a result, relevant but unexpressed needs often go unmet. Existing proactive agents attempt to address this gap either by eliciting further clarification, preserving this burden, or by extrapolating future needs from context, often leading to unnecessary or mistimed interventions. We introduce ProPer, Proactivity-driven Personalized agents, a novel two-agent architecture consisting of a Dimension Generating Agent (DGA) and a Response Generating Agent (RGA). DGA, a fine-tuned LLM agent, leverages explicit user data to generate multiple implicit dimensions (latent aspects relevant to the user's task but not considered by the user) or knowledge gaps. These dimensions are selectively filtered using a reranker based on quality, diversity, and task relevance. RGA then balances explicit and implicit dimensions to tailor personalized responses with timely and proactive interventions. We evaluate ProPer across multiple domains using a structured, gap-aware rubric that measures coverage, initiative appropriateness, and intent alignment. Our results show that ProPer improves quality scores and win rates across all domains, achieving up to 84% gains in single-turn evaluation and consistent dominance in multi-turn interactions.
|
https://arxiv.org/abs/2601.09926
|
Academic Papers
|
svg
|
99f9a7aeb404083c38749c500019a369cf3ca4f6714ff33959671f9529e5737f
|
2026-01-16T00:00:00-05:00
|
In-Browser Agents for Search Assistance
|
arXiv:2601.09928v1 Announce Type: new Abstract: A fundamental tension exists between the demand for sophisticated AI assistance in web search and the need for user data privacy. Current centralized models require users to transmit sensitive browsing data to external services, which limits user control. In this paper, we present a browser extension that provides a viable in-browser alternative. We introduce a hybrid architecture that functions entirely on the client side, combining two components: (1) an adaptive probabilistic model that learns a user's behavioral policy from direct feedback, and (2) a Small Language Model (SLM), running in the browser, which is grounded by the probabilistic model to generate context-aware suggestions. To evaluate this approach, we conducted a three-week longitudinal user study with 18 participants. Our results show that this privacy-preserving approach is highly effective at adapting to individual user behavior, leading to measurably improved search efficiency. This work demonstrates that sophisticated AI assistance is achievable without compromising user privacy or data control.
|
https://arxiv.org/abs/2601.09928
|
Academic Papers
|
svg
|
4441fcd5926601af4ef116cbcdf5a9eff4cfbce5a47c18eb4d1439b6a6fd4d48
|
2026-01-16T00:00:00-05:00
|
Hallucination Detection and Mitigation in Large Language Models
|
arXiv:2601.09929v1 Announce Type: new Abstract: Large Language Models (LLMs) and Large Reasoning Models (LRMs) offer transformative potential for high-stakes domains like finance and law, but their tendency to hallucinate, generating factually incorrect or unsupported content, poses a critical reliability risk. This paper introduces a comprehensive operational framework for hallucination management, built on a continuous improvement cycle driven by root cause awareness. We categorize hallucination sources into model, data, and context-related factors, allowing targeted interventions over generic fixes. The framework integrates multi-faceted detection methods (e.g., uncertainty estimation, reasoning consistency) with stratified mitigation strategies (e.g., knowledge grounding, confidence calibration). We demonstrate its application through a tiered architecture and a financial data extraction case study, where model, context, and data tiers form a closed feedback loop for progressive reliability enhancement. This approach provides a systematic, scalable methodology for building trustworthy generative AI systems in regulated environments.
|
https://arxiv.org/abs/2601.09929
|
Academic Papers
|
svg
|
12bd65a3fccfff159a4e565acba73be1a407e4d52242c0325bdc2c48710c4d23
|
2026-01-16T00:00:00-05:00
|
Diffusion-based Frameworks for Unsupervised Speech Enhancement
|
arXiv:2601.09931v1 Announce Type: new Abstract: This paper addresses $\textit{unsupervised}$ diffusion-based single-channel speech enhancement (SE). Prior work in this direction combines a score-based diffusion model trained on clean speech with a Gaussian noise model whose covariance is structured by non-negative matrix factorization (NMF). This combination is used within an iterative expectation-maximization (EM) scheme, in which a diffusion-based posterior-sampling E-step estimates the clean speech. We first revisit this framework and propose to explicitly model both speech and acoustic noise as latent variables, jointly sampling them in the E-step instead of sampling speech alone as in previous approaches. We then introduce a new unsupervised SE framework that replaces the NMF noise prior with a diffusion-based noise model, learned jointly with the speech prior in a single conditional score model. Within this framework, we derive two variants: one that implicitly accounts for noise and one that explicitly treats noise as a latent variable. Experiments on WSJ0-QUT and VoiceBank-DEMAND show that explicit noise modeling systematically improves SE performance for both NMF-based and diffusion-based noise priors. Under matched conditions, the diffusion-based noise model attains the best overall quality and intelligibility among unsupervised methods, while under mismatched conditions the proposed NMF-based explicit-noise framework is more robust and suffers less degradation than several supervised baselines. Our code will be publicly available on this $\href{https://github.com/jeaneudesAyilo/enudiffuse}{URL}$.
|
https://arxiv.org/abs/2601.09931
|
Academic Papers
|
svg
|
86c9d8cae3b1e1df0ad1ea51005cfcbcb5d7202be9805c9d2178ad590cc74606
|
2026-01-16T00:00:00-05:00
|
Malware Classification using Diluted Convolutional Neural Network with Fast Gradient Sign Method
|
arXiv:2601.09933v1 Announce Type: new Abstract: Android malware has become an increasingly critical threat to organizations, society and individuals, posing significant risks to privacy, data security and infrastructure. As malware continues to evolve in complexity and sophistication, mitigating and detecting these malicious software instances has become more time-consuming and challenging, particularly due to the large number of features required to identify potential malware. To address these challenges, this research proposes the Fast Gradient Sign Method with Diluted Convolutional Neural Network (FGSM DICNN) for malware classification. DICNN contains diluted convolutions, which increase the receptive field, enabling the model to capture malware patterns dispersed across long ranges using fewer features without adding parameters. Additionally, the FGSM strategy enhances accuracy by using one-step perturbations during training, providing a defensive advantage at lower computational cost. This integration helps maintain high classification accuracy while reducing dependence on extensive feature sets. The proposed FGSM DICNN model attains 99.44% accuracy, outperforming existing approaches such as the Custom Deep Neural Network (DCNN).
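The two ingredients named in this abstract are standard and easy to illustrate. Below is a minimal sketch, not the paper's FGSM DICNN implementation: FGSM's one-step perturbation is x' = x + eps * sign(grad), and the receptive field of stacked dilated convolutions (stride 1) grows as 1 + sum((k - 1) * d) without adding parameters.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """One-step FGSM: shift each input in the sign direction of the loss gradient."""
    return x + eps * np.sign(grad)

def dilated_receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked dilated convolutions with stride 1:
    rf = 1 + sum((k - 1) * d) over layers."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))
```

For example, three stride-1 layers of kernel size 3 with dilations 1, 2, 4 cover a receptive field of 15 inputs while using the same parameter count as three undilated layers.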
|
https://arxiv.org/abs/2601.09933
|
Academic Papers
|
svg
|
1068275bb4a46829e261f54db71dc74b7f41de7232bf1164c74f0c6c65d0c03e
|
2026-01-16T00:00:00-05:00
|
From SERPs to Agents: A Platform for Comparative Studies of Information Interaction
|
arXiv:2601.09937v1 Announce Type: new Abstract: The diversification of information access systems, from RAG to autonomous agents, creates a critical need for comparative user studies. However, the technical overhead to deploy and manage these distinct systems is a major barrier. We present UXLab, an open-source system for web-based user studies that addresses this challenge. Its core is a web-based dashboard enabling the complete, no-code configuration of complex experimental designs. Researchers can visually manage the full study, from recruitment to comparing backends like traditional search, vector databases, and LLMs. We demonstrate UXLab's value via a micro case study comparing user behavior with RAG versus an autonomous agent. UXLab allows researchers to focus on experimental design and analysis, supporting future multi-modal interaction research.
|
https://arxiv.org/abs/2601.09937
|
Academic Papers
|
svg
|
2087118eca7d06bc3d837e3fddd046ccc5ed07a26f78cb33e049f9c6e3899240
|
2026-01-16T00:00:00-05:00
|
How Diplomacy Reshapes Online Discourse: Asymmetric Persistence in Online Framing of North Korea
|
arXiv:2601.09942v1 Announce Type: new Abstract: Public opinion toward foreign adversaries shapes and constrains diplomatic options. Prior research has largely relied on sentiment analysis and survey-based measures, providing limited insight into how sustained narrative changes (beyond transient emotional reactions) might follow diplomatic engagement. This study examines the extent to which high-stakes diplomatic summits shape how adversaries are framed in online discourse. We analyze U.S.-North Korea summit diplomacy (2018-2019) using a Difference-in-Differences (DiD) design on Reddit discussions. Using multiple control groups (China, Iran, Russia) to adjust for concurrent geopolitical shocks, we integrate a validated Codebook LLM framework for framing classification with graph-based discourse network analysis that examines both edge-level relationships and community-level narrative structures. Our results reveal short-term asymmetric persistence in framing responses to diplomacy. While both post-level and comment-level sentiment proved transient (improving during the Singapore Summit but fully reverting after the Hanoi failure), framing exhibited significant stability: the shift from threat-oriented to diplomacy-oriented framing was only partially reversed. Structurally, the proportion of threat-oriented edges decreased substantially (48% -> 28%) while diplomacy-oriented structures expanded, and these shifts resisted complete reversion after diplomatic failure. These findings suggest that diplomatic success can leave a short-term but lasting imprint on how adversaries are framed in online discourse, even when subsequent negotiations fail.
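The DiD estimand this abstract relies on is simple to state: the treatment effect is the change in the treated group's outcome minus the change in the control group's. A minimal sketch with illustrative numbers (not the paper's data):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-Differences: treated-group change minus control-group change,
    which nets out shocks common to both groups."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
```

If a framing measure for the treated topic moves from 10 to 4 while the control topic moves from 12 to 11 over the same window, the DiD estimate is -5: most of the treated drop is attributed to the intervention rather than to a shared trend.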
|
https://arxiv.org/abs/2601.09942
|
Academic Papers
|
svg
|
1b41e074afda3bab3680086f3ed531e1ed049fbbcc1d142ba073699f0823daf7
|
2026-01-16T00:00:00-05:00
|
Modeling conflicting incentives in engineering senior capstone projects: A multi-player game theory approach
|
arXiv:2601.09944v1 Announce Type: new Abstract: University engineering capstone projects involve sustained interaction among students, faculty, and industry sponsors whose objectives are only partially aligned. While capstones are widely used in engineering education, existing analyses typically treat stakeholder behavior informally or descriptively, leaving incentive conflicts, information asymmetries, and strategic dependencies underexplored. This paper develops a formal game-theoretic framework that models capstone projects as a sequential Bayesian game involving three players: the university, the industry sponsor, and the student team. The framework is intended as an analytical and explanatory tool for understanding how institutional policy choices, such as grading structures, intellectual property rules, and sponsor engagement expectations, shape stakeholder behavior and project outcomes, rather than as a calibrated or predictive model. The university acts as a constrained Stackelberg leader by committing to course policies and assessment structures while anticipating strategic responses by sponsors and students under incomplete information. Reduced-form outcome functions capture technical quality, documentation quality, timeliness, alignment with sponsor needs, and publishability, while payoff functions reflect stakeholder-specific objectives and costs. Under standard assumptions, the model admits stable equilibrium regimes that correspond to empirically recognizable capstone dynamics observed in practice, including cooperative engagement, sponsor-dominated exploitation, and student grade gaming. Rather than claiming precise prediction, the framework provides a structured basis for reasoning about incentive design, policy tradeoffs, and structural failure modes in project-based learning environments, as well as for future extensions incorporating richer dynamics, repeated interaction, and empirical calibration.
|
https://arxiv.org/abs/2601.09944
|
Academic Papers
|
svg
|
e46bbdc347f453f38f685c054ddaec257b49b90f2ca09ca51c23c91acdedc04b
|
2026-01-16T00:00:00-05:00
|
Interpolation-Based Optimization for Enforcing lp-Norm Metric Differential Privacy in Continuous and Fine-Grained Domains
|
arXiv:2601.09946v1 Announce Type: new Abstract: Metric Differential Privacy (mDP) generalizes Local Differential Privacy (LDP) by adapting privacy guarantees based on pairwise distances, enabling context-aware protection and improved utility. While existing optimization-based methods reduce utility loss effectively in coarse-grained domains, optimizing mDP in fine-grained or continuous settings remains challenging due to the computational cost of constructing dense perturbation matrices and satisfying pointwise constraints. In this paper, we propose an interpolation-based framework for optimizing lp-norm mDP in such domains. Our approach optimizes perturbation distributions at a sparse set of anchor points and interpolates distributions at non-anchor locations via log-convex combinations, which provably preserve mDP. To address privacy violations caused by naive interpolation in high-dimensional spaces, we decompose the interpolation process into a sequence of one-dimensional steps and derive a corrected formulation that enforces lp-norm mDP by design. We further explore joint optimization over perturbation distributions and privacy budget allocation across dimensions. Experiments on real-world location datasets demonstrate that our method offers rigorous privacy guarantees and competitive utility in fine-grained domains, outperforming baseline mechanisms.
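The log-convex combination at the heart of this approach can be sketched for discrete distributions: interpolate pointwise in log-space and renormalize. This is a generic illustration under the assumption of strictly positive distributions; the anchor selection and the dimension-wise correction are the paper's contribution and are not shown.

```python
import numpy as np

def log_convex_interp(p0, p1, t):
    """Geometric-mean interpolation of two strictly positive discrete
    distributions: p_t proportional to p0^(1-t) * p1^t, renormalized."""
    w = np.exp((1 - t) * np.log(p0) + t * np.log(p1))
    return w / w.sum()
```

At t = 0 the interpolant is p0 and at t = 1 it is p1; intermediate t values trace a path along which multiplicative (and hence mDP-style) ratio bounds interpolate as well.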
|
https://arxiv.org/abs/2601.09946
|
Academic Papers
|
svg
|
5cd0a85149204d352d7ec1fa23748c659f3e2d578e0123cb02da14fb51853cfd
|
2026-01-16T00:00:00-05:00
|
Reconstructing Reed-Solomon Codes from Multiple Noisy Channel Outputs
|
arXiv:2601.09947v1 Announce Type: new Abstract: The sequence reconstruction problem, introduced by Levenshtein in 2001, considers a communication setting in which a sender transmits a codeword and the receiver observes K independent noisy versions of this codeword. In this work, we study the problem of efficient reconstruction when each of the $K$ outputs is corrupted by a $q$-ary discrete memoryless symmetric (DMS) substitution channel with substitution probability $p$. Focusing on Reed-Solomon (RS) codes, we adapt the Koetter-Vardy soft-decision decoding algorithm to obtain an efficient reconstruction algorithm. For sufficiently large blocklength and alphabet size, we derive an explicit rate threshold, depending only on $(p, K)$, such that the transmitted codeword can be reconstructed with arbitrarily small probability of error whenever the code rate $R$ lies below this threshold.
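Soft-decision decoding in this setting starts from symbol-wise posteriors. A hedged sketch of how K independent q-ary symmetric-channel observations combine under a uniform prior (the Koetter-Vardy decoding step itself is not shown):

```python
import numpy as np

def symbol_posterior(observations, q, p):
    """Posterior over a transmitted q-ary symbol, given K independent
    observations through a q-ary symmetric channel with flip probability p.
    Each observation multiplies in its likelihood; result is renormalized."""
    post = np.ones(q)
    for y in observations:
        lik = np.full(q, p / (q - 1))  # each wrong symbol equally likely
        lik[y] = 1 - p
        post *= lik
    return post / post.sum()
```

As K grows, agreeing observations sharpen the posterior rapidly, which is what lets the achievable rate threshold improve with K.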
|
https://arxiv.org/abs/2601.09947
|
Academic Papers
|
svg
|
3e10b4ec45a74f12d0d618acc330acd77ab7098fdc771606510abfe8c9b03a08
|
2026-01-16T00:00:00-05:00
|
Kinematic Tokenization: Optimization-Based Continuous-Time Tokens for Learnable Decision Policies in Noisy Time Series
|
arXiv:2601.09949v1 Announce Type: new Abstract: Transformers are designed for discrete tokens, yet many real-world signals are continuous processes observed through noisy sampling. Discrete tokenizations (raw values, patches, finite differences) can be brittle in low signal-to-noise regimes, especially when downstream objectives impose asymmetric penalties that rationally encourage abstention. We introduce Kinematic Tokenization, an optimization-based continuous-time representation that reconstructs an explicit spline from noisy measurements and tokenizes local spline coefficients (position, velocity, acceleration, jerk). This is applied to financial time series data in the form of asset prices in conjunction with trading volume profiles. Across a multi-asset daily-equity testbed, we use a risk-averse asymmetric classification objective as a stress test for learnability. Under this objective, several discrete baselines collapse to an absorbing cash policy (the Liquidation Equilibrium), whereas the continuous spline tokens sustain calibrated, non-trivial action distributions and stable policies. These results suggest that explicit continuous-time tokens can improve the learnability and calibration of selective decision policies in noisy time series under abstention-inducing losses.
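The tokenization idea can be sketched with a local cubic fit: fit a polynomial to a noisy window and read off position, velocity, acceleration, and jerk at a query time. This is an illustrative stand-in; the paper's optimization-based spline reconstruction is not shown.

```python
import numpy as np

def kinematic_token(t, y, t0):
    """Fit a cubic to samples (t, y) and return the kinematic token
    (position, velocity, acceleration, jerk) of the fitted curve at t0."""
    poly = np.poly1d(np.polyfit(t, y, deg=3))
    return (poly(t0), poly.deriv(1)(t0), poly.deriv(2)(t0), poly.deriv(3)(t0))
```

Because the token is read from a smooth fit rather than from raw differences, it degrades gracefully as measurement noise grows, which is the property the abstract credits for sustaining non-trivial policies under abstention-inducing losses.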
|
https://arxiv.org/abs/2601.09949
|
Academic Papers
|
svg
|
566425418285e1e255c89efc5656a20aaef274735ce1627cae7a0b106d8db909
|
2026-01-16T00:00:00-05:00
|
OT-Drive: Out-of-Distribution Off-Road Traversable Area Segmentation via Optimal Transport
|
arXiv:2601.09952v1 Announce Type: new Abstract: Reliable traversable area segmentation in unstructured environments is critical for planning and decision-making in autonomous driving. However, existing data-driven approaches often suffer from degraded segmentation performance in out-of-distribution (OOD) scenarios, consequently impairing downstream driving tasks. To address this issue, we propose OT-Drive, an Optimal Transport--driven multi-modal fusion framework. The proposed method formulates RGB and surface normal fusion as a distribution transport problem. Specifically, we design a novel Scene Anchor Generator (SAG) to decompose scene information into the joint distribution of weather, time-of-day, and road type, thereby constructing semantic anchors that can generalize to unseen scenarios. Subsequently, we design an innovative Optimal Transport-based multi-modal fusion module (OT Fusion) to transport RGB and surface normal features onto the manifold defined by the semantic anchors, enabling robust traversable area segmentation under OOD scenarios. Experimental results demonstrate that our method achieves 95.16% mIoU on ORFD OOD scenarios, outperforming prior methods by 6.35%, and 89.79% mIoU on cross-dataset transfer tasks, surpassing baselines by 13.99%. These results indicate that the proposed model can attain strong OOD generalization with only limited training data, substantially enhancing its practicality and efficiency for real-world deployment.
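As background for the optimal-transport machinery, here is a generic entropic-OT sketch (Sinkhorn iterations for a transport plan between two histograms under a cost matrix), not the paper's OT Fusion module:

```python
import numpy as np

def sinkhorn_plan(a, b, C, reg=1.0, iters=200):
    """Entropic optimal transport between histograms a and b with cost matrix C.
    Alternately rescales rows and columns of the Gibbs kernel K = exp(-C/reg)
    until the plan's marginals match a and b."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

The returned plan is a joint distribution whose row sums match the source histogram and column sums match the target, giving a soft correspondence between the two feature distributions.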
|
https://arxiv.org/abs/2601.09952
|
Academic Papers
|
svg
|
6f52401f12be27fc1060a6d3f9a895b60f76ea76d395d80daabbb50ce13a2bf0
|
2026-01-16T00:00:00-05:00
|
Take Out Your Calculators: Estimating the Real Difficulty of Question Items with LLM Student Simulations
|
arXiv:2601.09953v1 Announce Type: new Abstract: Standardized math assessments require expensive human pilot studies to establish the difficulty of test items. We investigate the predictive value of open-source large language models (LLMs) for evaluating the difficulty of multiple-choice math questions for real-world students. We show that, while LLMs are poor direct judges of problem difficulty, simulation-based approaches with LLMs yield promising results under the right conditions. Under the proposed approach, we simulate a "classroom" of 4th, 8th, or 12th grade students by prompting the LLM to role-play students of varying proficiency levels. We use the outcomes of these simulations to fit Item Response Theory (IRT) models, comparing learned difficulty parameters for items to their real-world difficulties, as determined by item-level statistics furnished by the National Assessment of Educational Progress (NAEP). We observe correlations as high as 0.75, 0.76, and 0.82 for grades 4, 8, and 12, respectively. In our simulations, we experiment with different "classroom sizes," showing trade-offs between computational cost and accuracy. We find that role-plays with named students improve predictions (compared to student IDs), and stratifying names across gender and race further improves predictions. Our results show that LLMs with relatively weaker mathematical abilities (Gemma) actually yield better real-world difficulty predictions than mathematically stronger models (Llama and Qwen), further underscoring the suitability of open-source models for the task.
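A minimal sketch of the simulation-to-IRT step, assuming simulated student abilities are known and a one-parameter (Rasch) model is used; this is a toy stand-in for the paper's NAEP pipeline, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Classroom" of 200 simulated students (stand-in for LLM role-play outcomes)
# answering one item of true Rasch difficulty b_true.
abilities = rng.normal(0.0, 1.0, size=200)
b_true = 0.7
responses = (rng.random(200) < sigmoid(abilities - b_true)).astype(float)  # 1 = correct

def fit_difficulty(theta, y, iters=25):
    """Newton MLE for the Rasch difficulty b with abilities theta held fixed,
    under the model P(correct) = sigmoid(theta - b)."""
    b = 0.0
    for _ in range(iters):
        p = sigmoid(theta - b)
        # Score is sum(p - y); observed information is sum(p * (1 - p)) > 0.
        b += np.sum(p - y) / np.sum(p * (1.0 - p))
    return b

b_hat = fit_difficulty(abilities, responses)
print(round(b_hat, 2))  # close to b_true, up to sampling noise
```

With larger simulated classrooms the estimate tightens, which is the computation-versus-accuracy trade-off the abstract describes.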
|
https://arxiv.org/abs/2601.09953
|
Academic Papers
|
svg
|
ca0702936f343c0925edc4285bc7c2c4ea82b3c0072662cf1b5ae552f0b2e208
|
2026-01-16T00:00:00-05:00
|
The Spatial Blindspot of Vision-Language Models
|
arXiv:2601.09954v1 Announce Type: new Abstract: Vision-language models (VLMs) have advanced rapidly, but their ability to capture spatial relationships remains a blindspot. Current VLMs are typically built with contrastive language-image pretraining (CLIP) style image encoders. The training recipe often flattens images into 1D patch sequences, discarding the 2D structure necessary for spatial reasoning. We argue that this lack of spatial awareness is a missing dimension in VLM design and a bottleneck for applications requiring spatial grounding, such as robotics and embodied AI. To address this, we investigate (i) image encoders trained with alternative objectives and (ii) 2D positional encodings. Our experiments show that these architectural choices can lead to improved spatial reasoning on several benchmarks.
|
https://arxiv.org/abs/2601.09954
|
Academic Papers
|
svg
|
aebb3cf43cb70ee9f5afd4004c6a76985584a0affc76c38c2e88a984dbd91b90
|
2026-01-16T00:00:00-05:00
|
Private Information Retrieval for Graph-based Replication with Minimal Subpacketization
|
arXiv:2601.09957v1 Announce Type: new Abstract: We design new minimal-subpacketization schemes for information-theoretic private information retrieval on graph-based replicated databases. In graph-based replication, the system consists of $K$ files replicated across $N$ servers according to a graph with $N$ vertices and $K$ edges. The client wants to retrieve one desired file, while keeping the index of the desired file private from each server via a query-response protocol. We seek PIR protocols that have (a) high rate, which is the ratio of the file-size to the total download cost, and (b) low subpacketization, which acts as a constraint on the size of the files for executing the protocol. We report two new schemes which have unit subpacketization (which is minimal): (i) for a special class of graphs known as star graphs, and (ii) for general graphs. Our star-graph scheme has a better rate than previously known schemes with low subpacketization for general star graphs. Our scheme for general graphs uses a decomposition of the graph via independent sets. This scheme achieves a lower rate than prior schemes for the complete graph; however, it can achieve higher rates than previously known for some specific graph classes. An extension of our scheme to the case of multigraphs achieves a higher rate than previous schemes for the complete multigraph.
|
https://arxiv.org/abs/2601.09957
|
Academic Papers
|
svg
|
67bf0872054bef72d7102f86df29634220f2555fb2bf871f93df2bb26d77b8f2
|
2026-01-16T00:00:00-05:00
|
On the Leaky Private Information Retrieval with Side Information
|
arXiv:2601.09960v1 Announce Type: new Abstract: This paper investigates the problem of leaky-private Private Information Retrieval with Side Information (L-PIR-SI), which relaxes the requirement of perfect privacy to achieve improved communication efficiency in the presence of side information. While the capacities of PIR-SI under both $W$-privacy and $(W,S)$-privacy have been partially explored, the impact of controlled information leakage in these settings remains unaddressed. We propose a unified probabilistic framework to construct L-PIR-SI schemes where the privacy leakage is quantified by a parameter $\varepsilon$, consistent with differential privacy standards. We characterize the achievable download costs and show that our results generalize several landmark results in the PIR literature: they recover the capacity of PIR-SI when $\varepsilon \to 0$, and reduce to the known bounds for leaky-PIR when side information is absent. This work provides the first look at the trade-offs between leakage, side information, and retrieval efficiency.
|
https://arxiv.org/abs/2601.09960
|
Academic Papers
|
svg
|
e6b156b7153836598dfd68ca373f841cca2a172654245f5024745a82cd7ad3bf
|
2026-01-16T00:00:00-05:00
|
A Control Theoretic Approach to Decentralized AI Economy Stabilization via Dynamic Buyback-and-Burn Mechanisms
|
arXiv:2601.09961v1 Announce Type: new Abstract: The democratization of artificial intelligence through decentralized networks represents a paradigm shift in computational provisioning, yet the long-term viability of these ecosystems is critically endangered by the extreme volatility of their native economic layers. Current tokenomic models, which predominantly rely on static or threshold-based buyback heuristics, are ill-equipped to handle complex system dynamics and often function pro-cyclically, exacerbating instability during market downturns. To bridge this gap, we propose the Dynamic-Control Buyback Mechanism (DCBM), a formalized control-theoretic framework that utilizes a Proportional-Integral-Derivative (PID) controller with strict solvency constraints to regulate the token economy as a dynamical system. Extensive agent-based simulations utilizing Jump-Diffusion processes demonstrate that DCBM fundamentally outperforms static baselines, reducing token price volatility by approximately 66% and lowering operator churn from 19.5% to 8.1% in high-volatility regimes. These findings establish that converting tokenomics from static rules into continuous, structurally constrained control loops is a necessary condition for secure and sustainable decentralized intelligence networks.
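A minimal sketch of a solvency-constrained PID buyback loop under assumed names, gains, and clipping behavior (not the paper's DCBM specification):

```python
class PIDBuyback:
    """Toy discrete PID loop that sets a buyback spend from the deviation of
    the token price from a target, clipped so spending is non-negative and
    never exceeds the treasury (the solvency constraint)."""

    def __init__(self, kp, ki, kd, target, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target, self.dt = target, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, price, treasury):
        error = self.target - price  # positive when price is below target
        self.integral += error * self.dt
        deriv = (error - self.prev_error) / self.dt
        self.prev_error = error
        raw = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Solvency constraint: spend stays within [0, treasury].
        return min(max(raw, 0.0), treasury)

pid = PIDBuyback(kp=10.0, ki=1.0, kd=2.0, target=1.0)
spend = pid.step(price=0.8, treasury=1.5)
print(spend)  # 1.5: the raw PID output 2.6 is capped by the treasury
```

When the price is above target the clipped output is zero, so the controller only buys back below target, which is the counter-cyclical behavior the abstract contrasts with pro-cyclical static heuristics.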
|
https://arxiv.org/abs/2601.09961
|
Academic Papers
|
svg
|
3c94a78bda34b9e8510579973230a4c5611a5409f9b1af76e8d0e2d44d3e8a40
|
2026-01-16T00:00:00-05:00
|
A Sustainable AI Economy Needs Data Deals That Work for Generators
|
arXiv:2601.09966v1 Announce Type: new Abstract: We argue that the machine learning value chain is structurally unsustainable due to an economic data processing inequality: each stage in the data cycle from inputs to model weights to synthetic outputs refines technical signal but strips economic equity from data generators. We show, by analyzing seventy-three public data deals, that the majority of value accrues to aggregators, with documented creator royalties rounding to zero and widespread opacity of deal terms. This is not just an economic welfare concern: as data and its derivatives become economic assets, the feedback loop that sustains current learning algorithms is at risk. We identify three structural faults - missing provenance, asymmetric bargaining power, and non-dynamic pricing - as the operational machinery of this inequality. In our analysis, we trace these problems along the machine learning value chain and propose an Equitable Data-Value Exchange (EDVEX) Framework to enable a minimal market that benefits all participants. Finally, we outline research directions where our community can make concrete contributions to data deals and contextualize our position with related and orthogonal viewpoints.
|
https://arxiv.org/abs/2601.09966
|
Academic Papers
|
svg
|
4e57e559b62d5e0b721003b4b3a2349f5d60326c3cf2568f758d0fe7e9c561e4
|
2026-01-16T00:00:00-05:00
|
An Exploratory Study to Repurpose LLMs to a Unified Architecture for Time Series Classification
|
arXiv:2601.09971v1 Announce Type: new Abstract: Time series classification (TSC) is a core machine learning problem with broad applications. Recently there has been growing interest in repurposing large language models (LLMs) for TSC, motivated by their strong reasoning and generalization ability. Prior work has primarily focused on alignment strategies that explicitly map time series data into the textual domain; however, the choice of time series encoder architecture remains underexplored. In this work, we conduct an exploratory study of hybrid architectures that combine specialized time series encoders with a frozen LLM backbone. We evaluate a diverse set of encoder families, including Inception, convolutional, residual, transformer-based, and multilayer perceptron architectures, among which the Inception model is the only encoder architecture that consistently yields positive performance gains when integrated with an LLM backbone. Overall, this study highlights the impact of time series encoder choice in hybrid LLM architectures and points to Inception-based models as a promising direction for future LLM-driven time series learning.
|
https://arxiv.org/abs/2601.09971
|
Academic Papers
|
svg
|
4f6636af1142245201ffb77f001cdcb6fff08a6285bee7da1502a6a0919de381
|
2026-01-16T00:00:00-05:00
|
Chinese Labor Law Large Language Model Benchmark
|
arXiv:2601.09972v1 Announce Type: new Abstract: Recent advances in large language models (LLMs) have led to substantial progress in domain-specific applications, particularly within the legal domain. However, general-purpose models such as GPT-4 often struggle with specialized subdomains that require precise legal knowledge, complex reasoning, and contextual sensitivity. To address these limitations, we present LabourLawLLM, a legal large language model tailored to Chinese labor law. We also introduce LabourLawBench, a comprehensive benchmark covering diverse labor-law tasks, including legal provision citation, knowledge-based question answering, case classification, compensation computation, named entity recognition, and legal case analysis. Our evaluation framework combines objective metrics (e.g., ROUGE-L, accuracy, F1, and soft-F1) with subjective assessment based on GPT-4 scoring. Experiments show that LabourLawLLM consistently outperforms general-purpose and existing legal-specific LLMs across task categories. Beyond labor law, our methodology provides a scalable approach for building specialized LLMs in other legal subfields, improving accuracy, reliability, and societal value of legal AI applications.
|
https://arxiv.org/abs/2601.09972
|
Academic Papers
|
svg
|
9248e1d3ea6041afa933c7785300b671ebcb3b644cf3ae5f5dd4e8563c45fac1
|
2026-01-16T00:00:00-05:00
|
Correspondences in computational and dynamical complexity II: forcing complex reductions
|
arXiv:2601.09973v1 Announce Type: new Abstract: An algebraic telic problem is a decision problem in $\textsf{NP}_\mathbb{R}$ formalizing finite-time reachability questions for one-dimensional dynamical systems. We prove that the existence of "natural" mapping reductions between algebraic telic problems coming from distinct dynamical systems implies the two dynamical systems exhibit similar behavior (in a precise sense). As a consequence, we obtain explicit barriers for algorithms solving algebraic telic problems coming from complex dynamical systems, such as those with positive topological entropy. For example, some telic problems cannot be decided by uniform arithmetic circuit families with only $+$ and $\times$ gates.
|
https://arxiv.org/abs/2601.09973
|
Academic Papers
|
svg
|
a1be380d65d6d4b88a19d075559843a42cea539a47685e3dab8dfefa1042443b
|
2026-01-16T00:00:00-05:00
|
SPRInG: Continual LLM Personalization via Selective Parametric Adaptation and Retrieval-Interpolated Generation
|
arXiv:2601.09974v1 Announce Type: new Abstract: Personalizing Large Language Models typically relies on static retrieval or one-time adaptation, assuming user preferences remain invariant over time. However, real-world interactions are dynamic, where user interests continuously evolve, posing a challenge for models to adapt to preference drift without catastrophic forgetting. Standard continual learning approaches often struggle in this context, as they indiscriminately update on noisy interaction streams, failing to distinguish genuine preference shifts from transient contexts. To address this, we introduce SPRInG, a novel semi-parametric framework designed for effective continual personalization. During training, SPRInG employs drift-driven selective adaptation, which utilizes a likelihood-based scoring function to identify high-novelty interactions. This allows the model to selectively update the user-specific adapter on drift signals while preserving hard-to-learn residuals in a replay buffer. During inference, we apply strict relevance gating and fuse parametric knowledge with retrieved history via logit interpolation. Experiments on the long-form personalized generation benchmark demonstrate that SPRInG outperforms existing baselines, validating its robustness for real-world continual personalization.
|
https://arxiv.org/abs/2601.09974
|
Academic Papers
|
svg
|
294fd19dc4b4e8c3779477c1ca047bb1025e62288de5448b7def2f89c18b0d90
|
2026-01-16T00:00:00-05:00
|
Federated Unlearning in Edge Networks: A Survey of Fundamentals, Challenges, Practical Applications and Future Directions
|
arXiv:2601.09978v1 Announce Type: new Abstract: The proliferation of connected devices and privacy-sensitive applications has accelerated the adoption of Federated Learning (FL), a decentralized paradigm that enables collaborative model training without sharing raw data. While FL addresses data locality and privacy concerns, it does not inherently support data deletion requests that are increasingly mandated by regulations such as the Right to be Forgotten (RTBF). In centralized learning, this challenge has been studied under the concept of Machine Unlearning (MU), which focuses on efficiently removing the influence of specific data samples or clients from trained models. Extending this notion to federated settings has given rise to Federated Unlearning (FUL), a new research area concerned with eliminating the contributions of individual clients or data subsets from the global FL model in a distributed and heterogeneous environment. In this survey, we first introduce the fundamentals of FUL. Then, we review the FUL frameworks that are proposed to address the three main implementation challenges, i.e., communication cost, resource allocation as well as security and privacy. Furthermore, we discuss applications of FUL in modern distributed computer networks. We also highlight the open challenges and future research opportunities. By consolidating existing knowledge and mapping open problems, this survey aims to serve as a foundational reference for researchers and practitioners seeking to advance FL to build trustworthy, regulation-compliant and user-centric federated systems.
|
https://arxiv.org/abs/2601.09978
|
Academic Papers
|
svg
|
2489c4fb6b01fadba59af213c56f89bae90e0661430dfb61c02d7174b2ebff8b
|
2026-01-16T00:00:00-05:00
|
In-Context Operator Learning on the Space of Probability Measures
|
arXiv:2601.09979v1 Announce Type: new Abstract: We introduce \emph{in-context operator learning on probability measure spaces} for optimal transport (OT). The goal is to learn a single solution operator that maps a pair of distributions to the OT map, using only few-shot samples from each distribution as a prompt and \emph{without} gradient updates at inference. We parameterize the solution operator and develop scaling-law theory in two regimes. In the \emph{nonparametric} setting, when tasks concentrate on a low-intrinsic-dimension manifold of source--target pairs, we establish generalization bounds that quantify how in-context accuracy scales with prompt size, intrinsic task dimension, and model capacity. In the \emph{parametric} setting (e.g., Gaussian families), we give an explicit architecture that recovers the exact OT map in context and provide finite-sample excess-risk bounds. Our numerical experiments on synthetic transports and generative-modeling benchmarks validate the framework.
|
https://arxiv.org/abs/2601.09979
|
Academic Papers
|
svg
|
eb418a1df0b7a8d3afaa33e437f50e77e966221353b38e9a29267376d5e5a399
|
2026-01-16T00:00:00-05:00
|
DR$^2$Seg: Decomposed Two-Stage Rollouts for Efficient Reasoning Segmentation in Multimodal Large Language Models
|
arXiv:2601.09981v1 Announce Type: new Abstract: Reasoning segmentation is an emerging vision-language task that requires reasoning over intricate text queries to precisely segment objects. However, existing methods typically suffer from overthinking, generating verbose reasoning chains that interfere with object localization in multimodal large language models (MLLMs). To address this issue, we propose DR$^2$Seg, a self-rewarding framework that improves both reasoning efficiency and segmentation accuracy without requiring extra thinking supervision. DR$^2$Seg employs a two-stage rollout strategy that decomposes reasoning segmentation into multimodal reasoning and referring segmentation. In the first stage, the model generates a self-contained description that explicitly specifies the target object. In the second stage, this description replaces the original complex query to verify its self-containment. Based on this design, two self-rewards are introduced to strengthen goal-oriented reasoning and suppress redundant thinking. Extensive experiments across MLLMs of varying scales and segmentation models demonstrate that DR$^2$Seg consistently improves reasoning efficiency and overall segmentation performance.
|
https://arxiv.org/abs/2601.09981
|
Academic Papers
|
svg
|
bea87e0b149dc16f30c51f45cba6417caa397dff71f3702e814a06aefa1627fc
|
2026-01-16T00:00:00-05:00
|
Context Volume Drives Performance: Tackling Domain Shift in Extremely Low-Resource Translation via RAG
|
arXiv:2601.09982v1 Announce Type: new Abstract: Neural Machine Translation (NMT) models for low-resource languages suffer significant performance degradation under domain shift. We quantify this challenge using Dhao, an indigenous language of Eastern Indonesia with no digital footprint beyond the New Testament (NT). When applied to the unseen Old Testament (OT), a standard NMT model fine-tuned on the NT drops from an in-domain score of 36.17 chrF++ to 27.11 chrF++. To recover this loss, we introduce a hybrid framework where a fine-tuned NMT model generates an initial draft, which is then refined by a Large Language Model (LLM) using Retrieval-Augmented Generation (RAG). The final system achieves 35.21 chrF++ (+8.10 recovery), effectively matching the original in-domain quality. Our analysis reveals that this performance is driven primarily by the number of retrieved examples rather than the choice of retrieval algorithm. Qualitative analysis confirms the LLM acts as a robust "safety net," repairing severe failures in zero-shot domains.
|
https://arxiv.org/abs/2601.09982
|
Academic Papers
|
svg
|
1e423de0c399cf33d635d33c814af795981d09cbd285beb2f1f58c941667dbb3
|
2026-01-16T00:00:00-05:00
|
FaTRQ: Tiered Residual Quantization for LLM Vector Search in Far-Memory-Aware ANNS Systems
|
arXiv:2601.09985v1 Announce Type: new Abstract: Approximate Nearest-Neighbor Search (ANNS) is a key technique in retrieval-augmented generation (RAG), enabling rapid identification of the most relevant high-dimensional embeddings from massive vector databases. Modern ANNS engines accelerate this process using prebuilt indexes and store compressed vector-quantized representations in fast memory. However, they still rely on a costly second-pass refinement stage that reads full-precision vectors from slower storage like SSDs. For modern text and multimodal embeddings, these reads now dominate the latency of the entire query. We propose FaTRQ, a far-memory-aware refinement system using tiered memory that eliminates the need to fetch full vectors from storage. It introduces a progressive distance estimator that refines coarse scores using compact residuals streamed from far memory. Refinement stops early once a candidate is provably outside the top-k. To support this, we propose tiered residual quantization, which encodes residuals as ternary values stored efficiently in far memory. A custom accelerator is deployed in a CXL Type-2 device to perform low-latency refinement locally. Together, FaTRQ improves storage efficiency by 2.4$\times$ and improves throughput by up to 9$\times$ over the state-of-the-art GPU ANNS system.
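A simplified sketch of tiered ternary residual coding with progressive decoding (illustrative only; FaTRQ's actual codec, early-termination bound, and accelerator are not specified here, and all names are hypothetical):

```python
import numpy as np

def ternary_residual_tiers(x, x_coarse, n_tiers=3):
    """Encode x - x_coarse as stacked ternary {-1, 0, +1} tiers, one scalar
    scale per tier; decoding any prefix of tiers yields a progressively
    better reconstruction, so a reader can stop streaming tiers early."""
    residual = x - x_coarse
    tiers = []
    for _ in range(n_tiers):
        scale = float(np.mean(np.abs(residual))) or 1.0
        # Correct only elements whose residual exceeds half the scale,
        # which guarantees each correction shrinks that element's error.
        code = np.where(np.abs(residual) > 0.5 * scale, np.sign(residual), 0.0)
        tiers.append((scale, code))
        residual = residual - scale * code
    return tiers

def decode(x_coarse, tiers, k=None):
    """Reconstruct from the coarse vector plus the first k tiers."""
    y = x_coarse.copy()
    for scale, code in tiers[:k]:
        y = y + scale * code
    return y

rng = np.random.default_rng(2)
x = rng.standard_normal(128)
x_coarse = np.round(x)  # stand-in for the coarse vector-quantized copy
tiers = ternary_residual_tiers(x, x_coarse)
errs = [float(np.linalg.norm(x - decode(x_coarse, tiers, k))) for k in range(4)]
```

The reconstruction error is non-increasing in the number of tiers decoded, which is the property that makes progressive distance refinement and early stopping sound.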
|
https://arxiv.org/abs/2601.09985
|
Academic Papers
|
svg
|
e593dc73001c519b7c676e8ab0cd9076bcf33454b75024ae4c1760fbf36787d2
|
2026-01-16T00:00:00-05:00
|
Outrunning Big KATs: Efficient Decision Procedures for Variants of GKAT
|
arXiv:2601.09986v1 Announce Type: new Abstract: This paper presents several efficient decision procedures for trace equivalence of GKAT automata, which make use of on-the-fly symbolic techniques via SAT solvers. To demonstrate applicability of our algorithms, we designed symbolic derivatives for CF-GKAT, a practical system based on GKAT designed to validate control-flow transformations. We implemented the algorithms in Rust and evaluated them on both randomly generated benchmarks and real-world control-flow transformations. Indeed, we observed order-of-magnitude performance improvements against existing implementations for both KAT and CF-GKAT. Notably, our experiments also revealed a bug in Ghidra, an industry-standard decompiler, highlighting the practical viability of these systems.
|
https://arxiv.org/abs/2601.09986
|
Academic Papers
|
svg
|
f919228e53a5adff3d0edf0adec9b0da6cf5cec9f86c00007fc1260eb275061c
|
2026-01-16T00:00:00-05:00
|
In-the-Wild Compliant Manipulation with UMI-FT
|
arXiv:2601.09988v1 Announce Type: new Abstract: Many manipulation tasks require careful force modulation. With insufficient force the task may fail, while excessive force could cause damage. The high cost, bulky size and fragility of commercial force/torque (F/T) sensors have limited large-scale, force-aware policy learning. We introduce UMI-FT, a handheld data-collection platform that mounts compact, six-axis force/torque sensors on each finger, enabling finger-level wrench measurements alongside RGB, depth, and pose. Using the multimodal data collected from this device, we train an adaptive compliance policy that predicts position targets, grasp force, and stiffness for execution on standard compliance controllers. In evaluations on three contact-rich, force-sensitive tasks (whiteboard wiping, skewering zucchini, and lightbulb insertion), UMI-FT enables policies that reliably regulate external contact forces and internal grasp forces, outperforming baselines that lack compliance or force sensing. UMI-FT offers a scalable path to learning compliant manipulation from in-the-wild demonstrations. We open-source the hardware and software to facilitate broader adoption at: https://umi-ft.github.io/.
|
https://arxiv.org/abs/2601.09988
|
Academic Papers
|
svg
|
c9f215b9e3541012419bc5743dce253b8eaff47528d6cf31f685d02e92e9c844
|
2026-01-16T00:00:00-05:00
|
Brief but Impactful: How Human Tutoring Interactions Shape Engagement in Online Learning
|
arXiv:2601.09994v1 Announce Type: new Abstract: Learning analytics can guide human tutors to efficiently address motivational barriers to learning that AI systems struggle to support. Students become more engaged when they receive human attention. However, what occurs during short interventions, and when are they most effective? We align student-tutor dialogue transcripts with MATHia tutoring system log data to study brief human-tutor interactions on Zoom drawn from 2,075 hours of 191 middle school students' classroom math practice. Mixed-effect models reveal that engagement, measured as successful solution steps per minute, is higher during a human-tutor visit and remains elevated afterward. Visit length exhibits diminishing returns: engagement rises during and shortly after visits, irrespective of visit length. Timing also matters: later visits yield larger immediate lifts than earlier ones, though an early visit remains important to counteract engagement decline. We create analytics that identify which tutor-student dialogues raise engagement the most. Qualitative analysis reveals that interactions with concrete, stepwise scaffolding with explicit work organization elevate engagement most strongly. We discuss implications for resource-constrained tutoring, prioritizing several brief, well-timed check-ins by a human tutor while ensuring at least one early contact. Our analytics can guide the prioritization of students for support and surface effective tutor moves in real-time.
|
https://arxiv.org/abs/2601.09994
|
Academic Papers
|
svg
|
c6041a9a1b3c1f43e394cebf54eb76dd4ac7a1e4b4b554dd5106bc5dcaf537fa
|
2026-01-16T00:00:00-05:00
|
Extremum Seeking Nonovershooting Control of Strict-Feedback Systems Under Unknown Control Direction
|
arXiv:2601.09998v1 Announce Type: new Abstract: This paper addresses the nonovershooting control problem for strict-feedback nonlinear systems with unknown control direction. We propose a method that integrates extremum seeking with Lie bracket-based design to achieve approximately nonovershooting tracking. The approach ensures that arbitrary reference trajectories can be tracked from below for any initial condition, with the overshoot reducible to arbitrarily small levels through parameter tuning. The method further provides a mechanism for enforcing high-relative-degree nonovershooting constraints in safety-critical scenarios involving unknown control directions.
|
https://arxiv.org/abs/2601.09998
|
Academic Papers
|
svg
|
d55930376c3fd73211ed34334cde98b38b4d17a2b2cdcc79a107a33766986e98
|
2026-01-16T00:00:00-05:00
|
EditEmoTalk: Controllable Speech-Driven 3D Facial Animation with Continuous Expression Editing
|
arXiv:2601.10000v1 Announce Type: new Abstract: Speech-driven 3D facial animation aims to generate realistic and expressive facial motions directly from audio. While recent methods achieve high-quality lip synchronization, they often rely on discrete emotion categories, limiting continuous and fine-grained emotional control. We present EditEmoTalk, a controllable speech-driven 3D facial animation framework with continuous emotion editing. The key idea is a boundary-aware semantic embedding that learns the normal directions of inter-emotion decision boundaries, enabling a continuous expression manifold for smooth emotion manipulation. Moreover, we introduce an emotional consistency loss that enforces semantic alignment between the generated motion dynamics and the target emotion embedding through a mapping network, ensuring faithful emotional expression. Extensive experiments demonstrate that EditEmoTalk achieves superior controllability, expressiveness, and generalization while maintaining accurate lip synchronization. Code and pretrained models will be released.
|
https://arxiv.org/abs/2601.10000
|
Academic Papers
|
svg
|
d2ef92cf096699f3e49c1c17f8277104de651d24f29bb49d4b4e79c40b89fd9a
|
2026-01-16T00:00:00-05:00
|
DW-DGAT: Dynamically Weighted Dual Graph Attention Network for Neurodegenerative Disease Diagnosis
|
arXiv:2601.10001v1 Announce Type: new Abstract: Parkinson's disease (PD) and Alzheimer's disease (AD) are the two most prevalent and incurable neurodegenerative diseases (NDs) worldwide, for which early diagnosis is critical to delay their progression. However, the high dimensionality of multi-metric data with diverse structural forms, the heterogeneity of neuroimaging and phenotypic data, and class imbalance collectively pose significant challenges to early ND diagnosis. To address these challenges, we propose a dynamically weighted dual graph attention network (DW-DGAT) that integrates: (1) a general-purpose data fusion strategy to merge three structural forms of multi-metric data; (2) a dual graph attention architecture based on brain regions and inter-sample relationships to extract both micro- and macro-level features; and (3) a class weight generation mechanism combined with two stable and effective loss functions to mitigate class imbalance. Rigorous experiments, based on the Parkinson's Progression Markers Initiative (PPMI) and Alzheimer's Disease Neuroimaging Initiative (ADNI) studies, demonstrate the state-of-the-art performance of our approach.
|
https://arxiv.org/abs/2601.10001
|
Academic Papers
|
svg
|
94e979972cef107b171547cbadfa4d24a8e26d2871e825de2722271c1bc1a0be
|
2026-01-16T00:00:00-05:00
|
SocraticKG: Knowledge Graph Construction via QA-Driven Fact Extraction
|
arXiv:2601.10003v1 Announce Type: new Abstract: Constructing Knowledge Graphs (KGs) from unstructured text provides a structured framework for knowledge representation and reasoning, yet current LLM-based approaches struggle with a fundamental trade-off: factual coverage often leads to relational fragmentation, while premature consolidation causes information loss. To address this, we propose SocraticKG, an automated KG construction method that introduces question-answer pairs as a structured intermediate representation to systematically unfold document-level semantics prior to triple extraction. By employing 5W1H-guided QA expansion, SocraticKG captures contextual dependencies and implicit relational links typically lost in direct KG extraction pipelines, providing explicit grounding in the source document that helps mitigate implicit reasoning errors. Evaluation on the MINE benchmark demonstrates that our approach effectively addresses the coverage-connectivity trade-off, achieving superior factual retention while maintaining high structural cohesion even as extracted knowledge volume substantially expands. These results highlight that QA-mediated semantic scaffolding plays a critical role in structuring semantics prior to KG extraction, enabling more coherent and reliable graph construction in subsequent stages.
|
https://arxiv.org/abs/2601.10003
|
Academic Papers
|
svg
|
7a8c5cb605c3af065ff0c74e143d1ffc215cc10c9e50758b0396c60583b97685
|
2026-01-16T00:00:00-05:00
|
SoK: Privacy-aware LLM in Healthcare: Threat Model, Privacy Techniques, Challenges and Recommendations
|
arXiv:2601.10004v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly adopted in healthcare to support clinical decision-making, summarize electronic health records (EHRs), and enhance patient care. However, this integration introduces significant privacy and security challenges, driven by the sensitivity of clinical data and the high-stakes nature of medical workflows. These risks become even more pronounced across heterogeneous deployment environments, ranging from small on-premise hospital systems to regional health networks, each with unique resource limitations and regulatory demands. This Systematization of Knowledge (SoK) examines the evolving threat landscape across the three core LLM phases: Data preprocessing, Fine-tuning, and Inference within realistic healthcare settings. We present a detailed threat model that characterizes adversaries, capabilities, and attack surfaces at each phase, and we systematize how existing privacy-preserving techniques (PPTs) attempt to mitigate these vulnerabilities. While existing defenses show promise, our analysis identifies persistent limitations in securing sensitive clinical data across diverse operational tiers. We conclude with phase-aware recommendations and future research directions aimed at strengthening privacy guarantees for LLMs in regulated environments. This work provides a foundation for understanding the intersection of LLMs, threats, and privacy in healthcare, offering a roadmap toward more robust and clinically trustworthy AI systems.
|
https://arxiv.org/abs/2601.10004
|
Academic Papers
|
svg
|
62c4ff4db73c735d67c5ea0905fdff6c91ea7312299e9480eea7b60033d9dee2
|
2026-01-16T00:00:00-05:00
|
Continuous-Depth Transformers with Learned Control Dynamics
|
arXiv:2601.10007v1 Announce Type: new Abstract: We present a hybrid transformer architecture that replaces discrete middle layers with a continuous-depth Neural Ordinary Differential Equation (ODE) block, enabling inference-time control over generation attributes via a learned steering signal. Unlike standard transformers that process representations through fixed discrete layers, our approach treats depth as a continuous variable governed by a learned vector field $F_\theta(H, \tau, u)$, where $u$ is a low-dimensional control signal injected via explicit concatenation. We validate the architecture through four experiments: (1) gradient flow stability with zero exploding/vanishing gradient events, (2) semantic steering achieving 98%/88% accuracy for positive/negative sentiment control, (3) continuous interpolation validated by a negligible 0.068% trajectory divergence between fixed and adaptive solvers, and (4) efficiency benchmarking demonstrating latency parity with standard discrete baselines. Additionally, we show that adaptive ODE solvers reveal geometric structure in the learned dynamics: the control signal partitions the vector field into distinct dynamical regimes with different curvature characteristics. The adjoint method enables $O(1)$ memory training regardless of integration depth. Our results demonstrate that continuous-depth dynamics with learned control signals provide a viable, efficient mechanism for steerable language generation.
|
https://arxiv.org/abs/2601.10007
|
Academic Papers
|
svg
|
f9f8f256e8cc858e95f0908c98e42da4a123bbb111bfff49ea76d1928ebe1f24
|
2026-01-16T00:00:00-05:00
|
The "I" in FAIR: Translating from Interoperability in Principle to Interoperation in Practice
|
arXiv:2601.10008v1 Announce Type: new Abstract: The FAIR (Findable, Accessible, Interoperable, and Reusable) data principles [1] promote the interoperability of scientific data by encouraging the use of persistent identifiers, standardized vocabularies, and formal metadata structures. Many resources are created using vocabularies that are FAIR-compliant and well-annotated, yet the collective ecosystem of these resources often fails to interoperate effectively in practice. This continued challenge is mainly due to variation in identifier schemas and data models used in these resources. We have created two tools to bridge the chasm between interoperability in principle and interoperation in practice. Babel solves the problem of multiple identifier schemes by producing a curated set of identifier mappings to create cliques of equivalent identifiers that are exposed through high-performance APIs. ORION solves the problems of multiple data models by ingesting knowledge bases and transforming them into a common, community-managed data model. Here, we describe Babel and ORION and demonstrate their ability to support data interoperation. A library of fully interoperable knowledge bases created through the application of Babel and ORION is available for download and use at https://robokop.renci.org.
|
https://arxiv.org/abs/2601.10008
|
Academic Papers
|
svg
|
e41dc84b9a2ffdbd75dd8b8ac8576232bd624cbe27e4266a6a71c2520b0a1e38
|
2026-01-16T00:00:00-05:00
|
VERHallu: Evaluating and Mitigating Event Relation Hallucination in Video Large Language Models
|
arXiv:2601.10010v1 Announce Type: new Abstract: Video Large Language Models (VideoLLMs) exhibit various types of hallucinations. Existing research has primarily focused on hallucinations involving the presence of events, objects, and scenes in videos, while largely neglecting event relation hallucination. In this paper, we introduce a novel benchmark for evaluating the Video Event Relation Hallucination, named VERHallu. This benchmark focuses on causal, temporal, and subevent relations between events, encompassing three types of tasks: relation classification, question answering, and counterfactual question answering, for a comprehensive evaluation of event relation hallucination. Additionally, it features counterintuitive video scenarios that deviate from typical pretraining distributions, with each sample accompanied by human-annotated candidates covering both vision-language and pure language biases. Our analysis reveals that current state-of-the-art VideoLLMs struggle with dense-event relation reasoning, often relying on prior knowledge due to insufficient use of frame-level cues. Although these models demonstrate strong grounding capabilities for key events, they often overlook the surrounding subevents, leading to an incomplete and inaccurate understanding of event relations. To tackle this, we propose a Key-Frame Propagating (KFP) strategy, which reallocates frame-level attention within intermediate layers to enhance multi-event understanding. Experiments show it effectively mitigates the event relation hallucination without affecting inference speed.
|
https://arxiv.org/abs/2601.10010
|
Academic Papers
|
svg
|
21202ae12770302031d1390590c0125bdfb395819cb366c7359c1b79944e88e1
|
2026-01-16T00:00:00-05:00
|
Memo-SQL: Structured Decomposition and Experience-Driven Self-Correction for Training-Free NL2SQL
|
arXiv:2601.10011v1 Announce Type: new Abstract: Existing NL2SQL systems face two critical limitations: (1) they rely on in-context learning with only correct examples, overlooking the rich signal in historical error-fix pairs that could guide more robust self-correction; and (2) test-time scaling approaches often decompose questions arbitrarily, producing near-identical SQL candidates across runs and diminishing ensemble gains. Moreover, these methods suffer from a stark accuracy-efficiency trade-off: high performance demands excessive computation, while fast variants compromise quality. We present Memo-SQL, a training-free framework that addresses these issues through two simple ideas: structured decomposition and experience-aware self-correction. Instead of leaving decomposition to chance, we apply three explicit strategies (entity-wise, hierarchical, and atomic sequential) to encourage diverse reasoning. For correction, we build a dynamic memory of both successful queries and historical error-fix pairs, and use retrieval-augmented prompting to bring relevant examples into context at inference time, with no fine-tuning or external APIs required. On BIRD, Memo-SQL achieves 68.5% execution accuracy, setting a new state of the art among open, zero-fine-tuning methods, while using over 10 times fewer resources than prior TTS approaches.
|
https://arxiv.org/abs/2601.10011
|
Academic Papers
|
svg
|
941c1f85460ffe7e4696a135aa1f5948b0e3e63f6b69bf734723d6c740c3d040
|
2026-01-16T00:00:00-05:00
|
PID-Guided Partial Alignment for Multimodal Decentralized Federated Learning
|
arXiv:2601.10012v1 Announce Type: new Abstract: Multimodal decentralized federated learning (DFL) is challenging because agents differ in available modalities and model architectures, yet must collaborate over peer-to-peer (P2P) networks without a central coordinator. Standard multimodal pipelines learn a single shared embedding across all modalities. In DFL, such a monolithic representation induces gradient misalignment between uni- and multimodal agents; as a result, it suppresses heterogeneous sharing and cross-modal interaction. We present PARSE, a multimodal DFL framework that operationalizes partial information decomposition (PID) in a server-free setting. Each agent performs feature fission to factorize its latent representation into redundant, unique, and synergistic slices. P2P knowledge sharing among heterogeneous agents is enabled by slice-level partial alignment: only semantically shareable branches are exchanged among agents that possess the corresponding modality. By removing the need for central coordination and gradient surgery, PARSE resolves uni-/multimodal gradient conflicts, thereby overcoming the multimodal DFL dilemma while remaining compatible with standard DFL constraints. Across benchmarks and agent mixes, PARSE yields consistent gains over task-, modality-, and hybrid-sharing DFL baselines. Ablations on fusion operators and split ratios, together with qualitative visualizations, further demonstrate the efficiency and robustness of the proposed design.
|
https://arxiv.org/abs/2601.10012
|
Academic Papers
|
svg
|
9462c0f0084355ae4c153bad30c42c62d670a1c9935cfbec2bf08ba651faa804
|
2026-01-16T00:00:00-05:00
|
CAFEDistill: Learning Personalized and Dynamic Models through Federated Early-Exit Network Distillation
|
arXiv:2601.10015v1 Announce Type: new Abstract: Personalized Federated Learning (PFL) enables collaborative model training on decentralized, heterogeneous data while tailoring models to each client's unique distribution. However, existing PFL methods produce static models with a fixed tradeoff between accuracy and efficiency, limiting their applicability in environments where inference requirements vary with contexts and resource availability. Early-exit networks (EENs) offer adaptive inference by attaching intermediate classifiers. Yet integrating them into PFL is challenging due to client-wise heterogeneity and depth-wise interference arising from conflicting exit objectives. Prior studies fail to resolve both conflicts simultaneously, leading to suboptimal performance. In this paper, we propose CAFEDistill, a Conflict-Aware Federated Exit Distillation framework that jointly addresses these conflicts and extends PFL to early-exit networks. Through a progressive, depth-prioritized student coordination mechanism, CAFEDistill mitigates interference among shallow and deep exits while allowing effective personalized knowledge transfer across clients. Furthermore, it reduces communication overhead via a client-decoupled formulation. Extensive evaluations show that CAFEDistill outperforms state-of-the-art methods, achieving higher accuracy and reducing inference costs by 30.79%-46.86%.
|
https://arxiv.org/abs/2601.10015
|
Academic Papers
|
svg
|
dfd23f5436f7883d5a1e5e16c79724a5943894a2ff7d7c8f9eb1c4d5fc572472
|
2026-01-16T00:00:00-05:00
|
Empowering Older Adults in Digital Technology Use with Foundation Models
|
arXiv:2601.10018v1 Announce Type: new Abstract: While high-quality technology support can assist older adults in using digital applications, many struggle to articulate their issues due to unfamiliarity with technical terminology and age-related cognitive changes. This study examines these communication challenges and explores AI-based approaches to mitigate them. We conducted a diary study with English-speaking, community-dwelling older adults to collect asynchronous, technology-related queries and used reflexive thematic analysis to identify communication barriers. To address these barriers, we evaluated how foundation models can paraphrase older adults' queries to improve solution accuracy. Two controlled experiments followed: one with younger adults evaluating AI-rephrased queries and another with older adults evaluating AI-generated solutions. We also developed a pipeline using large language models to generate the first synthetic dataset of how older adults request tech support (OATS). We identified four key communication challenges: verbosity, incompleteness, over-specification, and under-specification. Our prompt-chaining approach using the large language model, GPT-4o, elicited contextual details, paraphrased the original query, and generated a solution. AI-rephrased queries significantly improved solution accuracy (69% vs. 46%) and Google search results (69% vs. 35%). Younger adults better understood AI-rephrased queries (93.7% vs. 65.8%) and reported greater confidence and ease. Older adults reported high perceived ability to answer contextual questions (89.8%) and follow solutions (94.7%), with high confidence and ease. OATS demonstrated strong fidelity and face validity. This work shows how foundation models can enhance technology support for older adults by addressing age-related communication barriers. The OATS dataset offers a scalable resource for developing equitable AI systems that better serve aging populations.
|
https://arxiv.org/abs/2601.10018
|
Academic Papers
|
svg
|
bc28970aecadbfaf0013b63c60bd07adfba6d9f2df6f2f465d9c5ce3ffccd97c
|
2026-01-16T00:00:00-05:00
|
Time Aggregation Features for XGBoost Models
|
arXiv:2601.10019v1 Announce Type: new Abstract: This paper studies time aggregation features for XGBoost models in click-through rate prediction. The setting is the Avazu click-through rate prediction dataset with strict out-of-time splits and a no-lookahead feature constraint. Features for hour H use only impressions from hours strictly before H. This paper compares a strong time-aware target encoding baseline to models augmented with entity history time aggregation under several window designs. Across two rolling-tail folds on a deterministic ten percent sample, a trailing window specification improves ROC AUC by about 0.0066 to 0.0082 and PR AUC by about 0.0084 to 0.0094 relative to target encoding alone. Within the time aggregation design grid, event count windows provide the only consistent improvement over trailing windows, and the gain is small. Gap windows and bucketized windows underperform simple trailing windows in this dataset and protocol. These results support a practical default of trailing windows, with an optional event count window when marginal ROC AUC gains matter.
|
https://arxiv.org/abs/2601.10019
|
Academic Papers
|
svg
|
09e34950c09fbf03f93b914035135944df173525a7cb2986b883f79de9f54f22
|
2026-01-16T00:00:00-05:00
|
EHRNavigator: A Multi-Agent System for Patient-Level Clinical Question Answering over Heterogeneous Electronic Health Records
|
arXiv:2601.10020v1 Announce Type: new Abstract: Clinical decision-making increasingly relies on timely and context-aware access to patient information within Electronic Health Records (EHRs), yet most existing natural language question-answering (QA) systems are evaluated solely on benchmark datasets, limiting their practical relevance. To overcome this limitation, we introduce EHRNavigator, a multi-agent framework that harnesses AI agents to perform patient-level question answering across heterogeneous and multimodal EHR data. We assessed its performance using both public benchmark and institutional datasets under realistic hospital conditions characterized by diverse schemas, temporal reasoning demands, and multimodal evidence integration. Through quantitative evaluation and clinician-validated chart review, EHRNavigator demonstrated strong generalization, achieving 86% accuracy on real-world cases while maintaining clinically acceptable response times. Overall, these findings confirm that EHRNavigator effectively bridges the gap between benchmark evaluation and clinical deployment, offering a robust, adaptive, and efficient solution for real-world EHR question answering.
|
https://arxiv.org/abs/2601.10020
|
Academic Papers
|
svg
|
d81e593eddfa2abeb30a6948dc8f0a176de4fb6fffed8e7953e694c8a5ca0c14
|
2026-01-16T00:00:00-05:00
|
BPE: Behavioral Profiling Ensemble
|
arXiv:2601.10024v1 Announce Type: new Abstract: Ensemble learning is widely recognized as a pivotal strategy for pushing the boundaries of predictive performance. Traditional static ensemble methods, such as Stacking, typically assign weights by treating each base learner as a holistic entity, thereby overlooking the fact that individual models exhibit varying degrees of competence across different regions of the instance space. To address this limitation, Dynamic Ensemble Selection (DES) was introduced. However, both static and dynamic approaches predominantly rely on the divergence among different models as the basis for integration. This inter-model perspective neglects the intrinsic characteristics of the models themselves and necessitates a heavy reliance on validation sets for competence estimation. In this paper, we propose the Behavioral Profiling Ensemble (BPE) framework, which introduces a novel paradigm shift. Unlike traditional methods, BPE constructs a "behavioral profile" intrinsic to each model and derives integration weights based on the deviation between the model's response to a specific test instance and its established behavioral profile. Extensive experiments on both synthetic and real-world datasets demonstrate that the algorithm derived from the BPE framework achieves significant improvements over state-of-the-art ensemble baselines. These gains are evident not only in predictive accuracy but also in computational efficiency and storage resource utilization across various scenarios.
|
https://arxiv.org/abs/2601.10024
|
Academic Papers
|
svg
|
f4b8910a218381c0cdfc058173be3ab47a7091b31dd113805314df1c1cf4f022
|
2026-01-16T00:00:00-05:00
|
Structured Personality Control and Adaptation for LLM Agents
|
arXiv:2601.10025v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly shaping human-computer interaction (HCI), from personalized assistants to social simulations. Beyond language competence, researchers are exploring whether LLMs can exhibit human-like characteristics that influence engagement, decision-making, and perceived realism. Personality, in particular, is critical, yet existing approaches often struggle to achieve both nuanced and adaptable expression. We present a framework that models LLM personality via Jungian psychological types, integrating three mechanisms: a dominant-auxiliary coordination mechanism for coherent core expression, a reinforcement-compensation mechanism for temporary adaptation to context, and a reflection mechanism that drives long-term personality evolution. This design allows the agent to maintain nuanced traits while dynamically adjusting to interaction demands and gradually updating its underlying structure. Personality alignment is evaluated using Myers-Briggs Type Indicator questionnaires and tested under diverse challenge scenarios as a preliminary structured assessment. Findings suggest that evolving, personality-aware LLMs can support coherent, context-sensitive interactions, enabling naturalistic agent design in HCI.
|
https://arxiv.org/abs/2601.10025
|
Academic Papers
|
svg
|
2e67d17d32e58308e705a2c0920c432f550b32ddb498d9ba9e582ee08bd00518
|
2026-01-16T00:00:00-05:00
|
STCRank: Spatio-temporal Collaborative Ranking for Interactive Recommender System at Kuaishou E-shop
|
arXiv:2601.10027v1 Announce Type: new Abstract: As a popular e-commerce platform, Kuaishou E-shop provides precise personalized product recommendations to tens of millions of users every day. To better respond to real-time user feedback, we have deployed an interactive recommender system (IRS) alongside our core homepage recommender system. This IRS is triggered by a user click on the homepage, and generates a series of highly relevant recommendations based on the clicked item to meet focused browsing demands. Different from traditional e-commerce RecSys, the full-screen UI and immersive swiping down functionality present two distinct challenges for a regular ranking system. First, there exists explicit interference (overlap or conflicts) between ranking objectives, i.e., conversion, view and swipe down. This is because there are intrinsic behavioral co-occurrences under the premise of immersive browsing and swiping down functionality. Second, the ranking system is prone to temporal greedy traps in sequential recommendation slot transitions, which is caused by the full-screen UI design. To alleviate these challenges, we propose a novel Spatio-temporal collaborative ranking (STCRank) framework to achieve collaboration between multi-objectives within one slot (spatial) and between multiple sequential recommendation slots (temporal). In multi-objective collaboration (MOC) module, we push the Pareto frontier by mitigating the objective overlaps and conflicts. In multi-slot collaboration (MSC) module, we achieve global optima on overall sequential slots by a dual-stage look-ahead ranking mechanism. Extensive experiments demonstrate our proposed method brings about co-growth in purchases and DAU. The proposed system has been deployed at Kuaishou E-shop since June 2025.
|
https://arxiv.org/abs/2601.10027
|
Academic Papers
|
svg
|
8946efcc8c5c770e3935da3ca042bff595caa6b88ba7b1990d24191895235b9a
|
2026-01-16T00:00:00-05:00
|
Fundamental Limits of Coded Polynomial Aggregation
|
arXiv:2601.10028v1 Announce Type: new Abstract: Coded polynomial aggregation (CPA) enables the master to directly recover a weighted aggregation of polynomial evaluations without individually decoding each term, thereby reducing the number of required worker responses. In this paper, we extend CPA to straggler-aware distributed computing systems and introduce a straggler-aware CPA framework with pre-specified non-straggler patterns, where exact recovery is required only for a given collection of admissible non-straggler sets. Our main result shows that exact recovery of the desired aggregation is achievable with fewer worker responses than required by polynomial coded computing based on individual decoding, and that feasibility is fundamentally characterized by the intersection structure of the non-straggler patterns. In particular, we establish necessary and sufficient conditions for exact recovery in straggler-aware CPA and identify an intersection-size threshold that is sufficient to guarantee exact recovery. We further prove that this threshold becomes both necessary and sufficient when the number of admissible non-straggler sets is sufficiently large. We also provide an explicit construction of feasible CPA schemes whenever the intersection size exceeds the derived threshold. Finally, simulations reveal a sharp feasibility transition at the predicted threshold, providing empirical evidence that the bound is tight in practice.
|
https://arxiv.org/abs/2601.10028
|
Academic Papers
|
svg
|
afcf6ddbe8a447452281ef1fd3971cc05056311eddf3c1b3d0f5a9e5cf6afdba
|
2026-01-16T00:00:00-05:00
|
PaperScout: An Autonomous Agent for Academic Paper Search with Process-Aware Sequence-Level Policy Optimization
|
arXiv:2601.10029v1 Announce Type: new Abstract: Academic paper search is a fundamental task in scientific research, yet most existing approaches rely on rigid, predefined workflows that struggle with complex, conditional queries. To address this limitation, we propose PaperScout, an autonomous agent that reformulates paper search as a sequential decision-making process. Unlike static workflows, PaperScout dynamically decides whether, when, and how to invoke search and expand tools based on accumulated retrieval context. However, training such agents presents a fundamental challenge: standard reinforcement learning methods, typically designed for single-turn tasks, suffer from a granularity mismatch when applied to multi-turn agentic tasks, where token-level optimization diverges from the granularity of sequence-level interactions, leading to noisy credit assignment. We introduce Proximal Sequence Policy Optimization (PSPO), a process-aware, sequence-level policy optimization method that aligns optimization with agent-environment interaction. Comprehensive experiments on both synthetic and real-world benchmarks demonstrate that PaperScout significantly outperforms strong workflow-driven and RL baselines in both recall and relevance, validating the effectiveness of our adaptive agentic framework and optimization strategy.
|
https://arxiv.org/abs/2601.10029
|
Academic Papers
|
svg
|
4d0532ce7c710732f39b699958630351b57c8cefdd549191b3c21bd32b2794a3
|
2026-01-16T00:00:00-05:00
|
FilDeep: Learning Large Deformations of Elastic-Plastic Solids with Multi-Fidelity Data
|
arXiv:2601.10031v1 Announce Type: new Abstract: The scientific computation of large deformations in elastic-plastic solids is crucial in various manufacturing applications. Traditional numerical methods exhibit several inherent limitations, prompting Deep Learning (DL) as a promising alternative. The effectiveness of current DL techniques typically depends on the availability of high-quantity and high-accuracy datasets, which are yet difficult to obtain in large deformation problems. During the dataset construction process, a dilemma stands between data quantity and data accuracy, leading to suboptimal performance in the DL models. To address this challenge, we focus on a representative application of large deformations, the stretch bending problem, and propose FilDeep, a Fidelity-based Deep Learning framework for large Deformation of elastic-plastic solids. Our FilDeep aims to resolve the quantity-accuracy dilemma by simultaneously training with both low-fidelity and high-fidelity data, where the former provides greater quantity but lower accuracy, while the latter offers higher accuracy but in smaller quantity. In FilDeep, we provide meticulous designs for the practical large deformation problem. Particularly, we propose attention-enabled cross-fidelity modules to effectively capture long-range physical interactions across multi-fidelity (MF) data. To the best of our knowledge, our FilDeep presents the first DL framework for large deformation problems using MF data. Extensive experiments demonstrate that our FilDeep consistently achieves state-of-the-art performance and can be efficiently deployed in manufacturing.
|
https://arxiv.org/abs/2601.10031
|
Academic Papers
|
svg
|
07a2d6d9486be8823057b799fcee529f5239cde6fd32c393b049f6bb5981840d
|
2026-01-16T00:00:00-05:00
|
EmplifAI: a Fine-grained Dataset for Japanese Empathetic Medical Dialogues in 28 Emotion Labels
|
arXiv:2601.10033v1 Announce Type: new Abstract: This paper introduces EmplifAI, a Japanese empathetic dialogue dataset designed to support patients coping with chronic medical conditions. They often experience a wide range of positive and negative emotions (e.g., hope and despair) that shift across different stages of disease management. EmplifAI addresses this complexity by providing situation-based dialogues grounded in 28 fine-grained emotion categories, adapted and validated from the GoEmotions taxonomy. The dataset includes 280 medically contextualized situations and 4125 two-turn dialogues, collected through crowdsourcing and expert review. To evaluate emotional alignment in empathetic dialogues, we assessed model predictions on situation--dialogue pairs using BERTScore across multiple large language models (LLMs), achieving F1 scores of 0.83. Fine-tuning a baseline Japanese LLM (LLM-jp-3.1-13b-instruct4) with EmplifAI resulted in notable improvements in fluency, general empathy, and emotion-specific empathy. Furthermore, we compared the scores assigned by LLM-as-a-Judge and human raters on dialogues generated by multiple LLMs to validate our evaluation pipeline and discuss the insights and potential risks derived from the correlation analysis.
|
https://arxiv.org/abs/2601.10033
|
Academic Papers
|
svg
|
2747478941273fb6cba884eb958b3988be33351513415aa56d0858793611fb31
|
2026-01-16T00:00:00-05:00
|
A Compute and Communication Runtime Model for Loihi 2
|
arXiv:2601.10035v1 Announce Type: new Abstract: Neuromorphic computers hold the potential to vastly improve the speed and efficiency of a wide range of computational kernels with their asynchronous, compute-memory co-located, spatially distributed, and scalable nature. However, performance models that are simple yet sufficiently expressive to predict runtime on actual neuromorphic hardware are lacking, posing a challenge for researchers and developers who strive to design fast algorithms and kernels. As breaking the memory bandwidth wall of conventional von-Neumann architectures is a primary neuromorphic advantage, modeling communication time is especially important. At the same time, modeling communication time is difficult, as complex congestion patterns arise in a heavily-loaded Network-on-Chip. In this work, we introduce the first max-affine lower-bound runtime model -- a multi-dimensional roofline model -- for Intel's Loihi 2 neuromorphic chip that quantitatively accounts for both compute and communication based on a suite of microbenchmarks. Despite being a lower-bound model, we observe a tight correspondence (Pearson correlation coefficient greater than or equal to 0.97) between our model's estimated runtime and the measured runtime on Loihi 2 for a neural network linear layer, i.e., matrix-vector multiplication, and for an example application, a Quadratic Unconstrained Binary Optimization solver. Furthermore, we derive analytical expressions for communication-bottlenecked runtime to study scalability of the linear layer, revealing an area-runtime tradeoff for different spatial workload configurations with linear to superlinear runtime scaling in layer size with a variety of constant factors. Our max-affine runtime model helps empower the design of high-speed algorithms and kernels for Loihi 2.
|
https://arxiv.org/abs/2601.10035
|
Academic Papers
|
svg
|
06a990e023ddba9f62fbdb9b57af5031d3de8adc7932c091054ef4c399cc335b
|
2026-01-16T00:00:00-05:00
|
Resistive Memory based Efficient Machine Unlearning and Continual Learning
|
arXiv:2601.10037v1 Announce Type: new Abstract: Resistive memory (RM) based neuromorphic systems can emulate synaptic plasticity and thus support continual learning, but they generally lack biologically inspired mechanisms for active forgetting, which are critical for meeting modern data privacy requirements. Algorithmic forgetting, or machine unlearning, seeks to remove the influence of specific data from trained models to prevent memorization of sensitive information and the generation of harmful content, yet existing exact and approximate unlearning schemes incur prohibitive programming overheads on RM hardware owing to device variability and iterative write-verify cycles. Analogue implementations of continual learning face similar barriers. Here we present a hardware-software co-design that enables an efficient training, deployment and inference pipeline for machine unlearning and continual learning on RM accelerators. At the software level, we introduce a low-rank adaptation (LoRA) framework that confines updates to compact parameter branches, substantially reducing the number of trainable parameters and therefore the training cost. At the hardware level, we develop a hybrid analogue-digital compute-in-memory system in which well-trained weights are stored in analogue RM arrays, whereas dynamic LoRA updates are implemented in a digital computing unit with SRAM buffer. This hybrid architecture avoids costly reprogramming of analogue weights and maintains high energy efficiency during inference. Fabricated in a 180 nm CMOS process, the prototype achieves up to a 147.76-fold reduction in training cost, a 387.95-fold reduction in deployment overhead and a 48.44-fold reduction in inference energy across privacy-sensitive tasks including face recognition, speaker authentication and stylized image generation, paving the way for secure and efficient neuromorphic intelligence at the edge.
|
https://arxiv.org/abs/2601.10037
|
Academic Papers
|
svg
|
c4af46b3c87172160466863d5a43923f448dc80fc9a3448c75ae822c099a58b3
|
2026-01-16T00:00:00-05:00
|
Emergency Department Patient Flow Optimization with an Alternative Care Threshold Policy
|
arXiv:2601.10041v1 Announce Type: new Abstract: Emergency department (ED) overcrowding and patient boarding represent critical systemic challenges that compromise care quality. We propose a threshold-based admission policy that redirects non-urgent patients to alternative care pathways, such as telemedicine, during peak congestion. The ED is modeled as a two-class $M/M/c$ preemptive-priority queuing system, where high-acuity patients are prioritized and low-acuity patients are subject to state-dependent redirection. Analyzed via a level-dependent Quasi-Birth-Death (QBD) process, the model determines the optimal threshold by maximizing a long-run time-averaged objective function comprising redirection-affected revenue and costs associated with patient balking and system occupancy. Numerical analysis using national healthcare data reveals that optimal policies are highly context-dependent. While rural EDs generally optimize at lower redirection thresholds, urban EDs exhibit performance peaks at moderate thresholds. Results indicate that our optimal policy yields significant performance gains of up to $4.84\%$ in rural settings and $5.90\%$ in urban environments. This research provides a mathematically rigorous framework for balancing clinical priority with operational efficiency across diverse ED settings.
|
https://arxiv.org/abs/2601.10041
|
Academic Papers
|
svg
|
ffcdfa0ef68af7e9bf1e30b3cf86dff523da58aa95843b35692d149aa45d5090
|
2026-01-16T00:00:00-05:00
|
Event-Driven Deep RL Dispatcher for Post-Storm Distribution System Restoration
|
arXiv:2601.10044v1 Announce Type: new Abstract: Natural hazards such as hurricanes and floods damage power grid equipment, forcing operators to replan restoration repeatedly as new information becomes available. This paper develops a deep reinforcement learning (DRL) dispatcher that serves as a real-time decision engine for crew-to-repair assignments. We model restoration as a sequential, information-revealing process and learn an actor-critic policy over compact features such as component status, travel/repair times, crew availability, and marginal restoration value. A feasibility mask blocks unsafe or inoperable actions, i.e., those that would violate power flow limits, switching rules, or crew-time constraints, before they are applied. To provide realistic runtime inputs without relying on heavy solvers, we use lightweight surrogates for wind and flood intensities, fragility-based failure, spatial clustering of damage, access impairments, and progressive ticket arrivals. In simulated hurricane and flood events, the learned policy updates crew decisions in real time as new field reports arrive. Because the runtime logic is lightweight, it improves online performance (energy-not-supplied, critical-load restoration time, and travel distance) compared with mixed-integer programs and standard heuristics. The proposed approach is tested on the IEEE 13- and 123-bus feeders with mixed hurricane/flood scenarios.
|
https://arxiv.org/abs/2601.10044
|
Academic Papers
|
svg
|
11200950836c8572c958bec531c2f4e39f67a843d50b23d8b0eb8ef4cd382d53
|
2026-01-16T00:00:00-05:00
|
Privacy Enhanced PEFT: Tensor Train Decomposition Improves Privacy Utility Tradeoffs under DP-SGD
|
arXiv:2601.10045v1 Announce Type: new Abstract: Fine-tuning large language models on sensitive data poses significant privacy risks, as membership inference attacks can reveal whether individual records were used during training. While Differential Privacy (DP) provides formal protection, applying DP to conventional Parameter-Efficient Fine-Tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) often incurs substantial utility loss. In this work, we show that a more structurally constrained PEFT architecture, Tensor Train Low-Rank Adaptation (TTLoRA), can improve the privacy-utility tradeoff by shrinking the effective parameter space while preserving expressivity. To this end, we develop TTLoRA-DP, a differentially private training framework for TTLoRA. Specifically, we extend the ghost clipping algorithm to Tensor Train cores via cached contraction states, enabling efficient Differentially Private Stochastic Gradient Descent (DP-SGD) with exact per-example gradient norm computation without materializing full per-example gradients. Experiments on GPT-2 fine-tuning over the Enron and Penn Treebank datasets show that TTLoRA-DP consistently strengthens privacy protection relative to LoRA-DP while maintaining comparable or better downstream utility. Moreover, TTLoRA exhibits lower membership leakage even without DP training, using substantially smaller adapters and requiring on average 7.6X fewer parameters than LoRA. Overall, our results demonstrate that TTLoRA offers a practical path to improving the privacy-utility tradeoff in parameter-efficient language model adaptation.
|
https://arxiv.org/abs/2601.10045
|
Academic Papers
|
svg
|
e644fa8978cad103369a8787128319076cfd75a6d3b64abcf40501ca028a5f8a
|
2026-01-16T00:00:00-05:00
|
Optimal Proximity Gap for Folded Reed--Solomon Codes via Subspace Designs
|
arXiv:2601.10047v1 Announce Type: new Abstract: A collection of sets satisfies a $(\delta,\varepsilon)$-proximity gap with respect to some property if for every set in the collection, either (i) all members of the set are $\delta$-close to the property in (relative) Hamming distance, or (ii) only a small $\varepsilon$-fraction of members are $\delta$-close to the property. In a seminal work, Ben-Sasson \textit{et al.}\ showed that the collection of affine subspaces exhibits a $(\delta,\varepsilon)$-proximity gap with respect to the property of being Reed--Solomon (RS) codewords with $\delta$ up to the so-called Johnson bound for list decoding. Their technique relies on the Guruswami--Sudan list decoding algorithm for RS codes, which is guaranteed to work in the Johnson bound regime. Folded Reed--Solomon (FRS) codes are known to achieve the optimal list decoding radius $\delta$, a regime known as capacity. Moreover, a rich line of list decoding algorithms was developed for FRS codes. It is then natural to ask if FRS codes can be shown to exhibit an analogous $(\delta,\varepsilon)$-proximity gap, but up to the so-called optimal capacity regime. We answer this question in the affirmative (and the framework naturally applies more generally to suitable subspace-design codes). An additional motivation to understand proximity gaps for FRS codes is the recent results [BCDZ'25] showing that they exhibit properties similar to random linear codes, which were previously shown to be related to properties of RS codes with random evaluation points in [LMS'25], as well as codes over constant-size alphabet based on AEL [JS'25].
|
https://arxiv.org/abs/2601.10047
|
Academic Papers
|
svg
|
ecdf8b710105f610f9f8ddc0939db58b42685de55222f8706fd6312a2de2b912
|
2026-01-16T00:00:00-05:00
|
Disentangled Concept Representation for Text-to-image Person Re-identification
|
arXiv:2601.10053v1 Announce Type: new Abstract: Text-to-image person re-identification (TIReID) aims to retrieve person images from a large gallery given free-form textual descriptions. TIReID is challenging due to the substantial modality gap between visual appearances and textual expressions, as well as the need to model fine-grained correspondences that distinguish individuals with similar attributes such as clothing color, texture, or outfit style. To address these issues, we propose DiCo (Disentangled Concept Representation), a novel framework that achieves hierarchical and disentangled cross-modal alignment. DiCo introduces a shared slot-based representation, where each slot acts as a part-level anchor across modalities and is further decomposed into multiple concept blocks. This design enables the disentanglement of complementary attributes (\textit{e.g.}, color, texture, shape) while maintaining consistent part-level correspondence between image and text. Extensive experiments on CUHK-PEDES, ICFG-PEDES, and RSTPReid demonstrate that our framework achieves competitive performance with state-of-the-art methods, while also enhancing interpretability through explicit slot- and block-level representations for more fine-grained retrieval results.
|
https://arxiv.org/abs/2601.10053
|
Academic Papers
|
svg
|
5c51d4f325d81a30635bd2f1c95e5a134b699bf7b265090a5323e70c6f2dee39
|
2026-01-16T00:00:00-05:00
|
UEOF: A Benchmark Dataset for Underwater Event-Based Optical Flow
|
arXiv:2601.10054v1 Announce Type: new Abstract: Underwater imaging is fundamentally challenging due to wavelength-dependent light attenuation, strong scattering from suspended particles, turbidity-induced blur, and non-uniform illumination. These effects impair standard cameras and make ground-truth motion nearly impossible to obtain. On the other hand, event cameras offer microsecond resolution and high dynamic range. Nonetheless, progress on investigating event cameras for underwater environments has been limited due to the lack of datasets that pair realistic underwater optics with accurate optical flow. To address this problem, we introduce the first synthetic underwater benchmark dataset for event-based optical flow derived from physically-based ray-traced RGBD sequences. Using a modern video-to-event pipeline applied to rendered underwater videos, we produce realistic event data streams with dense ground-truth flow, depth, and camera motion. Moreover, we benchmark state-of-the-art learning-based and model-based optical flow prediction methods to understand how underwater light transport affects event formation and motion estimation accuracy. Our dataset establishes a new baseline for future development and evaluation of underwater event-based perception algorithms. The source code and dataset for this project are publicly available at https://robotic-vision-lab.github.io/ueof.
|
https://arxiv.org/abs/2601.10054
|
Academic Papers
|
svg
|
63c4e11a212127eaa8af39514cf70c817b1ae3e9de757fd8791379aa235f37ba
|
2026-01-16T00:00:00-05:00
|
An Efficient Constant-Coefficient MSAV Scheme for Computing Vesicle Growth and Shrinkage
|
arXiv:2601.10057v1 Announce Type: new Abstract: We present a fast, unconditionally energy-stable numerical scheme for simulating vesicle deformation under osmotic pressure using a phase-field approach. The model couples an Allen-Cahn equation for the biomembrane interface with a variable-mobility Cahn-Hilliard equation governing mass exchange across the membrane. Classical approaches, including nonlinear multigrid and Multiple Scalar Auxiliary Variable (MSAV) methods, require iterative solution of variable-coefficient systems at each time step, resulting in substantial computational cost. We introduce a constant-coefficient MSAV (CC-MSAV) scheme that incorporates stabilization directly into the Cahn-Hilliard evolution equation rather than the chemical potential. This reformulation yields fully decoupled constant-coefficient elliptic problems solvable via fast discrete cosine transform (DCT), eliminating iterative solvers entirely. The method achieves O(N^2 log N) complexity per time step while preserving unconditional energy stability and discrete mass conservation. Numerical experiments verify second-order temporal and spatial accuracy, mass conservation to relative errors below 5 x 10^-11, and close agreement with nonlinear multigrid benchmarks. On grids with N >= 2048, CC-MSAV achieves 6-15x overall speedup compared to classical MSAV with optimized preconditioning, while the dominant Cahn-Hilliard subsystem is accelerated by up to two orders of magnitude. These efficiency gains, achieved without sacrificing accuracy, make CC-MSAV particularly well suited for large-scale simulations of vesicle dynamics.
|
https://arxiv.org/abs/2601.10057
|
Academic Papers
|
svg
|
66973904e4537e90c9affef6c5fc78b4037945b357a868889979264809bd89f7
|
2026-01-16T00:00:00-05:00
|
Unlabeled Data Can Provably Enhance In-Context Learning of Transformers
|
arXiv:2601.10058v1 Announce Type: new Abstract: Large language models (LLMs) exhibit impressive in-context learning (ICL) capabilities, yet the quality of their predictions is fundamentally limited by the few costly labeled demonstrations that can fit into a prompt. Meanwhile, there exist vast and continuously growing amounts of unlabeled data that may be closely related to the ICL task. How to utilize such unlabeled data to provably enhance the performance of ICL thus becomes an emerging fundamental question. In this work, we propose a novel augmented ICL framework, in which the prompt includes a small set of labeled examples alongside a block of unlabeled inputs. We focus on the multi-class linear classification setting and demonstrate that, with chain-of-thought (CoT) prompting, a multi-layer transformer can effectively emulate an expectation-maximization (EM) algorithm. This enables the transformer to implicitly extract useful information from both labeled and unlabeled data, leading to provable improvements in ICL accuracy. Moreover, we show that such a transformer can be trained via teacher forcing, with its parameters converging to the desired solution at a linear rate. Experiments demonstrate that the augmented ICL framework consistently outperforms conventional few-shot ICL, providing empirical support for our theoretical findings. To the best of our knowledge, this is the first theoretical study on the impact of unlabeled data on the ICL performance of transformers.
|
https://arxiv.org/abs/2601.10058
|
Academic Papers
|
svg
|
e6d522fbeb688623f4711553880b149a2a844ccd83f4eaab9f5ae33a780e56bd
|
2026-01-16T00:00:00-05:00
|
CoF-T2I: Video Models as Pure Visual Reasoners for Text-to-Image Generation
|
arXiv:2601.10061v1 Announce Type: new Abstract: Recent video generation models have revealed the emergence of Chain-of-Frame (CoF) reasoning, enabling frame-by-frame visual inference. With this capability, video models have been successfully applied to various visual tasks (e.g., maze solving, visual puzzles). However, their potential to enhance text-to-image (T2I) generation remains largely unexplored due to the absence of a clearly defined visual reasoning starting point and interpretable intermediate states in the T2I generation process. To bridge this gap, we propose CoF-T2I, a model that integrates CoF reasoning into T2I generation via progressive visual refinement, where intermediate frames act as explicit reasoning steps and the final frame is taken as output. To establish such an explicit generation process, we curate CoF-Evol-Instruct, a dataset of CoF trajectories that model the generation process from semantics to aesthetics. To further improve quality and avoid motion artifacts, we enable independent encoding operation for each frame. Experiments show that CoF-T2I significantly outperforms the base video model and achieves competitive performance on challenging benchmarks, reaching 0.86 on GenEval and 7.468 on Imagine-Bench. These results indicate the substantial promise of video models for advancing high-quality text-to-image generation.
|
https://arxiv.org/abs/2601.10061
|
Academic Papers
|
svg
|
ddc73eccf35ecb3b3468742e78a3b19ffe245844a4b8bcdad221aad8f26ca145
|
2026-01-16T00:00:00-05:00
|
Long-Chain Reasoning Distillation via Adaptive Prefix Alignment
|
arXiv:2601.10064v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in solving complex mathematical problems. Recent studies show that distilling long reasoning trajectories can effectively enhance the reasoning performance of small-scale student models. However, teacher-generated reasoning trajectories are often excessively long and structurally complex, making them difficult for student models to learn. This mismatch leads to a gap between the provided supervision signal and the learning capacity of the student model. To address this challenge, we propose Prefix-ALIGNment distillation (P-ALIGN), a framework that fully exploits teacher CoTs for distillation through adaptive prefix alignment. Specifically, P-ALIGN adaptively truncates teacher-generated reasoning trajectories by determining whether the remaining suffix is concise and sufficient to guide the student model. Then, P-ALIGN leverages the teacher-generated prefix to supervise the student model, encouraging effective prefix alignment. Experiments on multiple mathematical reasoning benchmarks demonstrate that P-ALIGN outperforms all baselines by over 3%. Further analysis indicates that the prefixes constructed by P-ALIGN provide more effective supervision signals, while avoiding the negative impact of redundant and uncertain reasoning components. All code is available at https://github.com/NEUIR/P-ALIGN.
|
https://arxiv.org/abs/2601.10064
|
Academic Papers
|
svg
|
d7c60296fc3d194395262982440e5a7ff04e33ba0cd3490e2f8f5707d7aeb79f
|
2026-01-16T00:00:00-05:00
|
Efficient Content-based Recommendation Model Training via Noise-aware Coreset Selection
|
arXiv:2601.10067v1 Announce Type: new Abstract: Content-based recommendation systems (CRSs) utilize content features to predict user-item interactions, serving as essential tools for helping users navigate information-rich web services. However, ensuring the effectiveness of CRSs requires large-scale and even continuous model training to accommodate diverse user preferences, resulting in significant computational costs and resource demands. A promising approach to this challenge is coreset selection, which identifies a small but representative subset of data samples that preserves model quality while reducing training overhead. Yet, the selected coreset is vulnerable to the pervasive noise in user-item interactions, particularly when it is minimally sized. To this end, we propose Noise-aware Coreset Selection (NaCS), a specialized framework for CRSs. NaCS constructs coresets through submodular optimization based on training gradients, while simultaneously correcting noisy labels using a progressively trained model. Meanwhile, we refine the selected coreset by filtering out low-confidence samples through uncertainty quantification, thereby avoiding training on unreliable interactions. Through extensive experiments, we show that NaCS produces higher-quality coresets for CRSs while achieving better efficiency than existing coreset selection techniques. Notably, NaCS recovers 93-95\% of full-dataset training performance using merely 1\% of the training data. The source code is available at \href{https://github.com/chenxing1999/nacs}{https://github.com/chenxing1999/nacs}.
|
https://arxiv.org/abs/2601.10067
|
Academic Papers
|
svg
|
f3eea7b503d45d044bb3e43af005d8d1d3d39df63728f4cdbcc7095483b85062
|
2026-01-16T00:00:00-05:00
|
S$^2$F: Principled Hybrid Testing With Fuzzing, Symbolic Execution, and Sampling
|
arXiv:2601.10068v1 Announce Type: new Abstract: Hybrid testing that integrates fuzzing, symbolic execution, and sampling has demonstrated superior testing efficiency compared to individual techniques. However, the state-of-the-art (SOTA) hybrid testing tools do not fully exploit the capabilities of symbolic execution and sampling in two key aspects. First, the SOTA hybrid testing tools employ tailored symbolic execution engines that tend to over-prune branches, leading to considerable time wasted waiting for seeds from the fuzzer and missing opportunities to discover crashes. Second, existing methods do not apply sampling to the appropriate branches and therefore cannot utilize the full capability of sampling. To address these two limitations, we propose a novel hybrid testing architecture that combines the precision of conventional symbolic execution with the scalability of tailored symbolic execution engines. Based on this architecture, we propose several principles for combining fuzzing, symbolic execution, and sampling. We implement our method in a hybrid testing tool S$^2$F. To evaluate its effectiveness, we conduct extensive experiments on 15 real-world programs. Experimental results demonstrate that S$^2$F outperforms the SOTA tool, achieving an average improvement of 6.14% in edge coverage and 32.6% in discovered crashes. Notably, our tool uncovers three previously unknown crashes in real-world programs.
|
https://arxiv.org/abs/2601.10068
|
Academic Papers
|
svg
|
c3e028d2ab6cda95d1dd066cd8b43df6e45a17cf3c685182acb4396e023908e1
|
2026-01-16T00:00:00-05:00
|
Comparative Evaluation of Deep Learning-Based and WHO-Informed Approaches for Sperm Morphology Assessment
|
arXiv:2601.10070v1 Announce Type: new Abstract: Assessment of sperm morphological quality remains a critical yet subjective component of male fertility evaluation, often limited by inter-observer variability and resource constraints. This study presents a comparative biomedical artificial intelligence framework evaluating an image-based deep learning model (HuSHeM) alongside a clinically grounded baseline derived from World Health Organization criteria augmented with the Systemic Inflammation Response Index (WHO(+SIRI)). The HuSHeM model was trained on high-resolution sperm morphology images and evaluated using an independent clinical cohort. Model performance was assessed using discrimination, calibration, and clinical utility analyses. The HuSHeM model demonstrated higher discriminative performance, as reflected by an increased area under the receiver operating characteristic curve with relatively narrow confidence intervals compared to WHO(+SIRI). Precision-recall analysis further indicated improved performance under class imbalance, with higher precision-recall area values across evaluated thresholds. Calibration analysis indicated closer agreement between predicted probabilities and observed outcomes for HuSHeM, while decision curve analysis suggested greater net clinical benefit across clinically relevant threshold probabilities. These findings suggest that image-based deep learning may offer improved predictive reliability and clinical utility compared with traditional rule-based and inflammation-augmented criteria. The proposed framework supports objective and reproducible assessment of sperm morphology and may serve as a decision-support tool within fertility screening and referral workflows. The proposed models are intended as decision-support or referral tools and are not designed to replace clinical judgment or laboratory assessment.
|
https://arxiv.org/abs/2601.10070
|
Academic Papers
|
svg
|
6a402f5b10bf573c9ee9fbc1aff758e6ca4451677948496d444148032471825f
|
2026-01-16T00:00:00-05:00
|
ReaMIL: Reasoning- and Evidence-Aware Multiple Instance Learning for Whole-Slide Histopathology
|
arXiv:2601.10073v1 Announce Type: new Abstract: We introduce ReaMIL (Reasoning- and Evidence-Aware MIL), a multiple instance learning approach for whole-slide histopathology that adds a light selection head to a strong MIL backbone. The head produces soft per-tile gates and is trained with a budgeted-sufficiency objective: a hinge loss that enforces the true-class probability to be $\geq \tau$ using only the kept evidence, under a sparsity budget on the number of selected tiles. The budgeted-sufficiency objective yields small, spatially compact evidence sets without sacrificing baseline performance. Across TCGA-NSCLC (LUAD vs. LUSC), TCGA-BRCA (IDC vs. Others), and PANDA, ReaMIL matches or slightly improves baseline AUC and provides quantitative evidence-efficiency diagnostics. On NSCLC, it attains AUC 0.983 with a mean minimal sufficient K (MSK) $\approx 8.2$ tiles at $\tau = 0.90$ and AUKC $\approx 0.864$, showing that class confidence rises sharply and stabilizes once a small set of tiles is kept. The method requires no extra supervision, integrates seamlessly with standard MIL training, and naturally yields slide-level overlays. We report accuracy alongside MSK, AUKC, and contiguity for rigorous evaluation of model behavior on WSIs.
|
https://arxiv.org/abs/2601.10073
|
Academic Papers
|
svg
|
680d988e53d0ca2d2078c3405d1d76759e0130a544dfcfdadd3b26c74a1d5811
|
2026-01-16T00:00:00-05:00
|
Thinking Like Van Gogh: Structure-Aware Style Transfer via Flow-Guided 3D Gaussian Splatting
|
arXiv:2601.10075v1 Announce Type: new Abstract: In 1888, Vincent van Gogh wrote, "I am seeking exaggeration in the essential." This principle, amplifying structural form while suppressing photographic detail, lies at the core of Post-Impressionist art. However, most existing 3D style transfer methods invert this philosophy, treating geometry as a rigid substrate for surface-level texture projection. To authentically reproduce Post-Impressionist stylization, geometric abstraction must be embraced as the primary vehicle of expression. We propose a flow-guided geometric advection framework for 3D Gaussian Splatting (3DGS) that operationalizes this principle in a mesh-free setting. Our method extracts directional flow fields from 2D paintings and back-propagates them into 3D space, rectifying Gaussian primitives to form flow-aligned brushstrokes that conform to scene topology without relying on explicit mesh priors. This enables expressive structural deformation driven directly by painterly motion rather than photometric constraints. Our contributions are threefold: (1) a projection-based, mesh-free flow guidance mechanism that transfers 2D artistic motion into 3D Gaussian geometry; (2) a luminance-structure decoupling strategy that isolates geometric deformation from color optimization, mitigating artifacts during aggressive structural abstraction; and (3) a VLM-as-a-Judge evaluation framework that assesses artistic authenticity through aesthetic judgment instead of conventional pixel-level metrics, explicitly addressing the subjective nature of artistic stylization.
|
https://arxiv.org/abs/2601.10075
|
Academic Papers
|
svg
|
a80f10af707f3969acea63e222e004a4413937459ec9a6255883b771368e240a
|
2026-01-16T00:00:00-05:00
|
Sparse-RL: Breaking the Memory Wall in LLM Reinforcement Learning via Stable Sparse Rollouts
|
arXiv:2601.10079v1 Announce Type: new Abstract: Reinforcement Learning (RL) has become essential for eliciting complex reasoning capabilities in Large Language Models (LLMs). However, the substantial memory overhead of storing Key-Value (KV) caches during long-horizon rollouts acts as a critical bottleneck, often prohibiting efficient training on limited hardware. While existing KV compression techniques offer a remedy for inference, directly applying them to RL training induces a severe policy mismatch, leading to catastrophic performance collapse. To address this, we introduce Sparse-RL, a framework that enables stable RL training under sparse rollouts. We show that instability arises from a fundamental policy mismatch among the dense old policy, the sparse sampler policy, and the learner policy. To mitigate this issue, Sparse-RL incorporates Sparsity-Aware Rejection Sampling and Importance-based Reweighting to correct the off-policy bias introduced by compression-induced information loss. Experimental results show that Sparse-RL reduces rollout overhead compared to dense baselines while preserving performance. Furthermore, Sparse-RL inherently implements sparsity-aware training, significantly enhancing model robustness during sparse inference deployment.
|
https://arxiv.org/abs/2601.10079
|
Academic Papers
|
svg
|
89c8e73e16e08fb6c6809c5bc83bed00b640692a5da9605f22528004b3044c27
|
2026-01-16T00:00:00-05:00
|
Deriving Character Logic from Storyline as Codified Decision Trees
|
arXiv:2601.10080v1 Announce Type: new Abstract: Role-playing (RP) agents rely on behavioral profiles to act consistently across diverse narrative contexts, yet existing profiles are largely unstructured, non-executable, and weakly validated, leading to brittle agent behavior. We propose Codified Decision Trees (CDT), a data-driven framework that induces an executable and interpretable decision structure from large-scale narrative data. CDT represents behavioral profiles as a tree of conditional rules, where internal nodes correspond to validated scene conditions and leaves encode grounded behavioral statements, enabling deterministic retrieval of context-appropriate rules at execution time. The tree is learned by iteratively inducing candidate scene-action rules, validating them against data, and refining them through hierarchical specialization, yielding profiles that support transparent inspection and principled updates. Across multiple benchmarks, CDT substantially outperforms human-written profiles and prior profile induction methods on $85$ characters across $16$ artifacts, indicating that codified and validated behavioral representations lead to more reliable agent grounding.
|
https://arxiv.org/abs/2601.10080
|
Academic Papers
|
svg
|
b0afba20cc8cfea3c1cb6cc5a4b13a7f3387ff86f22a20952c0d785a984f4115
|
2026-01-16T00:00:00-05:00
|
Is MT Ready for the Next Crisis or Pandemic?
|
arXiv:2601.10082v1 Announce Type: new Abstract: Communication in times of crisis is essential. However, there is often a mismatch between the language of governments, aid providers, doctors, and those to whom they are providing aid. Commercial MT systems are reasonable tools to turn to in these scenarios. But how effective are these tools for translating to and from low resource languages, particularly in the crisis or medical domain? In this study, we evaluate four commercial MT systems using the TICO-19 dataset, which is composed of pandemic-related sentences from a large set of high priority languages spoken by communities most likely to be affected adversely in the next pandemic. We then assess the current degree of ``readiness'' for another pandemic (or epidemic) based on the usability of the output translations.
|
https://arxiv.org/abs/2601.10082
|
Academic Papers
|
svg
|
7fdbaa02b1c023bfc9c7c51914b58a7f93e167e817919abbf1a262bd466578f5
|
2026-01-16T00:00:00-05:00
|
Starfield: Demand-Aware Satellite Topology Design for Low-Earth Orbit Mega Constellations
|
arXiv:2601.10083v1 Announce Type: new Abstract: Low-Earth orbit (LEO) mega-constellations are emerging as high-capacity backbones for next-generation Internet. Deployment of laser terminals enables high-bandwidth, low-latency inter-satellite links (ISLs); however, their limited number, slow acquisition, and instability make forming a stable satellite topology difficult. Existing patterns like +Grid and Motif ignore regional traffic, ground station placement, and constellation geometry. Given the sparse population distribution on Earth and the isolation of rural areas, traffic patterns are inherently non-uniform, providing an opportunity to orient ISLs according to these traffic patterns. In this paper, we propose Starfield, a novel demand-aware satellite topology design heuristic algorithm supported by mathematical analysis. We first formulate a vector field on the constellation's shell according to traffic flows and define a corresponding Riemannian metric on the spherical manifold of the shell. The metric, combined with the spatial geometry, is used to assign a distance to each potential ISL, which we then aggregate over all demand flows to generate a heuristic for each satellite's link selection. Inspired by +Grid, each satellite selects the link with the minimum Riemannian heuristic along with its corresponding angular links. To evaluate Starfield, we developed a custom, link-aware, and link-configurable packet-level simulator, comparing it against +Grid and Random topologies. For the Phase 1 Starlink, simulation results show up to a 30% reduction in hop count and a 15% improvement in stretch factor across multiple traffic distributions. Moreover, static Starfield, an inter-orbital link matching modification of Starfield, achieves a 20% improvement in stretch factor under realistic traffic patterns compared to +Grid. Experiments further demonstrate Starfield's robustness under traffic demand perturbations.
|
https://arxiv.org/abs/2601.10083
|
Academic Papers
|
svg
|