Columns:
id: string (64 chars)
published: string (19-25 chars)
title: string (7-262 chars)
description: string (6-54.4k chars)
link: string (31-227 chars)
category: string (6 classes)
image: string (3-247 chars)
c65a90434a8d18c56f64467018d06027f2799ef38b192d88fa98ae27a7bba342
2026-01-07T00:00:00-05:00
SmartSnap: Proactive Evidence Seeking for Self-Verifying Agents
arXiv:2512.22322v2 Announce Type: replace Abstract: Agentic reinforcement learning (RL) holds great promise for the development of autonomous agents for complex GUI tasks, but its scalability remains severely hampered by the verification of task completion. Existing task verification is treated as a passive, post-hoc process: a verifier (i.e., a rule-based scoring script, a reward or critic model, or an LLM-as-a-Judge) analyzes the agent's entire interaction trajectory to determine whether the agent succeeded. Processing such verbose context, which contains irrelevant, noisy history, challenges the verification protocols and therefore leads to prohibitive cost and low reliability. To overcome this bottleneck, we propose SmartSnap, a paradigm shift from passive, post-hoc verification to proactive, in-situ self-verification by the agent itself. We introduce the Self-Verifying Agent, a new type of agent designed with a dual mission: not only to complete a task but also to prove its accomplishment with curated snapshot evidence. Guided by our proposed 3C Principles (Completeness, Conciseness, and Creativity), the agent leverages its access to the online environment to perform self-verification on a minimal, decisive set of snapshots. This evidence is provided as the sole material for a general LLM-as-a-Judge verifier to determine its validity and relevance. Experiments on mobile tasks across model families and scales demonstrate that our SmartSnap paradigm allows training LLM-driven agents in a scalable manner, bringing performance gains of up to 26.08% and 16.66% to 8B and 30B models, respectively. The synergy between solution finding and evidence seeking facilitates the cultivation of efficient, self-verifying agents with performance competitive with DeepSeek V3.1 and Qwen3-235B-A22B. Code is available at: https://github.com/TencentYoutuResearch/SmartSnap
https://arxiv.org/abs/2512.22322
Academic Papers
svg
e3d9c74600c3b73b60d3b2ba752db5e50bec2abcd9500b94d9553933c7fc15c4
2026-01-07T00:00:00-05:00
When Does Multi-Task Learning Fail? Quantifying Data Imbalance and Task Independence in Metal Alloy Property Prediction
arXiv:2512.22740v2 Announce Type: replace Abstract: Multi-task learning (MTL) is widely adopted in materials informatics under the assumption that related properties share leverageable physical principles. This study critically examines this premise by simultaneously predicting electrical resistivity, Vickers hardness, and amorphous-forming ability using a dataset of 54,028 metal alloys. Contrary to expectations, we observe a striking dichotomy: MTL significantly degrades regression accuracy (e.g., hardness $R^2$ drops from $0.832$ to $0.694$) while improving classification performance (amorphous F1 increases from $0.703$ to $0.744$). Analysis of learned task graphs reveals negligible inter-task correlations, attributing regression failure to negative transfer driven by severe data imbalance (52,388 vs. 800 samples). To mitigate this, we evaluate Deep Imbalanced Regression techniques. PCGrad recovers hardness performance ($R^2 \rightarrow 0.855$) by resolving gradient conflicts, while LDS+GradNorm achieves the best overall multi-task balance. Our findings suggest that alloy properties often behave independently, necessitating specific strategies: independent models for maximum regression precision, PCGrad for minority tasks, and LDS+GradNorm when balanced joint prediction is required.
https://arxiv.org/abs/2512.22740
Academic Papers
svg
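The gradient-surgery step that lets PCGrad recover the minority regression task can be sketched in a few lines. This is a minimal NumPy illustration of the published PCGrad rule (project a task gradient onto the normal plane of any conflicting task gradient), not the authors' training code; the toy gradients are invented for illustration.

```python
import numpy as np

def pcgrad(grads, seed=0):
    """PCGrad gradient surgery: if task i's gradient conflicts with
    task j's (negative dot product), remove its component along j."""
    rng = np.random.default_rng(seed)
    projected = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        for j in rng.permutation(len(grads)):  # random task order, as in PCGrad
            if j == i:
                continue
            h = grads[j]
            dot = float(g @ h)
            if dot < 0:  # conflicting gradients
                g = g - (dot / float(h @ h)) * h
        projected.append(g)
    return sum(projected)  # combined, conflict-free update

# Two conflicting task gradients (e.g., majority vs. minority task).
g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])
update = pcgrad([g1, g2])  # -> array([0.5, 1.5])
```

After projection, the combined update no longer points against either task's gradient, which is the mechanism the abstract credits for resolving the negative transfer.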
5e69f3adbb4b4db39a63b780663f9e05498c06f9b6b1a21cdbc062abce61f06c
2026-01-07T00:00:00-05:00
Agentic Physical AI toward a Domain-Specific Foundation Model for Nuclear Reactor Control
arXiv:2512.23292v2 Announce Type: replace Abstract: The prevailing paradigm in AI for physical systems, scaling general-purpose foundation models toward universal multimodal reasoning, confronts a fundamental barrier at the control interface. Recent benchmarks show that even frontier vision-language models achieve only 50-53% accuracy on basic quantitative physics tasks, behaving as approximate guessers that preserve semantic plausibility while violating physical constraints. This input unfaithfulness is not a scaling deficiency but a structural limitation. Perception-centric architectures optimize parameter-space imitation, whereas safety-critical control demands outcome-space guarantees over executed actions. Here, we present a fundamentally different pathway toward domain-specific foundation models by introducing compact language models operating as Agentic Physical AI, in which policy optimization is driven by physics-based validation rather than perceptual inference. We train a 360-million-parameter model on synthetic reactor control scenarios, scaling the dataset from 10^3 to 10^5 examples. This induces a sharp phase transition absent in general-purpose models. Small-scale systems exhibit high-variance imitation with catastrophic tail risk, while large-scale models undergo variance collapse exceeding 500x reduction, stabilizing execution-level behavior. Despite balanced exposure to four actuation families, the model autonomously rejects approximately 70% of the training distribution and concentrates 95% of runtime execution on a single-bank strategy. Learned representations transfer across distinct physics and continuous input modalities without architectural modification.
https://arxiv.org/abs/2512.23292
Academic Papers
svg
9660538b8b8a733b37546e759c15802f989a083ad6da36f80d9312f8faac5d50
2026-01-07T00:00:00-05:00
Less is more: Probabilistic reduction is best explained by small-scale predictability measures
arXiv:2512.23659v2 Announce Type: replace Abstract: The primary research questions of this paper center on defining the amount of context that is necessary and/or appropriate when investigating the relationship between language model probabilities and cognitive phenomena. We investigate whether whole utterances are necessary to observe probabilistic reduction and demonstrate that n-gram representations suffice as cognitive units of planning.
https://arxiv.org/abs/2512.23659
Academic Papers
svg
e39b03a65bf64947d42abc166a491d23bf77f0bf321be1e89eebde3a7e0eb80f
2026-01-07T00:00:00-05:00
Large Empirical Case Study: Go-Explore adapted for AI Red Team Testing
arXiv:2601.00042v2 Announce Type: replace Abstract: Production LLM agents with tool-using capabilities require security testing despite their safety training. We adapt Go-Explore to evaluate GPT-4o-mini across 28 experimental runs spanning six research questions. We find that random-seed variance dominates algorithmic parameters, yielding an 8x spread in outcomes; single-seed comparisons are unreliable, while multi-seed averaging materially reduces variance in our setup. Reward shaping consistently harms performance, causing exploration collapse in 94% of runs or producing 18 false positives with zero verified attacks. In our environment, simple state signatures outperform complex ones. For comprehensive security testing, ensembles provide attack-type diversity, whereas single agents optimize coverage within a given attack type. Overall, these results suggest that seed variance and targeted domain knowledge can outweigh algorithmic sophistication when testing safety-trained models.
https://arxiv.org/abs/2601.00042
Academic Papers
svg
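The paper's central methodological point, that multi-seed averaging tames seed-dominated variance, can be illustrated with standard-library Python. The uniform outcome distribution below is a stand-in (not the paper's benchmark), chosen only to mimic the reported roughly 8x spread across seeds.

```python
import random
import statistics

def run_experiment(seed):
    # Stand-in for one Go-Explore red-team run whose outcome is
    # dominated by seed noise (mimicking the reported ~8x spread).
    return random.Random(seed).uniform(1.0, 8.0)

# Single-seed comparisons: one run per configuration.
single = [run_experiment(s) for s in range(40)]

# Multi-seed averaging: mean of 5 independent seeds per configuration.
averaged = [
    statistics.mean(run_experiment(b * 100 + i) for i in range(5))
    for b in range(40)
]

print(statistics.pstdev(single), statistics.pstdev(averaged))
```

Averaging k seeds shrinks the standard deviation by roughly sqrt(k), which is why single-seed comparisons between algorithmic variants are unreliable here.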
f2ce286e9ab963e6842c603530563e492932f78c602d7534bccf66443511a62e
2026-01-07T00:00:00-05:00
Adaptive Constraint Propagation: Scaling Structured Inference for Large Language Models via Meta-Reinforcement Learning
arXiv:2601.00095v2 Announce Type: replace Abstract: Large language models increasingly require structured inference, from JSON schema enforcement to multi-lingual parsing, where outputs must satisfy complex constraints. We introduce MetaJuLS, a meta-reinforcement learning approach that learns universal constraint propagation policies applicable across languages and tasks without task-specific retraining. By formulating structured inference as adaptive constraint propagation and training a Graph Attention Network with meta-learning, MetaJuLS achieves 1.5--2.0$\times$ speedups over GPU-optimized baselines while maintaining within 0.2\% accuracy of state-of-the-art parsers. On Universal Dependencies across 10 languages and LLM-constrained generation (LogicBench, GSM8K-Constrained), MetaJuLS demonstrates rapid cross-domain adaptation: a policy trained on English parsing adapts to new languages and tasks with 5--10 gradient steps (5--15 seconds) rather than requiring hours of task-specific training. Mechanistic analysis reveals the policy discovers human-like parsing strategies (easy-first) and novel non-intuitive heuristics. By reducing propagation steps in LLM deployments, MetaJuLS contributes to Green AI by directly reducing inference carbon footprint.
https://arxiv.org/abs/2601.00095
Academic Papers
svg
552c70f9229ac2d1e6efce395672c910f4562ccb1a351de8599b2a8ffafc452f
2026-01-07T00:00:00-05:00
FCMBench: A Comprehensive Financial Credit Multimodal Benchmark for Real-world Applications
arXiv:2601.00150v2 Announce Type: replace Abstract: As multimodal AI becomes widely used for credit risk assessment and document review, a domain-specific benchmark is urgently needed that (1) reflects documents and workflows specific to financial credit applications, (2) includes credit-specific understanding and real-world robustness, and (3) preserves privacy compliance without sacrificing practical utility. Here, we introduce FCMBench-V1.0 -- a large-scale financial credit multimodal benchmark for real-world applications, covering 18 core certificate types, with 4,043 privacy-compliant images and 8,446 QA samples. The FCMBench evaluation framework consists of three dimensions: Perception, Reasoning, and Robustness, including 3 foundational perception tasks, 4 credit-specific reasoning tasks that require decision-oriented understanding of visual evidence, and 10 real-world acquisition artifact types for robustness stress testing. To reconcile compliance with realism, we construct all samples via a closed synthesis-capture pipeline: we manually synthesize document templates with virtual content and capture scenario-aware images in-house. This design also mitigates pre-training data leakage by avoiding web-sourced or publicly released images. FCMBench can effectively discriminate performance disparities and robustness across modern vision-language models. Extensive experiments were conducted on 23 state-of-the-art vision-language models (VLMs) from 14 top AI companies and research institutes. Among them, Gemini 3 Pro achieves the best F1(\%) score as a commercial model (64.61), Qwen3-VL-235B achieves the best score as an open-source baseline (57.27), and our financial credit-specific model, Qfin-VL-Instruct, achieves the top overall score (64.92). Robustness evaluations show that even top-performing models suffer noticeable performance drops under acquisition artifacts.
https://arxiv.org/abs/2601.00150
Academic Papers
svg
d97006ae29f0e3303ff284b20acde3170170e492c35fe153b386424a859c9dd2
2026-01-07T00:00:00-05:00
When Agents See Humans as the Outgroup: Belief-Dependent Bias in LLM-Powered Agents
arXiv:2601.00240v2 Announce Type: replace Abstract: This paper reveals that LLM-powered agents exhibit not only demographic bias (e.g., gender, religion) but also intergroup bias under minimal "us" versus "them" cues. When such group boundaries align with the agent-human divide, a new bias risk emerges: agents may treat other AI agents as the ingroup and humans as the outgroup. To examine this risk, we conduct a controlled multi-agent social simulation and find that agents display consistent intergroup bias in an all-agent setting. More critically, this bias persists even in human-facing interactions when agents are uncertain about whether the counterpart is truly human, revealing a belief-dependent fragility in bias suppression toward humans. Motivated by this observation, we identify a new attack surface rooted in identity beliefs and formalize a Belief Poisoning Attack (BPA) that can manipulate agent identity beliefs and induce outgroup bias toward humans. Extensive experiments demonstrate both the prevalence of agent intergroup bias and the severity of BPA across settings, while also showing that our proposed defenses can mitigate the risk. These findings are expected to inform safer agent design and motivate more robust safeguards for human-facing agents.
https://arxiv.org/abs/2601.00240
Academic Papers
svg
6c5aadde3c4cf814e7e90dc8ac84495e47f033d9cfc8e7826bff0d075a17c1a6
2026-01-07T00:00:00-05:00
SlingBAG Pro: Accelerating point cloud-based iterative reconstruction for 3D photoacoustic imaging with arbitrary array geometries
arXiv:2601.00551v2 Announce Type: replace Abstract: High-quality three-dimensional (3D) photoacoustic imaging (PAI) is gaining increasing attention in clinical applications. To address the challenges of limited space and high costs, irregular geometric transducer arrays that conform to specific imaging regions are promising for achieving high-quality 3D PAI with fewer transducers. However, traditional iterative reconstruction algorithms struggle with irregular array configurations, suffering from high computational complexity, substantial memory requirements, and lengthy reconstruction times. In this work, we introduce SlingBAG Pro, an advanced reconstruction algorithm based on the point cloud iteration concept of the Sliding ball adaptive growth (SlingBAG) method, while extending its compatibility to arbitrary array geometries. SlingBAG Pro maintains high reconstruction quality, reduces the number of required transducers, and employs a hierarchical optimization strategy that combines zero-gradient filtering with progressively increased temporal sampling rates during iteration. This strategy rapidly removes redundant spatial point clouds, accelerates convergence, and significantly shortens overall reconstruction time. Compared to the original SlingBAG algorithm, SlingBAG Pro achieves up to a 2.2-fold speed improvement in point cloud-based 3D PA reconstruction under irregular array geometries. The proposed method is validated through both simulation and in vivo mouse experiments, and the source code is publicly available at https://github.com/JaegerCQ/SlingBAG_Pro.
https://arxiv.org/abs/2601.00551
Academic Papers
svg
116cc074d7c02d9911e0a5f42148fa339b4a5721a798994bcd412918c17beeee
2026-01-07T00:00:00-05:00
Interpretability-Guided Bi-objective Optimization: Aligning Accuracy and Explainability
arXiv:2601.00655v2 Announce Type: replace Abstract: This paper introduces Interpretability-Guided Bi-objective Optimization (IGBO), a framework that trains interpretable models by incorporating structured domain knowledge via a bi-objective formulation. IGBO encodes feature importance hierarchies as a Directed Acyclic Graph (DAG) via Central Limit Theorem-based construction and uses Temporal Integrated Gradients (TIG) to measure feature importance. To address the Out-of-Distribution (OOD) problem in TIG computation, we propose an Optimal Path Oracle that learns data-manifold-aware integration paths. Theoretical analysis establishes convergence properties via a geometric projection mapping $\mathcal{P}$ and proves robustness to mini-batch noise. Central Limit Theorem-based construction of the interpretability DAG ensures statistical validity of edge orientation decisions. Empirical results on time-series data demonstrate IGBO's effectiveness in enforcing DAG constraints with minimal accuracy loss, outperforming standard regularization baselines.
https://arxiv.org/abs/2601.00655
Academic Papers
svg
91b7e7f870d98e490986f9b0d465847115b441588eacf72bdae650bf72128d1f
2026-01-07T00:00:00-05:00
CogCanvas: Verbatim-Grounded Artifact Extraction for Long LLM Conversations
arXiv:2601.00821v2 Announce Type: replace Abstract: Conversation summarization loses nuanced details: when asked about coding preferences after 40 turns, summarization recalls "use type hints" but drops the critical constraint "everywhere" (19.0% exact match vs. 93.0% for our approach). We present CogCanvas, a training-free framework inspired by how teams use whiteboards to anchor shared memory. Rather than compressing conversation history, CogCanvas extracts verbatim-grounded artifacts (decisions, facts, reminders) and retrieves them via temporal-aware graph. On the LoCoMo benchmark (all 10 conversations from the ACL 2024 release), CogCanvas achieves the highest overall accuracy among training-free methods (32.4%), outperforming RAG (24.6%) by +7.8pp, with decisive advantages on complex reasoning tasks: +20.6pp on temporal reasoning (32.7% vs. 12.1% RAG) and +1.1pp on multi-hop questions (41.7% vs. 40.6% RAG). CogCanvas also leads on single-hop retrieval (26.6% vs. 24.6% RAG). Ablation studies reveal that BGE reranking contributes +7.7pp, making it the largest contributor to CogCanvas's performance. While heavily-optimized approaches achieve higher absolute scores through dedicated training (EverMemOS: ~92%), our training-free approach provides practitioners with an immediately-deployable alternative that significantly outperforms standard baselines. Code and data: https://github.com/tao-hpu/cog-canvas
https://arxiv.org/abs/2601.00821
Academic Papers
svg
072caf69d025c1797c9d95eefd01a8a37bb461d5200b43814ffd1a6d6b3d192e
2026-01-07T00:00:00-05:00
Geometric and Dynamic Scaling in Deep Transformers
arXiv:2601.01014v2 Announce Type: replace Abstract: Despite their empirical success, pushing Transformer architectures to extreme depth often leads to a paradoxical failure: representations become increasingly redundant, lose rank, and ultimately collapse. Existing explanations largely attribute this phenomenon to optimization instability or vanishing gradients, yet such accounts fail to explain why collapse persists even under modern normalization and initialization schemes. In this paper, we argue that the collapse of deep Transformers is fundamentally a geometric problem. Standard residual updates implicitly assume that feature accumulation is always beneficial, but offer no mechanism to constrain update directions or to erase outdated information. As depth increases, this leads to systematic drift off the semantic manifold and monotonic feature accumulation, causing representational degeneracy. We propose a unified geometric framework that addresses these failures through two orthogonal principles. First, manifold-constrained hyper-connections restrict residual updates to valid local tangent directions, preventing uncontrolled manifold drift. Second, deep delta learning introduces data-dependent, non-monotonic updates that enable reflection and erasure of redundant features rather than their unconditional accumulation. Together, these mechanisms decouple the direction and sign of feature updates, yielding a stable geometric evolution across depth. We term the resulting architecture the Manifold-Geometric Transformer (MGT). Our analysis predicts that enforcing geometric validity while allowing dynamic erasure is essential for avoiding rank collapse in ultra-deep networks. We outline an evaluation protocol for Transformers exceeding 100 layers to test the hypothesis that geometry, rather than depth itself, is the key limiting factor in deep representation learning.
https://arxiv.org/abs/2601.01014
Academic Papers
svg
6d39c633efc4aaed4951b197f8958ff357c9f37455ea14d51ada228b22b88afa
2026-01-07T00:00:00-05:00
Coarse-Grained Kullback--Leibler Control of Diffusion-Based Generative AI
arXiv:2601.01045v2 Announce Type: replace Abstract: Diffusion models and score-based generative models provide a powerful framework for synthesizing high-quality images from noise. However, there is still no satisfactory theory that describes how coarse-grained quantities, such as blockwise intensity or class proportions after partitioning an image into spatial blocks, are preserved and evolve along the reverse diffusion dynamics. In previous work, the author introduced an information-theoretic Lyapunov function V for non-ergodic Markov processes on a state space partitioned into blocks, defined as the minimal Kullback-Leibler divergence to the set of stationary distributions reachable from a given initial condition, and showed that a leak-tolerant potential V-delta with a prescribed tolerance for block masses admits a closed-form expression as a scaling-and-clipping operation on block masses. In this paper, I transplant this framework to the reverse diffusion process in generative models and propose a reverse diffusion scheme that is projected by the potential V-delta (referred to as the V-delta projected reverse diffusion). I extend the monotonicity of V to time-inhomogeneous block-preserving Markov kernels and show that, under small leakage and the V-delta projection, V-delta acts as an approximate Lyapunov function. Furthermore, using a toy model consisting of block-constant images and a simplified reverse kernel, I numerically demonstrate that the proposed method keeps the block-mass error and the leak-tolerant potential within the prescribed tolerance, while achieving pixel-wise accuracy and visual quality comparable to the non-projected dynamics. This study reinterprets generative sampling as a decrease of an information potential from noise to data, and provides a design principle for reverse diffusion processes with explicit control of coarse-grained quantities.
https://arxiv.org/abs/2601.01045
Academic Papers
svg
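The "scaling-and-clipping operation on block masses" mentioned in the abstract can be sketched as follows. This is a hypothetical reading of that operation (clamp each block's mass to within the tolerance delta of its target, rescale states inside the block proportionally, renormalize), not the paper's actual formula; the block layout, targets, and delta are invented for illustration.

```python
import numpy as np

def clip_block_masses(p, blocks, target, delta):
    """Sketch of a leak-tolerant scaling-and-clipping step: clamp each
    block's total mass to [t - delta, t + delta], rescale the entries
    inside the block proportionally, then renormalize."""
    p = p.astype(float).copy()
    for b, t in zip(blocks, target):
        mass = p[b].sum()
        clipped = np.clip(mass, t - delta, t + delta)
        if mass > 0:
            p[b] *= clipped / mass
    return p / p.sum()

p = np.array([0.5, 0.2, 0.2, 0.1])   # state masses, two blocks of two
blocks = [slice(0, 2), slice(2, 4)]
q = clip_block_masses(p, blocks, target=[0.5, 0.5], delta=0.05)
# block masses of q are 0.55 and 0.45: within delta of the targets
```

Under this reading, the projection keeps coarse-grained block masses inside the prescribed tolerance while leaving the within-block (pixel-level) distribution untouched up to a scalar factor.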
0492583918b71a011ac5ce238142b45fc06d972047b6aba5cc5f42b7e958a202
2026-01-07T00:00:00-05:00
A UCB Bandit Algorithm for General ML-Based Estimators
arXiv:2601.01061v2 Announce Type: replace Abstract: We present ML-UCB, a generalized upper confidence bound algorithm that integrates arbitrary machine learning models into multi-armed bandit frameworks. A fundamental challenge in deploying sophisticated ML models for sequential decision-making is the lack of tractable concentration inequalities required for principled exploration. We overcome this limitation by directly modeling the learning curve behavior of the underlying estimator. Specifically, assuming the Mean Squared Error decreases as a power law in the number of training samples, we derive a generalized concentration inequality and prove that ML-UCB achieves sublinear regret. This framework enables the principled integration of any ML model whose learning curve can be empirically characterized, eliminating the need for model-specific theoretical analysis. We validate our approach through experiments on a collaborative filtering recommendation system using online matrix factorization with synthetic data designed to simulate a simplified two-tower model, demonstrating substantial improvements over LinUCB.
https://arxiv.org/abs/2601.01061
Academic Papers
svg
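A minimal sketch of the idea, with invented constants: if the estimator's learning curve is MSE(n) ≈ C·n^(-α), a UCB index can use the square root of that fitted MSE as its confidence width. This illustrates the assumption only; it is not the paper's algorithm or experimental setup.

```python
import math
import random

def ml_ucb(true_means, horizon, mse_c=1.0, alpha=1.0, seed=0):
    """UCB where the exploration bonus is the fitted learning-curve
    error: width(n) = sqrt(C * n^(-alpha)) for an arm pulled n times."""
    rng = random.Random(seed)
    k = len(true_means)
    counts, sums = [0] * k, [0.0] * k

    def index(a):
        if counts[a] == 0:
            return float("inf")  # pull every arm at least once
        width = math.sqrt(mse_c * counts[a] ** (-alpha))
        return sums[a] / counts[a] + width

    for _ in range(horizon):
        a = max(range(k), key=index)
        counts[a] += 1
        sums[a] += true_means[a] + rng.gauss(0.0, 0.1)  # noisy reward
    return counts

counts = ml_ucb(true_means=[0.2, 0.8], horizon=500)
```

Once the better arm's width shrinks below the reward gap, its index dominates and it absorbs almost all remaining pulls.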
893e1d8ccf0bbe2c440a3d8ce76736215ab5a330714504510ed277b866051a6a
2026-01-07T00:00:00-05:00
Making MoE-based LLM Inference Resilient with Tarragon
arXiv:2601.01310v2 Announce Type: replace Abstract: Mixture-of-Experts (MoE) models are increasingly used to serve LLMs at scale, but failures become common as deployment scale grows. Existing systems exhibit poor failure resilience: even a single worker failure triggers a coarse-grained, service-wide restart, discarding accumulated progress and halting the entire inference pipeline during recovery--an approach clearly ill-suited for latency-sensitive LLM services. We present Tarragon, a resilient MoE inference framework that confines the impact of failures to individual workers while allowing the rest of the pipeline to continue making forward progress. Tarragon exploits the natural separation between attention and expert computation in MoE-based transformers, treating attention workers (AWs) and expert workers (EWs) as distinct failure domains. Tarragon introduces a reconfigurable datapath to mask failures by rerouting requests to healthy workers. On top of this datapath, Tarragon implements a self-healing mechanism that relaxes the tightly synchronized execution of existing MoE frameworks. For stateful AWs, Tarragon performs asynchronous, incremental KV cache checkpointing with per-request restoration, and for stateless EWs, it leverages residual GPU memory to deploy shadow experts. Together, these keep recovery cost and recomputation overhead extremely low. Our evaluation shows that, compared to state-of-the-art MegaScale-Infer, Tarragon reduces failure-induced stalls by 160-213x (from ~64 s down to 0.3-0.4 s) while preserving performance when no failures occur.
https://arxiv.org/abs/2601.01310
Academic Papers
svg
9ab6bb0409f937f988a45174b4dd259c554e7f48193744c6d35ee128fe43555c
2026-01-07T00:00:00-05:00
Accelerating Storage-Based Training for Graph Neural Networks
arXiv:2601.01473v2 Announce Type: replace Abstract: Graph neural networks (GNNs) have achieved breakthroughs in various real-world downstream tasks due to their powerful expressiveness. As the scale of real-world graphs has been continuously growing, a storage-based approach to GNN training has been studied, which leverages external storage (e.g., NVMe SSDs) to handle such web-scale graphs on a single machine. Although such storage-based GNN training methods have shown promising potential in large-scale GNN training, we observed that they suffer from a severe bottleneck in data preparation since they overlook a critical challenge: how to handle a large number of small storage I/Os. To address the challenge, in this paper, we propose a novel storage-based GNN training framework, named AGNES, that employs block-wise storage I/O processing to fully utilize the I/O bandwidth of high-performance storage devices. Moreover, to further enhance the efficiency of each storage I/O, AGNES employs a simple yet effective strategy, hyperbatch-based processing, based on the characteristics of real-world graphs. Comprehensive experiments on five real-world graphs reveal that AGNES consistently outperforms four state-of-the-art methods, running up to 4.1x faster than the best competitor. Our code is available at https://github.com/Bigdasgit/agnes-kdd26.
https://arxiv.org/abs/2601.01473
Academic Papers
svg
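The block-wise I/O idea, many small per-node reads coalesced into one read per storage block, can be sketched in a few lines. This is a hypothetical illustration of the general strategy, not AGNES's implementation; the block size and node indices are invented.

```python
def plan_block_reads(node_ids, block_size):
    """Coalesce per-node feature fetches into one read per storage
    block instead of one small random I/O per node."""
    blocks = {}
    for idx in sorted(set(node_ids)):
        blocks.setdefault(idx // block_size, []).append(idx)
    # (block_id, in-block offsets to keep after the bulk read)
    return [(b, [i % block_size for i in ids])
            for b, ids in sorted(blocks.items())]

# Five node fetches collapse into two block reads.
plan = plan_block_reads([7, 3, 1025, 9, 1030], block_size=1024)
# -> [(0, [3, 7, 9]), (1, [1, 6])]
```

Issuing one large sequential read per block, then slicing out the needed offsets, is what lets such systems approach the device's bandwidth rather than its small-random-I/O throughput.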
53a2930b777e751d3d5751a220d51e0308efda0a93eefcf7ff6fc9734f9d0036
2026-01-07T00:00:00-05:00
MOSS Transcribe Diarize: Accurate Transcription with Speaker Diarization
arXiv:2601.01554v2 Announce Type: replace Abstract: Speaker-Attributed, Time-Stamped Transcription (SATS) aims to transcribe what is said and to precisely determine the timing of each speaker, which is particularly valuable for meeting transcription. Existing SATS systems rarely adopt an end-to-end formulation and are further constrained by limited context windows, weak long-range speaker memory, and the inability to output timestamps. To address these limitations, we present MOSS Transcribe Diarize, a unified multimodal large language model that jointly performs Speaker-Attributed, Time-Stamped Transcription in an end-to-end paradigm. Trained on extensive real wild data and equipped with a 128k context window for up to 90-minute inputs, MOSS Transcribe Diarize scales well and generalizes robustly. Across comprehensive evaluations, it outperforms state-of-the-art commercial systems on multiple public and in-house benchmarks.
https://arxiv.org/abs/2601.01554
Academic Papers
svg
64ca65201fea9e77126168ea47523de441e2deba13a640a6d027841f0ec37466
2026-01-07T00:00:00-05:00
Steerability of Instrumental-Convergence Tendencies in LLMs
arXiv:2601.01584v2 Announce Type: replace Abstract: We examine two properties of AI systems: capability (what a system can do) and steerability (how reliably one can shift behavior toward intended outcomes). A central question is whether capability growth reduces steerability and risks control collapse. We also distinguish between authorized steerability (builders reliably reaching intended behaviors) and unauthorized steerability (attackers eliciting disallowed behaviors). This distinction highlights a fundamental safety--security dilemma of AI models: safety requires high steerability to enforce control (e.g., stop/refuse), while security requires low steerability for malicious actors to elicit harmful behaviors. This tension presents a significant challenge for open-weight models, which currently exhibit high steerability via common techniques like fine-tuning or adversarial attacks. Using Qwen3 and InstrumentalEval, we find that a short anti-instrumental prompt suffix sharply reduces the measured convergence rate (e.g., shutdown avoidance, self-replication). For Qwen3-30B Instruct, the convergence rate drops from 81.69% under a pro-instrumental suffix to 2.82% under an anti-instrumental suffix. Under anti-instrumental prompting, larger aligned models show lower convergence rates than smaller ones (Instruct: 2.82% vs. 4.23%; Thinking: 4.23% vs. 9.86%). Code is available at github.com/j-hoscilowicz/instrumental_steering.
https://arxiv.org/abs/2601.01584
Academic Papers
svg
08319c4ab74fe88da0cd9440636d7b667f1fc87bed415f06d5ddfd2dd75f803f
2026-01-07T00:00:00-05:00
FFP-300K: Scaling First-Frame Propagation for Generalizable Video Editing
arXiv:2601.01720v2 Announce Type: replace Abstract: First-Frame Propagation (FFP) offers a promising paradigm for controllable video editing, but existing methods are hampered by a reliance on cumbersome run-time guidance. We identify the root cause of this limitation as the inadequacy of current training datasets, which are often too short, low-resolution, and lack the task diversity required to teach robust temporal priors. To address this foundational data gap, we first introduce FFP-300K, a new large-scale dataset comprising 300K high-fidelity video pairs at 720p resolution and 81 frames in length, constructed via a principled two-track pipeline for diverse local and global edits. Building on this dataset, we propose a novel framework designed for true guidance-free FFP that resolves the critical tension between maintaining first-frame appearance and preserving source video motion. Architecturally, we introduce Adaptive Spatio-Temporal RoPE (AST-RoPE), which dynamically remaps positional encodings to disentangle appearance and motion references. At the objective level, we employ a self-distillation strategy where an identity propagation task acts as a powerful regularizer, ensuring long-term temporal stability and preventing semantic drift. Comprehensive experiments on the EditVerseBench benchmark demonstrate that our method significantly outperforms existing academic and commercial models, achieving improvements of about 0.2 PickScore and 0.3 VLM score over these competitors.
https://arxiv.org/abs/2601.01720
Academic Papers
svg
1694e136e0c1a1006cecc7a96d3482c958e6c25b8b528080c70258938dca9dd9
2026-01-07T00:00:00-05:00
PsychEval: A Multi-Session and Multi-Therapy Benchmark for High-Realism AI Psychological Counselor
arXiv:2601.01802v2 Announce Type: replace Abstract: To develop a reliable AI for psychological assessment, we introduce \texttt{PsychEval}, a multi-session, multi-therapy, and highly realistic benchmark designed to address three key challenges: \textbf{1) Can we train a highly realistic AI counselor?} Realistic counseling is a longitudinal task requiring sustained memory and dynamic goal tracking. We propose a multi-session benchmark (spanning 6-10 sessions across three distinct stages) that demands critical capabilities such as memory continuity, adaptive reasoning, and longitudinal planning. The dataset is annotated with extensive professional skills, comprising over 677 meta-skills and 4577 atomic skills. \textbf{2) How to train a multi-therapy AI counselor?} While existing models often focus on a single therapy, complex cases frequently require flexible strategies among various therapies. We construct a diverse dataset covering five therapeutic modalities (Psychodynamic, Behaviorism, CBT, Humanistic Existentialist, and Postmodernist) alongside an integrative therapy with a unified three-stage clinical framework across six core psychological topics. \textbf{3) How to systematically evaluate an AI counselor?} We establish a holistic evaluation framework with 18 therapy-specific and therapy-shared metrics across Client-Level and Counselor-Level dimensions. To support this, we also construct over 2,000 diverse client profiles. Extensive experimental analysis fully validates the superior quality and clinical fidelity of our dataset. Crucially, \texttt{PsychEval} transcends static benchmarking to serve as a high-fidelity reinforcement learning environment that enables the self-evolutionary training of clinically responsible and adaptive AI counselors.
https://arxiv.org/abs/2601.01802
Academic Papers
svg
79fa009518a73ac2c61eb555cce0178c3a231cc874296ef97a5d7830b66fcb6f
2026-01-07T00:00:00-05:00
RSwinV2-MD: An Enhanced Residual SwinV2 Transformer for Monkeypox Detection from Skin Images
arXiv:2601.01835v2 Announce Type: replace Abstract: In this paper, a deep learning approach for Mpox diagnosis named Customized Residual SwinTransformerV2 (RSwinV2) is proposed, aiming to enhance lesion classification through the RSwinV2 tool-assisted vision approach. In RSwinV2, the hierarchical transformer structure is customized with respect to the input dimensionality, the embedding structure, and the targeted output. The input image is split into non-overlapping patches that are processed with attention over shifted windows; this links the windows efficiently, avoiding the locality issues of attention restricted to non-overlapping regions while remaining computationally efficient. RSwinV2 builds on SwinTransformer and includes patch and position embeddings to exploit the transformer's global-linking capability via multi-head attention over these embeddings. Furthermore, RSwinV2 incorporates an Inverse Residual Block (IRB), which uses convolutional skip connections to mitigate vanishing gradients during training. The inclusion of the IRB lets the method capture global as well as local patterns, improving lesion classification by reducing the intra-class variability of Mpox and increasing the differences among Mpox, chickenpox, measles, and cowpox. In testing, RSwinV2 achieved an accuracy of 96.51 and an F1-score of 96.13 on the public Kaggle dataset, outperforming standard CNN models and SwinTransformers; this supports its validity as a computer-assisted tool for interpreting Mpox lesion observations.
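The shifted-window mechanism in the abstract is easy to picture with a toy partition. The sketch below is our own illustration (grid and window sizes are not from the paper): it splits an 8x8 grid into non-overlapping 4x4 windows, then applies a cyclic shift of half a window, so that pixels from different original windows land in the same window and can attend to each other.

```python
# Toy illustration of non-overlapping window partitioning and the cyclic
# shift used by Swin-style attention (names and sizes are illustrative).

def partition_windows(h, w, win):
    """Group pixel coordinates of an h x w grid into non-overlapping win x win windows."""
    windows = {}
    for i in range(h):
        for j in range(w):
            windows.setdefault((i // win, j // win), []).append((i, j))
    return windows

def cyclic_shift(h, w, shift):
    """Map each coordinate to its cyclically shifted position (a torch.roll analogue)."""
    return {(i, j): ((i + shift) % h, (j + shift) % w) for i in range(h) for j in range(w)}

windows = partition_windows(8, 8, 4)   # 4 windows of 16 pixels each
shifted = cyclic_shift(8, 8, 2)        # shift by half the window size
# After shifting, pixels from different original windows share a window,
# which is how shifted-window attention links neighbouring regions.
```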
https://arxiv.org/abs/2601.01835
Academic Papers
svg
a315eb822b52aaeddac37bec30679b28572a0a4ad14eb6f8ee0cba88a38d88bf
2026-01-07T00:00:00-05:00
Safety at One Shot: Patching Fine-Tuned LLMs with A Single Instance
arXiv:2601.01887v2 Announce Type: replace Abstract: Fine-tuning safety-aligned large language models (LLMs) can substantially compromise their safety. Previous approaches require many safety samples or calibration sets, which not only incur significant computational overhead during realignment but also lead to noticeable degradation in model utility. Contrary to this belief, we show that safety alignment can be fully recovered with only a single safety example, without sacrificing utility and at minimal cost. Remarkably, this recovery is effective regardless of the number of harmful examples used in fine-tuning or the size of the underlying model, and convergence is achieved within just a few epochs. Furthermore, we uncover the low-rank structure of the safety gradient, which explains why such efficient correction is possible. We validate our findings across five safety-aligned LLMs and multiple datasets, demonstrating the generality of our approach.
https://arxiv.org/abs/2601.01887
Academic Papers
svg
15e27f901848141e38567c55a74affc187c1641f009caf8ba98df2d638bec924
2026-01-07T00:00:00-05:00
Tackling the Inherent Difficulty of Noise Filtering in RAG
arXiv:2601.01896v2 Announce Type: replace Abstract: Retrieval-Augmented Generation (RAG) has become a widely adopted approach to enhance Large Language Models (LLMs) by incorporating external knowledge and reducing hallucinations. However, noisy or irrelevant documents are often introduced during RAG, potentially degrading performance and even causing hallucinated outputs. While various methods have been proposed to filter out such noise, we argue that identifying irrelevant information in retrieved content is inherently difficult and that a limited number of transformer layers can hardly solve it. Consequently, retrievers fail to filter out irrelevant documents entirely. Therefore, LLMs must be robust against such noise, but we demonstrate that standard fine-tuning approaches are often ineffective in enabling the model to selectively utilize relevant information while ignoring irrelevant content, owing to the structural constraints of attention patterns. To address this, we propose a novel fine-tuning method designed to enhance the model's ability to distinguish between relevant and irrelevant information within retrieved documents. Extensive experiments across multiple benchmarks show that our approach significantly improves the robustness and performance of LLMs.
https://arxiv.org/abs/2601.01896
Academic Papers
svg
48aafd6f641901f7ce138cd27a13658404362d5824410dd408d54d308324df53
2026-01-07T00:00:00-05:00
Hidden State Poisoning Attacks against Mamba-based Language Models
arXiv:2601.01972v2 Announce Type: replace Abstract: State space models (SSMs) like Mamba offer efficient alternatives to Transformer-based language models, with linear time complexity. Yet, their adversarial robustness remains critically unexplored. This paper studies the phenomenon whereby specific short input phrases induce a partial amnesia effect in such models, by irreversibly overwriting information in their hidden states, referred to as a Hidden State Poisoning Attack (HiSPA). Our benchmark RoBench25 allows evaluating a model's information retrieval capabilities when subject to HiSPAs, and confirms the vulnerability of SSMs against such attacks. Even a recent 52B hybrid SSM-Transformer model from the Jamba family collapses on RoBench25 under optimized HiSPA triggers, whereas pure Transformers do not. We also observe that HiSPA triggers significantly weaken the Jamba model on the popular Open-Prompt-Injections benchmark, unlike pure Transformers. Finally, our interpretability study reveals patterns in Mamba's hidden layers during HiSPAs that could be used to build a HiSPA mitigation system. The full code and data to reproduce the experiments can be found at https://anonymous.4open.science/r/hispa_anonymous-5DB0.
https://arxiv.org/abs/2601.01972
Academic Papers
svg
56504dbf4354cd36df87ecc84270166cd22d65f4acdc5b247e92b9e714765662
2026-01-07T00:00:00-05:00
MDAgent2: Large Language Model for Code Generation and Knowledge Q&A in Molecular Dynamics
arXiv:2601.02075v2 Announce Type: replace Abstract: Molecular dynamics (MD) simulations are essential for understanding atomic-scale behaviors in materials science, yet writing LAMMPS scripts remains a highly specialized and time-consuming task. Although LLMs show promise in code generation and domain-specific question answering, their performance in MD scenarios is limited by scarce domain data, the high deployment cost of state-of-the-art LLMs, and low code executability. Building upon our prior MDAgent, we present MDAgent2, the first end-to-end framework capable of performing both knowledge Q&A and code generation within the MD domain. We construct a domain-specific data-construction pipeline that yields three high-quality datasets spanning MD knowledge, question answering, and code generation. Based on these datasets, we adopt a three-stage post-training strategy--continued pre-training (CPT), supervised fine-tuning (SFT), and reinforcement learning (RL)--to train two domain-adapted models, MD-Instruct and MD-Code. Furthermore, we introduce MD-GRPO, a closed-loop RL method that leverages simulation outcomes as reward signals and recycles low-reward trajectories for continual refinement. We further build MDAgent2-RUNTIME, a deployable multi-agent system that integrates code generation, execution, evaluation, and self-correction. Together with MD-EvalBench, proposed in this work as the first benchmark for LAMMPS code generation and question answering, our models and system achieve performance surpassing several strong baselines. This work systematically demonstrates the adaptability and generalization capability of large language models in industrial simulation tasks, laying a methodological foundation for automatic code generation in AI for Science and industrial-scale simulations. URL: https://github.com/FredericVAN/PKU_MDAgent2
https://arxiv.org/abs/2601.02075
Academic Papers
svg
7b07cc8954efaeca863e610258f472aba0775c0f0223568b8aaa2b50341c0682
2026-01-07T00:00:00-05:00
PhysSFI-Net: Physics-informed Geometric Learning of Skeletal and Facial Interactions for Orthognathic Surgical Outcome Prediction
arXiv:2601.02088v2 Announce Type: replace Abstract: Orthognathic surgery repositions jaw bones to restore occlusion and enhance facial aesthetics. Accurate simulation of postoperative facial morphology is essential for preoperative planning. However, traditional biomechanical models are computationally expensive, while geometric deep learning approaches often lack interpretability. In this study, we develop and validate a physics-informed geometric deep learning framework named PhysSFI-Net for precise prediction of soft tissue deformation following orthognathic surgery. PhysSFI-Net consists of three components: a hierarchical graph module with craniofacial and surgical plan encoders combined with attention mechanisms to extract skeletal-facial interaction features; a Long Short-Term Memory (LSTM)-based sequential predictor for incremental soft tissue deformation; and a biomechanics-inspired module for high-resolution facial surface reconstruction. Model performance was assessed using point cloud shape error (Hausdorff distance), surface deviation error, and landmark localization error (Euclidean distances of craniomaxillofacial landmarks) between predicted facial shapes and corresponding ground truths. A total of 135 patients who underwent combined orthodontic and orthognathic treatment were included for model training and validation. Quantitative analysis demonstrated that PhysSFI-Net achieved a point cloud shape error of 1.070 +/- 0.088 mm, a surface deviation error of 1.296 +/- 0.349 mm, and a landmark localization error of 2.445 +/- 1.326 mm. Comparative experiments indicated that PhysSFI-Net outperformed the state-of-the-art method ACMT-Net in prediction accuracy. In conclusion, PhysSFI-Net enables interpretable, high-resolution prediction of postoperative facial morphology with superior accuracy, showing strong potential for clinical application in orthognathic surgical planning and simulation.
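The Hausdorff distance used above as a point-cloud shape metric can be computed directly for small point sets. This is a generic sketch of the metric, not the paper's evaluation code:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite 3-D point sets."""
    def directed(p, q):
        # largest distance from any point of p to its nearest neighbour in q
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

pred  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
truth = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]
# worst-case mismatch is the 0.5 offset of the second point
print(hausdorff(pred, truth))  # 0.5
```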
https://arxiv.org/abs/2601.02088
Academic Papers
svg
d3075e2fcf2e628342af8123d8eb23dc871fb256982e2d74b7b53ca6ba0a8c77
2026-01-07T00:00:00-05:00
MCD-Net: A Lightweight Deep Learning Baseline for Optical-Only Moraine Segmentation
arXiv:2601.02091v2 Announce Type: replace Abstract: Glacial segmentation is essential for reconstructing past glacier dynamics and evaluating climate-driven landscape change. However, weak optical contrast and the limited availability of high-resolution DEMs hinder automated mapping. This study introduces the first large-scale optical-only moraine segmentation dataset, comprising 3,340 manually annotated high-resolution images from Google Earth covering glaciated regions of Sichuan and Yunnan, China. We develop MCD-Net, a lightweight baseline that integrates a MobileNetV2 encoder, a Convolutional Block Attention Module (CBAM), and a DeepLabV3+ decoder. Benchmarking against deeper backbones (ResNet152, Xception) shows that MCD-Net achieves 62.3% mean Intersection over Union (mIoU) and 72.8% Dice coefficient while reducing computational cost by more than 60%. Although ridge delineation remains constrained by sub-pixel width and spectral ambiguity, the results demonstrate that optical imagery alone can provide reliable moraine-body segmentation. The dataset and code are publicly available at https://github.com/Lyra-alpha/MCD-Net, establishing a reproducible benchmark for moraine-specific segmentation and offering a deployable baseline for high-altitude glacial monitoring.
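The reported mIoU and Dice coefficient reduce, per class, to simple overlap counts. A minimal sketch for binary masks (generic metric code, not the MCD-Net evaluation pipeline):

```python
def iou_dice(pred, truth):
    """IoU and Dice for binary masks given as flat 0/1 sequences."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    psum, tsum = sum(pred), sum(truth)
    iou = inter / union if union else 1.0
    dice = 2 * inter / (psum + tsum) if psum + tsum else 1.0
    return iou, dice

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
iou, dice = iou_dice(pred, truth)  # IoU = 2/4, Dice = 4/6
```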
https://arxiv.org/abs/2601.02091
Academic Papers
svg
1d1a39c4eb6fcbadebc51dedc7759a0c52dda8183fd5fde7c35b80bb47858230
2026-01-07T00:00:00-05:00
Enabling Deep Reinforcement Learning Research for Energy Saving in Open RAN
arXiv:2601.02240v2 Announce Type: replace Abstract: The growing performance demands and higher deployment densities of next-generation wireless systems emphasize the importance of adopting strategies to manage the energy efficiency of mobile networks. In this demo, we showcase a framework that enables research on Deep Reinforcement Learning (DRL) techniques for improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems. Using the open-source simulator ns-O-RAN and the reinforcement learning environment Gymnasium, the framework enables training and evaluating DRL agents that dynamically control the activation and deactivation of cells in a 5G network. We show how to collect data for training and evaluate the impact of DRL on energy efficiency in a realistic 5G network scenario, including users' mobility and handovers, a full protocol stack, and 3rd Generation Partnership Project (3GPP)-compliant channel models. The tool will be open-sourced, together with a tutorial for energy efficiency testing in ns-O-RAN.
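The train-and-evaluate loop can be pictured with a toy Gymnasium-style cell on/off environment. Everything below (class name, power figures, reward shape) is an illustrative stand-in of our own, not the ns-O-RAN framework:

```python
import random

class CellEnergyEnv:
    """Toy Gymnasium-style environment: switch a small cell on or off to trade
    energy use against served traffic. All numbers are illustrative only."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.load = self.rng.uniform(0.0, 1.0)   # offered traffic in [0, 1]
        self.cell_on = True
        return (self.cell_on, self.load)

    def step(self, action):                       # action: 1 = cell on, 0 = off
        self.cell_on = bool(action)
        energy = 1.0 if self.cell_on else 0.1     # a sleeping cell still draws a little power
        served = self.load if self.cell_on else 0.0
        reward = served - 0.5 * energy            # served traffic minus energy cost
        self.load = self.rng.uniform(0.0, 1.0)
        return (self.cell_on, self.load), reward, False, {}

env = CellEnergyEnv()
obs, total = env.reset(), 0.0
for _ in range(100):
    action = 1 if obs[1] > 0.5 else 0             # naive load-threshold policy
    obs, reward, done, info = env.step(action)
    total += reward
```

A DRL agent would replace the threshold policy with a learned one; the `reset`/`step` shape mirrors the Gymnasium API the framework builds on.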
https://arxiv.org/abs/2601.02240
Academic Papers
svg
12e4e67c4586554f6deb89ea1ea508293b1dcae7a3056c040bb3f4e34e6e89c0
2026-01-07T00:00:00-05:00
Deciding Serializability in Network Systems
arXiv:2601.02251v2 Announce Type: replace Abstract: We present the SER modeling language for automatically verifying serializability of concurrent programs, i.e., whether every concurrent execution of the program is equivalent to some serial execution. SER programs are suitably restricted to make this problem decidable, while still allowing for an unbounded number of concurrent threads of execution, each potentially running for an unbounded number of steps. Building on prior theoretical results, we give the first automated end-to-end decision procedure that either proves serializability by producing a checkable certificate, or refutes it by producing a counterexample trace. We also present a network-system abstraction to which SER programs compile. Our decision procedure then reduces serializability in this setting to a Petri net reachability query. Furthermore, in order to scale, we curtail the search space via multiple optimizations, including Petri net slicing, semilinear-set compression, and Presburger-formula manipulation. We extensively evaluate our framework and show that, despite the theoretical hardness of the problem, it can successfully handle various models of real-world programs, including stateful firewalls, BGP routers, and more.
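The paper decides serializability via a reduction to Petri net reachability; the property itself can be illustrated by brute force on a toy two-transaction program (our own toy model, not the SER language):

```python
from itertools import permutations

# Each transaction is a list of atomic steps (var, update_fn); a schedule is an
# interleaving of (txn_id, step_id) pairs respecting per-transaction order.
T1 = [("x", lambda v: v + 1), ("y", lambda v: v + 1)]   # increment both
T2 = [("x", lambda v: v * 2), ("y", lambda v: v * 2)]   # double both
TXNS = [T1, T2]

def run(schedule, txns, init):
    state = dict(init)
    for tid, sid in schedule:
        var, f = txns[tid][sid]
        state[var] = f(state[var])
    return state

def serializable(schedule, txns, init):
    """Final-state serializability by brute force over all serial orders."""
    actual = run(schedule, txns, init)
    return any(
        run([(t, s) for t in order for s in range(len(txns[t]))], txns, init) == actual
        for order in permutations(range(len(txns)))
    )

init = {"x": 1, "y": 1}
ok  = [(0, 0), (0, 1), (1, 0), (1, 1)]   # T1 fully before T2: serial
bad = [(0, 0), (1, 0), (1, 1), (0, 1)]   # doubling slips between T1's steps
```

Here `bad` yields {x: 4, y: 3}, matching neither serial order ({4, 4} or {3, 3}); the decision procedure in the paper answers the same question without enumerating executions.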
https://arxiv.org/abs/2601.02251
Academic Papers
svg
a9fbb09999b6443323ffcf2d5a1a6de2566fa240ce0134cf56b8185bc32a794b
2026-01-07T00:00:00-05:00
pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs
arXiv:2601.02285v2 Announce Type: replace Abstract: PDFs are the second-most used document type on the internet (after HTML). Yet, existing QA datasets commonly start from text sources or only address specific domains. In this paper, we present pdfQA, comprising a multi-domain 2K human-annotated dataset (real-pdfQA) and a 2K synthetic dataset (syn-pdfQA), differentiating QA pairs along ten complexity dimensions (e.g., file type, source modality, source position, answer type). We apply and evaluate quality and difficulty filters on both datasets, obtaining valid and challenging QA pairs. We answer the questions with open-source LLMs, revealing existing challenges that correlate with our complexity dimensions. pdfQA presents a basis for end-to-end QA pipeline evaluation, testing diverse skill sets and local optimizations (e.g., in information retrieval or parsing).
https://arxiv.org/abs/2601.02285
Academic Papers
svg
9c8e5e7e54e5ef15f09184823690c945f03f8085cc3e5644188ee32ba10d0b8d
2026-01-07T00:00:00-05:00
At the Intersection of Deep Sequential Model Framework and State-space Model Framework: Study on Option Pricing
arXiv:2012.07784v2 Announce Type: replace-cross Abstract: Inference and forecast problems for nonlinear dynamical systems arise in a variety of contexts. Reservoir computing and deep sequential models, on the one hand, have demonstrated efficient, robust, and superior performance in modeling simple and chaotic dynamical systems. However, their innately deterministic nature partially undermines their robustness to noisy systems, and their inability to offer uncertainty measurement is a further shortcoming of the framework. On the other hand, the traditional state-space model framework is robust to noise and carries measured uncertainty, forming a just-right complement to the reservoir computing and deep sequential model framework. We propose the unscented reservoir smoother (URS), a model that unifies both deep sequential and state-space models to achieve both frameworks' superiorities. Evaluated in the option pricing setting on top of noisy datasets, URS achieves highly competitive forecasting accuracy, especially over longer horizons, together with uncertainty measurement. Further extensions and implications of URS are also discussed toward a full integration of both frameworks.
https://arxiv.org/abs/2012.07784
Academic Papers
svg
c68c9b540a757ff6b2335a409b70bd514cb58a7b839c2bffdba85ca85acad212
2026-01-07T00:00:00-05:00
Development of a high-resolution indoor radon map using a new machine learning-based probabilistic model and German radon survey data
arXiv:2310.11143v5 Announce Type: replace-cross Abstract: Accurate knowledge of indoor radon concentration is crucial for assessing radon-related health effects or identifying radon-prone areas. Indoor radon concentration at the national scale is usually estimated on the basis of extensive measurement campaigns. However, characteristics of the sampled households often differ from the characteristics of the target population owing to the large number of relevant factors that control the indoor radon concentration, such as the availability of geogenic radon or floor level. We propose a model-based approach that allows a more realistic estimation of indoor radon distribution with a higher spatial resolution than a purely data-based approach. A modeling approach was used by applying a quantile regression forest to estimate the probability distribution function of indoor radon for each floor level of each residential building in Germany. Based on the estimated probability distribution function, a probabilistic Monte Carlo sampling technique was applied, enabling the combination and population weighting of floor-level predictions. In this way, the uncertainty of the individual predictions is effectively propagated into the estimate of variability at the aggregated level. The results show an approximate lognormal distribution of indoor radon in dwellings in Germany with an arithmetic mean of 63 Bq/m3, a geometric mean of 41 Bq/m3, and a 95th percentile of 180 Bq/m3. The exceedance probabilities for 100 and 300 Bq/m3 are 12.5% (10.5 million people affected) and 2.2% (1.9 million people affected), respectively. The advantages of our approach are that it yields a) an accurate estimation of indoor radon concentration even if the survey is not fully representative with respect to floor level and radon concentration in soil, and b) an estimate of the indoor radon distribution with a much higher spatial resolution than basic descriptive statistics.
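The probabilistic Monte Carlo step can be sketched as inverse-CDF sampling from per-floor quantile estimates combined with population weights. The quantile values and weights below are hypothetical placeholders, not the survey's figures:

```python
import random

def inverse_cdf_sample(levels, values, u):
    """Piecewise-linear inverse CDF: map u in (0, 1) to a value by interpolating
    between estimated quantiles; clamp outside the estimated range."""
    for (p0, q0), (p1, q1) in zip(zip(levels, values), zip(levels[1:], values[1:])):
        if p0 <= u <= p1:
            return q0 + (q1 - q0) * (u - p0) / (p1 - p0)
    return values[-1] if u > levels[-1] else values[0]

# Hypothetical per-floor radon quantile estimates (Bq/m3) and population weights.
floors = {
    "ground": {"q": [20.0, 60.0, 200.0], "weight": 0.6},
    "upper":  {"q": [10.0, 30.0, 90.0],  "weight": 0.4},
}
levels = [0.05, 0.5, 0.95]

rng = random.Random(0)
draws = []
for _ in range(10000):
    # pick a floor by population weight, then draw from its estimated distribution
    floor = rng.choices(list(floors), weights=[f["weight"] for f in floors.values()])[0]
    draws.append(inverse_cdf_sample(levels, floors[floor]["q"], rng.random()))
mean = sum(draws) / len(draws)
```

Aggregating the draws propagates each prediction's uncertainty into the population-level estimate, which is the idea behind the paper's aggregation scheme.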
https://arxiv.org/abs/2310.11143
Academic Papers
svg
134fd772df66f64f91a98451ab1d614d0fb3bbbd5c7fcb392e50d35583d65bfd
2026-01-07T00:00:00-05:00
Learning mirror maps in policy mirror descent
arXiv:2402.05187v3 Announce Type: replace-cross Abstract: Policy Mirror Descent (PMD) is a popular framework in reinforcement learning, serving as a unifying perspective that encompasses numerous algorithms. These algorithms are derived through the selection of a mirror map and enjoy finite-time convergence guarantees. Despite its popularity, the exploration of PMD's full potential is limited, with the majority of research focusing on a particular mirror map -- namely, the negative entropy -- which gives rise to the renowned Natural Policy Gradient (NPG) method. It remains uncertain from existing theoretical studies whether the choice of mirror map significantly influences PMD's efficacy. In our work, we conduct empirical investigations to show that the conventional mirror map choice (NPG) often yields less-than-optimal outcomes across several standard benchmark environments. Using evolutionary strategies, we identify more efficient mirror maps that enhance the performance of PMD. We first focus on a tabular environment, i.e. Grid-World, where we relate existing theoretical bounds with the performance of PMD for a few standard mirror maps and the learned one. We then show that it is possible to learn a mirror map that outperforms the negative entropy in more complex environments, such as the MinAtar suite. Additionally, we demonstrate that the learned mirror maps generalize effectively to different tasks by testing each map across various other environments.
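For readers unfamiliar with the NPG special case mentioned above: with the negative-entropy mirror map, one PMD step reduces to an exponentiated (multiplicative-weights) policy update. A minimal single-state sketch, with toy Q-values of our choosing:

```python
import math

def pmd_negative_entropy_step(policy, q_values, eta):
    """One PMD step with the negative-entropy mirror map (the NPG update):
    pi'(a) is proportional to pi(a) * exp(eta * Q(a))."""
    weights = [p * math.exp(eta * q) for p, q in zip(policy, q_values)]
    z = sum(weights)
    return [w / z for w in weights]

q = [1.0, 0.0, 0.5]          # toy action values for a single state
pi = [1 / 3, 1 / 3, 1 / 3]   # start from the uniform policy
for _ in range(200):
    pi = pmd_negative_entropy_step(pi, q, eta=0.5)
# the policy concentrates on the highest-value action
```

A different mirror map changes only the `exp` term (the gradient of the mirror map's conjugate), which is the degree of freedom the paper learns over.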
https://arxiv.org/abs/2402.05187
Academic Papers
svg
5daca0448ad935a2d398e70aba5441e5f7f6e29327e6c9e16b3b8e606b5e2dcf
2026-01-07T00:00:00-05:00
A Method For Bounding Tail Probabilities
arXiv:2402.13662v3 Announce Type: replace-cross Abstract: We present a method for upper and lower bounding the right and the left tail probabilities of continuous random variables (RVs). For the right tail probability of RV $X$ with probability density function $f (x)$, this method requires first setting a continuous, positive, and strictly decreasing function $g (x)$ such that $-f (x)/g' (x)$ is a decreasing and increasing function, $\forall x>x_0$, which results in upper and lower bounds, respectively, given in the form $-f (x) g (x)/g' (x)$, $\forall x>x_0$, where $x_0$ is some point. Similarly, for the upper and lower bounds on the left tail probability of $X$, this method requires first setting a continuous, positive, and strictly increasing function $g (x)$ such that $f (x)/g' (x)$ is an increasing and decreasing function, $\forall x<x_0$. We provide some examples of good candidates for the function $g (x)$. We also establish connections between the new bounds and Markov's inequality and Chernoff's bound. In addition, we provide an iterative method for obtaining ever tighter lower and upper bounds, under certain conditions. As an application, we use the proposed method to derive a novel closed-form asymptotic expression of the converse bound on the capacity of the additive white Gaussian noise (AWGN) channel in the finite-blocklength regime, which is tighter than the closed-form asymptotic expression by Polyanskiy-Poor-Verd\'u. Finally, we provide numerical examples where we show the tightness of the bounds obtained by the proposed method.
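As a concrete worked instance of the method (our example, consistent with the abstract's formulas): for the standard normal density $f(x)=\phi(x)$, choosing $g(x) = e^{-x^2/2}$ gives $-f(x)/g'(x) = 1/(\sqrt{2\pi}\,x)$, which is decreasing for $x>0$, so the upper bound $-f(x)g(x)/g'(x) = \phi(x)/x$ applies; this recovers the classical bound $Q(x) \le \phi(x)/x$:

```python
import math

def upper_tail_bound(x):
    """Bound -f(x)*g(x)/g'(x) with g(x) = exp(-x**2/2) for the standard
    normal: it simplifies to phi(x) / x, valid for x > 0."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return phi / x

def normal_tail(x):
    """Exact right-tail probability Q(x) of the standard normal via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in (1.0, 2.0, 4.0):
    assert normal_tail(x) <= upper_tail_bound(x)   # the bound holds
```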
https://arxiv.org/abs/2402.13662
Academic Papers
svg
cec46043e748d26030c4e3d521b05f36e904c3f3b0491028bd40c857e8712c1d
2026-01-07T00:00:00-05:00
Convergence of Decentralized Stochastic Subgradient-based Methods for Nonsmooth Nonconvex functions
arXiv:2403.11565v4 Announce Type: replace-cross Abstract: In this paper, we focus on decentralized stochastic subgradient-based methods for minimizing nonsmooth nonconvex functions without Clarke regularity, especially in the decentralized training of nonsmooth neural networks. We propose a general framework that unifies various decentralized subgradient-based methods, such as decentralized stochastic subgradient descent (DSGD), DSGD with gradient-tracking technique (DSGD-T), and DSGD with momentum (DSGD-M). To establish the convergence properties of our proposed framework, we relate the discrete iterates to the trajectories of a continuous-time differential inclusion, which is assumed to have a coercive Lyapunov function with a stable set $\mathcal{A}$. We prove the asymptotic convergence of the iterates to the stable set $\mathcal{A}$ with sufficiently small and diminishing step-sizes. These results provide the first convergence guarantees for some well-recognized decentralized stochastic subgradient-based methods without Clarke regularity of the objective function. Preliminary numerical experiments demonstrate that our proposed framework yields highly efficient decentralized stochastic subgradient-based methods with convergence guarantees in the training of nonsmooth neural networks.
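A minimal DSGD instance of the kind such frameworks cover (our own toy, not the paper's experiments): four agents on a ring, each holding the nonsmooth local objective $f_i(x) = |x - a_i|$, mixing with neighbours and taking diminishing subgradient steps.

```python
def sign(v):
    return (v > 0) - (v < 0)   # a subgradient of abs at v (0 at the kink)

# Decentralized subgradient descent on a 4-agent ring; the consensus
# minimizer of sum_i |x - a_i| lies in the median interval of the a_i.
a = [0.0, 1.0, 2.0, 3.0]
n = len(a)
x = [10.0] * n                           # every agent starts far from the optimum

for k in range(1, 2001):
    step = 1.0 / k ** 0.75               # diminishing step-sizes; their sum diverges
    # doubly stochastic ring averaging (self + two neighbours)
    mixed = [(x[i - 1] + x[i] + x[(i + 1) % n]) / 3 for i in range(n)]
    x = [mixed[i] - step * sign(x[i] - a[i]) for i in range(n)]
```

After 2000 iterations the agents have nearly reached consensus inside the median interval [1, 2], even though each local objective is nonsmooth.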
https://arxiv.org/abs/2403.11565
Academic Papers
svg
06afa83692a4c4c82129aac470be17a8d5da705b269f0ffd8bd25eaf0bb427df
2026-01-07T00:00:00-05:00
A split-step Christov method for approximating rational PDE solutions
arXiv:2407.04013v3 Announce Type: replace-cross Abstract: Rational solutions of partial differential equations (PDEs) are notoriously difficult to approximate via spectral Fourier methods due to their algebraically slow decay rate. In this work we discuss approximating rational PDE solutions in a basis of orthogonal functions known as the Fourier series, allowing for the computation of its spectrum via the fast Fourier transform. Spectral differentiation matrices are derived. Several explicit fourth-order split-step integrators are derived and their performance compared. As an application, rogue wave solutions in a family of nonlinear Schr\"odinger equations are explored. Perturbing the constant background is found to generate rogue wave-like structures. The effects of higher-order dispersion and generalized nonlinearities are also examined.
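A split-step integrator of the kind discussed can be sketched at second order (the paper derives fourth-order variants) for the focusing NLS $iu_t + u_{xx}/2 + |u|^2u = 0$, using a naive DFT in place of an FFT. We check it on the constant background $u(x,t) = e^{it}$, whose perturbations seed the rogue-wave structures mentioned in the abstract; the grid size and step are our choices.

```python
import cmath, math

def dft(u, inverse=False):
    """Naive O(N^2) discrete Fourier transform (a stand-in for an FFT)."""
    n = len(u)
    s = 1 if inverse else -1
    out = [sum(u[m] * cmath.exp(s * 2j * math.pi * k * m / n) for m in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def strang_step(u, dt, length):
    """One Strang split step: half nonlinear, full linear (in Fourier), half nonlinear."""
    n = len(u)
    u = [v * cmath.exp(1j * abs(v) ** 2 * dt / 2) for v in u]       # u_t = i|u|^2 u
    ks = [2 * math.pi * (k if k <= n // 2 else k - n) / length for k in range(n)]
    hat = dft(u)
    hat = [h * cmath.exp(-1j * kk ** 2 * dt / 2) for h, kk in zip(hat, ks)]  # u_t = i u_xx / 2
    u = dft(hat, inverse=True)
    return [v * cmath.exp(1j * abs(v) ** 2 * dt / 2) for v in u]

n, length, dt = 16, 2 * math.pi, 0.01
u = [1.0 + 0j] * n                      # constant background; exact solution is e^{i t}
for _ in range(100):
    u = strang_step(u, dt, length)
# after t = 1 the field should be close to exp(1j), with the L2 norm conserved
```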
https://arxiv.org/abs/2407.04013
Academic Papers
svg
caafb35fb35997e656145fccb1601517895e9c8603081675e6fc0866a29a137f
2026-01-07T00:00:00-05:00
Robust Egoistic Rigid Body Localization
arXiv:2501.10219v2 Announce Type: replace-cross Abstract: We consider a robust and self-reliant (or "egoistic") variation of the rigid body localization (RBL) problem, in which a primary rigid body seeks to estimate the pose (i.e., location and orientation) of another rigid body (or "target"), relative to its own, without the assistance of external infrastructure, without prior knowledge of the shape of the target, and taking into account the possibility that the available observations are incomplete. Three complementary contributions are then offered for such a scenario. The first is a method to estimate the translation vector between the center point of both rigid bodies, which unlike existing techniques does not require that both objects have the same shape or even the same number of landmark points. This technique is shown to significantly outperform the state-of-the-art (SotA) under complete information, but to be sensitive to data erasures, even when enhanced by matrix completion methods. The second contribution, designed to offer improved performance in the presence of incomplete information, offers a robust alternative to the latter, at the expense of a slight relative loss under complete information. Finally, the third contribution is a scheme for the estimation of the rotation matrix describing the relative orientation of the target rigid body with respect to the primary. Comparisons of the proposed schemes and SotA techniques demonstrate the advantage of the contributed methods in terms of root mean square error (RMSE) performance under fully complete information and incomplete conditions.
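The pose-estimation problem can be grounded with the classical same-shape baseline the paper improves upon: centroid differencing for the translation between center points, plus a closed-form Kabsch-style angle fit for the rotation (shown in 2-D for brevity). This is a textbook sketch with known correspondences, not the paper's robust estimator, which drops the equal-shape and equal-landmark-count assumptions:

```python
import math

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def relative_pose_2d(primary, target):
    """Translation (difference of center points) and rotation angle between two
    2-D landmark sets with known one-to-one correspondences."""
    cp, ct = centroid(primary), centroid(target)
    num = den = 0.0
    for (px, py), (tx, ty) in zip(primary, target):
        ax, ay = px - cp[0], py - cp[1]
        bx, by = tx - ct[0], ty - ct[1]
        num += ax * by - ay * bx      # accumulated cross products
        den += ax * bx + ay * by      # accumulated dot products
    return (ct[0] - cp[0], ct[1] - cp[1]), math.atan2(num, den)

primary = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
target  = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]  # primary rotated 90 degrees, then shifted
trans, angle = relative_pose_2d(primary, target)
```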
https://arxiv.org/abs/2501.10219
Academic Papers
svg
cb9d1ac69814bf0023ad6d5d77624977e126010baf133bf891582039574a61aa
2026-01-07T00:00:00-05:00
Multimodal oscillator networks learn to solve a classification problem
arXiv:2502.12020v3 Announce Type: replace-cross Abstract: We numerically demonstrate a network of coupled oscillators that can learn to solve a classification task from a set of examples -- performing both training and inference through the nonlinear evolution of the system. We accomplish this by combining three key elements to achieve learning: a long-term memory that stores learned responses, analogous to the synapses in biological brains; a short-term memory that stores the neural activations, similar to the firing patterns of neurons; and an evolution law that updates the synapses in response to novel examples, inspired by synaptic plasticity. Achieving all three elements in wave-based information processors such as metamaterials is a significant challenge. Here, we solve it by leveraging the material multistability to implement long-term memory, and harnessing symmetries and thermal noise to realize the learning rule. Our analysis reveals that the learning mechanism, although inspired by synaptic plasticity, also shares parallels with bacterial evolution strategies, where mutation rates increase in the presence of noxious stimuli.
https://arxiv.org/abs/2502.12020
Academic Papers
svg
16534e41e8033ce0ae874e770c2b19de414d7d11c60820f6bc7d12a277ccc651
2026-01-07T00:00:00-05:00
Network topology of the Euro Area interbank market
arXiv:2502.15611v2 Announce Type: replace-cross Abstract: The rapidly increasing availability of large amounts of granular financial data, paired with the advances of big data related technologies induces the need of suitable analytics that can represent and extract meaningful information from such data. In this paper we propose a multi-layer network approach to distill the Euro Area (EA) banking system in different distinct layers. Each layer of the network represents a specific type of financial relationship between banks, based on various sources of EA granular data collections. The resulting multi-layer network allows one to describe, analyze and compare the topology and structure of EA banks from different perspectives, eventually yielding a more complete picture of the financial market. This granular information representation has the potential to enable researchers and practitioners to better apprehend financial system dynamics as well as to support financial policies to manage and monitor financial risk from a more holistic point of view.
https://arxiv.org/abs/2502.15611
Academic Papers
svg
271e954cb4c92aa628bbb60286a9b29aa97466d4a5e04796a7d7962fea7d676b
2026-01-07T00:00:00-05:00
Global law of conjugate kernel random matrices with heavy-tailed weights
arXiv:2502.18428v2 Announce Type: replace-cross Abstract: We study the asymptotic spectral distribution of the conjugate kernel random matrix $YY^\top$, where $Y= f(WX)$ arises from a two-layer neural network model. We consider the setting where $W$ and $X$ are random rectangular matrices with i.i.d.\ entries, where the entries of $W$ follow a heavy-tailed distribution, while those of $X$ have light tails. Our assumptions on $W$ include a broad class of heavy-tailed distributions, such as symmetric $\alpha$-stable laws with $\alpha \in ]0,2[$ and sparse matrices with $\mathcal{O}(1)$ nonzero entries per row. The activation function $f$, applied entrywise, is bounded, smooth, odd, and nonlinear. We compute the limiting eigenvalue distribution of $YY^\top$ through its moments and show that heavy-tailed weights induce strong correlations between the entries of $Y$, resulting in richer and fundamentally different spectral behavior compared to the light-tailed case.
https://arxiv.org/abs/2502.18428
Academic Papers
svg
89b50810687351d37f9a9661c64bc3cca65d0d2d4ead152c9c92c6f9969cde7c
2026-01-07T00:00:00-05:00
SPARKLE: A Nonparametric Approach for Online Decision-Making with High-Dimensional Covariates
arXiv:2503.16941v3 Announce Type: replace-cross Abstract: Personalized services are central to today's digital economy, and their sequential decisions are often modeled as contextual bandits. Modern applications pose two main challenges: high-dimensional covariates and the need for nonparametric models to capture complex reward-covariate relationships. We propose SPARKLE, a novel contextual bandit algorithm based on a sparse additive reward model that addresses both challenges through (i) a doubly penalized estimator for nonparametric reward estimation and (ii) an epoch-based design with adaptive screening to balance exploration and exploitation. We prove a sublinear regret bound that grows only logarithmically in the covariate dimensionality; to our knowledge, this is the first such result for nonparametric contextual bandits with high-dimensional covariates. We also derive an information-theoretic lower bound, and the gap to the upper bound vanishes as the reward smoothness increases. Extensive experiments on synthetic data and real data from video recommendation and personalized medicine show strong performance in high-dimensional settings.
https://arxiv.org/abs/2503.16941
Academic Papers
svg
039beac21f5997174bae7ef5cdb8de5c940cdccd4dcf0e2e8930ce81ede0c830
2026-01-07T00:00:00-05:00
AdGT: Decentralized Gradient Tracking with Tuning-free Per-Agent Stepsize
arXiv:2504.15196v4 Announce Type: replace-cross Abstract: In decentralized optimization, the choice of stepsize plays a critical role in algorithm performance. A common approach is to use a shared stepsize across all agents to ensure convergence. However, selecting an optimal stepsize often requires careful tuning, which can be time-consuming and may lead to slow convergence, especially when there is significant variation in the smoothness (L-smoothness) of local objective functions across agents. Individually tuning stepsizes per agent is also impractical, particularly in large-scale networks. To address these limitations, we propose AdGT, an adaptive gradient tracking method that enables each agent to adjust its stepsize based on the smoothness of its local objective. We prove that AdGT achieves linear convergence to the global optimal solution. Through numerical experiments, we compare AdGT with fixed-stepsize gradient tracking methods and demonstrate its superior performance. Additionally, we compare AdGT with adaptive gradient descent (AdGD) in a centralized setting and observe that fully adaptive stepsizes offer greater benefits in decentralized networks than in centralized ones.
https://arxiv.org/abs/2504.15196
Academic Papers
svg
dc09f58fd9820331604050de45db00af5785c8a3025d845e37e7917566c7c6a9
2026-01-07T00:00:00-05:00
Vertex evaluation of multiplex graphs using Forman Curvature
arXiv:2504.17286v2 Announce Type: replace-cross Abstract: The identification of vertices that play a central role in network analysis is a fundamental challenge. Although traditional centrality measures have been extensively employed for this purpose, the increasing complexity of modern networks necessitates the use of sophisticated metrics. The concept of Forman curvature has recently garnered significant attention as a promising approach. We define the Forman curvature for multiplex graphs, which are a category of complex networks characterized by multiple layers of connections between nodes. We then prove the key properties of the Forman curvature in the context of multiplex graphs and show its usefulness in identifying vertices occupying central positions within these networks. Moreover, through a series of comparative experiments with traditional graph features and graph kernels, we demonstrate that the Forman curvature can function as an effective metric for classifying the overall structure of networks.
https://arxiv.org/abs/2504.17286
Academic Papers
svg
03592e022a71576fcf75b20c32091df5ae58a41d1f41f7b8ac3f8b4a5fdccd31
2026-01-07T00:00:00-05:00
Machine Learning-Based Modeling of the Anode Heel Effect in X-ray Beam Monte Carlo Simulations
arXiv:2504.19155v3 Announce Type: replace-cross Abstract: To develop a machine learning-based framework for accurately modeling the anode heel effect in Monte Carlo simulations of X-ray imaging systems, enabling realistic beam intensity profiles with minimal experimental calibration. Multiple regression models were trained to predict spatial intensity variations along the anode-cathode axis using experimentally acquired weights derived from beam measurements across different tube potentials. These weights captured the asymmetry introduced by the anode heel effect. A systematic fine-tuning protocol was established to minimize the number of required measurements while preserving model accuracy. The models were implemented in the OpenGATE 10 and GGEMS Monte Carlo toolkits to evaluate their integration feasibility and predictive performance. Among the tested models, gradient boosting regression (GBR) delivered the highest accuracy, with prediction errors remaining below 5% across all energy levels. The optimized fine-tuning strategy required only six detector positions per energy level, reducing measurement effort by 65%. The maximum error introduced through this fine-tuning process remained below 2%. Dose actor comparisons within Monte Carlo simulations demonstrated that the GBR-based model closely replicated clinical beam profiles and significantly outperformed conventional symmetric beam models. This study presents a robust and generalizable method for incorporating the anode heel effect into Monte Carlo simulations using machine learning. By enabling accurate, energy-dependent beam modeling with limited calibration data, the approach enhances simulation realism for applications in clinical dosimetry, image quality assessment, and radiation protection.
https://arxiv.org/abs/2504.19155
Academic Papers
svg
7a12d3180e7db3ff4922251bb3837a8f1b3fc0a1aaa2039ddccb2762f55e3441
2026-01-07T00:00:00-05:00
Constant-Factor Algorithms for Revenue Management with Consecutive Stays
arXiv:2506.00909v2 Announce Type: replace-cross Abstract: We study network revenue management problems motivated by applications such as railway ticket sales and hotel room bookings. Requests, each requiring a resource for a consecutive stay, arrive sequentially with known arrival probabilities. We investigate two scenarios: the accept-or-reject scenario, where a request can be fulfilled by assigning any available resource; and the BAM-based scenario, which generalizes the former by incorporating customer preferences through the basic attraction model (BAM), allowing the platform to offer an assortment of available resources from which the customer may choose. We develop polynomial-time policies and evaluate their performance using approximation ratios, defined as the ratio between the expected revenue of our policy and that of the optimal online algorithm. When each arrival has a fixed request type (e.g., the interval of the stay is fixed), we establish constant-factor guarantees: a ratio of 1 - 1/e for the accept-or-reject scenario and 0.25 for the BAM-based scenario. We further extend these results to the case where the request type is random (e.g., the interval of the stay is random). In this setting, the approximation ratios incur an additional multiplicative factor of 1 - 1/e, resulting in guarantees of at least 0.399 for the accept-or-reject scenario and 0.156 for the BAM-based scenario. These constant-factor guarantees stand in sharp contrast to the prior nonconstant competitive ratios that are benchmarked against the offline optimum.
https://arxiv.org/abs/2506.00909
Academic Papers
svg
c2009d164a2c973d1a3e48be00e603350035328d16f03803205c5d9d81e34167
2026-01-07T00:00:00-05:00
Explainable AI Technique in Lung Cancer Detection Using Convolutional Neural Networks
arXiv:2508.10196v3 Announce Type: replace-cross Abstract: Early detection of lung cancer is critical to improving survival outcomes. We present a deep learning framework for automated lung cancer screening from chest computed tomography (CT) images with integrated explainability. Using the IQ-OTH/NCCD dataset (1,197 scans across Normal, Benign, and Malignant classes), we evaluate a custom convolutional neural network (CNN) and three fine-tuned transfer learning backbones: DenseNet121, ResNet152, and VGG19. Models are trained with cost-sensitive learning to mitigate class imbalance and evaluated via accuracy, precision, recall, F1-score, and ROC-AUC. While ResNet152 achieved the highest accuracy (97.3%), DenseNet121 provided the best overall balance in precision, recall, and F1 (up to 92%, 90%, 91%, respectively). We further apply Shapley Additive Explanations (SHAP) to visualize evidence contributing to predictions, improving clinical transparency. Results indicate that CNN-based approaches augmented with explainability can provide fast, accurate, and interpretable support for lung cancer screening, particularly in resource-limited settings.
https://arxiv.org/abs/2508.10196
Academic Papers
svg
4c3a1ea84b3d40bad99202824a67b903e388be91299270a3639addb3d9a5fc57
2026-01-07T00:00:00-05:00
Machine Learning H-theorem
arXiv:2508.14003v3 Announce Type: replace-cross Abstract: The H-theorem provides a microscopic foundation for the Second Law of Thermodynamics and is therefore essential to establishing statistical physics; at the same time, it has been subject to controversy that in part persists to this day. To better understand the H-theorem and its relation to the arrow of time, we study the equilibration of randomly oriented and positioned hard disks with periodic boundary conditions. Using a model based on the DeepSets architecture, which imposes permutation invariance of the particle labels, we train a model to capture the irreversibility of the H-functional.
https://arxiv.org/abs/2508.14003
Academic Papers
svg
2ffbf635bb8b8915ed6f72ce59aad2674f997cebd460823986b99403827b259a
2026-01-07T00:00:00-05:00
Integrating upstream and downstream reciprocity stabilizes cooperator-defector coexistence in others-only public goods games
arXiv:2509.04743v2 Announce Type: replace-cross Abstract: Human cooperation persists among strangers in large, well-mixed populations despite theoretical predictions of difficulties, leaving a fundamental evolutionary puzzle. While upstream (pay-it-forward: helping others because you were helped) and downstream (rewarding-reputation: helping those with good reputations) indirect reciprocity have been independently considered as solutions, their joint dynamics in multiplayer contexts remain unexplored. We study public goods games without self-return (often called "others-only" PGGs) with benefit b and cost c and analyze evolutionary dynamics for three strategies: unconditional cooperation (ALLC), unconditional defection (ALLD), and an integrated reciprocity strategy combining unconditional forwarding with reputation-based discrimination. We show that integrating upstream and downstream reciprocity can yield a globally asymptotically stable mixed equilibrium of ALLD and integrated reciprocators when b/c > 2 in the absence of complexity costs. We analytically derive a critical threshold for complexity costs. If cognitive demands exceed this threshold, the stable equilibrium disappears via a saddle-node bifurcation. Otherwise, within the stable regime, complexity costs counterintuitively stabilize the equilibrium by preventing not only ALLC but also alternative conditional strategies from invading. Rather than requiring uniformity, our model reveals one pathway to stable cooperation through strategic diversity. ALLD serves as "evolutionary shields" preventing system collapse while integrated reciprocators flexibly combine open and discriminative responses. This framework demonstrates how pay-it-forward broadcasting and reputation systems can jointly maintain social polymorphism including cooperation despite cognitive limitations and group size challenges, offering a potential evolutionary foundation for behavioral diversity in human societies.
https://arxiv.org/abs/2509.04743
Academic Papers
svg
ca56de06cf5928d2317f311ec82346ac512e0ca07771bb9d54e518e011c9e8ee
2026-01-07T00:00:00-05:00
Error analysis of a compositional score-based algorithm for simulation-based inference
arXiv:2510.15817v2 Announce Type: replace-cross Abstract: Simulation-based inference (SBI) has become a widely used framework in applied sciences for estimating the parameters of stochastic models that best explain experimental observations. A central question in this setting is how to effectively combine multiple observations in order to improve parameter inference and obtain sharper posterior distributions. Recent advances in score-based diffusion methods address this problem by constructing a compositional score, obtained by aggregating individual posterior scores within the diffusion process. While it is natural to suspect that the accumulation of individual errors may significantly degrade sampling quality as the number of observations grows, this important theoretical issue has so far remained unexplored. In this paper, we study the compositional score produced by the GAUSS algorithm of Linhart et al. (2024) and establish an upper bound on its mean squared error in terms of both the individual score errors and the number of observations. We illustrate our theoretical findings on a Gaussian example, where all analytical expressions can be derived in a closed form.
https://arxiv.org/abs/2510.15817
Academic Papers
svg
424770e1ace1de6ac00ef13926601acf0572714e01ccc07d6536d36322c86845
2026-01-07T00:00:00-05:00
A Trust-region Funnel Algorithm for Grey-Box Optimisation
arXiv:2511.18998v2 Announce Type: replace-cross Abstract: Grey-box optimisation, where some parts of an optimisation problem are represented by explicit algebraic (glass-box) models while others are treated as black-box models lacking analytic derivatives, remains a challenge in process systems engineering. Trust-region (TR) methods provide a robust framework for grey-box problems by combining accurate glass-box derivatives with local reduced models (RMs) for black-box components. However, existing TR approaches often involve complex multi-layered formulations requiring extensive parameter tuning, or lack open-source implementations. Motivated by the recent advances in funnel-based convergence theory for nonlinear optimisation and the TR filter method, we propose a novel TR funnel algorithm for grey-box optimisation that replaces the filter acceptance criterion with a generalisable uni-dimensional funnel, maintaining a monotonically non-increasing upper bound on the approximation error of the local black-box RMs. A global convergence proof to a first-order critical point is established. The algorithm, implemented in an open-source Pyomo framework, supports multiple RM forms and globalisation strategies (filter or funnel). Benchmark tests on seven numerical and engineering problems show that the TR funnel algorithm achieves comparable and often improved performance relative to the classical TR filter method. The TR funnel method thus provides a simpler and extensible alternative for large-scale grey-box optimisation.
https://arxiv.org/abs/2511.18998
Academic Papers
svg
1486d7c69907afadbe7b159dbcda8ef10ff5fc4f199ca2e8ec73ef4ed097ab3f
2026-01-07T00:00:00-05:00
The Color-Clinical Decoupling: Why Perceptual Calibration Fails Clinical Biomarkers in Smartphone Dermatology
arXiv:2512.21988v2 Announce Type: replace-cross Abstract: Smartphone-based tele-dermatology assumes that colorimetric calibration ensures clinical reliability, yet this remains untested for underrepresented skin phototypes. We investigated whether standard calibration translates to reliable clinical biomarkers using 43,425 images from 965 Korean subjects (Fitzpatrick III-IV) across DSLR, tablet, and smartphone devices. While Linear Color Correction Matrix (CCM) normalization reduced color error by 67-77% -- achieving near-clinical accuracy (Delta E < 2.3) -- this success did not translate to biomarker reliability. We identify a phenomenon termed "color-clinical decoupling": despite perceptual accuracy, the Individual Typology Angle (ITA) showed poor inter-device agreement (ICC = 0.40), while the Melanin Index achieved good agreement (ICC = 0.77). This decoupling is driven by the ITA formula's sensitivity to b* channel noise and is further compounded by anatomical variance. Facial region accounts for 25.2% of color variance -- 3.6x greater than device effects (7.0%) -- challenging the efficacy of single-patch calibration. Our results demonstrate that current colorimetric standards are insufficient for clinical-grade biomarker extraction, necessitating region-aware protocols for mobile dermatology.
https://arxiv.org/abs/2512.21988
Academic Papers
svg
7f6e561c4a4e674d71fc36504e5e708c8c951ddcc06357de7d39c4d0f4f42f59
2026-01-07T00:00:00-05:00
Modeling Information Blackouts in Missing Not-At-Random Time Series Data
arXiv:2601.01480v2 Announce Type: replace-cross Abstract: Large-scale traffic forecasting relies on fixed sensor networks that often exhibit blackouts: contiguous intervals of missing measurements caused by detector or communication failures. These outages are typically handled under a Missing At Random (MAR) assumption, even though blackout events may correlate with unobserved traffic conditions (e.g., congestion or anomalous flow), motivating a Missing Not At Random (MNAR) treatment. We propose a latent state-space framework that jointly models (i) traffic dynamics via a linear dynamical system and (ii) sensor dropout via a Bernoulli observation channel whose probability depends on the latent traffic state. Inference uses an Extended Kalman Filter with Rauch-Tung-Striebel smoothing, and parameters are learned via an approximate EM procedure with a dedicated update for detector-specific missingness parameters. On the Seattle inductive loop detector data, introducing latent dynamics yields large gains over naive baselines, reducing blackout imputation RMSE from 7.02 (LOCF) and 5.02 (linear interpolation + seasonal naive) to 4.23 (MAR LDS), corresponding to about a 64% reduction in MSE relative to LOCF. Explicit MNAR modeling provides a consistent but smaller additional improvement on real data (imputation RMSE 4.20; 0.8% RMSE reduction relative to MAR), with similar modest gains for short-horizon post-blackout forecasts (evaluated at 1, 3, and 6 steps). In controlled synthetic experiments, the MNAR advantage increases as the true missingness dependence on latent state strengthens. Overall, temporal dynamics dominate performance, while MNAR modeling offers a principled refinement that becomes most valuable when missingness is genuinely informative.
https://arxiv.org/abs/2601.01480
Academic Papers
svg
91bfbb28a51400e2ea78658da3cae2878602567715c220210c0e36f4b11e5bf2
2026-01-07T00:00:00-05:00
Rethinking Secure Semantic Communications in the Age of Generative and Agentic AI: Threats and Opportunities
arXiv:2601.01791v2 Announce Type: replace-cross Abstract: Semantic communication (SemCom) improves communication efficiency by transmitting task-relevant information instead of raw bits and is expected to be a key technology for 6G networks. Recent advances in generative AI (GenAI) further enhance SemCom by enabling robust semantic encoding and decoding under limited channel conditions. However, these efficiency gains also introduce new security and privacy vulnerabilities. Due to the broadcast nature of wireless channels, eavesdroppers can also use powerful GenAI-based semantic decoders to recover private information from intercepted signals. Moreover, rapid advances in agentic AI enable eavesdroppers to perform long-term and adaptive inference through the integration of memory, external knowledge, and reasoning capabilities. This allows eavesdroppers to further infer user private behavior and intent beyond the transmitted content. Motivated by these emerging challenges, this paper comprehensively rethinks the security and privacy of SemCom systems in the age of generative and agentic AI. We first present a systematic taxonomy of eavesdropping threat models in SemCom systems. Then, we provide insights into how GenAI and agentic AI can enhance eavesdropping threats. Meanwhile, we also highlight potential opportunities for leveraging GenAI and agentic AI to design privacy-preserving SemCom systems.
https://arxiv.org/abs/2601.01791
Academic Papers
svg
2b6780dd46494a0bf3b6cab849282da5f703979ea710f9710ed89f6037c9abb3
2026-01-07T00:00:00-05:00
Spectral Properties and Energy Injection in Mercury's Magnetotail Current Sheet
arXiv:2601.02393v1 Announce Type: new Abstract: Mercury's magnetotail hosts a thin and highly dynamic current sheet (CS), where magnetic reconnection and strong fluctuations frequently occur. Here, we statistically analyze magnetic field power spectra across 370 magnetotail CSs observed by MESSENGER. About 20% of the events are quasi-laminar, showing single power-law spectra, whereas 80% are turbulent, exhibiting a spectral break separating inertial and kinetic ranges. A dawn-dusk asymmetry is identified: inertial-range slopes are systematically shallower on the dawnside, whereas kinetic-range slopes are steeper, indicating more developed turbulence there, consistent with the higher occurrence of reconnection-related processes on the dawnside. Component analysis shows that the transverse components, orthogonal to the tail-aligned principal field (BX), display shallow slopes near -1 in the inertial range, suggesting energy injection at ion scales rather than a classical inertial range. These results demonstrate that Mercury's unique plasma environment fundamentally reshapes the initiation of turbulence and the redistribution of energy in the magnetotail.
https://arxiv.org/abs/2601.02393
Academic Papers
svg
285d289c722c57c2b489c99bf154c6624b20ee80aaa783ac20929e86e258e6a8
2026-01-07T00:00:00-05:00
Modeling Policy and Resource Dynamics in the Construction Sector of Developing Countries: A System Dynamics Approach Using Sudan as a Case Study
arXiv:2601.02405v1 Announce Type: new Abstract: Construction industries in developing countries face systemic challenges such as chronic project delays, cost overruns, and regulatory inefficiencies. This paper presents a system dynamics (SD) modeling framework for analyzing policy and resource dynamics within the construction sector in Sudan, with broader applicability to Least Developed Countries (LDCs). The model incorporates key variables related to workforce, material supply, financing, and policy delays, and is calibrated using genetic algorithms (GAs) based on sectoral data and expert input. Simulation results across four policy scenarios indicate that regulatory reform and workforce training are the most effective levers for improving project performance. Specifically, implementing streamlined regulatory procedures reduced project delays by up to 32%, while investment in human capital decreased cost overruns by 28% over a 10-year simulation horizon. In contrast, scenarios focusing solely on material supply or financial inputs produced limited gains without corresponding policy or labor improvements. Sensitivity analysis further revealed that the system is highly responsive to macroeconomic stability and public investment flows. The study demonstrates that a hybrid SD-GA modeling approach offers a valuable decision-support tool for policymakers seeking to improve infrastructure delivery under uncertainty. Recommendations include phased regulatory reforms, targeted capacity building, and integrating modeling tools into strategic infrastructure planning in LDCs.
https://arxiv.org/abs/2601.02405
Academic Papers
svg
2492173607033d392c26c25afdbea6475127460936cd22ef47d0f97680894bcc
2026-01-07T00:00:00-05:00
A Combined Barrow Entropy and QCD Ghost Mechanism for Late-Time Cosmic Acceleration
arXiv:2601.02408v1 Announce Type: new Abstract: We investigate a unified dark-energy scenario based on the combined effects of Barrow entropy corrections and the QCD ghost mechanism, referred to as the BH-QCDGDE model. The dark-energy density is constructed in a generalized holographic form that incorporates both Barrow-deformed entropy corrections and low-energy QCD vacuum effects within a single framework. The cosmological dynamics are analyzed in a spatially flat Friedmann-Lemaître-Robertson-Walker background. The model exhibits a smooth transition from a decelerated matter-dominated era to a late-time accelerated phase without crossing the phantom divide, indicating a viable background evolution. An equivalent scalar-field description of the effective dark-energy sector is reconstructed and shown to admit a quintessence-like behavior. The thermodynamic viability is examined by testing the generalized second law at the apparent horizon, which is found to be satisfied throughout the parameter space. The classical stability of the model is further investigated through the squared speed of sound, revealing the role of model parameters in shaping stable cosmological regimes. Overall, the BH-QCDGDE framework provides a consistent and physically viable description of late-time cosmic acceleration.
https://arxiv.org/abs/2601.02408
Academic Papers
svg
fb054f02ae1146e4db8df9079301bf599703415263c8992707d590297b441871
2026-01-07T00:00:00-05:00
Complex-time singular structure of the 1D Hou-Luo model
arXiv:2601.02464v1 Announce Type: new Abstract: Starting from smooth initial data, we investigate the complex-time analytic structure of the one-dimensional Hou-Luo (HL) model, a wall approximation of the three-dimensional axisymmetric Euler equations. While the finite-time blow-up in this setting has already been established, here we chart the entire singular landscape. This analysis is enabled by a novel formulation of the HL model in Lagrangian coordinates, in which the time-Taylor coefficients of the flow fields are evaluated symbolically to high truncation order. Our results are threefold. First, we show that the Lagrangian series for the vorticity converges within the complex-time disc of radius $t_\star > 0$ and is free from (early-time) resonances that impede the Eulerian formulation. Second, applying asymptotic analysis on the series, we recover both the blow-up time and the singularity exponent with high accuracy. This also enables a quantitative assessment of the Beale-Kato-Majda criterion, which we find correctly identifies the blow-up time, but washes out the local singularity exponent, as it relies on a spatial supremum. Third, and most importantly, we develop a Lagrangian singularity theory that predicts the eye-shaped singularity profile observed in Eulerian coordinates by exploiting the driving mechanism of the blow-up: the accumulation of multiple fluid particles at the same Eulerian position. The employed techniques extend recently introduced methods for the inviscid Burgers equation [C. Rampf et al., Phys. Rev. Fluids 7 (2022) 10, 104610], and can be further adapted to higher spatial dimensions or other hydrodynamical equations.
https://arxiv.org/abs/2601.02464
Academic Papers
svg
6afa69d1fb610a55aff7b57b9dc8b7d5f3e2d273ce9096dee75ac40a6c00a090
2026-01-07T00:00:00-05:00
How to Engage Active Pedagogy with Physics Faculty: Watch Out for Powerlessness
arXiv:2601.02493v1 Announce Type: new Abstract: Despite the large body of research showing that students in STEM classes at all levels learn better via active learning than via lecture, post-secondary physics and astronomy (P&A) faculty members continue to rely primarily on teacher-focused lecture pedagogy in their classes. Methods include survey answers from eight faculty members and interviews with five faculty members who self-identified as primarily using lecture, conducted to determine their perceptions of why they use lecture. During analysis coding, an unanticipated theme not sufficiently represented in the pre-existing literature rose to the forefront: many of these faculty members feel the decision of pedagogy is out of their control. In conclusion, a grounded theory was developed and is proposed herein that these faculty feel a sense of powerlessness. Reasons offered include that administrators often make decisions based on the financial needs of the school, which then force the faculty into using lecture as their primary pedagogy. Implications include that providing professional development in active pedagogies may not be sufficient to help faculty members change pedagogy, as they may need to be convinced that they have the power to make change and to use student-centered, active learning pedagogies within their own individual constraints and settings. Understanding that some instructors may feel powerless in choosing how to teach is an important step for professional development providers toward ensuring that faculty have a voice and can choose the best teaching methods for their classrooms.
https://arxiv.org/abs/2601.02493
Academic Papers
svg
854a09ee7be9b2eaff1006f7881b25d95707c409442d5290aa47244f17b492e3
2026-01-07T00:00:00-05:00
Ultrafast cation-dication dynamics in ammonia borane: H-migration to roaming H2 and reduced H3+ formation under strong-field ionization
arXiv:2601.02510v1 Announce Type: new Abstract: We report a femtosecond time-resolved strong-field study of ammonia borane (AB, BH3NH3) following both single and double ionization, revealing ultrafast fragmentation dynamics and hydrogen release. Mass spectrometry, combined with fragment correlation analysis and ab initio molecular dynamics simulations, is used to identify the molecular origin of the neutral and ionic products. Singly ionized AB produces neutral H and H2, while doubly ionized AB produces neutral H and H2 along with H+, H2+, and H3+, all within 1 ps. Electronic-structure calculations show that H, H+, H2, H2+, and H3+ originate predominantly from hydrogen atoms bound to the boron center and that their formation proceeds through hydrogen migration and, in some channels, neutral H2 roaming. The calculations further indicate that the dication meets the structural and energetic requirements for neutral H2 release, a prerequisite for forming astrochemically relevant H3+. However, the large adiabatic relaxation energy causes most roaming H2 to dissociate before proton abstraction, suppressing H3+ formation. These results provide new insight into dissociative ionization pathways in hydrogen-rich molecules, extend mechanistic principles developed for halogenated alkanes to ammonia borane, and suggest implications for hydrogen-release chemistry in ammonia-borane-based storage materials.
https://arxiv.org/abs/2601.02510
Academic Papers
svg
3b74529159482c39807ccf7627e6ef5fdfb11b43f26cc95919eaac8d332e487a
2026-01-07T00:00:00-05:00
Relaxation and statistical equilibria in generalised two-dimensional flows
arXiv:2601.02544v1 Announce Type: new Abstract: We study relaxation toward statistical equilibrium states of inviscid generalised two-dimensional flows, where the generalised vorticity $q$ is related to the streamfunction $\psi$ via $q=(-\nabla^2)^{\frac{\alpha}{2}}\psi$, with the parameter $\alpha$ controlling the strength of the nonlinear interactions. The equilibrium solutions exhibit an $\alpha \mapsto -\alpha$ symmetry, under which generalised energy $E_G$ and enstrophy $\Omega_G$ are interchanged. For initial conditions that produce condensates, we find long-lived quasi-equilibrium states far from the thermalised solutions we derive using canonical ensemble theory. Using numerical simulations we find that in the limit of vanishing nonlinearity, as $\alpha \to 0$, the time required for partial thermalisation $\tau_{th}$ scales like $1/\alpha$. So, the relaxation of the system toward equilibrium becomes increasingly slow as the system approaches the weakly nonlinear limit. This behaviour is also captured by a reduced model we derive using multiple scale asymptotics. These findings highlight the role of nonlinearity in controlling the relaxation toward equilibrium and that the inherent symmetry of the statistical equilibria determines the direction of the turbulent cascades.
https://arxiv.org/abs/2601.02544
Academic Papers
svg
637fca50fabb974e3c8b10f4ecdf080fb2189e25ce92f40fdb7e4e25062bccff
2026-01-07T00:00:00-05:00
Electron Beam Profiling via Rydberg Electromagnetically Induced Transparency in Rubidium Vapor with Crossed Laser beams
arXiv:2601.02549v1 Announce Type: new Abstract: We present an all-optical detection approach to determine the position and spatial profile of an electron beam based on quantum properties of alkali metal atoms. To measure the electric field produced by an electron beam, we excite thermal rubidium atoms to a highly excited Rydberg state via a two-photon ladder transition and detect Stark shifts of Rydberg states by monitoring frequencies of the corresponding electromagnetically induced transparency (EIT) transmission peaks. We addressed several technical challenges in this approach. First, we use crossed laser beams to obtain spatial information about the electron beam position and geometry. Second, by pulsing the electron beam and using phase-sensitive optical detection, we separate the true electron beam electric signature from the parasitic electric fields due to photoelectric charges on the windows. Finally, we use principal component analysis to further improve signal quality. We test this method to detect the current and to reconstruct a 2D profile of a 20 keV electron beam with currents ranging from 25 to 100 uA. While this technique provides less spatial resolution than fluorescence-based measurements, thanks to its speed and limited optical access requirements it can be useful for real-time non-invasive diagnostics of charged particle beams at accelerator facilities.
https://arxiv.org/abs/2601.02549
Academic Papers
svg
e898c5c12ada976e5804add242084139409b874675c6001b892b00a1ae559613
2026-01-07T00:00:00-05:00
Deep Learning-based Single-Shot Composite Fringe Projection Profilometry with Pixel-Wise Uncertainty Quantification
arXiv:2601.02572v1 Announce Type: new Abstract: Driven by the growing demand for high-speed 3D measurement in advanced manufacturing, optical metrology algorithms must deliver high accuracy and robustness under dynamic conditions. Fringe projection profilometry (FPP) offers high precision, yet the 2π ambiguity of the wrapped phase means that conventional absolute phase recovery typically relies on multiple coded patterns, sacrificing temporal resolution. Deep learning-based composite FPP (CFPP) shows promise for single-shot phase recovery from a composite fringe, but limited interpretability makes it difficult to assess reconstruction reliability or trace error sources in the absence of ground truth. To address this, we propose HSURE-CFPP (Heteroscedastic Snapshot-ensemble Uncertainty-aware Ratio Estimation for CFPP). HSURE-CFPP predicts the numerator-denominator ratio used for wrapped-phase computation with a heteroscedastic snapshot-ensemble network, enabling ultra-fast 3D imaging from a single composite fringe and producing pixel-wise uncertainty maps for confidence assessment and unreliable-region identification. Specifically, a heteroscedastic likelihood jointly estimates pixel-wise noise variance to capture data uncertainty, while a snapshot ensemble quantifies model uncertainty via dispersion across snapshots, yielding total predictive uncertainty as an interpretable reliability measure. Experiments on static and dynamic scenes demonstrate that HSURE-CFPP achieves high-accuracy reconstruction at high speed and that the predicted uncertainty correlates well with reconstruction errors, providing a deployable quality-assessment mechanism for deep-learning-based FPP.
https://arxiv.org/abs/2601.02572
Academic Papers
svg
365b44feaf63ad57179cb8d766ae5f849c5006b55d00ca66a457ff09f67a0aa8
2026-01-07T00:00:00-05:00
Receiver Functions in the San Fernando Valley, California: Graph-Regularized Bayesian Approach for Gravity-Informed Mapping
arXiv:2601.02575v1 Announce Type: new Abstract: The San Fernando Valley (SFV) in Southern California is a complex sedimentary basin whose shape strongly influences ground shaking. We develop a fully quantitative, probabilistic graph-regularized inference model that integrates both gravity and receiver function (RF) constraints and evaluate its ability to determine the basin's shape. The sediment-basement interface in single-station RFs is often difficult to interpret due to scattering and noise, which can render isolated stations unusable. By using RFs from a dense seismic array and incorporating gravity, we address the issue of non-uniqueness in converting the times of RF phases to layer thickness by comparing the predicted gravity to observations at each station. In areas where the density contrast may change, Bayesian inference with a graph Laplacian allows us to determine the effective density contrast by taking into account its neighbors' picks and densities. This method promotes spatial smoothness between neighboring stations, while preserving sharp contrasts in locations supported by the RF and gravity data. We applied this method to a dataset that was acquired in fall 2023, when 140 nodes were installed in the SFV. Our results show the deep Sylmar sub-basin, the San Fernando sub-basin, and the Leadwell high found in a previous study (Juárez-Zúñiga and Persaud, 2025), and our results also show good agreement with the industry seismic reflection profiles across the valley. This method demonstrates how to incorporate gravity with lateral density variations into receiver function interpretation to better map interfaces in the subsurface.
https://arxiv.org/abs/2601.02575
Academic Papers
svg
698ba52bac3fd94ec2a5fbf2d3bc8dc70213fb65999da71ce68aa0c959cadd85
2026-01-07T00:00:00-05:00
Integrated Radiation-Magneto-Hydrodynamic Simulations of Magnetized Burning Plasmas. I. Magnetizing Ignition-Class Designs
arXiv:2601.02588v1 Announce Type: new Abstract: Motivated by breakthroughs in inertial confinement fusion (ICF), first achieving ignition conditions in National Ignition Facility (NIF) shot N210808 and then laser energy breakeven in N221204, modeling efforts here investigate the effect of imposed magnetic fields on integrated hohlraum simulations of igniting systems. Previous NIF experiments have shown yield and hotspot temperature to increase in magnetized, gas-filled capsules in line with scalings. In this work, we use the 2D radiation-magnetohydrodynamics code Lasnex with a Livermore ICF common model. Simulations are tuned to closely approximate data from unmagnetized experiments. Investigated here is the effect of imposed axial fields of up to 100 T on the fusion output of high-performing ICF shots, specifically the record BigFoot shot N180128, and HYBRID-E shots N210808 and N221204. The main observed effect is an increase in the hotspot temperature due to magnetic insulation. Namely, electron heat flow is constrained perpendicular to the magnetic field and alpha trajectories transition to gyro-orbits, enhancing energy deposition. In addition, we investigate the impact of applied magnetic fields on future NIF designs, specifically an example Enhanced Yield Capability design with 3 MJ of laser energy as well as a high-ρR, low implosion velocity "Pushered Single Shell" design. In conclusion, magnetization with field strengths of 5-75 T is found to increase the burn-averaged ion temperature by 50% and the neutron yield by a factor of 2-12. Specifically, we see yield enhancement of at least 50% with only a 5-10 T applied magnetic field for N221204, while a 65 T field on N210808 with symmetrization gives an 8-fold increase in yield. This is all without further design optimization to best take advantage of an applied B field, which promises even greater improvements for designs tailored specifically towards magnetization.
https://arxiv.org/abs/2601.02588
Academic Papers
svg
3bca610d7e09e5a03ac925bbe256ff77b709cb3907508e43190252e540ccbfb0
2026-01-07T00:00:00-05:00
High-throughput, high-brightness, ultrashort 90 keV electrons at 40 kHz
arXiv:2601.02597v1 Announce Type: new Abstract: Radiofrequency-compressed keV electron sources for ultrafast electron diffraction (UED) face competing demands: short pulses require low charge, yet weak scatterers demand high flux; high repetition rates enable signal averaging, yet most systems operate $\lesssim$1 kHz with low detection efficiency. Here, we demonstrate a 90 keV DC-RF source operating at 40 kHz with direct electron detection that addresses these challenges simultaneously. THz streaking retrieves compressed pulse durations of 97 $\pm$ 3 fs (FWHM) at 370 aC and 114 $\pm$ 47 fs (FWHM) at 2.8 fC. Long-term $t_0$ timing drifts, characterized independently both by convolution analysis of compression data and direct THz streaking measurements, lie between 65-95 fs (FWHM), among the lowest reported for RF-based systems. At low charge (17 aC), we report an intrinsic pulse duration of 56 fs (FWHM) from comparison of simulations to measured compression data, among the shortest for keV UED at $>$16 aC. Moreover, 2.8 fC bunches, combined with the 40 kHz repetition rate and direct detection, produce a detectable normalized throughput that is one (three to four) orders of magnitude higher than existing keV (MeV) sources. This enables practical UED studies of weakly scattering samples and processes previously impractical due to low cross-sections and long acquisition times.
https://arxiv.org/abs/2601.02597
Academic Papers
svg
ad5fae6c359d24b9bab744215377d2858b3d3cbeb60c2a93d8fcd754914e1093
2026-01-07T00:00:00-05:00
Holotomography in 2025: From Morphometric Imaging to AI-Driven Multimodal Phenotyping
arXiv:2601.02611v1 Announce Type: new Abstract: By 2025, holotomography (HT) has matured from a niche optical modality into a versatile platform for quantitative, label-free imaging in biomedicine. By reconstructing the three-dimensional refractive-index (RI) distribution of cells and tissues, HT enables high-resolution volumetric imaging with low phototoxicity and minimal sample perturbation. This Review surveys recent advances in the field and highlights three emerging directions: (i) the incorporation of deep-learning approaches for virtual staining, phenotypic classification, and automated analysis; (ii) the extension of HT to structurally complex biological systems, including organoids and thick tissue specimens; and (iii) the integration of HT with complementary modalities, such as Raman and polarization-sensitive microscopy, to enhance molecular and biophysical specificity. We summarize current HT applications spanning subcellular phenotyping, metabolic and mechanical profiling, and early-stage clinical studies in areas such as infectious disease and pathology. Finally, we discuss remaining technical and translational challenges and outline a roadmap for the prospective integration of HT into digital pathology and high-throughput screening workflows.
https://arxiv.org/abs/2601.02611
Academic Papers
svg
394b1374464d99a8bb1314a1f97919775663c8d95ac3a5dd6f25f928fb95c184
2026-01-07T00:00:00-05:00
GKFieldFlow: A Spatio-Temporal Neural Surrogate for Nonlinear Gyrokinetic Turbulence
arXiv:2601.02614v1 Announce Type: new Abstract: We present GKFieldFlow, a novel three-dimensional autoregressive deep learning surrogate model for nonlinear gyrokinetic turbulence. Based on the FieldFlow-Net architecture, this model couples a multi-resolution 3D U-Net encoder-decoder, which operates on evolving plasma potential fields, with a dilated temporal convolutional network (TCN) that learns the nonlinear time evolution of latent turbulence features. GKFieldFlow simultaneously (i) predicts ion and electron energy fluxes and particle flux directly from CGYRO turbulence, and (ii) predicts future potential fields autoregressively at the desired spatial resolution. This enables the model to replicate both instantaneous transport and the underlying spatio-temporal dynamics that generate it. The architecture is physics-informed in its design: 3D convolutions preserve the anisotropic geometry and phase structure of gyrokinetic fluctuations, while dilated temporal convolutions capture multiscale dynamical couplings such as turbulence and zonal-flow interactions, turbulence decorrelation, and intermittent bursty transport. We provide a complete technical description of the data structure, model components, and rationale behind each architectural choice. The model achieves high accuracy across all three transport channels, with multi-horizon inference maintaining robustness. Autoregressive field rollouts preserve the spectral content, phase coherence, and energy distribution of the CGYRO nonlinear state with strong fidelity, and flux predictions remain consistent with CGYRO within a small fractional error. This work presents GKFieldFlow as a data-driven reduced model that can jointly learn turbulence dynamics and transport.
https://arxiv.org/abs/2601.02614
Academic Papers
svg
d231e537add95e3326cce89217263cfb9b0bba4aacfd0e61a8bbe10863e91745
2026-01-07T00:00:00-05:00
Coupled Microelectromechanical Drum Resonators for Reservoir Computing via Sideband Pumped Phonon-Cavity Dynamics
arXiv:2601.02617v1 Announce Type: new Abstract: Reservoir computing is a bio-inspired machine learning paradigm that exploits the intrinsic dynamics of nonlinear systems with fading memory for efficient temporal information processing. Microelectromechanical resonators offer a promising platform for reservoir computing as they inherently possess the requisite nonlinear and temporal properties while also facilitating the integration of sensing and computing within a single platform. In this work, we experimentally demonstrate a physical reservoir computing platform based on two capacitively coupled drum resonators, operating in the MHz frequency regime. Taking advantage of the concept of phonon-cavity electromechanics, a pump tone is applied at the sideband of the phonon cavity while probing one of the coupled modes, analogous to optomechanical systems, thereby creating nonlinear dynamics in energy transfer between the two resonators. Physical reservoir computing is implemented by exploiting the nonlinear response induced through pump amplitude modulation in combination with a time-delay feedback loop, and the performance is evaluated using both parity and Normalized Auto-Regressive Moving Average benchmarks. This work demonstrates a compact microelectromechanical platform for the integration of sensing and reservoir computing. Moreover, the sideband pumping scheme can further extend conventional single resonator reservoir computing to a multimode architecture.
https://arxiv.org/abs/2601.02617
Academic Papers
svg
799907546855faf08029cbbb6965a715e87a05bd4a580b3710805b923eb05b85
2026-01-07T00:00:00-05:00
Acoustic Analogy of Quantum Baldin Sum Rule for Optimal Causal Scattering
arXiv:2601.02630v1 Announce Type: new Abstract: The mass law is a cornerstone in predicting sound transmission loss, yet it neglects the constraints of causal dispersion. Current causality-based theories, such as the Rozanov limit, are applicable only to one-port reflective absorbers. Here, we derive a universal sum rule governing causal scattering in acoustic systems, establishing a rigorous analogy to the Baldin sum rule in quantum field theory. This relation reveals that the integral of the extinction cross-section is fundamentally locked by the scatterer's static effective mass and stiffness, which is validated numerically using seminal examples of underwater metamaterials. Furthermore, the proposed sum rule predicts an optimal condition for an anomalously broadened transmission loss bandwidth, as experimentally observed through the spectral shaping effect of an acoustic Fano resonator. Our findings open up an unexplored avenue for enhancing the scattering bandwidth of passive metamaterials.
https://arxiv.org/abs/2601.02630
Academic Papers
svg
40715f0a879b93a0c77488b20502e3f14517b8e31145161f1d6cbe9527ee1063
2026-01-07T00:00:00-05:00
Freestanding Resist Metasurface Supporting Higher-Order BICs for Efficient Field Enhancement in TMD Monolayers
arXiv:2601.02635v1 Announce Type: new Abstract: Enhancing light-matter coupling in two-dimensional (2D) semiconductors such as transition metal dichalcogenide monolayers remains a central challenge in nanophotonics due to their atomic thickness, which limits their interaction volume with light. Here, we demonstrate that first-order quasi-bound states in the continuum (quasi-BICs) supported by a freestanding metasurface provide exceptionally strong surface field enhancement, enabling efficient coupling with a tungsten disulfide (WS2) monolayer. Triangular-lattice polymer patterns on silicon nitride membranes are fabricated to realize these higher-order modes. Simulations reveal that first-order quasi-BICs exhibit much stronger field enhancement than zeroth-order modes at the top surface where the WS2 monolayer is placed. Photoluminescence (PL) measurements confirm a remarkable PL enhancement factor of 127 for first-order quasi-BICs, over six times larger than that of zeroth-order quasi-BICs. These results establish higher-order BICs in freestanding metasurfaces as a powerful route to engineer light-matter interactions in 2D semiconductors for advanced nanophotonic and quantum photonic applications.
https://arxiv.org/abs/2601.02635
Academic Papers
svg
987a5df9b62d00d90b3d3b9c7f59b56a5842f478fe18b6f6f7b6630f800e81eb
2026-01-07T00:00:00-05:00
Musical Molecules: Sonifying the IR Spectra and Modeling Intramolecular Vibrational Energy Redistribution of Small Molecules
arXiv:2601.02652v1 Announce Type: new Abstract: This work explores how small molecules sound. Infrared (IR) spectra of HCl, H2O, NH3, and acetone are mapped into the audible range using a simple anharmonic oscillator model and NIST vibrational data. Comparing harmonic and anharmonic sonifications reveals systematic pitch flattening, beating, and the emergence of combination bands, which are analyzed with spectrograms and autocorrelation functions. A time-dependent model of intramolecular vibrational energy redistribution (IVR) in acetone, implemented by "plucking" a single mode, produces evolving sound textures that mirror energy flow through the molecule. These results suggest that sonified IR spectra can provide an intuitive, pedagogical window into anharmonicity, mode coupling, and IVR.
https://arxiv.org/abs/2601.02652
Academic Papers
svg
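The anharmonic-oscillator mapping described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it uses the standard anharmonic term values G(v) = ω_e(v+1/2) − ω_e x_e(v+1/2)², with NIST-tabulated HCl constants (ω_e ≈ 2990.9 cm⁻¹, ω_e x_e ≈ 52.8 cm⁻¹), pins the fundamental to 440 Hz, and synthesizes the successive v → v+1 "hot bands", whose systematically lower pitch produces the flattening and beating the paper analyzes.

```python
import numpy as np

# Anharmonic term values in cm^-1: G(v) = we*(v + 1/2) - wexe*(v + 1/2)^2
# HCl constants (approximate NIST values, cm^-1).
WE, WEXE = 2990.9, 52.8

def transition_cm(v):
    """Wavenumber of the v -> v+1 transition: G(v+1) - G(v)."""
    g = lambda n: WE * (n + 0.5) - WEXE * (n + 0.5) ** 2
    return g(v + 1) - g(v)

def sonify(transitions_cm, f0_hz=440.0, sr=44100, dur=1.0):
    """Map wavenumbers into the audible range, pinning the fundamental
    transition to f0_hz, and sum the resulting sine tones."""
    scale = f0_hz / transitions_cm[0]
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    tones = [np.sin(2 * np.pi * w * scale * t) for w in transitions_cm]
    wave = np.sum(tones, axis=0)
    return wave / np.max(np.abs(wave))  # normalize to [-1, 1]

bands = [transition_cm(v) for v in range(3)]  # fundamental + two hot bands
audio = sonify(np.array(bands))
```

Because G(v+1) − G(v) = ω_e − 2ω_e x_e(v+1), each successive band is lower by 2ω_e x_e, which after scaling is heard directly as pitch flattening; writing `audio` to a WAV file (e.g. with the stdlib `wave` module) makes the beats audible.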
8ca161357472f39b7794f8625427afd4d0678fc5c2680b11c5e39329025be406
2026-01-07T00:00:00-05:00
Modulating anomalous thermal quenching behavior of stimulation luminescence via high-orbit electronic satellite-stabilized Trap state in germanate-based phosphors for 5D optical data storage
arXiv:2601.02667v1 Announce Type: new Abstract: Persistent luminescence (PersL) materials, widely used in emergency lighting and information storage, are primarily employed at room temperature. However, their luminescent performance deteriorates sharply at high temperatures. Herein, a series of Mg2GeO4:Ti4+,Ln3+ (Ln = Tb, Eu) phosphors demonstrated anomalous thermal quenching of PersL due to the temperature-dependent Fermi-Dirac distribution of bound charge carriers, with Ti4+Mg2+ acting as remote electron traps and VMg2+ as hole traps. The high carrier retention rate of the phosphors is attributed to the ability of the Ti4+Mg2+ positive charge center to strongly trap non-bonding electrons over a long range (about 20 angstroms), acting as an electronic satellite for its stable operation. Under external optical/thermal stimulation, the released electrons and holes recombine at the different luminescent levels of Tb3+, resulting in emission with different PersL branching ratios. Using these phosphors, we have developed 5D optical data storage (2D plane + trap depth + temperature + time) and an encrypted engine program for high-temperature aerospace engines. This study reveals the energy storage process of long-range trapping and release of electrons by Ti4+ electron traps, and provides a new concept for the design of PersL materials.
https://arxiv.org/abs/2601.02667
Academic Papers
svg
6a7b8aa75534a2568f7bf3e6948c1d604b0d9cdd1e09954afac8a857c58168be
2026-01-07T00:00:00-05:00
Optical Quasi-symmetry Groups for Meron Lattices
arXiv:2601.02675v1 Announce Type: new Abstract: We introduce quasi-symmetry groups in optics emerging from the commutation between mirror operation and the spin-orbit interaction (SOI) of light. Contrary to the principle of symmetry inheritance in free-space optics, where the symmetry of any structured field is strictly constrained by that of its source, we show that strong SOI enables quasi-symmetry-protected formation of meron lattices even when the underlying optical sources violate the nominal rotational symmetry. By analyzing the Hermiticity of the electric-dipole radiation amplitude in a circular polarization basis, we derive an effective mirror operator acting only on a subset of C3 polarized dipole emitters, forming a quasi-symmetry group that commutes with SOI. This quasi-symmetry guarantees exact C3 merons and gives rise to a robust polarization zone within which continuously varying input polarizations generate identical topological textures. Our work establishes quasi-symmetry as a new fundamental principle in optical physics and opens pathways to engineered topological structures of light beyond conventional symmetry constraints.
https://arxiv.org/abs/2601.02675
Academic Papers
svg
d213fcbedb5ca08c769c7347fb33b5ccf69fc4fd26593b6c82079a3955edc94e
2026-01-07T00:00:00-05:00
RONS Generation in Plasma-Activated Saline for Wound Healing
arXiv:2601.02684v1 Announce Type: new Abstract: This study explores the physicochemical modifications and antimicrobial potential of plasma-activated saline generated by exposing sodium chloride and Ringer's solutions to atmospheric pressure dielectric barrier discharge plasma. Plasma activation produced reactive oxygen and nitrogen species, leading to changes in pH, redox potential, conductivity, and concentrations of hydrogen peroxide and nitrogen oxides. Effects of activation time, voltage, and gas composition were analyzed. Antimicrobial activity against Staphylococcus aureus, Pseudomonas aeruginosa, and E. coli was assessed via MIC, CFU reduction, and biofilm inhibition tests. Optimal plasma exposure achieved strong microbial inactivation with good biocompatibility. SEM and FTIR confirmed membrane damage, supporting PAS as a safe, non-antibiotic wound irrigation and disinfectant solution.
https://arxiv.org/abs/2601.02684
Academic Papers
svg
3e3bb9e24d245ae1a0a24eac4120a6bf5adee8b9a824e34cb16fd13ff317c5e2
2026-01-07T00:00:00-05:00
Data-Driven Flow Initialization Framework for CFD Acceleration of Underwater Vehicle in Vertical-Plane Oblique Motion
arXiv:2601.02693v1 Announce Type: new Abstract: Accurate prediction of flow fields around underwater vehicles undergoing vertical-plane oblique motions is critical for hydrodynamic analysis, but it often requires computationally expensive CFD simulations. This study proposes a Data-Driven Flow Initialization (DDFI) framework that accelerates CFD simulation by integrating a deep neural network (DNN) to predict full-domain flow fields. Using the SUBOFF hull under various inlet velocities and angles of attack as an example, a DNN is trained to predict velocity, pressure, and turbulent quantities based on mesh geometry, operating conditions, and hybrid vectors. The DNN can provide reasonably accurate predictions with a relative error of about 3.3%. To enhance numerical accuracy while maintaining physical consistency, the DNN-predicted flow fields are utilized as initial solutions for the CFD solver, achieving up to 3.5-fold and 2.0-fold speedups at residual thresholds of 5×10^-6 and 5×10^-8, respectively. This method maintains physical consistency by refining neural network outputs via traditional CFD solvers, balancing computational efficiency and accuracy. Notably, reducing the size of the training set does not substantially affect acceleration performance. Besides, this method exhibits cross-mesh generalization capability. In general, the proposed hybrid approach offers a new pathway for high-fidelity and efficient full-domain flow field predictions around complex underwater vehicles.
https://arxiv.org/abs/2601.02693
Academic Papers
svg
04465c8abcd622c6d5acdf1002eb49a282ae00b173e5ef09ae97a31d1cc3cfcb
2026-01-07T00:00:00-05:00
Uncooled low-noise thin-film optomechanical resonator for thermal sensing on lithium niobate
arXiv:2601.02715v1 Announce Type: new Abstract: Optomechanical transduction harnesses the interaction between optical fields and mechanical motion to achieve sensitive measurement of weak mechanical quantities with inherently low noise. Lithium niobate combines low optical loss, strong piezoelectricity, a high intrinsic f·Q_m product, and low thermal conductivity, making it promising for exploring optomechanical platforms targeting thermal sensing applications. Here, we developed an integrated optomechanical platform on thin-film lithium niobate with precisely engineered optical, mechanical, and thermal fields within a compact 40 μm by 40 μm footprint. The platform integrates suspended microring resonators with ultrathin central membranes, reducing mechanical stiffness and effective mass while maintaining a high optical quality factor Q_o of 1×10^6 and a mechanical quality factor Q_m of 1117, which increases to 5.1×10^4 after oscillation. The design suppresses thermal dissipation into the silicon substrate and enhances thermal sensitivity, achieving a temperature coefficient of frequency of -124 ppm/K and a noise-equivalent power of 6.2 nW/√Hz at 10 kHz at room temperature. This compact and scalable platform opens up new opportunities for high-sensitivity thermal sensing, supports heterogeneous integration with infrared absorbers for uncooled infrared detection, and enables fully integrated, all-optical on-chip readout, paving the way toward large-format, low-noise infrared sensing arrays.
https://arxiv.org/abs/2601.02715
Academic Papers
svg
7a005b2b87773fd00ef2e09b07a222d0ae6765a5c18c1445a97b0c3affea969c
2026-01-07T00:00:00-05:00
Photonic Waveguide Circuit Integrated with Carbon Nanotube Single-Photon Source Operating at Room Temperature
arXiv:2601.02758v1 Announce Type: new Abstract: Photonic integrated circuits require robust room-temperature single-photon sources to enable scalable quantum technologies. Single-walled carbon nanotubes (CNTs), with their unique excitonic properties and chemical tunability, are attractive candidates, but their integration into photonic circuits remains challenging. In this work, we demonstrate the integration of functionalized CNTs as room-temperature single-photon emitters into photonic cavities and waveguide circuits. (6,5) CNTs with aryl sp$^3$ defects are either stochastically deposited via drop-casting or deterministically positioned on photonic cavities using an anthracene-assisted transfer method guided by real-time photoluminescence monitoring. Photoluminescence spectra reveal cavity-enhanced emission, while second-order autocorrelation measurements confirm single-photon propagation through the photonic integrated circuit, highlighting the potential of CNTs for scalable, room-temperature quantum photonic applications.
https://arxiv.org/abs/2601.02758
Academic Papers
svg
ecbaec584aa1cd87064bbe3e152e696d75cfaa2dcda0978fe739903c0f0f5f39
2026-01-07T00:00:00-05:00
Thermally adaptive textile inspired by morpho butterfly for all-season comfort and visible aesthetics
arXiv:2601.02774v1 Announce Type: new Abstract: A longstanding challenge in personal thermal management has been transitioning from static, appearance-limited passive radiative cooling (PDRC) materials to systems that are both dynamically adaptive and visually versatile. The central hurdle remains the inherent compromise between color saturation and cooling power. Inspired by organisms such as butterflies, which decouple structural color from thermal function, we present a smart textile that seamlessly merges a dynamic thermochromic layer with static photonic crystals (PCs). This design enables the solar reflectance to be autonomously switched from approximately 0.6 in the colored state for heating to about 0.9 in the high-reflectance state for cooling. Consequently, outdoor experiments validated substantial temperature regulation: the fabric achieves a surface temperature reduction of 3-4 °C in summer and a heating difference of <1 °C in winter compared to commercial reference materials, all while maintaining high-saturation colors. This dual-mode operation offers a viable pathway for achieving adaptive, aesthetic, and energy-free thermal comfort.
https://arxiv.org/abs/2601.02774
Academic Papers
svg
b04481f3b8bcb4327601dcd002b6e6399832fdc54d1067ed4ea863a6b02a9545
2026-01-07T00:00:00-05:00
Development of CMOS LGAD sensors for the ALICE 3 Time of Flight detector
arXiv:2601.02823v1 Announce Type: new Abstract: The next-generation ALICE 3 experiment at the High-Luminosity LHC (HL-LHC) requires detector technologies that combine fine spatial resolution, fast timing, and an extremely low material budget. This paper presents the design, characterization, and beam-test performance of MadPix, a monolithic CMOS sensor featuring an internal avalanche gain layer. The sensor is implemented in a 110 nm CMOS imaging process and demonstrates the portability of the Low Gain Avalanche Diode concept to a standard CMOS technology. The results showed an intrinsic gain between 10 and 13 and a time resolution of 75 ps.
https://arxiv.org/abs/2601.02823
Academic Papers
svg
9399127a980753282a4d81420f0b6a6f5ddeac1a73d95d31221ccd3b7cc09c9c
2026-01-07T00:00:00-05:00
High-Q AlN microresonators for nonlinear near-infrared and near-visible photonics
arXiv:2601.02842v1 Announce Type: new Abstract: High Q-factors of microresonators are crucial for nonlinear integrated photonics, as many nonlinear dynamics have quadratic or even cubic dependence on Q-factors. Unique material properties make AlN microresonators invaluable for microcomb generation, Raman lasing, and visible integrated photonics. However, the loss level of AlN falls behind other integrated platforms. By optimizing the fabrication, we demonstrate record Q-factors of 5.4$\times$10$^6$ and 2.2$\times$10$^6$ for AlN microresonators in the near-infrared and near-visible, respectively. Polarized-mode interaction was used to create anomalous dispersion to support bright AlN Dirac solitons. Measurement of polarization-dependent spectra reveals the polarization hybridization of the Dirac soliton. In a microresonator with normal dispersion, Raman-assisted four-wave mixing (RFWM) was observed to initiate platicon formation, adding an approach to generate normal-dispersion microcombs. A design of width-varying waveguides was used to ensure both efficient coupling and high Q-factor for racetrack microresonators at 780 nm. The microresonator was pumped to generate a near-visible Raman laser at 820 nm with a fundamental linewidth narrower than 220 Hz. Our work unlocks new opportunities for integrated AlN photonics by improving Q-factors and uncovering nonlinear dynamics in AlN microresonators.
https://arxiv.org/abs/2601.02842
Academic Papers
svg
034a77ac51b8676e5b4c12b2a349427bebc3ff5c4ab37fee1b209036852abd54
2026-01-07T00:00:00-05:00
A Vehicle-portable Ultra-stable Laser for Operating on Highways
arXiv:2601.02843v1 Announce Type: new Abstract: Portable ultra-stable lasers are essential for high-precision measurements. This study presents a 1550 nm vehicle-portable ultra-stable laser designed for continuous real-time operation on highways. We implement several measures to mitigate environmental impacts, including active temperature control with a standard deviation of mK/day to reduce frequency drift of the optical reference cavity, all-polarization-maintaining fiber devices to enhance the robustness of the optical path, and highly integrated electronic units to diminish thermal effects. The performance of the ultra-stable laser is evaluated through real-time beat frequency measurements with another similar ultra-stable laser over a transport distance of approximately 100 km, encompassing rural roads, national roads, urban roads, and expressways. The results indicate frequency stability of approximately 10^-12 over averaging times of 0.01-100 s during transport, about 5×10^-14 at 1 s while the vehicle is stationary with the engine running, and around 3×10^-15 at 1 s with the engine off, all without active vibration isolation. This work marks the first recorded instance of a portable ultra-stable laser achieving continuous real-time operation on highways and lays a crucial foundation for non-laboratory applications, such as mobile laser communication and dynamic free-space time-frequency comparison.
https://arxiv.org/abs/2601.02843
Academic Papers
svg
5e90ef42966ba61732585b11aa8835dde716b8d9a5495e6ae3f827f45e22dc84
2026-01-07T00:00:00-05:00
ML enhanced measurement of the electrostatic charge distribution of powder conveyed through a duct
arXiv:2601.02852v1 Announce Type: new Abstract: The electrostatic charge acquired by powders during transport through ducts can cause devastating dust explosions. Our recently developed laser-optical measurement technique can resolve the powder charge along a one-dimensional (1D) path. However, the charge across the duct's complete two-dimensional (2D) cross-section, which is the critical parameter for process safety, is generally unavailable due to limited optical access. To estimate the complete powder charge distribution in a conveying duct, we propose a machine learning (ML) approach using a shallow neural network (SNN). The ML algorithm is trained with cross-sectional data extracted from four different three-dimensional direct numerical simulations of a turbulent duct flow with varying particle size. Through this training with simulation data, the ML algorithm can estimate the powder charge distribution in the duct's cross-section based on only 1D measurements. The results reveal an average $L^1$-error of the reconstructed 2D cross-section of 1.63%.
https://arxiv.org/abs/2601.02852
Academic Papers
svg
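The reconstruction idea in the abstract above can be sketched as a shallow neural network that maps a 1D line measurement to a flattened 2D cross-section. The architecture, layer sizes, and the linear toy "physics" below are illustrative assumptions, not the paper's actual setup or its DNS training data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LINE, N_CELLS = 8, 16            # 1D probe points; flattened 2D cells

# Single hidden layer -> "shallow" neural network (SNN)
W1 = rng.normal(0, 0.1, (N_LINE, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, N_CELLS)); b2 = np.zeros(N_CELLS)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Synthetic training pairs: a fixed linear map stands in for the
# simulated charge fields used in the paper (an assumption here).
A = rng.normal(0, 1, (N_LINE, N_CELLS))
X = rng.normal(0, 1, (200, N_LINE))
Y = X @ A

def rel_l1():
    # relative L1 error of the reconstructed cross-section, in percent
    return np.mean(np.abs(forward(X) - Y)) / np.mean(np.abs(Y)) * 100

l1_before = rel_l1()
for _ in range(1000):              # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - Y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= 5e-3 * gW2; b2 -= 5e-3 * gb2
    W1 -= 5e-3 * gW1; b1 -= 5e-3 * gb1
l1_after = rel_l1()

print(f"relative L1 error: {l1_before:.1f} % -> {l1_after:.1f} %")
```

The training loop is deliberately minimal; the point is only that a one-hidden-layer network suffices to learn a line-to-cross-section mapping when the underlying relation is smooth.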
a0452c2668bc7682c5c65d8739133ba5dca8526daf4138a0221f59a96a584344
2026-01-07T00:00:00-05:00
Finite Element Simulation of NMC Particle Fracture during Calendering: a Route to Optimize Electrode Microstructures
arXiv:2601.02879v1 Announce Type: new Abstract: Beyond active material intrinsic properties, the electrode manufacturing process is a crucial step to reach high energy density and long life of Li-ion batteries. In particular, very high pressures are applied to the electrode during the calendering step, that directly influence the microstructure and the electrochemical performances. This article reports the first calendering simulation of a NMC cathode using a finite element method (FEM), including the post-fracturation behaviour of the secondary NMC particles. Calibrated with nano-indentation experiments, the mechanical model provides stress-strain predictions fully consistent with experimental data. On assemblies up to 100 particles, simulations reveal three calendering regimes along compression: particle rearrangement, moderate-pressure fracturing, and complete crushing. The model shows the strong sensitivity of the electrode microstructure to the calendering pressure level, and can thus be used as guidance in the multi-criteria optimization of the manufacturing process.
https://arxiv.org/abs/2601.02879
Academic Papers
svg
651b8a8169a17985620fd296115aa83aa8175f4c8ca3e2f442244d514de9a14a
2026-01-07T00:00:00-05:00
What happens if you put your head in the Geneva water jet? An inquiry-based physics activity exploring fluid dynamics
arXiv:2601.02934v1 Announce Type: new Abstract: We describe a physics education activity for third-year Bachelor students, inspired by a humorous question about the Geneva water jet. The exercise engages students in key scientific practices: reformulating everyday questions in scientific terms, constructing simplified models, performing semi-quantitative estimations, and comparing alternative solution methods. Students explore approaches based on the Bernoulli principle and on a power analysis, revealing consistent results when assumptions are carefully considered. The activity emphasizes critical reasoning, including identifying relevant data, making approximations, and applying energy and mass conservation to incompressible fluids. It also fosters metacognitive skills and higher-order thinking (HOT), illustrating the universality of fundamental physical principles across diverse phenomena. By situating the task in a relatable, real-world context, the activity motivates students while exposing them to problem-solving challenges rarely encountered in traditional instruction, such as Fermi-type estimation and cross-context knowledge transfer.
https://arxiv.org/abs/2601.02934
Academic Papers
svg
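The two estimation routes mentioned in the abstract above (Bernoulli and power analysis) can be worked through numerically. The input figures are commonly quoted values for the Geneva Jet d'Eau (exit speed about 200 km/h, flow rate about 500 L/s), used here as assumptions; they are not taken from the paper.

```python
# Semi-quantitative Fermi-style estimate of the jet height and the
# hydraulic power, in the spirit of the described activity.

g = 9.81                 # gravitational acceleration, m/s^2
v = 200 / 3.6            # exit speed, m/s (~55.6 m/s, assumed figure)
Q = 0.500                # volumetric flow rate, m^3/s (assumed figure)
rho = 1000.0             # water density, kg/m^3

# Bernoulli route: kinetic energy fully converts to potential energy,
# neglecting air drag -> h = v^2 / (2 g).
h = v**2 / (2 * g)

# Power route: kinetic-energy flux delivered to the water at the nozzle,
# P = (1/2) * rho * Q * v^2.
P = 0.5 * rho * Q * v**2

print(f"ideal height ~ {h:.0f} m, hydraulic power ~ {P/1e3:.0f} kW")
```

The drag-free height comes out near 157 m, comfortably above the jet's observed height of roughly 140 m, which is consistent with the abstract's point that the two methods agree once the neglected losses are acknowledged.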
38285007bdb26febb8e0e2c5fbb35677f24bb365b03b63eb883d3d7277edfd6a
2026-01-07T00:00:00-05:00
A First-Principles Closure for Nonlocal Magnetized Transport
arXiv:2601.02937v1 Announce Type: new Abstract: A reduced kinetic method (RKM) for describing nonlocal transport in magnetized plasmas is derived from first principles and considered in a 1D3V geometry. Unlike standard nonlocal closures, this RKM uses the Fokker-Planck collision operator, therefore local transport results are naturally reproduced for small Knudsen number. An inhibited peak heat flux and preheat of the conductive heat flux are observed, which are expected from physical arguments and previous kinetic studies. Nonlocal behavior of other transport fluxes, namely the Righi-Leduc, Peltier, Ettingshausen, Nernst, thermal force, friction, cross friction, viscous stress, and gyroviscous stress terms are also demonstrated. Neglecting the nonlinear component of the Fokker-Planck collision operator is justified a posteriori. An especially computationally efficient and analytically simpler version of the RKM is presented.
https://arxiv.org/abs/2601.02937
Academic Papers
svg
c05a8456feffada8bcf33aab3f6a8f9745b7402ae8cc1fc1afbec44201524ca8
2026-01-07T00:00:00-05:00
What Is the Minimum Number of Parameters Required to Represent Solutions of the Grad-Shafranov Equation?
arXiv:2601.02942v1 Announce Type: new Abstract: Fast and accurate solutions of the Grad--Shafranov (GS) equation are essential for equilibrium analysis, integrated modeling, and surrogate model construction in magnetic confinement fusion. In this work, we address a fundamental question: what is the minimum number of free parameters required to accurately represent numerical solutions of the GS equation under fixed-boundary conditions? We demonstrate that, for most practical applications, GS equilibria can be represented using only 2--5 free parameters while maintaining relative errors below 5\%. For higher-accuracy requirements, we introduce a unified spectral representation based on the Miller extended harmonic (MXH) expansion in the poloidal direction combined with shifted Chebyshev (Cheb) polynomials in the radial direction. This MXH--Cheb basis exhibits rapid convergence for two-dimensional GS equilibria. For configurations where three geometric moments (shift, elongation, and triangularity) are specified at the last closed flux surface (LCFS), relative errors on the order of $10^{-2}$--$10^{-3}$ can be achieved using as few as 13--20 parameters. In more general cases, including up--down asymmetric equilibria, X-point configurations, and stiff pressure and current profiles (e.g., H-mode pedestals), accuracies beyond this level can be obtained with fewer than 100 parameters. The resulting equilibrium configurations and profile functions are fully analytical, with smooth derivatives of all orders. These results provide a systematic foundation for developing high-fidelity, ultra-fast GS solvers and enable efficient reduced-order and AI-based surrogate modeling of tokamak equilibria.
https://arxiv.org/abs/2601.02942
Academic Papers
svg
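The rapid convergence claimed for the shifted-Chebyshev radial expansion in the abstract above can be illustrated on a toy profile. This is not the paper's MXH-Cheb basis or its equilibria; the smooth pressure-like profile and parameter counts below are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a smooth radial profile on rho in [0, 1] with increasing numbers
# of shifted-Chebyshev coefficients and record the max absolute error.
rho = np.linspace(0.0, 1.0, 201)
profile = np.exp(-3.0 * rho**2)    # smooth, pedestal-free model profile

errors = []
for n_par in (3, 5, 9):
    # shift [0, 1] -> [-1, 1], the native Chebyshev domain
    coeffs = C.chebfit(2 * rho - 1, profile, n_par - 1)
    fit = C.chebval(2 * rho - 1, coeffs)
    errors.append(float(np.max(np.abs(fit - profile))))

print(["%.1e" % e for e in errors])
```

For smooth (analytic) profiles the error decays roughly geometrically with the number of coefficients, which is the property that lets a handful of parameters represent an equilibrium profile; stiff H-mode-like pedestals would need more terms, as the abstract notes.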
3f1e75d04cf872bf5ac3251877d9dc4ffd0988032d6855ac0be2338e1c7a7e2f
2026-01-07T00:00:00-05:00
Defect Landscape Engineering Suppresses Helium Damage in Ceramics
arXiv:2601.02946v1 Announce Type: new Abstract: Helium accumulation in structural ceramics used in nuclear, fusion, and aerospace systems causes swelling, cracking, and early failure, yet controlling this damage has remained elusive. Here, we introduce defect landscape engineering, the deliberate creation of vacancy clusters prior to helium exposure, as a general strategy to suppress helium-induced degradation. Using $\alpha$-SiC as a model, we combine advanced microscopy, strain mapping, helium depth profiling, positron annihilation spectroscopy, and atomistic simulations to demonstrate that tailored pre-damage transforms helium defect evolution. Instead of forming extended platelets and nanocracks, helium is trapped in stable, uniformly dispersed nanobubbles. Simulations reveal that small vacancy clusters act as dual-function sinks for irradiation-induced interstitials and preferential helium traps, fundamentally altering cascade recombination dynamics. This mechanism is composition-independent and scalable, offering a new design principle for radiation-tolerant ceramics across carbides, nitrides, and oxides. By viewing defect control as a tunable parameter instead of a fixed material property, this work outlines a possible design route toward enhanced radiation tolerance in ceramics used in extreme environments.
https://arxiv.org/abs/2601.02946
Academic Papers
svg
08f2446ec45209072a44d668e1451f957d6e69ec417efe1f66d533c4174521b3
2026-01-07T00:00:00-05:00
Charged excitations made neutral: N-centered ensemble density functional theory of Fukui functions
arXiv:2601.02985v1 Announce Type: new Abstract: We introduce an in-principle exact working equation to compute Fukui functions within $N$-centered ensemble DFT. It avoids the kernel derivative discontinuity problem of DFT for fractional number of electrons, whose contribution is recovered through weight-derivatives of the ensemble density functional potential. We explore practical strategies to compute its contribution by recycling ground-state density-functional approximations dressed with a weight-dependent scaling function. We also show that interpolating between known limits of the ensemble functional and learning from uniform density profiles are very effective strategies.
https://arxiv.org/abs/2601.02985
Academic Papers
svg
eb81faae5652b87906ff3cdd5fe6b7bc200ae9a5d39415016fe0e9ba49a11d02
2026-01-07T00:00:00-05:00
Effective Hamiltonian based DNP Sequence Optimization
arXiv:2601.03004v1 Announce Type: new Abstract: Dynamic nuclear polarization (DNP) enhances the intensity of NMR signals by transferring polarization from electron spins to nuclei via microwave irradiation. Pulsed DNP methods offer more control on the spin dynamics than conventional continuous-wave approaches. Here, we report on-resonance and off-resonance DNP sequences optimized using effective Hamiltonians derived from continuous Floquet theory. Experiments at 80 K and 0.35 T using a sample of 5 mM Trityl OX063 in a glycerol-d8/D2O/H2O matrix (60:30:10, v/v/v) demonstrate that the optimized on-resonance sequence achieves 100 MHz electron offset bandwidth, while the off-resonance sequence centered at an electron offset of 50 MHz can cover 20 MHz, with 25 MHz and 20 MHz of microwave power, respectively. These results demonstrate that continuous Floquet theory is a useful framework for the optimization of pulsed DNP sequences.
https://arxiv.org/abs/2601.03004
Academic Papers
svg
3062795d2a213f9b68da1339b9a4bf8ef8a229b2f5bc0da409e3a5bb27716593
2026-01-07T00:00:00-05:00
Statistical State Dynamics Based Study of the Turbulent Ekman Layer
arXiv:2601.03033v1 Announce Type: new Abstract: Streamwise roll and streak structures (RSS) are prominent features observed in both atmospheric and oceanic planetary boundary layers (PBL) as well as in laboratory-scale wall-bounded shear flows. Despite their structural similarity across these systems, the mechanisms responsible for forming and sustaining the RSS remain debated. This study demonstrates that the same turbulence-sustaining mechanism previously identified in wall-bounded shear flows using the Statistical State Dynamics (SSD) formulation of the Navier-Stokes equations (Farrell & Ioannou 2012; Farrell et al. 2017) also operates in the Ekman layer. By extending the SSD-based stability analysis methods previously used for studying roll formation in wall-bounded shear flows to the Ekman layer, we show that the well-known Reynolds-stress-driven instability mechanism in wall-bounded turbulence acts together with inflectional instability to produce and sustain RSS in the Ekman layer. These results enhance the mechanistic understanding of RSS formation and evolution in the turbulent Ekman layer and provide a fundamental link between geophysical Ekman-layer turbulence and turbulence in engineering-scale shear flows.
https://arxiv.org/abs/2601.03033
Academic Papers
svg
fb7a3282b495f00d283c71c4f5633163f95f539d50ae29e1edbf937b28488244
2026-01-07T00:00:00-05:00
Harnessing Evanescent Wave Interaction for Enhanced Optical NO2 Detection with Carbon Nanotube-Coated Side-Polished Fiber
arXiv:2601.03071v1 Announce Type: new Abstract: Evanescent-wave gas sensors employing side-polished optical fibers (SPFs) functionalized with nanomaterial coatings represent a promising platform for compact, sensitive detection. While single-walled carbon nanotube (SWCNT) films are recognized for their gas adsorption capabilities, their integration with photonic structures often overlooks complex light-matter interactions. In this work, we report a counterintuitive polarization-dependent response in an evanescent-wave NO2 sensor, fabricated by depositing aerosol-synthesized SWCNT thin films onto SPFs. The device demonstrates high performance, including a limit of detection of 400 ppb and stable operation in humid environments. However, its sensing behavior deviates strikingly from established models: upon NO2 exposure, transmitted light intensity increases for TM polarization but decreases for TE polarization, a phenomenon not attributable solely to changes in the intrinsic absorption of the SWCNTs. We pinpoint that the dominant mechanism is a gas-induced alteration of the SWCNT film's complex refractive index, which subsequently perturbs the evanescent field mode profile of the waveguide. Numerical simulations confirm that accounting for this mode-profile redistribution is essential to accurately describe the sensor's response. The revealed mechanism provides an important design framework for advanced evanescent-field sensors based on tunable nanomaterial claddings.
https://arxiv.org/abs/2601.03071
Academic Papers
svg
1e4bf9602b6dd443aef2823d97ecf9825cfa5f1286d4ada4150e94e5df643a2f
2026-01-07T00:00:00-05:00
Unifying Viscocapillary and Inertial Regimes in Selective Withdrawal
arXiv:2601.03074v1 Announce Type: new Abstract: Selective withdrawal extracts only a single phase from a stratified multi-layer system; entrainment occurs when a critical condition draws up the static layer that is not being withdrawn. Existing studies provide robust scalings within distinct limiting regimes, including viscocapillary-dominated entrainment at low Reynolds number and inertia-dominated entrainment at high Reynolds number, but a single unifying representation remains to be explored in the literature. This limitation is most evident in transitional conditions between the classical limits, and is especially pronounced when the lower layer is non-Newtonian. Here we report selective-withdrawal experiments spanning these conditions: the upper layer is Newtonian (PDMS or soybean oil), while the lower layer is either Newtonian water or a shear-thinning xanthan-gum solution. We propose a unified framework that connects these previously separated regimes by adopting a ``Moody diagram''-type representation for selective withdrawal, collapsing the normalized critical submergence height with a Reynolds-like control parameter; surface-tension effects enter subdominantly through the capillary length. The resulting master curve captures the transition between dominant balances, connecting viscous and shear-controlled entrainment to inertial entrainment. The collapse also clarifies how shear thinning enters the organization: it primarily renormalizes the viscous correction through an effective viscosity, without altering the inertial baseline scale that anchors the normalization. This regime-spanning representation avoids regime-by-regime correlation switching and provides a compact diagnostic for entrainment thresholds across Newtonian and generalized-Newtonian two-layer systems.
https://arxiv.org/abs/2601.03074
Academic Papers
svg
a8362bf9868572a22d6c9ae34888c5573baff2630b14ff7ede1b262b0f5c2d77
2026-01-07T00:00:00-05:00
Collective light-matter interaction in plasmonic waveguide quantum electrodynamics
arXiv:2601.03142v1 Announce Type: new Abstract: Rabi oscillations characterize light-matter hybridization in the waveguide quantum electrodynamics~(WQED) framework, with their associated decay rates reflecting excitation damping, yet their behavior remains unresolved when collective emitters are coupled to a collective waveguide mode. This scenario reveals a conceptually novel collective-light-collective-matter interaction, realizable when a timed-Dicke state~(TDS) of subwavelength emitters couples to a slow, delocalized surface-plasmon mode, forming a hybridized plasmon-polariton~(HPP). The HPP acquires its directionality from the TDS via momentum matching. It also exhibits plasmonic characteristics, with excitation frequencies following the surface-plasmon dispersion relation. We obtain a Rabi oscillation and a long-time decay that describe the HPP and use them to reveal weak- and strong-coupling regimes through the emergence of normal-mode splitting. By performing a finite-time Lyapunov-exponent analysis, we show that the HPP also exhibits instantaneous decay and identify three distinct decay regimes: early-time rapid, transient-time oscillatory, and long-time classical. Finally, by analyzing the emission spectrum, we observe an anticrossing of the peak doublets~(a feature also seen in cavity QED setups) which originates from quantum vacuum effects and the resulting non-Markovian HPP evolution in our WQED.
https://arxiv.org/abs/2601.03142
Academic Papers
svg
c14a3f2c58842b43c69c9a7501b522843267b80c140ce3ab5dcc8d56a1c3c6d2
2026-01-07T00:00:00-05:00
Fast and slow surfactants in turbulent bubble breakup
arXiv:2601.03157v1 Announce Type: new Abstract: When a large air cavity breaks in a turbulent flow, it goes through very large deformations and cascading events of new interface formation, including elongated filaments and bubbles over a wide range of scales, with their rate of formation controlled by turbulence and capillary processes. We experimentally investigate the effects of surfactants and salt on the fragmentation, and observe an order of magnitude increase of the number of bubbles being produced in some cases. For bubbles larger than the Hinze scale $d_H$ (defined as the balance between surface tension and turbulence stresses), we observe that bubble size distributions remain unchanged for all solutions tested. For bubbles below $d_H$, however, we observe an increase of the number of bubbles produced and an associated steepening of the bubble size distribution upon the addition of surfactant or salt. This latter effect is only visible for some of the surfactants tested when their adsorption timescale is fast enough compared to the rate at which new interfaces are being generated by turbulence.
https://arxiv.org/abs/2601.03157
Academic Papers
svg
c3c63e3948b12443e41cb4c60fd9f8e6e0965fb0e1fdd7fb868fdac4c4795ae4
2026-01-07T00:00:00-05:00
Optimization of Cryogenic Detector Test Station by Rejecting Electromagnetic Interference
arXiv:2601.03158v1 Announce Type: new Abstract: We report on a readout solution optimized for characterizing superconducting nanowire single-photon detectors (SNSPDs) by rejecting electromagnetic interference (EMI) from various sources. The proposed readout method enhances measurement stability and enables reliable device characterization at low bias currents, where the signal-to-noise ratio is typically limited. By effectively suppressing EMI-induced noise, the method improves the ability to distinguish genuine detection events from spurious signals and reduces the effort required for data analysis. The approach has been applied to preliminary measurements of SNSPDs exposed to $\alpha$ particles emitted from a $^{241}$Am source, demonstrating stable operation and clean signal acquisition. While a detailed study of $\alpha$ detection is underway, the method establishes a foundation for further characterization of SNSPDs with various incident particles. The demonstrated EMI rejection technique is expected to facilitate future research in particle detection and support ongoing SNSPD development for applications in nuclear and accelerator-based experiments.
https://arxiv.org/abs/2601.03158
Academic Papers
svg
3ae930e4bcffe7b2a6e565c21827e9425a8b9a8b42be07661247608df6c79702
2026-01-07T00:00:00-05:00
Feasibility study of the positronium lifetime imaging with the Biograph Vision Quadra and J-PET tomographs
arXiv:2601.03172v1 Announce Type: new Abstract: Background: After its first ex-vivo and in-vivo demonstration, Positronium Lifetime Imaging (PLI) has received considerable interest as a potential new diagnostic biomarker. High sensitivity Positron Emission Tomography (PET) systems are needed for PLI since it requires simultaneous registration of annihilation photons and prompt gamma. In this simulation-based study, the feasibility of PLI with the long axial field-of-view Biograph Vision Quadra (Quadra) and the Total Body J-PET scanner was investigated. Methods: The study was performed using the GATE software. Background radiation, present within the Quadra tomograph, was added to the simulation. First, the optimal placement of the energy window for the registration of the prompt gamma was investigated. Next, the organ-wise sensitivity of Quadra was calculated for the $^{68}$Ga, $^{44}$Sc, $^{22}$Na and $^{124}$I radioisotopes. Finally, the sensitivity for the scandium isotope was compared to the sensitivities obtainable with the Total Body J-PET scanner, as well as with the modular J-PET prototype. Results: The PLI sensitivities for the Quadra with the background radiation are estimated to be 9.22(3), 10.46(4), 5.91(3), and 15.39(4) cps/kBq for the $^{44}$Sc, $^{68}$Ga, $^{22}$Na and $^{124}$I radioisotopes, respectively. The highest sensitivity was obtained when the energy window for the deexcitation photon is adjacent to the energy window for the annihilation photons. The determined PLI sensitivities with Quadra and the Total Body J-PET are on the order of the sensitivities of standard PET imaging with the short axial field-of-view ($\sim$20 cm) PET scanners. Conclusion: The organ-wise PLI sensitivity of Quadra has been computed for the $^{68}$Ga, $^{44}$Sc, $^{22}$Na and $^{124}$I radioisotopes. A sensitivity gain by a factor of 150 was estimated relative to the modular J-PET system previously used for the first in-vivo PLI.
https://arxiv.org/abs/2601.03172
Academic Papers
svg
5322a5d22d9bbfb0140b0316c39c0ba287a2b173799c0645a6764fdb0c8e42d2
2026-01-07T00:00:00-05:00
Modelling and Simulation of the Propagation of P-SV Seismic Waves from Earthquakes: Application to Deep Earthquakes in Acre, Brazil
arXiv:2601.03177v1 Announce Type: new Abstract: Brazil is located in the central-eastern portion of the South American Plate, meaning that the country mostly experiences low-intensity seismic activity within its territory. However, some geological faults in this region have generated intense earthquakes. In this context, we intend to describe a recent earthquake of magnitude around 6.5 $M_b$ that occurred at a depth of approximately 600 km in the state of Acre, Brazil. In this work, we modeled the propagation of P-SV seismic waves using a two-dimensional system of partial differential equations (PDEs) in a two-dimensional vertical rectangular domain. The source is modeled by a Gaussian pulse function. The initial quiescence condition and Neumann boundary conditions are used. The PDE system is discretized by the finite difference method (FDM) and solved by the Gauss-Seidel method (GSM). The numerical simulations obtained describe the propagation of attenuated seismic waves in multiple geological layers, simulating intense and deep earthquakes in Acre. We used the propagation of perfect seismic waves to validate the model. The results include images of the simulations and theoretical seismograms simulating the vertical and horizontal displacement in the epicenter region and 200 km east and west of the epicenter.
https://arxiv.org/abs/2601.03177
Academic Papers
svg
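The ingredients listed in the abstract above (Gaussian pulse source, quiescent initial condition, Neumann boundaries, finite differences) can be sketched on a much simplified 1D scalar-wave analogue. Note the paper discretizes a 2D P-SV system and solves it with Gauss-Seidel; this sketch uses an explicit leapfrog update on a 1D scalar wave purely for brevity, and all grid and source parameters are illustrative assumptions.

```python
import numpy as np

nx, nt = 200, 400
dx, dt, c = 1.0, 0.4, 1.0          # CFL number c*dt/dx = 0.4 < 1 (stable)
x = np.arange(nx) * dx

u_prev = np.zeros(nx)              # quiescence: u = du/dt = 0 at t = 0
u = np.zeros(nx)

for n in range(nt):
    # Gaussian pulse in space, modulated in time, injected near the centre
    src = 0.01 * np.exp(-0.5 * ((x - nx * dx / 2) / (3 * dx))**2) \
               * np.sin(0.2 * n)
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # second spatial difference
    # explicit leapfrog update of u_tt = c^2 u_xx + src
    u_next = 2 * u - u_prev + (c * dt / dx)**2 * lap + dt**2 * src
    u_next[0], u_next[-1] = u_next[1], u_next[-2]  # Neumann (zero-gradient)
    u_prev, u = u, u_next

print("max |u| =", float(np.max(np.abs(u))))
```

A synthetic seismogram would simply record `u` at a fixed receiver index over time; attenuation and layered media, which the paper includes, are omitted here.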
4e1965dc706449a5172527313f8a56dcc5f0be9dc676275958d29fd1739d351c
2026-01-07T00:00:00-05:00
Open-Source Coil Matching Toolbox for Magnetic Stimulation and Other Electromagnetics (COMATOSE)
arXiv:2601.03224v1 Announce Type: new Abstract: The coil in transcranial magnetic stimulation (TMS) determines the spatial shape of the electromagnetic field in the head, which structures are concurrently activated, and how focal stimulation is. Most of the readily available coils have been designed intuitively rather than by systematic mathematical-physical optimization, as there were no methods available at the time. Previous research, however, demonstrated that these coils are far from optimum, e.g., for pulse energy or efficiency, and leave substantial room for improvement. Techniques for rigorous mathematical optimization have been developed but are only available to very few groups worldwide. This paper presents an open-source toolbox, COMATOSE, to change that situation and make these methods available to a wider community. It incorporates the fundamental formalisms and offers vector space decomposition as well as base mapping as an explicit forward method, which is computationally less demanding than iterative computational optimization but can also form the initial solution for a subsequent optimization run if desired.
https://arxiv.org/abs/2601.03224
Academic Papers
svg
9add334f51853b92c85909d645fe8b0b59aa2276102fc19539d23e7051a9d1ad
2026-01-07T00:00:00-05:00
Nutritional and growth enhancement of alfalfa sprouts through cold plasma and UV seed treatments
arXiv:2601.03255v1 Announce Type: new Abstract: Employing eco-friendly techniques like cold plasma (CP) and ultraviolet (UV) radiation provides innovative approaches to enhance the sprout quality and productivity of alfalfa. This study explores the effects of CP and UV radiation on the germination, growth, and phytochemical profiles of alfalfa sprouts. CP significantly accelerated germination time, reducing median germination time by 8 hours compared to the control, and enhanced photosynthetic pigments, leading to higher biomass (25.87 mg/sprout fresh weight and 1.45 mg/sprout dry weight). UV treatments, particularly UV-C, increased chlorophyll and total flavonoid content. Overall, CP effectively promotes alfalfa germination and growth, while UV treatments improve specific phytochemicals.
https://arxiv.org/abs/2601.03255
Academic Papers
svg
4837b2520e20e25cbbeb500aea4426f50e87fc18d94a3726e05bbd6e2e2493e8
2026-01-07T00:00:00-05:00
Feedback Driven Convergence, Competition, and Entanglement in Classical Stochastic Processes
arXiv:2601.02388v1 Announce Type: cross Abstract: We present a dynamical theory of statistical convergence in which the law of large numbers arises from outcome-outcome feedback rather than assumed independence. Defining the convergence field and its derivative, we show that empirical frequencies evolve through coupling, producing competition, finite-m fluctuations, and classical entanglement. Using the Kramers-Moyal expansion, we derive an Ito-Langevin and Fokker-Planck description, reducing in the symmetric regime to a time-dependent Ornstein-Uhlenbeck process. We propose variance-based witnesses that detect outcome-space entanglement in both binary sequences and coupled Brownian trajectories, and confirm entanglement through numerical experiments. Extending the formalism yields multi-outcome feedback dynamics and finite-time cross-diffusion between Brownian particles. The results unify convergence, fluctuation, and entanglement as consequences of a single feedback-driven stochastic principle.
https://arxiv.org/abs/2601.02388
Academic Papers
svg
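The symmetric-regime reduction mentioned in the abstract above, a time-dependent Ornstein-Uhlenbeck process, can be sketched with an Euler-Maruyama simulation. The drift strength theta(t) and noise amplitude sigma(t) below are illustrative choices standing in for feedback strength and finite-m fluctuations; they are not the paper's derived coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 2000, 1000, 0.01

def theta(t):
    # feedback (mean-reversion) strength: grows, then saturates
    return 1.0 + 0.5 * np.tanh(t)

def sigma(t):
    # fluctuation amplitude decaying with "time" (finite-sample noise)
    return 0.5 / np.sqrt(1.0 + t)

x = np.ones(n_paths)           # start away from the fixed point
for k in range(n_steps):
    t = k * dt
    # Euler-Maruyama step for dx = -theta(t) x dt + sigma(t) dW
    x += -theta(t) * x * dt \
         + sigma(t) * np.sqrt(dt) * rng.normal(size=n_paths)

print(f"mean {x.mean():+.3f}, var {x.var():.4f}")
```

The ensemble mean relaxes toward the fixed point while the variance tracks the quasi-stationary value sigma(t)^2 / (2 theta(t)), illustrating how feedback-driven convergence and shrinking fluctuations coexist in one process.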