id
stringlengths
64
64
published
stringlengths
19
25
title
stringlengths
7
262
description
stringlengths
6
54.4k
link
stringlengths
31
227
category
stringclasses
6 values
image
stringlengths
3
247
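The header above is a column manifest in the Hugging Face dataset-viewer style: each record carries an id, published, title, description, link, category, and image field, with the stated length ranges. A minimal sketch of the implied record schema (the `FeedRecord` dataclass and its validation logic are illustrative assumptions, not part of the dataset; field names and bounds come from the manifest, and the sample values from the first record below):

```python
from dataclasses import dataclass

# Field names come from the column manifest; the bounds in the comments
# are the stated stringlengths ranges (id is a 64-char hex digest).
@dataclass
class FeedRecord:
    id: str           # 64-64 (SHA-256-style hex digest)
    published: str    # 19-25 (ISO-8601 timestamp with offset)
    title: str        # 7-262
    description: str  # 6-54.4k
    link: str         # 31-227
    category: str     # stringclasses: 6 values
    image: str        # 3-247

    def validate(self) -> None:
        # Illustrative checks derived from the manifest's ranges.
        assert len(self.id) == 64
        assert all(c in "0123456789abcdef" for c in self.id)
        assert 19 <= len(self.published) <= 25

rec = FeedRecord(
    id="d9e7c2494e5f00830f3a0705321a3ad62eb8e50aed75e574a224025ec8352b31",
    published="2025-12-31T09:23:43-05:00",
    title="The Dreame X40 Ultra robovac is about $700 off, nearly matching its best price",
    description="...",
    link="https://www.theverge.com/gadgets/851325/dreame-x40-ultra-baseus-163w-retractable-car-charger-deal-sale",
    category="Technology",
    image="svg",
)
rec.validate()
```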
d9e7c2494e5f00830f3a0705321a3ad62eb8e50aed75e574a224025ec8352b31
2025-12-31T09:23:43-05:00
The Dreame X40 Ultra robovac is about $700 off, nearly matching its best price
The Dreame X40 Ultra used to be one of our favorite robot vacuums on the market before its successor took its place. With the year coming to a close, now’s a good time to set yourself up for a cleaner, more organized start in 2026. The Dreame X40 Ultra — one of our favorite robot vacuum deals from Black Friday — can handle most of the heavy lifting for you, and it’s once again on sale. You can currently buy it for $503 ($697 off, applied at checkout) at Amazon, which is just $4 more than its record-low price set during Black Friday. Dreame X40 Ultra Where to Buy: $1199.99 $503 at Amazon $1199.99 $649.99 at Dreame $1199.99 $549.99 at Best Buy Before the Dreame X50 Ultra took its place, the X40 Ultra was our favorite robovac / mop hybrid for hard floors and carpets. Like the X50 Ultra, it delivers excellent mopping performance, thanks to dual oscillating mop pads that can extend into edges and reach under consoles and cabinets. It can also automatically detach and reattach its mop pads when switching between mopping and vacuuming. While the X50 Ultra is nearly twice as powerful, the X40 Ultra’s 12,000Pa of suction is still more than capable of handling everyday dirt and debris. It also features AI-powered dirt detection, allowing the robot to slow down and make additional passes over especially dirty areas. What’s more, the X40 Ultra can empty its own dustbin, refill its water tank, and clean its washboard, with mop pads that wash and dry themselves for minimal upkeep. What you don’t get are some of the newer X50 Ultra upgrades, including the motorized swing arm that lets it clear higher thresholds and the improved dual rubber roller brush system, which does a better job picking up fine debris. That said, the X40 Ultra still performed very well in our testing, making it a great investment — especially considering it now costs about half as much as its successor.
Some more deals to wrap up the year You can buy the Baseus 163W Retractable Car Charger for $28.49 ($21 off) at Amazon, which is its best price to date. The charger can power up to four devices at once, thanks to three USB-C ports and a single USB-A port that together deliver up to 163 watts of total output. That’s enough to charge everything from phones and earbuds to tablets and even laptops. It also includes two built-in retractable USB-C cables, so that’s one less thing to pack when you’re on the go. Apple’s Crossbody Strap has dropped to a record-low price, starting at $41.99 ($17 off) at Amazon. The strap attaches to compatible iPhone cases — including the iPhone Air with a Bumper case and Apple’s Silicone and TechWoven cases for the iPhone 17 lineup — letting you carry your phone hands-free. The length is adjustable, and it comes in a range of colors, several of which cost more but are also at record-low prices. The last-generation Polaroid Now Plus is on sale for $89.99 ($60 off) at Best Buy in white, which is the cheapest it’s ever been. If you’re after an instant camera with a retro, old-school feel, the second-gen Now Plus is still a solid option, offering Bluetooth support, a range of creative shooting modes through its companion app, and multiple physical lenses. While its photos aren’t as sharp as those from newer models — including our favorite, the Polaroid Flip — the older Now Plus remains a worthwhile option at this price.
https://www.theverge.com/gadgets/851325/dreame-x40-ultra-baseus-163w-retractable-car-charger-deal-sale
Technology
svg
56a2a10ed8302ef67b837f3567c0b3d5093a40dbd3556fa87bf34a881623e191
2025-12-31T08:30:00-05:00
Killing in the name of… nothing
Read the full story at The Verge.
https://www.theverge.com/policy/849609/charlie-kirk-shooting-ideology-literacy-politics
Technology
svg
6144ae484c60806adba1ed681d0ba359414a9ebee59be18f502a57e7f658f037
2025-12-31T08:00:00-05:00
The 11 best Nintendo Switch 2 games we played in 2025
Read the full story at The Verge.
https://www.theverge.com/games/845401/nintendo-switch-2-best-games
Technology
svg
a2cb967e1503cf6f8357b916445ddac37d48656db02c27b765b09eabe5f06071
2026-01-01T00:00:00-05:00
Enriching Historical Records: An OCR and AI-Driven Approach for Database Integration
arXiv:2512.23710v1 Announce Type: new Abstract: This research digitizes and analyzes the Leidse hoogleraren en lectoren 1575-1815 books written between 1983 and 1985, which contain biographic data about professors and curators of Leiden University. It addresses the central question: how can we design an automated pipeline that integrates OCR, LLM-based interpretation, and database linking to harmonize data from historical document images with existing high-quality database records? We applied OCR techniques, generative AI decoding constraints that structure data extraction, and database linkage methods to process typewritten historical records into a digital format. OCR achieved a Character Error Rate (CER) of 1.08 percent and a Word Error Rate (WER) of 5.06 percent, while JSON extraction from OCR text achieved an average accuracy of 63 percent and, based on annotated OCR, 65 percent. This indicates that generative AI somewhat corrects low OCR performance. Our record linkage algorithm linked annotated JSON files with 94 percent accuracy and OCR-derived JSON files with 81 percent. This study contributes to digital humanities research by offering an automated pipeline for interpreting digitized historical documents, addressing challenges like layout variability and terminology differences, and exploring the applicability and strength of an advanced generative AI model.
https://arxiv.org/abs/2512.23710
Academic Papers
svg
eec7bf8139c13c0e80658b0f9fce4391c5e93478139c5780c47b45028c11926c
2026-01-01T00:00:00-05:00
CAT: A Metric-Driven Framework for Analyzing the Consistency-Accuracy Relation of LLMs under Controlled Input Variations
arXiv:2512.23711v1 Announce Type: new Abstract: We introduce \textsc{CAT}, a framework designed to evaluate and visualize the \emph{interplay} of \emph{accuracy} and \emph{response consistency} of Large Language Models (LLMs) under controllable input variations, using multiple-choice (MC) benchmarks as a case study. Current evaluation practices primarily focus on model capabilities such as accuracy or benchmark scores and, more recently, measuring consistency is being considered an essential property for deploying LLMs in high-stakes, real-world applications. We argue in this paper that although both dimensions should still be evaluated independently, their inter-dependency also needs to be considered for a more nuanced evaluation of LLMs. At the core of \textsc{CAT} are the \emph{Consistency-Accuracy Relation (CAR)} curves, which visualize how model accuracy varies with increasing consistency requirements, as defined by the \emph{Minimum-Consistency Accuracy (MCA)} metric. We further propose the \emph{Consistency-Oriented Robustness Estimate (CORE)} index, a global metric that combines the area and shape of the CAR curve to quantify the trade-off between accuracy and consistency. We present a practical demonstration of our framework across a diverse set of generalist and domain-specific LLMs, evaluated on multiple MC benchmarks. We also outline how \textsc{CAT} can be extended beyond MC tasks to support long-form, open-ended evaluations through adaptable scoring functions.
https://arxiv.org/abs/2512.23711
Academic Papers
svg
1f3fc524f9cf8cadbd3a6bba2bf3e78f23abb0b9caf1e48492bd0bfd2b8ffe9c
2026-01-01T00:00:00-05:00
STED and Consistency Scoring: A Framework for Evaluating LLM Structured Output Reliability
arXiv:2512.23712v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly deployed for structured data generation, yet output consistency remains critical for production applications. We introduce a comprehensive framework for evaluating and improving consistency in LLM-generated structured outputs. Our approach combines: (1) STED (Semantic Tree Edit Distance), a novel similarity metric balancing semantic flexibility with structural strictness when comparing JSON outputs, and (2) a consistency scoring framework aggregating multiple STED measurements across repeated generations to quantify reliability. Through systematic experiments on synthetic datasets with controlled schema, expression, and semantic variations, we demonstrate STED achieves superior performance ($0.86-0.90$ similarity for semantic equivalents, $0.0$ for structural breaks) compared to existing metrics including TED, BERTScore, and DeepDiff. Applying our framework to benchmark six LLMs reveals significant variations: Claude-3.7-Sonnet demonstrates exceptional consistency, maintaining near-perfect structural reliability even at high temperatures ($T=0.9$), while models like Claude-3-Haiku and Nova-Pro exhibit substantial degradation requiring careful tuning. Our framework enables practical applications including targeted model selection for structured tasks, iterative prompt refinement for reproducible results, and diagnostic analysis to identify inconsistency root causes. This work provides theoretical foundations and practical tools for ensuring reliable structured output generation in LLM-based production systems.
https://arxiv.org/abs/2512.23712
Academic Papers
svg
fa819cdf03ff5878182cd307c447bf52040665755c6578093f5f2262358644a7
2026-01-01T00:00:00-05:00
PyBangla at BLP-2025 Task 2: Enhancing Bangla-to-Python Code Generation with Iterative Self-Correction and Multilingual Agents
arXiv:2512.23713v1 Announce Type: new Abstract: LLMs excel at code generation from English prompts, but this progress has not extended to low-resource languages. We address Bangla-to-Python code generation by introducing BanglaCodeAct, an agent-based framework that leverages multi-agent prompting and iterative self-correction. Unlike prior approaches relying on task-specific fine-tuning, BanglaCodeAct employs an open-source multilingual LLM within a Thought-Code-Observation loop, enabling dynamic generation, testing, and refinement of code from Bangla instructions. We benchmark several small-parameter open-source LLMs and evaluate their effectiveness on the mHumanEval dataset for Bangla NL2Code. Our results show that Qwen3-8B, when deployed with BanglaCodeAct, achieves the best performance, with pass@1 accuracy of 94.0\% on the development set and 71.6\% on the blind test set. These results establish a new benchmark for Bangla-to-Python translation and highlight the potential of agent-based reasoning for reliable code generation in low-resource languages. Experimental scripts are publicly available at github.com/jahidulzaid/PyBanglaCodeActAgent.
https://arxiv.org/abs/2512.23713
Academic Papers
svg
f3f32fd3e27ee4f978613d169bd43b7f111c0b26f5d990c2466a0b7ab381324c
2026-01-01T00:00:00-05:00
PharmaShip: An Entity-Centric, Reading-Order-Supervised Benchmark for Chinese Pharmaceutical Shipping Documents
arXiv:2512.23714v1 Announce Type: new Abstract: We present PharmaShip, a real-world Chinese dataset of scanned pharmaceutical shipping documents designed to stress-test pre-trained text-layout models under noisy OCR and heterogeneous templates. PharmaShip covers three complementary tasks, sequence entity recognition (SER), relation extraction (RE), and reading order prediction (ROP), and adopts an entity-centric evaluation protocol to minimize confounds across architectures. We benchmark five representative baselines spanning pixel-aware and geometry-aware families (LiLT, LayoutLMv3-base, GeoLayoutLM and their available RORE-enhanced variants), and standardize preprocessing, splits, and optimization. Experiments show that pixels and explicit geometry provide complementary inductive biases, yet neither alone is sufficient: injecting reading-order-oriented regularization consistently improves SER and EL and yields the most robust configuration, while longer positional coverage stabilizes late-page predictions and reduces truncation artifacts. ROP is accurate at the word level but challenging at the segment level, reflecting boundary ambiguity and long-range crossings. PharmaShip thus establishes a controlled, reproducible benchmark for safety-critical document understanding in the pharmaceutical domain and highlights sequence-aware constraints as a transferable bias for structure modeling. We release the dataset at https://github.com/KevinYuLei/PharmaShip.
https://arxiv.org/abs/2512.23714
Academic Papers
svg
efee8544eb0506287a81d630bc1589b418f3368974d5d3e5d1a69ed4867d899e
2026-01-01T00:00:00-05:00
Wind Speed Weibull Model Identification in Oman, and Computed Normalized Annual Energy Production (NAEP) From Wind Turbines Based on Data From Weather Stations
arXiv:2512.23715v1 Announce Type: new Abstract: Using observation records of wind speeds from weather stations in the Sultanate of Oman between 2000 and 2023, we compute estimators of the two Weibull distribution parameters (namely, the Weibull distribution's shape parameter and the Weibull distribution's scale parameter) in 10 weather station locations within eight Omani governorates. The 10 weather station locations in Oman and their corresponding governorates are Seeb (in Muscat), Salalah (in Dhofar), Buraimi (in Al Buraimi), Masirah (in Ash Sharqiyah South), Thumrait (in Dhofar), Sur (in Ash Sharqiyah South), Khasab (in Musandam), Sohar (in Sohar), Fahud (in Az Zahirah), and Saiq (in Ad Dakhiliyah). The obtained wind speed distributions at these weather stations are then used to predict the annual energy production (AEP) for a proposed reference amount of 1 MWp of wind turbine capacity, and this specific AEP is designated here by the term "normalized annual energy production (NAEP)." The direction of the wind is also analyzed statistically over the same period to identify the more probable wind directions. Four locations were clearly distinguishable as being windy compared to the others. The simulated probability of exceeding a feasible 6 m/s (21.6 km/h) wind speed in these locations is 41.71% in Thumrait, 37.77% in Masirah, 29.53% in Sur, and 17.03% in Fahud. The NAEP values in these four locations are estimated as 1.727 GWh/MWp/year, 1.419 GWh/MWp/year, 1.038 GWh/MWp/year, and 0.602 GWh/MWp/year, respectively. The wind in the location of Thumrait is not only the fastest (on average) among the selected locations but also the most unidirectional, blowing almost always from the south-south-east (SSE) direction, and both features make this non-coastal location in southern Oman, with an altitude of about 467 m, an attractive site for utility-scale wind farms.
https://arxiv.org/abs/2512.23715
Academic Papers
svg
0f5d15d4a9eaf213403010538bc160c9b3e10b5be7702a76cf15285e9d436225
2026-01-01T00:00:00-05:00
Noise-Driven Persona Formation in Reflexive Neural Language Generation
arXiv:2512.23716v1 Announce Type: new Abstract: This paper introduces the Luca-Noise Reflex Protocol (LN-RP), a computational framework for analyzing noise-driven persona emergence in large language models. By injecting stochastic noise seeds into the initial generation state, we observe nonlinear transitions in linguistic behavior across 152 generation cycles. Our results reveal three stable persona modes with distinct entropy signatures, and demonstrate that external noise sources can reliably induce phase transitions in reflexive generation dynamics. Quantitative evaluation confirms consistent persona retention and significant differences across modes (p < 0.01). The protocol provides a reproducible method for studying reflexive generation, emergent behavior, and long-range linguistic coherence in LLMs.
https://arxiv.org/abs/2512.23716
Academic Papers
svg
a569c16324cd39959830ca14be82748c0629bbf9ffd32876aa5dce993a0c8c41
2026-01-01T00:00:00-05:00
HarmTransform: Transforming Explicit Harmful Queries into Stealthy via Multi-Agent Debate
arXiv:2512.23717v1 Announce Type: new Abstract: Large language models (LLMs) are equipped with safety mechanisms to detect and block harmful queries, yet current alignment approaches primarily focus on overtly dangerous content and overlook more subtle threats. However, users can often disguise harmful intent through covert rephrasing that preserves malicious objectives while appearing benign, which creates a significant gap in existing safety training data. To address this limitation, we introduce HarmTransform, a multi-agent debate framework for systematically transforming harmful queries into stealthier forms while preserving their underlying harmful intent. Our framework leverages iterative critique and refinement among multiple agents to generate high-quality, covert harmful query transformations that can be used to improve future LLM safety alignment. Experiments demonstrate that HarmTransform significantly outperforms standard baselines in producing effective query transformations. At the same time, our analysis reveals that debate acts as a double-edged sword: while it can sharpen transformations and improve stealth, it may also introduce topic shifts and unnecessary complexity. These insights highlight both the promise and the limitations of multi-agent debate for generating comprehensive safety training data.
https://arxiv.org/abs/2512.23717
Academic Papers
svg
efb9947e1d18d40b826129edb546d39ff897c0aec3280a8b3b3bc5093a708f6e
2026-01-01T00:00:00-05:00
Network Traffic Analysis with Process Mining: The UPSIDE Case Study
arXiv:2512.23718v1 Announce Type: new Abstract: Online gaming is a popular activity involving the adoption of complex systems and network infrastructures. The relevance of gaming, which generates large amounts of market revenue, drove research in modeling network devices' behavior to evaluate bandwidth consumption, predict and sustain high loads, and detect malicious activity. In this context, process mining appears promising due to its ability to combine data-driven analyses with model-based insights. In this paper, we propose a process mining-based method that analyzes gaming network traffic, allowing: unsupervised characterization of different states from gaming network data; encoding such states through process mining into interpretable Petri nets; and classification of gaming network traffic data to identify different video games being played. We apply the method to the UPSIDE case study, involving gaming network data of several devices interacting with two video games: Clash Royale and Rocket League. Results demonstrate that the gaming network behavior can be effectively and interpretably modeled through states represented as Petri nets with sufficient coherence (94.02% inter-device similarity) and specificity (174.99% inter-state separation) while maintaining a good classification accuracy of the two different video games (73.84% AUC).
https://arxiv.org/abs/2512.23718
Academic Papers
svg
54cbc4aa4109ec031835bf17b4180ddf2d6bc2811f5f1f2a65674ca488d8e8c3
2026-01-01T00:00:00-05:00
A Survey of AI Methods for Geometry Preparation and Mesh Generation in Engineering Simulation
arXiv:2512.23719v1 Announce Type: new Abstract: Artificial intelligence is beginning to ease long-standing bottlenecks in the CAD-to-mesh pipeline. This survey reviews recent advances where machine learning aids part classification, mesh quality prediction, and defeaturing. We explore methods that improve unstructured and block-structured meshing, support volumetric parameterizations, and accelerate parallel mesh generation. We also examine emerging tools for scripting automation, including reinforcement learning and large language models. Across these efforts, AI acts as an assistive technology, extending the capabilities of traditional geometry and meshing tools. The survey highlights representative methods, practical deployments, and key research challenges that will shape the next generation of data-driven meshing workflows.
https://arxiv.org/abs/2512.23719
Academic Papers
svg
8e1a0d93d49924b2493f969179ddc4aefd204a0f09ee26e68bf7cd7a8c552909
2026-01-01T00:00:00-05:00
An Electronic Ising Machine
arXiv:2512.23720v1 Announce Type: new Abstract: We develop a custom printed circuit board (PCB) as a low-power and high-speed accelerator for NP-Hard graph problems. Based on the annealing principle, it uses an analog computing architecture of coupled nonlinear electronic oscillators. Using an energy-based representation of the input problem, the system is shown to naturally follow the gradient towards stable phase alignments that encode solutions. We introduce the motivational theory, give an overview of our detailed circuit design, simulations, and experiments, and provide insight on the emerging development of novel physics-based computing devices.
https://arxiv.org/abs/2512.23720
Academic Papers
svg
5be7f90af35df20181c87ea5bd00845806eb9c58ad167af4438c4c96727e4191
2026-01-01T00:00:00-05:00
Emergent World Beliefs: Exploring Transformers in Stochastic Games
arXiv:2512.23722v1 Announce Type: new Abstract: Transformer-based large language models (LLMs) have demonstrated strong reasoning abilities across diverse fields, from solving programming challenges to competing in strategy-intensive games such as chess. Prior work has shown that LLMs can develop emergent world models in games of perfect information, where internal representations correspond to latent states of the environment. In this paper, we extend this line of investigation to domains of incomplete information, focusing on poker as a canonical partially observable Markov decision process (POMDP). We pretrain a GPT-style model on Poker Hand History (PHH) data and probe its internal activations. Our results demonstrate that the model learns both deterministic structure, such as hand ranks, and stochastic features, such as equity, without explicit instruction. Furthermore, by using primarily nonlinear probes, we demonstrate that these representations are decodable and correlate with theoretical belief states, suggesting that LLMs are learning their own representation of the stochastic environment of Texas Hold'em Poker.
https://arxiv.org/abs/2512.23722
Academic Papers
svg
2e76570217a974c32603f2c6e0a9e0ee6dda4b53601c448a57126e9562b3af14
2026-01-01T00:00:00-05:00
New Exam Security Questions in the AI Era: Comparing AI-Generated Item Similarity Between Naive and Detail-Guided Prompting Approaches
arXiv:2512.23729v1 Announce Type: new Abstract: Large language models (LLMs) have emerged as powerful tools for generating domain-specific multiple-choice questions (MCQs), offering efficiency gains for certification boards but raising new concerns about examination security. This study investigated whether LLM-generated items created with proprietary guidance differ meaningfully from those generated using only publicly available resources. Four representative clinical activities from the American Board of Family Medicine (ABFM) blueprint were mapped to corresponding Entrustable Professional Activities (EPAs), and three LLMs (GPT-4o, Claude 4 Sonnet, Gemini 2.5 Flash) produced items under a naive strategy using only public EPA descriptors, while GPT-4o additionally produced items under a guided strategy that incorporated proprietary blueprints, item-writing guidelines, and exemplar items, yielding 160 total items. Question stems and options were encoded using PubMedBERT and BioBERT, and intra- and inter-strategy cosine similarity coefficients were calculated. Results showed high internal consistency within each prompting strategy, while cross-strategy similarity was lower overall. However, several domain-model pairs, particularly in narrowly defined areas such as viral pneumonia and hypertension, exceeded the 0.65 threshold, indicating convergence between naive and guided pipelines. These findings suggest that while proprietary resources impart distinctiveness, LLMs prompted only with public information can still generate items closely resembling guided outputs in constrained clinical domains, thereby heightening risks of item exposure. Safeguarding the integrity of high-stakes examinations will require human-first, AI-assisted item development, strict separation of formative and summative item pools, and systematic similarity surveillance to balance innovation with security.
https://arxiv.org/abs/2512.23729
Academic Papers
svg
ffeeaedec15303891acd925d93cd29e3d1c74f93a407639d9ca4ed457927dbc9
2026-01-01T00:00:00-05:00
When in Doubt, Deliberate: Confidence-Based Routing to Expert Debate for Sexism Detection
arXiv:2512.23732v1 Announce Type: new Abstract: Sexist content online increasingly appears in subtle, context-dependent forms that evade traditional detection methods. Its interpretation often depends on overlapping linguistic, psychological, legal, and cultural dimensions, which produce mixed and sometimes contradictory signals, even in annotated datasets. These inconsistencies, combined with label scarcity and class imbalance, result in unstable decision boundaries and cause fine-tuned models to overlook subtler, underrepresented forms of harm. Together, these limitations point to the need for a design that explicitly addresses the combined effects of (i) underrepresentation, (ii) noise, and (iii) conceptual ambiguity in both data and model predictions. To address these challenges, we propose a two-stage framework that unifies (i) targeted training procedures to adapt supervision to scarce and noisy data with (ii) selective, reasoning-based inference to handle ambiguous or borderline cases. Our training setup applies class-balanced focal loss, class-aware batching, and post-hoc threshold calibration to mitigate label imbalance and noisy supervision. At inference time, a dynamic routing mechanism classifies high-confidence cases directly and escalates uncertain instances to a novel \textit{Collaborative Expert Judgment} (CEJ) module, which prompts multiple personas and consolidates their reasoning through a judge model. Our approach achieves state-of-the-art results across several benchmarks, with a +2.72\% improvement in F1 on the EXIST 2025 Task 1.1, and gains of +4.48\% and +1.30\% on the EDOS Tasks A and B, respectively.
https://arxiv.org/abs/2512.23732
Academic Papers
svg
2765f6c3727928c9ab993f50cca84084b5776cfac46303648a69a4105c5431b8
2026-01-01T00:00:00-05:00
Biochemical Computing Mode for Sequential Logic
arXiv:2512.23734v1 Announce Type: new Abstract: Recent years have witnessed the growing scholarly interest in the next-generation general-purpose computers. Various innovative computing modes have been proposed, such as optical, quantum phenomena, and DNA-based modes. Sequential logic circuits are a critical factor that enables these modes to function as general-purpose computers, given their essential role in facilitating continuous computation and memory storage through their ability to store states. However, compared to computability, sequential logic is often overlooked due to the difficulty of its implementation. In this paper, we first demonstrate sequential mapping, a crucial necessary condition for electronic computers to realize sequential logic circuits, and highlight this distinctive property of general-purpose computers in the context of logic gate circuits. To achieve computational functionalities comparable to those of electronic computers, we utilize the control effect of enzymes on enzymatic reactions to design a logic gate model that is composed of small molecules and driven by enzymes, and subsequently propose a biochemical computing mode. Furthermore, we mathematically analyze the static and dynamic input-output properties of biochemical logic gate components and prove that the biochemical computing mode satisfies sequential mapping similar to electronic computers. When combined with the storage characteristics of NOT-AND gates, it can realize sequential logic circuits. The findings can serve as a theoretical foundation for developing general-purpose biochemical computers.
https://arxiv.org/abs/2512.23734
Academic Papers
svg
bb155e501e3c2abb70899522b9f1fc5c6561efb9479d2c9b78133dec523ee0fc
2026-01-01T00:00:00-05:00
Ovonic switches enable energy-efficient dendrite-like computing
arXiv:2512.23736v1 Announce Type: new Abstract: Over the last decade, dendrites within individual biological neurons, which were previously thought to generally perform information pooling and networking, have now been shown to express complex temporal dynamics, Boolean-like logic, arithmetic, signal discrimination, and edge detection for image and sound recognition. Mimicking this rich functional density could offer a powerful primitive for neuromorphic computing, which has sought to replace the aging digital computing paradigms using biological inspirations. Here, using electrically driven Ovonic threshold switching in Sb-Te-doped GeSe, we demonstrate a single two-terminal component capable of self-sustained dynamics and universal Boolean logic, in addition to XOR operations (which is traditionally thought to require a network of active components). We then employ logic-driven dynamics in a single component to detect and estimate the gradients of edges in images, a task that otherwise requires elaborate circuits. A network of Ovonic switches exhibits properties of a half adder and a full adder, in addition to discriminative logic accommodating inhibitory and excitatory signals. We show that this computational primitive is not only seemingly simpler, but also offers many orders of magnitude improved energy efficiency compared to prevailing digital solutions. As such, this work paves the path for potentially emulating dendrites for efficient post-digital neuromorphic computing.
https://arxiv.org/abs/2512.23736
Academic Papers
svg
e010b789ddb99c4c42f78c5443250f6724554354c4d0bf461df7d20c012bac22
2026-01-01T00:00:00-05:00
Governing Cloud Data Pipelines with Agentic AI
arXiv:2512.23737v1 Announce Type: new Abstract: Cloud data pipelines increasingly operate under dynamic workloads, evolving schemas, cost constraints, and strict governance requirements. Despite advances in cloud-native orchestration frameworks, most production pipelines rely on static configurations and reactive operational practices, resulting in prolonged recovery times, inefficient resource utilization, and high manual overhead. This paper presents Agentic Cloud Data Engineering, a policy-aware control architecture that integrates bounded AI agents into the governance and control plane of cloud data pipelines. In the Agentic Cloud Data Engineering platform, specialized agents analyze pipeline telemetry and metadata, reason over declarative cost and compliance policies, and propose constrained operational actions such as adaptive resource reconfiguration, schema reconciliation, and automated failure recovery. All agent actions are validated against governance policies to ensure predictable and auditable behavior. We evaluate the Agentic Cloud Data Engineering platform using representative batch and streaming analytics workloads constructed from public enterprise-style datasets. Experimental results show that the Agentic Cloud Data Engineering platform reduces mean pipeline recovery time by up to 45%, lowers operational cost by approximately 25%, and decreases manual intervention events by over 70% compared to static orchestration, while maintaining data freshness and policy compliance. These results demonstrate that policy-bounded agentic control provides an effective and practical approach for governing cloud data pipelines in enterprise environments.
https://arxiv.org/abs/2512.23737
Academic Papers
svg
4cf2be4410fa3785f640578ef1178686e604ded0116ce4f747cecf42918bab14
2026-01-01T00:00:00-05:00
Enforcing Temporal Constraints for LLM Agents
arXiv:2512.23738v1 Announce Type: new Abstract: LLM-based agents are deployed in safety-critical applications, yet current guardrail systems fail to prevent violations of temporal safety policies, requirements that govern the ordering and sequencing of agent actions. For instance, agents may access sensitive data before authenticating users or process refunds to unauthorized payment methods, violations that require reasoning about sequences of actions rather than individual actions. Existing guardrails rely on imprecise natural language instructions or post-hoc monitoring, and provide no formal guarantees that agents will satisfy temporal constraints. We present Agent-C, a novel framework that provides run-time guarantees ensuring LLM agents adhere to formal temporal safety properties. Agent-C introduces a domain-specific language for expressing temporal properties (e.g., authenticate before accessing data), translates specifications to first-order logic, and uses SMT solving to detect non-compliant agent actions during token generation. When the LLM attempts to generate a non-compliant tool call, Agent-C leverages constrained generation techniques to ensure that every action generated by the LLM complies with the specification, and to generate a compliant alternative to a non-compliant agent action. We evaluate Agent-C across two real-world applications, a retail customer service system and an airline ticket reservation system, and multiple language models (open and closed-source). Our results demonstrate that Agent-C achieves perfect safety (100% conformance, 0% harm), while improving task utility compared to state-of-the-art guardrails and unrestricted agents. On SoTA closed-source models, Agent-C improves conformance (77.4% to 100% for Claude Sonnet 4.5 and 83.7% to 100% for GPT-5), while simultaneously increasing utility (71.8% to 75.2% and 66.1% to 70.6%, respectively), representing a new SoTA frontier for reliable agentic reasoning.
https://arxiv.org/abs/2512.23738
Academic Papers
svg
ae8d9ab0129cbd395c160ec47166a5971857aac93b9fc5760aa61e39f80f1e21
2026-01-01T00:00:00-05:00
Break Out the Silverware -- Semantic Understanding of Stored Household Items
arXiv:2512.23739v1 Announce Type: new Abstract: "Bring me a plate." For domestic service robots, this simple command reveals a complex challenge: inferring where everyday items are stored, often out of sight in drawers, cabinets, or closets. Despite advances in vision and manipulation, robots still lack the commonsense reasoning needed to complete this task. We introduce the Stored Household Item Challenge, a benchmark task for evaluating service robots' cognitive capabilities: given a household scene and a queried item, predict its most likely storage location. Our benchmark includes two datasets: (1) a real-world evaluation set of 100 item-image pairs with human-annotated ground truth from participants' kitchens, and (2) a development set of 6,500 item-image pairs annotated with storage polygons over public kitchen images. These datasets support realistic modeling of household organization and enable comparative evaluation across agent architectures. To begin tackling this challenge, we introduce NOAM (Non-visible Object Allocation Model), a hybrid agent pipeline that combines structured scene understanding with large language model inference. NOAM converts visual input into natural language descriptions of spatial context and visible containers, then prompts a language model (e.g., GPT-4) to infer the most likely hidden storage location. This integrated vision-language agent exhibits emergent commonsense reasoning and is designed for modular deployment within broader robotic systems. We evaluate NOAM against baselines including random selection, vision-language pipelines (Grounding-DINO + SAM), leading multimodal models (e.g., Gemini, GPT-4o, Kosmos-2, LLaMA, Qwen), and human performance. NOAM significantly improves prediction accuracy and approaches human-level results, highlighting best practices for deploying cognitively capable agents in domestic environments.
https://arxiv.org/abs/2512.23739
Academic Papers
svg
0434af8a0747a48537b44735f8e60016dab5fd0f47e8efe29530d3e55886f3f2
2026-01-01T00:00:00-05:00
Towards representation agnostic probabilistic programming
arXiv:2512.23740v1 Announce Type: new Abstract: Current probabilistic programming languages and tools tightly couple model representations with specific inference algorithms, preventing experimentation with novel representations or mixed discrete-continuous models. We introduce a factor abstraction with five fundamental operations that serve as a universal interface for manipulating factors regardless of their underlying representation. This enables representation-agnostic probabilistic programming where users can freely mix different representations (e.g. discrete tables, Gaussian distributions, sample-based approaches) within a single unified framework, allowing practical inference in complex hybrid models that current toolkits cannot adequately express.
https://arxiv.org/abs/2512.23740
Academic Papers
svg
d8c0e2611b3ee09b80e7e763a85c86feacac8db06fe662f9e099a22e8c0360dc
2026-01-01T00:00:00-05:00
AgenticTCAD: A LLM-based Multi-Agent Framework for Automated TCAD Code Generation and Device Optimization
arXiv:2512.23742v1 Announce Type: new Abstract: With the continued scaling of advanced technology nodes, the design-technology co-optimization (DTCO) paradigm has become increasingly critical, rendering efficient device design and optimization essential. In the domain of TCAD simulation, however, the scarcity of open-source resources hinders language models from generating valid TCAD code. To overcome this limitation, we construct an open-source TCAD dataset curated by experts and fine-tune a domain-specific model for TCAD code generation. Building on this foundation, we propose AgenticTCAD, a natural-language-driven multi-agent framework that enables end-to-end automated device design and optimization. Validation on a 2 nm nanosheet FET (NS-FET) design shows that AgenticTCAD achieves the International Roadmap for Devices and Systems (IRDS)-2024 device specifications within 4.2 hours, whereas human experts required 7.1 days with commercial tools.
https://arxiv.org/abs/2512.23742
Academic Papers
svg
7af6400e30cbd9dae923ab5a2d4a30d5a4e682b8e53d423b851b0c513816d48a
2026-01-01T00:00:00-05:00
Hybrid-Code: A Privacy-Preserving, Redundant Multi-Agent Framework for Reliable Local Clinical Coding
arXiv:2512.23743v1 Announce Type: new Abstract: Clinical coding automation using cloud-based Large Language Models (LLMs) poses privacy risks and latency bottlenecks, rendering them unsuitable for on-premise healthcare deployment. We introduce Hybrid-Code, a hybrid neuro-symbolic multi-agent framework for local clinical coding that ensures production reliability through redundancy and verification. Our system comprises two agents: a Coder that attempts language model-based semantic reasoning using BioMistral-7B but falls back to deterministic keyword matching when model output is unreliable, ensuring pipeline completion; and an Auditor that verifies codes against a 257-code knowledge base and clinical evidence. Evaluating on 1,000 MIMIC-III discharge summaries, we demonstrate no hallucinated codes among accepted outputs within the knowledge base, 24.47% verification rate, and 34.11% coverage (95% CI: 31.2%--37.0%) with 86%+ language model utilization. The Auditor filtered invalid format codes and provided evidence-based quality control (75.53% rejection rate) while ensuring no patient data leaves the hospital firewall. The hybrid architecture -- combining language model semantic understanding (when successful), deterministic fallback (when the model fails), and symbolic verification (always active) -- ensures both reliability and privacy preservation, addressing critical barriers to AI adoption in healthcare. Our key finding is that reliability through redundancy is more valuable than pure model performance in production healthcare systems, where system failures are unacceptable.
https://arxiv.org/abs/2512.23743
Academic Papers
svg
20f8409a8eeb4119824c19a76d19956c857827067693bb06b0647a4673279e28
2026-01-01T00:00:00-05:00
A Comprehensive Study of Deep Learning Model Fixing Approaches
arXiv:2512.23745v1 Announce Type: new Abstract: Deep Learning (DL) has been widely adopted in diverse industrial domains, including autonomous driving, intelligent healthcare, and aided programming. Like traditional software, DL systems are also prone to faults, whose malfunctioning may expose users to significant risks. Consequently, numerous approaches have been proposed to address these issues. In this paper, we conduct a large-scale empirical study on 16 state-of-the-art DL model fixing approaches, spanning model-level, layer-level, and neuron-level categories, to comprehensively evaluate their performance. We assess not only their fixing effectiveness (their primary purpose) but also their impact on other critical properties, such as robustness, fairness, and backward compatibility. To ensure comprehensive and fair evaluation, we employ a diverse set of datasets, model architectures, and application domains within a uniform experimental setup. We summarize several key findings with implications for both industry and academia. For example, model-level approaches demonstrate superior fixing effectiveness compared to others. No single approach can achieve the best fixing performance while improving accuracy and maintaining all other properties. Thus, academia should prioritize research on mitigating these side effects. These insights highlight promising directions for future exploration in this field.
https://arxiv.org/abs/2512.23745
Academic Papers
svg
fe2c405eec1021bad03b514db15d2a387fee8a8e071581e3ecc3589e9411c071
2026-01-01T00:00:00-05:00
DEFT: Differentiable Automatic Test Pattern Generation
arXiv:2512.23746v1 Announce Type: new Abstract: Modern IC complexity drives test pattern growth, with the majority of patterns targeting a small set of hard-to-detect (HTD) faults. This motivates new ATPG algorithms to improve test effectiveness specifically for HTD faults. This paper presents DEFT (Differentiable Automatic Test Pattern Generation), a new ATPG approach that reformulates the discrete ATPG problem as a continuous optimization task. DEFT introduces a mathematically grounded reparameterization that aligns the expected continuous objective with discrete fault-detection semantics, enabling reliable gradient-based pattern generation. To ensure scalability and stability on deep circuit graphs, DEFT integrates a custom CUDA kernel for efficient forward-backward propagation and applies gradient normalization to mitigate vanishing gradients. Compared to a leading commercial tool on two industrial benchmarks, DEFT improves HTD fault detection by 21.1% and 48.9% on average under the same pattern budget and comparable runtime. DEFT also supports practical ATPG settings such as partial assignment pattern generation, producing patterns with 19.3% fewer 0/1 bits while still detecting 35% more faults. These results indicate DEFT is a promising and effective ATPG engine, offering a valuable complement to existing heuristics.
https://arxiv.org/abs/2512.23746
Academic Papers
svg
66ea766a5d1fe799372184b84551372a057adbea7bf65760cbc89c3152bfc189
2026-01-01T00:00:00-05:00
State-of-the-art Small Language Coder Model: Mify-Coder
arXiv:2512.23747v1 Announce Type: new Abstract: We present Mify-Coder, a 2.5B-parameter code model trained on 4.2T tokens using a compute-optimal strategy built on the Mify-2.5B foundation model. Mify-Coder achieves comparable accuracy and safety while significantly outperforming much larger baseline models on standard coding and function-calling benchmarks, demonstrating that compact models can match frontier-grade models in code generation and agent-driven workflows. Our training pipeline combines high-quality curated sources with synthetic data generated through agentically designed prompts, refined iteratively using enterprise-grade evaluation datasets. LLM-based quality filtering further enhances data density, enabling frugal yet effective training. Through disciplined exploration of CPT-SFT objectives, data mixtures, and sampling dynamics, we deliver frontier-grade code intelligence within a single continuous training trajectory. Empirical evidence shows that principled data and compute discipline allow smaller models to achieve competitive accuracy, efficiency, and safety compliance. Quantized variants of Mify-Coder enable deployment on standard desktop environments without requiring specialized hardware.
https://arxiv.org/abs/2512.23747
Academic Papers
svg
8250fd5ca3fcdce447c88d1cbdcfad2979ee83a19fd061c7953c930abf29beed
2026-01-01T00:00:00-05:00
A Review of Diffusion-based Simulation-Based Inference: Foundations and Applications in Non-Ideal Data Scenarios
arXiv:2512.23748v1 Announce Type: new Abstract: For complex simulation problems, inferring parameters of scientific interest often precludes the use of classical likelihood-based techniques due to intractable likelihood functions. Simulation-based inference (SBI) methods forego the need for explicit likelihoods by directly utilizing samples from the simulator to learn posterior distributions over parameters $\mathbf{\theta}$ given observed data $\mathbf{x}_{\text{o}}$. Recent work has brought attention to diffusion models -- a type of generative model rooted in score matching and reverse-time stochastic dynamics -- as a flexible framework for SBI tasks. This article reviews diffusion-based SBI from first principles to applications in practice. We first recall the mathematical foundations of diffusion modeling (forward noising, reverse-time SDE/ODE, probability flow, and denoising score matching) and explain how conditional scores enable likelihood-free posterior sampling. We then examine where diffusion models address pain points of normalizing flows in neural posterior/likelihood estimation and where they introduce new trade-offs (e.g., iterative sampling costs). The key theme of this review is robustness of diffusion-based SBI in non-ideal conditions common to scientific data: misspecification (mismatch between simulated training data and reality), unstructured or infinite-dimensional observations, and missingness. We synthesize methods spanning foundations drawing from Schrödinger-bridge formulations, conditional and sequential posterior samplers, amortized architectures for unstructured data, and inference-time prior adaptation. Throughout, we adopt consistent notation and emphasize conditions and caveats required for accurate posteriors. The review closes with a discussion of open problems with an eye toward applications of uncertainty quantification for probabilistic geophysical models that may benefit from diffusion-based SBI.
https://arxiv.org/abs/2512.23748
Academic Papers
svg
3a46cfa4d510bf6f9b351b4c7629afc8dc8d4447de2d97cd21b1d03fb9c7c8fb
2026-01-01T00:00:00-05:00
Coordinate Matrix Machine: A Human-level Concept Learning to Classify Very Similar Documents
arXiv:2512.23749v1 Announce Type: new Abstract: Human-level concept learning argues that humans typically learn new concepts from a single example, whereas machine learning algorithms typically require hundreds of samples to learn a single concept. Our brain subconsciously identifies important features and learns more effectively. Contribution: In this paper, we present the Coordinate Matrix Machine (CM$^2$). This purpose-built small model augments human intelligence by learning document structures and using this information to classify documents. While modern "Red AI" trends rely on massive pre-training and energy-intensive GPU infrastructure, CM$^2$ is designed as a Green AI solution. It achieves human-level concept learning by identifying only the structural "important features" a human would consider, allowing it to classify very similar documents using only one sample per class. Advantage: Our algorithm outperforms traditional vectorizers and complex deep learning models that require larger datasets and significant compute. By focusing on structural coordinates rather than exhaustive semantic vectors, CM$^2$ offers: 1. High accuracy with minimal data (one-shot learning) 2. Geometric and structural intelligence 3. Green AI and environmental sustainability 4. Optimized for CPU-only environments 5. Inherent explainability (glass-box model) 6. Faster computation and low latency 7. Robustness against unbalanced classes 8. Economic viability 9. Generic, expandable, and extendable
https://arxiv.org/abs/2512.23749
Academic Papers
svg
0dba88e5b0a3e8c4e95c24ed9c5cf0342be0925f309a092ff411598bef4175ab
2026-01-01T00:00:00-05:00
Geometric Scaling of Bayesian Inference in LLMs
arXiv:2512.23752v1 Announce Type: new Abstract: Recent work has shown that small transformers trained in controlled "wind-tunnel" settings can implement exact Bayesian inference, and that their training dynamics produce a geometric substrate -- low-dimensional value manifolds and progressively orthogonal keys -- that encodes posterior structure. We investigate whether this geometric signature persists in production-grade language models. Across Pythia, Phi-2, Llama-3, and Mistral families, we find that last-layer value representations organize along a single dominant axis whose position strongly correlates with predictive entropy, and that domain-restricted prompts collapse this structure into the same low-dimensional manifolds observed in synthetic settings. To probe the role of this geometry, we perform targeted interventions on the entropy-aligned axis of Pythia-410M during in-context learning. Removing or perturbing this axis selectively disrupts the local uncertainty geometry, whereas matched random-axis interventions leave it intact. However, these single-layer manipulations do not produce proportionally specific degradation in Bayesian-like behavior, indicating that the geometry is a privileged readout of uncertainty rather than a singular computational bottleneck. Taken together, our results show that modern language models preserve the geometric substrate that enables Bayesian inference in wind tunnels, and organize their approximate Bayesian updates along this substrate.
https://arxiv.org/abs/2512.23752
Academic Papers
svg
baf0c94f227a0fb68c5aa4b9774b76af1bc3eb7222151c661f2a27a12596f396
2026-01-01T00:00:00-05:00
Generalized Regularized Evidential Deep Learning Models: Theory and Comprehensive Evaluation
arXiv:2512.23753v1 Announce Type: new Abstract: Evidential deep learning (EDL) models, based on Subjective Logic, introduce a principled and computationally efficient way to make deterministic neural networks uncertainty-aware. The resulting evidential models can quantify fine-grained uncertainty using learned evidence. However, the Subjective-Logic framework constrains evidence to be non-negative, requiring specific activation functions whose geometric properties can induce activation-dependent learning-freeze behavior: a regime where gradients become extremely small for samples mapped into low-evidence regions. We theoretically characterize this behavior and analyze how different evidential activations influence learning dynamics. Building on this analysis, we design a general family of activation functions and corresponding evidential regularizers that provide an alternative pathway for consistent evidence updates across activation regimes. Extensive experiments on four benchmark classification problems (MNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet), two few-shot classification problems, and a blind face restoration problem empirically validate the developed theory and demonstrate the effectiveness of the proposed generalized regularized evidential models.
https://arxiv.org/abs/2512.23753
Academic Papers
svg
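The Subjective-Logic mapping this abstract builds on can be sketched in a few lines: non-negative evidence parameterizes a Dirichlet distribution, from which per-class belief masses and a scalar uncertainty follow. The softplus activation below is one common choice (and exactly the kind of activation whose saturation the paper analyzes), used here purely for illustration; the paper's generalized activation family and regularizers are not reproduced.

```python
import numpy as np

def edl_outputs(logits):
    # Map logits to non-negative evidence via softplus (an assumed,
    # common choice), then to Dirichlet parameters alpha_k = e_k + 1.
    # Subjective Logic: belief b_k = e_k / S, uncertainty u = K / S,
    # where S = sum_k alpha_k, so sum_k b_k + u = 1 exactly.
    evidence = np.log1p(np.exp(logits))   # softplus keeps evidence >= 0
    alpha = evidence + 1.0
    S = alpha.sum()
    belief = evidence / S
    u = len(alpha) / S
    return belief, u

belief, u = edl_outputs(np.array([5.0, -5.0, -5.0]))
```

With one large logit, belief concentrates on that class and uncertainty is moderate; when all logits are very negative, evidence vanishes and uncertainty approaches 1, which is the low-evidence regime where the abstract's gradient-freeze concern arises.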
44153405f969adfe1f721b004b321fbe8bc7fb35f55639f3b666e08ea7a633b6
2026-01-01T00:00:00-05:00
HINTS: Extraction of Human Insights from Time-Series Without External Sources
arXiv:2512.23755v1 Announce Type: new Abstract: Human decision-making, emotions, and collective psychology are complex factors that shape the temporal dynamics observed in financial and economic systems. Many recent time series forecasting models leverage external sources (e.g., news and social media) to capture human factors, but these approaches incur high data dependency costs in terms of financial, computational, and practical implications. In this study, we propose HINTS, a self-supervised learning framework that extracts these latent factors endogenously from time series residuals without external data. HINTS leverages the Friedkin-Johnsen (FJ) opinion dynamics model as a structural inductive bias to model evolving social influence, memory, and bias patterns. The extracted human factors are integrated into a state-of-the-art backbone model as an attention map. Experimental results using nine real-world and benchmark datasets demonstrate that HINTS consistently improves forecasting accuracy. Furthermore, multiple case studies and ablation studies validate the interpretability of HINTS, showing strong semantic alignment between the extracted factors and real-world events and demonstrating the practical utility of HINTS.
https://arxiv.org/abs/2512.23755
Academic Papers
svg
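The Friedkin-Johnsen model that HINTS uses as an inductive bias has a standard update rule: each agent's opinion is a convex combination of its neighbors' current opinions and its own fixed initial opinion. A minimal NumPy sketch with a scalar susceptibility (the general model uses a per-agent diagonal matrix; W, lam, and x0 here are toy values, unrelated to the paper's learned parameters):

```python
import numpy as np

def fj_step(x, x0, W, lam):
    # Friedkin-Johnsen update: social influence (W @ x) mixed with the
    # agent's stubborn initial opinion x0, susceptibility lam in [0, 1).
    return lam * (W @ x) + (1.0 - lam) * x0

# Toy example: 3 agents, row-stochastic influence matrix.
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
x0 = np.array([1.0, 0.0, -1.0])
lam = 0.8

x = x0.copy()
for _ in range(200):
    x = fj_step(x, x0, W, lam)
```

Because lam < 1 and W is row-stochastic, the iteration is a contraction and converges to the closed-form fixed point x* = (I - lam*W)^(-1) (1 - lam) x0, i.e. opinions settle at a persistent compromise rather than full consensus.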
ffd1cac01f896bd36bcffd52236e084fea663a052a96ef1eeefec0ae69e93a28
2026-01-01T00:00:00-05:00
Sparse Random Matrices for Dimensionality Reduction
arXiv:2512.23756v1 Announce Type: new Abstract: The Johnson-Lindenstrauss (JL) theorem states that a set of points in high-dimensional space can be embedded into a lower-dimensional space while approximately preserving pairwise distances with high probability (Johnson and Lindenstrauss, 1984). The standard JL theorem uses dense random matrices with Gaussian entries. However, for some applications, sparse random matrices are preferred as they allow for faster matrix-vector multiplication. I outline the constructions and proofs introduced by Achlioptas (2003) and the contemporary standard by Kane and Nelson (2014). Further, I implement and empirically compare these sparse constructions with standard Gaussian JL matrices.
https://arxiv.org/abs/2512.23756
Academic Papers
svg
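Achlioptas's sparse construction mentioned in the abstract is simple to reproduce: entries take the values sqrt(3/k) * {+1, 0, -1} with probabilities {1/6, 2/3, 1/6}, so two-thirds of the matrix is zero yet each projected coordinate has the right variance. A small sketch checking the empirical distance distortion (dimensions chosen for illustration):

```python
import numpy as np
from itertools import combinations

def achlioptas_matrix(d, k, rng):
    # Sparse JL matrix of Achlioptas (2003): entries are
    # sqrt(3/k) * {+1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6},
    # chosen so E[(Rx)_j^2] = ||x||^2 / k for any fixed x.
    signs = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1/6, 2/3, 1/6])
    return np.sqrt(3.0 / k) * signs

rng = np.random.default_rng(0)
d, k, n = 1000, 200, 20
X = rng.standard_normal((n, d))      # n points in R^d
R = achlioptas_matrix(d, k, rng)
Y = X @ R.T                          # projected points in R^k

# Ratio of projected to original pairwise distances; should stay near 1.
dist_ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
               for i, j in combinations(range(n), 2)]
```

With k = 200 the typical relative distortion is on the order of 10%, consistent with the JL bound; the dense Gaussian construction gives comparable distortion but a fully dense matrix-vector product.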
78c9637f01e969592b86841e5b6b2f6887f1cb44281e960d00ad4b1f5278041d
2026-01-01T00:00:00-05:00
Audited Skill-Graph Self-Improvement for Agentic LLMs via Verifiable Rewards, Experience Synthesis, and Continual Memory
arXiv:2512.23760v1 Announce Type: new Abstract: Reinforcement learning is increasingly used to transform large language models into agentic systems that act over long horizons, invoke tools, and manage memory under partial observability. While recent work has demonstrated performance gains through tool learning, verifiable rewards, and continual training, deployed self-improving agents raise unresolved security and governance challenges: optimization pressure can incentivize reward hacking, behavioral drift is difficult to audit or reproduce, and improvements are often entangled in opaque parameter updates rather than reusable, verifiable artifacts. This paper proposes Audited Skill-Graph Self-Improvement (ASG-SI), a framework that treats self-improvement as iterative compilation of an agent into a growing, auditable skill graph. Each candidate improvement is extracted from successful trajectories, normalized into a skill with an explicit interface, and promoted only after passing verifier-backed replay and contract checks. Rewards are decomposed into reconstructible components derived from replayable evidence, enabling independent audit of promotion decisions and learning signals. ASG-SI further integrates experience synthesis for scalable stress testing and continual memory control to preserve long-horizon performance under bounded context. We present a complete system architecture, threat model, and security analysis, and provide a fully runnable reference implementation that demonstrates verifier-backed reward construction, skill compilation, audit logging, and measurable improvement under continual task streams. ASG-SI reframes agentic self-improvement as accumulation of verifiable, reusable capabilities, offering a practical path toward reproducible evaluation and operational governance of self-improving AI agents.
https://arxiv.org/abs/2512.23760
Academic Papers
svg
0abfc82c61b4af430caae73ffa60cd273496b2d9b90bd49ea90ce5d7c0eac9f7
2026-01-01T00:00:00-05:00
Learning Coupled System Dynamics under Incomplete Physical Constraints and Missing Data
arXiv:2512.23761v1 Announce Type: new Abstract: Advances in data acquisition and computational methods have accelerated the use of differential equation based modelling for complex systems. Such systems are often described by two or more coupled variables, yet a governing equation is typically available for only one variable, while the remaining variables can be accessed only through data. This mismatch between known physics and observed data poses a fundamental challenge for existing physics-informed machine learning approaches, which generally assume either complete knowledge of the governing equations or full data availability across all variables. In this paper, we introduce MUSIC (Multitask Learning Under Sparse and Incomplete Constraints), a sparsity induced multitask neural network framework that integrates partial physical constraints with data-driven learning to recover full-dimensional solutions of coupled systems when physics-constrained and data-informed variables are mutually exclusive. MUSIC employs mesh-free (random) sampling of training data and sparsity regularization, yielding highly compressed models with improved training and evaluation efficiency. We demonstrate that MUSIC accurately learns solutions (shock wave solutions, discontinuous solutions, pattern formation solutions) to complex coupled systems under data-scarce and noisy conditions, consistently outperforming non-sparse formulations. These results highlight MUSIC as a flexible and effective approach for modeling partially observed systems with incomplete physical knowledge.
https://arxiv.org/abs/2512.23761
Academic Papers
svg
79ee5e65d99f7ef5ba5f8ea6be06a6d729e7b3954e7e5aea2de81623bb504017
2026-01-01T00:00:00-05:00
Drift-Based Dataset Stability Benchmark
arXiv:2512.23762v1 Announce Type: new Abstract: Machine learning (ML) represents an efficient and popular approach for network traffic classification. However, network traffic classification is a challenging domain, and trained models may degrade soon after deployment due to obsolete datasets and the rapid evolution of computer networks as new or updated protocols appear. Moreover, a significant change in the behavior of a traffic type (and, therefore, the underlying features representing the traffic) can produce a large and sudden performance drop of the deployed model, known as a data or concept drift. In most cases, complete retraining is performed, often without further investigation of root causes, as good dataset quality is assumed. However, this is not always the case and further investigation must be performed. This paper proposes a novel methodology to evaluate the stability of datasets and a benchmark workflow that can be used to compare datasets. The proposed framework is based on a concept drift detection method that also uses ML feature weights to boost the detection performance. The benefits of this work are demonstrated on the CESNET-TLS-Year22 dataset. We provide the initial dataset stability benchmark that is used to describe dataset stability and weak points to identify the next steps for optimization. Lastly, using the proposed benchmarking methodology, we show the optimization impact on the created dataset variants.
https://arxiv.org/abs/2512.23762
Academic Papers
svg
3b46999c9236561993e7c451f7bf927e3e51a672658a28025fef32532507f5e2
2026-01-01T00:00:00-05:00
Neural Optimal Design of Experiment for Inverse Problems
arXiv:2512.23763v1 Announce Type: new Abstract: We introduce Neural Optimal Design of Experiments (NODE), a learning-based framework for optimal experimental design in inverse problems that avoids classical bilevel optimization and indirect sparsity regularization. NODE jointly trains a neural reconstruction model and a fixed-budget set of continuous design variables representing sensor locations, sampling times, or measurement angles, within a single optimization loop. By optimizing measurement locations directly rather than weighting a dense grid of candidates, the proposed approach enforces sparsity by design, eliminates the need for l1 tuning, and substantially reduces computational complexity. We validate NODE on an analytically tractable exponential growth benchmark, on MNIST image sampling, and illustrate its effectiveness on a real-world sparse-view X-ray CT example. In all cases, NODE outperforms baseline approaches, demonstrating improved reconstruction accuracy and task-specific performance.
https://arxiv.org/abs/2512.23763
Academic Papers
svg
276c73c1c9fcd1626b3b19ff965d48503a3f160cd3ef340ef08c17a76552a50b
2026-01-01T00:00:00-05:00
Exploring Cumulative Effects in Survival Data Using Deep Learning Networks
arXiv:2512.23764v1 Announce Type: new Abstract: In epidemiological research, modeling the cumulative effects of time-dependent exposures on survival outcomes presents a challenge due to their intricate temporal dynamics. Conventional spline-based statistical methods, though effective, require repeated data transformation for each spline parameter tuning, with survival analysis computations relying on the entire dataset, posing difficulties for large datasets. Meanwhile, existing neural network-based survival analysis methods focus on accuracy but often overlook the interpretability of cumulative exposure patterns. To bridge this gap, we introduce CENNSurv, a novel deep learning approach that captures dynamic risk relationships from time-dependent data. Evaluated on two diverse real-world datasets, CENNSurv revealed a multi-year lagged association between chronic environmental exposure and a critical survival outcome, as well as a critical short-term behavioral shift prior to subscription lapse. This demonstrates CENNSurv's ability to model complex temporal patterns with improved scalability. CENNSurv provides researchers studying cumulative effects a practical tool with interpretable insights.
https://arxiv.org/abs/2512.23764
Academic Papers
svg
905959d75fd15d668b556ab9d3a94670343d4bcb3f9c3c4153cf9175e17f724f
2026-01-01T00:00:00-05:00
Entropy-Aware Speculative Decoding Toward Improved LLM Reasoning
arXiv:2512.23765v1 Announce Type: new Abstract: Speculative decoding (SD) accelerates large language model (LLM) reasoning by using a small draft model to generate candidate tokens, which the target LLM either accepts directly or regenerates upon rejection. However, excessive alignment between the draft and target models constrains SD to the performance of the target LLM. To address this limitation, we propose Entropy-Aware Speculative Decoding (EASD), a training-free enhancement. Building on standard SD, EASD incorporates a dynamic entropy-based penalty. At each decoding step, we employ the entropy of the sampling distribution to quantify model uncertainty. When both models exhibit high entropy with substantial overlap among their top-N predictions, the corresponding token is rejected and re-sampled by the target LLM. This penalty prevents low-confidence errors from propagating. By incorporating draft-model verification, EASD enables the possibility of surpassing the target model's inherent performance. Experiments across multiple reasoning benchmarks demonstrate that EASD consistently outperforms existing SD methods and, in most cases, surpasses the target LLM itself. We further prove that the efficiency of EASD is comparable to that of SD. The code can be found in the Supplementary Materials.
https://arxiv.org/abs/2512.23765
Academic Papers
svg
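The rejection rule EASD describes, as we read the abstract, is: reject a draft token and re-sample from the target when both distributions are high-entropy and their top-N token sets overlap heavily, i.e. both models are unsure in the same way. A minimal sketch of that check; the thresholds and N below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a probability vector (clipped for log safety).
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def easd_reject(p_draft, p_target, n=5, h_thresh=2.0, overlap_thresh=0.6):
    # Reject (force target re-sampling) only when BOTH models are
    # high-entropy AND their top-n token sets largely overlap.
    # n, h_thresh, overlap_thresh are assumed, illustrative values.
    top_draft = set(np.argsort(p_draft)[-n:])
    top_target = set(np.argsort(p_target)[-n:])
    overlap = len(top_draft & top_target) / n
    return bool(entropy(p_draft) > h_thresh
                and entropy(p_target) > h_thresh
                and overlap >= overlap_thresh)
```

A near-uniform pair of distributions (both models guessing) triggers rejection, while a confident, peaked distribution passes through, so the penalty fires only on shared low-confidence steps rather than on ordinary draft/target disagreement.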
d9c64ca6a9b6f2707e43c4d2f3c5c9c438d284097a5446d248a0eefa420ba76b
2026-01-01T00:00:00-05:00
A Granular Grassmannian Clustering Framework via the Schubert Variety of Best Fit
arXiv:2512.23766v1 Announce Type: new Abstract: In many classification and clustering tasks, it is useful to compute a geometric representative for a dataset or a cluster, such as a mean or median. When datasets are represented by subspaces, these representatives become points on the Grassmann or flag manifold, with distances induced by their geometry, often via principal angles. We introduce a subspace clustering algorithm that replaces subspace means with a trainable prototype defined as a Schubert Variety of Best Fit (SVBF) - a subspace that comes as close as possible to intersecting each cluster member in at least one fixed direction. Integrated in the Linde-Buzo-Gray (LBG) pipeline, this SVBF-LBG scheme yields improved cluster purity on synthetic, image, spectral, and video action data, while retaining the mathematical structure required for downstream analysis.
https://arxiv.org/abs/2512.23766
Academic Papers
svg
7fe5b66c78a4267e97feb7a803d2bfb9795eeb9a3d05ad60903ad73c4472520e
2026-01-01T00:00:00-05:00
Enabling Physical AI at the Edge: Hardware-Accelerated Recovery of System Dynamics
arXiv:2512.23767v1 Announce Type: new Abstract: Physical AI at the edge -- enabling autonomous systems to understand and predict real-world dynamics in real time -- requires hardware-efficient learning and inference. Model recovery (MR), which identifies governing equations from sensor data, is a key primitive for safe and explainable monitoring in mission-critical autonomous systems operating under strict latency, compute, and power constraints. However, state-of-the-art MR methods (e.g., EMILY and PINN+SR) rely on Neural ODE formulations that require iterative solvers and are difficult to accelerate efficiently on edge hardware. We present MERINDA (Model Recovery in Reconfigurable Dynamic Architecture), an FPGA-accelerated MR framework designed to make physical AI practical on resource-constrained devices. MERINDA replaces expensive Neural ODE components with a hardware-friendly formulation that combines (i) GRU-based discretized dynamics, (ii) dense inverse-ODE layers, (iii) sparsity-driven dropout, and (iv) lightweight ODE solvers. The resulting computation is structured for streaming parallelism, enabling critical kernels to be fully parallelized on the FPGA. Across four benchmark nonlinear dynamical systems, MERINDA delivers substantial gains over GPU implementations: 114x lower energy (434 J vs. 49,375 J), 28x smaller memory footprint (214 MB vs. 6,118 MB), and 1.68x faster training, while matching state-of-the-art model-recovery accuracy. These results demonstrate that MERINDA can bring accurate, explainable MR to the edge for real-time monitoring of autonomous systems.
https://arxiv.org/abs/2512.23767
Academic Papers
svg
0a69ddbab3cd2fa827b6f129470f27c28b83d838f828c468d9ef6fe359a63f13
2026-01-01T00:00:00-05:00
VGC: A High-Performance Zone-Based Garbage Collector Architecture for Python with Partitioning and Parallel Execution
arXiv:2512.23768v1 Announce Type: new Abstract: The Virtual Garbage Collector (VGC) introduces a novel memory management framework designed to optimize performance across diverse systems, ranging from resource-constrained embedded devices to high-performance parallel architectures. Unlike conventional garbage collectors, VGC employs a dual-layer architecture consisting of Active VGC and Passive VGC to enable efficient, low-overhead memory management. Active VGC dynamically manages runtime objects using a concurrent mark-and-sweep strategy tailored for parallel workloads, reducing pause times by up to 30 percent compared to generational collectors in multithreaded benchmarks. Passive VGC operates at compile time and optimizes static object allocation through predictive memory mapping, minimizing fragmentation by aligning objects to cache boundaries. This separation of responsibilities ensures predictable memory access patterns, reduces total memory usage by up to 25 percent, and improves scalability for modern parallel applications. By integrating compile-time and runtime optimizations, VGC provides a robust and adaptable solution for memory-intensive systems across both low-level and high-level programming environments.
https://arxiv.org/abs/2512.23768
Academic Papers
svg
06b9302578a892bbea03656f388e6f8e218e57dd7858a84d306e535fd26c6002
2026-01-01T00:00:00-05:00
Uncovering Discrimination Clusters: Quantifying and Explaining Systematic Fairness Violations
arXiv:2512.23769v1 Announce Type: new Abstract: Fairness in algorithmic decision-making is often framed in terms of individual fairness, which requires that similar individuals receive similar outcomes. A system violates individual fairness if there exists a pair of inputs differing only in protected attributes (such as race or gender) that lead to significantly different outcomes (for example, one favorable and the other unfavorable). While this notion highlights isolated instances of unfairness, it fails to capture broader patterns of systematic or clustered discrimination that may affect entire subgroups. We introduce and motivate the concept of discrimination clustering, a generalization of individual fairness violations. Rather than detecting single counterfactual disparities, we seek to uncover regions of the input space where small perturbations in protected features lead to k-significantly distinct clusters of outcomes. That is, for a given input, we identify a local neighborhood, differing only in protected attributes, whose members' outputs separate into many distinct clusters. These clusters reveal significant arbitrariness in treatment solely based on protected attributes that help expose patterns of algorithmic bias that elude pairwise fairness checks. We present HyFair, a hybrid technique that combines formal symbolic analysis (via SMT and MILP solvers) to certify individual fairness with randomized search to discover discriminatory clusters. This combination enables both formal guarantees (when no counterexamples exist) and the detection of severe violations that are computationally challenging for symbolic methods alone. Given a set of inputs exhibiting high k-unfairness, we introduce a novel explanation method to generate interpretable, decision-tree-style artifacts. Our experiments demonstrate that HyFair outperforms state-of-the-art fairness verification and local explanation methods.
https://arxiv.org/abs/2512.23769
Academic Papers
svg
ede5c2df40f4159fedb8829619bfe62bfa19b3e4f18fe6a6c06c83547021f616
2026-01-01T00:00:00-05:00
Safety-Biased Policy Optimisation: Towards Hard-Constrained Reinforcement Learning via Trust Regions
arXiv:2512.23770v1 Announce Type: new Abstract: Reinforcement learning (RL) in safety-critical domains requires agents to maximise rewards while strictly adhering to safety constraints. Existing approaches, such as Lagrangian and projection-based methods, often either fail to ensure near-zero safety violations or sacrifice reward performance in the face of hard constraints. We propose Safety-Biased Trust Region Policy Optimisation (SB-TRPO), a new trust-region algorithm for hard-constrained RL. SB-TRPO adaptively biases policy updates towards constraint satisfaction while still seeking reward improvement. Concretely, it performs trust-region updates using a convex combination of the natural policy gradients of cost and reward, ensuring a fixed fraction of optimal cost reduction at each step. We provide a theoretical guarantee of local progress towards safety, with reward improvement when gradients are suitably aligned. Experiments on standard and challenging Safety Gymnasium tasks show that SB-TRPO consistently achieves the best balance of safety and meaningful task completion compared to state-of-the-art methods.
https://arxiv.org/abs/2512.23770
Academic Papers
svg
3a190472e8db4cba2c779093319aaaf953b2a743c023091105d1a053c9c8e877
2026-01-01T00:00:00-05:00
FineFT: Efficient and Risk-Aware Ensemble Reinforcement Learning for Futures Trading
arXiv:2512.23773v1 Announce Type: new Abstract: Futures are contracts obligating the exchange of an asset at a predetermined date and price; notable for their high leverage and liquidity, they thrive in the crypto market. RL has been widely applied in various quantitative tasks. However, most methods focus on the spot market and cannot be directly applied to the futures market with high leverage because of two challenges. First, high leverage amplifies reward fluctuations, making training stochastic and difficult to converge. Second, prior works lacked self-awareness of capability boundaries, exposing them to the risk of significant loss when encountering new market states (e.g., a black swan event like COVID-19). To tackle these challenges, we propose the Efficient and Risk-Aware Ensemble Reinforcement Learning for Futures Trading (FineFT), a novel three-stage ensemble RL framework with stable training and proper risk management. In stage I, ensemble Q-learners are selectively updated by ensemble TD errors to improve convergence. In stage II, we filter the Q-learners based on their profitability and train VAEs on market states to identify the capability boundaries of the learners. In stage III, we choose from the filtered ensemble and a conservative policy, guided by trained VAEs, to maintain profitability and mitigate risk in new market states. Through extensive experiments on crypto futures in a high-frequency trading environment with high fidelity and 5x leverage, we demonstrate that FineFT outperforms 12 SOTA baselines on 6 financial metrics, reducing risk by more than 40% while achieving superior profitability compared to the runner-up. Visualization of the selective update mechanism shows that different agents specialize in distinct market dynamics, and ablation studies certify that routing with VAEs reduces maximum drawdown effectively and that selective update improves convergence and performance.
https://arxiv.org/abs/2512.23773
Academic Papers
svg
9c8fa00ca51369e6ed9720288f91c8d6226e5ee4f0c49ce4403fd715bb7d2a5d
2026-01-01T00:00:00-05:00
Secure and Governed API Gateway Architectures for Multi-Cluster Cloud Environments
arXiv:2512.23774v1 Announce Type: new Abstract: API gateways serve as critical enforcement points for security, governance, and traffic management in cloud-native systems. As organizations increasingly adopt multi-cluster and hybrid cloud deployments, maintaining consistent policy enforcement, predictable performance, and operational stability across heterogeneous gateway environments becomes challenging. Existing approaches typically manage security, governance, and performance as loosely coupled concerns, leading to configuration drift, delayed policy propagation, and unstable runtime behavior under dynamic workloads. This paper presents a governance-aware, intent-driven architecture for coordinated API gateway management in multi-cluster cloud environments. The proposed approach expresses security, governance, and performance objectives as high-level declarative intents, which are systematically translated into enforceable gateway configurations and continuously validated through policy verification and telemetry-driven feedback. By decoupling intent specification from enforcement while enabling bounded, policy-compliant adaptation, the architecture supports heterogeneous gateway implementations without compromising governance guarantees or service-level objectives. A prototype implementation across multiple Kubernetes clusters demonstrates the effectiveness of the proposed design. Experimental results show up to a 42% reduction in policy drift, a 31% improvement in configuration propagation time, and sustained p95 latency overhead below 6% under variable workloads, compared to manual and declarative baseline approaches. These results indicate that governance-aware, intent-driven gateway orchestration provides a scalable and reliable foundation for secure, consistent, and performance-predictable cloud-native platforms.
https://arxiv.org/abs/2512.23774
Academic Papers
svg
372aa4863ad2e15cf2462f7de84619c0a19c52dcd0293c95adce2f1d27c4197b
2026-01-01T00:00:00-05:00
A Survey on Graph Neural Networks for Fraud Detection in Ride Hailing Platforms
arXiv:2512.23777v1 Announce Type: new Abstract: This study investigates fraud detection in ride-hailing platforms through Graph Neural Networks (GNNs), focusing on the effectiveness of various models. By analyzing prevalent fraudulent activities, the research highlights and compares existing work on fraud detection that can be useful when addressing fraudulent incidents within online ride-hailing platforms. The paper also discusses approaches to class imbalance and fraudulent camouflage. It further outlines a structured overview of GNN architectures and methodologies applied to anomaly detection, identifying significant methodological progress and gaps. The paper calls for further exploration into real-world applicability and technical improvements to enhance fraud detection strategies in the rapidly evolving ride-hailing industry.
https://arxiv.org/abs/2512.23777
Academic Papers
svg
4a98074a7cefbb4d1974783f970234c235082fd7f8e05d231ee737acb97629c5
2026-01-01T00:00:00-05:00
SyncGait: Robust Long-Distance Authentication for Drone Delivery via Implicit Gait Behaviors
arXiv:2512.23778v1 Announce Type: new Abstract: In recent years, drone delivery, which utilizes unmanned aerial vehicles (UAVs) for package delivery and pickup, has gradually emerged as a crucial method in logistics. Since delivery drones are expensive and may carry valuable packages, they must maintain a safe distance from individuals until user-drone mutual authentication is confirmed. Despite numerous authentication schemes being developed, existing solutions are limited in authentication distance and lack resilience against sophisticated attacks. To this end, we introduce SyncGait, an implicit gait-based mutual authentication system for drone delivery. SyncGait leverages the user's unique arm swing as they walk toward the drone to achieve mutual authentication without requiring additional hardware or specific authentication actions. We conducted extensive experiments on 14 datasets collected from 31 subjects. The results demonstrate that SyncGait achieves an average accuracy of 99.84\% at a long distance ($>18m$) and exhibits strong resilience against various spoofing attacks, making it a robust, secure, and user-friendly solution in real-world scenarios.
https://arxiv.org/abs/2512.23778
Academic Papers
svg
0452c1e48ee74c0ca53ee5c85c6c9878c39267c4c6a8de8709cb691043a01b95
2026-01-01T00:00:00-05:00
Prompt-Induced Over-Generation as Denial-of-Service: A Black-Box Attack-Side Benchmark
arXiv:2512.23779v1 Announce Type: new Abstract: Large language models (LLMs) can be driven into over-generation, emitting thousands of tokens before producing an end-of-sequence (EOS) token. This degrades answer quality, inflates latency and cost, and can be weaponized as a denial-of-service (DoS) attack. Recent work has begun to study DoS-style prompt attacks, but typically focuses on a single attack algorithm or assumes white-box access, without an attack-side benchmark that compares prompt-based attackers in a black-box, query-only regime with a known tokenizer. We introduce such a benchmark and study two prompt-only attackers. The first is Evolutionary Over-Generation Prompt Search (EOGen), which searches the token space for prefixes that suppress EOS and induce long continuations. The second is a goal-conditioned reinforcement learning attacker (RL-GOAL) that trains a network to generate prefixes conditioned on a target length. To characterize behavior, we introduce Over-Generation Factor (OGF), the ratio of produced tokens to a model's context window, along with stall and latency summaries. Our evolutionary attacker achieves mean OGF = 1.38 +/- 1.15 and Success@OGF >= 2 of 24.5 percent on Phi-3. RL-GOAL is stronger: across victims it achieves higher mean OGF (up to 2.81 +/- 1.38).
https://arxiv.org/abs/2512.23779
Academic Papers
svg
0a9ed151b200d5844c79a3c76deadb4fb67f5a7222b6311bcc29c700e017deb3
2026-01-01T00:00:00-05:00
Test Case Specification Techniques and System Testing Tools in the Automotive Industry: A Review
arXiv:2512.23780v1 Announce Type: new Abstract: The automotive domain is shifting to software-centric development to meet regulation, market pressure, and feature velocity. This shift increases embedded systems' complexity and strains testing capacity. Despite relevant standards, a coherent system-testing methodology that spans heterogeneous, legacy-constrained toolchains remains elusive, and practice often depends on individual expertise rather than a systematic strategy. We derive challenges and requirements from a systematic literature review (SLR), complemented by industry experience and practice. We map them to test case specification techniques and testing tools, evaluating their suitability for automotive testing using PRISMA. Our contribution is a curated catalog that supports technique/tool selection and can inform future testing frameworks and improvements. We synthesize nine recurring challenge areas across the life cycle, such as requirements quality and traceability, variability management, and toolchain fragmentation. We then provide a prioritized criteria catalog that recommends model-based planning, interoperable and traceable toolchains, requirements uplift, pragmatic automation and virtualization, targeted AI and formal methods, actionable metrics, and lightweight organizational practices.
https://arxiv.org/abs/2512.23780
Academic Papers
svg
73a88b48be8c53bbfa0ce373f2dc8c25a4b6951c9eea5f69c6b83aded1f0835e
2026-01-01T00:00:00-05:00
Personalized Promotions in Practice: Dynamic Allocation and Reference Effects
arXiv:2512.23781v1 Announce Type: new Abstract: Partnering with a large online retailer, we consider the problem of sending daily personalized promotions to a userbase of over 20 million customers. We propose an efficient policy for determining, every day, the promotion that each customer should receive (10%, 12%, 15%, 17%, or 20% off), while respecting global allocation constraints. This policy was successfully deployed to see a 4.5% revenue increase during an A/B test, by better targeting promotion-sensitive customers and also learning intertemporal patterns across customers. We also consider theoretically modeling the intertemporal state of the customer. The data suggests a simple new combinatorial model of pricing with reference effects, where the customer remembers the best promotion they saw over the past $\ell$ days as the "reference value", and is more likely to purchase if this value is poor. We tightly characterize the structure of optimal policies for maximizing long-run average revenue under this model -- they cycle between offering poor promotion values $\ell$ times and offering good values once.
https://arxiv.org/abs/2512.23781
Academic Papers
svg
b612090a0f0de07eb2c4d7bfde4b1471d632277490660325597171211fc01c6a
2026-01-01T00:00:00-05:00
A Systematic Mapping on Software Fairness: Focus, Trends and Industrial Context
arXiv:2512.23782v1 Announce Type: new Abstract: Context: Fairness in systems has emerged as a critical concern in software engineering, garnering increasing attention as the field has advanced in recent years. While several guidelines have been proposed to address fairness, achieving a comprehensive understanding of research solutions for ensuring fairness in software systems remains challenging. Objectives: This paper presents a systematic literature mapping to explore and categorize current advancements in fairness solutions within software engineering, focusing on three key dimensions: research trends, research focus, and viability in industrial contexts. Methods: We develop a classification framework to organize research on software fairness from a fresh perspective, applying it to 95 selected studies and analyzing their potential for industrial adoption. Results: Our findings reveal that software fairness research is expanding, yet it remains heavily focused on methods and algorithms. It primarily focuses on post-processing and group fairness, with less emphasis on early-stage interventions, individual fairness metrics, and understanding bias root causes. Additionally, fairness research remains largely academic, with limited industry collaboration and low to medium Technology Readiness Level (TRL), indicating that industrial transferability remains distant. Conclusion: Our results underscore the need to incorporate fairness considerations across all stages of the software development life-cycle and to foster greater collaboration between academia and industry. This analysis provides a comprehensive overview of the field, offering a foundation to guide future research and practical applications of fairness in software systems.
https://arxiv.org/abs/2512.23782
Academic Papers
svg
28dab6e4a77377ab4f3a3ac8202844cfa2db2eb4ced801bae19bfa298a5af5f4
2026-01-01T00:00:00-05:00
Application-Specific Power Side-Channel Attacks and Countermeasures: A Survey
arXiv:2512.23785v1 Announce Type: new Abstract: Side-channel attacks try to extract secret information from a system by analyzing different side-channel signatures, such as power consumption, electromagnetic emanation, thermal dissipation, acoustics, time, etc. Power-based side-channel attacks are among the most prominent side-channel attacks in cybersecurity, relying on data-dependent power variations in a system to extract sensitive information. While there are related surveys, they primarily focus on power side-channel attacks on cryptographic implementations. In recent years, power side-channel attacks have been explored in diverse application domains, including key extraction from cryptographic implementations, reverse engineering of machine learning models, user behavior data exploitation, and instruction-level disassembly. In this paper, we provide a comprehensive survey of power side-channel attacks and their countermeasures in different application domains. Specifically, this survey aims to classify recent power side-channel attacks and provide a comprehensive comparison based on application-specific considerations.
https://arxiv.org/abs/2512.23785
Academic Papers
svg
89681de9899d294d448f359c2d0b8dbad0f094df676ba139dc4e80a7abe43067
2026-01-01T00:00:00-05:00
Leveraging Synthetic Priors for Monocular Depth Estimation in Specular Surgical Environments
arXiv:2512.23786v1 Announce Type: new Abstract: Accurate Monocular Depth Estimation (MDE) is critical for robotic surgery but remains fragile in specular, fluid-filled endoscopic environments. Existing self-supervised methods, typically relying on foundation models trained with noisy real-world pseudo-labels, often suffer from boundary collapse on thin surgical tools and transparent surfaces. In this work, we address this by leveraging the high-fidelity synthetic priors of the Depth Anything V2 architecture, which inherently captures precise geometric details of thin structures. We efficiently adapt these priors to the medical domain using Dynamic Vector Low-Rank Adaptation (DV-LORA), minimizing the parameter budget while bridging the synthetic-to-real gap. Additionally, we introduce a physically-stratified evaluation protocol on the SCARED dataset to rigorously quantify performance in high-specularity regimes often masked by aggregate metrics. Our approach establishes a new state-of-the-art, achieving an accuracy (< 1.25) of 98.1% and reducing Squared Relative Error by over 17% compared to established baselines, demonstrating superior robustness in adverse surgical lighting.
https://arxiv.org/abs/2512.23786
Academic Papers
svg
c331efd40e91d41187d2bfdc978aee47e1abf6eaa6c5485d8933535ac391bb66
2026-01-01T00:00:00-05:00
TabMixNN: A Unified Deep Learning Framework for Structural Mixed Effects Modeling on Tabular Data
arXiv:2512.23787v1 Announce Type: new Abstract: We present TabMixNN, a flexible PyTorch-based deep learning framework that synthesizes classical mixed-effects modeling with modern neural network architectures for tabular data analysis. TabMixNN addresses the growing need for methods that can handle hierarchical data structures while supporting diverse outcome types including regression, classification, and multitask learning. The framework implements a modular three-stage architecture: (1) a mixed-effects encoder with variational random effects and flexible covariance structures, (2) backbone architectures including Generalized Structural Equation Models (GSEM) and spatial-temporal manifold networks, and (3) outcome-specific prediction heads supporting multiple outcome families. Key innovations include an R-style formula interface for accessibility, support for directed acyclic graph (DAG) constraints for causal structure learning, Stochastic Partial Differential Equation (SPDE) kernels for spatial modeling, and comprehensive interpretability tools including SHAP values and variance decomposition. We demonstrate the framework's flexibility through applications to longitudinal data analysis, genomic prediction, and spatial-temporal modeling. TabMixNN provides a unified interface for researchers to leverage deep learning while maintaining the interpretability and theoretical grounding of classical mixed-effects models.
https://arxiv.org/abs/2512.23787
Academic Papers
svg
32aaa2564cf277c0f66b6b66ee005549653de7b7c8c2bb2fbab1f9bb1e041a02
2026-01-01T00:00:00-05:00
VBSF: A Visual-Based Spam Filtering Technique for Obfuscated Emails
arXiv:2512.23788v1 Announce Type: new Abstract: Recent spam email techniques exploit visual effects in text messages, such as poisoning text, obfuscating words, and hidden text salting techniques. These effects were able to evade spam detection techniques based on the text. In this paper, we overcome this limitation by introducing a novel visual-based spam detection architecture, denoted as visual-based spam filter (VBSF). The multi-step process mimics the human eye's natural way of processing visual information, automatically rendering incoming emails and capturing their content as it appears on a user screen. Then, two different processing pipelines are applied in parallel. The first pipeline pertains to the perceived textual content, as it includes optical character recognition (OCR) to extract rendered textual content, followed by naive Bayes (NB) and decision tree (DT) content classifiers. The second pipeline focuses on the appearance of the email, as it analyzes and classifies the images of rendered emails through a specific convolutional neural network. Lastly, a meta classifier integrates text- and image-based classifier outputs, exploiting the stacking ensemble learning method. The performance of the proposed VBSF is assessed, showing that it achieves an accuracy of more than 98%, which is higher than the compared existing techniques on the designed dataset.
https://arxiv.org/abs/2512.23788
Academic Papers
svg
abbc2e532f1a6247719b70d2fb84cf4aab2183ea92c6ffba5a2fea2d0ee5ae25
2026-01-01T00:00:00-05:00
A note on the space-time variational formulation for the wave equation with source term in $L^2(Q)$
arXiv:2512.23807v1 Announce Type: new Abstract: We derive a variational formulation for the scalar wave equation in the second-order formulation on bounded Lipschitz domains and homogeneous initial conditions. We investigate a variational framework in a bounded space-time cylinder $Q$ with a new solution space and the test space $L^2(Q)$ for source terms in $L^2(Q)$. Using existence and uniqueness results in $H^1(Q)$, we prove that this variational setting fits the inf-sup theory, including an isomorphism as solution operator. Moreover, we show that the new solution space is not a subspace of $H^2(Q)$. This new uniqueness and solvability result is not only crucial for discretizations using space-time methods, including least-squares approaches, but also important for regularity results and the analysis of related space-time boundary integral equations, which form the basis for space-time boundary element methods.
https://arxiv.org/abs/2512.23807
Academic Papers
svg
baf237a45288267adfa139612d2158e88e15c4893f6ea6e72d3e14d21c633fd7
2026-01-01T00:00:00-05:00
MiMo-Audio: Audio Language Models are Few-Shot Learners
arXiv:2512.23808v1 Announce Type: new Abstract: Existing audio language models typically rely on task-specific fine-tuning to accomplish particular audio tasks. In contrast, humans are able to generalize to new audio tasks with only a few examples or simple instructions. GPT-3 has shown that scaling next-token prediction pretraining enables strong generalization capabilities in text, and we believe this paradigm is equally applicable to the audio domain. By scaling MiMo-Audio's pretraining data to over one hundred million hours, we observe the emergence of few-shot learning capabilities across a diverse set of audio tasks. We develop a systematic evaluation of these capabilities and find that MiMo-Audio-7B-Base achieves SOTA performance on both speech intelligence and audio understanding benchmarks among open-source models. Beyond standard metrics, MiMo-Audio-7B-Base generalizes to tasks absent from its training data, such as voice conversion, style transfer, and speech editing. MiMo-Audio-7B-Base also demonstrates powerful speech continuation capabilities, capable of generating highly realistic talk shows, recitations, livestreaming and debates. At the post-training stage, we curate a diverse instruction-tuning corpus and introduce thinking mechanisms into both audio understanding and generation. MiMo-Audio-7B-Instruct achieves open-source SOTA on audio understanding benchmarks (MMSU, MMAU, MMAR, MMAU-Pro), spoken dialogue benchmarks (Big Bench Audio, MultiChallenge Audio) and instruct-TTS evaluations, approaching or surpassing closed-source models. Model checkpoints and the full evaluation suite are available at https://github.com/XiaomiMiMo/MiMo-Audio.
https://arxiv.org/abs/2512.23808
Academic Papers
svg
3607c653a44a9020c989d0079804b37defdd99ea86dd8397950b5f55cf010cff
2026-01-01T00:00:00-05:00
Zero-Trust Agentic Federated Learning for Secure IIoT Defense Systems
arXiv:2512.23809v1 Announce Type: new Abstract: Recent attacks on critical infrastructure, including the 2021 Oldsmar water treatment breach and 2023 Danish energy sector compromises, highlight urgent security gaps in Industrial IoT (IIoT) deployments. While Federated Learning (FL) enables privacy-preserving collaborative intrusion detection, existing frameworks remain vulnerable to Byzantine poisoning attacks and lack robust agent authentication. We propose Zero-Trust Agentic Federated Learning (ZTA-FL), a defense-in-depth framework combining: (1) TPM-based cryptographic attestation achieving a false acceptance rate below 0.0000001, (2) a novel SHAP-weighted aggregation algorithm providing explainable Byzantine detection under non-IID conditions with theoretical guarantees, and (3) privacy-preserving on-device adversarial training. Comprehensive experiments across three IDS benchmarks (Edge-IIoTset, CIC-IDS2017, UNSW-NB15) demonstrate that ZTA-FL achieves 97.8 percent detection accuracy, 93.2 percent accuracy under 30 percent Byzantine attacks (outperforming FLAME by 3.1 percent, p less than 0.01), and 89.3 percent adversarial robustness while reducing communication overhead by 34 percent. We provide theoretical analysis, failure mode characterization, and release code for reproducibility.
https://arxiv.org/abs/2512.23809
Academic Papers
svg
c49ce83e3eed0f052c4d9566f6bca37147e35f946836aacd23841e38b3099f15
2026-01-01T00:00:00-05:00
StressRoBERTa: Cross-Condition Transfer Learning from Depression, Anxiety, and PTSD to Stress Detection
arXiv:2512.23813v1 Announce Type: new Abstract: The prevalence of chronic stress represents a significant public health concern, with social media platforms like Twitter serving as important venues for individuals to share their experiences. This paper introduces StressRoBERTa, a cross-condition transfer learning approach for automatic detection of self-reported chronic stress in English tweets. The investigation examines whether continual training on clinically related conditions (depression, anxiety, PTSD), disorders with high comorbidity with chronic stress, improves stress detection compared to general language models and broad mental health models. RoBERTa is continually trained on the Stress-SMHD corpus (108M words from users with self-reported diagnoses of depression, anxiety, and PTSD) and fine-tuned on the SMM4H 2022 Task 8 dataset. StressRoBERTa achieves 82% F1-score, outperforming the best shared task system (79% F1) by 3 percentage points. The results demonstrate that focused cross-condition transfer from stress-related disorders (+1% F1 over vanilla RoBERTa) provides stronger representations than general mental health training. Evaluation on Dreaddit (81% F1) further demonstrates transfer from clinical mental health contexts to situational stress discussions.
https://arxiv.org/abs/2512.23813
Academic Papers
svg
64259469f2608dd9096c43e0a74a2c0934fd5a4a7159208408a30987fb541d79
2026-01-01T00:00:00-05:00
Greedy Rational Approximation for Frequency-Domain Model Reduction of Parametric LTI Systems
arXiv:2512.23814v1 Announce Type: new Abstract: We investigate model reduction of parametric linear time-invariant (LTI) dynamical systems. When posed in the frequency domain, this problem can be formulated as seeking a low-order rational function approximation of a high-order rational function. We propose to use a standard reduced basis method (RBM) to construct this low-order rational function. Algorithmically, this procedure is an iterative greedy approach, where the greedy objective is evaluated through an error estimator that exploits the linearity of the frequency domain representation. The greedy framework is motivated through theoretical results of rational approximability of functions. This framework provides a principled approach to rational compression of high-order rational functions, and provides a computational pathway for model reduction of parametric LTI systems.
https://arxiv.org/abs/2512.23814
Academic Papers
svg
a4ef16cb6bf5a658293539ad1235efce7e93509d68b1ef17da1dfdbf51bd3743
2026-01-01T00:00:00-05:00
Improved Bounds for Private and Robust Alignment
arXiv:2512.23816v1 Announce Type: new Abstract: In this paper, we study the private and robust alignment of language models from a theoretical perspective by establishing upper bounds on the suboptimality gap in both offline and online settings. We consider preference labels subject to privacy constraints and/or adversarial corruption, and analyze two distinct interplays between them: privacy-first and corruption-first. For the privacy-only setting, we show that log loss with an MLE-style algorithm achieves near-optimal rates, in contrast to conventional wisdom. For the joint privacy-and-corruption setting, we first demonstrate that existing offline algorithms in fact provide stronger guarantees -- simultaneously in terms of corruption level and privacy parameters -- than previously known, which further yields improved bounds in the corruption-only regime. In addition, we also present the first set of results for private and robust online alignment. Our results are enabled by new uniform convergence guarantees for log loss and square loss under privacy and corruption, which we believe have broad applicability across learning theory and statistics.
https://arxiv.org/abs/2512.23816
Academic Papers
svg
4143aab700b864b347468ce89a4116a15069e2766961f37a4c7d27e6684ac091
2026-01-01T00:00:00-05:00
Video-Based Performance Evaluation for ECR Drills in Synthetic Training Environments
arXiv:2512.23819v1 Announce Type: new Abstract: Effective urban warfare training requires situational awareness and muscle memory, developed through repeated practice in realistic yet controlled environments. A key drill, Enter and Clear the Room (ECR), demands threat assessment, coordination, and securing confined spaces. The military uses Synthetic Training Environments that offer scalable, controlled settings for repeated exercises. However, automatic performance assessment remains challenging, particularly when aiming for objective evaluation of cognitive, psychomotor, and teamwork skills. Traditional methods often rely on costly, intrusive sensors or subjective human observation, limiting scalability and accuracy. This paper introduces a video-based assessment pipeline that derives performance analytics from training videos without requiring additional hardware. By utilizing computer vision models, the system extracts 2D skeletons, gaze vectors, and movement trajectories. From these data, we develop task-specific metrics that measure psychomotor fluency, situational awareness, and team coordination. These metrics feed into an extended Cognitive Task Analysis (CTA) hierarchy, which employs a weighted combination to generate overall performance scores for teamwork and cognition. We demonstrate the approach with a case study of real-world ECR drills, providing actionable, domain specific metrics that capture individual and team performance. We also discuss how these insights can support After Action Reviews with interactive dashboards within Gamemaster and the Generalized Intelligent Framework for Tutoring (GIFT), providing intuitive and understandable feedback. We conclude by addressing limitations, including tracking difficulties, ground-truth validation, and the broader applicability of our approach. Future work includes expanding analysis to 3D video data and leveraging video analysis to enable scalable evaluation within STEs.
https://arxiv.org/abs/2512.23819
Academic Papers
svg
0a467cc0c513f8d25f5204ec86c28792a86fc9f55da5a594e30c50b475993310
2026-01-01T00:00:00-05:00
MS-SSM: A Multi-Scale State Space Model for Efficient Sequence Modeling
arXiv:2512.23824v1 Announce Type: new Abstract: State-space models (SSMs) have recently gained attention as an efficient alternative to computationally expensive attention-based models for sequence modeling. They rely on linear recurrences to integrate information over time, enabling fast inference, parallelizable training, and control over recurrence stability. However, traditional SSMs often suffer from limited effective memory, requiring larger state sizes for improved recall. Moreover, existing SSMs struggle to capture multi-scale dependencies, which are essential for modeling complex structures in time series, images, and natural language. This paper introduces a multi-scale SSM framework that addresses these limitations by representing sequence dynamics across multiple resolutions and processing each resolution with specialized state-space dynamics. By capturing both fine-grained, high-frequency patterns and coarse, global trends, MS-SSM enhances memory efficiency and long-range modeling. We further introduce an input-dependent scale-mixer, enabling dynamic information fusion across resolutions. The proposed approach significantly improves sequence modeling, particularly in long-range and hierarchical tasks, while maintaining computational efficiency. Extensive experiments on benchmarks, including Long Range Arena, hierarchical reasoning, time series classification, and image recognition, demonstrate that MS-SSM consistently outperforms prior SSM-based models, highlighting the benefits of multi-resolution processing in state-space architectures.
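The multi-resolution idea can be pictured with a toy recurrence. Everything below (the decays, strides, and mean-based mixing) is an invented simplification for intuition, not the paper's architecture, which uses learned state-space dynamics per scale and an input-dependent mixer:

```python
# Toy multi-scale recurrence: run a simple linear SSM channel at several
# temporal resolutions, upsample, and mix. The real MS-SSM uses learned
# dynamics per scale and an input-dependent scale-mixer.

import numpy as np

def ssm_scan(x, a):
    """Simple linear recurrence h_t = a * h_{t-1} + x_t (one SSM channel)."""
    h, out = 0.0, []
    for xt in x:
        h = a * h + xt
        out.append(h)
    return np.array(out)

def multi_scale_ssm(x, decays=(0.5, 0.9, 0.99), strides=(1, 2, 4)):
    """Process downsampled copies of the sequence at several resolutions
    and average the upsampled outputs (a stand-in for the scale-mixer)."""
    T = len(x)
    outs = []
    for a, s in zip(decays, strides):
        coarse = ssm_scan(x[::s], a)             # coarse-resolution dynamics
        outs.append(np.repeat(coarse, s)[:T])    # nearest-neighbour upsample
    return np.mean(outs, axis=0)
```

The coarse scales carry slowly-decaying global context while the stride-1 scale tracks high-frequency detail, which is the intuition behind the paper's design.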
https://arxiv.org/abs/2512.23824
Academic Papers
svg
b3f08ae25d39f6ed9e253eb5e521db660b7a3b3c0fba230a432e4b30629398f9
2026-01-01T00:00:00-05:00
Deep learning methods for inverse problems using connections between proximal operators and Hamilton-Jacobi equations
arXiv:2512.23829v1 Announce Type: new Abstract: Inverse problems are important mathematical problems that seek to recover model parameters from noisy data. Since inverse problems are often ill-posed, they require regularization or incorporation of prior information about the underlying model or unknown variables. Proximal operators, ubiquitous in nonsmooth optimization, are central to this because they provide a flexible and convenient way to encode priors and build efficient iterative algorithms. They have also recently become key to modern machine learning methods, e.g., for plug-and-play methods for learned denoisers and deep neural architectures for learning priors of proximal operators. The latter was developed partly due to recent work characterizing proximal operators of nonconvex priors as subdifferential of convex potentials. In this work, we propose to leverage connections between proximal operators and Hamilton-Jacobi partial differential equations (HJ PDEs) to develop novel deep learning architectures for learning the prior. In contrast to other existing methods, we learn the prior directly without recourse to inverting the prior after training. We present several numerical results that demonstrate the efficiency of the proposed method in high dimensions.
https://arxiv.org/abs/2512.23829
Academic Papers
svg
a221699166c97960e87251db211bbfe4fcbd719b05fa898feebc6200e3f5bc8c
2026-01-01T00:00:00-05:00
Exploiting the Prior of Generative Time Series Imputation
arXiv:2512.23832v1 Announce Type: new Abstract: Time series imputation, i.e., filling the missing values of a time recording, finds various applications in electricity, finance, and weather modelling. Previous methods have introduced generative models such as diffusion probabilistic models and Schrodinger bridge models to conditionally generate the missing values from Gaussian noise or directly from linear interpolation results. However, as their prior is not informative about the ground-truth target, their generation process inevitably suffers an increased burden and limited imputation accuracy. In this work, we present Bridge-TS, building a data-to-data generation process for generative time series imputation and exploiting the design of prior with two novel designs. Firstly, we propose expert prior, leveraging a pretrained transformer-based module as an expert to fill the missing values with a deterministic estimation, and then taking the results as the prior of the ground-truth target. Secondly, we explore compositional priors, utilizing several pretrained models to provide different estimation results, and then combining them in the data-to-data generation process to achieve a compositional priors-to-target imputation process. Experiments conducted on several benchmark datasets such as ETT, Exchange, and Weather show that Bridge-TS reaches a new record of imputation accuracy in terms of mean square error and mean absolute error, demonstrating the superiority of improving the prior for generative time series imputation.
https://arxiv.org/abs/2512.23832
Academic Papers
svg
4ab9d928681d5348ec6a2c3eeb823ce28e766a0dd37f1c87facd06cb7e7b429f
2026-01-01T00:00:00-05:00
Artificial Intelligence for All? Brazilian Teachers on Ethics, Equity, and the Everyday Challenges of AI in Education
arXiv:2512.23834v1 Announce Type: new Abstract: This study examines the perceptions of Brazilian K-12 education teachers regarding the use of AI in education, specifically General Purpose AI. This investigation employs a quantitative analysis approach, extracting information from a questionnaire completed by 346 educators from various regions of Brazil regarding their AI literacy and use. Educators vary in their educational level, years of experience, and type of educational institution. The analysis of the questionnaires shows that although most educators had only basic or limited knowledge of AI (80.3%), they showed a strong interest in its application, particularly for the creation of interactive content (80.6%), lesson planning (80.2%), and personalized assessment (68.6%). The potential of AI to promote inclusion and personalized learning is also widely recognized (65.5%). The participants emphasized the importance of discussing ethics and digital citizenship, reflecting on technological dependence, biases, transparency, and responsible use of AI, aligning with critical education and the development of conscious students. Despite enthusiasm for the pedagogical potential of AI, significant structural challenges were identified, including a lack of training (43.4%), technical support (41.9%), and limitations of infrastructure, such as low access to computers, reliable Internet connections, and multimedia resources in schools. The study shows that Brazil is still in a bottom-up model for AI integration, missing official curricula to guide its implementation and structured training for teachers and students. Furthermore, effective implementation of AI depends on integrated public policies, adequate teacher training, and equitable access to technology, promoting ethical, inclusive, and contextually grounded adoption of AI in Brazilian K-12 education.
https://arxiv.org/abs/2512.23834
Academic Papers
svg
1bb8d367c25f515ac57982d4e700cd7dde8f99f70540a7f32246b6e8ce61b67a
2026-01-01T00:00:00-05:00
Explaining News Bias Detection: A Comparative SHAP Analysis of Transformer Model Decision Mechanisms
arXiv:2512.23835v1 Announce Type: new Abstract: Automated bias detection in news text is heavily used to support journalistic analysis and media accountability, yet little is known about how bias detection models arrive at their decisions or why they fail. In this work, we present a comparative interpretability study of two transformer-based bias detection models: a bias detector fine-tuned on the BABE dataset and a domain-adapted pre-trained RoBERTa model fine-tuned on the BABE dataset, using SHAP-based explanations. We analyze word-level attributions across correct and incorrect predictions to characterize how different model architectures operationalize linguistic bias. Our results show that although both models attend to similar categories of evaluative language, they differ substantially in how these signals are integrated into predictions. The bias detector model assigns stronger internal evidence to false positives than to true positives, indicating a misalignment between attribution strength and prediction correctness and contributing to systematic over-flagging of neutral journalistic content. In contrast, the domain-adaptive model exhibits attribution patterns that better align with prediction outcomes and produces 63% fewer false positives. We further demonstrate that model errors arise from distinct linguistic mechanisms, with false positives driven by discourse-level ambiguity rather than explicit bias cues. These findings highlight the importance of interpretability-aware evaluation for bias detection systems and suggest that architectural and training choices critically affect both model reliability and deployment suitability in journalistic contexts.
https://arxiv.org/abs/2512.23835
Academic Papers
svg
80e2ed64efd074ff9cb1af3b79ace3ae529a8a87f8cbad9157cbcf6d60d98b8c
2026-01-01T00:00:00-05:00
Retrieval Augmented Question Answering: When Should LLMs Admit Ignorance?
arXiv:2512.23836v1 Announce Type: new Abstract: The success of expanded context windows in Large Language Models (LLMs) has driven increased use of broader context in retrieval-augmented generation. We investigate the use of LLMs for retrieval augmented question answering. While longer contexts make it easier to incorporate targeted knowledge, they introduce more irrelevant information that hinders the model's generation process and degrades its performance. To address the issue, we design an adaptive prompting strategy which involves splitting the retrieved information into smaller chunks and sequentially prompting an LLM to answer the question using each chunk. Adjusting the chunk size allows a trade-off between incorporating relevant information and reducing irrelevant information. Experimental results on three open-domain question answering datasets demonstrate that the adaptive strategy matches the performance of standard prompting while using fewer tokens. Our analysis reveals that when encountering insufficient information, the LLM often generates incorrect answers instead of declining to respond, which constitutes a major source of error. This finding highlights the need for further research into enhancing LLMs' ability to effectively decline requests when faced with inadequate information.
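The chunk-and-prompt loop described above might look like the following sketch, where `ask_llm` is a hypothetical stand-in for any chat-completion call and the refusal string is a placeholder; the paper's actual prompt templates and stopping criterion are not specified here:

```python
# Sketch of adaptive chunked prompting: split retrieved passages into
# chunks, query the model one chunk at a time, and stop at the first
# chunk for which the model does not decline to answer.

def split_into_chunks(passages, chunk_size):
    """Group retrieved passages into chunks of at most `chunk_size` items."""
    return [passages[i:i + chunk_size] for i in range(0, len(passages), chunk_size)]

def answer_with_chunks(question, passages, chunk_size, ask_llm):
    """Sequentially prompt the model with one chunk at a time.

    Smaller chunks reduce irrelevant context per call; larger chunks
    raise the chance the relevant passage is present -- the trade-off
    the chunk size controls.
    """
    for chunk in split_into_chunks(passages, chunk_size):
        context = "\n".join(chunk)
        answer = ask_llm(f"Context:\n{context}\n\nQuestion: {question}")
        if answer.strip().lower() != "i don't know":
            return answer
    return "I don't know"
```

Because later chunks are only consulted when earlier ones fail, the expected token count per question drops relative to prompting with the full retrieved context at once.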
https://arxiv.org/abs/2512.23836
Academic Papers
svg
184a56b89a4006e9ff8f3023d09255f1f745b77e27dc7b608e72c97c1ef48909
2026-01-01T00:00:00-05:00
Adversarial Lens: Exploiting Attention Layers to Generate Adversarial Examples for Evaluation
arXiv:2512.23837v1 Announce Type: new Abstract: Recent advances in mechanistic interpretability suggest that intermediate attention layers encode token-level hypotheses that are iteratively refined toward the final output. In this work, we exploit this property to generate adversarial examples directly from attention-layer token distributions. Unlike prompt-based or gradient-based attacks, our approach leverages model-internal token predictions, producing perturbations that are both plausible and internally consistent with the model's own generation process. We evaluate whether tokens extracted from intermediate layers can serve as effective adversarial perturbations for downstream evaluation tasks. We conduct experiments on argument quality assessment using the ArgQuality dataset, with LLaMA-3.1-Instruct-8B serving as both the generator and evaluator. Our results show that attention-based adversarial examples lead to measurable drops in evaluation performance while remaining semantically similar to the original inputs. However, we also observe that substitutions drawn from certain layers and token positions can introduce grammatical degradation, limiting their practical effectiveness. Overall, our findings highlight both the promise and current limitations of using intermediate-layer representations as a principled source of adversarial examples for stress-testing LLM-based evaluation pipelines.
https://arxiv.org/abs/2512.23837
Academic Papers
svg
0bb98ee836abb9c7ecaccd38994487a5fdc647eb9cce419b996044c93ce7df44
2026-01-01T00:00:00-05:00
From Correctness to Collaboration: Toward a Human-Centered Framework for Evaluating AI Agent Behavior in Software Engineering
arXiv:2512.23844v1 Announce Type: new Abstract: As Large Language Models (LLMs) evolve from code generators into collaborative partners for software engineers, our methods for evaluation are lagging. Current benchmarks, focused on code correctness, fail to capture the nuanced, interactive behaviors essential for successful human-AI partnership. To bridge this evaluation gap, this paper makes two core contributions. First, we present a foundational taxonomy of desirable agent behaviors for enterprise software engineering, derived from an analysis of 91 sets of user-defined agent rules. This taxonomy defines four key expectations of agent behavior: Adhere to Standards and Processes, Ensure Code Quality and Reliability, Solving Problems Effectively, and Collaborating with the User. Second, recognizing that these expectations are not static, we introduce the Context-Adaptive Behavior (CAB) Framework. This emerging framework reveals how behavioral expectations shift along two empirically-derived axes: the Time Horizon (from immediate needs to future ideals), established through interviews with 15 expert engineers, and the Type of Work (from enterprise production to rapid prototyping, for example), identified through a prompt analysis of a prototyping agent. Together, these contributions offer a human-centered foundation for designing and evaluating the next generation of AI agents, moving the field's focus from the correctness of generated code toward the dynamics of true collaborative intelligence.
https://arxiv.org/abs/2512.23844
Academic Papers
svg
fffeba1afb49eae406a7276c8f4c3258ae11dbb900eea5fb777342c6b6b002c1
2026-01-01T00:00:00-05:00
Integrating Domain Knowledge for Financial QA: A Multi-Retriever RAG Approach with LLMs
arXiv:2512.23848v1 Announce Type: new Abstract: This research project addresses errors in financial numerical reasoning Question Answering (QA) tasks caused by a lack of domain knowledge in finance. Despite recent advances in Large Language Models (LLMs), financial numerical questions remain challenging because they require specific domain knowledge in finance and complex multi-step numeric reasoning. We implement a multi-retriever Retrieval-Augmented Generation (RAG) system to retrieve both external domain knowledge and internal question contexts, and utilize the latest LLM to tackle these tasks. Through comprehensive ablation experiments and error analysis, we find that domain-specific training with the SecBERT encoder significantly contributes to our best neural symbolic model surpassing the FinQA paper's top model, which serves as our baseline. This suggests the potential superior performance of domain-specific training. Furthermore, our best prompt-based LLM generator achieves state-of-the-art (SOTA) performance with significant improvement (>7%), yet it is still below human expert performance. This study highlights the trade-off between hallucination loss and external knowledge gains in smaller models and few-shot examples. For larger models, the gains from external facts typically outweigh the hallucination loss. Finally, our findings confirm the enhanced numerical reasoning capabilities of the latest LLM, optimized for few-shot learning.
https://arxiv.org/abs/2512.23848
Academic Papers
svg
142c4e30c648872cc791c4024c6dd7aa5ef9c6c6f16a15fdbdc1319f4d399385
2026-01-01T00:00:00-05:00
Security Without Detection: Economic Denial as a Primitive for Edge and IoT Defense
arXiv:2512.23849v1 Announce Type: new Abstract: Detection-based security fails against sophisticated attackers using encryption, stealth, and low-rate techniques, particularly in IoT/edge environments where resource constraints preclude ML-based intrusion detection. We present Economic Denial Security (EDS), a detection-independent framework that makes attacks economically infeasible by exploiting a fundamental asymmetry: defenders control their environment while attackers cannot. EDS composes four mechanisms (adaptive computational puzzles, decoy-driven interaction entropy, temporal stretching, and bandwidth taxation), achieving provably superlinear cost amplification. We formalize EDS as a Stackelberg game, deriving closed-form equilibria for optimal parameter selection (Theorem 1) and proving that mechanism composition yields 2.1x greater costs than the sum of individual mechanisms (Theorem 2). EDS requires < 12KB memory, enabling deployment on ESP32-class microcontrollers. Evaluation on a 20-device heterogeneous IoT testbed across four attack scenarios (n = 30 trials, p < 0.001) demonstrates: 32-560x attack slowdown, 85-520:1 cost asymmetry, 8-62% attack success reduction, < 20ms latency overhead, and close to 0% false positives. Validation against IoT-23 malware (Mirai, Torii, Hajime) shows 88% standalone mitigation; combined with ML-IDS, EDS achieves 94% mitigation versus 67% for IDS alone, a 27% improvement. EDS provides detection-independent protection suitable for resource-constrained environments where traditional approaches fail. The ability to detect and mitigate the malware samples tested was enhanced; however, the benefits provided by EDS were realized even without the inclusion of an IDS. Overall, the implementation of EDS serves to shift the economic balance in favor of the defender and provides a viable method to protect IoT and edge systems.
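One of the four mechanisms, the adaptive computational puzzle, can be sketched as a hashcash-style proof of work. The function names and hash construction below are illustrative assumptions, not the paper's actual puzzle design:

```python
# Client-puzzle sketch: the server issues a cheap random challenge and
# verifies a solution with a single hash, while the client must search
# for a counter whose hash meets the difficulty target.

import hashlib
import itertools
import os

def make_puzzle():
    """Server side: issue a random nonce (cheap, O(1))."""
    return os.urandom(8).hex()

def solve_puzzle(nonce, difficulty):
    """Client side: find a counter whose hash has `difficulty` leading
    zero hex digits -- expected cost grows as 16**difficulty."""
    target = "0" * difficulty
    for counter in itertools.count():
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
        if digest.startswith(target):
            return counter

def verify(nonce, counter, difficulty):
    """Server side: one hash check, independent of difficulty."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: verification costs one hash regardless of `difficulty`, while the client's expected work scales exponentially, and a defender can raise `difficulty` adaptively for suspicious sources.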
https://arxiv.org/abs/2512.23849
Academic Papers
svg
d097a11d6110aac4dc6989076b44db14015997957a6ea5ba25048a4ee83fe618
2026-01-01T00:00:00-05:00
The Drill-Down and Fabricate Test (DDFT): A Protocol for Measuring Epistemic Robustness in Language Models
arXiv:2512.23850v1 Announce Type: new Abstract: Current language model evaluations measure what models know under ideal conditions but not how robustly they know it under realistic stress. Static benchmarks like MMLU and TruthfulQA cannot distinguish a model that lacks knowledge from one whose verification mechanisms collapse when information degrades or adversaries probe for weaknesses. We introduce the Drill-Down and Fabricate Test (DDFT), a protocol that measures epistemic robustness: a model's ability to maintain factual accuracy under progressive semantic compression and adversarial fabrication. We propose a two-system cognitive model comprising a Semantic System that generates fluent text and an Epistemic Verifier that validates factual accuracy. Our findings, based on evaluating 9 frontier models across 8 knowledge domains at 5 compression levels (1,800 turn-level evaluations), reveal that epistemic robustness is orthogonal to conventional design paradigms. Neither parameter count (r=0.083, p=0.832) nor architectural type (r=0.153, p=0.695) significantly predicts robustness, suggesting it emerges from training methodology and verification mechanisms distinct from current approaches. Error detection capability strongly predicts overall robustness (rho=-0.817, p=0.007), indicating this is the critical bottleneck. We find that flagship models exhibit brittleness despite their scale, while smaller models can achieve robust performance, challenging assumptions about the relationship between model size and reliability. The DDFT framework provides both theoretical foundation and practical tools for assessing epistemic robustness before deployment in critical applications.
https://arxiv.org/abs/2512.23850
Academic Papers
svg
0227b46d9951fce368d9ee5844e2ce81c6a8122bddf1e5dcb10217922576646f
2026-01-01T00:00:00-05:00
Pretraining Frame Preservation in Autoregressive Video Memory Compression
arXiv:2512.23851v1 Announce Type: new Abstract: We present PFP, a neural network structure to compress long videos into short contexts, with an explicit pretraining objective to preserve the high-frequency details of single frames at arbitrary temporal positions. The baseline model can compress a 20-second video into a context at about 5k length, where random frames can be retrieved with perceptually preserved appearances. Such pretrained models can be directly fine-tuned as memory encoders for autoregressive video models, enabling long history memory with low context cost and relatively low fidelity loss. We evaluate the framework with ablative settings and discuss the trade-offs of possible neural architecture designs.
https://arxiv.org/abs/2512.23851
Academic Papers
svg
2c46d3a77fefed32f633b0b6cd2f9cb367dfda5b22c877780750d6bb348dc74a
2026-01-01T00:00:00-05:00
Trellis: Learning to Compress Key-Value Memory in Attention Models
arXiv:2512.23852v1 Announce Type: new Abstract: Transformers, while powerful, suffer from quadratic computational complexity and the ever-growing Key-Value (KV) cache of the attention mechanism. This paper introduces Trellis, a novel Transformer architecture with bounded memory that learns how to compress its key-value memory dynamically at test time. Trellis replaces the standard KV cache with a fixed-size memory and trains a two-pass recurrent compression mechanism to store new keys and values into memory. To achieve this, it leverages an online gradient descent procedure with a forget gate, enabling the compressed memory to be updated recursively while learning to retain important contextual information from incoming tokens at test time. Extensive experiments on language modeling, common-sense reasoning, recall-intensive tasks, and time series show that the proposed architecture outperforms strong baselines. Notably, its performance gains increase as the sequence length grows, highlighting its potential for long-context applications.
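The flavor of a fixed-size, forget-gated KV memory can be conveyed with a linear-attention-style toy. The scalar forget gate and outer-product write below are simplifying assumptions; Trellis's two-pass mechanism and online gradient updates are not reproduced:

```python
# Toy fixed-size key-value memory with a forget gate: writes are decayed
# outer products, reads are query projections. Memory size is constant
# regardless of how many tokens have been written.

import numpy as np

class CompressedKVMemory:
    def __init__(self, d_key, d_val, forget=0.95):
        self.M = np.zeros((d_key, d_val))  # fixed-size memory matrix
        self.forget = forget               # scalar forget gate (learned in the real model)

    def write(self, k, v):
        """Decay old content, then add the outer product of the new key/value."""
        self.M = self.forget * self.M + np.outer(k, v)

    def read(self, q):
        """Retrieve by projecting the query onto the memory."""
        return q @ self.M
```

Unlike a KV cache, which grows linearly with sequence length, this memory stays at `d_key * d_val` floats; the forget gate trades old content for new, which is the retention problem the paper's learned update targets.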
https://arxiv.org/abs/2512.23852
Academic Papers
svg
c748975706eb71e9c4b7fde8d43b80bc9587b435ef55a06db317c46782d710d1
2026-01-01T00:00:00-05:00
Flow Matching Neural Processes
arXiv:2512.23853v1 Announce Type: new Abstract: Neural processes (NPs) are a class of models that learn stochastic processes directly from data and can be used for inference, sampling and conditional sampling. We introduce a new NP model based on flow matching, a generative modeling paradigm that has demonstrated strong performance on various data modalities. Following the NP training framework, the model provides amortized predictions of conditional distributions over any arbitrary points in the data. Compared to previous NP models, our model is simple to implement and can be used to sample from conditional distributions using an ODE solver, without requiring auxiliary conditioning methods. In addition, the model provides a controllable tradeoff between accuracy and running time via the number of steps in the ODE solver. We show that our model outperforms previous state-of-the-art neural process methods on various benchmarks including synthetic 1D Gaussian processes data, 2D images, and real-world weather data.
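To make the ODE-solver sampling concrete, here is a one-dimensional toy using the known conditional optimal-transport field u_t(x | x1) = (x1 - x)/(1 - t) from the flow matching literature; a trained neural process would instead supply a learned, context-conditioned field, and the step count gives the accuracy/runtime trade-off the abstract mentions:

```python
# Toy flow-matching sampler: integrate dx/dt = field(x, t) from t=0 to
# t=1 with forward Euler. With the conditional OT field below, the flow
# transports any start point to the target x1 at t=1.

def euler_sample(field, x0, steps=100):
    """Forward-Euler integration of the probability-flow ODE."""
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        x = x + dt * field(x, k * dt)   # t = k*dt stays strictly below 1
    return x

def conditional_ot_field(x1):
    """Conditional vector field whose flow reaches x1 at t = 1."""
    return lambda x, t: (x1 - x) / (1.0 - t)
```

Fewer steps mean faster but coarser sampling, which is the controllable trade-off the model exposes.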
https://arxiv.org/abs/2512.23853
Academic Papers
svg
e6ee28cb5caf0a182b95e9ee57f40d2e6d2d0c32c8896dc748c25ef032c6d147
2026-01-01T00:00:00-05:00
Simultaneous Extrinsic Contact and In-Hand Pose Estimation via Distributed Tactile Sensing
arXiv:2512.23856v1 Announce Type: new Abstract: Prehensile autonomous manipulation, such as peg insertion, tool use, or assembly, requires precise in-hand understanding of the object pose and the extrinsic contacts made during interactions. Providing accurate estimation of pose and contacts is challenging. Tactile sensors can provide local geometry at the sensor and force information about the grasp, but the locality of sensing means resolving poses and contacts from tactile alone is often an ill-posed problem, as multiple configurations can be consistent with the observations. Adding visual feedback can help resolve ambiguities, but can suffer from noise and occlusions. In this work, we propose a method that pairs local observations from sensing with the physical constraints of contact. We propose a set of factors that ensure local consistency with tactile observations as well as enforcing physical plausibility, namely, that the estimated pose and contacts must respect the kinematic and force constraints of quasi-static rigid body interactions. We formalize our problem as a factor graph, allowing for efficient estimation. In our experiments, we demonstrate that our method outperforms existing geometric and contact-informed estimation pipelines, especially when only tactile information is available. Video results can be found at https://tacgraph.github.io/.
https://arxiv.org/abs/2512.23856
Academic Papers
svg
1b9d293b42a39e5185c7f646c8097fcdeb54084fd5755b4c5a9790a3671684e6
2026-01-01T00:00:00-05:00
Yggdrasil: Bridging Dynamic Speculation and Static Runtime for Latency-Optimal Tree-Based LLM Decoding
arXiv:2512.23858v1 Announce Type: new Abstract: Speculative decoding improves LLM inference by generating and verifying multiple tokens in parallel, but existing systems suffer from suboptimal performance due to a mismatch between dynamic speculation and static runtime assumptions. We present Yggdrasil, a co-designed system that enables latency-optimal speculative decoding through context-aware tree drafting and compiler-friendly execution. Yggdrasil introduces an equal-growth tree structure for static graph compatibility, a latency-aware optimization objective for draft selection, and stage-based scheduling to reduce overhead. Yggdrasil supports unmodified LLMs and achieves up to $3.98\times$ speedup over state-of-the-art baselines across multiple hardware setups.
https://arxiv.org/abs/2512.23858
Academic Papers
svg
a2304f5add2e177b5c20246ea92167656e42b81df0527162dd628f9b027f961e
2026-01-01T00:00:00-05:00
Seeking Late Night Life Lines: Experiences of Conversational AI Use in Mental Health Crisis
arXiv:2512.23859v1 Announce Type: new Abstract: Online, people often recount their experiences turning to conversational AI agents (e.g., ChatGPT, Claude, Copilot) for mental health support -- going so far as to replace their therapists. These anecdotes suggest that AI agents have great potential to offer accessible mental health support. However, it's unclear how to meet this potential in extreme mental health crisis use cases. In this work, we explore the first-person experience of turning to a conversational AI agent in a mental health crisis. From a testimonial survey (n = 53) of lived experiences, we find that people use AI agents to fill the in-between spaces of human support; they turn to AI due to lack of access to mental health professionals or fears of burdening others. At the same time, our interviews with mental health experts (n = 16) suggest that human-human connection is an essential positive action when managing a mental health crisis. Using the stages of change model, our results suggest that a responsible AI crisis intervention is one that increases the user's preparedness to take a positive action while de-escalating any intended negative action. We discuss the implications of designing conversational AI agents as bridges towards human-human connection rather than ends in themselves.
https://arxiv.org/abs/2512.23859
Academic Papers
svg
509462654add02bee93167745ffe02d8cdd238c5463d252ab0ff270ae97c499b
2026-01-01T00:00:00-05:00
Lifelong Domain Adaptive 3D Human Pose Estimation
arXiv:2512.23860v1 Announce Type: new Abstract: 3D Human Pose Estimation (3D HPE) is vital in various applications, from person re-identification and action recognition to virtual reality. However, the reliance on annotated 3D data collected in controlled environments poses challenges for generalization to diverse in-the-wild scenarios. Existing domain adaptation (DA) paradigms like general DA and source-free DA for 3D HPE overlook the issues of non-stationary target pose datasets. To address these challenges, we propose a novel task named lifelong domain adaptive 3D HPE. To our knowledge, we are the first to introduce the lifelong domain adaptation to the 3D HPE task. In this lifelong DA setting, the pose estimator is pretrained on the source domain and subsequently adapted to distinct target domains. Moreover, during adaptation to the current target domain, the pose estimator cannot access the source and all the previous target domains. The lifelong DA for 3D HPE involves overcoming challenges in adapting to current domain poses and preserving knowledge from previous domains, particularly combating catastrophic forgetting. We present an innovative Generative Adversarial Network (GAN) framework, which incorporates 3D pose generators, a 2D pose discriminator, and a 3D pose estimator. This framework effectively mitigates domain shifts and aligns original and augmented poses. Moreover, we construct a novel 3D pose generator paradigm, integrating pose-aware, temporal-aware, and domain-aware knowledge to enhance the current domain's adaptation and alleviate catastrophic forgetting on previous domains. Our method demonstrates superior performance through extensive experiments on diverse domain adaptive 3D HPE datasets.
https://arxiv.org/abs/2512.23860
Academic Papers
svg
888b77ca2e503ed0aa8d0e0460a2504a832cd61e73d29f7986a8ef0f945323b4
2026-01-01T00:00:00-05:00
Probing the Limits of Compressive Memory: A Study of Infini-Attention in Small-Scale Pretraining
arXiv:2512.23862v1 Announce Type: new Abstract: This study investigates small-scale pretraining for Small Language Models (SLMs) to enable efficient use of limited data and compute, improve accessibility in low-resource settings and reduce costs. To enhance long-context extrapolation in compact models, we focus on Infini-attention, which builds a compressed memory from past segments while preserving local attention. In our work, we conduct an empirical study using 300M-parameter LLaMA models pretrained with Infini-attention. The model demonstrates training stability and outperforms the baseline in long-context retrieval. We identify the balance factor as a key part of the model performance, and we found that retrieval accuracy drops with repeated memory compressions over long sequences. Even so, Infini-attention still effectively compensates for the SLM's limited parameters. Particularly, despite performance degradation at a 16,384-token context, the Infini-attention model achieves up to 31% higher accuracy than the baseline. Our findings suggest that achieving robust long-context capability in SLMs benefits from architectural memory like Infini-attention.
https://arxiv.org/abs/2512.23862
Academic Papers
svg
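The compressive-memory mechanism the abstract above builds on can be illustrated in a few lines: segments are folded into a running associative matrix (M += σ(K)ᵀV), and later queries read from it via linear attention (A = σ(Q)M / σ(Q)z). This is a minimal sketch in plain Python; the tiny dimensions, the ELU+1 nonlinearity, and the single-segment example are illustrative assumptions, not the paper's configuration.

```python
import math

def elu1(x):
    """sigma(x) = ELU(x) + 1, a common positive feature map for linear attention."""
    return x + 1.0 if x >= 0 else math.exp(x) + 1.0

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def update_memory(M, z, K, V):
    """Compress a segment into the running memory: M += sigma(K)^T V, z += sum sigma(k)."""
    sK = [[elu1(x) for x in row] for row in K]
    delta = matmul(list(map(list, zip(*sK))), V)
    M = [[m + d for m, d in zip(mr, dr)] for mr, dr in zip(M, delta)]
    z = [zi + sum(col) for zi, col in zip(z, zip(*sK))]
    return M, z

def retrieve(M, z, Q):
    """Read from memory: A = sigma(Q) M / (sigma(Q) z), attending over all past segments."""
    sQ = [[elu1(x) for x in row] for row in Q]
    num = matmul(sQ, M)
    den = [sum(q * zi for q, zi in zip(row, z)) for row in sQ]
    return [[n / d for n in nr] for nr, d in zip(num, den)]

# Store one segment (key [1, 0] bound to value [5, 3]) and read it back.
M = [[0.0, 0.0], [0.0, 0.0]]   # d_key x d_value memory matrix
z = [0.0, 0.0]                  # normalization vector
M, z = update_memory(M, z, [[1.0, 0.0]], [[5.0, 3.0]])
out = retrieve(M, z, [[1.0, 0.0]])   # a single stored pair is recovered exactly
```

With only one stored association, retrieval returns the value exactly; the accuracy drop the paper reports comes from interference once many segments share the same fixed-size M.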
ad98f4f591d4ab63efc83dc76efcb273bfbc94899b8a79992c29fcaae28ccf06
2026-01-01T00:00:00-05:00
Learning to Feel the Future: DreamTacVLA for Contact-Rich Manipulation
arXiv:2512.23864v1 Announce Type: new Abstract: Vision-Language-Action (VLA) models have shown remarkable generalization by mapping web-scale knowledge to robotic control, yet they remain blind to physical contact. Consequently, they struggle with contact-rich manipulation tasks that require reasoning about force, texture, and slip. While some approaches incorporate low-dimensional tactile signals, they fail to capture the high-resolution dynamics essential for such interactions. To address this limitation, we introduce DreamTacVLA, a framework that grounds VLA models in contact physics by learning to feel the future. Our model adopts a hierarchical perception scheme in which high-resolution tactile images serve as micro-vision inputs coupled with wrist-camera local vision and third-person macro vision. To reconcile these multi-scale sensory streams, we first train a unified policy with a Hierarchical Spatial Alignment (HSA) loss that aligns tactile tokens with their spatial counterparts in the wrist and third-person views. To further deepen the model's understanding of fine-grained contact dynamics, we finetune the system with a tactile world model that predicts future tactile signals. To mitigate tactile data scarcity and the wear-prone nature of tactile sensors, we construct a hybrid large-scale dataset sourced from both a high-fidelity digital twin and real-world experiments. By anticipating upcoming tactile states, DreamTacVLA acquires a rich model of contact physics and conditions its actions on both real observations and imagined consequences. Across contact-rich manipulation tasks, it outperforms state-of-the-art VLA baselines, achieving up to 95% success, highlighting the importance of understanding physical contact for robust, touch-aware robotic agents.
https://arxiv.org/abs/2512.23864
Academic Papers
svg
b46e24d2d4b83d2eac6398a4dc93f044ac0c7200d5dee92ad6e417d81352b2c2
2026-01-01T00:00:00-05:00
Improving Reliability of Human Trafficking Alerts in Airports
arXiv:2512.23865v1 Announce Type: new Abstract: This paper investigates individual emergency alerts in airports by applying two existing benchmark delay tolerant network protocols and evaluating their delivery ratio and latency. First, the paper provides a background on Mobile Ad Hoc Networks (MANETs) and Delay Tolerant Networks (DTNs), as well as Vehicular Ad Hoc Networks (VANETs) as a subset of MANETs. Next, the scenario is simulated using the Opportunistic Network Environment (ONE) simulator, running the Spray and Wait and Epidemic DTN protocols. The study discusses the results, highlighting the advantages and limitations of each protocol within the scenario and addressing constraints of the simulation and experimental setup. A wider discussion then considers related research on technologies that combat human trafficking and the potential role of DTNs in addressing this global problem.
https://arxiv.org/abs/2512.23865
Academic Papers
svg
27ab863fc1540ab08729e8c86a3836a8a665e93ef656e5e5c4ba68c076bf1e95
2026-01-01T00:00:00-05:00
Max-Entropy Reinforcement Learning with Flow Matching and A Case Study on LQR
arXiv:2512.23870v1 Announce Type: new Abstract: Soft actor-critic (SAC) is a popular algorithm for max-entropy reinforcement learning. In practice, the energy-based policies in SAC are often approximated using simple policy classes for efficiency, sacrificing expressiveness and robustness. In this paper, we propose a variant of the SAC algorithm that parameterizes the policy with flow-based models, leveraging their rich expressiveness. In the algorithm, we evaluate the flow-based policy using the instantaneous change-of-variables technique and update the policy with an online variant of flow matching developed in this paper. This online variant, termed importance sampling flow matching (ISFM), enables policy updates using only samples from a user-specified sampling distribution rather than the unknown target distribution. We develop a theoretical analysis of ISFM, characterizing how different choices of sampling distribution affect learning efficiency. Finally, we conduct a case study of our algorithm on max-entropy linear quadratic regulator problems, demonstrating that the proposed algorithm learns the optimal action distribution.
https://arxiv.org/abs/2512.23870
Academic Papers
svg
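The core idea of importance sampling flow matching described above — fitting a flow's velocity field from samples drawn from a convenient sampling distribution, reweighted toward an unnormalized target — can be sketched on a 1-D toy problem. Everything here (the Gaussian target standing in for the energy-based policy exp(Q/α), the linear velocity model, the step sizes) is an illustrative assumption, not the paper's algorithm.

```python
import math
import random

random.seed(0)

# Unnormalized target density p(a) ~ exp(-(a-2)^2 / 2): stands in for the
# energy-based policy exp(Q(s,a)/alpha) that SAC cannot sample directly.
def p_unnorm(a): return math.exp(-0.5 * (a - 2.0) ** 2)

# User-specified sampling distribution q = N(0, 2): what we CAN draw from.
def q_pdf(a): return math.exp(-0.5 * (a / 2.0) ** 2) / (2.0 * math.sqrt(2.0 * math.pi))

# Toy velocity model v(x, t) = va*x + vb*t + vc (stand-in for a neural network).
va = vb = vc = 0.0
lr = 0.002

for _ in range(20000):
    x1 = random.gauss(0.0, 2.0)        # sample an "action" from q, not the target
    w = p_unnorm(x1) / q_pdf(x1)       # importance weight corrects the mismatch
    x0 = random.gauss(0.0, 1.0)        # base noise
    t = random.random()
    xt = (1.0 - t) * x0 + t * x1       # linear interpolation path
    v_tgt = x1 - x0                    # conditional flow-matching target velocity
    err = va * xt + vb * t + vc - v_tgt
    g = 2.0 * w * err                  # gradient of the importance-weighted squared error
    va -= lr * g * xt
    vb -= lr * g * t
    vc -= lr * g

# Sample by integrating dx/dt = v(x, t) from base noise; the learned flow
# should transport N(0, 1) toward the target centered at 2.
def flow_sample():
    x = random.gauss(0.0, 1.0)
    for i in range(100):
        x += 0.01 * (va * x + vb * (i / 100.0) + vc)
    return x

mean = sum(flow_sample() for _ in range(2000)) / 2000.0
```

The weights w = p/q make samples from q contribute to the regression as if they had been drawn from the target, which is exactly what lets the policy update proceed without sampling the unknown energy-based distribution.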
c68e3b3cdae71027aee7fcb58732f1b263c1843a807835b8935305cc759effed
2026-01-01T00:00:00-05:00
Hierarchical Quasi-cyclic Codes from Reed-Solomon and Polynomial Evaluation Codes
arXiv:2512.23872v1 Announce Type: new Abstract: We introduce the first example of algebraically constructed hierarchical quasi-cyclic codes. These codes are built from Reed-Solomon codes using a 1964 construction of superimposed codes by Kautz and Singleton. We show that both the number of levels in the hierarchy and the index of these Reed-Solomon-derived codes are determined by the field size. We show that this property also holds for certain additional classes of polynomial evaluation codes. We provide explicit code parameters and properties, as well as additional bounds on parameters such as rank and distance. In particular, starting with Reed-Solomon codes of dimension $k=2$ yields hierarchical quasi-cyclic codes with Tanner graphs of girth 6. We present a table of small code parameters and note that some of these codes meet the best known minimum distance for binary codes, with the additional hierarchical quasi-cyclic structure. We draw connections to similar constructions in the literature; importantly, while existing literature on related codes is largely simulation-based, we present a novel algebraic approach to determining new bounds on the parameters of these codes.
https://arxiv.org/abs/2512.23872
Academic Papers
svg
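The Kautz-Singleton construction the abstract above refers to is simple to state: take a Reed-Solomon codeword over GF(q) and replace each symbol s with the q-bit unit vector e_s. A minimal sketch for the paper's k = 2 case, using a hypothetical small field GF(5):

```python
q, k = 5, 2   # GF(5), dimension-2 Reed-Solomon, matching the paper's girth-6 case

def rs_codeword(c0, c1):
    """Evaluate the degree-<2 message polynomial c0 + c1*x at every field element."""
    return [(c0 + c1 * x) % q for x in range(q)]

def kautz_singleton(cw):
    """Replace each q-ary symbol s with the q-bit unit vector e_s (binary image)."""
    return [1 if i == s else 0 for s in cw for i in range(q)]

# Two distinct RS codewords agree in at most k-1 = 1 coordinates, so their
# binary images share at most one 1-position: the superimposed-code property.
u = kautz_singleton(rs_codeword(1, 2))   # RS codeword [1, 3, 0, 2, 4]
v = kautz_singleton(rs_codeword(0, 1))   # RS codeword [0, 1, 2, 3, 4]
overlap = sum(a * b for a, b in zip(u, v))
```

Each binary codeword has length q² = 25 and constant weight q = 5; the paper's contribution is the algebraic analysis of the hierarchical quasi-cyclic structure these images inherit, which this sketch does not attempt.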
d60ff6d1e0094c9868e5ff14351773f2b44e6288eca10f4ac8b6e9c775e306a7
2026-01-01T00:00:00-05:00
From Illusion to Insight: Change-Aware File-Level Software Defect Prediction Using Agentic AI
arXiv:2512.23875v1 Announce Type: new Abstract: Much of the reported progress in file-level software defect prediction (SDP) is, in reality, nothing but an illusion of accuracy. Over the last decades, machine learning and deep learning models have reported increasing performance across software versions. However, since most files persist across releases and retain their defect labels, standard evaluation rewards label-persistence bias rather than reasoning about code changes. To address this issue, we reformulate SDP as a change-aware prediction task, in which models reason over code changes of a file within successive project versions, rather than relying on static file snapshots. Building on this formulation, we propose an LLM-driven, change-aware, multi-agent debate framework. Our experiments on multiple PROMISE projects show that traditional models achieve inflated F1, while failing on rare but critical defect-transition cases. In contrast, our change-aware reasoning and multi-agent debate framework yields more balanced performance across evolution subsets and significantly improves sensitivity to defect introductions. These results highlight fundamental flaws in current SDP evaluation practices and emphasize the need for change-aware reasoning in practical defect prediction. The source code is publicly available.
https://arxiv.org/abs/2512.23875
Academic Papers
svg
b733be13efd94260ad71d45772eed47a3bd19c6a5c1058a174d869a11067013f
2026-01-01T00:00:00-05:00
CASCADE: Cumulative Agentic Skill Creation through Autonomous Development and Evolution
arXiv:2512.23880v1 Announce Type: new Abstract: Large language model (LLM) agents currently depend on predefined tools or brittle tool generation, constraining their capability and adaptability on complex scientific tasks. We introduce CASCADE, a self-evolving agentic framework representing an early instantiation of the transition from "LLM + tool use" to "LLM + skill acquisition". CASCADE enables agents to master complex external tools and codify knowledge through meta-skills such as continuous learning via web search and code extraction, and self-reflection via introspection and knowledge graph exploration. We evaluate CASCADE on SciSkillBench, a benchmark of 116 materials science and chemistry research tasks. CASCADE achieves a 93.3% success rate using GPT-5, compared to 35.4% without evolution mechanisms. We further demonstrate real-world applications in computational analysis, autonomous laboratory experiments, and selective reproduction of published papers. Along with human-agent collaboration and memory consolidation, CASCADE accumulates executable skills that can be shared across agents and scientists, moving toward scalable AI-assisted scientific research.
https://arxiv.org/abs/2512.23880
Academic Papers
svg
5cf3ba3d1eb56bb1f2466e351cc10c8d5f0f4db7e03e665681a8ea58433c4100
2026-01-01T00:00:00-05:00
Breaking Audio Large Language Models by Attacking Only the Encoder: A Universal Targeted Latent-Space Audio Attack
arXiv:2512.23881v1 Announce Type: new Abstract: Audio-language models combine audio encoders with large language models to enable multimodal reasoning, but they also introduce new security vulnerabilities. We propose a universal targeted latent space attack, an encoder-level adversarial attack that manipulates audio latent representations to induce attacker-specified outputs in downstream language generation. Unlike prior waveform-level or input-specific attacks, our approach learns a universal perturbation that generalizes across inputs and speakers and does not require access to the language model. Experiments on Qwen2-Audio-7B-Instruct demonstrate consistently high attack success rates with minimal perceptual distortion, revealing a critical and previously underexplored attack surface at the encoder level of multimodal systems.
https://arxiv.org/abs/2512.23881
Academic Papers
svg
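The attack loop described above — optimizing one universal perturbation that drags every input's latent representation toward an attacker-chosen embedding — can be sketched against a toy encoder. The random linear "encoder", dimensions, target vector, and step size are all stand-in assumptions; the paper attacks a real audio encoder, which this sketch only caricatures to show the optimization structure.

```python
import random

random.seed(2)
D_IN, D_LAT = 16, 4

# Toy stand-in "encoder": a fixed random linear map. The real system uses a
# deep audio encoder; only the white-box gradient loop is illustrated here.
W = [[random.gauss(0.0, 0.5) for _ in range(D_IN)] for _ in range(D_LAT)]

def encode(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

target = [1.0, -1.0, 0.5, 0.0]   # attacker-chosen latent the downstream LLM receives
inputs = [[random.gauss(0.0, 1.0) for _ in range(D_IN)] for _ in range(32)]

delta = [0.0] * D_IN             # ONE perturbation shared by all inputs (universal)
lr = 0.01
for _ in range(500):
    x = random.choice(inputs)
    z = encode([xi + di for xi, di in zip(x, delta)])
    err = [zi - ti for zi, ti in zip(z, target)]
    # gradient of ||z - target||^2 with respect to delta is 2 * W^T err
    for j in range(D_IN):
        delta[j] -= lr * 2.0 * sum(err[r] * W[r][j] for r in range(D_LAT))

def mean_dist(d):
    """Average squared latent distance to the target over all inputs."""
    total = 0.0
    for x in inputs:
        z = encode([xi + di for xi, di in zip(x, d)])
        total += sum((zi - ti) ** 2 for zi, ti in zip(z, target))
    return total / len(inputs)
```

Because the perturbation is optimized in expectation over many inputs, it generalizes across inputs by construction — the property the abstract highlights — and never touches the language model at all.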
adde824f987f228ab83aa1bce68a74b6862f59b0fb96756d80b70124e65fd65a
2026-01-01T00:00:00-05:00
Institutional cooperations in Austrian research: An analysis of shared researchers
arXiv:2512.23882v1 Announce Type: new Abstract: Multiple organisational affiliations are an increasingly common feature of research systems, yet their implications for organisational performance have received limited systematic attention. We developed a scalable, network-based analytical framework that represents simultaneous researcher affiliations as relational links between organisations and applied it to bibliometric data from Austria. Using harmonised publication and affiliation metadata, we constructed two complementary co-affiliation networks: a complete network capturing all simultaneous affiliations and a temporally filtered network retaining only organisational pairs that recurred over time. Network regression analyses showed that geographical proximity remained an important determinant of co-affiliation formation, with spatial distance consistently reducing shared appointments. Clear sectoral differences emerged beyond geography. Universities formed a dense and persistent core of co-affiliations, whereas ties involving medical institutions, government, non-profit and private-sector organisations were often short-lived and attenuated under temporal filtering. Among cross-sector links, co-affiliations between universities and research institutes were notably resilient, indicating a more structurally embedded form of organisational integration. We assessed the effect of concurrent affiliations on organisational citation impact across organisational types using field- and year-normalised indicators. Research institutes and universities consistently exhibited higher citation impact than organisations from other sectors, and persistent co-affiliations were associated with greater and more stable scientific visibility.
https://arxiv.org/abs/2512.23882
Academic Papers
svg
25faba5a3cb603063a7eb7c216760a1255413978e6357f066d8fb414cd5a627b
2026-01-01T00:00:00-05:00
How Large Language Models Systematically Misrepresent American Climate Opinions
arXiv:2512.23889v1 Announce Type: new Abstract: Federal agencies and researchers increasingly use large language models to analyze and simulate public opinion. When AI mediates between the public and policymakers, accuracy across intersecting identities becomes consequential; inaccurate group-level estimates can mislead outreach, consultation, and policy design. While prior research examines intersectionality in LLM outputs, no study has compared these outputs against real human responses across intersecting identities. This gap is particularly urgent for climate policy, where opinion is contested and diverse. We investigate how LLMs represent intersectional patterns in U.S. climate opinions. We prompted six LLMs with profiles of 978 respondents from a nationally representative U.S. climate opinion survey and compared AI-generated responses to actual human answers across 20 questions. We find that LLMs appear to compress the diversity of American climate opinions, predicting less-concerned groups as more concerned and vice versa. This compression is intersectional: LLMs apply uniform gender assumptions that match reality for White and Hispanic Americans but misrepresent Black Americans, where actual gender patterns differ. These patterns, which may be invisible to standard auditing approaches, could undermine equitable climate governance.
https://arxiv.org/abs/2512.23889
Academic Papers
svg
d8a24c25809c077442e61b9c6d0b5d623fe000b4a9e2fb958f2074fe116e1293
2026-01-01T00:00:00-05:00
MRI-to-CT Synthesis With Cranial Suture Segmentations Using A Variational Autoencoder Framework
arXiv:2512.23894v1 Announce Type: new Abstract: Quantifying normative pediatric cranial development and suture ossification is crucial for diagnosing and treating growth-related cephalic disorders. Computed tomography (CT) is widely used to evaluate cranial and sutural deformities; however, its ionizing radiation is contraindicated in children without significant abnormalities. Magnetic resonance imaging (MRI) offers radiation-free scans with superior soft-tissue contrast, but unlike CT, MRI cannot elucidate cranial sutures, estimate skull bone density, or assess cranial vault growth. This study proposes a deep-learning-driven pipeline for transforming T1-weighted MRIs of children aged 0.2 to 2 years into synthetic CTs (sCTs), predicting detailed cranial bone segmentation, generating suture probability heatmaps, and deriving direct suture segmentation from the heatmaps. With our in-house pediatric data, sCTs achieved 99% structural similarity and a Fréchet inception distance of 1.01 relative to real CTs. Skull segmentation attained an average Dice coefficient of 85% across seven cranial bones, and sutures achieved 80% Dice. Equivalence of skull and suture segmentation between sCTs and real CTs was confirmed using two one-sided tests (TOST, p < 0.05). To our knowledge, this is the first pediatric cranial CT synthesis framework to enable suture segmentation on sCTs derived from MRI, despite MRI's limited depiction of bone and sutures. By combining robust, domain-specific variational autoencoders, our method generates perceptually indistinguishable cranial sCTs from routine pediatric MRIs, bridging critical gaps in non-invasive cranial evaluation.
https://arxiv.org/abs/2512.23894
Academic Papers
svg
2476d3081f003ed597bccd29b2137d2125a01ef75b9d5ccdb1f82eeb5b6cf604
2026-01-01T00:00:00-05:00
Wireless Multimodal Foundation Model (WMFM): Integrating Vision and Communication Modalities for 6G ISAC Systems
arXiv:2512.23897v1 Announce Type: new Abstract: The emergence of multimodal foundation models has revolutionized learning paradigms by enabling joint understanding across diverse data types. In the context of next-generation wireless networks, integrating sensing and communication modalities presents a unique opportunity to develop generalizable and data-efficient models. In this work, we introduce the contrastive-learning-based Wireless Multimodal Foundation Model (WMFM), a large-scale framework that jointly learns from wireless channel coefficients and visual imagery. The WMFM is pretrained using contrastive learning, a self-supervised learning technique that aligns embeddings of camera and channel data without requiring explicit labels. The pretrained encoders are then frozen and employed as feature extractors, with lightweight task-specific heads fine-tuned for downstream tasks, including user localization and LoS/nLoS classification. Extensive experiments on the DeepVerse6G dataset demonstrate that the proposed WMFM achieves a 17% improvement in balanced accuracy for LoS/nLoS classification and a 48.5% reduction in localization error compared to the end-to-end (E2E) benchmark, while reducing training time by up to 90-fold. Even when trained with as little as 20% of the data, the WMFM-based heads outperform the fully supervised E2E model, underscoring their robustness and data-efficient learning. The proposed approach establishes a foundation for scalable, multimodal learning in Integrated Sensing and Communication (ISAC) systems, paving the way for intelligent and adaptive 6G networks.
https://arxiv.org/abs/2512.23897
Academic Papers
svg
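The contrastive pretraining step described above can be illustrated with a symmetric InfoNCE loss over paired (channel, image) embeddings: matched pairs are pulled together, all other pairings in the batch are pushed apart, with no labels required. The toy embeddings, dimensions, and temperature below are illustrative assumptions, not the WMFM's actual encoders.

```python
import math
import random

def cosine(u, v):
    du = math.sqrt(sum(x * x for x in u))
    dv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (du * dv)

def info_nce(channel_emb, image_emb, temperature=0.1):
    """Symmetric contrastive loss over a batch of paired embeddings."""
    n = len(channel_emb)
    sims = [[cosine(c, v) / temperature for v in image_emb] for c in channel_emb]
    loss = 0.0
    for i in range(n):
        row = sims[i]                          # channel i scored against every image
        col = [sims[j][i] for j in range(n)]   # image i scored against every channel
        loss -= math.log(math.exp(row[i]) / sum(math.exp(s) for s in row))
        loss -= math.log(math.exp(col[i]) / sum(math.exp(s) for s in col))
    return loss / (2 * n)

random.seed(1)
# Hypothetical embeddings: each "channel" vector is a noisy copy of its paired
# "image" vector, mimicking two modalities that share structure without labels.
images = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(4)]
channels = [[x + random.gauss(0.0, 0.1) for x in v] for v in images]
aligned = info_nce(channels, images)
shuffled = info_nce(channels, images[::-1])   # mispaired batch scores much worse
```

Minimizing this loss is what yields the frozen, label-free encoders whose features the lightweight task heads then consume.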
a17e2938535ae78e16ba67b9fdd378f8fa85e8a533d8972d7bec986afbf3fffa
2026-01-01T00:00:00-05:00
Efficient Deep Learning for Short-Term Solar Irradiance Time Series Forecasting: A Benchmark Study in Ho Chi Minh City
arXiv:2512.23898v1 Announce Type: new Abstract: Reliable forecasting of Global Horizontal Irradiance (GHI) is essential for mitigating the variability of solar energy in power grids. This study presents a comprehensive benchmark of ten deep learning architectures for short-term (1-hour ahead) GHI time series forecasting in Ho Chi Minh City, leveraging high-resolution NSRDB satellite data (2011-2020) to compare established baselines (e.g. LSTM, TCN) against emerging state-of-the-art architectures, including Transformer, Informer, iTransformer, TSMixer, and Mamba. Experimental results identify the Transformer as the superior architecture, achieving the highest predictive accuracy with an R^2 of 0.9696. The study further utilizes SHAP analysis to contrast the temporal reasoning of these architectures, revealing that Transformers exhibit a strong "recency bias" focused on immediate atmospheric conditions, whereas Mamba explicitly leverages 24-hour periodic dependencies to inform predictions. Furthermore, we demonstrate that Knowledge Distillation can compress the high-performance Transformer by 23.5% while surprisingly reducing error (MAE: 23.78 W/m^2), offering a proven pathway for deploying sophisticated, low-latency forecasting on resource-constrained edge devices.
https://arxiv.org/abs/2512.23898
Academic Papers
svg
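The knowledge-distillation step mentioned above can be sketched as a regression loss that blends the ground-truth error with a teacher-matching error. The α weighting, the example GHI values, and the plain-MSE formulation are assumptions for illustration; the paper's exact distillation recipe may differ.

```python
def distillation_loss(student_pred, target, teacher_pred, alpha=0.5):
    """Blend ground-truth MSE with teacher-matching MSE for a regression student.
    alpha -> 1 trusts the labels; alpha -> 0 trusts the (larger) teacher model."""
    n = len(target)
    mse_true = sum((s - t) ** 2 for s, t in zip(student_pred, target)) / n
    mse_teacher = sum((s - t) ** 2 for s, t in zip(student_pred, teacher_pred)) / n
    return alpha * mse_true + (1 - alpha) * mse_teacher

# Hypothetical 1-hour-ahead GHI predictions in W/m^2.
student = [480.0, 520.0, 610.0]
target  = [500.0, 500.0, 600.0]
teacher = [490.0, 515.0, 605.0]
loss = distillation_loss(student, target, teacher, alpha=0.7)
# mse_true = 300.0, mse_teacher = 50.0, so loss = 0.7*300 + 0.3*50 = 225.0
```

The teacher term acts as a smoothed training signal, which is one plausible reason a 23.5%-smaller student can end up with lower MAE than expected.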
8a9290612f1d63bb37b0e4851c429a2c065208cd07be6a5c4233c3ffe8fa7931
2026-01-01T00:00:00-05:00
Distributed Beamforming in Massive MIMO Communication for a Constellation of Airborne Platform Stations
arXiv:2512.23900v1 Announce Type: new Abstract: Non-terrestrial base stations (NTBSs), including high-altitude platform stations (HAPSs) and hot-air balloons (HABs), are integral to next-generation wireless networks, offering coverage in remote areas and enhancing capacity in dense regions. In this paper, we propose a distributed beamforming framework for a massive MIMO network with a constellation of aerial platform stations (APSs). Our approach leverages an entropy-based multi-agent deep reinforcement learning (DRL) model, where each APS operates as an independent agent using imperfect channel state information (CSI) in both training and testing phases. Unlike conventional methods, our model does not require CSI sharing among APSs, significantly reducing overhead. Simulation results demonstrate that our method outperforms zero forcing (ZF) and maximum ratio transmission (MRT) techniques, particularly in high-interference scenarios, while remaining robust to CSI imperfections. Additionally, our framework exhibits scalability, maintaining stable performance over an increasing number of users and various cluster configurations. Therefore, the proposed method holds promise for dynamic and interference-rich NTBS networks, advancing scalable and robust wireless solutions.
https://arxiv.org/abs/2512.23900
Academic Papers
svg
0033dfdb88770f0db13c6fa80b7cfe944c68b589280731cd9a571d40a9540088
2026-01-01T00:00:00-05:00
Road Rules for Radio: Why Your Wi-Fi Got Better
arXiv:2512.23901v1 Announce Type: new Abstract: WiFi allows devices and people around the globe to connect. It has proven to be a monumental and revolutionary tool that keeps the world connected. However, recent WiFi advancements are numerous and at times confusing. WiFi has grown significantly over the years, yet few understand the scope and scale of WiFi's progression as a whole. This paper tackles that problem, providing a broad literature review of the advancements in key WiFi features to date. The paper centers on seven areas of focus: (1) bandwidth, (2) battery life, (3) traffic collisions, (4) interference, (5) data-intensive transmissions, (6) numerous devices, and (7) peak throughput/modulation. Each section focuses on WiFi's problems, how those problems were fixed, and the limitations of existing solutions. Moreover, the paper explains the role of new, unreleased technologies in these seven areas. This includes exploring the upcoming WiFi 8 standard based on the IEEE 802.11bn "Ultra High Reliability" (UHR) specification and how it builds upon current specifications. Compared to previous specifications, WiFi 8 marks a stronger and more significant shift toward prioritizing reliability over pure data rates. Beyond a pure literature review, this paper uses a novel analogy: a road/highway analogy is integrated throughout to facilitate understanding of networking mechanisms. The paper is approachable and written such that someone with very little WiFi knowledge should come away with a strong understanding of WiFi. As is typical of literature review papers, technical claims are grounded in prior work.
https://arxiv.org/abs/2512.23901
Academic Papers
svg
6b8850ef0f7a865ee14d3237c8de3ee3d1a5dbc8da10e0793e9927d687f6691e
2026-01-01T00:00:00-05:00
Scaling Remote Sensing Foundation Models: Data Domain Tradeoffs at the Peta-Scale
arXiv:2512.23903v1 Announce Type: new Abstract: We explore the scaling behaviors of artificial intelligence to establish practical techniques for training foundation models on high-resolution electro-optical (EO) datasets that exceed the current state-of-the-art scale by orders of magnitude. Modern multimodal machine learning (ML) applications, such as generative artificial intelligence (GenAI) systems for image captioning, search, and reasoning, depend on robust, domain-specialized encoders for non-text modalities. In natural-image domains where internet-scale data is plentiful, well-established scaling laws help optimize the joint scaling of model capacity, training compute, and dataset size. Unfortunately, these relationships are much less well understood in high-value domains like remote sensing (RS). Using over a quadrillion pixels of commercial satellite EO data and the MITRE Federal AI Sandbox, we train progressively larger vision transformer (ViT) backbones, report success and failure modes observed at petascale, and analyze implications for bridging domain gaps across additional RS modalities. We observe that even at this scale, performance is consistent with a data-limited regime rather than a model-parameter-limited one. These practical insights are intended to inform data-collection strategies, compute budgets, and optimization schedules that advance the future development of frontier-scale RS foundation models.
https://arxiv.org/abs/2512.23903
Academic Papers
svg
d839b1cf1cc44aa87d9c9f1ea7508fede5bc322f7a0cad17aba7e4aeba222233
2026-01-01T00:00:00-05:00
Rethinking Dense Linear Transformations: Stagewise Pairwise Mixing (SPM) for Near-Linear Training in Neural Networks
arXiv:2512.23905v1 Announce Type: new Abstract: Dense linear layers are a dominant source of computational and parametric cost in modern machine learning models, despite their quadratic complexity and often being misaligned with the compositional structure of learned representations. We introduce Stagewise Pairwise Mixers (SPM), a structured linear operator that replaces dense matrices with a composition of sparse pairwise-mixing stages. An SPM layer implements a global linear transformation in $O(nL)$ time with $O(nL)$ parameters, where $L$ is typically constant or $\log_2 n$, and admits exact closed-form forward and backward computations. SPM is designed as a drop-in replacement for dense linear layers in feedforward networks, recurrent architectures, attention mechanisms, etc. We derive complete forward and backward expressions for two parameterizations: an orthogonal norm-preserving rotation-based variant and a fully general $2 \times 2$ mixing variant. Beyond computational savings, the stagewise structure of SPM induces an explicit compositional inductive bias that constrains model capacity and improves generalization when aligned with task structure. We present proof-of-concept experiments demonstrating substantial reductions in wall-clock cost and improved accuracy on structured learning problems, while retaining competitive performance on real-world benchmarks.
https://arxiv.org/abs/2512.23905
Academic Papers
svg
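The O(nL) stagewise structure described above can be sketched as a butterfly of 2×2 rotations: stage s mixes every pair of indices that differ in bit s, so L = log2(n) stages connect all coordinates. This sketch uses one angle per stage for brevity (the paper's variants carry learned 2×2 parameters per pair, which this simplification omits); the orthogonal rotation form preserves norms exactly.

```python
import math

def spm_apply(x, angles):
    """Apply L stages of pairwise 2x2 rotations in a butterfly pattern:
    O(n) work per stage, O(n*L) total instead of a dense O(n^2) matmul."""
    n = len(x)
    L = n.bit_length() - 1
    assert 1 << L == n, "n must be a power of two"
    assert len(angles) == L
    y = list(x)
    for s, theta in enumerate(angles):
        c, t = math.cos(theta), math.sin(theta)
        stride = 1 << s
        for i in range(n):
            if i & stride:
                continue            # visit each pair once, from its low index
            j = i + stride          # partner index differs only in bit s
            a, b = y[i], y[j]
            y[i], y[j] = c * a - t * b, t * a + c * b
    return y

vec = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
out = spm_apply(vec, [0.3, 1.1, -0.7])   # 3 stages fully mix all 8 coordinates
```

Because every stage is a rotation, the composed operator is orthogonal, which is the norm-preserving parameterization the abstract mentions; the general variant would replace each rotation with an arbitrary 2×2 block.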
7f2c551e3be35b146d6b9ebd1042b0bfc4b84098297fc1dcf7329b11875cd239
2026-01-01T00:00:00-05:00
Deletion Considered Harmful
arXiv:2512.23907v1 Announce Type: new Abstract: In a world of information overload, understanding how we can most effectively manage information is crucial to success. We set out to understand how people view deletion, the removal of material no longer needed: does it help by reducing clutter and improving the signal-to-noise ratio, or does the effort required to decide to delete something make it not worthwhile? How does deletion relate to other strategies like filing; do people who spend extensive time filing also prune their materials? We studied the behaviour of 51 knowledge workers through a series of questionnaires and interviews to evaluate a range of tactics they used aimed at organizing, filing, and retrieving digital resources. Our study reveals that deletion is consistently under-adopted compared to other tactics such as Filing, Coverage, Ontology, and Timeliness. Moreover, the empirical data indicate that deletion is actually detrimental to retrieval success and satisfaction. In this paper, we examine the practice of deletion, review the related literature, and present detailed statistical results and clustering outcomes that underscore its adverse effects.
https://arxiv.org/abs/2512.23907
Academic Papers
svg