| id | title | categories | abstract |
|---|---|---|---|
| 2502.12128 | LaM-SLidE: Latent Space Modeling of Spatial Dynamical Systems via Linked Entities | cs.LG cs.AI | Generative models are spearheading recent progress in deep learning, showing strong promise for trajectory sampling in dynamical systems as well. However, while latent space modeling paradigms have transformed image and video generation, similar approaches are more difficult for most dynamical systems. Such systems -... |
| 2502.12129 | When Wyner and Ziv Met Bayes in Quantum-Classical Realm | cs.IT math.IT quant-ph | In this work, we address the lossy quantum-classical source coding with the quantum side-information (QC-QSI) problem. The task is to compress the classical information about a quantum source, obtained after performing a measurement while incurring a bounded reconstruction error. Here, the decoder is allowed to use t... |
| 2502.12130 | Scaling Autonomous Agents via Automatic Reward Modeling And Planning | cs.AI | Large language models (LLMs) have demonstrated remarkable capabilities across a range of text-generation tasks. However, LLMs still struggle with problems requiring multi-step decision-making and environmental feedback, such as online shopping, scientific reasoning, and mathematical problem-solving. Unlike pure text ... |
| 2502.12131 | Transformer Dynamics: A neuroscientific approach to interpretability of large language models | cs.AI | As artificial intelligence models have exploded in scale and capability, understanding of their internal mechanisms remains a critical challenge. Inspired by the success of dynamical systems approaches in neuroscience, here we propose a novel framework for studying computations in deep learning systems. We focus on t... |
| 2502.12134 | SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs | cs.CL | Chain-of-Thought (CoT) reasoning enables Large Language Models (LLMs) to solve complex reasoning tasks by generating intermediate reasoning steps. However, most existing approaches focus on hard token decoding, which constrains reasoning within the discrete vocabulary space and may not always be optimal. While recent... |
| 2502.12135 | MagicArticulate: Make Your 3D Models Articulation-Ready | cs.CV cs.GR | With the explosive growth of 3D content creation, there is an increasing demand for automatically converting static 3D models into articulation-ready versions that support realistic animation. Traditional approaches rely heavily on manual annotation, which is both time-consuming and labor-intensive. Moreover, the lac... |
| 2502.12137 | REVERSUM: A Multi-staged Retrieval-Augmented Generation Method to Enhance Wikipedia Tail Biographies through Personal Narratives | cs.CL cs.IR | Wikipedia is an invaluable resource for factual information about a wide range of entities. However, the quality of articles on less-known entities often lags behind that of the well-known ones. This study proposes a novel approach to enhancing Wikipedia's B and C category biography articles by leveraging personal na... |
| 2502.12138 | FLARE: Feed-forward Geometry, Appearance and Camera Estimation from Uncalibrated Sparse Views | cs.CV | We present FLARE, a feed-forward model designed to infer high-quality camera poses and 3D geometry from uncalibrated sparse-view images (i.e., as few as 2-8 inputs), which is a challenging yet practical setting in real-world applications. Our solution features a cascaded learning paradigm with camera pose serving as ... |
| 2502.12143 | Small Models Struggle to Learn from Strong Reasoners | cs.AI | Large language models (LLMs) excel in complex reasoning tasks, and distilling their reasoning capabilities into smaller models has shown promise. However, we uncover an interesting phenomenon, which we term the Small Model Learnability Gap: small models ($\leq$3B parameters) do not consistently benefit from long chai... |
| 2502.12145 | Fast or Better? Balancing Accuracy and Cost in Retrieval-Augmented Generation with Flexible User Control | cs.IR cs.AI | Retrieval-Augmented Generation (RAG) has emerged as a powerful approach to mitigate large language model (LLM) hallucinations by incorporating external knowledge retrieval. However, existing RAG frameworks often apply retrieval indiscriminately, leading to inefficiencies: over-retrieving when unnecessary or failing to ... |
| 2502.12146 | Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening | cs.CV | We propose Diffusion-Sharpening, a fine-tuning approach that enhances downstream alignment by optimizing sampling trajectories. Existing RL-based fine-tuning methods focus on single training timesteps and neglect trajectory-level alignment, while recent sampling trajectory optimization methods incur significant infer... |
| 2502.12147 | Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction | physics.comp-ph cs.LG | Machine learning interatomic potentials (MLIPs) have become increasingly effective at approximating quantum mechanical calculations at a fraction of the computational cost. However, lower errors on held out test sets do not always translate to improved results on downstream physical property prediction tasks. In this... |
| 2502.12148 | HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation | cs.CV | The remarkable success of the autoregressive paradigm has made significant advancement in Multimodal Large Language Models (MLLMs), with powerful models like Show-o, Transfusion and Emu3 achieving notable progress in unified image understanding and generation. For the first time, we uncover a common phenomenon: the u... |
| 2502.12149 | HARBOR: Exploring Persona Dynamics in Multi-Agent Competition | cs.MA cs.AI cs.CL | We investigate factors contributing to LLM agents' success in competitive multi-agent environments, using auctions as a testbed where agents bid to maximize profit. The agents are equipped with bidding domain knowledge, distinct personas that reflect item preferences, and a memory of auction history. Our work extends... |
| 2502.12150 | Idiosyncrasies in Large Language Models | cs.CL | In this work, we unveil and study idiosyncrasies in Large Language Models (LLMs) -- unique patterns in their outputs that can be used to distinguish the models. To do so, we consider a simple classification task: given a particular text output, the objective is to predict the source LLM that generates the text. We ev... |
| 2502.12151 | VoLUT: Efficient Volumetric streaming enhanced by LUT-based super-resolution | cs.CV cs.SY eess.SY | 3D volumetric video provides immersive experience and is gaining traction in digital media. Despite its rising popularity, the streaming of volumetric video content poses significant challenges due to the high data bandwidth requirement. A natural approach to mitigate the bandwidth issue is to reduce the volumetric v... |
| 2502.12152 | Learning Getting-Up Policies for Real-World Humanoid Robots | cs.RO cs.LG | Automatic fall recovery is a crucial prerequisite before humanoid robots can be reliably deployed. Hand-designing controllers for getting up is difficult because of the varied configurations a humanoid can end up in after a fall and the challenging terrains humanoid robots are expected to operate on. This paper devel... |
| 2502.12154 | Diffusion Models without Classifier-free Guidance | cs.CV cs.AI cs.LG | This paper presents Model-guidance (MG), a novel objective for training diffusion models that addresses and removes the commonly used Classifier-free guidance (CFG). Our innovative approach transcends the standard modeling of solely the data distribution to incorporate the posterior probability of conditions. The pro... |
| 2502.12158 | Mining Social Determinants of Health for Heart Failure Patient 30-Day Readmission via Large Language Model | cs.LG cs.AI cs.CL cs.CY | Heart Failure (HF) affects millions of Americans and leads to high readmission rates, posing significant healthcare challenges. While Social Determinants of Health (SDOH) such as socioeconomic status and housing stability play critical roles in health outcomes, they are often underrepresented in structured EHRs and h... |
| 2502.12159 | Causal Interpretations in Observational Studies: The Role of Sociocultural Backgrounds and Team Dynamics | physics.soc-ph cs.CL | The prevalence of drawing causal conclusions from observational studies has raised concerns about potential exaggeration in science communication. While some believe causal language should only apply to randomized controlled trials, others argue that rigorous methods can justify causal claims in observational studies... |
| 2502.12161 | Integrating Artificial Intelligence and Geophysical Insights for Earthquake Forecasting: A Cross-Disciplinary Review | physics.geo-ph cs.AI cs.LG | Earthquake forecasting remains a significant scientific challenge, with current methods falling short of achieving the performance necessary for meaningful societal benefits. Traditional models, primarily based on past seismicity and geomechanical data, struggle to capture the complexity of seismic patterns and often... |
| 2502.12164 | Scalable and Robust Physics-Informed Graph Neural Networks for Water Distribution Systems | cs.NE cs.LG cs.SY eess.SY | Water distribution systems (WDSs) are an important part of critical infrastructure becoming increasingly significant in the face of climate change and urban population growth. We propose a robust and scalable surrogate deep learning (DL) model to enable efficient planning, expansion, and rehabilitation of WDSs. Our a... |
| 2502.12167 | TastepepAI, An artificial intelligence platform for taste peptide de novo design | cs.LG cs.AI | Taste peptides have emerged as promising natural flavoring agents attributed to their unique organoleptic properties, high safety profile, and potential health benefits. However, the de novo identification of taste peptides derived from animal, plant, or microbial sources remains a time-consuming and resource-intensi... |
| 2502.12168 | CFIRSTNET: Comprehensive Features for Static IR Drop Estimation with Neural Network | cs.LG cs.CV | IR drop estimation is now considered a first-order metric due to the concern about reliability and performance in modern electronic products. Since traditional solutions involve a lengthy iteration and simulation flow, how to achieve fast yet accurate estimation has become an essential demand. In this work, with the he... |
| 2502.12169 | Antimatter Annihilation Vertex Reconstruction with Deep Learning for ALPHA-g Radial Time Projection Chamber | physics.ins-det cs.LG hep-ex | The ALPHA-g experiment at CERN aims to precisely measure the terrestrial gravitational acceleration of antihydrogen atoms. A radial Time Projection Chamber (rTPC), that surrounds the ALPHA-g magnetic trap, is employed to determine the annihilation location, called the vertex. The standard approach requires identifyin... |
| 2502.12170 | MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections | cs.LG cs.AI cs.CL | We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective method to address the limitations of residual connections and enhance cross-layer information flow in Transformers. Unlike existing dense connection approaches with static and shared connection weights, MUDD generates connection weights dyna... |
| 2502.12171 | GoRA: Gradient-driven Adaptive Low Rank Adaptation | cs.LG cs.AI cs.CL | Low-Rank Adaptation (LoRA) is a crucial method for efficiently fine-tuning pretrained large language models (LLMs), with its performance largely influenced by two key factors: rank and initialization strategy. Numerous LoRA variants have been proposed to enhance its performance by addressing these factors. However, t... |
| 2502.12172 | Application-oriented automatic hyperparameter optimization for spiking neural network prototyping | cs.NE cs.LG | Hyperparameter optimization (HPO) is of paramount importance in the development of high-performance, specialized artificial intelligence (AI) models, ranging from well-established machine learning (ML) solutions to the deep learning (DL) domain and the field of spiking neural networks (SNNs). The latter introduce fur... |
| 2502.12173 | nanoML for Human Activity Recognition | cs.LG cs.AI | Human Activity Recognition (HAR) is critical for applications in healthcare, fitness, and IoT, but deploying accurate models on resource-constrained devices remains challenging due to high energy and memory demands. This paper demonstrates the application of Differentiable Weightless Neural Networks (DWNs) to HAR, ac... |
| 2502.12174 | Robust blue-green urban flood risk management optimised with a genetic algorithm for multiple rainstorm return periods | cs.NE cs.CE cs.CY | Flood risk managers seek to optimise Blue-Green Infrastructure (BGI) designs to maximise return on investment. Current systems often use optimisation algorithms and detailed flood models to maximise benefit-cost ratios for single rainstorm return periods. However, these schemes may lack robustness in mitigating flood... |
| 2502.12175 | Spatiotemporal Graph Neural Networks in short term load forecasting: Does adding Graph Structure in Consumption Data Improve Predictions? | cs.LG cs.AI | Short term Load Forecasting (STLF) plays an important role in traditional and modern power systems. Most STLF models predominantly exploit temporal dependencies from historical data to predict future consumption. Nowadays, with the widespread deployment of smart meters, their data can contain spatiotemporal dependenc... |
| 2502.12176 | Ten Challenging Problems in Federated Foundation Models | cs.LG cs.AI | Federated Foundation Models (FedFMs) represent a distributed learning paradigm that fuses general competences of foundation models as well as privacy-preserving capabilities of federated learning. This combination allows the large foundation models and the small local domain models at the remote clients to learn from... |
| 2502.12177 | Recent Advances of NeuroDiffEq -- An Open-Source Library for Physics-Informed Neural Networks | cs.LG | Solving differential equations is a critical challenge across a host of domains. While many software packages efficiently solve these equations using classical numerical approaches, there has been less effort in developing a library for researchers interested in solving such systems using neural networks. With PyTorc... |
| 2502.12178 | Direct Preference Optimization-Enhanced Multi-Guided Diffusion Model for Traffic Scenario Generation | cs.LG cs.MA | Diffusion-based models are recognized for their effectiveness in using real-world driving data to generate realistic and diverse traffic scenarios. These models employ guided sampling to incorporate specific traffic preferences and enhance scenario realism. However, guiding the sampling process to conform to traffic ... |
| 2502.12179 | Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts | cs.LG cs.AI cs.CL | Steering methods manipulate the representations of large language models (LLMs) to induce responses that have desired properties, e.g., truthfulness, offering a promising approach for LLM alignment without the need for fine-tuning. Traditionally, steering has relied on supervision, such as from contrastive pairs of p... |
| 2502.12180 | ClusMFL: A Cluster-Enhanced Framework for Modality-Incomplete Multimodal Federated Learning in Brain Imaging Analysis | eess.IV cs.AI cs.CV cs.LG | Multimodal Federated Learning (MFL) has emerged as a promising approach for collaboratively training multimodal models across distributed clients, particularly in healthcare domains. In the context of brain imaging analysis, modality incompleteness presents a significant challenge, where some institutions may lack sp... |
| 2502.12181 | 3D ReX: Causal Explanations in 3D Neuroimaging Classification | eess.IV cs.AI cs.CV cs.LG | Explainability remains a significant problem for AI models in medical imaging, making it challenging for clinicians to trust AI-driven predictions. We introduce 3D ReX, the first causality-based post-hoc explainability tool for 3D models. 3D ReX uses the theory of actual causality to generate responsibility maps whic... |
| 2502.12182 | Towards Transparent and Accurate Plasma State Monitoring at JET | physics.plasm-ph cs.AI cs.LG | Controlling and monitoring plasma within a tokamak device is complex and challenging. Plasma off-normal events, such as disruptions, are hindering steady-state operation. For large devices, they can even endanger the machine's integrity, and they represent in general one of the most serious concerns for the exploitatio... |
| 2502.12183 | Leveraging large language models for structured information extraction from pathology reports | cs.CL cs.LG | Background: Structured information extraction from unstructured histopathology reports facilitates data accessibility for clinical research. Manual extraction by experts is time-consuming and expensive, limiting scalability. Large language models (LLMs) offer efficient automated extraction through zero-shot prompting... |
| 2502.12185 | Large Language Models for Extrapolative Modeling of Manufacturing Processes | cs.CL cs.AI | Conventional predictive modeling of parametric relationships in manufacturing processes is limited by the subjectivity of human expertise and intuition on the one hand and by the cost and time of experimental data generation on the other hand. This work addresses this issue by establishing a new Large Language Model ... |
| 2502.12186 | E2CB2former: Effecitve and Explainable Transformer for CB2 Receptor Ligand Activity Prediction | cs.LG cs.AI q-bio.QM | Accurate prediction of CB2 receptor ligand activity is pivotal for advancing drug discovery targeting this receptor, which is implicated in inflammation, pain management, and neurodegenerative conditions. Although conventional machine learning and deep learning techniques have shown promise, their limited interpretab... |
| 2502.12187 | Hallucinations are inevitable but statistically negligible | cs.CL cs.FL cs.LG math.ST stat.ML stat.TH | Hallucinations, a phenomenon where a language model (LM) generates nonfactual content, pose a significant challenge to the practical deployment of LMs. While many empirical methods have been proposed to mitigate hallucinations, a recent study established a computability-theoretic result showing that any LM will inevi... |
| 2502.12188 | Boosting Generalization in Diffusion-Based Neural Combinatorial Solver via Energy-guided Sampling | cs.LG cs.AI | Diffusion-based Neural Combinatorial Optimization (NCO) has demonstrated effectiveness in solving NP-complete (NPC) problems by learning discrete diffusion models for solution generation, eliminating hand-crafted domain knowledge. Despite their success, existing NCO methods face significant challenges in both cross-s... |
| 2502.12189 | Self-supervised Attribute-aware Dynamic Preference Ranking Alignment | cs.CL cs.AI | Reinforcement Learning from Human Feedback and its variants excel in aligning with human intentions to generate helpful, harmless, and honest responses. However, most of them rely on costly human-annotated pairwise comparisons for supervised alignment, which is not suitable for list-level scenarios, such as community... |
| 2502.12191 | AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors | cs.LG cs.CV cs.RO | Visuo-tactile sensors aim to emulate human tactile perception, enabling robots to precisely understand and manipulate objects. Over time, numerous meticulously designed visuo-tactile sensors have been integrated into robotic systems, aiding in completing various tasks. However, the distinct data characteristics of th... |
| 2502.12193 | AI and the Law: Evaluating ChatGPT's Performance in Legal Classification | cs.CL cs.AI | The use of ChatGPT to analyze and classify evidence in criminal proceedings has been a topic of ongoing discussion. However, to the best of our knowledge, this issue has not been studied in the context of the Polish language. This study addresses this research gap by evaluating the effectiveness of ChatGPT in classif... |
| 2502.12195 | GeneralizeFormer: Layer-Adaptive Model Generation across Test-Time Distribution Shifts | cs.LG | We consider the problem of test-time domain generalization, where a model is trained on several source domains and adjusted on target domains never seen during training. Different from the common methods that fine-tune the model or adjust the classifier parameters online, we propose to generate multiple layer paramet... |
| 2502.12196 | Integrated Scheduling Model for Arrivals and Departures in Metroplex Terminal Area | cs.NE math.OC | In light of the rapid expansion of civil aviation, addressing the delays and congestion phenomena in the vicinity of metroplex caused by the imbalance between air traffic flow and capacity is crucial. This paper first proposes a bi-level optimization model for the collaborative flight sequencing of arrival and depart... |
| 2502.12197 | A Closer Look at System Prompt Robustness | cs.CL cs.AI | System prompts have emerged as a critical control surface for specifying the behavior of LLMs in chat and agent settings. Developers depend on system prompts to specify important context, output format, personalities, guardrails, content policies, and safety countermeasures, all of which require models to robustly ad... |
| 2502.12198 | Maximize Your Diffusion: A Study into Reward Maximization and Alignment for Diffusion-based Control | cs.LG cs.AI | Diffusion-based planning, learning, and control methods present a promising branch of powerful and expressive decision-making solutions. Given the growing interest, such methods have undergone numerous refinements over the past years. However, despite these advancements, existing methods are limited in their investig... |
| 2502.12200 | Efficient and Effective Prompt Tuning via Prompt Decomposition and Compressed Outer Product | cs.CL cs.AI | Prompt tuning (PT) offers a cost-effective alternative to fine-tuning large-scale pre-trained language models (PLMs), requiring only a few parameters in soft prompt tokens added before the input text. However, existing PT approaches face two significant issues: (i) They overlook intrinsic semantic associations betwee... |
| 2502.12202 | BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack | cs.CL cs.AI cs.LG | Longer thought, better performance: large language models with deep reasoning capabilities, particularly o1-like models, have demonstrated remarkable performance by generating extensive thought processes during inference. This trade-off reveals a potential vulnerability: adversaries could compromise model performance... |
| 2502.12203 | An Interpretable Automated Mechanism Design Framework with Large Language Models | cs.LG cs.AI cs.GT cs.NE | Mechanism design has long been a cornerstone of economic theory, with traditional approaches relying on mathematical derivations. Recently, automated approaches, including differentiable economics with neural networks, have emerged for designing payments and allocations. While both analytical and automated methods ha... |
| 2502.12204 | Predicting Depression in Screening Interviews from Interactive Multi-Theme Collaboration | cs.CL cs.AI | Automatic depression detection provides cues for early clinical intervention by clinicians. Clinical interviews for depression detection involve dialogues centered around multiple themes. Existing studies primarily design end-to-end neural network models to capture the hierarchical structure of clinical interview dia... |
| 2502.12206 | Evaluating the Paperclip Maximizer: Are RL-Based Language Models More Likely to Pursue Instrumental Goals? | cs.AI cs.CL cs.LG | As large language models (LLMs) continue to evolve, ensuring their alignment with human goals and values remains a pressing challenge. A key concern is *instrumental convergence*, where an AI system, in optimizing for a given objective, develops unintended intermediate goals that override the ultimate objectiv... |
| 2502.12207 | PAR-AdvGAN: Improving Adversarial Attack Capability with Progressive Auto-Regression AdvGAN | cs.LG cs.AI | Deep neural networks have demonstrated remarkable performance across various domains. However, they are vulnerable to adversarial examples, which can lead to erroneous predictions. Generative Adversarial Networks (GANs) can leverage the generator and discriminator models to quickly produce high-quality adversarial e... |
| 2502.12208 | AI-Augmented Metamorphic Testing for Comprehensive Validation of Autonomous Vehicles | cs.SE cs.RO | Self-driving cars have the potential to revolutionize transportation, but ensuring their safety remains a significant challenge. These systems must navigate a variety of unexpected scenarios on the road, and their complexity poses substantial difficulties for thorough testing. Conventional testing methodologies face ... |
| 2502.12209 | Suboptimal Shapley Value Explanations | stat.ML cs.AI cs.LG | Deep Neural Networks (DNNs) have demonstrated strong capacity in supporting a wide variety of applications. Shapley value has emerged as a prominent tool to analyze feature importance to help people understand the inference process of deep neural models. Computing the Shapley value function requires choosing a baseline t... |
| 2502.12210 | Enhancing Frame Detection with Retrieval Augmented Generation | cs.CL cs.AI cs.LG | Recent advancements in Natural Language Processing have significantly improved the extraction of structured semantic representations from unstructured text, especially through Frame Semantic Role Labeling (FSRL). Despite this progress, the potential of Retrieval-Augmented Generation (RAG) models for frame detection r... |
| 2502.12213 | Spatiotemporal-aware Trend-Seasonality Decomposition Network for Traffic Flow Forecasting | cs.LG cs.AI | Traffic prediction is critical for optimizing travel scheduling and enhancing public safety, yet the complex spatial and temporal dynamics within traffic data present significant challenges for accurate forecasting. In this paper, we introduce a novel model, the Spatiotemporal-aware Trend-Seasonality Decomposition Ne... |
| 2502.12214 | Zero Token-Driven Deep Thinking in LLMs: Unlocking the Full Potential of Existing Parameters via Cyclic Refinement | cs.CL cs.AI | Resource limitations often constrain the parameter counts of Large Language Models (LLMs), hindering their performance. While existing methods employ parameter sharing to reuse the same parameter set under fixed budgets, such approaches typically force each layer to assume multiple roles with a predetermined number o... |
| 2502.12215 | Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities? | cs.LG cs.AI cs.CL | The advent of test-time scaling in large language models (LLMs), exemplified by OpenAI's o1 series, has advanced reasoning capabilities by scaling computational resource allocation during inference. While successors like QwQ, Deepseek-R1 (R1) and LIMO replicate these advancements, whether these models truly possess t... |
| 2502.12216 | Tactic: Adaptive Sparse Attention with Clustering and Distribution Fitting for Long-Context LLMs | cs.LG cs.AI cs.CL | Long-context models are essential for many applications but face inefficiencies in loading large KV caches during decoding. Prior methods enforce fixed token budgets for sparse attention, assuming a set number of tokens can approximate full attention. However, these methods overlook variations in the importance of at... |
| 2502.12217 | Optimal Brain Iterative Merging: Mitigating Interference in LLM Merging | cs.LG cs.AI cs.CL | Large Language Models (LLMs) have demonstrated impressive capabilities, but their high computational costs pose challenges for customization. Model merging offers a cost-effective alternative, yet existing methods suffer from interference among parameters, leading to performance degradation. In this work, we propose ... |
| 2502.12219 | Towards Efficient Molecular Property Optimization with Graph Energy Based Models | q-bio.BM cs.LG | Optimizing chemical properties is a challenging task due to the vastness and complexity of chemical space. Here, we present a generative energy-based architecture for implicit chemical property optimization, designed to efficiently generate molecules that satisfy target properties without explicit conditional generat... |
| 2502.12222 | IMPACTX: Improving Model Performance by Appropriately predicting CorrecT eXplanations | cs.LG cs.AI | The eXplainable Artificial Intelligence (XAI) research predominantly concentrates on providing explanations about AI model decisions, especially Deep Learning (DL) models. However, there is a growing interest in using XAI techniques to automatically improve the performance of the AI systems themselves. This paper pr... |
| 2502.12223 | GLoT: A Novel Gated-Logarithmic Transformer for Efficient Sign Language Translation | cs.CL cs.CV | Machine Translation has played a critical role in reducing language barriers, but its adaptation for Sign Language Machine Translation (SLMT) has been less explored. Existing works on SLMT mostly use the Transformer neural network which exhibits low performance due to the dynamic nature of the sign language. In this ... |
| 2502.12224 | Accurate Expert Predictions in MoE Inference via Cross-Layer Gate | cs.AI cs.LG | Large Language Models (LLMs) have demonstrated impressive performance across various tasks, and their application in edge scenarios has attracted significant attention. However, sparse-activated Mixture-of-Experts (MoE) models, which are well suited for edge scenarios, have received relatively little attention due to... |
| 2502.12225 | Subjective Logic Encodings | cs.LG cs.AI | Many existing approaches for learning from labeled data assume the existence of gold-standard labels. According to these approaches, inter-annotator disagreement is seen as noise to be removed, either through refinement of annotation guidelines, label adjudication, or label filtering. However, annotator disagreement ... |
| 2502.12226 | On Creating a Causally Grounded Usable Rating Method for Assessing the Robustness of Foundation Models Supporting Time Series | cs.LG cs.AI | Foundation Models (FMs) have improved time series forecasting in various sectors, such as finance, but their vulnerability to input disturbances can hinder their adoption by stakeholders, such as investors and analysts. To address this, we propose a causally grounded rating framework to study the robustness of Founda... |
| 2502.12227 | Identifying the Best Transition Law | cs.LG cs.AI | Motivated by recursive learning in Markov Decision Processes, this paper studies best-arm identification in bandit problems where each arm's reward is drawn from a multinomial distribution with a known support. We compare the performance reached by strategies, including notably LUCB, without and with use of this know... |
| 2502.12231 | PUGS: Zero-shot Physical Understanding with Gaussian Splatting | cs.CV | Current robotic systems can understand the categories and poses of objects well. But understanding physical properties like mass, friction, and hardness, in the wild, remains challenging. We propose a new method that reconstructs 3D objects using the Gaussian splatting representation and predicts various physical pro... |
| 2502.12243 | On the Learnability of Knot Invariants: Representation, Predictability, and Neural Similarity | math.GT cs.LG | We analyze different aspects of neural network predictions of knot invariants. First, we investigate the impact of different knot representations on the prediction of invariants and find that braid representations work in general the best. Second, we study which knot invariants are easy to learn, with invariants deri... |
| 2502.12257 | InfoQuest: Evaluating Multi-Turn Dialogue Agents for Open-Ended Conversations with Hidden Context | cs.CL cs.LG | While large language models excel at following explicit instructions, they often struggle with ambiguous or incomplete user requests, defaulting to verbose, generic responses rather than seeking clarification. We introduce InfoQuest, a multi-turn chat benchmark designed to evaluate how dialogue agents handle hidden c... |
| 2502.12258 | SmokeNet: Efficient Smoke Segmentation Leveraging Multiscale Convolutions and Multiview Attention Mechanisms | cs.CV | Efficient segmentation of smoke plumes is crucial for environmental monitoring and industrial safety, enabling the detection and mitigation of harmful emissions from activities like quarry blasts and wildfires. Accurate segmentation facilitates environmental impact assessments, timely interventions, and compliance wi... |
| 2502.12264 | Multi-dimensional Test Design | econ.TH cs.CY cs.GT cs.LG | How should one jointly design tests and the arrangement of agencies to administer these tests (testing procedure)? To answer this question, we analyze a model where a principal must use multiple tests to screen an agent with a multi-dimensional type, knowing that the agent can change his type at a cost. We identify a... |
| 2502.12267 | NeuroStrata: Harnessing Neurosymbolic Paradigms for Improved Design, Testability, and Verifiability of Autonomous CPS | cs.SE cs.AI | Autonomous cyber-physical systems (CPSs) leverage AI for perception, planning, and control but face trust and safety certification challenges due to inherent uncertainties. The neurosymbolic paradigm replaces stochastic layers with interpretable symbolic AI, enabling determinism. While promising, challenges like mult... |
2502.12272 | Learning to Reason at the Frontier of Learnability | cs.LG cs.AI cs.CL | Reinforcement learning is now widely adopted as the final stage of large
language model training, especially for reasoning-style tasks such as maths
problems. Typically, models attempt each question many times during a single
training step and attempt to learn from their successes and failures. However,
we demonstrat... |
2502.12275 | Integrating Expert Knowledge into Logical Programs via LLMs | cs.AI cs.CL cs.MA | This paper introduces ExKLoP, a novel framework designed to evaluate how
effectively Large Language Models (LLMs) integrate expert knowledge into
logical reasoning systems. This capability is especially valuable in
engineering, where expert knowledge-such as manufacturer-recommended
operational ranges-can be directly... |
2502.12276 | Story Grammar Semantic Matching for Literary Study | cs.CL | In Natural Language Processing (NLP), semantic matching algorithms have
traditionally relied on the feature of word co-occurrence to measure semantic
similarity. While this feature approach has proven valuable in many contexts,
its simplistic nature limits its analytical and explanatory power when used to
understand ... |
2502.12277 | Healthcare cost prediction for heterogeneous patient profiles using deep
learning models with administrative claims data | cs.LG cs.CY | Problem: How can we design patient cost prediction models that effectively
address the challenges of heterogeneity in administrative claims (AC) data to
ensure accurate, fair, and generalizable predictions, especially for high-need
(HN) patients with complex chronic conditions?
Relevance: Accurate and equitable pat... |
2502.12278 | Towards Practical First-Order Model Counting | cs.LO cs.AI | First-order model counting (FOMC) is the problem of counting the number of
models of a sentence in first-order logic. Since lifted inference techniques
rely on reductions to variants of FOMC, the design of scalable methods for FOMC
has attracted attention from both theoreticians and practitioners over the past
decade... |
2502.12280 | Connecting Large Language Model Agent to High Performance Computing
Resource | cs.DC cs.AI | The Large Language Model agent workflow enables the LLM to invoke tool
functions to increase the performance on specific scientific domain questions.
To tackle large scale of scientific research, it requires access to computing
resource and parallel computing setup. In this work, we implemented Parsl to
the LangChain... |
2502.12286 | Rational Capability in Concurrent Games | cs.LO cs.MA | We extend concurrent game structures (CGSs) with a simple notion of
preference over computations and define a minimal notion of rationality for
agents based on the concept of dominance. We use this notion to interpret CL
and ATL languages that extend the basic CL and ATL languages with modalities
for rational ca... |
2502.12289 | Evaluating Step-by-step Reasoning Traces: A Survey | cs.CL | Step-by-step reasoning is widely used to enhance the reasoning ability of
large language models (LLMs) in complex problems. Evaluating the quality of
reasoning traces is crucial for understanding and improving LLM reasoning.
However, the evaluation criteria remain highly unstandardized, leading to
fragmented efforts ... |
2502.12292 | Independence Tests for Language Models | cs.LG cs.CL | We consider the following problem: given the weights of two models, can we
test whether they were trained independently -- i.e., from independent random
initializations? We consider two settings: constrained and unconstrained. In
the constrained setting, we make assumptions about model architecture and
training and p... |
2502.12293 | Data-Efficient Limited-Angle CT Using Deep Priors and Regularization | cs.CV | Reconstructing an image from its Radon transform is a fundamental computed
tomography (CT) task arising in applications such as X-ray scans. In many
practical scenarios, a full 180-degree scan is not feasible, or there is a
desire to reduce radiation exposure. In these limited-angle settings, the
problem becomes ill-... |
2502.12295 | On the Computational Tractability of the (Many) Shapley Values | cs.LG cs.CC cs.LO | Recent studies have examined the computational complexity of computing
Shapley additive explanations (also known as SHAP) across various models and
distributions, revealing their tractability or intractability in different
settings. However, these studies primarily focused on a specific variant called
Conditional SHA... |
2502.12297 | Duo Streamers: A Streaming Gesture Recognition Framework | cs.CV | Gesture recognition in resource-constrained scenarios faces significant
challenges in achieving high accuracy and low latency. The streaming gesture
recognition framework, Duo Streamers, proposed in this paper, addresses these
challenges through a three-stage sparse recognition mechanism, an RNN-lite
model with an ex... |
2502.12298 | Symmetric Rank-One Quasi-Newton Methods for Deep Learning Using Cubic
Regularization | math.OC cs.IT cs.LG cs.NA math.IT math.NA stat.ML | Stochastic gradient descent and other first-order variants, such as Adam and
AdaGrad, are commonly used in the field of deep learning due to their
computational efficiency and low-storage memory requirements. However, these
methods do not exploit curvature information. Consequently, iterates can
converge to saddle po... |
2502.12300 | Per-channel autoregressive linear prediction padding in tiled CNN
processing of 2D spatial data | cs.LG cs.CV | We present linear prediction as a differentiable padding method. For each
channel, a stochastic autoregressive linear model is fitted to the padding
input by minimizing its noise terms in the least-squares sense. The padding is
formed from the expected values of the autoregressive model given the known
pixels. We tra... |
2502.12301 | SMOL: Professionally translated parallel data for 115 under-represented
languages | cs.CL | We open-source SMOL (Set of Maximal Overall Leverage), a suite of training
data to unlock translation for low-resource languages (LRLs). SMOL has been
translated into 115 under-resourced languages, including many for which there
exist no previous public resources, for a total of 6.1M translated tokens. SMOL
comprises... |
2502.12302 | Chaotic Map based Compression Approach to Classification | cs.LG | Modern machine learning approaches often prioritize performance at the cost
of increased complexity, computational demands, and reduced interpretability.
This paper introduces a novel framework that challenges this trend by
reinterpreting learning from an information-theoretic perspective, viewing it
as a search for ... |
2502.12303 | From Gaming to Research: GTA V for Synthetic Data Generation for
Robotics and Navigations | cs.CV | In computer vision, the development of robust algorithms capable of
generalizing effectively in real-world scenarios more and more often requires
large-scale datasets collected under diverse environmental conditions. However,
acquiring such datasets is time-consuming, costly, and sometimes unfeasible. To
address thes... |
2502.12304 | Warmup Generations: A Task-Agnostic Approach for Guiding
Sequence-to-Sequence Learning with Unsupervised Initial State Generation | cs.CL cs.AI | Traditional supervised fine-tuning (SFT) strategies for sequence-to-sequence
tasks often train models to directly generate the target output. Recent work
has shown that guiding models with intermediate steps, such as keywords,
outlines, or reasoning chains, can significantly improve performance,
coherence, and interp... |
2502.12307 | The Agafonov and Schnorr-Stimm theorems for probabilistic automata | cs.FL cs.IT math.IT | For a fixed alphabet $A$, an infinite sequence $X$ is said to be normal if
every word $w$ over $A$ appears in $X$ with the same frequency as any other
word of the same length. A classical result of Agafonov (1966) relates
normality to finite automata as follows: a sequence $X$ is normal if and only
if any subsequence... |
2502.12309 | Eigenvalues in microeconomics | econ.TH cs.SI math.HO | Square matrices often arise in microeconomics, particularly in network models
addressing applications from opinion dynamics to platform regulation. Spectral
theory provides powerful tools for analyzing their properties. We present an
accessible overview of several fundamental applications of spectral methods in
micro... |
2502.12310 | Domain Randomization is Sample Efficient for Linear Quadratic Control | eess.SY cs.SY | We study the sample efficiency of domain randomization and robust control for
the benchmark problem of learning the linear quadratic regulator (LQR). Domain
randomization, which synthesizes controllers by minimizing average performance
over a distribution of model parameters, has achieved empirical success in
robotic... |
2502.12315 | Mean-Field Bayesian Optimisation | cs.LG cs.MA | We address the problem of optimising the average payoff for a large number of
cooperating agents, where the payoff function is unknown and treated as a black
box. While standard Bayesian Optimisation (BO) methods struggle with the
scalability required for high-dimensional input spaces, we demonstrate how
leveraging t... |
2502.12317 | Can Language Models Learn Typologically Implausible Languages? | cs.CL cs.LG | Grammatical features across human languages show intriguing correlations
often attributed to learning biases in humans. However, empirical evidence has
been limited to experiments with highly simplified artificial languages, and
whether these correlations arise from domain-general or language-specific
biases remains ... |