id | title | categories | abstract |
|---|---|---|---|
2502.13095 | Understanding and Rectifying Safety Perception Distortion in VLMs | cs.CV cs.CL cs.LG | Recent studies reveal that vision-language models (VLMs) become more susceptible to harmful requests and jailbreak attacks after integrating the vision modality, exhibiting greater vulnerability than their text-only LLM backbones. To uncover the root cause of this phenomenon, we conduct an in-depth analysis and ident... |
2502.13103 | WeedsGalore: A Multispectral and Multitemporal UAV-based Dataset for Crop and Weed Segmentation in Agricultural Maize Fields | cs.CV | Weeds are one of the major reasons for crop yield loss but current weeding practices fail to manage weeds in an efficient and targeted manner. Effective weed management is especially important for crops with high worldwide production such as maize, to maximize crop yield for meeting increasing global demands. Advance... |
2502.13105 | Enhanced uncertainty quantification variational autoencoders for the solution of Bayesian inverse problems | cs.LG cs.NA math.NA | Among other uses, neural networks are a powerful tool for solving deterministic and Bayesian inverse problems in real-time. In the Bayesian framework, variational autoencoders, a specialized type of neural network, enable the estimation of model parameters and their distribution based on observational data allowing t... |
2502.13107 | MatterChat: A Multi-Modal LLM for Material Science | cs.AI cs.LG | Understanding and predicting the properties of inorganic materials is crucial for accelerating advancements in materials science and driving applications in energy, electronics, and beyond. Integrating material structure data with language-based information through multi-modal large language models (LLMs) offers grea... |
2502.13108 | Improving Clinical Question Answering with Multi-Task Learning: A Joint Approach for Answer Extraction and Medical Categorization | cs.CL cs.AI cs.LG | Clinical Question Answering (CQA) plays a crucial role in medical decision-making, enabling physicians to extract relevant information from Electronic Medical Records (EMRs). While transformer-based models such as BERT, BioBERT, and ClinicalBERT have demonstrated state-of-the-art performance in CQA, existing models l... |
2502.13110 | MLPs at the EOC: Dynamics of Feature Learning | cs.LG | Since infinitely wide neural networks in the kernel regime are random feature models, the success of contemporary deep learning lies in the rich regime, where a satisfying theory should explain not only the convergence of gradient descent but the learning of features along the way. Such a theory should also cover phe... |
2502.13112 | Constrained Online Convex Optimization with Polyak Feasibility Steps | cs.LG math.OC | In this work, we study online convex optimization with a fixed constraint function $g : \mathbb{R}^d \rightarrow \mathbb{R}$. Prior work on this problem has shown $O(\sqrt{T})$ regret and cumulative constraint satisfaction $\sum_{t=1}^{T} g(x_t) \leq 0$, while only accessing the constraint value and subgradient at th... |
2502.13114 | The influence of motion features in temporal perception | cs.CL | This paper examines the role of manner-of-motion verbs in shaping subjective temporal perception and emotional resonance. Through four complementary studies, we explore how these verbs influence the conceptualization of time, examining their use in literal and metaphorical (temporal) contexts. Our findings reveal tha... |
2502.13115 | Near-Optimal Private Learning in Linear Contextual Bandits | cs.LG cs.AI cs.CR math.ST stat.ML stat.TH | We analyze the problem of private learning in generalized linear contextual bandits. Our approach is based on a novel method of re-weighted regression, yielding an efficient algorithm with regret of order $\sqrt{T}+\frac{1}{\alpha}$ and $\sqrt{T}/\alpha$ in the joint and local model of $\alpha$-privacy, respectively.... |
2502.13117 | Performance Evaluation of Large Language Models in Statistical Programming | stat.AP cs.AI | The programming capabilities of large language models (LLMs) have revolutionized automatic code generation and opened new avenues for automatic statistical analysis. However, the validity and quality of these generated codes need to be systematically evaluated before they can be widely adopted. Despite their growing ... |
2502.13119 | STEER-ME: Assessing the Microeconomic Reasoning of Large Language Models | cs.CL | How should one judge whether a given large language model (LLM) can reliably perform economic reasoning? Most existing LLM benchmarks focus on specific applications and fail to present the model with a rich variety of economic tasks. A notable exception is Raman et al. [2024], who offer an approach for comprehensivel... |
2502.13120 | Adapting Psycholinguistic Research for LLMs: Gender-inclusive Language in a Coreference Context | cs.CL cs.AI | Gender-inclusive language is often used with the aim of ensuring that all individuals, regardless of gender, can be associated with certain concepts. While psycholinguistic studies have examined its effects in relation to human cognition, it remains unclear how Large Language Models (LLMs) process gender-inclusive la... |
2502.13124 | NaturalReasoning: Reasoning in the Wild with 2.8M Challenging Questions | cs.CL | Scaling reasoning capabilities beyond traditional domains such as math and coding is hindered by the lack of diverse and high-quality questions. To overcome this limitation, we introduce a scalable approach for generating diverse and challenging reasoning questions, accompanied by reference answers. We present Natura... |
2502.13125 | RuozhiBench: Evaluating LLMs with Logical Fallacies and Misleading Premises | cs.CL | Recent advances in large language models (LLMs) have shown that they can answer questions requiring complex reasoning. However, their ability to identify and respond to text containing logical fallacies or deliberately misleading premises remains less studied. To address this gap, we introduce RuozhiBench, a bilingua... |
2502.13127 | Facilitating Long Context Understanding via Supervised Chain-of-Thought Reasoning | cs.CL | Recent advances in Large Language Models (LLMs) have enabled them to process increasingly longer sequences, ranging from 2K to 2M tokens and even beyond. However, simply extending the input sequence length does not necessarily lead to effective long-context understanding. In this study, we integrate Chain-of-Thought ... |
2502.13128 | SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation | cs.SD cs.AI | Text-to-song generation, the task of creating vocals and accompaniment from textual inputs, poses significant challenges due to domain complexity and data scarcity. Existing approaches often employ multi-stage generation procedures, resulting in cumbersome training and inference pipelines. In this paper, we propose S... |
2502.13129 | Is Noise Conditioning Necessary for Denoising Generative Models? | cs.CV | It is widely believed that noise conditioning is indispensable for denoising diffusion models to work successfully. This work challenges this belief. Motivated by research on blind image denoising, we investigate a variety of denoising-based generative models in the absence of noise conditioning. To our surprise, mos... |
2502.13130 | Magma: A Foundation Model for Multimodal AI Agents | cs.CV cs.AI cs.HC cs.LG cs.RO | We present Magma, a foundation model that serves multimodal AI agentic tasks in both the digital and physical worlds. Magma is a significant extension of vision-language (VL) models in that it not only retains the VL understanding ability (verbal intelligence) of the latter, but is also equipped with the ability to p... |
2502.13131 | Rethinking Diverse Human Preference Learning through Principal Component Analysis | cs.AI cs.CL | Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensi... |
2502.13132 | Learning to Defer for Causal Discovery with Imperfect Experts | cs.LG cs.AI stat.ML | Integrating expert knowledge, e.g. from large language models, into causal discovery algorithms can be challenging when the knowledge is not guaranteed to be correct. Expert recommendations may contradict data-driven results, and their reliability can vary significantly depending on the domain or specific query. Exis... |
2502.13133 | AV-Flow: Transforming Text to Audio-Visual Human-like Interactions | cs.CV | We introduce AV-Flow, an audio-visual generative model that animates photo-realistic 4D talking avatars given only text input. In contrast to prior work that assumes an existing speech signal, we synthesize speech and vision jointly. We demonstrate human-like speech synthesis, synchronized lip motion, lively facial e... |
2502.13134 | RHINO: Learning Real-Time Humanoid-Human-Object Interaction from Human Demonstrations | cs.RO cs.HC cs.LG | Humanoid robots have shown success in locomotion and manipulation. Despite these basic abilities, humanoids are still required to quickly understand human instructions and react based on human interaction signals to become valuable assistants in human daily life. Unfortunately, most existing works only focus on multi... |
2502.13135 | Sleepless Nights, Sugary Days: Creating Synthetic Users with Health Conditions for Realistic Coaching Agent Interactions | cs.LG cs.AI cs.CL | We present an end-to-end framework for generating synthetic users for evaluating interactive agents designed to encourage positive behavior changes, such as in health and lifestyle coaching. The synthetic users are grounded in health and lifestyle conditions, specifically sleep and diabetes management in this study, ... |
2502.13137 | Theorem Prover as a Judge for Synthetic Data Generation | cs.AI | The demand for synthetic data in mathematical reasoning has increased due to its potential to enhance the mathematical capabilities of large language models (LLMs). However, ensuring the validity of intermediate reasoning steps remains a significant challenge, affecting data quality. While formal verification via the... |
2502.13138 | AIDE: AI-Driven Exploration in the Space of Code | cs.AI cs.LG | Machine learning, the foundation of modern artificial intelligence, has driven innovations that have fundamentally transformed the world. Yet, behind advancements lies a complex and often tedious process requiring labor and compute intensive iteration and experimentation. Engineers and scientists developing machine l... |
2502.13140 | Towards Quantum Tensor Decomposition in Biomedical Applications | q-bio.QM cs.LG | Tensor decomposition has emerged as a powerful framework for feature extraction in multi-modal biomedical data. In this review, we present a comprehensive analysis of tensor decomposition methods such as Tucker, CANDECOMP/PARAFAC, spiked tensor decomposition, etc. and their diverse applications across biomedical doma... |
2502.13141 | UniGuardian: A Unified Defense for Detecting Prompt Injection, Backdoor Attacks and Adversarial Attacks in Large Language Models | cs.CL cs.AI cs.LG | Large Language Models (LLMs) are vulnerable to attacks like prompt injection, backdoor attacks, and adversarial attacks, which manipulate prompts or models to generate harmful outputs. In this paper, departing from traditional deep learning attack paradigms, we explore their intrinsic relationship and collectively te... |
2502.13142 | Pre-training Auto-regressive Robotic Models with 4D Representations | cs.RO cs.AI | Foundation models pre-trained on massive unlabeled datasets have revolutionized natural language and computer vision, exhibiting remarkable generalization capabilities, thus highlighting the importance of pre-training. Yet, efforts in robotics have struggled to achieve similar success, limited by either the need for ... |
2502.13143 | SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation | cs.RO cs.AI cs.CV | Spatial intelligence is a critical component of embodied AI, enabling robots to understand and interact with their environments. While recent advances have enhanced the ability of VLMs to perceive object locations and positional relationships, they still lack the capability to precisely understand object orientation... |
2502.13144 | RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning | cs.CV cs.RO | Existing end-to-end autonomous driving (AD) algorithms typically follow the Imitation Learning (IL) paradigm, which faces challenges such as causal confusion and the open-loop gap. In this work, we establish a 3DGS-based closed-loop Reinforcement Learning (RL) training paradigm. By leveraging 3DGS techniques, we cons... |
2502.13145 | Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation | cs.CV | Recent Multimodal Large Language Models (MLLMs) have achieved remarkable performance but face deployment challenges due to their quadratic computational complexity, growing Key-Value cache requirements, and reliance on separate vision encoders. We propose mmMamba, a framework for developing linear-complexity native m... |
2502.13146 | Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization | cs.CV cs.LG | The emergence of large Vision Language Models (VLMs) has broadened the scope and capabilities of single-modal Large Language Models (LLMs) by integrating visual modalities, thereby unlocking transformative cross-modal applications in a variety of real-world scenarios. Despite their impressive performance, VLMs are pr... |
2502.13149 | Bi-Fact: A Bidirectional Factorization-based Evaluation of Intent Extraction from UI Trajectories | cs.AI | Evaluating intent extraction from GUIs demands accurate, fine-grained metrics. This paper introduces Bi-Fact, a novel method that decomposes intents into atomic facts and performs bidirectional comparisons to assess precision and recall. Experiments demonstrate Bi-Fact's superior correlation with human judgments comp... |
2502.13160 | Understanding Dynamic Diffusion Process of LLM-based Agents under Information Asymmetry | cs.MA cs.AI | Large language models have been used to simulate human society using multi-agent systems. Most current social simulation research emphasizes interactive behaviors in fixed environments, ignoring information opacity, relationship variability and diffusion diversity. In this paper, we study the dynamics of information ... |
2502.13161 | Noumenal Labs White Paper: How To Build A Brain | q-bio.NC cs.AI | This white paper describes some of the design principles for artificial or machine intelligence that guide efforts at Noumenal Labs. These principles are drawn from both nature and from the means by which we come to represent and understand it. The end goal of research and development in this field should be to desig... |
2502.13162 | ShieldLearner: A New Paradigm for Jailbreak Attack Defense in LLMs | cs.CR cs.AI cs.CL | Large Language Models (LLMs) have achieved remarkable success in various domains but remain vulnerable to adversarial jailbreak attacks. Existing prompt-defense strategies, including parameter-modifying and parameter-free approaches, face limitations in adaptability, interpretability, and customization, constraining ... |
2502.13164 | Multi-Agent Actor-Critic Generative AI for Query Resolution and Analysis | cs.MA cs.AI | In this paper, we introduce MASQRAD (Multi-Agent Strategic Query Resolution and Diagnostic tool), a transformative framework for query resolution based on the actor-critic model, which utilizes multiple generative AI agents. MASQRAD is excellent at translating imprecise or ambiguous user inquiries into precise and ac... |
2502.13165 | HedgeAgents: A Balanced-aware Multi-agent Financial Trading System | cs.MA cs.AI q-fin.TR | As automated trading gains traction in the financial market, algorithmic investment strategies are increasingly prominent. While Large Language Models (LLMs) and Agent-based models exhibit promising potential in real-time market analysis and trading decisions, they still experience a significant -20% loss when confro... |
2502.13166 | Large Language Models Can Help Mitigate Barren Plateaus | quant-ph cs.AI cs.CL cs.LG | In the era of noisy intermediate-scale quantum (NISQ) computing, Quantum Neural Networks (QNNs) have emerged as a promising approach for various applications, yet their training is often hindered by barren plateaus (BPs), where gradient variance vanishes exponentially as the model size increases. To address this chal... |
2502.13167 | SmartLLM: Smart Contract Auditing using Custom Generative AI | cs.CR cs.AI | Smart contracts are essential to decentralized finance (DeFi) and blockchain ecosystems but are increasingly vulnerable to exploits due to coding errors and complex attack vectors. Traditional static analysis tools and existing vulnerability detection methods often fail to address these challenges comprehensively, le... |
2502.13170 | Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment | cs.AI cs.LG | Reasoning abilities are among the most enigmatic and captivating aspects of large language models (LLMs). Numerous studies are dedicated to exploring and expanding the boundaries of this reasoning capability. However, tasks that embody both reasoning and recall characteristics are often overlooked. In this paper... |
2502.13171 | Web Phishing Net (WPN): A scalable machine learning approach for real-time phishing campaign detection | cs.CR cs.AI cs.LG | Phishing is the most prevalent type of cyber-attack today and is recognized as the leading source of data breaches with significant consequences for both individuals and corporations. Web-based phishing attacks are the most frequent with vectors such as social media posts and emails containing links to phishing URLs ... |
2502.13172 | Unveiling Privacy Risks in LLM Agent Memory | cs.CR cs.AI | Large Language Model (LLM) agents have become increasingly prevalent across various real-world applications. They enhance decision-making by storing private user-agent interactions in the memory module for demonstrations, introducing new privacy risks for LLM agents. In this work, we systematically investigate the vu... |
2502.13173 | Thinking Preference Optimization | cs.LG cs.AI | Supervised Fine-Tuning (SFT) has been a go-to and effective method for enhancing long chain-of-thought (CoT) reasoning in relatively small LLMs by fine-tuning them with long CoT responses from larger LLMs. To continually improve reasoning abilities, we can either collect new high-quality long CoT reasoning SFT data o... |
2502.13174 | Generative Topology Optimization: Exploring Diverse Solutions in Structural Design | cs.LG cond-mat.mtrl-sci cs.AI cs.CV | Topology optimization (TO) is a family of computational methods that derive near-optimal geometries from formal problem descriptions. Despite their success, established TO methods are limited to generating single solutions, restricting the exploration of alternative designs. To address this limitation, we introduce G... |
2502.13175 | Towards Robust and Secure Embodied AI: A Survey on Vulnerabilities and Attacks | cs.CR cs.AI cs.RO | Embodied AI systems, including robots and autonomous vehicles, are increasingly integrated into real-world applications, where they encounter a range of vulnerabilities stemming from both environmental and system-level factors. These vulnerabilities manifest through sensor spoofing, adversarial attacks, and failures ... |
2502.13176 | BaKlaVa -- Budgeted Allocation of KV cache for Long-context Inference | cs.LG cs.AI | In Large Language Model (LLM) inference, Key-Value (KV) caches (KV-caches) are essential for reducing time complexity. However, they result in a linear increase in GPU memory as the context length grows. While recent work explores KV-cache eviction and compression policies to reduce memory usage, they often consider ... |
2502.13177 | KL Penalty Control via Perturbation for Direct Preference Optimization | cs.LG cs.AI | Direct Preference Optimization (DPO) demonstrates the advantage of aligning a large language model with human preference using only an offline dataset. However, DPO has the limitation that the KL penalty, which prevents excessive deviation from the reference model, is static throughout the training process. Several m... |
2502.13178 | Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy, Unified Evaluation, and Comparative Analysis | cs.LG cs.AI | Post-training Quantization (PTQ) technique has been extensively adopted for large language models (LLMs) compression owing to its efficiency and low resource requirement. However, current research lacks an in-depth analysis of the superior and applicable scenarios of each PTQ strategy. In addition, existing algorithms... |
2502.13179 | PTQ1.61: Push the Real Limit of Extremely Low-Bit Post-Training Quantization Methods for Large Language Models | cs.LG cs.AI | Large Language Models (LLMs) suffer severe performance degradation when facing extremely low-bit (sub 2-bit) quantization. Several existing sub 2-bit post-training quantization (PTQ) methods utilize a mix-precision scheme by leveraging an unstructured fine-grained mask to explicitly distinguish salient weights, while... |
2502.13180 | Uncertain Multi-Objective Recommendation via Orthogonal Meta-Learning Enhanced Bayesian Optimization | cs.LG cs.AI | Recommender systems (RSs) play a crucial role in shaping our digital interactions, influencing how we access and engage with information across various domains. Traditional research has predominantly centered on maximizing recommendation accuracy, often leading to unintended side effects such as echo chambers and con... |
2502.13181 | RingFormer: Rethinking Recurrent Transformer with Adaptive Level Signals | cs.LG cs.AI | Transformers have achieved great success in effectively processing sequential data such as text. Their architecture, consisting of several attention and feedforward blocks, can model relations between elements of a sequence in a parallel manner, which makes them very efficient to train and effective in sequence modeling.... |
2502.13182 | Fundus2Globe: Generative AI-Driven 3D Digital Twins for Personalized Myopia Management | eess.IV cs.CV eess.SP | Myopia, projected to affect 50% of the population globally by 2050, is a leading cause of vision loss. Eyes with pathological myopia exhibit distinctive shape distributions, which are closely linked to the progression of vision-threatening complications. Recent understanding of eye-shape-based biomarkers requires magnetic r... |
2502.13183 | Synthetic generation of 2D data records based on Autoencoders | eess.IV cs.LG | Gas Chromatography coupled with Ion Mobility Spectrometry (GC-IMS) is a dual-separation analytical technique widely used for identifying components in gaseous samples by separating and analysing the arrival times of their constituent species. Data generated by GC-IMS is typically represented as two-dimensional spectr... |
2502.13185 | CondensNet: Enabling stable long-term climate simulations via hybrid deep learning models with adaptive physical constraints | physics.ao-ph cs.AI cs.LG | Accurate and efficient climate simulations are crucial for understanding Earth's evolving climate. However, current general circulation models (GCMs) face challenges in capturing unresolved physical processes, such as clouds and convection. A common solution is to adopt cloud resolving models that provide more accura... |
2502.13186 | Model selection for behavioral learning data and applications to contextual bandits | stat.ML cs.LG | Learning for animals or humans is the process that leads to behaviors better adapted to the environment. This process highly depends on the individual that learns and is usually observed only through the individual's actions. This article presents ways to use this individual behavioral data to find the model that bes... |
2502.13187 | A Survey of Sim-to-Real Methods in RL: Progress, Prospects and Challenges with Foundation Models | cs.LG cs.AI cs.RO | Deep Reinforcement Learning (RL) has been explored and verified to be effective in solving decision-making tasks in various domains, such as robotics, transportation, recommender systems, etc. It learns from the interaction with environments and updates the policy using the collected experience. However, due to the l... |
2502.13188 | Autonomous Vehicles Using Multi-Agent Reinforcement Learning for Routing Decisions Can Harm Urban Traffic | cs.MA cs.LG cs.RO | Autonomous vehicles (AVs) using Multi-Agent Reinforcement Learning (MARL) for simultaneous route optimization may destabilize traffic environments, with human drivers possibly experiencing longer travel times. We study this interaction by simulating human drivers and AVs. Our experiments with standard MARL algorithms... |
2502.13189 | MoBA: Mixture of Block Attention for Long-Context LLMs | cs.LG cs.AI cs.CL | Scaling the effective context length is essential for advancing large language models (LLMs) toward artificial general intelligence (AGI). However, the quadratic increase in computational complexity inherent in traditional attention mechanisms presents a prohibitive overhead. Existing approaches either impose strongl... |
2502.13190 | Application of machine learning algorithm in temperature field reconstruction | cs.LG physics.flu-dyn | This study focuses on the stratification patterns and dynamic evolution of reservoir water temperatures, aiming to estimate and reconstruct the temperature field using limited and noisy local measurement data. Due to complex measurement environments and technical limitations, obtaining complete temperature informatio... |
2502.13191 | On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis | cs.LG cs.AI | Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs) -- a major privacy threat where an adversary... |
2502.13193 | Private Text Generation by Seeding Large Language Model Prompts | cs.CL | We explore how private synthetic text can be generated by suitably prompting a large language model (LLM). This addresses a challenge for organizations like hospitals, which hold sensitive text data like patient medical records, and wish to share it in order to train machine learning models for medical tasks, while p... |
2502.13194 | Conditional Max-Sum for Asynchronous Multiagent Decision Making | cs.MA cs.AI | In this paper we present a novel approach for multiagent decision making in dynamic environments based on Factor Graphs and the Max-Sum algorithm, considering asynchronous variable reassignments and distributed message-passing among agents. Motivated by the challenging domain of lane-free traffic where automated vehi... |
2502.13195 | Linguistic Generalizations are not Rules: Impacts on Evaluation of LMs | cs.CL | Linguistic evaluations of how well LMs generalize to produce or understand novel text often implicitly take for granted that natural languages are generated by symbolic rules. Grammaticality is thought to be determined by whether or not sentences obey such rules. Interpretation is believed to be compositionally gener... |
2502.13196 | GS-QA: Comprehensive Quality Assessment Benchmark for Gaussian Splatting View Synthesis | cs.MM cs.CV | Gaussian Splatting (GS) offers a promising alternative to Neural Radiance Fields (NeRF) for real-time 3D scene rendering. Using a set of 3D Gaussians to represent complex geometry and appearance, GS achieves faster rendering times and reduced memory consumption compared to the neural network approach used in NeRF. Ho... |
2502.13198 | Enhancing Machine Learning Performance through Intelligent Data Quality Assessment: An Unsupervised Data-centric Framework | cs.LG cs.AI stat.ML | Poor data quality limits the advantageous power of Machine Learning (ML) and weakens high-performing ML software systems. Nowadays, data are more prone to the risk of poor quality due to their increasing volume and complexity. Therefore, tedious and time-consuming work goes into data preparation and improvement befor... |
2502.13199 | The Role of GitHub Copilot on Software Development: A Perspective on Productivity, Security, Best Practices and Future Directions | cs.SE cs.AI | GitHub Copilot is transforming software development by automating tasks and boosting productivity through AI-driven code generation. In this paper, we conduct a literature survey to synthesize insights on Copilot's impact on productivity and security. We review academic journal databases, industry reports, and offic... |
2502.13200 | Learning To Explore With Predictive World Model Via Self-Supervised Learning | cs.LG cs.AI | Autonomous artificial agents must be able to learn behaviors in complex environments without humans to design tasks and rewards. Designing these functions for each environment is not feasible, thus motivating the development of intrinsic reward functions. In this paper, we propose using several cognitive elements th... |
2502.13207 | Thinking Outside the (Gray) Box: A Context-Based Score for Assessing Value and Originality in Neural Text Generation | cs.CL cs.AI cs.CY cs.LG | Despite the increasing use of large language models for creative tasks, their outputs often lack diversity. Common solutions, such as sampling at higher temperatures, can compromise the quality of the results. Drawing on information theory, we propose a context-based score to quantitatively evaluate value and origina... |
2502.13220 | The impact of conformer quality on learned representations of molecular conformer ensembles | cs.LG physics.chem-ph | Training machine learning models to predict properties of molecular conformer ensembles is an increasingly popular strategy to accelerate the conformational analysis of drug-like small molecules, reactive organic substrates, and homogeneous catalysts. For high-throughput analyses especially, trained surrogate models ... |
2502.13221 | Two Tickets are Better than One: Fair and Accurate Hiring Under Strategic LLM Manipulations | cs.LG cs.AI cs.CY cs.GT | In an era of increasingly capable foundation models, job seekers are turning to generative AI tools to enhance their application materials. However, unequal access to and knowledge about generative AI tools can harm both employers and candidates by reducing the accuracy of hiring decisions and giving some candidates ... |
2502.13228 | Conformal Prediction as Bayesian Quadrature | cs.LG cs.AI stat.ML | As machine learning-based prediction systems are increasingly used in high-stakes situations, it is important to understand how such predictive models will perform upon deployment. Distribution-free uncertainty quantification techniques such as conformal prediction provide guarantees about the loss black-box models w... |
2502.13233 | SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering? | cs.CL cs.AI cs.IR cs.IT math.IT | Large Language Models (LLMs) have shown remarkable capabilities in general domains but often struggle with tasks requiring specialized knowledge. Conventional Retrieval-Augmented Generation (RAG) techniques typically retrieve external information from static knowledge bases, which can be outdated or incomplete, missi... |
2502.13234 | MotionMatcher: Motion Customization of Text-to-Video Diffusion Models
via Motion Feature Matching | cs.CV cs.AI cs.LG | Text-to-video (T2V) diffusion models have shown promising capabilities in
synthesizing realistic videos from input text prompts. However, the input text
description alone provides limited control over precise object movements
and camera framing. In this work, we tackle the motion customization problem,
where a r... |
2502.13243 | Learning the Universe: Learning to Optimize Cosmic Initial Conditions
with Non-Differentiable Structure Formation Models | astro-ph.CO astro-ph.GA cs.LG | Making the most of next-generation galaxy clustering surveys requires
overcoming challenges in complex, non-linear modelling to access the
significant amount of information at smaller cosmological scales. Field-level
inference has provided a unique opportunity beyond summary statistics to use
all of the information o... |
2502.13245 | Range Retrieval with Graph-Based Indices | cs.IR | Retrieving points based on proximity in a high-dimensional vector space is a
crucial step in information retrieval applications. The approximate nearest
neighbor search (ANNS) problem, which identifies the $k$ nearest neighbors for
a query (approximately, since exact search is costly), has been extensively studied in
recent... |

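For reference, the exact $k$-nearest-neighbor search that graph-based ANNS indices approximate can be written as a brute-force baseline in a few lines of numpy; the points and query below are illustrative, not from the paper.

```python
import numpy as np

def knn(points, query, k):
    # Exact k-nearest-neighbor search by brute force: the O(n) baseline
    # that graph-based ANNS indices approximate at much lower query cost.
    dists = np.linalg.norm(points - query, axis=1)
    return np.argsort(dists)[:k]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
idx = knn(pts, np.array([0.2, 0.1]), 2)
```

Graph-based indices trade this exhaustive scan for a greedy walk over a proximity graph, which is why they return approximate rather than exact neighbors.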
2502.13246 | When People are Floods: Analyzing Dehumanizing Metaphors in Immigration
Discourse with Large Language Models | cs.CL cs.CY | Metaphor, discussing one concept in terms of another, is abundant in politics
and can shape how people understand important issues. We develop a
computational approach to measure metaphorical language, focusing on
immigration discourse on social media. Grounded in qualitative social science
research, we identify seve... |
2502.13247 | Grounding LLM Reasoning with Knowledge Graphs | cs.CL | Knowledge Graphs (KGs) are valuable tools for representing relationships
between entities in a structured format. Traditionally, these knowledge bases
are queried to extract specific information. However, question-answering (QA)
over such KGs poses a challenge due to the intrinsic complexity of natural
language compa... |
2502.13248 | Communication Strategy on Macro-and-Micro Traffic State in Cooperative
Deep Reinforcement Learning for Regional Traffic Signal Control | cs.MA cs.AI cs.LG | Adaptive Traffic Signal Control (ATSC) has become a popular research topic in
intelligent transportation systems. Regional Traffic Signal Control (RTSC)
using the Multi-agent Deep Reinforcement Learning (MADRL) technique has become
a promising approach for ATSC due to its ability to achieve the optimum
trade-off betw... |
2502.13249 | Evidence of Replica Symmetry Breaking under the Nishimori conditions in
epidemic inference on graphs | cond-mat.dis-nn cond-mat.stat-mech cs.IT cs.LG math.IT physics.soc-ph | In Bayesian inference, computing the posterior distribution from the data is
typically a non-trivial problem, which usually requires approximations such as
mean-field approaches or numerical methods, like Markov chain Monte Carlo.
Being a high-dimensional distribution over a set of correlated variables, the
poste... |
2502.13251 | Neural Attention Search | cs.CL cs.AI | We present Neural Attention Search (NAtS), a framework that automatically
evaluates the importance of each token within a sequence and determines if the
corresponding token can be dropped after several steps. This approach can
efficiently reduce the KV cache sizes required by transformer-based models
during inference... |
2502.13252 | Multilingual Language Model Pretraining using Machine-translated Data | cs.CL | High-resource languages such as English, enables the pretraining of
high-quality large language models (LLMs). The same can not be said for most
other languages as LLMs still underperform for non-English languages, likely
due to a gap in the quality and diversity of the available multilingual
pretraining corpora. In ... |
2502.13255 | PCB Renewal: Iterative Reuse of PCB Substrates for Sustainable
Electronic Making | cs.HC cs.CY cs.RO | PCB (printed circuit board) substrates are often single-use, leading to
material waste in electronics making. We introduce PCB Renewal, a novel
technique that "erases" and "reconfigures" PCB traces by selectively depositing
conductive epoxy onto outdated areas, transforming isolated paths into
conductive planes that ... |
2502.13256 | A Survey of Anomaly Detection in Cyber-Physical Systems | cs.CR cs.AI | In our increasingly interconnected world, Cyber-Physical Systems (CPS) play a
crucial role in industries like healthcare, transportation, and manufacturing
by combining physical processes with computing power. These systems, however,
face many challenges, especially regarding security and system faults.
Anomalies in ... |
2502.13257 | Random Forest Autoencoders for Guided Representation Learning | cs.LG | Decades of research have produced robust methods for unsupervised data
visualization, yet supervised visualization -- where expert labels
guide representations -- remains underexplored, as most supervised
approaches prioritize classification over visualization. Recently, RF-PHATE, a
diffusio... |
2502.13259 | HumT DumT: Measuring and controlling human-like language in LLMs | cs.CL cs.AI cs.CY | Should LLMs generate language that makes them seem human? Human-like language
might improve user experience, but might also lead to overreliance and
stereotyping. Assessing these potential impacts requires a systematic way to
measure human-like tone in LLM outputs. We introduce HumT and SocioT, metrics
for human-like... |
2502.13260 | Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought
Reasoning in Large Language Models | cs.CL cs.AI cs.LG | Chain-of-Thought (CoT) reasoning, which breaks down complex tasks into
intermediate reasoning steps, has significantly enhanced the performance of
large language models (LLMs) on challenging tasks. However, the detailed
reasoning process in CoT often incurs long generation times and high
computational costs, partly d... |
2502.13263 | Spectral method for low-dose Poisson and Bernoulli phase retrieval | cs.IT math.IT math.PR | We consider the problem of phaseless reconstruction from measurements with
Poisson or Bernoulli distributed noise. This is of particular interest in
biological imaging experiments where a low dose of radiation has to be used to
mitigate potential damage of the specimen, resulting in low observed particle
counts. We d... |
2502.13266 | A Machine Learning Approach That Beats Large Rubik's Cubes | cs.LG cs.DM | The paper proposes a novel machine learning-based approach to the pathfinding
problem on extremely large graphs. This method leverages diffusion distance
estimation via a neural network and uses beam search for pathfinding. We
demonstrate its efficiency by finding solutions for 4x4x4 and 5x5x5 Rubik's
cubes with unpr... |
2502.13267 | BeforeIT.jl: High-Performance Agent-Based Macroeconomics Made Easy | cs.MA cs.CE econ.GN q-fin.EC | BeforeIT is an open-source software for building and simulating
state-of-the-art macroeconomic agent-based models (macro ABMs) based on the
recently introduced macro ABM developed in [1] and here referred to as the base
model. Written in Julia, it combines extraordinary computational efficiency
with user-friendliness... |
2502.13268 | Talking About the Assumption in the Room | cs.HC cs.LG | The reference to assumptions in how practitioners use or interact with
machine learning (ML) systems is ubiquitous in HCI and responsible ML
discourse. However, what remains unclear from prior works is the
conceptualization of assumptions and how practitioners identify and handle
assumptions throughout their workflow... |
2502.13270 | REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation | cs.CL | Long-term, open-domain dialogue capabilities are essential for chatbots
aiming to recall past interactions and demonstrate emotional intelligence (EI).
Yet, most existing research relies on synthetic, LLM-generated data, leaving
open questions about real-world conversational patterns. To address this gap,
we introduc... |
2502.13277 | HyperGCL: Multi-Modal Graph Contrastive Learning via Learnable
Hypergraph Views | cs.LG cs.AI | Recent advancements in Graph Contrastive Learning (GCL) have demonstrated
remarkable effectiveness in improving graph representations. However, relying
on predefined augmentations (e.g., node dropping, edge perturbation, attribute
masking) may result in the loss of task-relevant information and a lack of
adaptability... |
2502.13278 | Performance Evaluation of Sentiment Analysis on Text and Emoji Data
Using End-to-End, Transfer Learning, Distributed and Explainable AI Models | cs.CL cs.AI | Emojis are being frequently used in todays digital world to express from
simple to complex thoughts more than ever before. Hence, they are also being
used in sentiment analysis and targeted marketing campaigns. In this work, we
performed sentiment analysis of Tweets as well as on emoji dataset from the
Kaggle. Since ... |
2502.13280 | Value Gradient Sampler: Sampling as Sequential Decision Making | cs.LG | We propose the Value Gradient Sampler (VGS), a trainable sampler based on the
interpretation of sampling as discrete-time sequential decision-making. VGS
generates samples from a given unnormalized density (i.e., energy) by drifting
and diffusing randomly initialized particles. In VGS, finding the optimal drift
is eq... |
2502.13283 | Benefits of Early Stopping in Gradient Descent for Overparameterized
Logistic Regression | cs.LG stat.ML | In overparameterized logistic regression, gradient descent (GD) iterates
diverge in norm while converging in direction to the maximum $\ell_2$-margin
solution -- a phenomenon known as the implicit bias of GD. This work
investigates additional regularization effects induced by early stopping in
well-specified high-dim... |
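The implicit-bias phenomenon described in the row above (GD iterates on separable logistic regression diverging in norm while stabilizing in direction) is easy to reproduce on toy data. The sketch below uses a hypothetical 2-D dataset and step size of my own choosing, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
# Linearly separable toy data: labels come from a known separating direction.
X = rng.normal(0.0, 1.0, (40, 2))
y = np.sign(X @ np.array([1.0, -1.0]))

def grad(w):
    # Gradient of the mean logistic loss (1/n) sum_i log(1 + exp(-y_i x_i.w)):
    # (1/n) sum_i -y_i * sigmoid(-y_i x_i.w) * x_i
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigmoid(-y_i x_i.w)
    return X.T @ (-y * s) / len(y)

w = np.zeros(2)
norm_early = None
for t in range(1, 20001):
    w -= 0.5 * grad(w)
    if t == 2000:
        norm_early = np.linalg.norm(w)

norm_late = np.linalg.norm(w)          # keeps growing (logarithmically)
separates = bool(np.all(y * (X @ w) > 0))  # all margins become positive
```

The norm grows without bound while the loss goes to zero, which is exactly why early stopping can act as a form of regularization here.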
2502.13285 | Task Shift: From Classification to Regression in Overparameterized
Linear Models | stat.ML cs.LG | Modern machine learning methods have recently demonstrated remarkable
capability to generalize under task shift, where latent knowledge is
transferred to a different, often more difficult, task under a similar data
distribution. We investigate this phenomenon in an overparameterized linear
regression setting where th... |
2502.13286 | BoundPlanner: A convex-set-based approach to bounded manipulator
trajectory planning | cs.RO | Online trajectory planning enables robot manipulators to react quickly to
changing environments or tasks. Many robot trajectory planners exist for known
environments but are often too slow for online computations. Current methods in
online trajectory planning do not find suitable trajectories in challenging
scenarios... |
2502.13287 | Breaking the bonds of generative artificial intelligence by minimizing
the maximum entropy | cs.LG cond-mat.stat-mech cs.IT math.IT | The emergence of generative artificial intelligence (GenAI), comprising large
language models, text-to-image generators, and AI algorithms for medical drug
and material design, has had a transformative impact on society. However, despite
an initial exponential growth surpassing Moore's law, progress is now
plateauing, su... |
2502.13289 | Multiple Distribution Shift -- Aerial (MDS-A): A Dataset for Test-Time
Error Detection and Model Adaptation | cs.LG | Machine learning models assume that training and test samples are drawn from
the same distribution. As such, significant differences between training and
test distributions often lead to degradations in performance. We introduce
Multiple Distribution Shift -- Aerial (MDS-A) -- a collection of inter-related
datasets o... |