| id | title | categories | abstract |
|---|---|---|---|
2501.02219 | Diffusion Model-Based Data Synthesis Aided Federated Semi-Supervised
Learning | cs.LG cs.AI cs.IT math.IT | Federated semi-supervised learning (FSSL) is primarily challenged by two
factors: the scarcity of labeled data across clients and the non-independent
and identically distributed (non-IID) nature of data among clients. In this
paper, we propose a novel approach, diffusion model-based data synthesis aided
FSSL (DDSA-FSSL), which utilizes a diffusion model (DM) to generate synthetic
data, bridging the gap between heterogeneous local data distributions and the
global data distribution. In DDSA-FSSL, clients address the challenge of the
scarcity of labeled data by employing a federated learning-trained classifier
to perform pseudo labeling for unlabeled data. The DM is then collaboratively
trained using both labeled and precision-optimized pseudo-labeled data,
enabling clients to generate synthetic samples for classes that are absent in
their labeled datasets. This process allows clients to generate more
comprehensive synthetic datasets aligned with the global distribution.
Extensive experiments conducted on multiple datasets and varying non-IID
distributions demonstrate the effectiveness of DDSA-FSSL, e.g., it improves
accuracy from 38.46% to 52.14% on the CIFAR-10 dataset with 10% labeled data.
|
2501.02221 | CORD: Generalizable Cooperation via Role Diversity | cs.AI cs.LG cs.MA | Cooperative multi-agent reinforcement learning (MARL) aims to develop agents
that can collaborate effectively. However, most cooperative MARL methods
overfit to their training agents, so the learned policies do not generalize
well to unseen collaborators, which is a critical issue for real-world
deployment. Some
methods attempt to address the generalization problem but require prior
knowledge or predefined policies of new teammates, limiting real-world
applications. To this end, we propose a hierarchical MARL approach to enable
generalizable cooperation via role diversity, namely CORD. CORD's high-level
controller assigns roles to low-level agents by maximizing the role entropy
with constraints. We show that this constrained objective can be decomposed
into a causal-influence term on roles that enables reasonable role assignment,
and a role-heterogeneity term that yields coherent, non-redundant role
clusters. Evaluated on a
variety of cooperative multi-agent tasks, CORD achieves better performance than
baselines, especially in generalization tests. Ablation studies further
demonstrate the efficacy of the constrained objective in generalizable
cooperation.
|
2501.02226 | Knowledge Graph Retrieval-Augmented Generation for LLM-based
Recommendation | cs.IR | Recommender systems have become increasingly vital in our daily lives,
helping to alleviate the problem of information overload across various
user-oriented online services. The emergence of Large Language Models (LLMs)
has yielded remarkable achievements, demonstrating their potential for the
development of next-generation recommender systems. Despite these advancements,
LLM-based recommender systems face inherent limitations stemming from their LLM
backbones, particularly issues of hallucinations and the lack of up-to-date and
domain-specific knowledge. Recently, Retrieval-Augmented Generation (RAG) has
garnered significant attention for addressing these limitations by leveraging
external knowledge sources to enhance the understanding and generation of LLMs.
However, vanilla RAG methods often introduce noise and neglect structural
relationships in knowledge, limiting their effectiveness in LLM-based
recommendations. To address these limitations, we propose to retrieve
high-quality and up-to-date structure information from the knowledge graph (KG)
to augment recommendations. Specifically, our approach develops a
retrieval-augmented framework, termed K-RagRec, that facilitates the
recommendation generation process by incorporating structure information from
the external KG. Extensive experiments have been conducted to demonstrate the
effectiveness of our proposed method.
|
2501.02227 | tCURLoRA: Tensor CUR Decomposition Based Low-Rank Parameter Adaptation
and Its Application in Medical Image Segmentation | eess.IV cs.CV | Transfer learning, by leveraging knowledge from pre-trained models, has
significantly enhanced the performance of target tasks. However, as deep neural
networks scale up, full fine-tuning introduces substantial computational and
storage challenges in resource-constrained environments, limiting its
widespread adoption. To address this, parameter-efficient fine-tuning (PEFT)
methods have been developed to reduce computational complexity and storage
requirements by minimizing the number of updated parameters. While matrix
decomposition-based PEFT methods, such as LoRA, show promise, they struggle to
fully capture the high-dimensional structural characteristics of model weights.
In contrast, high-dimensional tensors offer a more natural representation of
neural network weights, allowing for a more comprehensive capture of
higher-order features and multi-dimensional interactions. In this paper, we
propose tCURLoRA, a novel fine-tuning method based on tensor CUR decomposition.
By concatenating pre-trained weight matrices into a three-dimensional tensor
and applying tensor CUR decomposition, we update only the lower-order tensor
components during fine-tuning, effectively reducing computational and storage
overhead. Experimental results demonstrate that tCURLoRA outperforms existing
PEFT methods in medical image segmentation tasks.
|
2501.02232 | Distillation-Enhanced Physical Adversarial Attacks | cs.CV | The study of physical adversarial patches is crucial for identifying
vulnerabilities in AI-based recognition systems and developing more robust deep
learning models. While recent research has focused on improving patch
stealthiness for greater practical applicability, achieving an effective
balance between stealth and attack performance remains a significant challenge.
To address this issue, we propose a novel physical adversarial attack method
that leverages knowledge distillation. Specifically, we first define a stealthy
color space tailored to the target environment to ensure smooth blending. Then,
we optimize an adversarial patch in an unconstrained color space, which serves
as the 'teacher' patch. Finally, we use an adversarial knowledge distillation
module to transfer the teacher patch's knowledge to the 'student' patch,
guiding the optimization of the stealthy patch. Experimental results show that
our approach improves attack performance by 20%, while maintaining stealth,
highlighting its practical value.
|
2501.02235 | Survey on Question Answering over Visually Rich Documents: Methods,
Challenges, and Trends | cs.CL | Using Large Language Models (LLMs) for Visually-rich Document Understanding
(VrDU) has significantly improved performance on tasks requiring both
comprehension and generation, such as question answering, albeit introducing
new challenges. This survey explains how VrDU models enhanced by LLMs function,
covering methods for integrating VrD features into LLMs and highlighting key
challenges.
|
2501.02237 | Financial Named Entity Recognition: How Far Can LLM Go? | cs.CL cs.AI | The surge of large language models (LLMs) has revolutionized the extraction
and analysis of crucial information from a growing volume of financial
statements, announcements, and business news. Recognizing named entities to
construct structured data poses a significant challenge in analyzing financial
documents and is a foundational task for intelligent financial analytics.
However, how effective these generic LLMs are, and how their performance
varies under different prompts, is not yet well understood. To fill this gap, we
present a systematic evaluation of state-of-the-art LLMs and prompting methods
in the financial Named Entity Recognition (NER) problem. Specifically, our
experimental results highlight their strengths and limitations, identify five
representative failure types, and provide insights into their potential and
challenges for domain-specific tasks.
|
2501.02241 | Interpretable Load Forecasting via Representation Learning of
Geo-distributed Meteorological Factors | cs.LG cs.AI | Meteorological factors (MF) are crucial in day-ahead load forecasting as they
significantly influence the electricity consumption behaviors of consumers.
Numerous studies have incorporated MF into the load forecasting model to
achieve higher accuracy. Selecting MF from one representative location or the
averaged MF as the inputs of the forecasting model is a common practice.
However, the difference in MF collected in various locations within a region
may be significant, which poses a challenge in selecting the appropriate MF
from numerous locations. A representation learning framework is proposed to
extract geo-distributed MF while considering their spatial relationships. In
addition, this paper employs the Shapley value in the graph-based model to
reveal connections between MF collected in different locations and loads. To
reduce the computational complexity of calculating the Shapley value, an
acceleration method is adopted based on Monte Carlo sampling and weighted
linear regression. Experiments on two real-world datasets demonstrate that the
proposed method improves the day-ahead forecasting accuracy, especially in
extreme scenarios such as the "accumulation temperature effect" in summer and
"sudden temperature change" in winter. We also find a significant correlation
between the importance of MF in different locations and the corresponding
area's GDP and mainstay industry.
|
2501.02242 | Encircling General 2-D Boundaries by Mobile Robots with Collision
Avoidance: A Vector Field Guided Approach | cs.RO cs.SY eess.SY | The ability to automatically encircle boundaries with mobile robots is
crucial for tasks such as border tracking and object enclosing. Previous
research has primarily focused on regular boundaries, often assuming that their
geometric equations are known in advance, which is not often the case in
practice. In this paper, we investigate a more general case and propose an
algorithm that addresses geometric irregularities of boundaries without
requiring prior knowledge of their analytical expressions. To achieve this, we
develop a Fourier-based curve fitting method for boundary approximation using
sampled points, enabling parametric characterization of general 2-D boundaries.
This approach allows star-shaped boundaries to be fitted into polar-angle-based
parametric curves, while boundaries of other shapes are handled through
decomposition. Then, we design a vector field (VF) to achieve the encirclement
of the parameterized boundary, wherein a polar radius error is introduced to
measure the robot's "distance" to the boundary. The controller is finally
synthesized using a control barrier function and quadratic programming to
mediate some potentially conflicting specifications: boundary encirclement,
obstacle avoidance, and limited actuation. In this manner, the VF-guided
reference control not only guides the boundary encircling action, but can also
be minimally modified to satisfy obstacle avoidance and input saturation
constraints. Simulations and experiments are presented to verify the
performance of our new method, which can be applied to mobile robots to perform
practical tasks such as cleaning chemical spills and environment monitoring.
|
2501.02260 | MagicFace: High-Fidelity Facial Expression Editing with Action-Unit
Control | cs.CV | We address the problem of facial expression editing by controlling the
relative variation of facial action units (AU) of the same person. This
enables us to edit this specific person's expression in a fine-grained,
continuous and interpretable manner, while preserving their identity, pose,
background and detailed facial attributes. Key to our model, which we dub
MagicFace, is a diffusion model conditioned on AU variations and an ID encoder
to preserve facial details of high consistency. Specifically, to preserve the
facial details with the input identity, we leverage the power of pretrained
Stable-Diffusion models and design an ID encoder to merge appearance features
through self-attention. To keep background and pose consistency, we introduce
an efficient Attribute Controller by explicitly informing the model of current
background and pose of the target. By injecting AU variations into a denoising
UNet, our model can animate arbitrary identities with various AU combinations,
yielding superior results in high-fidelity expression editing compared to other
facial expression editing works. Code is publicly available at
https://github.com/weimengting/MagicFace.
|
2501.02263 | The Convergence of Blockchain Technology and Islamic Economics:
Decentralized Solutions for Shariah-Compliant Finance | cs.CR cs.CE cs.ET | This paper provides a brief overview of the ongoing financial revolution,
which extends beyond the emergence of cryptocurrencies as a digital medium of
exchange. At its core, this revolution is driven by a paradigm shift rooted in
the technological advancements of blockchain and the foundational principles of
Islamic economics. Together, these elements offer a transformative framework
that challenges traditional financial systems, emphasizing transparency,
equity, and decentralized governance. The paper highlights the implications of
this shift and its potential to reshape the global economic landscape.
|
2501.02264 | Unsupervised Class Generation to Expand Semantic Segmentation Datasets | cs.CV | Semantic segmentation is a computer vision task where classification is
performed at a pixel level. Due to this, the process of labeling images for
semantic segmentation is time-consuming and expensive. To mitigate this cost
there has been a surge in the use of synthetically generated data -- usually
created using simulators or videogames -- which, in combination with domain
adaptation methods, can effectively learn how to segment real data. Still,
these datasets have a particular limitation: due to their closed-set nature, it
is not possible to include novel classes without modifying the tool used to
generate them, which is often not public. Concurrently, generative models have
made remarkable progress, particularly with the introduction of diffusion
models, enabling the creation of high-quality images from text prompts without
additional supervision.
In this work, we propose an unsupervised pipeline that leverages Stable
Diffusion and the Segment Anything Model to generate class examples with an
associated segmentation mask, and a method to integrate generated cutouts for
novel classes in semantic segmentation datasets, all with minimal user input.
Our approach aims to improve the performance of unsupervised domain adaptation
methods by introducing novel samples into the training data without
modifications to the underlying algorithms. With our methods, we show how
models can not only effectively learn how to segment novel classes, with an
average performance of 51% IoU, but also reduce errors for other, already
existing classes, reaching a higher performance level overall.
|
2501.02266 | LLMzSzŁ: a comprehensive LLM benchmark for Polish | cs.CL cs.AI | This article introduces the first comprehensive benchmark for the Polish
language at this scale: LLMzSzŁ (LLMs Behind the School Desk). It is based
on a coherent collection of Polish national exams, including both academic and
professional tests extracted from the archives of the Polish Central
Examination Board. It covers 4 types of exams, coming from 154 domains.
Altogether, it consists of almost 19k closed-ended questions. We investigate
the performance of open-source multilingual, English, and Polish LLMs to verify
LLMs' abilities to transfer knowledge between languages. Also, the correlation
between LLMs and humans at model accuracy and exam pass rate levels is
examined. We show that multilingual LLMs can obtain superior results over
monolingual ones; however, monolingual models may be beneficial when model size
matters. Our analysis highlights the potential of LLMs in assisting with exam
validation, particularly in identifying anomalies or errors in examination
tasks.
|
2501.02267 | Towards a constructive framework for control theory | math.OC cs.AI cs.SY eess.SY | This work presents a framework for control theory based on constructive
analysis to account for discrepancy between mathematical results and their
implementation in a computer, also referred to as computational uncertainty. In
control engineering, the latter is usually either neglected or considered
submerged into some other type of uncertainty, such as system noise, and
addressed within robust control. However, even robust control methods may be
compromised when the mathematical objects involved in the respective algorithms
fail to exist in exact form and subsequently fail to satisfy the required
properties. For instance, in general stabilization using a control Lyapunov
function, computational uncertainty may distort stability certificates or even
destabilize the system despite robustness of the stabilization routine with
regards to system, actuator and measurement noise. In fact, battling numerical
problems in practical implementation of controllers is common among control
engineers. Such observations indicate that computational uncertainty should
indeed be addressed explicitly in controller synthesis and system analysis. The
major contribution here is a fairly general framework for proof techniques in
analysis and synthesis of control systems based on constructive analysis which
explicitly states that every computation be doable only up to a finite
precision thus accounting for computational uncertainty. A series of previous
works is overviewed, including constructive system stability and stabilization,
approximate optimal controls, eigenvalue problems, Caratheodory trajectories,
measurable selectors. Additionally, a new constructive version of Danskin's
theorem, which is crucial in adversarial defense, is presented.
|
2501.02268 | What Kind of Visual Tokens Do We Need? Training-free Visual Token
Pruning for Multi-modal Large Language Models from the Perspective of Graph | cs.CV cs.AI | Recent Multimodal Large Language Models (MLLMs) often use a large number of
visual tokens to compensate for their visual shortcomings, leading to excessive
computation and obvious visual redundancy. In this paper, we investigate what
kind of visual tokens are needed for MLLMs, and reveal that both foreground and
background tokens are critical for MLLMs given the varying difficulties of
examples. Based on this observation, we propose a graph-based method towards
training-free visual token pruning, termed G-Prune. In particular, G-Prune
regards visual tokens as nodes, and constructs their connections based on their
semantic similarities. Afterwards, the information flow is propagated via
weighted links, and the most important tokens after iterations are kept for
MLLMs, which can be foreground or background. To validate G-Prune, we apply it
to a recent MLLM called LLaVA-NeXT, and conduct extensive experiments on a set
of benchmarks. The experimental results show that G-Prune can greatly reduce
computation overhead while retaining high performance on both coarse- and
fine-grained tasks. For instance, G-Prune can reduce the FLOPs of LLaVA-NeXT by
63.57% on VQA2.0 and TextVQA with only 0.95% and 2.34% accuracy drops,
respectively.
|
2501.02269 | TDM: Temporally-Consistent Diffusion Model for All-in-One Real-World
Video Restoration | cs.CV | In this paper, we propose the first diffusion-based all-in-one video
restoration method that utilizes the power of a pre-trained Stable Diffusion
and a fine-tuned ControlNet. Our method can restore various types of video
degradation with a single unified model, overcoming the limitation of standard
methods that require specific models for each restoration task. Our
contributions include an efficient training strategy with Task Prompt Guidance
(TPG) for diverse restoration tasks, an inference strategy that combines
Denoising Diffusion Implicit Models~(DDIM) inversion with a novel Sliding
Window Cross-Frame Attention (SW-CFA) mechanism for enhanced content
preservation and temporal consistency, and a scalable pipeline that makes our
method all-in-one to adapt to different video restoration tasks. Through
extensive experiments on five video restoration tasks, we demonstrate the
superiority of our method in generalization capability to real-world videos and
temporal consistency preservation over existing state-of-the-art methods. Our
method advances the video restoration task by providing a unified solution that
enhances video quality across multiple applications.
|
2501.02270 | Efficient Video-Based ALPR System Using YOLO and Visual Rhythm | cs.CV cs.LG eess.IV | Automatic License Plate Recognition (ALPR) involves extracting vehicle
license plate information from an image or a video capture. These systems have
gained popularity due to the wide availability of low-cost surveillance cameras
and advances in Deep Learning. Typically, video-based ALPR systems rely on
multiple frames to detect the vehicle and recognize the license plates.
Therefore, we propose a system capable of extracting exactly one frame per
vehicle and recognizing its license plate characters from this single image
using an Optical Character Recognition (OCR) model. Early experiments show that
this methodology is viable.
|
2501.02271 | Securing Integrated Sensing and Communication Against a Mobile
Adversary: A Stackelberg Game with Deep Reinforcement Learning | cs.IT eess.SP math.IT | In this paper, we study a secure integrated sensing and communication (ISAC)
system employing a full-duplex base station with sensing capabilities against a
mobile proactive adversarial target, a malicious unmanned aerial
vehicle (M-UAV). We develop a game-theoretic model to enhance communication
security, radar sensing accuracy, and power efficiency. The interaction between
the legitimate network and the mobile adversary is formulated as a
non-cooperative Stackelberg game (NSG), where the M-UAV acts as the leader and
strategically adjusts its trajectory to improve its eavesdropping ability while
conserving power and avoiding obstacles. In response, the legitimate network,
acting as the follower, dynamically allocates resources to minimize network
power usage while ensuring required secrecy rates and sensing performance. To
address this challenging problem, we propose a low-complexity successive convex
approximation (SCA) method for network resource optimization combined with a
deep reinforcement learning (DRL) algorithm for adaptive M-UAV trajectory
planning through sequential interactions and learning. Simulation results
demonstrate the efficacy of the proposed method in addressing security
challenges of dynamic ISAC systems in 6G, i.e., achieving a Stackelberg
equilibrium with robust performance while mitigating the adversary's ability to
intercept network signals.
|
2501.02273 | Digital Deep Joint Source-Channel Coding with Blind Training for
Adaptive Modulation and Power Control | eess.SP cs.IT math.IT | This paper proposes a novel digital deep joint source-channel coding
(DeepJSCC) framework that achieves robust performance across diverse
communication environments without requiring extensive retraining and prior
knowledge of communication environments. Traditional digital DeepJSCC
techniques often face challenges in adapting to various communication
environments, as they require significant training overhead and large amounts
of communication data to develop either multiple specialized models or a single
generalized model, in pre-defined communication environments. To address this
challenge, in our framework, an error-adaptive blind training strategy is
devised, which eliminates the need for prior knowledge of communication
environments. This is achieved by modeling the relationship between the
encoder's output and the decoder's input using binary symmetric channels, and
optimizing bit-flip probabilities by treating them as trainable parameters. In
our framework, a training-aware communication strategy is also presented, which
dynamically selects the optimal encoder-decoder pair and transmission
parameters based on current channel conditions. In particular, in this
strategy, an adaptive power and modulation control method is developed to
minimize the total transmission power, while maintaining high task performance.
Simulation results demonstrate that our framework outperforms existing DeepJSCC
methods, achieving higher peak signal-to-noise ratio, lower power consumption,
and requiring significantly fewer encoder-decoder pairs for adaptation.
|
2501.02278 | An experimental comparison of tree-data structures for connectivity
queries on fully-dynamic undirected graphs (Extended Version) | cs.DB | During the past decades significant efforts have been made to propose data
structures for answering connectivity queries on fully dynamic graphs, i.e.,
graphs with frequent insertions and deletions of edges. However, a
comprehensive understanding of how these data structures perform in practice is
missing, since not all of them have been implemented, let alone evaluated
experimentally. We provide reference implementations for the proposed data
structures and experimentally evaluate them on a wide range of graphs. Our
findings show that the current solutions are not ready to be deployed in
systems as is, as every data structure has critical weaknesses when used in
practice. Key limitations that must be overcome are the space and time overhead
incurred by balanced data structures, the degeneration of the runtime of
space-efficient data structures in worst case scenarios, and the maintenance
costs for balanced data structures. We detail our findings in the experimental
evaluation and provide recommendations for implementing robust solutions for
answering connectivity queries on dynamic graphs.
|
2501.02279 | Stochastic Generalized Dynamic Games with Coupled Chance Constraints | eess.SY cs.SY | Designing multi-agent systems with safety constraints and uncertain dynamics
is a challenging problem. This paper studies a stochastic dynamic
non-cooperative game with coupling safety chance constraints. The uncertainty
is assumed to satisfy a concentration of measure property. Firstly, due to the
non-convexity of chance constraints, a convex under-approximation of chance
constraints is given using constraints on the expectation. Then, the conditions
for the existence of the stochastic generalized Nash equilibrium (SGNE) of the
under-approximated game are investigated, and the relation between the
$\varepsilon$-SGNE of the original game and the under-approximated one is
derived. A sampling-based algorithm is proposed for the SGNE seeking of the
under-approximated game that does not require knowing the distribution of the
uncertainty nor the analytical computation of expectations. Finally, under some
assumptions on the game's pseudo-gradient mapping, the almost sure convergence
of the algorithm to SGNE is proven. A numerical study is carried out on
demand-side management in microgrids with a shared battery to demonstrate the
applicability of the proposed scheme.
|
2501.02280 | On Symmetries in Analytic Input-Output Systems | eess.SY cs.SY | There are many notions of symmetry for state space models. They play a role
in understanding when systems are time reversible, provide a system theoretic
interpretation of thermodynamics, and have applications in certain
stabilization and optimal control problems. The earliest form of symmetry for
analytic input-output systems is due to Fliess who introduced systems described
by an exchangeable generating series. In this case, one is able to write the
output as a memoryless analytic function of the integral of each input. The
first goal of this paper is to describe two new types of symmetry for such
Chen--Fliess input-output systems, namely, coefficient reversible symmetry and
palindromic symmetry. Each concept is then related to the notion of an
exchangeable series. The second goal of the paper is to provide an in-depth
analysis of Chen--Fliess input-output systems whose generating series are
linear time-varying, palindromic, and have generating series coefficients
growing at a maximal rate while ensuring some type of convergence. It is shown
that such series have an infinite Hankel rank and Lie rank, have a certain
infinite dimensional state space realization, and a description of their
relative degree and zero dynamics is given.
|
2501.02285 | Hyperbolic Contrastive Learning for Hierarchical 3D Point Cloud
Embedding | cs.CV cs.AI | Hyperbolic spaces allow for more efficient modeling of complex, hierarchical
structures, which is particularly beneficial in tasks involving multi-modal
data. Although hyperbolic geometries have been proven effective for
language-image pre-training, their capabilities to unify language, image, and
3D Point Cloud modalities are under-explored. We extend the 3D Point Cloud
modality in hyperbolic multi-modal contrastive pre-training. Additionally, we
explore the entailment, modality gap, and alignment regularizers for learning
hierarchical 3D embeddings and facilitating the transfer of knowledge from both
Text and Image modalities. These regularizers enable the learning of
intra-modal hierarchy within each modality and inter-modal hierarchy across
text, 2D images, and 3D Point Clouds. Experimental results demonstrate that our
proposed training strategy yields an outstanding 3D Point Cloud encoder, and
the obtained 3D Point Cloud hierarchical embeddings significantly improve
performance on various downstream tasks.
|
2501.02287 | Deep Learning-Driven Segmentation of Ischemic Stroke Lesions Using
Multi-Channel MRI | eess.IV cs.AI cs.CV | Ischemic stroke, caused by cerebral vessel occlusion, presents substantial
challenges in medical imaging due to the variability and subtlety of stroke
lesions. Magnetic Resonance Imaging (MRI) plays a crucial role in diagnosing
and managing ischemic stroke, yet existing segmentation techniques often fail
to accurately delineate lesions. This study introduces a novel deep
learning-based method for segmenting ischemic stroke lesions using
multi-channel MRI modalities, including Diffusion Weighted Imaging (DWI),
Apparent Diffusion Coefficient (ADC), and enhanced Diffusion Weighted Imaging
(eDWI). The proposed architecture integrates DenseNet121 as the encoder with
Self-Organized Operational Neural Networks (SelfONN) in the decoder, enhanced
by Channel and Space Compound Attention (CSCA) and Double
Squeeze-and-Excitation (DSE) blocks. Additionally, a custom loss function
combining Dice Loss and Jaccard Loss with weighted averages is introduced to
improve model performance. Trained and evaluated on the ISLES 2022 dataset, the
model achieved Dice Similarity Coefficients (DSC) of 83.88% using DWI alone,
85.86% with DWI and ADC, and 87.49% with the integration of DWI, ADC, and eDWI.
This approach not only outperforms existing methods but also addresses key
limitations in current segmentation practices. These advancements significantly
enhance diagnostic precision and treatment planning for ischemic stroke,
providing valuable support for clinical decision-making.
|
2501.02288 | Making the Peers' Subjective Well-being Visible Impairs
Cooperator-centered Experimental Social Networks | cs.SI physics.soc-ph | Past experiments show that reputation or the knowledge of peers' past
cooperation can enhance cooperation in human social networks. On the other
hand, the knowledge of peers' wealth undermines cooperativeness, and that of
peers' interconnectedness and network structure does not affect it. However, it
is unknown if making peers' subjective well-being (SWB) available or visible in
social networks may enhance or undermine cooperation. Therefore, we implemented
online network experiments (N = 662 in 50 networked groups with 15 rounds of
interactions), in which study participants cooperated with or defected against
connected peers through Public Goods Game, made and cut social ties with
others, and rated their SWB. We manipulated the visibility of connected peers'
SWB (25 visible vs. 25 invisible SWB networked groups) while keeping the
connected peers' reputation and in-game wealth visible. Results show that
making the peers' SWB visible did not alter overall cooperativeness, wealth,
inter-connectedness, or SWB. In contrast, the visible SWB networked groups
exhibited a higher number of communities and lower transitivity (the proportion
of the cases where a peer of a peer is also a peer) than the invisible SWB
networked groups. These phenomena are explained by an altered decision-making
pattern in the visible SWB networks: cooperators were less likely to connect
with cooperators and more likely to connect with defectors, and consequently,
cooperators could not maintain their popularity or stay in the center of the
networks.
|
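The transitivity measure defined parenthetically in the record above (the proportion of cases where a peer of a peer is also a peer) can be computed directly from an adjacency-set representation; a small sketch with hypothetical inputs:

```python
from itertools import combinations

def transitivity(adj):
    """Global transitivity: closed triplets / all connected triplets.

    `adj` maps each node to the set of its neighbors (undirected graph).
    """
    closed = total = 0
    for node, nbrs in adj.items():
        for u, v in combinations(nbrs, 2):  # each pair of the node's peers
            total += 1
            if v in adj[u]:                 # peer of a peer is also a peer
                closed += 1
    return closed / total if total else 0.0
```

A triangle scores 1.0; a simple path (no closed triples) scores 0.0.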
2501.02295 | Explicit vs. Implicit: Investigating Social Bias in Large Language
Models through Self-Reflection | cs.CL | Large Language Models (LLMs) have been shown to exhibit various biases and
stereotypes in their generated content. While extensive research has
investigated bias in LLMs, prior work has predominantly focused on explicit
bias, leaving the more nuanced implicit biases largely unexplored. This paper
presents a systematic framework grounded in social psychology theories to
investigate and compare explicit and implicit biases in LLMs. We propose a
novel "self-reflection" based evaluation framework that operates in two phases:
first measuring implicit bias through simulated psychological assessment
methods, then evaluating explicit bias by prompting LLMs to analyze their own
generated content. Through extensive experiments on state-of-the-art LLMs
across multiple social dimensions, we demonstrate that LLMs exhibit a
substantial inconsistency between explicit and implicit biases, where explicit
biases manifest as mild stereotypes while implicit biases show strong
stereotypes. Furthermore, we investigate the underlying factors contributing to
this explicit-implicit bias inconsistency. Our experiments examine the effects
of training data scale, model parameters, and alignment techniques. Results
indicate that while explicit bias diminishes with increased training data and
model size, implicit bias exhibits a contrasting upward trend. Notably,
contemporary alignment methods (e.g., RLHF, DPO) effectively suppress explicit
bias but show limited efficacy in mitigating implicit bias. These findings
suggest that while scaling up models and alignment training can address
explicit bias, the challenge of implicit bias requires novel approaches beyond
current methodologies.
|
2501.02298 | Beyond Log-Concavity and Score Regularity: Improved Convergence Bounds
for Score-Based Generative Models in W2-distance | stat.ML cs.LG | Score-based Generative Models (SGMs) aim to sample from a target distribution
by learning score functions using samples perturbed by Gaussian noise. Existing
convergence bounds for SGMs in the $\mathcal{W}_2$-distance rely on stringent
assumptions about the data distribution. In this work, we present a novel
framework for analyzing $\mathcal{W}_2$-convergence in SGMs, significantly
relaxing traditional assumptions such as log-concavity and score regularity.
Leveraging the regularization properties of the Ornstein-Uhlenbeck (OU)
process, we show that weak log-concavity of the data distribution evolves into
log-concavity over time. This transition is rigorously quantified through a
PDE-based analysis of the Hamilton-Jacobi-Bellman equation governing the
log-density of the forward process. Moreover, we establish that the drift of
the time-reversed OU process alternates between contractive and non-contractive
regimes, reflecting the dynamics of concavity. Our approach circumvents the
need for stringent regularity conditions on the score function and its
estimators, relying instead on milder, more practical assumptions. We
demonstrate the wide applicability of this framework through explicit
computations on Gaussian mixture models, illustrating its versatility and
potential for broader classes of data distributions.
|
2501.02299 | The parenthood effect in urban mobility | physics.soc-ph cs.IT math.IT physics.data-an | The modelling of human mobility is vital for the understanding of the
complexity of urban dynamics and guiding effective interventions to improve
quality of life. Traditional modelling approaches focus on `average citizens,'
which overlook the multitude of experiences from distinct sociodemographic
groups. Recent studies have unveiled significant variations in mobility
patterns related to gender and socioeconomic status, yet the impact of
parenthood remains under-explored. Parenthood brings profound changes to daily
routines, influenced by factors such as increased caregiving responsibilities,
altered work-life balance, and the need for family-friendly environments.
Parents often prioritise considerations such as cost of living, social
wellbeing, environmental quality, and safety. Quantifying how `friendly' a city
is becomes increasingly important for parents, especially as rising remote work
opportunities, in turn, influence decisions about where to settle. This work
investigates whether these considerations lead to
distinct mobility patterns between parents and non-parents, also accounting for
the impact of partnership. Using extensive census data across American cities,
we analyse how parenthood and partnership reshape their urban experiences. Our
findings indicate that cities can indeed be classified by their level of
friendliness towards parents and partners. For example, Dallas and Nashville
can be more suited for single individuals, New York and Chicago can be more
accommodating to parents, while Washington and Baltimore favour married people.
These insights contribute to the growing body of research advocating for more
nuanced and equitable urban planning. By recognising the diverse needs of
different demographic groups, particularly parents, our study underscores the
importance of tailored urban design strategies over universal solutions.
|
2501.02300 | Diabetic Retinopathy Detection Using CNN with Residual Block with DCGAN | eess.IV cs.CV cs.LG | Diabetic Retinopathy (DR) is a major cause of blindness worldwide, caused by
damage to the blood vessels in the retina due to diabetes. Early detection and
classification of DR are crucial for timely intervention and preventing vision
loss. This work proposes an automated system for DR detection using
Convolutional Neural Networks (CNNs) with a residual block architecture, which
enhances feature extraction and model performance. To further improve the
model's robustness, we incorporate advanced data augmentation techniques,
specifically leveraging a Deep Convolutional Generative Adversarial Network
(DCGAN) for generating diverse retinal images. This approach increases the
variability of training data, making the model more generalizable and capable
of handling real-world variations in retinal images. The system is designed to
classify retinal images into five distinct categories, from No DR to
Proliferative DR, providing an efficient and scalable solution for early
diagnosis and monitoring of DR progression. The proposed model aims to support
healthcare professionals in large-scale DR screening, especially in
resource-constrained settings.
|
2501.02303 | Design and Benchmarking of A Multi-Modality Sensor for Robotic
Manipulation with GAN-Based Cross-Modality Interpretation | cs.RO eess.SP | In this paper, we present the design and benchmark of an innovative sensor,
ViTacTip, which fulfills the demand for advanced multi-modal sensing in a
compact design. A notable feature of ViTacTip is its transparent skin, which
incorporates a `see-through-skin' mechanism. This mechanism aims at capturing
detailed object features upon contact, significantly improving both
vision-based and proximity perception capabilities. In parallel, the biomimetic
tips embedded in the sensor's skin are designed to amplify contact details,
thus substantially augmenting tactile and derived force perception abilities.
To demonstrate the multi-modal capabilities of ViTacTip, we developed a
multi-task learning model that enables simultaneous recognition of hardness,
material, and textures. To assess the functionality and validate the
versatility of ViTacTip, we conducted extensive benchmarking experiments,
including object recognition, contact point detection, pose regression, and
grating identification. To facilitate seamless switching between various
sensing modalities, we employed a Generative Adversarial Network (GAN)-based
approach. This method enhances the applicability of the ViTacTip sensor across
diverse environments by enabling cross-modality interpretation.
|
2501.02309 | Multi-Satellite Beam Hopping and Power Allocation Using Deep
Reinforcement Learning | eess.SY cs.SY | In non-geostationary orbit (NGSO) satellite communication systems,
effectively utilizing beam hopping (BH) technology is crucial for addressing
uneven traffic demands. However, optimizing beam scheduling and resource
allocation in multi-NGSO BH scenarios remains a significant challenge. This
paper proposes a multi-NGSO BH algorithm based on deep reinforcement learning
(DRL) to optimize beam illumination patterns and power allocation. By
leveraging three degrees of freedom (i.e., time, space, and power), the
algorithm aims to optimize the long-term throughput and the long-term
cumulative average delay (LTCAD). The solution is based on proximal policy
optimization (PPO) with a hybrid action space combining discrete and continuous
actions. Using two policy networks with a shared base layer, the proposed
algorithm jointly optimizes beam scheduling and power allocation. One network
selects beam illumination patterns in the discrete action space, while the
other manages power allocation in the continuous space. Simulation results show
that the proposed algorithm significantly reduces LTCAD while maintaining high
throughput in time-varying traffic scenarios. Compared to the four benchmark
methods, it improves network throughput by up to $8.9\%$ and reduces LTCAD by
up to $69.2\%$.
|
2501.02311 | Analysis of Fluorescence Telescope Data Using Machine Learning Methods | astro-ph.IM cs.LG | Fluorescence telescopes are among the key instruments used for studying
ultra-high energy cosmic rays in all modern experiments. We use model data for
the small ground-based telescope EUSO-TA to test machine learning and neural
network methods for recognizing tracks of extensive air showers in its data
and for reconstructing the energy and arrival directions of primary particles.
We also comment on the opportunities to use this approach for other
fluorescence telescopes and outline possible ways of improving the performance
of the suggested methods.
|
2501.02313 | DiffGraph: Heterogeneous Graph Diffusion Model | cs.LG cs.AI cs.IR | Recent advances in Graph Neural Networks (GNNs) have revolutionized
graph-structured data modeling, yet traditional GNNs struggle with complex
heterogeneous structures prevalent in real-world scenarios. Despite progress in
handling heterogeneous interactions, two fundamental challenges persist: noisy
data significantly compromising embedding quality and learning performance, and
existing methods' inability to capture intricate semantic transitions among
heterogeneous relations, which impacts downstream predictions. To address these
fundamental issues, we present the Heterogeneous Graph Diffusion Model
(DiffGraph), a pioneering framework that introduces an innovative cross-view
denoising strategy. This advanced approach transforms auxiliary heterogeneous
data into target semantic spaces, enabling precise distillation of
task-relevant information. At its core, DiffGraph features a sophisticated
latent heterogeneous graph diffusion mechanism, implementing a novel forward
and backward diffusion process for superior noise management. This methodology
achieves simultaneous heterogeneous graph denoising and cross-type transition,
while significantly simplifying graph generation through its latent-space
diffusion capabilities. Through rigorous experimental validation on both public
and industrial datasets, we demonstrate that DiffGraph consistently surpasses
existing methods in link prediction and node classification tasks, establishing
new benchmarks for robustness and efficiency in heterogeneous graph processing.
The model implementation is publicly available at:
https://github.com/HKUDS/DiffGraph.
|
2501.02314 | RadarNeXt: Real-Time and Reliable 3D Object Detector Based On 4D mmWave
Imaging Radar | cs.CV | 3D object detection is crucial for Autonomous Driving (AD) and Advanced
Driver Assistance Systems (ADAS). However, most 3D detectors prioritize
detection accuracy, often overlooking network inference speed in practical
applications. In this paper, we propose RadarNeXt, a real-time and reliable 3D
object detector based on the 4D mmWave radar point clouds. It leverages the
re-parameterizable neural networks to capture multi-scale features, reduce memory
cost and accelerate the inference. Moreover, to highlight the irregular
foreground features of radar point clouds and suppress background clutter, we
propose a Multi-path Deformable Foreground Enhancement Network (MDFEN),
ensuring detection accuracy while minimizing sacrifices in speed and avoiding
an excessive number of parameters. Experimental results on View-of-Delft and
TJ4DRadSet datasets validate the exceptional performance and efficiency of
RadarNeXt, achieving 50.48 and 32.30 mAPs with the variant using our proposed
MDFEN. Notably, our RadarNeXt variants achieve inference speeds of over 67.10
FPS on the RTX A4000 GPU and 28.40 FPS on the Jetson AGX Orin. This research
demonstrates that RadarNeXt brings a novel and effective paradigm for 3D
perception based on 4D mmWave radar.
|
2501.02325 | Revisiting Compactness for District Plans | physics.soc-ph cs.CV | Modern sampling methods create ensembles of district maps that score well on
discrete compactness scores, whereas the Polsby-Popper and other shape-based
scores remain highly relevant for building fair maps and litigating unfair
ones. The aim of this paper is twofold. First, we introduce population-weighted
versions of shape-based scores and show a precise sense in which this
interpolates between shape-based and discrete scores. Second, we introduce a
modification of the ReCom sampling method that produces ensembles of maps with
improved shape-based compactness scores.
|
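As a point of reference for the shape-based scores discussed in the record above, the classic Polsby-Popper score is $4\pi A / P^2$; a minimal sketch (the paper's population-weighted variants are not reproduced here):

```python
import math

def polsby_popper(area, perimeter):
    """Classic shape-based compactness: 4*pi*A / P^2, in (0, 1].

    Equals 1 for a circle and shrinks as the boundary grows
    more contorted relative to the enclosed area.
    """
    return 4 * math.pi * area / perimeter ** 2
```

For example, a circle of radius 1 (area pi, perimeter 2*pi) scores exactly 1, while a unit square scores pi/4.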
2501.02330 | SR-Reward: Taking The Path More Traveled | cs.LG cs.AI | In this paper, we propose a novel method for learning reward functions
directly from offline demonstrations. Unlike traditional inverse reinforcement
learning (IRL), our approach decouples the reward function from the learner's
policy, eliminating the adversarial interaction typically required between the
two. This results in a more stable and efficient training process. Our reward
function, called \textit{SR-Reward}, leverages successor representation (SR) to
encode a state based on expected future states' visitation under the
demonstration policy and transition dynamics. By utilizing the Bellman
equation, SR-Reward can be learned concurrently with most reinforcement
learning (RL) algorithms without altering the existing training pipeline. We
also introduce a negative sampling strategy to mitigate overestimation errors
by reducing rewards for out-of-distribution data, thereby enhancing robustness.
This strategy inherently introduces a conservative bias into RL algorithms that
employ the learned reward. We evaluate our method on the D4RL benchmark,
achieving competitive results compared to offline RL algorithms with access to
true rewards and imitation learning (IL) techniques like behavioral cloning.
Moreover, our ablation studies on data size and quality reveal the advantages
and limitations of SR-Reward as a proxy for true rewards.
|
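The successor representation leveraged by SR-Reward in the record above admits a closed form via the Bellman identity, $M = I + \gamma P M$, i.e. $M = (I - \gamma P)^{-1}$ for a fixed transition matrix $P$; a small NumPy sketch of this identity in the tabular setting (the example matrix and discount are illustrative, not the paper's setup):

```python
import numpy as np

def successor_representation(P, gamma=0.9):
    """Closed-form tabular successor representation M = (I - gamma*P)^(-1).

    P[s, s'] is the state-transition probability under the demonstration
    policy; row s of M encodes the discounted expected future visitation
    of every state starting from s.
    """
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)
```

Each row of M sums to 1/(1 - gamma), and M satisfies the Bellman identity M = I + gamma*P@M, which is what lets SR-Reward be learned alongside standard RL updates.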
2501.02333 | On The Causal Network Of Face-selective Regions In Human Brain During
Movie Watching | q-bio.NC cs.LG eess.IV | Understanding the causal interactions in simple brain tasks, such as face
detection, remains a challenging and ambiguous process for researchers. In this
study, we address this issue by employing a novel causal discovery method --
Directed Acyclic Graphs via M-matrices for Acyclicity (DAGMA) -- to investigate
the causal structure of the brain's face-selective network and gain deeper
insights into its mechanism. Using natural movie stimuli, we extract the causal
network of face-selective regions and analyze how frames containing faces
influence this network. Our findings reveal that the presence of faces in the
stimuli has a causal effect on both the number and the strength of causal
connections within the network. Additionally, our results highlight the crucial
role of subcortical regions in satisfying causal sufficiency, emphasizing their
importance in causal studies of the brain. This study provides a new perspective on
understanding the causal architecture of the face-selective network of the
brain, motivating further research on neural causality.
|
2501.02334 | Validity Arguments For Constructed Response Scoring Using Generative
Artificial Intelligence Applications | cs.CL cs.AI cs.CY | The rapid advancements in large language models and generative artificial
intelligence (AI) capabilities are making their broad application in the
high-stakes testing context more likely. Use of generative AI in the scoring of
constructed responses is particularly appealing because it reduces the effort
required for handcrafting features in traditional AI scoring and might even
outperform those methods. The purpose of this paper is to highlight the
differences in the feature-based and generative AI applications in constructed
response scoring systems and propose a set of best practices for the collection
of validity evidence to support the use and interpretation of constructed
response scores from scoring systems using generative AI. We compare the
validity evidence needed in scoring systems using human ratings, feature-based
natural language processing AI scoring engines, and generative AI. The evidence
needed in the generative AI context is more extensive than in the feature-based
NLP scoring context because of the lack of transparency and other concerns
unique to generative AI such as consistency. Constructed response score data
from standardized tests demonstrate the collection of validity evidence for
different types of scoring systems and highlights the numerous complexities and
considerations when making a validity argument for these scores. In addition,
we discuss how the evaluation of AI scores might include a consideration of how
a contributory scoring approach combining multiple AI scores (from different
sources) will cover more of the construct in the absence of human ratings.
|
2501.02335 | Connecting the Unconnectable through Feedback | cs.IT eess.SP math.IT | Reliable uplink connectivity remains a persistent challenge for IoT devices,
particularly those at the cell edge, due to their limited transmit power and
single-antenna configurations. This paper introduces a novel framework aimed at
connecting the unconnectable, leveraging real-time feedback from access points
(APs) to enhance uplink coverage without increasing the energy consumption of
IoT devices. At the core of this approach are feedback channel codes, which
enable IoT devices to dynamically adapt their transmission strategies based on
AP decoding feedback, thereby reducing the critical uplink SNR required for
successful communication. Analytical models are developed to quantify the
coverage probability and the number of connectable APs, providing a
comprehensive understanding of the system's performance. Numerical results
validate the proposed method, demonstrating substantial improvements in
coverage range and connectivity, particularly for devices at the cell edge,
with up to a 51% boost in connectable APs. Our approach offers a robust and
energy-efficient solution to overcoming uplink coverage limitations, enabling
IoT networks to connect devices in challenging environments.
|
2501.02336 | AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM
Inference | cs.CL cs.AI | Long-context large language models (LLMs) inference is increasingly critical,
motivating a number of studies devoted to alleviating the substantial storage
and computational costs in such scenarios. Layer-wise skipping methods are
promising optimizations but rarely explored in long-context inference. We
observe that existing layer-wise skipping strategies have several limitations
when applied in long-context inference, including the inability to adapt to
model and context variability, disregard for sublayer significance, and
inapplicability for the prefilling phase. This paper proposes AdaSkip, an
adaptive sublayer skipping method specifically designed for long-context
inference. AdaSkip adaptively identifies less important layers by leveraging
on-the-fly similarity information, enables sublayer-wise skipping, and
accelerates both the prefilling and decoding phases. The effectiveness of
AdaSkip is demonstrated through extensive experiments on various long-context
benchmarks and models, showcasing its superior inference performance over
existing baselines.
|
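One way to make the "on-the-fly similarity information" idea in the record above concrete is to flag sublayers whose output barely moves the hidden state, measured by cosine similarity between a sublayer's input and output; the sketch below is an illustrative reading, not the paper's actual implementation (the threshold value and function name are hypothetical):

```python
import numpy as np

def skippable_sublayers(inputs, outputs, threshold=0.99):
    """Flag sublayers whose output stays close to their input.

    `inputs[i]` / `outputs[i]` are hidden-state vectors before and after
    sublayer i; cosine similarity above `threshold` (illustrative value)
    marks the sublayer as a candidate for skipping.
    """
    skip = []
    for i, (x, y) in enumerate(zip(inputs, outputs)):
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        if cos > threshold:
            skip.append(i)  # sublayer changed the hidden state very little
    return skip
```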
2501.02338 | Evaluation of the Code Generation Capabilities of ChatGPT 4: A
Comparative Analysis in 19 Programming Languages | cs.SE cs.AI | This bachelor's thesis examines the capabilities of ChatGPT 4 in code
generation across 19 programming languages. The study analyzed solution rates
across three difficulty levels, types of errors encountered, and code quality
in terms of runtime and memory efficiency through a quantitative experiment. A
total of 188 programming problems were selected from the LeetCode platform, and
ChatGPT 4 was given three attempts to produce a correct solution with feedback.
ChatGPT 4 successfully solved 39.67% of all tasks, with success rates
decreasing significantly as problem complexity increased. Notably, the model
faced considerable challenges with hard problems across all languages. ChatGPT
4 demonstrated higher competence in widely used languages, likely due to a
larger volume and higher quality of training data. The solution rates also
revealed a preference for languages with low abstraction levels and static
typing. For popular languages, the most frequent error was "Wrong Answer,"
whereas for less popular languages, compiler and runtime errors prevailed,
suggesting frequent misunderstandings and confusion regarding the structural
characteristics of these languages. The model exhibited above-average runtime
efficiency in all programming languages, showing a tendency toward statically
typed and low-abstraction languages. Memory efficiency results varied
significantly, with above-average performance in 14 languages and below-average
performance in five languages. A slight preference for low-abstraction
languages and a leaning toward dynamically typed languages in terms of memory
efficiency were observed. Future research should include a larger number of
tasks, iterations, and less popular languages. Additionally, ChatGPT 4's
abilities in code interpretation and summarization, debugging, and the
development of complex, practical code could be analyzed further.
|
2501.02341 | UAVs Meet LLMs: Overviews and Perspectives Toward Agentic Low-Altitude
Mobility | cs.RO cs.AI | Low-altitude mobility, exemplified by unmanned aerial vehicles (UAVs), has
introduced transformative advancements across various domains, like
transportation, logistics, and agriculture. Leveraging flexible perspectives
and rapid maneuverability, UAVs extend traditional systems' perception and
action capabilities, garnering widespread attention from academia and industry.
However, current UAV operations primarily depend on human control, with only
limited autonomy in simple scenarios, and lack the intelligence and
adaptability needed for more complex environments and tasks. The emergence of
large language models (LLMs) demonstrates remarkable problem-solving and
generalization capabilities, offering a promising pathway for advancing UAV
intelligence. This paper explores the integration of LLMs and UAVs, beginning
with an overview of UAV systems' fundamental components and functionalities,
followed by an overview of the state-of-the-art in LLM technology.
Subsequently, it systematically highlights the multimodal data resources
available for UAVs, which provide critical support for training and evaluation.
Furthermore, it categorizes and analyzes key tasks and application scenarios
where UAVs and LLMs converge. Finally, a reference roadmap towards agentic UAVs
is proposed, aiming to enable UAVs to achieve agentic intelligence through
autonomous perception, memory, reasoning, and tool utilization. Related
resources are available at https://github.com/Hub-Tian/UAVs_Meet_LLMs.
|
2501.02342 | Optimizing Small Language Models for In-Vehicle Function-Calling | cs.LG cs.AI cs.CL cs.CV cs.HC | We propose a holistic approach for deploying Small Language Models (SLMs) as
function-calling agents within vehicles as edge devices, offering a more
flexible and robust alternative to traditional rule-based systems. By
leveraging SLMs, we simplify vehicle control mechanisms and enhance the user
experience. Given the in-vehicle hardware constraints, we apply
state-of-the-art model compression techniques, including structured pruning,
healing, and quantization, ensuring that the model fits within the resource
limitations while maintaining acceptable performance. Our work focuses on
optimizing a representative SLM, Microsoft's Phi-3 mini, and outlines best
practices for enabling embedded models, including compression, task-specific
fine-tuning, and vehicle integration. We demonstrate that, despite significant
reduction in model size which removes up to 2 billion parameters from the
original model, our approach preserves the model's ability to handle complex
in-vehicle tasks accurately and efficiently. Furthermore, by executing the
model in a lightweight runtime environment, we achieve a generation speed of 11
tokens per second, making real-time, on-device inference feasible without
hardware acceleration. Our results demonstrate the potential of SLMs to
transform vehicle control systems, enabling more intuitive interactions between
users and their vehicles for an enhanced driving experience.
|
2501.02344 | Accurate Crop Yield Estimation of Blueberries using Deep Learning and
Smart Drones | cs.CV | We present an AI pipeline that involves using smart drones equipped with
computer vision to obtain a more accurate fruit count and yield estimation of
the number of blueberries in a field. The core components are two
object-detection models based on the YOLO deep learning architecture: a Bush
Model that is able to detect blueberry bushes from images captured at low
altitudes and at different angles, and a Berry Model that can detect individual
berries that are visible on a bush. Together, both models allow for more
accurate crop yield estimation by allowing intelligent control of the drone's
position and camera to safely capture side-view images of bushes up close. In
addition to providing experimental results for our models, which show good
accuracy in terms of precision and recall when captured images are cropped
around the foreground center bush, we also describe how to deploy our models to
map out blueberry fields using different sampling strategies, and discuss the
challenges of annotating very small objects (blueberries) and difficulties in
evaluating the effectiveness of our models.
|
2501.02346 | Exploring the Capabilities and Limitations of Large Language Models for
Radiation Oncology Decision Support | physics.med-ph cs.AI | Thanks to the rapidly evolving integration of LLMs into decision-support
tools, a significant transformation is happening across large-scale systems.
Like other medical fields, the use of LLMs such as GPT-4 is gaining increasing
interest in radiation oncology as well. An attempt to assess GPT-4's
performance in radiation oncology was made via a dedicated 100-question
examination on the highly specialized topic of radiation oncology physics,
revealing GPT-4's superiority over other LLMs. GPT-4's performance on a broader
field of clinical radiation oncology is further benchmarked by the ACR
Radiation Oncology In-Training (TXIT) exam where GPT-4 achieved a high accuracy
of 74.57%. Its performance on re-labelling structure names in accordance with
the AAPM TG-263 report has also been benchmarked, achieving above 96%
accuracies. Such studies shed light on the potential of LLMs in radiation
oncology. As interest in the potential and constraints of LLMs in general
healthcare applications continues to rise, the capabilities and limitations of
LLMs in radiation oncology decision support have not yet been fully explored.
|
2501.02348 | Thinking with Many Minds: Using Large Language Models for
Multi-Perspective Problem-Solving | cs.CL cs.HC | Complex problem-solving requires cognitive flexibility--the capacity to
entertain multiple perspectives while preserving their distinctiveness. This
flexibility replicates the "wisdom of crowds" within a single individual,
allowing them to "think with many minds." While mental simulation enables
imagined deliberation, cognitive constraints limit its effectiveness. We
propose synthetic deliberation, a Large Language Model (LLM)-based method that
simulates discourse between agents embodying diverse perspectives, as a
solution. Using a custom GPT-based model, we showcase its benefits: concurrent
processing of multiple viewpoints without cognitive degradation, parallel
exploration of perspectives, and precise control over viewpoint synthesis. By
externalizing the deliberative process and distributing cognitive labor between
parallel search and integration, synthetic deliberation transcends mental
simulation's limitations. This approach shows promise for strategic planning,
policymaking, and conflict resolution.
|
2501.02349 | Revelio: A Real-World Screen-Camera Communication System with Visually
Imperceptible Data Embedding | cs.MM cs.CR cs.CV cs.IT cs.NI math.IT | We present `Revelio', a real-world screen-camera communication system
leveraging temporal flicker fusion in the OKLAB color space. Using
spatially-adaptive flickering and encoding information in pixel region shapes,
Revelio achieves visually imperceptible data embedding while remaining robust
against noise, asynchronicity, and distortions in screen-camera channels,
ensuring reliable decoding by standard smartphone cameras. The decoder, driven
by a two-stage neural network, uses a weighted differential accumulator for
precise frame detection and symbol recognition. Initial experiments demonstrate
Revelio's effectiveness in interactive television, offering an unobtrusive
method for meta-information transmission.
|
2501.02352 | GNSS/GPS Spoofing and Jamming Identification Using Machine Learning and
Deep Learning | cs.CR cs.AI cs.CV cs.LG | The increasing reliance on Global Navigation Satellite Systems (GNSS),
particularly the Global Positioning System (GPS), underscores the urgent need
to safeguard these technologies against malicious threats such as spoofing and
jamming. As the backbone for positioning, navigation, and timing (PNT) across
various applications including transportation, telecommunications, and
emergency services, GNSS is vulnerable to deliberate interference that poses
significant risks. Spoofing attacks, which involve transmitting counterfeit
GNSS signals to mislead receivers into calculating incorrect positions, can
result in serious consequences, from navigational errors in civilian aviation
to security breaches in military operations. Furthermore, the lack of inherent
security measures within GNSS systems makes them attractive targets for
adversaries. While GNSS/GPS jamming and spoofing systems consist of numerous
components, the ability to distinguish authentic signals from malicious ones is
essential for maintaining system integrity. Recent advancements in machine
learning and deep learning provide promising avenues for enhancing detection
and mitigation strategies against these threats. This paper addresses both
spoofing and jamming by tackling real-world challenges through machine
learning, deep learning, and computer vision techniques. Through extensive
experiments on two real-world datasets related to spoofing and jamming
detection using advanced algorithms, we achieved state-of-the-art results. In
the GNSS/GPS jamming detection task, we attained approximately 99% accuracy,
improving performance by around 5% compared to previous studies. Additionally,
we addressed a challenging task related to spoofing detection, yielding
results that underscore the potential of machine learning and deep learning in
this domain.
|
2501.02353 | Reweighting Improves Conditional Risk Bounds | cs.LG stat.ML | In this work, we study the weighted empirical risk minimization (weighted
ERM) schema, in which an additional data-dependent weight function is
incorporated when the empirical risk function is being minimized. We show that
under a general ``balanceable" Bernstein condition, one can design a weighted
ERM estimator to achieve superior performance in certain sub-regions over the
one obtained from standard ERM, and the superiority manifests itself through a
data-dependent constant term in the error bound. These sub-regions correspond
to large-margin ones in classification settings and low-variance ones in
heteroscedastic regression settings, respectively. Our findings are supported
by evidence from synthetic data experiments.
|
2501.02354 | PrivDPR: Synthetic Graph Publishing with Deep PageRank under
Differential Privacy | cs.DB cs.CR | The objective of privacy-preserving synthetic graph publishing is to
safeguard individuals' privacy while retaining the utility of original data.
Most existing methods focus on graph neural networks under differential privacy
(DP), and yet two fundamental problems in generating synthetic graphs remain
open. First, the current research often encounters high sensitivity due to the
intricate relationships between nodes in a graph. Second, DP is usually
achieved through advanced composition mechanisms that tend to converge
prematurely when working with a small privacy budget. In this paper, inspired
by the simplicity, effectiveness, and ease of analysis of PageRank, we design
PrivDPR, a novel privacy-preserving deep PageRank for graph synthesis. In
particular, we achieve DP by adding noise to the gradient for a specific weight
during learning. Utilizing weight normalization as a bridge, we theoretically
reveal that increasing the number of layers in PrivDPR can effectively mitigate
the high sensitivity and privacy budget splitting. Through formal privacy
analysis, we prove that the synthetic graph generated by PrivDPR satisfies
node-level DP. Experiments on real-world graph datasets show that PrivDPR
preserves high data utility across multiple graph structural properties.
|
2501.02355 | CorrFill: Enhancing Faithfulness in Reference-based Inpainting with
Correspondence Guidance in Diffusion Models | cs.CV | In the task of reference-based image inpainting, an additional reference
image is provided to restore a damaged target image to its original state. The
advancement of diffusion models, particularly Stable Diffusion, allows for
simple formulations in this task. However, existing diffusion-based methods
often lack explicit constraints on the correlation between the reference and
damaged images, resulting in lower faithfulness to the reference images in the
inpainting results. In this work, we propose CorrFill, a training-free module
designed to enhance the awareness of geometric correlations between the
reference and target images. This enhancement is achieved by guiding the
inpainting process with correspondence constraints estimated during inpainting,
utilizing attention masking in self-attention layers and an objective function
to update the input tensor according to the constraints. Experimental results
demonstrate that CorrFill significantly enhances the performance of multiple
baseline diffusion-based methods, including state-of-the-art approaches, by
emphasizing faithfulness to the reference images.
|
2501.02356 | When is the Computation of a Feature Attribution Method Tractable? | cs.LG stat.ML | Feature attribution methods have become essential for explaining machine
learning models. Many popular approaches, such as SHAP and Banzhaf values, are
grounded in power indices from cooperative game theory, which measure the
contribution of features to model predictions. This work studies the
computational complexity of power indices beyond SHAP, addressing the
conditions under which they can be computed efficiently. We identify a simple
condition on power indices that ensures that computation is polynomially
equivalent to evaluating expected values, extending known results for SHAP. We
also introduce Bernoulli power indices, showing that their computation can be
simplified to a constant number of expected value evaluations. Furthermore, we
explore interaction power indices that quantify the importance of feature
subsets, proving that their computation complexity mirrors that of individual
features.
|
2501.02361 | Context Aware Lemmatization and Morphological Tagging Method in Turkish | cs.CL cs.AI | The smallest part of a word that defines the word is called a word root. Word
roots are used to increase success in many applications since they simplify the
word. In this study, the lemmatization model, which is a word root finding
method, and the morphological tagging model, which predicts the grammatical
knowledge of the word, are presented. The presented model was developed for
Turkish, and both models make predictions by taking the meaning of the word
into account. In the literature, there is no lemmatization study that is
sensitive to word meaning in Turkish. For this reason, the present study shares
the model and the results obtained from the model on Turkish lemmatization for
the first time in the literature. In the present study, in the lemmatization
and morphological tagging models, bidirectional LSTM is used for the spelling
of words, and the Turkish BERT model is used for the meaning of words. The
models are trained using the IMST and PUD datasets from Universal Dependencies.
The results from the training of the models were compared with the results from
the SIGMORPHON 2019 competition. The results of the comparisons revealed that
our models were superior.
|
2501.02362 | Easing Optimization Paths: a Circuit Perspective | cs.LG eess.SP stat.ML | Gradient descent is the method of choice for training large artificial
intelligence systems. As these systems become larger, a better understanding of
the mechanisms behind gradient training would allow us to alleviate compute
costs and help steer these systems away from harmful behaviors. To that end, we
suggest utilizing the circuit perspective brought forward by mechanistic
interpretability. After laying out our intuition, we illustrate how it enables
us to design a curriculum for efficient learning in a controlled setting. The
code is available at \url{https://github.com/facebookresearch/pal}.
|
2501.02363 | V2X-DGPE: Addressing Domain Gaps and Pose Errors for Robust
Collaborative 3D Object Detection | cs.CV cs.MA | In V2X collaborative perception, the domain gaps between heterogeneous nodes
pose a significant challenge for effective information fusion. Pose errors
arising from latency and GPS localization noise further exacerbate the issue by
leading to feature misalignment. To overcome these challenges, we propose
V2X-DGPE, a high-accuracy and robust V2X feature-level collaborative perception
framework. V2X-DGPE employs a Knowledge Distillation Framework and a Feature
Compensation Module to learn domain-invariant representations from multi-source
data, effectively reducing the feature distribution gap between vehicles and
roadside infrastructure. Historical information is utilized to provide the
model with a more comprehensive understanding of the current scene.
Furthermore, a Collaborative Fusion Module leverages a heterogeneous
self-attention mechanism to extract and integrate heterogeneous representations
from vehicles and infrastructure. To address pose errors, V2X-DGPE introduces a
deformable attention mechanism, enabling the model to adaptively focus on
critical parts of the input features by dynamically offsetting sampling points.
Extensive experiments on the real-world DAIR-V2X dataset demonstrate that the
proposed method outperforms existing approaches, achieving state-of-the-art
detection performance. The code is available at
https://github.com/wangsch10/V2X-DGPE.
|
2501.02364 | Understanding How Nonlinear Layers Create Linearly Separable Features
for Low-Dimensional Data | cs.LG cs.CV stat.ML | Deep neural networks have attained remarkable success across diverse
classification tasks. Recent empirical studies have shown that deep networks
learn features that are linearly separable across classes. However, these
findings often lack rigorous justifications, even under relatively simple
settings. In this work, we address this gap by examining the linear separation
capabilities of shallow nonlinear networks. Specifically, inspired by the low
intrinsic dimensionality of image data, we model inputs as a union of
low-dimensional subspaces (UoS) and demonstrate that a single nonlinear layer
can transform such data into linearly separable sets. Theoretically, we show
that this transformation occurs with high probability when using random weights
and quadratic activations. Notably, we prove this can be achieved when the
network width scales polynomially with the intrinsic dimension of the data
rather than the ambient dimension. Experimental results corroborate these
theoretical findings and demonstrate that similar linear separation properties
hold in practical scenarios beyond our analytical scope. This work bridges the
gap between empirical observations and theoretical understanding of the
separation capacity of nonlinear networks, offering deeper insights into model
interpretability and generalization.
|
2501.02368 | Enhancing Workplace Productivity and Well-being Using AI Agent | cs.AI cs.HC | This paper discusses the use of Artificial Intelligence (AI) to enhance
workplace productivity and employee well-being. By integrating machine learning
(ML) techniques with neurobiological data, the proposed approaches ensure
alignment with human ethical standards through value alignment models and
Hierarchical Reinforcement Learning (HRL) for autonomous task management. The
system utilizes biometric feedback from employees to generate personalized
health prompts, fostering a supportive work environment that encourages
physical activity. Additionally, we explore decentralized multi-agent systems
for improved collaboration and decision-making frameworks that enhance
transparency. Various approaches using ML techniques in conjunction with AI
implementations are discussed. Together, these innovations aim to create a more
productive and health-conscious workplace. These outcomes assist HR management
and organizations in launching more rational career progression streams for
employees and facilitating organizational transformation.
|
2501.02369 | Predicting two-dimensional spatiotemporal chaotic patterns with
optimized high-dimensional hybrid reservoir computing | cs.LG nlin.CD | As an alternative approach for predicting complex dynamical systems where
physics-based models are no longer reliable, reservoir computing (RC) has
gained popularity. The hybrid approach is considered an interesting option for
improving the prediction performance of RC. The idea is to combine a
knowledge-based model (KBM) to support the fully data-driven RC prediction.
There are three types of hybridization for RC, namely full hybrid (FH), input
hybrid (IH) and output hybrid (OH), where the latter was shown to be superior
in terms of accuracy and robustness for the prediction of
low-dimensional chaotic systems. Here, we extend the formalism to the
prediction of spatiotemporal patterns in two dimensions. To overcome the curse
of dimensionality for this very high-dimensional case we employ the local
states ansatz, where only a few locally adjacent time series are utilized for
the RC-based prediction. Using simulation data from the Barkley model
describing chaotic electrical wave propagation in cardiac tissue, we outline
the formalism of high-dimensional hybrid RC and assess the performance of the
different hybridization schemes. We find that all three methods (FH, IH and OH)
perform better than the reservoir-only approach, although improvements are small
when the model is very inaccurate. For small model errors and small reservoirs, FH and OH
perform nearly equally well and better than IH. Given the smaller CPU needs for
OH and especially the better interpretability of it, OH is to be favored. For
large reservoirs the performance of OH drops below that of FH and IH.
Generally, it may be advisable to test the three setups for a given application
and select the best-suited one, balancing the counteracting factors
of prediction performance and CPU needs.
|
2501.02370 | Prepending or Cross-Attention for Speech-to-Text? An Empirical
Comparison | cs.CL cs.SD eess.AS | Following the remarkable success of Large Language Models (LLMs) in NLP
tasks, there is increasing interest in extending their capabilities to speech
-- the most common form of communication. The most widespread approach to
integrating speech into LLMs is dense feature prepending (DFP), which prepends
the projected speech representations to the textual representations, allowing
end-to-end training with a speech encoder. This raises questions about the need
for a sophisticated speech encoder for DFP and how its performance compares
with a standard encoder-decoder (i.e., cross-attention) architecture. We
compare DFP and cross-attention under a variety of configurations, such as CTC
compression and sequence-level knowledge distillation, on monolingual, bilingual,
and multilingual models. To perform a controlled architectural comparison, we
train all models from scratch rather than using large pretrained models and use
comparable data and parameter settings, testing speech-to-text recognition
(ASR) and translation (ST) on MuST-C v1.0 and CoVoST2 datasets. Despite the
wide adoption of DFP, our results do not indicate a clear advantage of DFP over
cross-attention.
|
2501.02373 | BADTV: Unveiling Backdoor Threats in Third-Party Task Vectors | cs.LG cs.CR | Task arithmetic in large-scale pre-trained models enables flexible adaptation
to diverse downstream tasks without extensive re-training. By leveraging task
vectors (TVs), users can perform modular updates to pre-trained models through
simple arithmetic operations like addition and subtraction. However, this
flexibility introduces new security vulnerabilities. In this paper, we identify
and evaluate the susceptibility of TVs to backdoor attacks, demonstrating how
malicious actors can exploit TVs to compromise model integrity. By developing
composite backdoors and eliminating redundant clean tasks, we introduce BadTV, a
novel backdoor attack specifically designed to remain effective under task
learning, forgetting, and analogy operations. Our extensive experiments
reveal that BadTV achieves near-perfect attack success rates across various
scenarios, significantly impacting the security of models using task
arithmetic. We also explore existing defenses, showing that current methods
fail to detect or mitigate BadTV. Our findings highlight the need for robust
defense mechanisms to secure TVs in real-world applications, especially as TV
services become more popular in machine-learning ecosystems.
|
2501.02376 | Generalizable Origin Identification for Text-Guided Image-to-Image
Diffusion Models | cs.CV | Text-guided image-to-image diffusion models excel in translating images based
on textual prompts, allowing for precise and creative visual modifications.
However, such a powerful technique can be misused for spreading misinformation,
infringing on copyrights, and evading content tracing. This motivates us to
introduce the task of origin IDentification for text-guided Image-to-image
Diffusion models (ID$^2$), aiming to retrieve the original image of a given
translated query. A straightforward solution to ID$^2$ involves training a
specialized deep embedding model to extract and compare features from both
query and reference images. However, due to visual discrepancy across
generations produced by different diffusion models, this similarity-based
approach fails when training on images from one model and testing on those from
another, limiting its effectiveness in real-world applications. To solve this
challenge of the proposed ID$^2$ task, we contribute the first dataset and a
theoretically guaranteed method, both emphasizing generalizability. The curated
dataset, OriPID, contains abundant Origins and guided Prompts, which can be
used to train and test potential IDentification models across various diffusion
models. In the method section, we first prove the existence of a linear
transformation that minimizes the distance between the pre-trained Variational
Autoencoder (VAE) embeddings of generated samples and their origins.
Subsequently, it is demonstrated that such a simple linear transformation can
be generalized across different diffusion models. Experimental results show
that the proposed method achieves satisfying generalization performance,
significantly surpassing similarity-based methods ($+31.6\%$ mAP), even those
with generalization designs.
|
2501.02378 | A ghost mechanism: An analytical model of abrupt learning | cs.LG q-bio.NC stat.ML | \emph{Abrupt learning} is commonly observed in neural networks, where long
plateaus in network performance are followed by rapid convergence to a
desirable solution. Yet, despite its common occurrence, the complex interplay
of task, network architecture, and learning rule has made it difficult to
understand the underlying mechanisms. Here, we introduce a minimal dynamical
system trained on a delayed-activation task and demonstrate analytically how
even a one-dimensional system can exhibit abrupt learning through ghost points
rather than bifurcations. Through our toy model, we show that the emergence of
a ghost point destabilizes learning dynamics. We identify a critical learning
rate that prevents learning through two distinct loss landscape features: a
no-learning zone and an oscillatory minimum. Testing these predictions in
recurrent neural networks (RNNs), we confirm that ghost points precede abrupt
learning and accompany the destabilization of learning. We demonstrate two
complementary remedies: lowering the model output confidence prevents the
network from getting stuck in no-learning zones, while increasing trainable
ranks beyond task requirements (\textit{i.e.}, adding sloppy parameters)
provides more stable learning trajectories. Our model reveals a
bifurcation-free mechanism for abrupt learning and illustrates the importance
of both deliberate uncertainty and redundancy in stabilizing learning dynamics.
|
2501.02379 | Tensor-GaLore: Memory-Efficient Training via Gradient Tensor
Decomposition | cs.LG | We present Tensor-GaLore, a novel method for efficient training of neural
networks with higher-order tensor weights. Many models, particularly those used
in scientific computing, employ tensor-parameterized layers to capture complex,
multidimensional relationships. When scaling these methods to high-resolution
problems makes memory usage grow intractably, and matrix based optimization
methods lead to suboptimal performance and compression. We propose to work
directly in the high-order space of the complex tensor parameter space using a
tensor factorization of the gradients during optimization. We showcase its
effectiveness on Fourier Neural Operators (FNOs), a class of models crucial for
solving partial differential equations (PDEs), and establish its theoretical foundations. Across
various PDE tasks like the Navier Stokes and Darcy Flow equations,
Tensor-GaLore achieves substantial memory savings, reducing optimizer memory
usage by up to 75%. These substantial memory savings across AI for science
demonstrate Tensor-GaLore's potential.
|
2501.02385 | Guiding Medical Vision-Language Models with Explicit Visual Prompts:
Framework Design and Comprehensive Exploration of Prompt Variations | cs.CV cs.CL | While mainstream vision-language models (VLMs) have advanced rapidly in
understanding image level information, they still lack the ability to focus on
specific areas designated by humans. Rather, they typically rely on large
volumes of high-quality image-text paired data to learn and generate posterior
attention maps. To address this critical issue, we propose leveraging visual
prompts: simple visual markers in various forms to guide and enhance the
formation of region-specific attention. Thus, we introduce MedVP, a pioneering
framework that integrates medical entity extraction, visual prompt generation,
and dataset adaptation for visual prompt guided fine-tuning. We successfully
outperform recent state-of-the-art large models across multiple medical VQA
datasets. Extensive experiments and human evaluation are conducted to analyze
the impact of different visual prompt forms and how they contribute to
performance improvement. The results demonstrate both the effectiveness and
clinical significance of our approach.
|
2501.02392 | Syntactic Evolution in Language Usage | cs.CL cs.AI | This research aims to investigate the dynamic nature of linguistic style
throughout various stages of life, from post teenage to old age. By employing
linguistic analysis tools and methodologies, the study will delve into the
intricacies of how individuals adapt and modify their language use over time.
The research uses a data set of blogs from blogger.com from 2004 and focuses on
English for syntactic analysis. The findings of this research can have
implications for linguistics, psychology, and communication studies, shedding
light on the intricate relationship between age and language.
|
2501.02393 | Graph-Aware Isomorphic Attention for Adaptive Dynamics in Transformers | cs.LG cond-mat.mes-hall cond-mat.mtrl-sci cs.AI cs.CL | We present an approach to modifying Transformer architectures by integrating
graph-aware relational reasoning into the attention mechanism, merging concepts
from graph neural networks and language modeling. Building on the inherent
connection between attention and graph theory, we reformulate the Transformer's
attention mechanism as a graph operation and propose Graph-Aware Isomorphic
Attention. This method leverages advanced graph modeling strategies, including
Graph Isomorphism Networks (GIN) and Principal Neighborhood Aggregation (PNA),
to enrich the representation of relational structures. Our approach captures
complex dependencies and generalizes across tasks, as evidenced by a reduced
generalization gap and improved learning performance. Additionally, we expand
the concept of graph-aware attention to introduce Sparse GIN-Attention, a
fine-tuning approach that employs sparse GINs. By interpreting attention
matrices as sparse adjacency graphs, this technique enhances the adaptability
of pre-trained foundational models with minimal computational overhead,
endowing them with graph-aware capabilities. Sparse GIN-Attention fine-tuning
achieves improved training dynamics and better generalization compared to
alternative methods like low-rank adaption (LoRA). We discuss latent graph-like
structures within traditional attention mechanisms, offering a new lens through
which Transformers can be understood. By evolving Transformers into hierarchical
GIN models for relational reasoning, this perspective suggests profound
implications for foundational model development, enabling the design of
architectures that dynamically adapt to both local and global dependencies.
Applications in bioinformatics, materials science, language modeling, and
beyond could benefit from this synthesis of relational and sequential data
modeling, setting the stage for interpretable and generalizable modeling
strategies.
|
2501.02401 | iTARGET: Interpretable Tailored Age Regression for Grouped Epigenetic
Traits | q-bio.GN cs.AI | Accurately predicting chronological age from DNA methylation patterns is
crucial for advancing biological age estimation. However, this task is made
challenging by Epigenetic Correlation Drift (ECD) and Heterogeneity Among CpGs
(HAC), which reflect the dynamic relationship between methylation and age
across different life stages. To address these issues, we propose a novel
two-phase algorithm. The first phase employs similarity searching to cluster
methylation profiles by age group, while the second phase uses Explainable
Boosting Machines (EBM) for precise, group-specific prediction. Our method not
only improves prediction accuracy but also reveals key age-related CpG sites,
detects age-specific changes in aging rates, and identifies pairwise
interactions between CpG sites. Experimental results show that our approach
outperforms traditional epigenetic clocks and machine learning models, offering
a more accurate and interpretable solution for biological age estimation with
significant implications for aging research.
|
2501.02406 | Zero-Shot Statistical Tests for LLM-Generated Text Detection using
Finite Sample Concentration Inequalities | stat.ML cs.AI cs.CL cs.IT cs.LG math.IT | Verifying the provenance of content is crucial to the function of many
organizations, e.g., educational institutions, social media platforms, firms,
etc. This problem is becoming increasingly difficult as text generated by Large
Language Models (LLMs) becomes almost indistinguishable from human-generated
content. In addition, many institutions utilize in-house LLMs and want to
ensure that external, non-sanctioned LLMs do not produce content within the
institution. In this paper, we answer the following question: Given a piece of
text, can we identify whether it was produced by LLM $A$ or $B$ (where $B$ can
be a human)? We model LLM-generated text as a sequential stochastic process
with complete dependence on history and design zero-shot statistical tests to
distinguish between (i) the text generated by two different sets of LLMs $A$
(in-house) and $B$ (non-sanctioned) and also (ii) LLM-generated and
human-generated texts. We prove that the type I and type II errors for our
tests decrease exponentially in the text length. In designing our tests, we
derive concentration inequalities on the difference between log-perplexity and
the average entropy of the string under $A$. Specifically, for a given string,
we demonstrate that if the string is generated by $A$, the log-perplexity of
the string under $A$ converges to the average entropy of the string under $A$,
except with an exponentially small probability in string length. We also show
that if $B$ generates the text, except with an exponentially small probability
in string length, the log-perplexity of the string under $A$ converges to the
average cross-entropy of $B$ and $A$. Lastly, we present preliminary
experimental results to support our theoretical results. By enabling guaranteed
(with high probability) finding of the origin of harmful LLM-generated text
with arbitrary size, we can help combat misinformation.
|
2501.02407 | Anonymization by Design of Language Modeling | cs.CL cs.CR cs.LG | Rapid advances in Natural Language Processing (NLP) have revolutionized many
fields, including healthcare. However, these advances raise significant privacy
concerns, especially when models specialized on sensitive data can memorize and
then expose and regurgitate confidential information. This paper presents a
privacy-by-design language modeling approach to address the problem of language
model anonymization and thus promote model sharing. Specifically, we propose
both a Masking Language Modeling (MLM) methodology to specialize a BERT-like
language model, and a Causal Language Modeling (CLM) methodology to specialize
a GPT-like model that prevents the model from memorizing direct and indirect
identifying information present in the training data. We have comprehensively
evaluated our approaches using medical datasets and compared them against
different baselines. Our results indicate that by avoiding memorizing both
direct and indirect identifiers during model specialization, our masking and
causal language modeling schemes offer the best tradeoff for maintaining high
privacy while retaining high utility.
|
2501.02408 | GenTREC: The First Test Collection Generated by Large Language Models
for Evaluating Information Retrieval Systems | cs.IR | Building test collections for Information Retrieval evaluation has
traditionally been a resource-intensive and time-consuming task, primarily due
to the dependence on manual relevance judgments. While various cost-effective
strategies have been explored, the development of such collections remains a
significant challenge. In this paper, we present GenTREC, the first test
collection constructed entirely from documents generated by a Large Language
Model (LLM), eliminating the need for manual relevance judgments. Our approach
is based on the assumption that documents generated by an LLM are inherently
relevant to the prompts used for their generation. Based on this heuristic, we
utilized existing TREC search topics to generate documents. We consider a
document relevant only to the prompt that generated it, while other
document-topic pairs are treated as non-relevant. To introduce realistic
retrieval challenges, we also generated non-relevant documents, ensuring that
IR systems are tested against a diverse and robust set of materials. The
resulting GenTREC collection comprises 96,196 documents, 300 topics, and 18,964
relevance "judgments". We conducted extensive experiments to evaluate GenTREC
in terms of document quality, relevance judgment accuracy, and evaluation
reliability. Notably, our findings indicate that the ranking of IR systems
using GenTREC is compatible with the evaluations conducted using traditional
TREC test collections, particularly for P@100, MAP, and RPrec metrics. Overall,
our results show that our proposed approach offers a promising, low-cost
alternative for IR evaluation, significantly reducing the burden of building
and maintaining future IR evaluation resources.
|
2501.02409 | Interpretable Neural ODEs for Gene Regulatory Network Discovery under
Perturbations | cs.LG cs.AI cs.CE q-bio.MN stat.ME | Modern high-throughput biological datasets with thousands of perturbations
provide the opportunity for large-scale discovery of causal graphs that
represent the regulatory interactions between genes. Differentiable causal
graphical models have been proposed to infer a gene regulatory network (GRN)
from large-scale interventional datasets, capturing the causal gene regulatory
relationships from genetic perturbations. However, existing models are limited
in their expressivity and scalability while failing to address the dynamic
nature of biological processes such as cellular differentiation. We propose
PerturbODE, a novel framework that incorporates biologically informative neural
ordinary differential equations (neural ODEs) to model cell state trajectories
under perturbations and derive the causal GRN from the neural ODE's parameters.
We demonstrate PerturbODE's efficacy in trajectory prediction and GRN inference
across simulated and real over-expression datasets.
|
2501.02410 | JammingSnake: A follow-the-leader continuum robot with variable
stiffness based on fiber jamming | cs.RO cs.SY eess.SY | Follow-the-leader (FTL) motion is essential for continuum robots operating in
fragile and confined environments. It allows the robot to exert minimal force
on its surroundings, reducing the risk of damage. This paper presents a novel
design of a snake-like robot capable of achieving FTL motion by integrating
fiber jamming modules (FJMs). The proposed robot can dynamically adjust its
stiffness during propagation and interaction with the environment. An algorithm
is developed to independently control the tendon and FJM insertion movements,
allowing the robot to maintain its shape while minimizing the forces exerted on
surrounding structures. To validate the proposed design, comparative tests were
conducted between a traditional tendon-driven robot and the novel design under
different configurations. The results demonstrate that our design relies
significantly less on contact with the surroundings to maintain its shape. This
highlights its potential for safer and more effective operations in delicate
environments, such as minimally invasive surgery (MIS) or industrial in-situ
inspection.
|
2501.02411 | Transfer learning via Regularized Linear Discriminant Analysis | stat.ML cs.LG | Linear discriminant analysis is a widely used method for classification.
However, the high dimensionality of predictors combined with small sample sizes
often results in large classification errors. To address this challenge, it is
crucial to leverage data from related source models to enhance the
classification performance of a target model. We propose to address this
problem in the framework of transfer learning.
In this paper, we present novel transfer learning methods via regularized
random-effects linear discriminant analysis, where the discriminant direction
is estimated as a weighted combination of ridge estimates obtained from both
the target and source models. Multiple strategies for determining these weights
are introduced and evaluated, including one that minimizes the estimation risk
of the discriminant vector and another that minimizes the classification error.
Utilizing results from random matrix theory, we explicitly derive the
asymptotic values of these weights and the associated classification error
rates in the high-dimensional setting, where $p/n \rightarrow \gamma$, with $p$
representing the predictor dimension and $n$ the sample size. We also provide
geometric interpretations of various weights and guidance on which weights to
choose. Extensive numerical studies, including simulations and analysis of
proteomics-based 10-year cardiovascular disease risk classification,
demonstrate the effectiveness of the proposed approach.
|
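The abstract above describes estimating a discriminant direction as a weighted combination of ridge estimates from target and source data. The following is a minimal numpy sketch of that core idea only; the function names, fixed weights, and the simple midpoint classification rule are illustrative assumptions, not the paper's risk-minimizing weight choices or its random-matrix-theory analysis.

```python
import numpy as np

def ridge_direction(X0, X1, lam=1.0):
    """Ridge-regularized LDA direction: (S + lam*I)^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    S = Xc.T @ Xc / len(Xc)                     # pooled sample covariance
    return np.linalg.solve(S + lam * np.eye(S.shape[0]), mu1 - mu0)

def transfer_direction(target, sources, weights, lam=1.0):
    """Weighted combination of ridge estimates from target and source data."""
    dirs = [ridge_direction(*target, lam)]
    dirs += [ridge_direction(X0, X1, lam) for X0, X1 in sources]
    return sum(w * d for w, d in zip(weights, dirs))

def classify(x, w, mu0, mu1):
    """Assign x to class 1 if it lies on the mu1 side of the midpoint along w."""
    return int(w @ (x - (mu0 + mu1) / 2) > 0)
```

In the paper the weights are derived analytically in the high-dimensional limit; here they would simply be passed in by hand.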
2501.02413 | Semantic foundations of equality saturation | cs.PL cs.DB | Equality saturation is an emerging technique for program and query
optimization developed in the programming language community. It performs term
rewriting over an E-graph, a data structure that compactly represents a program
space. Despite its popularity, the theory of equality saturation lags behind
the practice. In this paper, we define a fixpoint semantics of equality
saturation based on tree automata and uncover deep connections between equality
saturation and the chase. We characterize the class of chase sequences that
correspond to equality saturation. We study the complexity of termination of
equality saturation in three cases: single-instance, all-term-instance, and
all-E-graph-instance. Finally, we define a syntactic criterion based on
acyclicity that implies equality saturation termination.
|
2501.02414 | Journey into Automation: Image-Derived Pavement Texture Extraction and
Evaluation | cs.CV cs.LG | Mean texture depth (MTD) is pivotal in assessing the skid resistance of
asphalt pavements and ensuring road safety. This study focuses on developing an
automated system for extracting texture features and evaluating MTD based on
pavement images. The contributions of this work are threefold: firstly, it
proposes an economical method to acquire three-dimensional (3D) pavement
texture data; secondly, it enhances 3D image processing techniques and
formulates features that represent various aspects of texture; thirdly, it
establishes multivariate prediction models that link these features with MTD
values. Validation results demonstrate that the Gradient Boosting Tree (GBT)
model achieves remarkable prediction stability and accuracy (R2 = 0.9858), and
field tests indicate the superiority of the proposed method over other
techniques, with relative errors below 10%. This method offers a comprehensive
end-to-end solution for pavement quality evaluation, from image input to MTD
prediction output.
|
2501.02421 | Fastest Mixing Reversible Markov Chain: Clique Lifted Graphs and
Subgraphs | cs.IT cs.SY eess.SY math.IT | Markov chains are one of the well-known tools for modeling and analyzing
stochastic systems. At the same time, they are used for constructing random
walks that can achieve a given stationary distribution. This paper is concerned
with determining the transition probabilities that optimize the mixing time of
the reversible Markov chains towards a given equilibrium distribution. This
problem is referred to as the Fastest Mixing Reversible Markov Chain (FMRMC)
problem. It is shown that for a given base graph and its clique lifted graph,
the FMRMC problem over the clique lifted graph is reducible to the FMRMC
problem over the base graph, while the optimal mixing times on both graphs are
identical. Based on this result and the solution of the semidefinite
programming formulation of the FMRMC problem, the problem has been addressed
over a wide variety of topologies with the same base graph. Second, the general
form of the FMRMC problem is addressed on stand-alone topologies as well as
subgraphs of an arbitrary graph. For subgraphs, it is shown that the optimal
transition probabilities over edges of the subgraph can be determined
independently of the rest of the topology.
|
2501.02423 | Scaling Laws for Floating Point Quantization Training | cs.LG cs.AR cs.CL | Low-precision training is considered an effective strategy for reducing both
training and downstream inference costs. Previous scaling laws for precision
mainly focus on integer quantization, paying less attention to the constituents
of floating-point quantization, and thus cannot fit LLM losses well in this
scenario. In contrast, while floating-point quantization training is more
commonly implemented in production, the research on it has been relatively
superficial. In this paper, we thoroughly explore the effects of floating-point
quantization targets, exponent bits, mantissa bits, and the calculation
granularity of the scaling factor on the floating-point quantization training
performance of LLMs. While presenting an accurate floating-point
quantization unified scaling law, we also provide valuable suggestions for the
community: (1) Exponent bits contribute slightly more to the model performance
than mantissa bits. We provide the optimal exponent-mantissa bit ratio for
different bit numbers, which is available for future reference by hardware
manufacturers; (2) We discover the formation of the critical data size in
low-precision LLM training. Too much training data exceeding the critical data
size will inversely bring in degradation of LLM performance; (3) The optimal
floating-point quantization precision is directly proportional to the
computational power, but within a wide computational power range, we estimate
that the best cost-performance precision lies between 4 and 8 bits.
|
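To make the exponent-bit/mantissa-bit tradeoff above concrete, here is a toy numpy routine that rounds values to a simulated low-precision float format. The function name and its simplifications (no subnormals, infinities, or NaN handling) are our own illustration, not the quantization scheme studied in the paper.

```python
import numpy as np

def fp_quantize(x, exp_bits=4, man_bits=3):
    """Round values to a simulated float with `exp_bits` exponent bits and
    `man_bits` mantissa bits. Exponent bits set the dynamic range (via the
    bias), mantissa bits set the spacing of representable values."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1
    out = np.zeros_like(x)
    nz = x != 0
    # per-element exponent, clamped to the representable range
    e = np.clip(np.floor(np.log2(np.abs(x[nz]))), -bias, bias)
    step = 2.0 ** (e - man_bits)          # spacing between representable values
    out[nz] = np.round(x[nz] / step) * step
    return out
```

For example, with 3 mantissa bits, 0.1 rounds to 13/128 = 0.1015625, while exact powers of two pass through unchanged.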
2501.02427 | MetaNeRV: Meta Neural Representations for Videos with Spatial-Temporal
Guidance | cs.CV | Neural Representations for Videos (NeRV) has emerged as a promising implicit
neural representation (INR) approach for video analysis, which represents
videos as neural networks with frame indexes as inputs. However, NeRV-based
methods are time-consuming when adapting to a large number of diverse videos,
as each video requires a separate NeRV model to be trained from scratch. In
addition, NeRV-based methods must spatially generate a high-dimensional signal
(i.e., an entire image) from a low-dimensional timestamp input, while
temporally a video typically consists of tens of frames with only minor
changes between adjacent ones. To improve the efficiency of video
representation, we propose Meta Neural Representations for Videos, named
MetaNeRV, a novel framework for fast NeRV representation for unseen videos.
MetaNeRV leverages a meta-learning framework to learn an optimal parameter
initialization, which serves as a good starting point for adapting to new
videos. To address the unique spatial and temporal characteristics of video
modality, we further introduce spatial-temporal guidance to improve the
representation capabilities of MetaNeRV. Specifically, the spatial guidance
with a multi-resolution loss aims to capture the information from different
resolution stages, and the temporal guidance with an effective progressive
learning strategy could gradually refine the number of fitted frames during the
meta-learning process. Extensive experiments conducted on multiple datasets
demonstrate the superiority of MetaNeRV for video representations and video
compression.
|
2501.02428 | Framework for lung CT image segmentation based on UNet++ | eess.IV cs.CV | Recently, the state-of-the-art models for medical image segmentation are U-Net
and its variants. These networks, though succeeding in deriving notable
results, ignore the practical problems hanging over the medical segmentation
field: overfitting and small datasets. Over-complicated deep neural networks
unnecessarily extract meaningless information, and a majority of them are not
suitable for the lung slice CT image segmentation task. To overcome these two
limitations, we propose a new whole-process network built on the advanced
UNet++ model. The network comprises three main modules: data augmentation, an
optimized neural network, and parameter fine-tuning. By incorporating diverse
methods, the training results demonstrate a significant advantage over similar
works, achieving a leading accuracy of 98.03% with the lowest overfitting
potential. Our network is notable as one of the first to target lung slice CT
images.
|
2501.02429 | Citation Structural Diversity: A Novel and Concise Metric Combining
Structure and Semantics for Literature Evaluation | cs.IR | As academic research becomes increasingly diverse, traditional literature
evaluation methods face significant limitations, particularly in capturing the
complexity of academic dissemination and the multidimensional impacts of
literature. To address these challenges, this paper introduces a novel
literature evaluation model of citation structural diversity, with a focus on
assessing its feasibility as an evaluation metric. By refining the citation
network and incorporating both citation structural features and semantic
information,
the study examines the influence of the proposed model of citation structural
diversity on citation volume and long-term academic impact. The findings reveal
that literature with higher citation structural diversity demonstrates notable
advantages in both citation frequency and sustained academic influence. Through
data grouping and a decade-long citation trend analysis, the potential
application of this model in literature evaluation is further validated. This
research offers a fresh perspective on optimizing literature evaluation methods
and emphasizes the distinct advantages of citation structural diversity in
measuring interdisciplinarity.
|
2501.02430 | FOLDER: Accelerating Multi-modal Large Language Models with Enhanced
Performance | cs.CV | Recently, Multi-modal Large Language Models (MLLMs) have shown remarkable
effectiveness for multi-modal tasks due to their abilities to generate and
understand cross-modal data. However, processing long sequences of visual
tokens extracted from visual backbones poses a challenge for deployment in
real-time applications. To address this issue, we introduce FOLDER, a simple
yet effective plug-and-play module designed to reduce the length of the visual
token sequence, mitigating both computational and memory demands during
training and inference. Through a comprehensive analysis of the token reduction
process, we analyze the information loss introduced by different reduction
strategies and develop FOLDER to preserve key information while removing visual
redundancy. We showcase the effectiveness of FOLDER by integrating it into the
visual backbone of several MLLMs, significantly accelerating the inference
phase. Furthermore, we evaluate its utility as a training accelerator or even
performance booster for MLLMs. In both contexts, FOLDER achieves comparable or
even better performance than the original models, while dramatically reducing
complexity by removing up to 70% of visual tokens.
|
2501.02432 | Swift Cross-Dataset Pruning: Enhancing Fine-Tuning Efficiency in Natural
Language Understanding | cs.CL | Dataset pruning aims to select a subset of a dataset for efficient model
training. While data efficiency in natural language processing has primarily
focused on within-corpus scenarios during model pre-training, efficient dataset
pruning for task-specific fine-tuning across diverse datasets remains
challenging due to variability in dataset sizes, data distributions, class
imbalance and label spaces. Current cross-dataset pruning techniques for
fine-tuning often rely on computationally expensive sample ranking processes,
typically requiring full dataset training or reference models. We address this
gap by proposing Swift Cross-Dataset Pruning (SCDP). Specifically, our approach
uses TF-IDF embeddings with geometric median to rapidly evaluate sample
importance. We then apply dataset size-adaptive pruning to ensure diversity:
for smaller datasets, we retain samples far from the geometric median, while
for larger ones, we employ distance-based stratified pruning. Experimental
results on six diverse datasets demonstrate the effectiveness of our method,
spanning various tasks and scales while significantly reducing computational
resources. Source code is available at:
https://github.com/he-y/NLP-Dataset-Pruning
|
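The SCDP pipeline above can be sketched in a few dozen lines of numpy: TF-IDF embeddings, a geometric median found by Weiszfeld iterations, and size-adaptive selection. The whitespace tokenization and all function names below are simplifying assumptions for illustration, not the released implementation at the linked repository.

```python
import numpy as np

def tfidf(docs):
    """Minimal smoothed TF-IDF embedding over a whitespace-tokenized corpus."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w in d.split():
            tf[r, idx[w]] += 1
    df = (tf > 0).sum(axis=0)
    return tf * np.log((1 + len(docs)) / (1 + df))

def geometric_median(X, iters=50):
    """Weiszfeld iteration for the geometric median of the rows of X."""
    m = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)
        m = (X / d[:, None]).sum(axis=0) / (1 / d).sum()
    return m

def scdp_prune(docs, keep, small=True):
    """Size-adaptive pruning: small datasets keep the samples farthest from
    the geometric median; large ones keep a distance-stratified subset."""
    X = tfidf(docs)
    d = np.linalg.norm(X - geometric_median(X), axis=1)
    if small:
        return sorted(np.argsort(-d)[:keep].tolist())
    order = np.argsort(d)
    strata = np.linspace(0, len(docs) - 1, num=keep).astype(int)
    return sorted(order[strata].tolist())
```

Ranking by distance to the geometric median avoids any model training or reference model, which is the source of the method's speed.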
2501.02434 | Towards Multimodal Metaphor Understanding: A Chinese Dataset and Model
for Metaphor Mapping Identification | cs.CL | Metaphors play a crucial role in human communication, yet their comprehension
remains a significant challenge for natural language processing (NLP) due to
the cognitive complexity involved. According to Conceptual Metaphor Theory
(CMT), metaphors map a source domain onto a target domain, and understanding
this mapping is essential for grasping the nature of metaphors. While existing
NLP research has focused on tasks like metaphor detection and sentiment
analysis of metaphorical expressions, there has been limited attention to the
intricate process of identifying the mappings between source and target
domains. Moreover, non-English multimodal metaphor resources remain largely
neglected in the literature, hindering a deeper understanding of the key
elements involved in metaphor interpretation. To address this gap, we developed
a Chinese multimodal metaphor advertisement dataset (namely CM3D) that includes
annotations of specific target and source domains. This dataset aims to foster
further research into metaphor comprehension, particularly in non-English
languages. Furthermore, we propose a Chain-of-Thought (CoT) Prompting-based
Metaphor Mapping Identification Model (CPMMIM), which simulates the human
cognitive process for identifying these mappings. Drawing inspiration from CoT
reasoning and Bi-Level Optimization (BLO), we treat the task as a hierarchical
identification problem, enabling more accurate and interpretable metaphor
mapping. Our experimental results demonstrate the effectiveness of CPMMIM,
highlighting its potential for advancing metaphor comprehension in NLP. Our
dataset and code are both publicly available to encourage further advancements
in this field.
|
2501.02436 | An Analysis Framework for Understanding Deep Neural Networks Based on
Network Dynamics | cs.LG nlin.CD stat.ML | Advancing artificial intelligence demands a deeper understanding of the
mechanisms underlying deep learning. Here, we propose a straightforward
analysis framework based on the dynamics of learning models. Neurons are
categorized into two modes based on whether their transformation functions
preserve order. This categorization reveals how deep neural networks (DNNs)
maximize information extraction by rationally allocating the proportion of
neurons in different modes across deep layers. We further introduce the
attraction basins of the training samples in both the sample vector space and
the weight vector space to characterize the generalization ability of DNNs.
This framework allows us to identify optimal depth and width configurations,
providing a unified explanation for fundamental DNN behaviors such as the "flat
minima effect," "grokking," and "double descent" phenomena. Our analysis extends
to networks with depths up to 100 layers.
|
2501.02438 | Efficient Deployment of Large Language Models on Resource-constrained
Devices | cs.LG cs.AI cs.CL cs.DC | Deploying Large Language Models (LLMs) on resource-constrained (or weak)
devices presents significant challenges due to limited resources and
heterogeneous data distribution. To address the data concern, it is necessary
to fine-tune LLMs using on-device private data for various downstream tasks.
While Federated Learning (FL) offers a promising privacy-preserving solution,
existing fine-tuning methods retain the original LLM size, leaving issues of
high inference latency and excessive memory demands unresolved. Hence, we
design FedSpine, an FL framework that combines Parameter-Efficient Fine-Tuning
(PEFT) with structured pruning for efficient deployment of LLMs on
resource-constrained devices. Specifically, FedSpine introduces an iterative
process to prune and tune the parameters of LLMs. To mitigate the impact of
device heterogeneity, an online Multi-Armed Bandit (MAB) algorithm is employed
to adaptively determine different pruning ratios and LoRA ranks for
heterogeneous devices without any prior knowledge of their computing and
communication capabilities. As a result, FedSpine maintains higher inference
accuracy while improving fine-tuning efficiency. Experimental results conducted
on a physical platform with 80 devices demonstrate that FedSpine can speed up
fine-tuning by 1.4$\times$-6.9$\times$ and improve final accuracy by 0.4%-4.5%
under the same sparsity level compared to other baselines.
|
2501.02441 | A Statistical Hypothesis Testing Framework for Data Misappropriation
Detection in Large Language Models | stat.ML cs.AI cs.CL cs.CR cs.LG math.ST stat.TH | Large Language Models (LLMs) are rapidly gaining enormous popularity in
recent years. However, the training of LLMs has raised significant privacy and
legal concerns, particularly regarding the inclusion of copyrighted materials
in their training data without proper attribution or licensing, which falls
under the broader issue of data misappropriation. In this article, we focus on
a specific problem of data misappropriation detection, namely, to determine
whether a given LLM has incorporated data generated by another LLM. To address
this issue, we propose embedding watermarks into the copyrighted training data
and formulating the detection of data misappropriation as a hypothesis testing
problem. We develop a general statistical testing framework, construct a
pivotal statistic, determine the optimal rejection threshold, and explicitly
control the type I and type II errors. Furthermore, we establish the asymptotic
optimality properties of the proposed tests, and demonstrate their empirical
effectiveness through intensive numerical experiments.
|
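The abstract's pivotal statistic and optimal rejection threshold are specific to its framework, but the general shape of framing watermark detection as hypothesis testing can be illustrated with the standard green-list z-test. The function below is our generic illustration, not the paper's test.

```python
import math

def watermark_detect(green_count, n_tokens, gamma=0.25, alpha=0.01):
    """One-sided z-test of H0: text is unwatermarked (each token is 'green'
    with probability gamma) vs. H1: the green fraction exceeds gamma.
    Returns the z statistic and the rejection decision."""
    z = (green_count - gamma * n_tokens) / math.sqrt(n_tokens * gamma * (1 - gamma))
    z_crit = {0.05: 1.645, 0.01: 2.326}[alpha]   # normal-approximation cutoffs
    return z, z > z_crit
```

The type I error is controlled by the choice of `z_crit`; power (type II error) grows with the text length and the strength of the watermark bias.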
2501.02442 | Unsupervised Search for Ethnic Minorities' Medical Segmentation Training
Set | cs.CV | This article investigates the critical issue of dataset bias in medical
imaging, with a particular emphasis on racial disparities caused by uneven
population distribution in dataset collection. Our analysis reveals that
medical segmentation datasets are significantly biased, primarily influenced by
the demographic composition of their collection sites. For instance, Scanning
Laser Ophthalmoscopy (SLO) fundus datasets collected in the United States
predominantly feature images of White individuals, with minority racial groups
underrepresented. This imbalance can result in biased model performance and
inequitable clinical outcomes, particularly for minority populations. To
address this challenge, we propose a novel training set search strategy aimed
at reducing these biases by focusing on underrepresented racial groups. Our
approach utilizes existing datasets and employs a simple greedy algorithm to
identify source images that closely match the target domain distribution. By
selecting training data that aligns more closely with the characteristics of
minority populations, our strategy improves the accuracy of medical
segmentation models for specific minority groups, e.g., Black individuals. Our experimental
results demonstrate the effectiveness of this approach in mitigating bias. We
also discuss the broader societal implications, highlighting how addressing
these disparities can contribute to more equitable healthcare outcomes.
|
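The "simple greedy algorithm" mentioned above can be sketched as follows: at each step, pick the source image whose addition moves the running mean of the selected set's features closest to the target-domain mean. The feature representation and the mean-matching criterion are our simplifying assumptions for illustration.

```python
import numpy as np

def greedy_select(source_feats, target_mean, k):
    """Greedily pick k source samples whose running mean feature vector
    moves closest to the target-domain mean."""
    chosen, remaining = [], list(range(len(source_feats)))
    running = np.zeros_like(target_mean)
    for step in range(k):
        best, best_d = None, np.inf
        for i in remaining:
            cand = (running * step + source_feats[i]) / (step + 1)
            d = np.linalg.norm(cand - target_mean)
            if d < best_d:
                best, best_d = i, d
        chosen.append(best)
        remaining.remove(best)
        running = (running * step + source_feats[best]) / (step + 1)
    return chosen
```

In practice `source_feats` would come from an image encoder and `target_mean` from the underrepresented group's images; here plain vectors stand in for both.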
2501.02446 | RTLMarker: Protecting LLM-Generated RTL Copyright via a Hardware
Watermarking Framework | cs.CR cs.AI | Recent advances of large language models in the field of Verilog generation
have raised several ethical and security concerns, such as code copyright
protection and dissemination of malicious code. Researchers have employed
watermarking techniques to identify codes generated by large language models.
However, the existing watermarking works fail to protect RTL code copyright due
to the significant syntactic and semantic differences between RTL code and
software code in languages such as Python. This paper proposes a hardware
watermarking framework RTLMarker that embeds watermarks into RTL code and
deeper into the synthesized netlist. We propose a set of rule-based Verilog
code transformations , ensuring the watermarked RTL code's syntactic and
semantic correctness. In addition, we consider an inherent tradeoff between
watermark transparency and watermark effectiveness and jointly optimize them.
The results demonstrate RTLMarker's superiority over the baseline in RTL code
watermarking.
|
2501.02447 | MedSegDiffNCA: Diffusion Models With Neural Cellular Automata for Skin
Lesion Segmentation | cs.CV cs.LG eess.IV | Denoising Diffusion Models (DDMs) are widely used for high-quality image
generation and medical image segmentation but often rely on Unet-based
architectures, leading to high computational overhead, especially with
high-resolution images. This work proposes three NCA-based improvements for
diffusion-based medical image segmentation. First, Multi-MedSegDiffNCA uses a
multilevel NCA framework to refine rough noise estimates generated by lower
level NCA models. Second, CBAM-MedSegDiffNCA incorporates channel and spatial
attention for improved segmentation. Third, MultiCBAM-MedSegDiffNCA combines
these methods with a new RGB channel loss for semantic guidance. Evaluations on
lesion segmentation show that MultiCBAM-MedSegDiffNCA matches Unet-based model
performance with a Dice score of 87.84% while using 60-110 times fewer
parameters, offering a more efficient solution for low resource medical
settings.
|
2501.02448 | Understand, Solve and Translate: Bridging the Multilingual Mathematical
Reasoning Gap | cs.CL | Large language models (LLMs) demonstrate exceptional performance on complex
reasoning tasks. However, despite their strong reasoning capabilities in
high-resource languages (e.g., English and Chinese), a significant performance
gap persists in other languages. To investigate this gap in Korean, we
introduce HRM8K, a benchmark comprising 8,011 English-Korean parallel bilingual
math problems. Through systematic analysis of model behaviors, we identify a
key finding: these performance disparities stem primarily from difficulties in
comprehending non-English inputs, rather than limitations in reasoning
capabilities. Based on these findings, we propose UST (Understand, Solve, and
Translate), a method that strategically uses English as an anchor for reasoning
and solution generation. By fine-tuning the model on 130k synthetically
generated data points, UST achieves a 10.91% improvement on the HRM8K benchmark
and reduces the multilingual performance gap from 11.6% to 0.7%. Additionally,
we show that improvements from UST generalize effectively to different Korean
domains, demonstrating that capabilities acquired from machine-verifiable
content can be generalized to other areas. We publicly release the benchmark,
training dataset, and models.
|
2501.02450 | GCP: Guarded Collaborative Perception with Spatial-Temporal Aware
Malicious Agent Detection | cs.CV | Collaborative perception significantly enhances autonomous driving safety by
extending each vehicle's perception range through message sharing among
connected and autonomous vehicles. Unfortunately, it is also vulnerable to
adversarial message attacks from malicious agents, resulting in severe
performance degradation. While existing defenses employ
hypothesis-and-verification frameworks to detect malicious agents based on
single-shot outliers, they overlook temporal message correlations, which can be
circumvented by subtle yet harmful perturbations in model input and output
spaces. This paper reveals a novel blind area confusion (BAC) attack that
compromises existing single-shot outlier-based detection methods. As a
countermeasure, we propose GCP, a Guarded Collaborative Perception framework
based on spatial-temporal aware malicious agent detection, which maintains
single-shot spatial consistency through a confidence-scaled spatial concordance
loss, while simultaneously examining temporal anomalies by reconstructing
historical bird's eye view motion flows in low-confidence regions. We also
employ a joint spatial-temporal Benjamini-Hochberg test to synthesize
dual-domain anomaly results for reliable malicious agent detection. Extensive
experiments demonstrate GCP's superior performance under diverse attack
scenarios, achieving up to 34.69% improvements in AP@0.5 compared to the
state-of-the-art CP defense strategies under BAC attacks, while maintaining
consistent 5-8% improvements under other typical attacks. Code will be released
at https://github.com/CP-Security/GCP.git.
|
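GCP's joint spatial-temporal test builds on the standard Benjamini-Hochberg step-up procedure for controlling the false discovery rate across many per-agent anomaly tests. Below is a self-contained sketch of that building block only, not GCP's dual-domain synthesis.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    hypotheses rejected at false discovery rate level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # k = largest rank whose sorted p-value is below its step-up threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    # reject the k hypotheses with the smallest p-values
    return sorted(order[:k])
```

Note the step-up behavior: a p-value may exceed its own threshold yet still be rejected if a larger rank passes, which makes BH less conservative than Bonferroni-style corrections.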
2501.02451 | Enhancing Contrastive Learning for Retinal Imaging via Adjusted
Augmentation Scales | cs.CV cs.AI | Contrastive learning, a prominent approach within self-supervised learning,
has demonstrated significant effectiveness in developing generalizable models
for various applications involving natural images. However, recent research
indicates that these successes do not necessarily extend to the medical imaging
domain. In this paper, we investigate the reasons for this suboptimal
performance and hypothesize that the dense distribution of medical images poses
challenges to the pretext tasks in contrastive learning, particularly in
constructing positive and negative pairs. We explore model performance under
different augmentation strategies and compare the results to those achieved
with strong augmentations. Our study includes six publicly available datasets
covering multiple clinically relevant tasks. We further assess the model's
generalizability through external evaluations. The model pre-trained with weak
augmentation outperforms those with strong augmentation, improving AUROC from
0.838 to 0.848 and AUPR from 0.523 to 0.597 on MESSIDOR2, and showing similar
enhancements across other datasets. Our findings suggest that optimizing the
scale of augmentation is critical for enhancing the efficacy of contrastive
learning in medical imaging.
|
2501.02453 | Blockage-Aware UAV-Assisted Wireless Data Harvesting With Building
Avoidance | cs.IT eess.SP math.IT | Unmanned aerial vehicles (UAVs) offer dynamic trajectory control, enabling
them to avoid obstacles and establish line-of-sight (LoS) wireless channels
with ground nodes (GNs), unlike traditional ground-fixed base stations. This
study addresses the joint optimization of scheduling and three-dimensional (3D)
trajectory planning for UAV-assisted wireless data harvesting. The objective is
to maximize the minimum uplink throughput among GNs while accounting for signal
blockages and building avoidance. To achieve this, we first present
mathematical models designed to avoid cuboid-shaped buildings and to determine
wireless signal blockage by buildings through rigorous mathematical proof. The
optimization problem is formulated as nonconvex mixed-integer nonlinear
programming and solved using advanced techniques. Specifically, the problem is
decomposed into convex subproblems via quadratic transform and successive
convex approximation. Building avoidance and signal blockage constraints are
incorporated using the separating hyperplane method and an approximated
indicator function. These subproblems are then iteratively solved using the
block coordinate descent algorithm. Simulation results validate the
effectiveness of the proposed approach. The UAV dynamically adjusts its
trajectory and scheduling policy to maintain LoS channels with GNs,
significantly enhancing network throughput compared to existing schemes.
Moreover, the UAV's continuous trajectory adheres to the building avoidance
constraints, ensuring uninterrupted operation and compliance with safety
requirements.
|
2501.02456 | Keeping Score: A Quantitative Analysis of How the CHI Community
Appreciates Its Milestones | cs.HC cs.SI | The ACM CHI Conference has a tradition of citing its intellectual heritage.
At the same time, we know CHI is highly diverse and evolving. In this highly
dynamic context, it is not clear how the CHI community continues to appreciate
its milestones (within and outside of CHI). We present an investigation into
how the community's citations to milestones have evolved over 43 years of CHI
Proceedings (1981-2024). Forgetting curves plotted for each year suggest that
milestones are slowly fading from the CHI community's collective memory.
However, the picture is more nuanced when we trace citations to the top-cited
milestones over time. We identify three distinct types of milestones cited at
CHI, a typology of milestone contributions, and define the Milestone
Coefficient as a metric to assess the impact of milestone papers on a
continuous scale. Further, we provide empirical evidence of a Matthew effect at
CHI. We discuss the broader ramifications for the CHI community and the field
of HCI.
|
2501.02458 | Neural Reflectance Fields for Radio-Frequency Ray Tracing | cs.CV cs.LG cs.NI eess.SP | Ray tracing is widely employed to model the propagation of radio-frequency
(RF) signals in complex environments. The modelling performance greatly depends
on how accurately the target scene can be depicted, including the scene
geometry and surface material properties. Advances in computer vision and
LiDAR make scene geometry estimation increasingly accurate, but scalable and
efficient approaches to estimating material reflectivity in real-world
environments are still lacking. In this work, we tackle this problem by learning the
material reflectivity efficiently from the path loss of the RF signal from the
transmitters to receivers. Specifically, we want the learned material
reflection coefficients to minimize the gap between the predicted and measured
powers of the receivers. We achieve this by translating the neural reflectance
field from optics to RF domain by modelling both the amplitude and phase of RF
signals to account for the multipath effects. We further propose a
differentiable RF ray tracing framework that optimizes the neural reflectance
field to match the signal strength measurements. We simulate a complex
real-world environment for experiments and our simulation results show that the
neural reflectance field can successfully learn the reflection coefficients for
all incident angles. As a result, our approach achieves better accuracy in
predicting the powers of receivers with significantly less training data
compared to existing approaches.
|
2501.02460 | Towards Omni-RAG: Comprehensive Retrieval-Augmented Generation for Large
Language Models in Medical Applications | cs.CL | Large language models hold promise for addressing medical challenges, such as
medical diagnosis reasoning, research knowledge acquisition, clinical
decision-making, and consumer health inquiry support. However, they often
generate hallucinations due to limited medical knowledge. Incorporating
external knowledge is therefore critical, which necessitates multi-source
knowledge acquisition. We address this challenge by framing it as a source
planning problem, which is to formulate context-appropriate queries tailored to
the attributes of diverse sources. Existing approaches either overlook source
planning or fail to achieve it effectively due to misalignment between the
model's expectation of the sources and their actual content. To bridge this
gap, we present MedOmniKB, a repository comprising multigenre and
multi-structured medical knowledge sources. Leveraging these sources, we
propose the Source Planning Optimisation method, which enhances multi-source
utilisation. Our approach involves enabling an expert model to explore and
evaluate potential plans while training a smaller model to learn source
alignment. Experimental results demonstrate that our method substantially
improves multi-source planning performance, enabling the optimised small model
to achieve state-of-the-art results in leveraging diverse medical knowledge
sources.
|
2501.02461 | FedRSClip: Federated Learning for Remote Sensing Scene Classification
Using Vision-Language Models | cs.CV cs.AI | Remote sensing data is often distributed across multiple institutions, and
due to privacy concerns and data-sharing restrictions, leveraging large-scale
datasets in a centralized training framework is challenging. Federated learning
offers a promising solution by enabling collaborative model training across
distributed data sources without requiring data centralization. However,
current Vision-Language Models (VLMs), which typically contain billions of
parameters, pose significant communication challenges for traditional federated
learning approaches based on model parameter updates, as they would incur
substantial communication costs. In this paper, we propose FedRSCLIP, the first
federated learning framework designed for remote sensing image classification
based on a VLM, specifically CLIP. FedRSCLIP addresses the challenges of data
heterogeneity and large-scale model transmission in federated environments by
introducing Prompt Learning, which optimizes only a small set of tunable
parameters. The framework introduces a dual-prompt mechanism, comprising Shared
Prompts for global knowledge sharing and Private Prompts for client-specific
adaptation. To maintain semantic coherence between shared and private prompts,
we propose the Dual Prompt Alignment Constraint to balance global consistency
and local adaptability across diverse client distributions. Additionally, to
enhance cross-modal representation learning, we introduce the Cross-Modal
Feature Alignment Constraint to align multimodal features between text and
image prompts. To validate the effectiveness of our proposed model, we
construct a Fed-RSIC dataset based on three existing remote sensing image
classification datasets, specifically designed to simulate various federated
learning configurations. Experimental results demonstrate the effectiveness and
superiority of FedRSCLIP in remote sensing image classification.
|
2501.02464 | Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera | cs.CV cs.AI cs.RO | While recent depth estimation methods exhibit strong zero-shot
generalization, achieving accurate metric depth across diverse camera
types-particularly those with large fields of view (FoV) such as fisheye and
360-degree cameras-remains a significant challenge. This paper presents Depth
Any Camera (DAC), a powerful zero-shot metric depth estimation framework that
extends a perspective-trained model to effectively handle cameras with varying
FoVs. The framework is designed to ensure that all existing 3D data can be
leveraged, regardless of the specific camera types used in new applications.
Remarkably, DAC is trained exclusively on perspective images but generalizes
seamlessly to fisheye and 360-degree cameras without the need for specialized
training data. DAC employs Equi-Rectangular Projection (ERP) as a unified image
representation, enabling consistent processing of images with diverse FoVs. Its
key components include a pitch-aware Image-to-ERP conversion for efficient
online augmentation in ERP space, a FoV alignment operation to support
effective training across a wide range of FoVs, and multi-resolution data
augmentation to address resolution disparities between training and testing.
DAC achieves state-of-the-art zero-shot metric depth estimation, improving
delta-1 ($\delta_1$) accuracy by up to 50% on multiple fisheye and 360-degree
datasets compared to prior metric depth foundation models, demonstrating robust
generalization across camera types.
|
2501.02465 | EOG Communication Interface for Quadriplegics: Prototype & Signal
Processing | eess.SP cs.SY eess.SY | Electrooculography (EOG) is an electrophysiological signal that determines
the human eye orientation and is therefore widely used in human-computer
interfaces (HCI). The purpose of this project is to develop a communication
method for quadriplegic patients using EOG signals, aimed at text and voice
generation. The system performs eye movement tracking with a custom-built
prototype that measures the eyeball's left-right and up-down movements. An
ESP32 board captures and processes the signal, converting the data into
content displayed on an LCD and played through an MP3 module. The system
helps patients by facilitating more natural and efficient symptom expression.
As development continues, the system could incorporate blink detection, face
masks, and additional eye tests. Although the prototype is promising, further
research and clinical trials are needed to evaluate the system's usefulness
and ensure that it performs as planned in real-world scenarios. With this
project, assistive technology will make significant progress and improve the
lives of many who suffer from severe motor impairments.
|
2501.02467 | DeTrack: In-model Latent Denoising Learning for Visual Object Tracking | cs.CV | Previous visual object tracking methods employ image-feature regression
models or coordinate autoregression models for bounding box prediction.
Image-feature regression methods heavily depend on matching results and do not
utilize positional priors, while the autoregressive approach can only be trained
using bounding boxes available in the training set, potentially resulting in
suboptimal performance during testing with unseen data. Inspired by the
diffusion model, denoising learning enhances the model's robustness to unseen
data. Therefore, we introduce noise into bounding boxes, generating noisy boxes
for training, thus enhancing model robustness on testing data. We propose a new
paradigm to formulate the visual object tracking problem as a denoising
learning process. However, tracking algorithms are usually required to run in
real time, and directly applying the diffusion model to object tracking would
severely impair tracking speed. Therefore, we decompose the denoising process
across the denoising blocks within a single model, rather than running the model
multiple times, and we summarize the proposed paradigm as an in-model
latent denoising learning process. Specifically, we propose a denoising Vision
Transformer (ViT), which is composed of multiple denoising blocks. In the
denoising block, template and search embeddings are projected into every
denoising block as conditions. A denoising block is responsible for removing
the noise in a predicted bounding box, and multiple stacked denoising blocks
cooperate to accomplish the whole denoising process. Subsequently, we utilize
image features and trajectory information to refine the denoised bounding box.
Besides, we also utilize trajectory memory and visual memory to improve
tracking stability. Experimental results validate the effectiveness of our
approach, achieving competitive performance on several challenging datasets.
|