| id | title | categories | abstract |
|---|---|---|---|
2501.01761 | Adverse Weather Conditions Augmentation of LiDAR Scenes with Latent
Diffusion Models | cs.CV | LiDAR scenes constitute a fundamental source for several autonomous driving
applications. Despite the existence of several datasets, scenes from adverse
weather conditions are rarely available. This limits the robustness of
downstream machine learning models and reduces the reliability of autonomous
driving systems in particular locations and seasons. Collecting feature-diverse
scenes under adverse weather conditions is challenging due to seasonal
limitations. Generative models are therefore essential, especially for
generating adverse weather conditions for specific driving scenarios. In our
work, we propose a latent diffusion process built from an autoencoder and
latent diffusion models. Moreover, we leverage clear-condition LiDAR scenes
with a postprocessing step to improve the realism of the generated adverse
weather condition scenes.
|
2501.01763 | Quantifying A Firm's AI Engagement: Constructing Objective, Data-Driven,
AI Stock Indices Using 10-K Filings | q-fin.GN cs.AI econ.EM q-fin.PM q-fin.RM | Following an analysis of existing AI-related exchange-traded funds (ETFs), we
reveal that the selection criteria for determining which stocks qualify as
AI-related are often opaque and rely on vague phrases and subjective judgments.
This paper proposes a new, objective, data-driven approach using natural
language processing (NLP) techniques to classify AI stocks by analyzing annual
10-K filings from 3,395 NASDAQ-listed firms between 2011 and 2023. This
analysis quantifies each company's engagement with AI through binary indicators
and weighted AI scores based on the frequency and context of AI-related terms.
Using these metrics, we construct four AI stock indices: the Equally Weighted
AI Index (AII), the Size-Weighted AI Index (SAII), and two Time-Discounted AI
Indices (TAII05 and TAII5X), each offering a different perspective on AI investment.
We validate our methodology through an event study on the launch of OpenAI's
ChatGPT, demonstrating that companies with higher AI engagement saw
significantly greater positive abnormal returns, with analyses supporting the
predictive power of our AI measures. Our indices perform on par with or surpass
14 existing AI-themed ETFs and the Nasdaq Composite Index in risk-return
profiles, market responsiveness, and overall performance, achieving higher
average daily returns and risk-adjusted metrics without increased volatility.
These results suggest our NLP-based approach offers a reliable,
market-responsive, and cost-effective alternative to existing AI-related ETF
products. Our innovative methodology can also guide investors, asset managers,
and policymakers in using corporate data to construct other thematic
portfolios, contributing to a more transparent, data-driven, and competitive
approach.
|
2501.01765 | SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation | cs.LG | As advancements in large language models (LLMs) continue and the demand for
personalized models increases, parameter-efficient fine-tuning (PEFT) methods
(e.g., LoRA) will become essential due to their efficiency in reducing
computation costs. However, recent studies have raised alarming concerns that
LoRA fine-tuning could potentially compromise the safety alignment in LLMs,
posing significant risks for the model owner. In this paper, we first
investigate the underlying mechanism by analyzing the changes in
safety-alignment-related features before and after fine-tuning. Then, we propose a
fixed safety module calculated by safety data and a task-specific
initialization for trainable parameters in low-rank adaptations, termed
Safety-alignment preserved Low-Rank Adaptation (SaLoRA). Unlike previous LoRA
methods and their variants, SaLoRA enables targeted modifications to LLMs
without disrupting their original alignments. Our experiments show that SaLoRA
outperforms various adapter-based approaches across multiple evaluation metrics
in different fine-tuning tasks.
|
2501.01767 | LogicAD: Explainable Anomaly Detection via VLM-based Text Feature
Extraction | cs.CV | Logical image understanding involves interpreting and reasoning about the
relationships and consistency within an image's visual content. This capability
is essential in applications such as industrial inspection, where logical
anomaly detection is critical for maintaining high-quality standards and
minimizing costly recalls. Previous research in anomaly detection (AD) has
relied on prior knowledge for designing algorithms, which often requires
extensive manual annotations, significant computing power, and large amounts of
data for training. Autoregressive, multimodal Vision Language Models (AVLMs)
offer a promising alternative due to their exceptional performance in visual
reasoning across various domains. Despite this, their application to logical AD
remains unexplored. In this work, we investigate using AVLMs for logical AD and
demonstrate that they are well-suited to the task. Combining AVLMs with format
embedding and a logic reasoner, we achieve SOTA performance on the public
benchmark MVTec LOCO AD, with an AUROC of 86.0% and an F1-max of 83.7%, along
with explanations of anomalies, outperforming the existing SOTA method by a
large margin.
|
2501.01770 | TCPFormer: Learning Temporal Correlation with Implicit Pose Proxy for 3D
Human Pose Estimation | cs.CV | Recent multi-frame lifting methods have dominated 3D human pose
estimation. However, previous methods ignore the intricate dependencies within
the 2D pose sequence and learn only a single temporal correlation. To alleviate this
limitation, we propose TCPFormer, which leverages an implicit pose proxy as an
intermediate representation. Each proxy within the implicit pose proxy can
build one temporal correlation, thereby helping the model learn a more
comprehensive temporal correlation of human motion. Specifically, our method consists of
three key components: Proxy Update Module (PUM), Proxy Invocation Module (PIM),
and Proxy Attention Module (PAM). PUM first uses pose features to update the
implicit pose proxy, enabling it to store representative information from the
pose sequence. PIM then invokes and integrates the pose proxy with the pose
sequence to enhance the motion semantics of each pose. Finally, PAM leverages
the above mapping between the pose sequence and pose proxy to enhance the
temporal correlation of the whole pose sequence. Experiments on the Human3.6M
and MPI-INF-3DHP datasets demonstrate that our proposed TCPFormer outperforms
the previous state-of-the-art methods.
|
2501.01773 | Compressed Domain Prior-Guided Video Super-Resolution for Cloud Gaming
Content | eess.IV cs.CV | Cloud gaming is an advanced form of Internet service that requires local
terminals to decode video within limited resources and strict latency budgets. Super-Resolution
(SR) techniques are often employed on these terminals as an efficient way to
reduce the required bit-rate bandwidth for cloud gaming. However, insufficient
attention has been paid to SR of compressed game video content. Most SR
networks amplify block artifacts and ringing effects in decoded frames while
ignoring edge details of game content, leading to unsatisfactory reconstruction
results. In this paper, we propose a novel lightweight network called Coding
Prior-Guided Super-Resolution (CPGSR) to address the SR challenges in
compressed game video content. First, we design a Compressed Domain Guided
Block (CDGB) to extract features of different depths from coding priors, which
are subsequently integrated with features from the U-net backbone. Then, a
series of re-parameterization blocks are utilized for reconstruction.
Ultimately, inspired by the quantization in video coding, we propose a
partitioned focal frequency loss to effectively guide the model's focus on
preserving high-frequency information. Extensive experiments demonstrate the
effectiveness of our approach.
|
2501.01774 | A Unifying View of Linear Function Approximation in Off-Policy RL
Through Matrix Splitting and Preconditioning | cs.LG | Traditionally, Temporal Difference (TD) learning and Fitted Q-Iteration (FQI)
are viewed as differing in the number of updates toward the target value
function: TD makes one update, FQI makes an infinite number, and Partial Fitted
Q-Iteration (PFQI) performs a finite number, as with the use of a target
network in Deep Q-Networks (DQN) in the off-policy evaluation (OPE) setting. This
perspective, however, fails to capture the convergence connections between
these algorithms and may lead to incorrect conclusions, for example, that the
convergence of TD implies the convergence of FQI. In this paper, we focus on
linear value function approximation and offer a new perspective, unifying TD,
FQI, and PFQI as the same iterative method for solving the Least Squares
Temporal Difference (LSTD) system, but using different preconditioners and
matrix splitting schemes. TD uses a constant preconditioner, FQI employs a
data-feature adaptive preconditioner, and PFQI transitions between the two.
Then, we reveal that in the context of linear function approximation,
increasing the number of updates under the same target value function
essentially represents a transition from using a constant preconditioner to
data-feature adaptive preconditioner. This unifying perspective also simplifies
the analyses of the convergence conditions for these algorithms and clarifies
many issues. Consequently, we fully characterize the convergence of each
algorithm without assuming specific properties of the chosen features (e.g.,
linear independence). We also examine how common assumptions about feature
representations affect convergence, and discover new conditions on features
that are important for convergence. These convergence conditions allow us to
establish the convergence connections between these algorithms and to address
important questions.
|
2501.01776 | Smooth Rate Limiter Model for Power System Stability Analysis and
Control | eess.SY cs.SY | This letter proposes a smooth Rate Limiter (RL) model for power system
stability analysis and control. The proposed model enables the effects of
derivative bounds to be incorporated into system eigenvalue analysis, while
replicating the behavior of conventional non-smooth RLs with high fidelity. In
addition, it can be duly modified to enhance the system's dynamic control
performance. The behavior of the proposed model is demonstrated through
illustrative examples as well as through a simulation of the New York/New
England 16-machine 68-bus system.
|
2501.01779 | From Occasional to Steady: Habit Formation Insights From a Comprehensive
Fitness Study | cs.CY cs.CE cs.SI | Exercising regularly is widely recognized as a cornerstone of health, yet the
challenge of sustaining consistent exercise habits persists. Understanding the
factors that influence the formation of these habits is crucial for developing
effective interventions. This study utilizes data from Mars Athletic Club,
Türkiye's largest sports chain, to investigate the dynamics of gym attendance
and habit formation. The general problem addressed by this study is identifying
the critical periods and factors that contribute to the successful
establishment of consistent exercise routines among gym-goers. Here we show
that there are specific periods during which gym attendance is most crucial for
habit formation. By developing a survival metric based on gym attendance
patterns, we pinpoint these critical periods and segment members into distinct
clusters based on their visit patterns. Our analysis reveals significant
differences in how various subgroups respond to interventions, such as group
classes, personal trainer sessions, and visiting different clubs. Using causal
inference analysis, we demonstrate that personalized guidance and social
dynamics are key drivers of sustained long-term engagement. By systematically
examining these variables and considering the specific characteristics of
different clusters, our research demonstrates the importance of a tailored,
multi-dimensional approach that integrates social dynamics, personalized
guidance, and strategic interventions to promote lasting exercise habits.
|
2501.01785 | Can Synthetic Data be Fair and Private? A Comparative Study of Synthetic
Data Generation and Fairness Algorithms | cs.LG cs.AI cs.CY | The increasing use of machine learning in learning analytics (LA) has raised
significant concerns around algorithmic fairness and privacy. Synthetic data
has emerged as a dual-purpose tool, enhancing privacy and improving fairness in
LA models. However, prior research suggests an inverse relationship between
fairness and privacy, making it challenging to optimize both. This study
investigates which synthetic data generators can best balance privacy and
fairness, and whether pre-processing fairness algorithms, typically applied to
real datasets, are effective on synthetic data. Our results highlight that the
DEbiasing CAusal Fairness (DECAF) algorithm achieves the best balance between
privacy and fairness. However, DECAF suffers in utility, as reflected in its
predictive accuracy. Notably, we found that applying pre-processing fairness
algorithms to synthetic data improves fairness even more than when applied to
real data. These findings suggest that combining synthetic data generation with
fairness pre-processing offers a promising approach to creating fairer LA
models.
|
2501.01788 | Universal Online Temporal Calibration for Optimization-based
Visual-Inertial Navigation Systems | cs.RO cs.CV | 6-Degree of Freedom (6DoF) motion estimation with a combination of visual and
inertial sensors is a growing area with numerous real-world applications.
However, precise calibration of the time offset between these two sensor types
is a prerequisite for accurate and robust tracking. To address this, we propose
a universal online temporal calibration strategy for optimization-based
visual-inertial navigation systems. Technically, we incorporate the time offset
td as a state parameter in the optimization residual model to align the IMU
state to the corresponding image timestamp using td, angular velocity and
translational velocity. This allows the temporal misalignment td to be
optimized alongside the other tracking states. As our method
only modifies the structure of the residual model, it can be applied to various
optimization-based frameworks with different tracking frontends. We evaluate
our calibration method with both EuRoC and simulation data and extensive
experiments demonstrate that our approach provides more accurate time offset
estimation and faster convergence, particularly in the presence of noisy sensor
data.
|
2501.01790 | Ingredients: Blending Custom Photos with Video Diffusion Transformers | cs.CV | This paper presents a powerful framework to customize video creations by
incorporating multiple specific identity (ID) photos, with video diffusion
Transformers, referred to as \texttt{Ingredients}. Generally, our method
consists of three primary modules: (\textbf{i}) a facial extractor that
captures versatile and precise facial features for each human ID from both
global and local perspectives; (\textbf{ii}) a multi-scale projector that maps
face embeddings into the contextual space of image query in video diffusion
transformers; (\textbf{iii}) an ID router that dynamically combines and
allocates multiple ID embeddings to the corresponding space-time regions.
Leveraging a meticulously curated text-video dataset and a multi-stage training
protocol, \texttt{Ingredients} demonstrates superior performance in turning
custom photos into dynamic and personalized video content. Qualitative
evaluations highlight the advantages of the proposed method over existing
methods, positioning it as a significant advancement toward more effective
generative video control tools in Transformer-based architectures. The data, code,
and model weights are publicly available at:
\url{https://github.com/feizc/Ingredients}.
|
2501.01791 | A Minimal Subset Approach for Efficient and Scalable Loop Closure | cs.CV cs.RO | Loop closure detection in large-scale and long-term missions can be
computationally demanding due to the need to identify, verify, and process
numerous candidate pairs to establish edge connections for the pose graph
optimization. Keyframe sampling mitigates this by reducing the number of frames
stored and processed in the back-end system. In this article, we address the
gap in optimized keyframe sampling for the combined problem of pose graph
optimization and loop closure detection. Our Minimal Subset Approach (MSA)
employs an optimization strategy with two key factors, redundancy minimization
and information preservation, within a sliding window framework to efficiently
reduce redundant keyframes, while preserving essential information. This method
delivers comparable performance to baseline approaches, while enhancing
scalability and reducing computational overhead. Finally, we evaluate MSA on
relevant publicly available datasets, showing that it performs consistently
across a wide range of environments without requiring any manual parameter
tuning.
|
2501.01793 | Creating Artificial Students that Never Existed: Leveraging Large
Language Models and CTGANs for Synthetic Data Generation | cs.LG cs.AI | In this study, we explore the growing potential of AI and deep learning
technologies, particularly Generative Adversarial Networks (GANs) and Large
Language Models (LLMs), for generating synthetic tabular data. Access to
quality student data is critical for advancing learning analytics, but privacy
concerns and stricter data protection regulations worldwide limit its
availability and usage. Synthetic data offers a promising alternative. We
investigate whether synthetic data can be leveraged to create artificial
students for serving learning analytics models. Using the popular GAN model
CTGAN and three LLMs (GPT2, DistilGPT2, and DialoGPT), we generate synthetic
tabular student data. Our results demonstrate the strong potential of these
methods to produce high-quality synthetic datasets that resemble real student
data. To validate our findings, we apply a comprehensive set of utility
evaluation metrics to assess the statistical and predictive performance of the
synthetic data and compare the different generator models used, especially the
performance of LLMs. Our study aims to provide the learning analytics community
with valuable insights into the use of synthetic data, laying the groundwork
for expanding the field's methodological toolbox with new, innovative approaches
for learning analytics data generation.
|
2501.01796 | Reading Between the Lines: A dataset and a study on why some texts are
tougher than others | cs.CL | Our research aims at better understanding what makes a text difficult to read
for specific audiences with intellectual disabilities, more specifically,
people who have limitations in cognitive functioning, such as reading and
understanding skills, an IQ below 70, and challenges in conceptual domains. We
introduce a scheme for the annotation of difficulties which is based on
empirical research in psychology as well as on research in translation studies.
The paper describes the annotated dataset, primarily derived from the parallel
texts (standard English and Easy to Read English translations) made available
online. We fine-tuned four different pre-trained transformer models to perform
the task of multiclass classification to predict the strategies required for
simplification. We also investigate the possibility of interpreting the decisions
of this language model when it is aimed at predicting the difficulty of
sentences. The resources are available from
https://github.com/Nouran-Khallaf/why-tough
|
2501.01798 | JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing | cs.CV | Significant progress has been made in talking-face video generation research;
however, precise lip-audio synchronization and high visual quality remain
challenging in editing lip shapes based on input audio. This paper introduces
JoyGen, a novel two-stage framework for talking-face generation, comprising
audio-driven lip motion generation and visual appearance synthesis. In the
first stage, a 3D reconstruction model and an audio2motion model predict
identity and expression coefficients respectively. Next, by integrating audio
features with a facial depth map, we provide comprehensive supervision for
precise lip-audio synchronization in facial generation. Additionally, we
constructed a Chinese talking-face dataset containing 130 hours of high-quality
video. JoyGen is trained on the open-source HDTF dataset and our curated
dataset. Experimental results demonstrate superior lip-audio synchronization
and visual quality achieved by our method.
|
2501.01799 | Grasping in Uncertain Environments: A Case Study For Industrial Robotic
Recycling | cs.RO cs.SY eess.SY | Autonomous robotic grasping of uncertain objects in uncertain environments is
an impactful open challenge for the industries of the future. One such industry
is the recycling of Waste Electrical and Electronic Equipment (WEEE) materials,
in which electric devices are disassembled and readied for the recovery of raw
materials. Since devices may contain hazardous materials and their disassembly
involves heavy manual labor, robotic disassembly is a promising avenue. However,
because devices may be damaged, dirty, or unidentified, robotic disassembly is
challenging: object models are unavailable or cannot be relied upon. This
case study explores grasping strategies for industrial robotic disassembly of
WEEE devices with uncertain vision data. We propose three grippers and
appropriate tactile strategies for force-based manipulation that improve
grasping robustness. For each proposed gripper, we develop corresponding
strategies that can perform effectively in different grasping tasks and
leverage each gripper's design and unique strengths. Through experiments
conducted in lab and factory settings for four different WEEE devices, we
demonstrate how object uncertainty may be overcome by tactile sensing and
compliant techniques, significantly increasing grasping success rates.
|
2501.01801 | John Ellipsoids via Lazy Updates | cs.DS cs.LG | We give a faster algorithm for computing an approximate John ellipsoid around
$n$ points in $d$ dimensions. The best known prior algorithms are based on
repeatedly computing the leverage scores of the points and reweighting them by
these scores [CCLY19]. We show that this algorithm can be substantially sped up
by delaying the computation of high accuracy leverage scores by using sampling,
and then later computing multiple batches of high accuracy leverage scores via
fast rectangular matrix multiplication. We also give low-space streaming
algorithms for John ellipsoids using similar ideas.
|
2501.01802 | BERT4MIMO: A Foundation Model using BERT Architecture for Massive MIMO
Channel State Information Prediction | cs.IT cs.AI eess.SP math.IT | Massive MIMO (Multiple-Input Multiple-Output) is an advanced wireless
communication technology, using a large number of antennas to improve the
overall performance of the communication system in terms of capacity, spectral
efficiency, and energy efficiency. The performance of MIMO systems is highly dependent on
the quality of channel state information (CSI). Predicting CSI is, therefore,
essential for improving communication system performance, particularly in MIMO
systems, since it represents key characteristics of a wireless channel,
including propagation, fading, scattering, and path loss. This study proposes a
foundation model inspired by BERT, called BERT4MIMO, which is specifically
designed to process high-dimensional CSI data from massive MIMO systems.
BERT4MIMO offers superior performance in reconstructing CSI under varying
mobility scenarios and channel conditions through deep learning and attention
mechanisms. The experimental results demonstrate the effectiveness of BERT4MIMO
in a variety of wireless environments.
|
2501.01805 | End-to-End Long Document Summarization using Gradient Caching | cs.CL cs.AI | Training transformer-based encoder-decoder models for long document
summarization poses a significant challenge due to the quadratic memory
consumption during training. Several approaches have been proposed to extend
the input length at test time, but training with these approaches is still
difficult, requiring truncation of input documents and causing a mismatch
between training and test conditions. In this work, we propose CachED (Gradient
$\textbf{Cach}$ing for $\textbf{E}$ncoder-$\textbf{D}$ecoder models), an
approach that enables end-to-end training of existing transformer-based
encoder-decoder models, using the entire document without truncation.
Specifically, we apply non-overlapping sliding windows to input documents,
followed by fusion in the decoder. During backpropagation, the gradients are cached
at the decoder and are passed through the encoder in chunks by re-computing the
hidden vectors, similar to gradient checkpointing. In the experiments on long
document summarization, we extend BART to CachED BART, processing more than
500K tokens during training and achieving superior performance without using
any additional parameters.
|
2501.01806 | TRG-planner: Traversal Risk Graph-Based Path Planning in Unstructured
Environments for Safe and Efficient Navigation | cs.RO cs.SY eess.SY | Unstructured environments such as mountains, caves, construction sites, or
disaster areas are challenging for autonomous navigation because of terrain
irregularities. In particular, it is crucial to plan a path to avoid risky
terrain and reach the goal quickly and safely. In this paper, we propose a
method for safe and distance-efficient path planning, leveraging Traversal Risk
Graph (TRG), a novel graph representation that takes into account geometric
traversability of the terrain. TRG nodes represent stability and reachability
of the terrain, while edges represent relative traversal risk-weighted path
candidates. Additionally, TRG is constructed in a wavefront propagation manner
and managed hierarchically, enabling real-time planning even in large-scale
environments. Lastly, we formulate a graph optimization problem on TRG that
leads the robot to navigate by prioritizing both safe and short paths. Our
approach demonstrated superior safety, distance efficiency, and fast processing
time compared to conventional methods. It was also validated in several
real-world experiments using a quadrupedal robot. Notably, TRG-planner
contributed as the global path planner of an autonomous navigation framework
for the DreamSTEP team, which won the Quadruped Robot Challenge at ICRA 2023.
The project page is available at https://trg-planner.github.io .
|
2501.01808 | MoEE: Mixture of Emotion Experts for Audio-Driven Portrait Animation | cs.CV | The generation of talking avatars has achieved significant advancements in
precise audio synchronization. However, crafting lifelike talking head videos
requires capturing a broad spectrum of emotions and subtle facial expressions.
Current methods face fundamental challenges: a) the absence of frameworks for
modeling single basic emotional expressions, which restricts the generation of
complex emotions such as compound emotions; b) the lack of comprehensive
datasets rich in human emotional expressions, which limits the potential of
models. To address these challenges, we propose the following innovations: 1)
the Mixture of Emotion Experts (MoEE) model, which decouples six fundamental
emotions to enable the precise synthesis of both singular and compound
emotional states; 2) the DH-FaceEmoVid-150 dataset, specifically curated to
include six prevalent human emotional expressions as well as four types of
compound emotions, thereby expanding the training potential of emotion-driven
models. Furthermore, to enhance the flexibility of emotion control, we propose
an emotion-to-latents module that leverages multimodal inputs, aligning diverse
control signals, such as audio, text, and labels, to ensure more varied control
inputs as well as the ability to control emotions using audio alone. Through
extensive quantitative and qualitative evaluations, we demonstrate that the
MoEE framework, in conjunction with the DH-FaceEmoVid-150 dataset, excels in
generating complex emotional expressions and nuanced facial details, setting a
new benchmark in the field. These datasets will be publicly released.
|
2501.01811 | QuantumBind-RBFE: Accurate Relative Binding Free Energy Calculations
Using Neural Network Potentials | physics.chem-ph cs.LG physics.comp-ph | Accurate prediction of protein-ligand binding affinities is crucial in drug
discovery, particularly during hit-to-lead and lead optimization phases;
however, limitations in ligand force fields continue to impact prediction
accuracy. In this work, we validate relative binding free energy (RBFE)
accuracy using neural network potentials (NNPs) for the ligands. We utilize a
novel NNP model, AceForce 1.0, based on the TensorNet architecture for small
molecules that broadens the applicability to diverse drug-like compounds,
including all important chemical elements and supporting charged molecules.
Using established benchmarks, we show overall improved accuracy and correlation
in binding affinity predictions compared with GAFF2 for molecular mechanics and
ANI2-x for NNPs, and slightly lower accuracy but comparable correlations with
OPLS4. We also show that we can run the NNP simulations at a 2 fs timestep, at
least twice that of previous NNP models, providing significant speed gains. The
results show promise for further evolutions of free energy calculations using
NNPs, while demonstrating their practical use already with the current generation.
The code and NNP model are publicly available for research use.
|
2501.01813 | Eliciting Understandable Architectonic Gestures for Robotic Furniture
through Co-Design Improvisation | cs.HC cs.RO | The vision of adaptive architecture proposes that robotic technologies could
enable interior spaces to physically transform in a bidirectional interaction
with occupants. Yet, it is still unknown how this interaction could unfold in
an understandable way. Inspired by HRI studies where robotic furniture gestured
intents to occupants by deliberately positioning or moving in space, we
hypothesise that adaptive architecture could also convey intents through
gestures performed by a mobile robotic partition. To explore this design space,
we invited 15 multidisciplinary experts to join co-design improvisation
sessions, where they manually manoeuvred a deactivated robotic partition to
design gestures conveying six architectural intents that varied in purpose and
urgency. Using a gesture elicitation method alongside motion-tracking data, a
Laban-based questionnaire, and thematic analysis, we identified 20 unique
gestural strategies. Through categorisation, we introduced architectonic
gestures as a novel strategy for robotic furniture to convey intent by
indexically leveraging its spatial impact, complementing the established
deictic and emblematic gestures. Our study thus represents an exploratory step
toward making the autonomous gestures of adaptive architecture more legible. By
understanding how robotic gestures are interpreted based not only on their
motion but also on their spatial impact, we contribute to bridging HRI with
Human-Building Interaction research.
|
2501.01816 | Uncertainty-Aware Label Refinement on Hypergraphs for Personalized
Federated Facial Expression Recognition | cs.CV | Most facial expression recognition (FER) models are trained on large-scale
expression data with centralized learning. Unfortunately, collecting a large
amount of centralized expression data is difficult in practice due to privacy
concerns of facial images. In this paper, we investigate FER under the
framework of personalized federated learning, which is a valuable and practical
decentralized setting for real-world applications. To this end, we develop a
novel uncertainty-Aware label refineMent on hYpergraphs (AMY) method. For local
training, each local model consists of a backbone, an uncertainty estimation
(UE) block, and an expression classification (EC) block. In the UE block, we
leverage a hypergraph to model complex high-order relationships between
expression samples and incorporate these relationships into uncertainty
features. A personalized uncertainty estimator is then introduced to estimate
reliable uncertainty weights of samples in the local client. In the EC block,
we perform label propagation on the hypergraph, obtaining high-quality refined
labels for retraining an expression classifier. Based on the above, we
effectively alleviate heterogeneous sample uncertainty across clients and learn
a robust personalized FER model in each client. Experimental results on two
challenging real-world facial expression databases show that our proposed
method consistently outperforms several state-of-the-art methods. This
indicates the superiority of hypergraph modeling for uncertainty estimation and
label refinement on the personalized federated FER task. The source code will
be released at https://github.com/mobei1006/AMY.
|
2501.01817 | Distributed Framework Construction for Affine Formation Control | eess.SY cs.SY | In affine formation control problems, the construction of the framework with
universal rigidity and affine localizability is a critical prerequisite, but it
has not yet been well addressed, especially when additional agents join the
formation or link/agent failures emerge. Motivated by this observation, we
investigate the problem of constructing affine formation frameworks in three
scenarios, including vertex addition, edge deletion and vertex deletion. Our
approach starts from the original affine formation and uses geometric methods
to locally adjust the structure of the weighted graph to describe the topology,
so that the modified framework maintains the universal rigidity and affine
localizability. Notably, the developed strategies only utilize local
measurements and exhibit distributed characteristics, laying the foundation for
applications in multi-agent systems. To demonstrate the compatibility with
affine formation control proposals, we present a case study on affine formation
tracking in a multi-UAV formation, demonstrating the effectiveness of our
algorithms in constructing eligible frameworks in aforementioned scenarios.
Moreover, a comparative simulation is conducted to highlight the low time
complexity of our distributed algorithm relative to the centralized
optimization-based method.
|
2501.01818 | Rerouting LLM Routers | cs.CR cs.LG | LLM routers aim to balance quality and cost of generation by classifying
queries and routing them to a cheaper or more expensive LLM depending on their
complexity. Routers represent one type of what we call LLM control planes:
systems that orchestrate use of one or more LLMs. In this paper, we investigate
routers' adversarial robustness.
We first define LLM control plane integrity, i.e., robustness of LLM
orchestration to adversarial inputs, as a distinct problem in AI safety. Next,
we demonstrate that an adversary can generate query-independent token sequences
we call ``confounder gadgets'' that, when added to any query, cause LLM routers
to send the query to a strong LLM.
Our quantitative evaluation shows that this attack is successful both in
white-box and black-box settings against a variety of open-source and
commercial routers, and that confounding queries do not affect the quality of
LLM responses. Finally, we demonstrate that gadgets can be effective while
maintaining low perplexity; thus, perplexity-based filtering is not an effective
defense. We finish by investigating alternative defenses.
|
2501.01821 | SDPO: Segment-Level Direct Preference Optimization for Social Agents | cs.AI cs.CL | Social agents powered by large language models (LLMs) can simulate human
social behaviors but fall short in handling complex goal-oriented social
dialogues. Direct Preference Optimization (DPO) has proven effective in
aligning LLM behavior with human preferences across a variety of agent tasks.
Existing DPO-based approaches for multi-turn interactions are divided into
turn-level and session-level methods. Turn-level methods are overly
fine-grained, focusing exclusively on individual turns, while session-level
methods are too coarse-grained, often introducing training noise. To address
these limitations, we propose Segment-Level Direct Preference Optimization
(SDPO), which focuses on specific key segments within interactions to optimize
multi-turn agent behavior while minimizing training noise. Evaluations on the
SOTOPIA benchmark demonstrate that SDPO-tuned agents consistently outperform
both existing DPO-based methods and proprietary LLMs like GPT-4o, underscoring
SDPO's potential to advance the social intelligence of LLM-based agents. We
release our code and data at
https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/SDPO.
|
2501.01825 | Unified Native Spaces in Kernel Methods | stat.ML cs.LG | There exists a plethora of parametric models for positive definite kernels,
and their use is ubiquitous in disciplines as diverse as statistics, machine
learning, numerical analysis, and approximation theory. Usually, the kernel
parameters index certain features of an associated process. Amongst those
features, smoothness (in the sense of Sobolev spaces, mean square
differentiability, and fractal dimensions), compact or global supports, and
negative dependencies (hole effects) are of interest to several theoretical and
applied disciplines. This paper unifies a wealth of well-known kernels into a
single parametric class that encompasses them as special cases, attained either
by exact parameterization or through parametric asymptotics. We furthermore
characterize the Sobolev space that is norm equivalent to the RKHS associated
with the new kernel. As a by-product, we infer the Sobolev spaces that are
associated with existing classes of kernels. We illustrate the main properties
of the new class, show how this class can switch from compact to global
supports, and provide special cases for which the kernel attains negative
values over nontrivial intervals. Hence, the proposed class of kernels is the
reproducing kernel of a very rich Hilbert space that contains many special
cases, including the celebrated Mat\'ern and Wendland kernels, as well as their
aliases with hole effects.
|
2501.01827 | The Proof is in the Almond Cookies | cs.CL cs.AI | This paper presents a case study on how to process cooking recipes (and more
generally, how-to instructions) in a way that makes it possible for a robot or
artificial cooking assistant to support human chefs in the kitchen. Such AI
assistants would be of great benefit to society, as they can help to sustain
the autonomy of aging adults or people with a physical impairment, or they may
reduce the stress in a professional kitchen. We propose a novel approach to
computational recipe understanding that mimics the human sense-making process,
which is narrative-based. Using an English recipe for almond crescent cookies
as illustration, we show how recipes can be modelled as rich narrative
structures by integrating various knowledge sources such as language
processing, ontologies, and mental simulation. We show how such narrative
structures can be used for (a) dealing with the challenges of recipe language,
such as zero anaphora, (b) optimizing a robot's planning process, (c) measuring
how well an AI system understands its current tasks, and (d) allowing recipe
annotations to become language-independent.
|
2501.01828 | Age-Based Device Selection and Transmit Power Optimization in
Over-the-Air Federated Learning | cs.NI cs.LG | Recently, over-the-air federated learning (FL) has attracted significant
attention for its ability to enhance communication efficiency. However, the
performance of over-the-air FL is often constrained by device selection
strategies and signal aggregation errors. In particular, neglecting straggler
devices in FL can lead to a decline in the fairness of model updates and
amplify the global model's bias toward certain devices' data, ultimately
impacting the overall system performance. To address this issue, we propose a
joint device selection and transmit power optimization framework that ensures
the appropriate participation of straggler devices, maintains efficient
training performance, and guarantees timely updates. First, we conduct a
theoretical analysis to quantify the convergence upper bound of over-the-air FL
under age-of-information (AoI)-based device selection. Our analysis further
reveals that both the number of selected devices and the signal aggregation
errors significantly influence the convergence upper bound. To minimize the
expected weighted sum peak age of information, we calculate device priorities
for each communication round using Lyapunov optimization and select the
highest-priority devices via a greedy algorithm. Then, we formulate and solve a
transmit power and normalizing factor optimization problem for selected devices
to minimize the time-average mean squared error (MSE). Experimental results
demonstrate that our proposed method offers two significant advantages: (1) it
reduces MSE and improves model performance compared to baseline methods, and
(2) it strikes a balance between fairness and training efficiency while
maintaining satisfactory timeliness, ensuring stable model performance.
|
2501.01830 | Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large
Language Models | cs.CR cs.AI cs.CL | Automated red-teaming has become a crucial approach for uncovering
vulnerabilities in large language models (LLMs). However, most existing methods
focus on isolated safety flaws, limiting their ability to adapt to dynamic
defenses and uncover complex vulnerabilities efficiently. To address this
challenge, we propose Auto-RT, a reinforcement learning framework that
automatically explores and optimizes complex attack strategies to effectively
uncover security vulnerabilities through malicious queries. Specifically, we
introduce two key mechanisms to reduce exploration complexity and improve
strategy optimization: 1) Early-terminated Exploration, which accelerates
exploration by focusing on high-potential attack strategies; and 2) a
Progressive Reward Tracking algorithm with intermediate downgrade models, which
dynamically refines the search trajectory toward successful vulnerability
exploitation.
Extensive experiments across diverse LLMs demonstrate that, by significantly
improving exploration efficiency and automatically optimizing attack
strategies, Auto-RT detects a broader range of vulnerabilities, achieving a
faster detection speed and 16.63\% higher success rates compared to existing
methods.
|
2501.01831 | Online Fault Tolerance Strategy for Abrupt Reachability Constraint
Changes | eess.SY cs.RO cs.SY | When a system's constraints change abruptly, the system's reachability safety
no longer holds. Thus, the system can reach a forbidden/dangerous value.
The conventional remedy typically involves online controller redesign (OCR) to
re-establish the reachability's compliance with the new constraints, which,
however, is usually too slow. There is a need for an online strategy capable of
managing runtime changes in reachability constraints. However, to the best of
the authors' knowledge, this topic has not been addressed in the existing
literature. In this paper, we propose a fast fault tolerance strategy to
recover the system's reachability safety in runtime. Instead of redesigning the
system's controller, we propose to change the system's reference state to
modify the system's reachability to comply with the new constraints. We frame
the reference state search as an optimization problem and employ the
Karush-Kuhn-Tucker (KKT) method as well as the Interior Point Method (IPM)
based Newton's method (as a fallback for the KKT method) for fast solution
derivation. The optimization also allows more future fault tolerance. Numerical
simulations demonstrate that our method outperforms the conventional OCR method
in terms of computational efficiency and success rate. Specifically, the
results show that the proposed method finds a solution $10^{2}$ (with the IPM
based Newton's method) $\sim 10^{4}$ (with the KKT method) times faster than
the OCR method. Additionally, the improvement rate of the success rate of our
method over the OCR method is $40.81\%$ without considering the deadline of run
time. The success rate remains at $49.44\%$ for the proposed method, while it
becomes $0\%$ for the OCR method when a deadline of $1.5 \; seconds$ is
imposed.
|
2501.01832 | Time Series Language Model for Descriptive Caption Generation | cs.CL cs.LG | The automatic generation of representative natural language descriptions for
observable patterns in time series data enhances interpretability, simplifies
analysis and increases cross-domain utility of temporal data. While pre-trained
foundation models have made considerable progress in natural language
processing (NLP) and computer vision (CV), their application to time series
analysis has been hindered by data scarcity. Although several large language
model (LLM)-based methods have been proposed for time series forecasting, time
series captioning is under-explored in the context of LLMs. In this paper, we
introduce TSLM, a novel time series language model designed specifically for
time series captioning. TSLM operates as an encoder-decoder model, leveraging
both text prompts and time series data representations to capture subtle
temporal patterns across multiple phases and generate precise textual
descriptions of time series inputs. TSLM addresses the data scarcity problem in
time series captioning by first leveraging in-context prompting for synthetic
data generation, and second by denoising the generated data via a novel
cross-modal dense retrieval scoring applied to time series-caption pairs.
Experimental findings on various time series captioning datasets demonstrate
that TSLM outperforms existing state-of-the-art approaches from multiple data
modalities by a significant margin.
|
2501.01834 | MoColl: Agent-Based Specific and General Model Collaboration for Image
Captioning | cs.CV cs.AI | Image captioning is a critical task at the intersection of computer vision
and natural language processing, with wide-ranging applications across various
domains. For complex tasks such as diagnostic report generation, deep learning
models require not only domain-specific image-caption datasets but also the
incorporation of relevant general knowledge to provide contextual accuracy.
Existing approaches exhibit inherent limitations: specialized models excel in
capturing domain-specific details but lack generalization, while
vision-language models (VLMs) built on large language models (LLMs) leverage
general knowledge but struggle with domain-specific adaptation. To address
these limitations, this paper proposes a novel agent-enhanced model
collaboration framework, which we call MoColl, designed to effectively
integrate domain-specific and general knowledge. Specifically, our approach is
to decompose complex image captioning tasks into a series of interconnected
question-answer subtasks. A trainable visual question answering (VQA) model is
employed as a specialized tool to focus on domain-specific visual analysis,
answering task-specific questions based on image content. Concurrently, an
LLM-based agent with general knowledge formulates these questions and
synthesizes the resulting question-answer pairs into coherent captions. Beyond
its role in leveraging the VQA model, the agent further guides its training to
enhance its domain-specific capabilities. Experimental results on radiology
report generation validate the effectiveness of the proposed framework,
demonstrating significant improvements in the quality of generated reports.
|
2501.01835 | ASKCOS: an open source software suite for synthesis planning | cs.AI | The advancement of machine learning and the availability of large-scale
reaction datasets have accelerated the development of data-driven models for
computer-aided synthesis planning (CASP) in the past decade. Here, we detail
the newest version of ASKCOS, an open source software suite for synthesis
planning that makes available several research advances in a freely available,
practical tool. Four one-step retrosynthesis models form the basis of both
interactive planning and automatic planning modes. Retrosynthetic planning is
complemented by other modules for feasibility assessment and pathway
evaluation, including reaction condition recommendation, reaction outcome
prediction, and auxiliary capabilities such as solubility prediction and
quantum mechanical descriptor prediction. ASKCOS has assisted hundreds of
medicinal, synthetic, and process chemists in their day-to-day tasks,
complementing expert decision making. It is our belief that CASP tools like
ASKCOS are an important part of modern chemistry research, and that they offer
ever-increasing utility and accessibility.
|
2501.01836 | Practical machine learning is learning on small samples | cs.LG cs.AI | Based on limited observations, machine learning discerns a dependence which
is expected to hold in the future. What makes it possible? Statistical learning
theory imagines an indefinitely increasing training sample to justify its
approach. In reality, there is no infinite time or even an infinite general
population for learning. Here I argue that practical machine learning is based
on an implicit assumption that the underlying dependence is relatively
``smooth'': likely, there are no abrupt differences in feedback between cases
with close data points. From this point of view, learning should involve
selecting a hypothesis that ``smoothly'' approximates the training set. I
formalize this as the Practical learning paradigm. The paradigm includes
terminology and rules for describing learners. Popular learners (local
smoothing, k-NN, decision
trees, Naive Bayes, SVM for classification and for regression) are shown here
to be implementations of this paradigm.
|
2501.01840 | Signal Recovery Using a Spiked Mixture Model | stat.ML cs.LG | We introduce the spiked mixture model (SMM) to address the problem of
estimating a set of signals from many randomly scaled and noisy observations.
Subsequently, we design a novel expectation-maximization (EM) algorithm to
recover all parameters of the SMM. Numerical experiments show that in low
signal-to-noise ratio regimes, and for data types where the SMM is relevant,
SMM surpasses the more traditional Gaussian mixture model (GMM) in terms of
signal recovery performance. The broad relevance of the SMM and its
corresponding EM recovery algorithm is demonstrated by applying the technique
to different data types. The first case study is a biomedical research
application, utilizing an imaging mass spectrometry dataset to explore the
molecular content of a rat brain tissue section at micrometer scale. The second
case study demonstrates SMM performance in a computer vision application,
segmenting a hyperspectral imaging dataset into underlying patterns. While the
measurement modalities differ substantially, in both case studies SMM is shown
to recover signals that were missed by traditional methods such as k-means
clustering and GMM.
|
2501.01841 | Dedicated Inference Engine and Binary-Weight Neural Networks for
Lightweight Instance Segmentation | cs.CV cs.AR | Reducing computational costs is an important issue for the development of
embedded systems. Binary-weight Neural Networks (BNNs), in which weights are
binarized and activations are quantized, are employed to reduce computational
costs of various kinds of applications. In this paper, a design methodology of
hardware architecture for inference engines is proposed to handle modern BNNs
with two operation modes. Multiply-Accumulate (MAC) operations can be
simplified by replacing multiply operations with bitwise operations. The
proposed method can effectively reduce the gate count of inference engines by
removing a part of computational costs from the hardware system. The
architecture of MAC operations can calculate the inference results of BNNs
efficiently with only 52% of hardware costs compared with the related works. To
show that the inference engine can handle practical applications, two
lightweight networks which combine the backbones of SegNeXt and the decoder of
SparseInst for instance segmentation are also proposed. The output results of
the lightweight networks are computed using only bitwise operations and add
operations. The proposed inference engine has lower hardware costs than related
works. The experimental results show that the proposed inference engine can
handle the proposed instance-segmentation networks and achieves higher accuracy
than YOLACT on the "Person" category, although its model size is 77.7$\times$
smaller.
|
2501.01844 | Learning from Ambiguous Data with Hard Labels | cs.LG | Real-world data often contains intrinsic ambiguity that the common
single-hard-label annotation paradigm ignores. Standard training using
ambiguous data with these hard labels may produce overly confident models and
thus lead to poor generalization. In this paper, we propose a novel
framework called Quantized Label Learning (QLL) to alleviate this issue. First,
we formulate QLL as learning from (very) ambiguous data with hard labels:
ideally, each ambiguous instance should be associated with a ground-truth
soft-label distribution describing its corresponding probabilistic weight in
each class, however, this is usually not accessible; in practice, we can only
observe a quantized label, i.e., a hard label sampled (quantized) from the
corresponding ground-truth soft-label distribution, of each instance, which can
be seen as a biased approximation of the ground-truth soft-label. Second, we
propose a Class-wise Positive-Unlabeled (CPU) risk estimator that allows us to
train accurate classifiers from only ambiguous data with quantized labels.
Third, to simulate ambiguous datasets with quantized labels in the real world,
we design a mixing-based ambiguous data generation procedure for empirical
evaluation. Experiments demonstrate that our CPU method can significantly
improve model generalization performance and outperform the baselines.
|
2501.01845 | Semantic Segmentation for Sequential Historical Maps by Learning from
Only One Map | cs.CV | Historical maps are valuable resources that capture detailed geographical
information from the past. However, these maps are typically available in
printed formats, which are not conducive to modern computer-based analyses.
Digitizing these maps into a machine-readable format enables efficient
computational analysis. In this paper, we propose an automated approach to
digitization using deep-learning-based semantic segmentation, which assigns a
semantic label to each pixel in scanned historical maps. A key challenge in
this process is the lack of ground-truth annotations required for training deep
neural networks, as manual labeling is time-consuming and labor-intensive. To
address this issue, we introduce a weakly-supervised age-tracing strategy for
model fine-tuning. This approach exploits the similarity in appearance and
land-use patterns between historical maps from neighboring time periods to
guide the training process. Specifically, model predictions for one map are
utilized as pseudo-labels for training on maps from adjacent time periods.
Experiments conducted on our newly curated \textit{Hameln} dataset demonstrate
that the proposed age-tracing strategy significantly enhances segmentation
performance compared to baseline models. In the best-case scenario, the mean
Intersection over Union (mIoU) achieved 77.3\%, reflecting an improvement of
approximately 20\% over baseline methods. Additionally, the fine-tuned model
achieved an average overall accuracy of 97\%, highlighting the effectiveness of
our approach for digitizing historical maps.
|
2501.01849 | Multi-Agent Conversational Online Learning for Adaptive LLM Response
Identification | cs.HC cs.AI | The remarkable generative capability of large language models (LLMs) has
sparked a growing interest in automatically generating responses for different
applications. Given the dynamic nature of user preferences and the uncertainty
of LLM response performance, it is crucial to design efficient online learning
algorithms to identify optimal LLM responses (i.e., high-quality responses that
also meet user preferences). Most existing online algorithms adopt a
centralized approach and fail to leverage explicit user preferences for more
efficient and personalized LLM response identification. In contrast, this paper
introduces \textit{MACO} (\underline{M}ulti-\underline{A}gent
\underline{C}onversational \underline{O}nline Learning for Adaptive LLM
Response Identification): 1) The online LLM response identification process is
accelerated by multiple local agents (such as smartphones), while enhancing
data privacy; 2) A novel conversational mechanism is proposed to adaptively
conduct conversations for soliciting user preferences (e.g., a preference for a
humorous tone over a serious one in generated responses), so as to minimize
uncertainty in preference estimation. Our theoretical analysis demonstrates
that MACO is near-optimal regarding cumulative regret. Additionally, MACO
offers reduced communication costs and computational complexity by eliminating
the traditional, computing-intensive ``G-optimal design" found in previous
works. Extensive experiments with the open LLM \textit{Llama}, coupled with two
different embedding models from Google and OpenAI for text vector
representation, demonstrate that MACO significantly outperforms the current
state-of-the-art in online LLM response identification.
|
2501.01850 | LCFed: An Efficient Clustered Federated Learning Framework for
Heterogeneous Data | cs.LG cs.AI cs.DC | Clustered federated learning (CFL) addresses the performance challenges posed
by data heterogeneity in federated learning (FL) by organizing edge devices
with similar data distributions into clusters, enabling collaborative model
training tailored to each group. However, existing CFL approaches strictly
limit knowledge sharing to within clusters, lacking the integration of global
knowledge with intra-cluster training, which leads to suboptimal performance.
Moreover, traditional clustering methods incur significant computational
overhead, especially as the number of edge devices increases. In this paper, we
propose LCFed, an efficient CFL framework to combat these challenges. By
leveraging model partitioning and adopting distinct aggregation strategies for
each sub-model, LCFed effectively incorporates global knowledge into
intra-cluster co-training, achieving optimal training performance.
Additionally, LCFed customizes a computationally efficient model similarity
measurement method based on low-rank models, enabling real-time cluster updates
with minimal computational overhead. Extensive experiments show that LCFed
outperforms state-of-the-art benchmarks in both test accuracy and clustering
computational efficiency.
|
2501.01855 | UAV-DETR: Efficient End-to-End Object Detection for Unmanned Aerial
Vehicle Imagery | cs.CV | Unmanned aerial vehicle object detection (UAV-OD) has been widely used in
various scenarios. However, most existing UAV-OD algorithms rely on manually
designed components, which require extensive tuning. End-to-end models that do
not depend on such manually designed components are mainly designed for natural
images, which are less effective for UAV imagery. To address such challenges,
this paper proposes an efficient detection transformer (DETR) framework
tailored for UAV imagery, i.e., UAV-DETR. The framework includes a multi-scale
feature fusion with frequency enhancement module, which captures both spatial
and frequency information at different scales. In addition, a frequency-focused
down-sampling module is presented to retain critical spatial details during
down-sampling. A semantic alignment and calibration module is developed to
align and fuse features from different fusion paths. Experimental results
demonstrate the effectiveness and generalization of our approach across various
UAV imagery datasets. On the VisDrone dataset, our method improves AP by 3.1\%
and $\text{AP}_{50}$ by 4.2\% over the baseline. Similar enhancements are
observed on the UAVVaste dataset. The project page:
https://github.com/ValiantDiligent/UAV-DETR
|
2501.01859 | Deposition Rates in Thermal Laser Epitaxy: Simulation and Experiment | cs.CE physics.comp-ph | The modeling of deposition rates in Thermal Laser Epitaxy (TLE) is essential
for the accurate prediction of the evaporation process and for improved dynamic
process control. We demonstrate excellent agreement between experimental data
and a model based on a finite element simulation that describes the temperature
distribution of an elemental source when irradiated with continuous wave laser
radiation. The simulation strongly depends on the thermophysical constants of
the material, data of which is lacking for many elements. Effective values for
the parameters may be determined with precision by means of an unambiguous
reference provided by the melting point of the material, which is directly
observed during the experiments. TLE may therefore be used to study the high
temperature thermophysical and optical properties of the elements.
|
2501.01864 | Towards Hard and Soft Shadow Removal via Dual-Branch Separation Network
and Vision Transformer | cs.CV | Image shadow removal is a crucial task in computer vision. In real-world
scenes, shadows alter image color and brightness, posing challenges for
perception and texture recognition. Traditional and deep learning methods often
overlook the distinct needs for handling hard and soft shadows, thereby lacking
detailed processing to specifically address each type of shadow in images. We
propose a dual-path model that processes these shadows separately using
specially designed loss functions to accomplish the hard and soft shadow
removal. The model classifies shadow types and processes them through
appropriate paths to produce shadow-free outputs, integrating a Vision
Transformer with UNet++ for enhanced edge detail and feature fusion. Our model
outperforms state-of-the-art methods and achieves 2.905 RMSE value on the ISTD
dataset, which demonstrates greater effectiveness than typical single-path
approaches.
|
2501.01872 | Turning Logic Against Itself : Probing Model Defenses Through
Contrastive Questions | cs.CL | Large language models, despite extensive alignment with human values and
ethical principles, remain vulnerable to sophisticated jailbreak attacks that
exploit their reasoning abilities. Existing safety measures often detect overt
malicious intent but fail to address subtle, reasoning-driven vulnerabilities.
In this work, we introduce POATE (Polar Opposite query generation, Adversarial
Template construction, and Elaboration), a novel jailbreak technique that
harnesses contrastive reasoning to provoke unethical responses. POATE crafts
semantically opposing intents and integrates them with adversarial templates,
steering models toward harmful outputs with remarkable subtlety. We conduct
extensive evaluation across six diverse language model families of varying
parameter sizes to demonstrate the robustness of the attack, achieving
significantly higher attack success rates (~44%) compared to existing methods.
To counter this, we propose Intent-Aware CoT and Reverse Thinking CoT, which
decompose queries to detect malicious intent and reason in reverse to evaluate
and reject harmful responses. These methods enhance reasoning robustness and
strengthen the model's defense against adversarial exploits.
|
2501.01874 | DFF: Decision-Focused Fine-tuning for Smarter Predict-then-Optimize with
Limited Data | cs.LG | Decision-focused learning (DFL) offers an end-to-end approach to the
predict-then-optimize (PO) framework by training predictive models directly on
decision loss (DL), enhancing decision-making performance within PO contexts.
However, the implementation of DFL poses distinct challenges. Primarily, DL can
result in deviation from the physical significance of the predictions under
limited data. Additionally, some predictive models are non-differentiable or
black-box, which cannot be adjusted using gradient-based methods. To tackle the
above challenges, we propose a novel framework, Decision-Focused Fine-tuning
(DFF), which embeds the DFL module into the PO pipeline via a novel bias
correction module. DFF is formulated as a constrained optimization problem that
maintains the proximity of the DL-enhanced model to the original predictive
model within a defined trust region. We theoretically prove that DFF strictly
confines prediction bias within a predetermined upper bound, even with limited
datasets, thereby substantially reducing prediction shifts caused by DL under
limited data. Furthermore, the bias correction module can be integrated into
diverse predictive models, enhancing adaptability to a broad range of PO tasks.
Extensive evaluations on synthetic and real-world datasets, including network
flow, portfolio optimization, and resource allocation problems with different
predictive models, demonstrate that DFF not only improves decision performance
but also adheres to fine-tuning constraints, showcasing robust adaptability
across various scenarios.
|
2501.01876 | Accuracy Can Lie: On the Impact of Surrogate Model in Configuration
Tuning | cs.SE cs.AI | To ease the expensive measurements during configuration tuning, it is natural
to build a surrogate model as the replacement of the system, and thereby the
configuration performance can be cheaply evaluated. Yet, a stereotype therein
is that the higher the model accuracy, the better the tuning result would be.
This "accuracy is all" belief drives our research community to build more and
more accurate models and criticize a tuner for the inaccuracy of the model
used. However, this practice raises some previously unaddressed questions,
e.g., Do those somewhat small accuracy improvements reported in existing work
really matter much to the tuners? What role does model accuracy play in tuning
quality? To answer these questions, we conduct one of the largest-scale
empirical studies to date, running 24/7 over a period of 13 months, covering 10
models, 17 tuners, and 29 systems from existing work under four commonly used
metrics, leading to 13,612 cases
of investigation. Surprisingly, our key findings reveal that the accuracy can
lie: there are a considerable number of cases where higher accuracy actually
leads to no improvement in the tuning outcomes (up to 58% of cases under
certain settings), or, even worse, degrades the tuning quality (up to 24% of
cases under certain settings). We also discover that the chosen models in most
proposed tuners are sub-optimal and that the required % of accuracy change to
significantly improve tuning quality varies according to the range of model
accuracy. Drawing on fitness landscape analysis, we provide an in-depth
discussion of the rationale behind these findings, offering several lessons learned as well
as insights for future opportunities. Most importantly, this work poses a clear
message to the community: we should take one step back from the natural
"accuracy is all" belief for model-based configuration tuning.
|
2501.01877 | ANTHROPOS-V: benchmarking the novel task of Crowd Volume Estimation | cs.CV | We introduce the novel task of Crowd Volume Estimation (CVE), defined as the
process of estimating the collective body volume of crowds using only RGB
images. Besides event management and public safety, CVE can be instrumental in
approximating body weight, unlocking weight-sensitive applications such as
infrastructure stress assessment and ensuring even weight balance. We propose
the first benchmark for CVE, comprising ANTHROPOS-V, a synthetic photorealistic
video dataset featuring crowds in diverse urban environments. Its annotations
include each person's volume, SMPL shape parameters, and keypoints. Also, we
explore metrics pertinent to CVE, define baseline models adapted from Human
Mesh Recovery and Crowd Counting domains, and propose a CVE-specific
methodology that surpasses baselines. Although synthetic, the weights and
heights of individuals are aligned with the real-world population distribution
across genders, and they transfer to the downstream task of CVE from real
images. Benchmark and code are available at
github.com/colloroneluca/Crowd-Volume-Estimation.
|
2501.01880 | Long Context vs. RAG for LLMs: An Evaluation and Revisits | cs.CL | Extending context windows (i.e., Long Context, LC) and using retrievers to
selectively access relevant information (i.e., Retrieval-Augmented Generation,
RAG) are the two main strategies to enable LLMs to incorporate extremely long
external contexts. This paper revisits recent studies on this topic,
highlighting their key insights and discrepancies. We then provide a more
comprehensive evaluation by filtering out questions answerable without external
context, identifying the most effective retrieval methods, and expanding the
datasets. We show that LC generally outperforms RAG in question-answering
benchmarks, especially for Wikipedia-based questions. Summarization-based
retrieval performs comparably to LC, while chunk-based retrieval lags behind.
However, RAG has advantages in dialogue-based and general question queries.
These insights underscore the trade-offs between RAG and LC strategies,
offering guidance for future optimization of LLMs with external knowledge
sources. We also provide an in-depth discussion on this topic, highlighting the
overlooked importance of context relevance in existing studies.
|
2501.01884 | Telegram as a Battlefield: Kremlin-related Communications during the
Russia-Ukraine Conflict | cs.SI cs.CY cs.HC | Telegram emerged as a crucial platform for both parties during the conflict
between Russia and Ukraine. Owing to its minimal content moderation policies,
pro-Kremlin narratives and potential misinformation spread on Telegram, while
anti-Kremlin narratives and related content were also propagated, such
as war footage, troop movements, maps of bomb shelters, and air raid warnings.
This paper presents a dataset of posts from both pro-Kremlin and anti-Kremlin
Telegram channels, collected over a period spanning a year before and a year
after the Russian invasion. The dataset comprises 404 pro-Kremlin channels with
4,109,645 posts and 114 anti-Kremlin channels with 1,117,768 posts. We provide
details on the data collection process, processing methods, and dataset
characterization. Lastly, we discuss the potential research opportunities this
dataset may enable researchers across various disciplines.
|
2501.01886 | Evaluating Scenario-based Decision-making for Interactive Autonomous
Driving Using Rational Criteria: A Survey | cs.RO cs.AI cs.SY eess.SY | Autonomous vehicles (AVs) can significantly promote the advances in road
transport mobility in terms of safety, reliability, and decarbonization.
However, ensuring safety and efficiency during interactions within dynamic
and diverse environments is still a primary barrier to large-scale AV adoption.
In recent years, deep reinforcement learning (DRL) has emerged as an advanced
AI-based approach, enabling AVs to learn decision-making strategies adaptively
from data and interactions. DRL strategies are better suited than traditional
rule-based methods for handling complex, dynamic, and unpredictable driving
environments due to their adaptivity. However, varying driving scenarios
present distinct challenges, such as avoiding obstacles on highways and
reaching specific exits at intersections, requiring different scenario-specific
decision-making algorithms. Many DRL algorithms have been proposed in
interactive decision-making. However, a rational review of these DRL
algorithms across various scenarios is lacking. Therefore, a comprehensive
evaluation is essential to assess these algorithms from multiple perspectives,
including those of vehicle users and vehicle manufacturers. This survey reviews
the application of DRL algorithms in autonomous driving across typical
scenarios, summarizing road features and recent advancements. The scenarios
include highways, on-ramp merging, roundabouts, and unsignalized intersections.
Furthermore, DRL-based algorithms are evaluated based on five rational
criteria: driving safety, driving efficiency, training efficiency,
unselfishness, and interpretability (DDTUI). Each criterion of DDTUI is
specifically analyzed in relation to the reviewed algorithms. Finally, the
challenges for future DRL-based decision-making algorithms are summarized.
|
2501.01889 | Exploring Equality: An Investigation into Custom Loss Functions for
Fairness Definitions | cs.LG cs.CY | This paper explores the complex tradeoffs between various fairness metrics
such as equalized odds, disparate impact, and equal opportunity and predictive
accuracy within COMPAS by building neural networks trained with custom loss
functions optimized to specific fairness criteria. This paper creates the first
fairness-driven implementation of the novel Group Accuracy Parity (GAP)
framework, as theoretically proposed by Gupta et al. (2024), and applies it to
COMPAS. To operationalize and accurately compare the fairness of COMPAS models
optimized to differing fairness ideals, this paper develops and proposes a
combinatory analytical procedure that incorporates Pareto front and
multivariate analysis, leveraging data visualizations such as violin graphs.
This paper concludes that GAP achieves an enhanced equilibrium between fairness
and accuracy compared to COMPAS's current nationwide implementation and
alternative implementations of COMPAS optimized to more traditional fairness
definitions. While this paper's algorithmic improvements of COMPAS
significantly augment its fairness, external biases undermine the fairness of
its implementation. Practices such as predictive policing and issues such as
the lack of transparency regarding COMPAS's internal workings have contributed
to the algorithm's historical injustice. In conjunction with developments
regarding COMPAS's predictive methodology, legal and institutional changes must
happen for COMPAS's just deployment.
|
2501.01892 | QuArch: A Question-Answering Dataset for AI Agents in Computer
Architecture | cs.AR cs.AI cs.LG | We introduce QuArch, a dataset of 1500 human-validated question-answer pairs
designed to evaluate and enhance language models' understanding of computer
architecture. The dataset covers areas including processor design, memory
systems, and performance optimization. Our analysis highlights a significant
performance gap: the best closed-source model achieves 84% accuracy, while the
top small open-source model reaches 72%. We observe notable struggles in memory
systems, interconnection networks, and benchmarking. Fine-tuning with QuArch
improves small model accuracy by up to 8%, establishing a foundation for
advancing AI-driven computer architecture research. The dataset and leaderboard
are at https://harvard-edge.github.io/QuArch/.
|
2501.01895 | EnerVerse: Envisioning Embodied Future Space for Robotics Manipulation | cs.RO cs.CV cs.LG | We introduce EnerVerse, a generative robotics foundation model that
constructs and interprets embodied spaces. EnerVerse employs an autoregressive
video diffusion framework to predict future embodied spaces from instructions,
enhanced by a sparse context memory for long-term reasoning. To model the 3D
robotics world, we propose Free Anchor Views (FAVs), a multi-view video
representation offering flexible, task-adaptive perspectives to address
challenges like motion ambiguity and environmental constraints. Additionally,
we present EnerVerse-D, a data engine pipeline combining the generative model
with 4D Gaussian Splatting, forming a self-reinforcing data loop to reduce the
sim-to-real gap. Leveraging these innovations, EnerVerse translates 4D world
representations into physical actions via a policy head (EnerVerse-A), enabling
robots to execute task instructions. EnerVerse-A achieves state-of-the-art
performance in both simulation and real-world settings.
|
2501.01904 | Virgo: A Preliminary Exploration on Reproducing o1-like MLLM | cs.CV cs.AI | Recently, slow-thinking reasoning systems, built upon large language models
(LLMs), have garnered widespread attention by scaling the thinking time during
inference. There is also growing interest in adapting this capability to
multimodal large language models (MLLMs). Given that MLLMs handle more complex
data semantics across different modalities, it is intuitively more challenging
to implement multimodal slow-thinking systems.
To address this issue, in this paper, we explore a straightforward approach
by fine-tuning a capable MLLM with a small amount of textual long-form thought
data, resulting in a multimodal slow-thinking system, Virgo (Visual reasoning
with long thought). We find that these long-form reasoning processes, expressed
in natural language, can be effectively transferred to MLLMs. Moreover, it
seems that such textual reasoning data can be even more effective than visual
reasoning data in eliciting the slow-thinking capacities of MLLMs. While this
work is preliminary, it demonstrates that slow-thinking capacities are
fundamentally associated with the language model component, which can be
transferred across modalities or domains. This finding can be leveraged to
guide the development of more powerful slow-thinking reasoning systems. We
release our resources at https://github.com/RUCAIBox/Virgo.
|
2501.01905 | Alleviating Overfitting in Transformation-Interaction-Rational Symbolic
Regression with Multi-Objective Optimization | cs.LG | The Transformation-Interaction-Rational is a representation for symbolic
regression that limits the search space of functions to the ratio of two
nonlinear functions each one defined as the linear regression of transformed
variables. This representation has the main objective to bias the search
towards simpler expressions while keeping the approximation power of standard
approaches.
The performance of using Genetic Programming with this representation was
substantially better than with its predecessor (Interaction-Transformation) and
ranked close to the state-of-the-art on a contemporary Symbolic Regression
benchmark. On a closer look at these results, we observed that the performance
could be further improved with an additional selective pressure for smaller
expressions when the dataset contains just a few data points. The introduction
of a penalization term applied to the fitness measure improved the results on
these smaller datasets. One problem with this approach is that it introduces
two additional hyperparameters: i) a criterion for when the penalization should
be activated, and ii) the amount of penalization applied to the fitness function.
In this paper, we extend Transformation-Interaction-Rational to support
multi-objective optimization, specifically the NSGA-II algorithm, and apply
that to the same benchmark. A detailed analysis of the results shows that the
use of multi-objective optimization benefits the overall performance on a
subset of the benchmarks while keeping the results similar to the
single-objective approach on the remainder of the datasets. Specifically, for
the small datasets, we observe a small (and statistically insignificant)
improvement in the results, suggesting that further strategies must be explored.
|
2501.01908 | Detecting and Mitigating Adversarial Attacks on Deep Learning-Based MRI
Reconstruction Without Any Retraining | cs.CV cs.LG eess.IV physics.med-ph | Deep learning (DL) methods, especially those based on physics-driven DL, have
become the state-of-the-art for reconstructing sub-sampled magnetic resonance
imaging (MRI) data. However, studies have shown that these methods are
susceptible to small adversarial input perturbations, or attacks, resulting in
major distortions in the output images. Various strategies have been proposed
to reduce the effects of these attacks, but they require retraining and may
lower reconstruction quality for non-perturbed/clean inputs. In this work, we
propose a novel approach for detecting and mitigating adversarial attacks on
MRI reconstruction models without any retraining. Our detection strategy is
based on the idea of cyclic measurement consistency. The output of the model is
mapped to another set of MRI measurements for a different sub-sampling pattern,
and this synthesized data is reconstructed with the same model. Intuitively,
without an attack, the second reconstruction is expected to be consistent with
the first, while with an attack, disruptions are present. Subsequently, this
idea is extended to devise a novel objective function, which is minimized
within a small ball around the attack input for mitigation. Experimental
results show that our method substantially reduces the impact of adversarial
perturbations across different datasets, attack types/strengths and PD-DL
networks, and qualitatively and quantitatively outperforms conventional
mitigation methods that involve retraining.
|
2501.01912 | Exoplanet Detection via Differentiable Rendering | astro-ph.EP astro-ph.IM cs.CV eess.IV | Direct imaging of exoplanets is crucial for advancing our understanding of
planetary systems beyond our solar system, but it faces significant challenges
due to the high contrast between host stars and their planets. Wavefront
aberrations introduce speckles in the telescope science images, which are
patterns of diffracted starlight that can mimic the appearance of planets,
complicating the detection of faint exoplanet signals. Traditional
post-processing methods, operating primarily in the image intensity domain, do
not integrate wavefront sensing data. These data, measured mainly for adaptive
optics corrections, have been overlooked as a potential resource for
post-processing, partly due to the challenge of the evolving nature of
wavefront aberrations. In this paper, we present a differentiable rendering
approach that leverages these wavefront sensing data to improve exoplanet
detection. Our differentiable renderer models wave-based light propagation
through a coronagraphic telescope system, allowing gradient-based optimization
to significantly improve starlight subtraction and increase sensitivity to
faint exoplanets. Simulation experiments based on the James Webb Space
Telescope configuration demonstrate the effectiveness of our approach,
achieving substantial improvements in contrast and planet detection limits. Our
results showcase how the computational advancements enabled by differentiable
rendering can revitalize previously underexploited wavefront data, opening new
avenues for enhancing exoplanet imaging and characterization.
|
2501.01913 | Mingling with the Good to Backdoor Federated Learning | cs.CR cs.AI cs.DC | Federated learning (FL) is a decentralized machine learning technique that
allows multiple entities to jointly train a model while preserving dataset
privacy. However, its distributed nature has raised various security concerns,
which have been addressed by increasingly sophisticated defenses. These
protections utilize a range of data sources and metrics to, for example, filter
out malicious model updates, ensuring that the impact of attacks is minimized
or eliminated.
This paper explores the feasibility of designing a generic attack method
capable of installing backdoors in FL while evading a diverse array of
defenses. Specifically, we focus on an attacker strategy called MIGO, which
aims to produce model updates that subtly blend with legitimate ones. The
resulting effect is a gradual integration of a backdoor into the global model,
often ensuring its persistence long after the attack concludes, while
generating enough ambiguity to hinder the effectiveness of defenses.
MIGO was employed to implant three types of backdoors across five datasets
and different model architectures. The results demonstrate the significant
threat posed by these backdoors, as MIGO consistently achieved exceptionally
high backdoor accuracy (exceeding 90%) while maintaining the utility of the
main task. Moreover, MIGO exhibited strong evasion capabilities against ten
defenses, including several state-of-the-art methods. When compared to four
other attack strategies, MIGO consistently outperformed them across most
configurations. Notably, even in extreme scenarios where the attacker controls
just 0.1% of the clients, the results indicate that successful backdoor
insertion is possible if the attacker can persist for a sufficient number of
rounds.
|
2501.01915 | Social Processes: Probabilistic Meta-learning for Adaptive Multiparty
Interaction Forecasting | cs.LG | Adaptively forecasting human behavior in social settings is an important step
toward achieving Artificial General Intelligence. Most existing research in
social forecasting has focused either on unfocused interactions, such as
pedestrian trajectory prediction, or on monadic and dyadic behavior
forecasting. In contrast, social psychology emphasizes the importance of group
interactions for understanding complex social dynamics. This creates a gap that
we address in this paper: forecasting social interactions at the group
(conversation) level. Additionally, it is important for a forecasting model to
be able to adapt to groups unseen at train time, as even the same individual
behaves differently across different groups. This highlights the need for a
forecasting model to explicitly account for each group's unique dynamics. To
achieve this, we adopt a meta-learning approach to human behavior forecasting,
treating every group as a separate meta-learning task. As a result, our method
conditions its predictions on the specific behaviors within the group, leading
to generalization to unseen groups. Specifically, we introduce Social Process
(SP) models, which predict a distribution over future multimodal cues jointly
for all group members based on their preceding low-level multimodal cues, while
incorporating other past sequences of the same group's interactions. In this
work we also analyze the generalization capabilities of SP models in both their
outputs and latent spaces through the use of realistic synthetic datasets.
|
2501.01924 | Transformer-Driven Inverse Problem Transform for Fast Blind
Hyperspectral Image Dehazing | cs.CV eess.IV | Hyperspectral dehazing (HyDHZ) has become a crucial signal processing
technology to facilitate the subsequent identification and classification
tasks, as the airborne visible/infrared imaging spectrometer (AVIRIS) data
portal reports a massive portion of haze-corrupted areas in typical
hyperspectral remote sensing images. The idea of inverse problem transform
(IPT) has been proposed in recent remote sensing literature in order to
reformulate a hardly tractable inverse problem (e.g., HyDHZ) into a relatively
simple one. Considering the emerging spectral super-resolution (SSR) technique,
which spectrally upsamples multispectral data to hyperspectral data, we aim to
solve the challenging HyDHZ problem by reformulating it as an SSR problem.
Roughly speaking, the proposed algorithm first automatically selects some
uncorrupted/informative spectral bands, from which SSR is applied to spectrally
upsample the selected bands in the feature space, thereby obtaining a clean
hyperspectral image (HSI). The clean HSI is then further refined by a deep
transformer network to obtain the final dehazed HSI, where a global attention
mechanism is designed to capture nonlocal information. There are very few HyDHZ
works in existing literature, and this article introduces the powerful
spatial-spectral transformer into HyDHZ for the first time. Remarkably, the
proposed transformer-driven IPT-based HyDHZ (T2HyDHZ) is a blind algorithm
without requiring the user to manually select the corrupted region. Extensive
experiments demonstrate the superiority of T2HyDHZ with less color distortion.
|
2501.01926 | Mitigating Hallucination for Large Vision Language Model by
Inter-Modality Correlation Calibration Decoding | cs.CV cs.AI | Large vision-language models (LVLMs) have shown remarkable capabilities in
visual-language understanding for downstream multi-modal tasks. Despite their
success, LVLMs still suffer from generating hallucinations in complex
generation tasks, leading to inconsistencies between visual inputs and
generated content. To address this issue, some approaches have introduced
inference-time interventions, such as contrastive decoding and attention
rectification, to reduce overreliance on language priors. However, these
approaches overlook hallucinations stemming from spurious inter-modality
correlations. In this paper, we propose an Inter-Modality Correlation
Calibration Decoding (IMCCD) method to mitigate hallucinations in LVLMs in a
training-free manner. In this method, we design a Cross-Modal Value-Enhanced
Decoding (CMVED) module to alleviate hallucination via a novel contrastive
decoding mechanism. During the estimation of distorted distribution, CMVED
masks the value vectors associated with significant cross-modal attention
weights, thereby addressing both uni-modality overreliance and misleading
inter-modality correlations. Additionally, a Content-Driven Attention
Refinement (CDAR) module refines cross-modal attention weights, guiding LVLMs to
focus on important visual content. Experimental results on diverse
hallucination benchmarks validate the superiority of our method over existing
state-of-the-art techniques in reducing hallucinations in LVLM text generation.
Our code will be available at https://github.com/lijm48/IMCCD.
|
2501.01929 | Compressed sensing for inverse problems II: applications to
deconvolution, source recovery, and MRI | math.FA cs.IT math.IT math.OC | This paper extends the sample complexity theory for ill-posed inverse
problems developed in a recent work by the authors [`Compressed sensing for
inverse problems and the sample complexity of the sparse Radon transform', J.
Eur. Math. Soc., to appear], which was originally focused on the sparse Radon
transform. We demonstrate that the underlying abstract framework, based on
infinite-dimensional compressed sensing and generalized sampling techniques,
can effectively handle a variety of practical applications. Specifically, we
analyze three case studies: (1) The reconstruction of a sparse signal from a
finite number of pointwise blurred samples; (2) The recovery of the (sparse)
source term of an elliptic partial differential equation from finite samples of
the solution; and (3) A moderately ill-posed variation of the classical sensing
problem of recovering a wavelet-sparse signal from finite Fourier samples,
motivated by magnetic resonance imaging. For each application, we establish
rigorous recovery guarantees by verifying the key theoretical requirements,
including quasi-diagonalization and coherence bounds. Our analysis reveals that
careful consideration of balancing properties and optimized sampling strategies
can lead to improved reconstruction performance. The results provide a unified
theoretical foundation for compressed sensing approaches to inverse problems
while yielding practical insights for specific applications.
|
2501.01930 | GoBERT: Gene Ontology Graph Informed BERT for Universal Gene Function
Prediction | cs.LG | Exploring the functions of genes and gene products is crucial to a wide range
of fields, including medical research, evolutionary biology, and environmental
science. However, discovering new functions largely relies on expensive and
exhaustive wet lab experiments. Existing methods of automatic function
annotation or prediction mainly focus on protein function prediction with
sequences, 3D structures, or protein family information. In this study, we
propose to tackle the gene function prediction problem by exploring Gene
Ontology graph and annotation with BERT (GoBERT) to decipher the underlying
relationships among gene functions. Our proposed novel function prediction task
utilizes existing functions as inputs and generalizes function prediction
to genes and gene products. Specifically, two pre-training tasks are designed to
jointly train GoBERT to capture both explicit and implicit relations of
functions. Neighborhood prediction is a self-supervised multi-label
classification task that captures the explicit function relations. A specified
masking-and-recovery task helps GoBERT find implicit patterns among
functions. The pre-trained GoBERT possesses the ability to predict novel
functions for various genes and gene products based on known functional
annotations. Extensive experiments, biological case studies, and ablation
studies are conducted to demonstrate the superiority of our proposed GoBERT.
|
2501.01932 | Bridging Classification and Segmentation in Osteosarcoma Assessment via
Foundation and Discrete Diffusion Models | cs.CV | Osteosarcoma, the most common primary bone cancer, often requires accurate
necrosis assessment from whole slide images (WSIs) for effective treatment
planning and prognosis. However, manual assessments are subjective and prone to
variability. In response, we introduce FDDM, a novel framework bridging the gap
between patch classification and region-based segmentation. FDDM operates in
two stages: patch-based classification, followed by region-based refinement,
enabling cross-patch information integration. Leveraging a newly curated
dataset of osteosarcoma images, FDDM demonstrates superior segmentation
performance, achieving up to a 10% improvement in mIoU and a 32.12% enhancement in
necrosis rate estimation over state-of-the-art methods. This framework sets a
new benchmark in osteosarcoma assessment, highlighting the potential of
foundation models and diffusion-based refinements in complex medical imaging
tasks.
|
2501.01933 | Abstractive Text Summarization for Contemporary Sanskrit Prose: Issues
and Challenges | cs.CL cs.AI | This thesis presents Abstractive Text Summarization models for contemporary
Sanskrit prose. The first chapter, titled Introduction, presents the motivation
behind this work, the research questions, and the conceptual framework.
Sanskrit is a low-resource inflectional language. The key research question
that this thesis investigates is what the challenges are in developing an
abstractive TS system for Sanskrit. To answer the key research question,
sub-questions based on four different themes have been posed in this work. The
second chapter, Literature Review, surveys the previous works done. The third
chapter, Data Preparation, answers the remaining three questions from the third
theme. It reports the data collection and preprocessing challenges for both
language model and summarization model training. The fourth chapter reports
the training and inference of models and the results obtained therein. This
research has initiated a pipeline for Sanskrit abstractive text summarization
and has reported the challenges faced at every stage of the development. The
research questions based on every theme have been answered to answer the key
research question.
|
2501.01934 | Fusion DeepONet: A Data-Efficient Neural Operator for Geometry-Dependent
Hypersonic Flows on Arbitrary Grids | cs.LG physics.flu-dyn | Designing re-entry vehicles requires accurate predictions of hypersonic flow
around their geometry. Rapid prediction of such flows can revolutionize vehicle
design, particularly for morphing geometries. We evaluate advanced neural
operator models such as Deep Operator Networks (DeepONet),
parameter-conditioned U-Net, Fourier Neural Operator (FNO), and MeshGraphNet,
with the objective of addressing the challenge of learning geometry-dependent
hypersonic flow fields with limited data. Specifically, we compare the
performance of these models for two grid types: uniform Cartesian and irregular
grids. To train these models, we use 36 unique elliptic geometries for
generating high-fidelity simulations with a high-order entropy-stable DGSEM
solver, emphasizing the challenge of working with a scarce dataset. We evaluate
and compare the four operator-based models for their efficacy in predicting
hypersonic flow field around the elliptic body. Moreover, we develop a novel
framework, called Fusion DeepONet, which leverages neural field concepts and
generalizes effectively across varying geometries. Despite the scarcity of
training data, Fusion DeepONet achieves performance comparable to
parameter-conditioned U-Net on uniform grids while it outperforms MeshGraphNet
and vanilla DeepONet on irregular, arbitrary grids. Fusion DeepONet requires
significantly fewer trainable parameters as compared to U-Net, MeshGraphNet,
and FNO, making it computationally efficient. We also analyze the basis
functions of the Fusion DeepONet model using Singular Value Decomposition. This
analysis reveals that Fusion DeepONet generalizes effectively to unseen
solutions and adapts to varying geometries and grid points, demonstrating its
robustness in scenarios with limited training data.
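As context for the branch–trunk decomposition that Fusion DeepONet builds on, here is a minimal sketch of a vanilla DeepONet forward pass in NumPy. The weights are random and purely illustrative, and the authors' neural-field fusion mechanism is not reproduced; this only shows how a branch net (geometry parameters) and trunk net (query points) combine via a dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, widths):
    """Tiny random-weight MLP used only to illustrate the forward pass."""
    h = x
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        W = rng.standard_normal((w_in, w_out)) / np.sqrt(w_in)
        h = np.tanh(h @ W)
    return h

def deeponet(geometry_params, query_points, p=16):
    # Branch net encodes the geometry (e.g. hypothetical ellipse semi-axes);
    # trunk net encodes a query point on the grid. The predicted field value
    # at each point is the dot product of the two p-dimensional codes.
    b = mlp(geometry_params[None, :], [geometry_params.size, 32, p])  # (1, p)
    t = mlp(query_points, [query_points.shape[1], 32, p])            # (n, p)
    return (t * b).sum(axis=1)                                       # (n,)

geom = np.array([1.0, 0.5])                 # illustrative geometry parameters
pts = rng.uniform(-1, 1, size=(100, 2))     # arbitrary (irregular) grid points
u = deeponet(geom, pts)
```

Because the trunk is evaluated pointwise, the same trained network can be queried on any grid, which is what makes the architecture attractive for irregular, geometry-dependent grids.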
|
2501.01936 | Improving Transducer-Based Spoken Language Understanding with
Self-Conditioned CTC and Knowledge Transfer | cs.LG | In this paper, we propose to improve end-to-end (E2E) spoken language
understanding (SLU) in an RNN transducer model (RNN-T) by incorporating a joint
self-conditioned CTC automatic speech recognition (ASR) objective. Our proposed
model is akin to an E2E differentiable cascaded model which performs ASR and
SLU sequentially, and we ensure that the SLU task is conditioned on the ASR
task via CTC self-conditioning. This novel joint modeling of ASR and SLU
improves SLU performance significantly over just using SLU optimization. We
further improve the performance by aligning the acoustic embeddings of this
model with the semantically richer BERT model. Our proposed knowledge transfer
strategy makes use of a bag-of-entity prediction layer on the aligned
embeddings and the output of this is used to condition the RNN-T based SLU
decoding. These techniques show significant improvement over several strong
baselines and can perform on par with large models like Whisper with
significantly fewer parameters.
|
2501.01945 | Cold-Start Recommendation towards the Era of Large Language Models
(LLMs): A Comprehensive Survey and Roadmap | cs.IR cs.AI | The cold-start problem is one of the long-standing challenges in recommender
systems, focusing on accurately modeling new or interaction-limited users or
items to provide better recommendations. Due to the diversification of internet
platforms and the exponential growth of users and items, the importance of
cold-start recommendation (CSR) is becoming increasingly evident. At the same
time, large language models (LLMs) have achieved tremendous success and possess
strong capabilities in modeling user and item information, providing new
potential for cold-start recommendations. However, the CSR research community
still lacks a comprehensive review of and reflection on this field.
Accordingly, in this paper, we situate ourselves in the era of large language
models and provide a comprehensive review and discussion of the roadmap,
related literature, and future directions of CSR. Specifically, we have
conducted an exploration of the development path of how existing CSR utilizes
information, from content features, graph relations, and domain information, to
the world knowledge possessed by large language models, aiming to provide new
insights for both the research and industrial communities on CSR. Related
resources of cold-start recommendations are collected and continuously updated
for the community in
https://github.com/YuanchenBei/Awesome-Cold-Start-Recommendation.
|
2501.01949 | VideoLifter: Lifting Videos to 3D with Fast Hierarchical Stereo
Alignment | cs.CV | Efficiently reconstructing accurate 3D models from monocular video is a key
challenge in computer vision, critical for advancing applications in virtual
reality, robotics, and scene understanding. Existing approaches typically
require pre-computed camera parameters and frame-by-frame reconstruction
pipelines, which are prone to error accumulation and entail significant
computational overhead. To address these limitations, we introduce VideoLifter,
a novel framework that leverages geometric priors from a learnable model to
incrementally optimize a globally sparse to dense 3D representation directly
from video sequences. VideoLifter segments the video sequence into local
windows, where it matches and registers frames, constructs consistent
fragments, and aligns them hierarchically to produce a unified 3D model. By
tracking and propagating sparse point correspondences across frames and
fragments, VideoLifter incrementally refines camera poses and 3D structure,
minimizing reprojection error for improved accuracy and robustness. This
approach significantly accelerates the reconstruction process, reducing
training time by over 82% while surpassing current state-of-the-art methods in
visual fidelity and computational efficiency.
|
2501.01950 | MADGEN: Mass-Spec attends to De Novo Molecular generation | cs.LG cs.AI | The annotation (assigning structural chemical identities) of MS/MS spectra
remains a significant challenge due to the enormous molecular diversity in
biological samples and the limited scope of reference databases. Currently, the
vast majority of spectral measurements remain in the "dark chemical space"
without structural annotations. To improve annotation, we propose MADGEN
(Mass-spec Attends to De Novo Molecular GENeration), a scaffold-based method
for de novo molecular structure generation guided by mass spectrometry data.
MADGEN operates in two stages: scaffold retrieval and spectra-conditioned
molecular generation starting with the scaffold. In the first stage, given an
MS/MS spectrum, we formulate scaffold retrieval as a ranking problem and employ
contrastive learning to align mass spectra with candidate molecular scaffolds.
In the second stage, starting from the retrieved scaffold, we employ the MS/MS
spectrum to guide an attention-based generative model to generate the final
molecule. Our approach constrains the molecular generation search space,
reducing its complexity and improving generation accuracy. We evaluate MADGEN
on three datasets (NIST23, CANOPUS, and MassSpecGym) and evaluate MADGEN's
performance with a predictive scaffold retriever and with an oracle retriever.
We demonstrate the effectiveness of using attention to integrate spectral
information throughout the generation process to achieve strong results with
the oracle retriever.
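The first-stage objective of aligning spectra with candidate scaffolds can be sketched with a standard InfoNCE-style contrastive loss, where each spectrum should rank its paired scaffold above the other in-batch candidates. The embeddings below are random stand-ins, not the authors' encoders.

```python
import numpy as np

def info_nce_loss(spec_emb, scaf_emb, temperature=0.07):
    """One-directional InfoNCE over paired (spectrum, scaffold) embeddings:
    diagonal entries of the similarity matrix are the true pairs."""
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    c = scaf_emb / np.linalg.norm(scaf_emb, axis=1, keepdims=True)
    logits = s @ c.T / temperature                    # (B, B) cosine similarities
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))                   # -log p(true scaffold)

rng = np.random.default_rng(1)
spec = rng.standard_normal((8, 32))                   # stand-in spectrum embeddings
loss_aligned = info_nce_loss(spec, spec)              # perfectly aligned pairs
loss_random = info_nce_loss(spec, rng.standard_normal((8, 32)))
# aligned pairs yield a much lower loss than random scaffold candidates
```

At retrieval time the same similarity matrix serves as a ranking score over candidate scaffolds for a query spectrum.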
|
2501.01951 | MixGCN: Scalable GCN Training by Mixture of Parallelism and Mixture of
Accelerators | cs.LG cs.AI | Graph convolutional networks (GCNs) have demonstrated superiority in
graph-based learning tasks. However, training GCNs on full graphs is
particularly difficult, due to the following two challenges: (1) the
associated feature tensors can easily explode the memory and block the
communication bandwidth of modern accelerators, and (2) the computation
workflow in training GCNs alternates between sparse and dense matrix
operations, complicating the efficient utilization of computational resources.
Existing solutions for scalable distributed full-graph GCN training mostly
adopt partition parallelism, which is unsatisfactory as it only partially
addresses the first challenge while incurring scaled-out communication volume.
To
this end, we propose MixGCN, which aims to simultaneously address both of the
aforementioned challenges in GCN training. To tackle the first challenge,
MixGCN integrates a mixture of parallelism; both theoretical and empirical
analyses verify its constant communication volume and enhanced workload
balance. To handle the second challenge, we consider a mixture of accelerators
(i.e., sparse and dense accelerators) with a dedicated accelerator for GCN
training and a fine-grained pipeline. Extensive experiments show that
MixGCN achieves boosted training efficiency and scalability.
|
2501.01956 | Metadata Conditioning Accelerates Language Model Pre-training | cs.CL | The vast diversity of styles, domains, and quality levels present in language
model pre-training corpora is essential in developing general model
capabilities, but efficiently learning and deploying the correct behaviors
exemplified in each of these heterogeneous data sources is challenging. To
address this, we propose a new method, termed Metadata Conditioning then
Cooldown (MeCo), to incorporate additional learning cues during pre-training.
MeCo first provides metadata (e.g., URLs like en.wikipedia.org) alongside the
text during training and later uses a cooldown phase with only the standard
text, thereby enabling the model to function normally even without metadata.
MeCo significantly accelerates pre-training across different model scales (600M
to 8B parameters) and training sources (C4, RefinedWeb, and DCLM). For
instance, a 1.6B language model trained with MeCo matches the downstream task
performance of standard pre-training while using 33% less data. Additionally,
MeCo enables us to steer language models by conditioning the inference prompt
on either real or fabricated metadata that encodes the desired properties of
the output: for example, prepending wikipedia.org to reduce harmful generations
or factquizmaster.com (fabricated) to improve common knowledge task
performance. We also demonstrate that MeCo is compatible with different types
of metadata, such as model-generated topics. MeCo is remarkably simple, adds no
computational overhead, and demonstrates promise in producing more capable and
steerable language models.
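The data-preparation side of MeCo can be sketched in a few lines: during the main phase each document is prefixed with its source URL, and during the final cooldown phase the prefix is dropped so the model also learns to operate without metadata. Field names and the cooldown fraction below are illustrative assumptions, not the paper's exact configuration.

```python
def format_example(doc, step, total_steps, cooldown_frac=0.1):
    """Return the training text for one document at a given training step.
    Metadata (the URL) is prepended except during the cooldown phase."""
    in_cooldown = step >= total_steps * (1 - cooldown_frac)
    if in_cooldown or doc.get("url") is None:
        return doc["text"]
    return f'{doc["url"]}\n\n{doc["text"]}'

doc = {"url": "en.wikipedia.org", "text": "The cat sat on the mat."}
main_phase = format_example(doc, step=100, total_steps=1000)   # URL-prefixed
cooldown = format_example(doc, step=950, total_steps=1000)     # plain text
```

The same prefixing trick enables inference-time steering: prepending a real or fabricated domain such as wikipedia.org to the prompt conditions the model toward the properties that domain exemplified during training.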
|
2501.01957 | VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction | cs.CV cs.SD eess.AS | Recent Multimodal Large Language Models (MLLMs) have typically focused on
integrating visual and textual modalities, with less emphasis placed on the
role of speech in enhancing interaction. However, speech plays a crucial role
in multimodal dialogue systems, and achieving high performance in both
vision and speech tasks remains a significant challenge due to the fundamental
modality differences. In this paper, we propose a carefully designed
multi-stage training methodology that progressively trains the LLM to understand
both visual and speech information, ultimately enabling fluent vision and
speech interaction. Our approach not only preserves strong vision-language
capacity, but also enables efficient speech-to-speech dialogue capabilities
without separate ASR and TTS modules, significantly accelerating multimodal
end-to-end response speed. By comparing our method against state-of-the-art
counterparts across benchmarks for image, video, and speech tasks, we
demonstrate that our model is equipped with both strong visual and speech
capabilities, enabling near real-time vision and speech interaction.
|
2501.01959 | STEAM-EEG: Spatiotemporal EEG Analysis with Markov Transfer Fields and
Attentive CNNs | cs.CV cs.AI cs.CE | Electroencephalogram (EEG) signals play a pivotal role in biomedical research
and clinical applications, including epilepsy diagnosis, sleep disorder
analysis, and brain-computer interfaces. However, the effective analysis and
interpretation of these complex signals often present significant challenges.
This paper presents a novel approach that integrates computer graphics
techniques with biological signal pattern recognition, specifically using
Markov Transfer Fields (MTFs) for EEG time series imaging. The proposed
framework (STEAM-EEG) employs the capabilities of MTFs to capture the
spatiotemporal dynamics of EEG signals, transforming them into visually
informative images. These images are then rendered, visualised, and modelled
using state-of-the-art computer graphics techniques, thereby facilitating
enhanced data exploration, pattern recognition, and decision-making. The code
can be accessed on GitHub.
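The imaging step the abstract describes resembles the standard Markov transition field construction: quantile-bin the series, estimate a first-order transition matrix over consecutive samples, then index it by every pair of time steps to obtain an image. The bin count and binning scheme below are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np

def markov_transition_field(x, n_bins=8):
    """Map a 1-D series to an image: entry (i, j) is the transition
    probability between the quantile bins of x[i] and x[j]."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                       # bin index per time step
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):                 # count consecutive transitions
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalize safely
    return W[np.ix_(q, q)]                          # (len(x), len(x)) image

rng = np.random.default_rng(0)
eeg = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.standard_normal(128)
img = markov_transition_field(eeg)                  # 128 x 128 image
```

The resulting image can then be fed to standard 2-D rendering or vision pipelines, which is the bridge to computer-graphics techniques that the framework exploits.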
|
2501.01960 | GAF-FusionNet: Multimodal ECG Analysis via Gramian Angular Fields and
Split Attention | cs.CV cs.AI cs.GR cs.LG | Electrocardiogram (ECG) analysis plays a crucial role in diagnosing
cardiovascular diseases, but accurate interpretation of these complex signals
remains challenging. This paper introduces a novel multimodal
framework (GAF-FusionNet) for ECG classification that integrates time-series
analysis with image-based representation using Gramian Angular Fields (GAF).
Our approach employs a dual-layer cross-channel split attention module to
adaptively fuse temporal and spatial features, enabling nuanced integration of
complementary information. We evaluate GAF-FusionNet on three diverse ECG
datasets: ECG200, ECG5000, and the MIT-BIH Arrhythmia Database. Results
demonstrate significant improvements over state-of-the-art methods, with our
model achieving 94.5\%, 96.9\%, and 99.6\% accuracy on the respective datasets.
Our code will soon be available at
https://github.com/Cross-Innovation-Lab/GAF-FusionNet.git.
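The Gramian Angular Field transform underlying the image branch is compact enough to sketch directly: rescale the series to [-1, 1], move to polar coordinates via arccos, and form the summation field cos(phi_i + phi_j). This is the textbook GASF construction, not the authors' full pipeline.

```python
import numpy as np

def gramian_angular_field(x):
    """Gramian Angular Summation Field of a 1-D series:
    GASF[i, j] = cos(phi_i + phi_j) with phi = arccos of the rescaled signal."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar angle per sample
    return np.cos(phi[:, None] + phi[None, :])

ecg = np.sin(np.linspace(0, 4 * np.pi, 64))           # toy ECG-like signal
gasf = gramian_angular_field(ecg)                     # 64 x 64 image
```

The matrix is symmetric, and its diagonal equals 2x^2 - 1 of the rescaled signal, so temporal correlations become spatial texture that a CNN branch can consume alongside the raw time series.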
|
2501.01963 | Statistical learning does not always entail knowledge | cs.LG cs.AI cs.IT math.IT math.PR math.ST stat.ML stat.TH | In this paper, we study learning and knowledge acquisition (LKA) of an agent
about a proposition that is either true or false. We use a Bayesian approach,
where the agent receives data to update his beliefs about the proposition
according to a posterior distribution. The LKA is formulated in terms of active
information, with data representing external or exogenous information that
modifies the agent's beliefs. It is assumed that data provide details about a
number of features that are relevant to the proposition. We show that this
leads to a Gibbs distribution posterior, which is in maximum entropy relative
to the prior, conditioned on the side constraints that the data provide in
terms of the features. We demonstrate that full learning is sometimes not
possible and full knowledge acquisition is never possible when the number of
extracted features is too small. We also distinguish between primary learning
(receiving data about features of relevance for the proposition) and secondary
learning (receiving data about the learning of another agent). We argue that
this type of secondary learning does not represent true knowledge acquisition.
Our results have implications for statistical learning algorithms, and we claim
that such algorithms do not always generate true knowledge. The theory is
illustrated with several examples.
|
2501.01969 | Optimal bounds for dissatisfaction in perpetual voting | cs.GT cs.AI cs.LG | In perpetual voting, multiple decisions are made at different moments in
time. Taking the history of previous decisions into account allows us to
satisfy properties such as proportionality over periods of time. In this paper,
we consider the following question: is there a perpetual approval voting method
that guarantees that no voter is dissatisfied too many times? We identify a
sufficient condition on voter behavior -- which we call the 'bounded conflicts'
condition -- under which sublinear growth of dissatisfaction is possible. We
provide a tight upper bound on the growth of dissatisfaction under bounded
conflicts, using techniques from Kolmogorov complexity. We also observe that
approval voting with binary choices mimics the machine learning setting of
prediction with expert advice. This allows us to present a voting method with
sublinear guarantees on dissatisfaction under bounded conflicts, based on the
standard techniques from prediction with expert advice.
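The expert-advice technique the abstract invokes can be illustrated with the classic deterministic weighted-majority algorithm, with voters playing the role of experts. This is the standard textbook method, not the paper's own voting rule, and the penalty scheme below is the usual one rather than anything from the paper.

```python
import numpy as np

def weighted_majority(votes, outcomes, eta=0.5):
    """Deterministic weighted majority: the group decision follows the
    weighted majority of votes, and a voter's weight shrinks by (1 - eta)
    whenever their vote disagrees with the realized outcome."""
    w = np.ones(votes.shape[1], dtype=float)
    mistakes = 0
    for vote_row, outcome in zip(votes, outcomes):
        yes = w[vote_row == 1].sum()
        decision = 1 if yes >= w.sum() - yes else 0
        mistakes += int(decision != outcome)
        w[vote_row != outcome] *= 1 - eta           # penalize dissenting voters
    return mistakes

rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=200)
votes = rng.integers(0, 2, size=(200, 5))
votes[:, 0] = outcomes        # one voter always matches the outcome
m = weighted_majority(votes, outcomes)
# With a perfect voter present, mistakes <= ln(5)/ln(4/3), i.e. at most 5,
# independent of the 200-round horizon -- a sublinear (here constant) bound.
```

Each mistake shrinks the total weight by at least a factor 3/4 while the perfect voter's weight stays at 1, which yields the horizon-independent mistake bound noted in the comment.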
|
2501.01973 | INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models | cs.CV cs.AI cs.CY | The rapid development of large language models (LLMs) and large vision models
(LVMs) has propelled the evolution of multi-modal AI systems, which have
demonstrated remarkable potential for industrial applications by emulating
human-like cognition. However, they also pose significant ethical challenges,
including amplifying harmful content and reinforcing societal biases. For
instance, biases in some industrial image generation models highlighted the
urgent need for robust fairness assessments. Most existing evaluation
frameworks focus on the comprehensiveness of various aspects of the models, but
they exhibit critical limitations, including insufficient attention to content
generation alignment and social bias-sensitive domains. More importantly, their
reliance on pixel-detection techniques is prone to inaccuracies.
To address these issues, this paper presents INFELM, an in-depth fairness
evaluation of widely-used text-to-image models. Our key contributions are: (1)
an advanced skintone classifier incorporating facial topology and refined skin
pixel representation to enhance classification precision by at least 16.04%,
(2) a bias-sensitive content alignment measurement for understanding societal
impacts, (3) a generalizable representation bias evaluation for diverse
demographic groups, and (4) extensive experiments analyzing large-scale
text-to-image model outputs across six social-bias-sensitive domains. We find
that existing models in the study generally do not meet the empirical fairness
criteria, and representation bias is generally more pronounced than alignment
errors. INFELM establishes a robust benchmark for fairness assessment,
supporting the development of multi-modal AI systems that align with ethical
and human-centric principles.
|
2501.01974 | Hawkes based Representation Learning for Reasoning over Scale-free
Community-structured Temporal Knowledge Graphs | cs.SI cs.AI cs.LG | Temporal knowledge graph (TKG) reasoning has become a hot topic due to its
great value in many practical tasks. The key to TKG reasoning is modeling the
structural information and evolutional patterns of the TKGs. While great
efforts have been devoted to TKG reasoning, the structural and evolutional
characteristics of real-world networks have not been considered. In terms of
structure, real-world networks usually exhibit clear community structure and
scale-free (long-tailed distribution) properties. In terms of evolution, the
impact of an event decays as time elapses. In this paper, we propose
a novel TKG reasoning model called Hawkes process-based Evolutional
Representation Learning Network (HERLN), which learns structural information
and evolutional patterns of a TKG simultaneously, considering the
characteristics of real-world networks: community structure, scale-free
properties, and temporal decay. First, we find communities in the input TKG so
that the encoder produces more similar intra-community embeddings. Second, we
design a
Hawkes process-based relational graph convolutional network to cope with the
event impact-decaying phenomenon. Third, we design a conditional decoding
method to alleviate biases towards frequent entities caused by long-tailed
distribution. Experimental results show that HERLN achieves significant
improvements over the state-of-the-art models.
|
2501.01980 | Polarimetric BSSRDF Acquisition of Dynamic Faces | cs.CV cs.GR | Acquisition and modeling of polarized light reflection and scattering help
reveal the shape, structure, and physical characteristics of an object, which
is increasingly important in computer graphics. However, current polarimetric
acquisition systems are limited to static and opaque objects. Human faces, on
the other hand, present a particularly difficult challenge, given their complex
structure and reflectance properties, the strong presence of spatially-varying
subsurface scattering, and their dynamic nature. We present a new polarimetric
acquisition method for dynamic human faces, which focuses on capturing
spatially varying appearance and precise geometry, across a wide spectrum of
skin tones and facial expressions. It includes both single and heterogeneous
subsurface scattering, index of refraction, and specular roughness and
intensity, among other parameters, while revealing biophysically-based
components such as inner- and outer-layer hemoglobin, eumelanin and
pheomelanin. Our method leverages such components' unique multispectral
absorption profiles to quantify their concentrations, which in turn inform our
model about the complex interactions occurring within the skin layers. To our
knowledge, our work is the first to simultaneously acquire polarimetric and
spectral reflectance information alongside biophysically-based skin parameters
and geometry of dynamic human faces. Moreover, our polarimetric skin model
integrates seamlessly into various rendering pipelines.
|
2501.01981 | Optical Character Recognition using Convolutional Neural Networks for
Ashokan Brahmi Inscriptions | cs.CV eess.IV | This research paper delves into the development of an Optical Character
Recognition (OCR) system for the recognition of Ashokan Brahmi characters using
Convolutional Neural Networks. It utilizes a comprehensive dataset of character
images to train the models, along with data augmentation techniques to optimize
the training process. Furthermore, the paper incorporates image preprocessing
to remove noise, as well as image segmentation to facilitate line and character
segmentation. The study mainly focuses on three pre-trained CNNs, namely LeNet,
VGG-16, and MobileNet and compares their accuracy. Transfer learning was
employed to adapt the pre-trained models to the Ashokan Brahmi character
dataset. The findings reveal that MobileNet outperforms the other two models in
terms of accuracy, achieving a validation accuracy of 95.94% and validation
loss of 0.129. The paper provides an in-depth analysis of the implementation
process using MobileNet and discusses the implications of the findings.
The use of OCR for character recognition is of significant importance in the
field of epigraphy, specifically for the preservation and digitization of
ancient scripts. The results of this research paper demonstrate the
effectiveness of using pre-trained CNNs for the recognition of Ashokan Brahmi
characters.
|
2501.01982 | Is Your Image a Good Storyteller? | cs.CV cs.AI cs.CL | Quantifying image complexity at the entity level is straightforward, but the
assessment of semantic complexity has been largely overlooked. In fact, there
are differences in semantic complexity across images. Images with richer
semantics can tell vivid and engaging stories and offer a wide range of
application scenarios. For example, the Cookie Theft picture is such a kind of
image and is widely used to assess human language and cognitive abilities due
to its higher semantic complexity. Additionally, semantically rich images can
benefit the development of vision models, as images with limited semantics are
becoming less challenging for them. However, such images are scarce,
highlighting the need for a greater number of them. For instance, there is a
need for more images like Cookie Theft to cater to people from different
cultural backgrounds and eras. Assessing semantic complexity requires human
experts and empirical evidence. Automatically evaluating how semantically rich
an image is would be the first step toward mining or generating more images
with rich semantics, and would benefit human cognitive assessment, Artificial
Intelligence, and various other applications. In response, we propose the
Image Semantic
Assessment (ISA) task to address this problem. We introduce the first ISA
dataset and a novel method that leverages language to solve this vision
problem. Experiments on our dataset demonstrate the effectiveness of our
approach. Our data and code are available at:
https://github.com/xiujiesong/ISA.
|
2501.01983 | ECG-guided individual identification via PPG | cs.CV cs.AI | Photoplethysmography (PPG)-based individual identification, aiming at
recognizing humans via intrinsic cardiovascular activities, has raised extensive
attention due to its high security and resistance to mimicry. However, this
kind of technology has yielded unpromising results due to the limitation of low
information density. To this end, electrocardiogram (ECG) signals have been
introduced as a novel modality to enhance the density of input information.
Specifically, a novel cross-modal knowledge distillation framework is
implemented to propagate discriminative knowledge from the ECG modality to PPG
modality without incurring additional computational demands at the inference
phase. Furthermore, to ensure efficient knowledge propagation, Contrastive
Language-Image Pre-training (CLIP)-based knowledge alignment and
cross-knowledge assessment modules are proposed respectively. Comprehensive
experiments are conducted, and results show our framework outperforms the
baseline model with improvements of 2.8% and 3.0% in overall accuracy on seen
and unseen individual recognition, respectively.
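The distillation objective behind such a cross-modal framework can be sketched as the usual temperature-softened KL divergence between teacher (ECG branch) and student (PPG branch) predictions. This is the generic Hinton-style loss, not the paper's exact objective, and the CLIP-based alignment and cross-knowledge assessment modules are not modeled.

```python
import numpy as np

def softened_kl(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    predictions; the T**2 factor keeps gradient magnitudes comparable
    across temperatures."""
    def softmax(z):
        z = z / T
        z = z - z.max(axis=1, keepdims=True)        # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)
    p_t, p_s = softmax(teacher_logits), softmax(student_logits)
    return T**2 * np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=1))

rng = np.random.default_rng(0)
ecg_logits = rng.standard_normal((4, 6))   # hypothetical teacher (ECG) outputs
ppg_logits = rng.standard_normal((4, 6))   # hypothetical student (PPG) outputs
loss = softened_kl(ppg_logits, ecg_logits)
```

Because the teacher is only consulted during training, the PPG student incurs no extra cost at inference, matching the abstract's claim of no additional computational demands at that phase.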
|
2501.01984 | Leveraging AI for Automatic Classification of PCOS Using Ultrasound
Imaging | eess.IV cs.AI cs.CV | The AUTO-PCOS Classification Challenge seeks to advance the diagnostic
capabilities of artificial intelligence (AI) in identifying Polycystic Ovary
Syndrome (PCOS) through automated classification of healthy and unhealthy
ultrasound frames. This report outlines our methodology for building a robust
AI pipeline utilizing transfer learning with the InceptionV3 architecture to
achieve high accuracy in binary classification. Preprocessing steps ensured the
dataset was optimized for training, validation, and testing, while
interpretability methods like LIME and saliency maps provided valuable insights
into the model's decision-making. Our approach achieved an accuracy of 90.52%,
with precision, recall, and F1-score metrics exceeding 90% on validation data,
demonstrating its efficacy. The project underscores the transformative
potential of AI in healthcare, particularly in addressing diagnostic challenges
like PCOS. Key findings, challenges, and recommendations for future
enhancements are discussed, highlighting the pathway for creating reliable,
interpretable, and scalable AI-driven medical diagnostic tools.
|
2501.01985 | Fall Detection in Passenger Elevators using Intelligent Surveillance
Camera Systems: An Application with YoloV8 Nano Model | cs.CV cs.AI | Computer vision technology, which involves analyzing images and videos
captured by cameras through deep learning algorithms, has significantly
advanced the field of human fall detection. This study focuses on the
application of the YoloV8 Nano model in identifying fall incidents within
passenger elevators, a context that presents unique challenges due to the
enclosed environment and varying lighting conditions. By training the model on
a robust dataset comprising over 10,000 images across diverse elevator types,
we aim to enhance the detection precision and recall rates. The model's
performance, with an 85% precision and 82% recall in fall detection,
underscores its potential for integration into existing elevator safety systems
to enable rapid intervention.
|
2501.01986 | FrameFusion: Combining Similarity and Importance for Video Token
Reduction on Large Visual Language Models | cs.CV cs.AI | The increasing demand to process long and high-resolution videos
significantly burdens Large Vision-Language Models (LVLMs) due to the enormous
number of visual tokens. Existing token reduction methods primarily focus on
importance-based token pruning, which overlooks the redundancy caused by frame
resemblance and repetitive visual elements. In this paper, we analyze the high
vision token similarities in LVLMs. We reveal that the token similarity
distribution condenses as layers deepen while maintaining ranking consistency.
Leveraging the unique properties of similarity over importance, we introduce
FrameFusion, a novel approach that combines similarity-based merging with
importance-based pruning for better token reduction in LVLMs. FrameFusion
identifies and merges similar tokens before pruning, opening up a new
perspective for token reduction. We evaluate FrameFusion on diverse LVLMs,
including Llava-Video-{7B,32B,72B}, and MiniCPM-V-8B, on video understanding,
question-answering, and retrieval benchmarks. Experiments show that FrameFusion
reduces vision tokens by 70$\%$, achieving 3.4-4.4x LLM speedups and 1.6-1.9x
end-to-end speedups, with an average performance impact of less than 3$\%$. Our
code is available at https://github.com/thu-nics/FrameFusion.
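The merge-then-prune idea can be illustrated with a single sweep over the token sequence: average-merge each token into its predecessor when their cosine similarity exceeds a threshold (frame resemblance), then keep the top-k survivors by importance. The threshold, merge rule, and importance handling below are illustrative choices, not FrameFusion's exact procedure.

```python
import numpy as np

def merge_then_prune(tokens, importance, sim_threshold=0.9, keep=4):
    """Similarity-based merging followed by importance-based pruning."""
    kept_toks = [np.asarray(tokens[0], dtype=float)]
    kept_imp = [importance[0]]
    for tok, imp in zip(tokens[1:], importance[1:]):
        tok = np.asarray(tok, dtype=float)
        prev = kept_toks[-1]
        cos = tok @ prev / (np.linalg.norm(tok) * np.linalg.norm(prev))
        if cos > sim_threshold:                 # similar: merge into predecessor
            kept_toks[-1] = (prev + tok) / 2
            kept_imp[-1] = max(kept_imp[-1], imp)
        else:
            kept_toks.append(tok)
            kept_imp.append(imp)
    top = sorted(np.argsort(kept_imp)[::-1][:keep])  # prune, preserve order
    return np.stack([kept_toks[i] for i in top])

toks = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0], [0.0, 1.01], [1.0, 1.0]])
reduced = merge_then_prune(toks, [0.9, 0.1, 0.8, 0.2, 0.5], keep=2)
```

Merging before pruning means near-duplicate frame tokens are collapsed first, so the importance budget is spent on genuinely distinct visual content.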
|
2501.01987 | Gender Bias in Text-to-Video Generation Models: A case study of Sora | cs.CV cs.AI cs.CY cs.LG | The advent of text-to-video generation models has revolutionized content
creation as it produces high-quality videos from textual prompts. However,
concerns regarding inherent biases in such models have prompted scrutiny,
particularly regarding gender representation. Our study investigates the
presence of gender bias in OpenAI's Sora, a state-of-the-art text-to-video
generation model. We uncover significant evidence of bias by analyzing the
generated videos from a diverse set of gender-neutral and stereotypical
prompts. The results indicate that Sora disproportionately associates specific
genders with stereotypical behaviors and professions, which reflects societal
prejudices embedded in its training data.
|
2501.01989 | CRRG-CLIP: Automatic Generation of Chest Radiology Reports and
Classification of Chest Radiographs | cs.CV cs.AI | The complexity of stacked imaging and the massive number of radiographs make
writing radiology reports complex and inefficient. Even highly experienced
radiologists struggle to maintain accuracy and consistency in interpreting
radiographs under prolonged high-intensity work. To address these issues, this
work proposes the CRRG-CLIP Model (Chest Radiology Report Generation and
Radiograph Classification Model), an end-to-end model for automated report
generation and radiograph classification. The model consists of two modules:
the radiology report generation module and the radiograph classification
module. The generation module uses Faster R-CNN to identify anatomical regions
in radiographs, a binary classifier to select key regions, and GPT-2 to
generate semantically coherent reports. The classification module uses the
unsupervised Contrastive Language Image Pretraining (CLIP) model, addressing
the challenges of high-cost labelled datasets and insufficient features. The
results show that the generation module performs comparably to high-performance
baseline models on BLEU, METEOR, and ROUGE-L metrics, and outperforms the
GPT-4o model on BLEU-2, BLEU-3, BLEU-4, and ROUGE-L metrics. The classification
module significantly surpasses the state-of-the-art model in AUC and Accuracy.
This demonstrates that the proposed model achieves high accuracy, readability,
and fluency in report generation, while multimodal contrastive training with
unlabelled radiograph-report pairs enhances classification performance.
|
2501.01990 | Towards Sustainable Large Language Model Serving | cs.LG cs.DC | In this work, we study LLMs from a carbon emission perspective, addressing
both operational and embodied emissions, and paving the way for sustainable LLM
serving. We characterize the performance and energy of LLaMA with 1B, 3B, and
7B parameters using two Nvidia GPU types, a latest-generation RTX6000 Ada and
an older-generation T4. We analytically model operational carbon emissions
based on energy consumption and carbon intensities from three grid regions --
each representing a different energy source mix, and embodied carbon emissions
based on chip area and memory size. Our characterization and modeling provide
us with an in-depth understanding of the performance, energy, and carbon
emissions of LLM serving. Our findings highlight the potential for optimizing
sustainable LLM serving systems by considering both operational and embodied
carbon emissions simultaneously.
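The operational-plus-embodied accounting described above can be sketched as a one-line model: operational carbon is measured energy times the grid's carbon intensity, while embodied carbon is amortized over the hardware's lifetime. The `serving_carbon` function and all numbers below are illustrative assumptions, not values from the paper.

```python
def serving_carbon(energy_kwh, grid_gco2_per_kwh,
                   embodied_kgco2, runtime_hours, lifetime_hours):
    """Total carbon (kg CO2e) for one serving interval:
    operational emissions from measured energy use, plus embodied
    emissions amortized over the hardware's service lifetime."""
    operational = energy_kwh * grid_gco2_per_kwh / 1000.0   # g -> kg
    embodied = embodied_kgco2 * (runtime_hours / lifetime_hours)
    return operational + embodied

# Illustrative numbers only: 0.5 kWh over one hour on a 400 gCO2/kWh
# grid, with a 150 kgCO2 GPU amortized over a 5-year lifetime.
total = serving_carbon(energy_kwh=0.5, grid_gco2_per_kwh=400,
                       embodied_kgco2=150, runtime_hours=1,
                       lifetime_hours=5 * 365 * 24)
print(round(total, 4))
```

With these assumptions, operational emissions (0.2 kg) dominate the amortized embodied share (about 0.0034 kg), which is exactly the trade-off the paper's regional carbon-intensity analysis probes.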
|
2501.01991 | A Hybrid Deep Learning and Model-Checking Framework for Accurate Brain
Tumor Detection and Validation | cs.CV cs.AI | Model checking, a formal verification technique, ensures systems meet
predefined requirements, playing a crucial role in minimizing errors and
enhancing quality during development. This paper introduces a novel hybrid
framework integrating model checking with deep learning for brain tumor
detection and validation in medical imaging. By combining model-checking
principles with CNN-based feature extraction and K-FCM clustering for
segmentation, the proposed approach enhances the reliability of tumor detection
and segmentation. Experimental results highlight the framework's effectiveness,
achieving 98\% accuracy, 96.15\% precision, and 100\% recall, demonstrating its
potential as a robust tool for advanced medical image analysis.
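The clustering step can be illustrated with plain fuzzy c-means, the "FCM" core of K-FCM without the kernel; `fuzzy_c_means` and the toy data below are a sketch under that simplification, not the paper's implementation.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate between weighted-mean center
    updates and the standard membership update
    u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated toy clusters (hypothetical "intensity" features).
X = np.array([[0.0, 0.0], [0.2, 0.1], [9.9, 10.0], [10.1, 10.2]])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
print(labels)
```

Unlike hard k-means, each point keeps a graded membership in every cluster; hardening with `argmax` is only done at the end, which is what makes the fuzzy variant attractive for ambiguous tumor boundaries.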
|
2501.01992 | Disagree and Commit: Degrees of Argumentation-based Agreements | cs.AI cs.LO cs.MA | In cooperative human decision-making, agreements are often not total; a
partial degree of agreement is sufficient to commit to a decision and move on,
as long as one is somewhat confident that the involved parties are likely to
stand by their commitment in the future, given no drastic unexpected changes.
In this paper, we introduce the notion of agreement scenarios that allow
artificial autonomous agents to reach such agreements, using formal models of
argumentation, in particular abstract argumentation and value-based
argumentation. We introduce the notions of degrees of satisfaction and
(minimum, mean, and median) agreement, as well as a measure of the impact a
value in a value-based argumentation framework has on these notions. We then
analyze how degrees of agreement are affected when agreement scenarios are
expanded with new information, to shed light on the reliability of partial
agreements in dynamic scenarios. An implementation of the introduced concepts
is provided as part of an argumentation-based reasoning software library.
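A minimal reading of degrees of satisfaction and (minimum, mean, median) agreement can be sketched as set arithmetic over accepted arguments; the `satisfaction` and `agreement` helpers below are illustrative simplifications, not the paper's reasoning library.

```python
from statistics import mean, median

def satisfaction(desired, extension):
    """Degree of satisfaction of one agent: the fraction of the
    arguments it wants accepted that the extension actually accepts
    (a simplified reading of the paper's notion)."""
    return len(desired & extension) / len(desired) if desired else 1.0

def agreement(agents_desired, extension):
    """Minimum, mean, and median agreement across all agents."""
    degs = [satisfaction(d, extension) for d in agents_desired]
    return {"min": min(degs), "mean": mean(degs), "median": median(degs)}

extension = {"a", "b", "c"}               # accepted arguments
agents = [{"a", "b"}, {"a", "d"}, {"c"}]  # each agent's desired set
result = agreement(agents, extension)
print(result)
```

Here the second agent is only half satisfied, so the minimum agreement (0.5) is well below the median (1.0), illustrating how a partial degree of agreement can still be enough to commit to a decision.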
|
2501.01993 | A Novel Convolution and Attention Mechanism-based Model for 6D Object
Pose Estimation | cs.CV cs.LG | Estimating 6D object poses from RGB images is challenging because the lack of
depth information requires inferring three-dimensional structure from 2D
projections. Traditional methods often rely on deep learning with grid-based
data structures but struggle to capture complex dependencies among extracted
features. To overcome this, we introduce a graph-based representation derived
directly from images, where the spatio-temporal features of each pixel serve as
nodes, and relationships between them are defined through node connectivity and
spatial interactions. We also employ feature selection mechanisms that use
spatial attention and self-attention distillation, along with a Legendre
convolution layer that leverages the orthogonality of Legendre polynomials for
numerical stability. Experiments on the LINEMOD, Occluded LINEMOD, and
YCB-Video datasets demonstrate that our method outperforms nine existing
approaches and achieves state-of-the-art performance in object pose estimation.
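The numerical-stability claim rests on the orthogonality of Legendre polynomials, which is easy to verify directly; the sketch below builds the basis with the Bonnet recurrence and checks the Gram matrix under Gauss-Legendre quadrature (illustrative only, not the paper's convolution layer).

```python
import numpy as np

def legendre_basis(x, n):
    """First n Legendre polynomials evaluated at x, via the Bonnet
    recurrence P_{k+1}(x) = ((2k+1) x P_k(x) - k P_{k-1}(x)) / (k+1)."""
    P = [np.ones_like(x), x.copy()]
    for k in range(1, n - 1):
        P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
    return np.stack(P[:n])

# Orthogonality on [-1, 1], checked with Gauss-Legendre quadrature:
# off-diagonal inner products vanish and diagonals equal 2 / (2k + 1).
x, w = np.polynomial.legendre.leggauss(20)
B = legendre_basis(x, 5)
G = (B * w) @ B.T               # Gram matrix under the quadrature rule
print(np.allclose(G, np.diag(2.0 / (2 * np.arange(5) + 1))))
```

Because the basis functions are mutually orthogonal, coefficients learned against them do not interfere with one another, which is the stability property the convolution layer exploits.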
|
2501.01994 | Fuzzy Model Identification and Self Learning with Smooth Compositions | eess.SY cs.AI cs.SY | This paper develops a smooth model-identification and self-learning strategy
for dynamic systems that accounts for possible parameter variations and
uncertainties. We solve the problem so that the model follows the changes and
variations in the system on a continuous and smooth surface. Running the model
to adaptively obtain the optimum parameter values on a smooth surface
facilitates the further application of derivative-based optimization and
control algorithms, such as MPC or robust control, toward a combined
modeling-control scheme. Compared to earlier work on smooth fuzzy modeling
structures, we reach a desired trade-off between model optimality and
computational load. The proposed method is evaluated on a test problem as well
as the nonlinear dynamics of a chemical process.
|
2501.01998 | SmartSpatial: Enhancing the 3D Spatial Arrangement Capabilities of
Stable Diffusion Models and Introducing a Novel 3D Spatial Evaluation
Framework | cs.CV cs.AI | Stable Diffusion models have made remarkable strides in generating
photorealistic images from text prompts but often falter when tasked with
accurately representing complex spatial arrangements, particularly involving
intricate 3D relationships. To address this limitation, we introduce
SmartSpatial, an innovative approach that enhances the spatial arrangement
capabilities of Stable Diffusion models through 3D-aware conditioning and
attention-guided mechanisms. SmartSpatial incorporates depth information and
employs cross-attention control to ensure precise object placement, delivering
notable improvements in spatial accuracy metrics. In conjunction with
SmartSpatial, we present SmartSpatialEval, a comprehensive evaluation framework
designed to assess spatial relationships. This framework utilizes
vision-language models and graph-based dependency parsing for performance
analysis. Experimental results on the COCO and SpatialPrompts datasets show
that SmartSpatial significantly outperforms existing methods, setting new
benchmarks for spatial arrangement accuracy in image generation.
|
2501.01999 | On the Utility of Equivariance and Symmetry Breaking in Deep Learning
Architectures on Point Clouds | cs.CV cs.AI cs.LG | This paper explores the key factors that influence the performance of models
working with point clouds, across different tasks of varying geometric
complexity. In this work, we explore the trade-offs between flexibility and
weight-sharing introduced by equivariant layers, assessing when equivariance
boosts or detracts from performance. It is often argued that providing more
information as input improves a model's performance. However, if this
additional information breaks certain properties, such as $\mathrm{SE}(3)$
equivariance, does it remain beneficial? We identify the key aspects of
equivariant and non-equivariant architectures that drive success in different
tasks by benchmarking them on segmentation, regression, and generation tasks
across multiple datasets with increasing complexity. We observe a positive
impact of equivariance, which becomes more pronounced with increasing task
complexity, even when strict equivariance is not required.
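Equivariance itself is easy to check numerically: a map f is SE(3)-equivariant when transforming the input and then applying f gives the same result as applying f and then transforming the output. The centroid below is a trivially equivariant toy example, not one of the benchmarked architectures.

```python
import numpy as np

def centroid_feature(points):
    """A trivially SE(3)-equivariant point-cloud map: the centroid
    transforms exactly as the input under rotation + translation."""
    return points.mean(axis=0)

def random_rotation(seed=0):
    """Random 3x3 rotation via QR, sign-fixed so det = +1."""
    Q, _ = np.linalg.qr(np.random.default_rng(seed).normal(size=(3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q

pts = np.random.default_rng(1).normal(size=(100, 3))
R, t = random_rotation(), np.array([1.0, -2.0, 0.5])
lhs = centroid_feature(pts @ R.T + t)   # transform, then featurize
rhs = centroid_feature(pts) @ R.T + t   # featurize, then transform
print(np.allclose(lhs, rhs))
```

The same two-sided check, run on a learned network instead of the centroid, is how one measures empirically how far a non-equivariant architecture drifts from exact equivariance.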
|
2501.02000 | Multi-Center Study on Deep Learning-Assisted Detection and
Classification of Fetal Central Nervous System Anomalies Using Ultrasound
Imaging | eess.IV cs.AI cs.CV | Prenatal ultrasound evaluates fetal growth and detects congenital
abnormalities during pregnancy, but examining ultrasound images requires
radiologist expertise and sophisticated equipment; without them, the rate of
identifying specific types of fetal central nervous system (CNS) abnormalities
stays low and patients undergo unnecessary examinations. We construct a deep
learning model to improve the overall
accuracy of the diagnosis of fetal cranial anomalies to aid prenatal diagnosis.
In our collected multi-center dataset of fetal craniocerebral anomalies
covering four typical anomalies of the fetal central nervous system (CNS):
anencephaly, encephalocele (including meningocele), holoprosencephaly, and
rachischisis, patient-level prediction accuracy reaches 94.5%, with an AUROC
value of 99.3%. In the subgroup analyses, the model applies across the entire
gestational period, identifying fetal anomaly types well at any gestational
age. Heatmaps superimposed on the ultrasound images not only offer a visual
interpretation of the algorithm but also give the physician an intuitive aid,
highlighting the key areas that need review and helping the physician quickly
identify and validate them. Finally, the
retrospective reader study demonstrates that by combining the automatic
prediction of the DL system with the professional judgment of the radiologist,
the diagnostic accuracy and efficiency can be effectively improved and the
misdiagnosis rate can be reduced, which has an important clinical application
prospect.
|
2501.02001 | Communication Efficient Cooperative Edge AI via Event-Triggered
Computation Offloading | cs.LG eess.IV eess.SP | Rare events, despite their infrequency, often carry critical information and
require immediate attention in mission-critical applications such as
autonomous driving, healthcare, and industrial automation. The data-intensive
nature of these tasks and their need for prompt responses, combined with the
constraints of edge AI (edge inference), pose significant system and
algorithmic challenges. Existing edge-inference approaches often suffer from
communication bottlenecks due to high-dimensional data transmission and fail to
provide timely responses to rare events, limiting their effectiveness for
mission-critical applications in the sixth-generation (6G) mobile networks. To
overcome these challenges, we propose a channel-adaptive, event-triggered
edge-inference framework that prioritizes efficient rare-event processing.
Central to this framework is a dual-threshold, multi-exit architecture, which
enables early local inference for rare events detected locally while offloading
more complex rare events to edge servers for detailed classification. To
further enhance the system's performance, we develop a channel-adaptive
offloading policy paired with an online algorithm to dynamically determine the
optimal confidence thresholds for controlling offloading decisions. The
associated optimization problem is solved by reformulating the original
non-convex function into an equivalent strongly convex one. Using deep neural
network classifiers and real medical datasets, our experiments demonstrate that
the proposed framework not only achieves superior rare-event classification
accuracy but also effectively reduces communication overhead compared with
existing edge-inference approaches.
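The dual-threshold, multi-exit routing can be sketched as a per-exit rule on prediction confidence: confident samples stop locally, very uncertain ones are offloaded to the edge server, and the rest continue to the next local exit. The thresholds and the `dual_threshold_route` helper are hypothetical, not the paper's learned policy.

```python
import numpy as np

def dual_threshold_route(probs, t_hi=0.9, t_lo=0.5):
    """Routing rule for one early exit (simplified sketch of a
    dual-threshold multi-exit policy): finalize locally when the top
    class probability is high, offload when it is low, otherwise
    continue to the next exit of the local model."""
    c = float(np.max(probs))
    if c >= t_hi:
        return "local"
    if c <= t_lo:
        return "offload"
    return "next_exit"

print(dual_threshold_route(np.array([0.95, 0.03, 0.02])))  # local
print(dual_threshold_route(np.array([0.40, 0.35, 0.25])))  # offload
print(dual_threshold_route(np.array([0.70, 0.20, 0.10])))  # next_exit
```

In the paper's framework the two thresholds are not fixed constants as here but are chosen by the channel-adaptive online algorithm, trading local accuracy against communication cost.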
|