| id | title | categories | abstract |
|---|---|---|---|
2501.01271 | Energy-Efficiency and Spectral-Efficiency Trade-off in Distributed
Massive-MIMO Networks | cs.NI cs.IT math.IT | This paper investigates the inherent trade-off between energy efficiency (EE)
and spectral efficiency (SE) in distributed massive-MIMO (D-mMIMO) systems.
Optimizing the EE and SE together is crucial as increasing spectral efficiency
often leads to higher energy consumption. Joint power allocation and AP-UE
association are pivotal in this trade-off analysis because they directly
influence both EE and SE. We address the gap in existing literature where the
EE-SE trade-off has been analyzed but not optimized in the context of D-mMIMO
systems. The focus of this study is to maximize the EE with constraints on
uplink sum SE through judicious power allocation and AP-UE association,
essential for enhancing network throughput. Numerical simulations are performed
to validate the proposed model, exploring the impacts of AP-UE association and
power allocation on the EE-SE trade-off in uplink D-mMIMO scenarios.
|
2501.01273 | Does a Large Language Model Really Speak in Human-Like Language? | cs.CL stat.AP | Large Language Models (LLMs) have recently emerged, attracting considerable
attention due to their ability to generate highly natural, human-like text.
This study compares the latent community structures of LLM-generated text and
human-written text within a hypothesis testing procedure. Specifically, we
analyze three text sets: original human-written texts ($\mathcal{O}$), their
LLM-paraphrased versions ($\mathcal{G}$), and a twice-paraphrased set
($\mathcal{S}$) derived from $\mathcal{G}$. Our analysis addresses two key
questions: (1) Is the difference in latent community structures between
$\mathcal{O}$ and $\mathcal{G}$ the same as that between $\mathcal{G}$ and
$\mathcal{S}$? (2) Does $\mathcal{G}$ become more similar to $\mathcal{O}$ as
the LLM parameter controlling text variability is adjusted? The first question
is based on the assumption that if LLM-generated text truly resembles human
language, then the gap between the pair ($\mathcal{O}$, $\mathcal{G}$) should
be similar to that between the pair ($\mathcal{G}$, $\mathcal{S}$), as both
pairs consist of an original text and its paraphrase. The second question
examines whether the degree of similarity between LLM-generated and human text
varies with changes in the breadth of text generation. To address these
questions, we propose a statistical hypothesis testing framework that leverages
the fact that each text has corresponding parts across all datasets due to
their paraphrasing relationship. This relationship enables the mapping of one
dataset's relative position to another, allowing two datasets to be mapped to a
third dataset. As a result, both mapped datasets can be quantified with respect
to the space characterized by the third dataset, facilitating a direct
comparison between them. Our results indicate that GPT-generated text remains
distinct from human-authored text.
|
2501.01275 | HybridTrack: A Hybrid Approach for Robust Multi-Object Tracking | cs.CV cs.RO | The evolution of Advanced Driver Assistance Systems (ADAS) has increased the
need for robust and generalizable algorithms for multi-object tracking.
Traditional statistical model-based tracking methods rely on predefined motion
models and assumptions about system noise distributions. Although
computationally efficient, they often lack adaptability to varying traffic
scenarios and require extensive manual design and parameter tuning. To address
these issues, we propose a novel 3D multi-object tracking approach for
vehicles, HybridTrack, which integrates a data-driven Kalman Filter (KF) within
a tracking-by-detection paradigm. In particular, it learns the transition
residual and Kalman gain directly from data, which eliminates the need for
manual motion and stochastic parameter modeling. Validated on the real-world
KITTI dataset, HybridTrack achieves 82.08% HOTA accuracy, significantly
outperforming state-of-the-art methods. We also evaluate our method under
different configurations, achieving the fastest processing speed of 112 FPS.
Consequently, HybridTrack eliminates the dependency on scene-specific designs
while improving performance and maintaining real-time efficiency. The code will
be publicly available at the time of publishing:
https://github.com/leandro-svg/HybridTrack.git.
|
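The HybridTrack abstract above describes a Kalman filter whose transition residual and gain are learned from data rather than hand-tuned. A minimal sketch of where such learned components plug into the filter, assuming a constant-velocity state and trivial fixed stand-ins for the networks (the paper's actual architectures are not given here):

```python
import numpy as np

# Constant-velocity state [position, velocity]; dt is the frame interval.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # classical transition model
H = np.array([[1.0, 0.0]])   # we observe position only

def learned_residual(x):
    # Stand-in for the learned transition residual: in the paper this is a
    # network trained on data; here a small fixed correction (hypothetical).
    return np.array([0.0, 0.01]) * x[1]

def learned_gain(innovation):
    # Stand-in for the learned Kalman gain: a network would map track and
    # innovation features to a gain; here a fixed gain (hypothetical).
    return np.array([[0.6], [0.3]])

def kf_step(x, z):
    # Predict with the motion model plus the learned residual.
    x_pred = F @ x + learned_residual(x)
    # Update with the learned gain in place of the analytic K = P H^T S^-1.
    innovation = z - H @ x_pred
    return x_pred + learned_gain(innovation) @ innovation

x = np.array([0.0, 1.0])           # start at position 0, velocity 1
for t in range(1, 6):
    z = np.array([t * dt])         # noiseless position measurements
    x = kf_step(x, z)
print(np.round(x, 3))              # state stays close to the true track
```

In the full method the residual and gain come from networks trained on tracking data; the fixed maps above only show the structure of the hybrid predict/update loop.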
2501.01276 | Marketing Mix Modeling in Lemonade | stat.AP cs.LG | Marketing mix modeling (MMM) is a widely used method to assess the
effectiveness of marketing campaigns and optimize marketing strategies.
Bayesian MMM is an advanced approach that allows for the incorporation of prior
information, uncertainty quantification, and probabilistic predictions (1). In
this paper, we describe the process of building a Bayesian MMM model for the
online insurance company Lemonade. We first collected data on Lemonade's
marketing activities, such as online advertising, social media, and brand
marketing, as well as performance data. We then used a Bayesian framework to
estimate the contribution of each marketing channel on total performance, while
accounting for various factors such as seasonality, market trends, and
macroeconomic indicators. To validate the model, we compared its predictions
with the actual performance data from A/B-testing and sliding window holdout
data (2). The results showed that the predicted contribution of each marketing
channel is aligned with A/B test performance and is actionable. Furthermore, we
conducted several scenario analyses using convex optimization to test the
sensitivity of the model to different assumptions and to evaluate the impact of
changes in the marketing mix on sales. The insights gained from the model
allowed Lemonade to adjust their marketing strategy and allocate their budget
more effectively. Our case study demonstrates the benefits of using Bayesian
MMM for marketing attribution and optimization in a data-driven company like
Lemonade. The approach is flexible, interpretable, and can provide valuable
insights for decision-making.
|
2501.01282 | CultureVLM: Characterizing and Improving Cultural Understanding of
Vision-Language Models for over 100 Countries | cs.AI cs.CL cs.CV | Vision-language models (VLMs) have advanced human-AI interaction but struggle
with cultural understanding, often misinterpreting symbols, gestures, and
artifacts due to biases in predominantly Western-centric training data. In this
paper, we construct CultureVerse, a large-scale multimodal benchmark covering
19,682 cultural concepts, 188 countries/regions, 15 cultural concepts, and 3
question types, with the aim of characterizing and improving VLMs'
multicultural understanding capabilities. Then, we propose CultureVLM, a series
of VLMs fine-tuned on our dataset to achieve significant performance
improvement in cultural understanding. Our evaluation of 16 models reveals
significant disparities, with a stronger performance in Western concepts and
weaker results in African and Asian contexts. Fine-tuning on our CultureVerse
enhances cultural perception, demonstrating cross-cultural, cross-continent,
and cross-dataset generalization without sacrificing performance on models'
general VLM benchmarks. We further present insights on cultural generalization
and forgetting. We hope that this work could lay the foundation for more
equitable and culturally aware multimodal AI systems.
|
2501.01284 | NeutraSum: A Language Model can help a Balanced Media Diet by
Neutralizing News Summaries | cs.CL cs.AI | Media bias in news articles arises from the political polarisation of media
outlets, which can reinforce societal stereotypes and beliefs. Reporting on the
same event often varies significantly between outlets, reflecting their
political leanings through polarised language and focus. Although previous
studies have attempted to generate bias-free summaries from multi-perspective
news articles, they have not effectively addressed the challenge of mitigating
inherent media bias. To address this gap, we propose \textbf{NeutraSum}, a
novel framework that integrates two neutrality losses to adjust the semantic
space of generated summaries, thus minimising media bias. These losses,
designed to balance the semantic distances across polarised inputs and ensure
alignment with expert-written summaries, guide the generation of neutral and
factually rich summaries. To evaluate media bias, we employ the political
compass test, which maps political leanings based on economic and social
dimensions. Experimental results on the Allsides dataset demonstrate that
NeutraSum not only improves summarisation performance but also achieves
significant reductions in media bias, offering a promising approach for neutral
news summarisation.
|
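The two neutrality losses described in the NeutraSum abstract above (balancing semantic distances across polarised inputs while staying aligned with expert-written summaries) can be illustrated on embedding vectors. The Euclidean distance, the variance-based balance term, and the toy embeddings below are assumptions for the sketch, not the paper's exact formulation:

```python
import numpy as np

def neutrality_losses(summary_emb, polarised_embs, expert_emb):
    # Balance loss: penalise imbalance in the summary's distances to
    # each polarised source embedding (variance of the distances).
    d = [np.linalg.norm(summary_emb - p) for p in polarised_embs]
    balance_loss = float(np.var(d))
    # Alignment loss: keep the summary close to the expert-written one.
    align_loss = float(np.linalg.norm(summary_emb - expert_emb) ** 2)
    return balance_loss, align_loss

left   = np.array([1.0, 0.0])      # toy embedding of a left-leaning article
right  = np.array([-1.0, 0.0])     # toy embedding of a right-leaning article
expert = np.array([0.0, 0.2])      # toy embedding of an expert summary

balanced = np.array([0.0, 0.1])    # equidistant from both poles
skewed   = np.array([0.8, 0.1])    # leans toward `left`

b1, _ = neutrality_losses(balanced, [left, right], expert)
b2, _ = neutrality_losses(skewed, [left, right], expert)
print(b1 < b2)   # the balanced summary incurs the smaller balance loss
```

A training objective along these lines would add both terms to the usual summarisation loss, pushing generated summaries toward the midpoint of the polarised inputs.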
2501.01287 | Optimized Relay Lens Design For High-Resolution Image Transmission In
Military Target Detection Systems | cs.LG physics.optics | This paper presents the design and performance analysis of relay lenses that provide
high-performance image transmission for target acquisition and tracking in
military optical systems. Relay lenses are critical components for clear and
lossless image transmission over long distances. In this study, the optical
performance of a relay lens system designed and optimized using ZEMAX software
is investigated in detail. The analysis focuses on important optical properties
such as modulation transfer function (MTF), spot diagrams, Seidel diagram,
field curvature and distortion. The results show that the lens has significant
potential in military applications for target detection and tracking with high
resolution and low aberration.
|
2501.01290 | ToolComp: A Multi-Tool Reasoning & Process Supervision Benchmark | cs.CL | Despite recent advances in AI, the development of systems capable of
executing complex, multi-step reasoning tasks involving multiple tools remains
a significant challenge. Current benchmarks fall short in capturing the
real-world complexity of tool-use reasoning, where verifying the correctness of
not only the final answer but also the intermediate steps is important for
evaluation, development, and identifying failures during inference time. To
bridge this gap, we introduce ToolComp, a comprehensive benchmark designed to
evaluate multi-step tool-use reasoning. ToolComp is developed through a
collaboration between models and human annotators, featuring
human-edited/verified prompts, final answers, and process supervision labels,
allowing for the evaluation of both final outcomes and intermediate reasoning.
Evaluation across six different model families demonstrates the challenging
nature of our dataset, with the majority of models achieving less than 50%
accuracy. Additionally, we generate synthetic training data to compare the
performance of outcome-supervised reward models (ORMs) with process-supervised
reward models (PRMs) to assess their ability to improve complex tool-use
reasoning as evaluated by ToolComp. Our results show that PRMs generalize
significantly better than ORMs, achieving a 19% and 11% improvement in rank@1
accuracy for ranking base and fine-tuned model trajectories, respectively.
These findings highlight the critical role of process supervision in both the
evaluation and training of AI models, paving the way for more robust and
capable systems in complex, multi-step tool-use tasks.
|
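The ORM-versus-PRM comparison in the ToolComp abstract above amounts to two ways of scoring a candidate trajectory when ranking: judge only the outcome, or judge every intermediate step. A schematic sketch, where the reward model below is a trivial stand-in for the trained models (the step format and scoring rule are hypothetical):

```python
def orm_score(trajectory, outcome_rm):
    # Outcome-supervised RM: judge only the final answer.
    return outcome_rm(trajectory[-1])

def prm_score(trajectory, process_rm):
    # Process-supervised RM: judge every intermediate step and aggregate
    # (mean here; min or product are common alternatives).
    scores = [process_rm(step) for step in trajectory]
    return sum(scores) / len(scores)

# Toy stand-in reward model: reward steps that make a tool call.
rm = lambda step: 1.0 if "tool:" in step else 0.0

good = ["tool: search(x)", "tool: calc(2+2)", "answer: 4"]
bad  = ["guess", "guess again", "answer: 4"]

# Both trajectories end in the same answer, so the outcome-only score
# cannot separate them, while the per-step score can.
assert orm_score(good, rm) == orm_score(bad, rm)
assert prm_score(good, rm) > prm_score(bad, rm)
print("PRM separates trajectories that ORM cannot")
```

This is the intuition behind the reported rank@1 gap: when final answers coincide, only process supervision distinguishes sound reasoning from lucky guesses.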
2501.01291 | Change Detection-Based Procedures for Piecewise Stationary MABs: A
Modular Approach | cs.AI cs.LG cs.SY eess.SY stat.ML | Conventional Multi-Armed Bandit (MAB) algorithms are designed for stationary
environments, where the reward distributions associated with the arms do not
change with time. In many applications, however, the environment is more
accurately modeled as being nonstationary. In this work, piecewise stationary
MAB (PS-MAB) environments are investigated, in which the reward distributions
associated with a subset of the arms change at some change-points and remain
stationary between change-points. Our focus is on the asymptotic analysis of
PS-MABs, for which practical algorithms based on change detection (CD) have
been previously proposed. Our goal is to modularize the design and analysis of
such CD-based Bandit (CDB) procedures. To this end, we identify the
requirements for stationary bandit algorithms and change detectors in a CDB
procedure that are needed for the modularization. We assume that the rewards
are sub-Gaussian. Under this assumption and a condition on the separation of
the change-points, we show that the analysis of CDB procedures can indeed be
modularized, so that regret bounds can be obtained in a unified manner for
various combinations of change detectors and bandit algorithms. Through this
analysis, we develop new modular CDB procedures that are order-optimal. We
compare the performance of our modular CDB procedures with various other
methods in simulations.
|
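The modular CDB recipe in the abstract above (a stationary bandit algorithm wrapped around a change detector that triggers a restart) can be sketched as follows. The detector and bandit are deliberately simple stand-ins: UCB1 plus a crude window-mean drift test in place of, say, a GLR or CUSUM detector, on Bernoulli (hence sub-Gaussian) rewards:

```python
import math
import random

random.seed(0)

def ucb_pick(counts, sums, t):
    # Stationary bandit module: UCB1 over post-restart statistics only.
    for a in range(len(counts)):
        if counts[a] == 0:
            return a
    return max(range(len(counts)),
               key=lambda a: sums[a] / counts[a]
                             + math.sqrt(2 * math.log(t) / counts[a]))

def change_detected(history, w=50, thresh=0.35):
    # Change-detector module: flag when the recent window mean of an arm's
    # rewards drifts far from its older mean.
    if len(history) < 2 * w:
        return False
    recent = sum(history[-w:]) / w
    older = sum(history[:-w]) / (len(history) - w)
    return abs(recent - older) > thresh

# Piecewise-stationary environment: arm means flip at the change-point.
means = [[0.9, 0.1], [0.1, 0.9]]     # before / after t = 1000
T, change_point, n_arms = 2000, 1000, 2

counts, sums = [0] * n_arms, [0.0] * n_arms
history = [[] for _ in range(n_arms)]
restarts, reward_total = 0, 0.0

for t in range(1, T + 1):
    a = ucb_pick(counts, sums, t)
    mu = means[0][a] if t <= change_point else means[1][a]
    r = 1.0 if random.random() < mu else 0.0
    counts[a] += 1; sums[a] += r; history[a].append(r)
    reward_total += r
    if change_detected(history[a]):
        # Modularity in action: on detection, restart the bandit from scratch.
        counts, sums = [0] * n_arms, [0.0] * n_arms
        history = [[] for _ in range(n_arms)]
        restarts += 1

print(restarts, round(reward_total / T, 2))
```

The point of the modular analysis is that either component can be swapped (a different detector, a different stationary bandit) and the regret bound is reassembled from the two modules' guarantees; the loop structure stays exactly as above.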
2501.01293 | LEO-Split: A Semi-Supervised Split Learning Framework over LEO Satellite
Networks | cs.LG cs.AI cs.DC cs.NI | Recently, the increasing deployment of LEO satellite systems has enabled
various space analytics (e.g., crop and climate monitoring), which heavily
relies on the advancements in deep learning (DL). However, the intermittent
connectivity between LEO satellites and the ground station (GS) significantly
hinders the timely transmission of raw data to GS for centralized learning,
while the scaled-up DL models hamper distributed learning on
resource-constrained LEO satellites. Though split learning (SL) can be a
potential solution to these problems by partitioning a model and offloading
primary training workload to GS, the labor-intensive labeling process remains
an obstacle, with intermittent connectivity and data heterogeneity being other
challenges. In this paper, we propose LEO-Split, a semi-supervised (SS) SL
design tailored for satellite networks to combat these challenges. Leveraging
SS learning to handle (labeled) data scarcity, we construct an auxiliary model
to tackle training failures during satellite-GS non-contact time. Moreover,
we propose a pseudo-labeling algorithm to rectify data imbalances across
satellites. Lastly, an adaptive activation interpolation scheme is devised to
prevent the overfitting of server-side sub-model training at GS. Extensive
experiments with real-world LEO satellite traces (e.g., Starlink) demonstrate
that our LEO-Split framework achieves superior performance compared to
state-of-the-art benchmarks.
|
2501.01303 | Citations and Trust in LLM Generated Responses | cs.CL cs.AI | Question answering systems are rapidly advancing, but their opaque nature may
impact user trust. We explored trust through an anti-monitoring framework,
where trust is predicted to be correlated with presence of citations and
inversely related to checking citations. We tested this hypothesis with a live
question-answering experiment that presented text responses generated using a
commercial Chatbot along with varying citations (zero, one, or five), both
relevant and random, and recorded if participants checked the citations and
their self-reported trust in the generated responses. We found a significant
increase in trust when citations were present, a result that held true even
when the citations were random; we also found a significant decrease in trust
when participants checked the citations. These results highlight the importance
of citations in enhancing trust in AI-generated content.
|
2501.01305 | Large Language Models for Mental Health Diagnostic Assessments:
Exploring The Potential of Large Language Models for Assisting with Mental
Health Diagnostic Assessments -- The Depression and Anxiety Case | cs.CL | Large language models (LLMs) are increasingly attracting the attention of
healthcare professionals for their potential to assist in diagnostic
assessments, which could alleviate the strain on the healthcare system caused
by a high patient load and a shortage of providers. For LLMs to be effective in
supporting diagnostic assessments, it is essential that they closely replicate
the standard diagnostic procedures used by clinicians. In this paper, we
specifically examine the diagnostic assessment processes described in the
Patient Health Questionnaire-9 (PHQ-9) for major depressive disorder (MDD) and
the Generalized Anxiety Disorder-7 (GAD-7) questionnaire for generalized
anxiety disorder (GAD). We investigate various prompting and fine-tuning
techniques to guide both proprietary and open-source LLMs in adhering to these
processes, and we evaluate the agreement between LLM-generated diagnostic
outcomes and expert-validated ground truth. For fine-tuning, we utilize the
Mentalllama and Llama models, while for prompting, we experiment with
proprietary models like GPT-3.5 and GPT-4o, as well as open-source models such
as llama-3.1-8b and mixtral-8x7b.
|
2501.01306 | Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process
of Fast and Slow Thinking | cs.CL | Large language models (LLMs) demonstrate exceptional capabilities, yet still
face the hallucination issue. Typical text generation approaches adopt
auto-regressive generation without deliberate reasoning, which often results in
untrustworthy and factually inaccurate responses. In this paper, we propose
HaluSearch, a novel framework that incorporates tree search-based algorithms
(e.g. MCTS) to enable an explicit slow thinking generation process for
mitigating hallucinations of LLMs during inference. Specifically, HaluSearch
frames text generation as a step-by-step reasoning process, using a
self-evaluation reward model to score each generation step and guide the tree
search towards the most reliable generation pathway for fully exploiting the
internal knowledge of LLMs. To balance efficiency and quality, we introduce a
hierarchical thinking system switch mechanism inspired by the dual process
theory in cognitive science, which dynamically alternates between fast and slow
thinking modes at both the instance and step levels, adapting to the complexity
of questions and reasoning states. We conduct extensive experiments on both
English and Chinese datasets and the results show that our approach
significantly outperforms baseline approaches.
|
2501.01311 | Multi-Head Explainer: A General Framework to Improve Explainability in
CNNs and Transformers | cs.CV cs.AI | In this study, we introduce the Multi-Head Explainer (MHEX), a versatile and
modular framework that enhances both the explainability and accuracy of
Convolutional Neural Networks (CNNs) and Transformer-based models. MHEX
consists of three core components: an Attention Gate that dynamically
highlights task-relevant features, Deep Supervision that guides early layers to
capture fine-grained details pertinent to the target class, and an Equivalent
Matrix that unifies refined local and global representations to generate
comprehensive saliency maps. Our approach demonstrates superior compatibility,
enabling effortless integration into existing residual networks like ResNet and
Transformer architectures such as BERT with minimal modifications. Extensive
experiments on benchmark datasets in medical imaging and text classification
show that MHEX not only improves classification accuracy but also produces
highly interpretable and detailed saliency scores.
|
2501.01312 | Learning Spectral Methods by Transformers | stat.ML cs.LG math.ST stat.TH | Transformers demonstrate significant advantages as the building block of
modern LLMs. In this work, we study the capacities of Transformers in
performing unsupervised learning. We show that multi-layered Transformers,
given a sufficiently large set of pre-training instances, are able to learn the
algorithms themselves and perform statistical estimation tasks given new
instances. This learning paradigm is distinct from the in-context learning
setup and is similar to the learning procedure of human brains where skills are
learned through past experience. Theoretically, we prove that pre-trained
Transformers can learn the spectral methods and use the classification of
bi-class Gaussian mixture model as an example. Our proof is constructive using
algorithmic design techniques. Our results are built upon the similarities of
multi-layered Transformer architecture with the iterative recovery algorithms
used in practice. Empirically, we verify the strong capacity of the
multi-layered (pre-trained) Transformer on unsupervised learning through the
lens of both the PCA and the Clustering tasks performed on the synthetic and
real-world datasets.
|
2501.01317 | Understanding Difficult-to-learn Examples in Contrastive Learning: A
Theoretical Framework for Spectral Contrastive Learning | cs.LG cs.AI | Unsupervised contrastive learning has shown significant performance
improvements in recent years, often approaching or even rivaling supervised
learning in various tasks. However, its learning mechanism is fundamentally
different from that of supervised learning. Previous works have shown that
difficult-to-learn examples (well-recognized in supervised learning as examples
around the decision boundary), which are essential in supervised learning,
contribute minimally in unsupervised settings. In this paper, perhaps
surprisingly, we find that the direct removal of difficult-to-learn examples,
although it reduces the sample size, can boost the downstream classification
performance of contrastive learning. To uncover the reasons behind this, we
develop a theoretical framework modeling the similarity between different pairs
of samples. Guided by this theoretical framework, we conduct a thorough
theoretical analysis revealing that the presence of difficult-to-learn examples
negatively affects the generalization of contrastive learning. Furthermore, we
demonstrate that the removal of these examples, as well as techniques such as
margin tuning and temperature scaling, can enhance its generalization bounds, thereby
improving performance. Empirically, we propose a simple and efficient mechanism
for selecting difficult-to-learn examples and validate the effectiveness of the
aforementioned methods, which substantiates the reliability of our proposed
theoretical framework.
|
2501.01320 | SeedVR: Seeding Infinity in Diffusion Transformer Towards Generic Video
Restoration | cs.CV | Video restoration poses non-trivial challenges in maintaining fidelity while
recovering temporally consistent details from unknown degradations in the wild.
Despite recent advances in diffusion-based restoration, these methods often
face limitations in generation capability and sampling efficiency. In this
work, we present SeedVR, a diffusion transformer designed to handle real-world
video restoration with arbitrary length and resolution. The core design of
SeedVR lies in the shifted window attention that facilitates effective
restoration on long video sequences. SeedVR further supports variable-sized
windows near the boundary of both spatial and temporal dimensions, overcoming
the resolution constraints of traditional window attention. Equipped with
contemporary practices, including causal video autoencoder, mixed image and
video training, and progressive training, SeedVR achieves highly-competitive
performance on both synthetic and real-world benchmarks, as well as
AI-generated videos. Extensive experiments demonstrate SeedVR's superiority
over existing methods for generic video restoration.
|
2501.01323 | Kiri-Spoon: A Kirigami Utensil for Robot-Assisted Feeding | cs.RO | For millions of adults with mobility limitations, eating meals is a daily
challenge. A variety of robotic systems have been developed to address this
societal need. Unfortunately, end-user adoption of robot-assisted feeding is
limited, in part because existing devices are unable to seamlessly grasp,
manipulate, and feed diverse foods. Recent works seek to address this issue by
creating new algorithms for food acquisition and bite transfer. In parallel to
these algorithmic developments, however, we hypothesize that mechanical
intelligence will make it fundamentally easier for robot arms to feed humans.
We therefore propose Kiri-Spoon, a soft utensil specifically designed for
robot-assisted feeding. Kiri-Spoon consists of a spoon-shaped kirigami
structure: when actuated, the kirigami sheet deforms into a bowl of increasing
curvature. Robot arms equipped with Kiri-Spoon can leverage the kirigami
structure to wrap-around morsels during acquisition, contain those items as the
robot moves, and then compliantly release the food into the user's mouth.
Overall, Kiri-Spoon combines the familiar and comfortable shape of a standard
spoon with the increased capabilities of soft robotic grippers. In what
follows, we first apply a stakeholder-driven design process to ensure that
Kiri-Spoon meets the needs of caregivers and users with physical disabilities.
We next characterize the dynamics of Kiri-Spoon, and derive a mechanics model
to relate actuation force to the spoon's shape. The paper concludes with three
separate experiments that evaluate (a) the mechanical advantage provided by
Kiri-Spoon, (b) the ways users with disabilities perceive our system, and (c)
how the mechanical intelligence of Kiri-Spoon complements state-of-the-art
algorithms. Our results suggest that Kiri-Spoon advances robot-assisted feeding
across diverse foods, multiple robotic platforms, and different manipulation
algorithms.
|
2501.01326 | Domain-invariant feature learning in brain MR imaging for content-based
image retrieval | cs.LG cs.CV cs.IR | When conducting large-scale studies that collect brain MR images from
multiple facilities, the impact of differences in imaging equipment and
protocols at each site cannot be ignored, and this domain gap has become a
significant issue in recent years. In this study, we propose a new
low-dimensional representation (LDR) acquisition method called style encoder
adversarial domain adaptation (SE-ADA) to realize content-based image retrieval
(CBIR) of brain MR images. SE-ADA reduces domain differences while preserving
pathological features by separating domain-specific information from LDR and
minimizing domain differences using adversarial learning.
In evaluation experiments comparing SE-ADA with recent domain harmonization
methods on eight public brain MR datasets (ADNI1/2/3, OASIS1/2/3/4, PPMI),
SE-ADA effectively removed domain information while preserving key aspects of
the original brain structure and demonstrated the highest disease search
accuracy.
|
2501.01327 | Enhancement of Neural Inertial Regression Networks: A Data-Driven
Perspective | cs.RO eess.SP | Inertial sensors are integral components in numerous applications, powering
crucial features in robotics and our daily lives. In recent years, deep
learning has significantly advanced inertial sensing performance and
robustness. Deep-learning techniques are used in different domains and
platforms to enhance network performance, but no common benchmark is available.
The latter is critical for fair comparison and evaluation in a standardized
framework as well as development in the field. To fill this gap, we define and
thoroughly analyze 13 data-driven techniques for improving neural inertial
regression networks. A focus is placed on three aspects of neural networks:
network architecture, data augmentation, and data preprocessing. Extensive
experiments were conducted across six diverse datasets collected from
various platforms, including quadrotors, doors, pedestrians, and mobile robots.
In total, over 1079 minutes of inertial data sampled at 120-200 Hz were
analyzed. Our results demonstrate that data augmentation through rotation and
noise addition consistently yields the most significant improvements. Moreover,
this study outlines benchmarking strategies for enhancing neural inertial
regression networks.
|
2501.01329 | The Prompt Alchemist: Automated LLM-Tailored Prompt Optimization for
Test Case Generation | cs.SE cs.AI cs.CL | Test cases are essential for validating the reliability and quality of
software applications. Recent studies have demonstrated the capability of Large
Language Models (LLMs) to generate useful test cases for given source code.
However, the existing work primarily relies on human-written plain prompts,
which often leads to suboptimal results since the performance of LLMs can be
highly influenced by the prompts. Moreover, these approaches use the same
prompt for all LLMs, overlooking the fact that different LLMs might be best
suited to different prompts. Given the wide variety of possible prompt
formulations, automatically discovering the optimal prompt for each LLM
presents a significant challenge. Although there are methods on automated
prompt optimization in the natural language processing field, they struggle to
produce effective prompts for the test case generation task. First, the methods
iteratively optimize prompts by simply combining and mutating existing ones
without proper guidance, resulting in prompts that lack diversity and tend to
repeat the same errors in the generated test cases. Second, the prompts
generally lack domain contextual knowledge, limiting LLMs' performance in
the task.
|
2501.01332 | Decoding Knowledge in Large Language Models: A Framework for
Categorization and Comprehension | cs.CL | Understanding how large language models (LLMs) acquire, retain, and apply
knowledge remains an open challenge. This paper introduces a novel framework,
K-(CSA)^2, which categorizes LLM knowledge along two dimensions: correctness
and confidence. The framework defines six categories of knowledge, ranging from
highly confident correctness to confidently held misconceptions, enabling a
nuanced evaluation of model comprehension beyond binary accuracy. Using this
framework, we demonstrate how techniques like chain-of-thought (CoT) prompting and
reinforcement learning with human feedback fundamentally alter the knowledge
structures of internal (pre-trained) and external (context-dependent) knowledge
in LLMs. CoT particularly enhances base model performance and shows synergistic
benefits when applied to aligned LLMs. Moreover, our layer-wise analysis
reveals that higher layers in LLMs encode more high-confidence knowledge, while
low-confidence knowledge tends to emerge in middle-to-lower layers.
|
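The six-category grid described in the K-(CSA)^2 abstract above (correctness crossed with confidence) can be made concrete with a tiny classifier. The three confidence bands and their thresholds are assumptions for the sketch, since the paper's exact confidence measure is not reproduced here:

```python
def k_csa2_category(correct: bool, confidence: float) -> str:
    # Illustrative 2x3 grid: correctness x {high, medium, low} confidence,
    # yielding the six categories the framework describes. The band edges
    # (0.7, 0.4) are hypothetical values chosen for this sketch.
    if confidence >= 0.7:
        band = "highly confident"
    elif confidence >= 0.4:
        band = "moderately confident"
    else:
        band = "uncertain"
    return f"{band} {'correct' if correct else 'incorrect'}"

print(k_csa2_category(True, 0.9))    # "highly confident correct"
print(k_csa2_category(False, 0.85))  # a confidently held misconception
```

Binary accuracy would collapse this grid to its first coordinate; keeping both coordinates is what lets the framework separate, for example, a lucky low-confidence guess from genuinely held knowledge.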
2501.01333 | On the Robustness of Cover Version Identification Models: A Study Using
Cover Versions from YouTube | cs.MM cs.IR cs.SI | Recent advances in cover song identification have shown great success.
However, models are usually tested on a fixed set of datasets which rely on
on the online cover song database SecondHandSongs. It is unclear how well
models perform on cover songs on online video platforms, which might exhibit
alterations that are not expected. In this paper, we annotate a subset of songs
from YouTube sampled by a multi-modal uncertainty sampling approach and
evaluate state-of-the-art models. We find that existing models achieve
significantly lower ranking performance on our dataset compared to a community
dataset. We additionally measure the performance of different types of versions
(e.g., instrumental versions) and find several types that are particularly hard
to rank. Lastly, we provide a taxonomy of alterations in cover versions on the
web.
|
2501.01335 | CySecBench: Generative AI-based CyberSecurity-focused Prompt Dataset for
Benchmarking Large Language Models | cs.CR cs.AI cs.LG | Numerous studies have investigated methods for jailbreaking Large Language
Models (LLMs) to generate harmful content. Typically, these methods are
evaluated using datasets of malicious prompts designed to bypass security
policies established by LLM providers. However, the generally broad scope and
open-ended nature of existing datasets can complicate the assessment of
jailbreaking effectiveness, particularly in specific domains, notably
cybersecurity. To address this issue, we present and publicly release
CySecBench, a comprehensive dataset containing 12662 prompts specifically
designed to evaluate jailbreaking techniques in the cybersecurity domain. The
dataset is organized into 10 distinct attack-type categories, featuring
close-ended prompts to enable a more consistent and accurate assessment of
jailbreaking attempts. Furthermore, we detail our methodology for dataset
generation and filtration, which can be adapted to create similar datasets in
other domains. To demonstrate the utility of CySecBench, we propose and
evaluate a jailbreaking approach based on prompt obfuscation. Our experimental
results show that this method successfully elicits harmful content from
commercial black-box LLMs, achieving Success Rates (SRs) of 65% with ChatGPT
and 88% with Gemini; in contrast, Claude demonstrated greater resilience with a
jailbreaking SR of 17%. Compared to existing benchmark approaches, our method
shows superior performance, highlighting the value of domain-specific
evaluation datasets for assessing LLM security measures. Moreover, when
evaluated using prompts from a widely used dataset (i.e., AdvBench), it
achieved an SR of 78.5%, higher than state-of-the-art methods.
|
2501.01336 | Aligning Large Language Models for Faithful Integrity Against Opposing
Argument | cs.CL | Large Language Models (LLMs) have demonstrated impressive capabilities in
complex reasoning tasks. However, they can be easily misled by unfaithful
arguments during conversations, even when their original statements are
correct. To this end, we investigate the problem of maintaining faithful
integrity in LLMs. This involves ensuring that LLMs adhere to their faithful
statements in the face of opposing arguments and are able to correct their
incorrect statements when presented with faithful arguments. In this work, we
propose a novel framework, named Alignment for Faithful Integrity with
Confidence Estimation (AFICE), which aims to align the LLM responses with
faithful integrity. Specifically, AFICE first designs a Bilateral Confidence
Estimation (BCE) approach for estimating the uncertainty of each response
generated by the LLM given a specific context, which simultaneously estimates
the model's confidence in the question, based on internal states during
decoding, and in the answer, based on cumulative probability ratios. With
the BCE, we construct a conversational preference dataset composed of context,
original statement, and argument, which is adopted for aligning the LLM for
faithful integrity using Direct Preference Optimization (DPO). Extensive
experimental results on a wide range of benchmarks demonstrate significant
improvements in the LLM's ability to maintain faithful responses when
encountering opposing arguments, ensuring both the practical utility and
trustworthiness of LLMs in complex interactive settings. Code and data will be
released via https://github.com/zhaoy777/AFICE.git
|
2501.01339 | Simultaneous Latent State Estimation and Latent Linear Dynamics
Discovery from Image Observations | cs.LG | The problem of state estimation has a long history, with many successful
algorithms that allow analytical derivation or approximation of the posterior
filtering distribution given noisy observations. This report synthesizes
previous work to address the problem of latent state estimation from
image-based observations and also proposes a new solution to this problem.
|
2501.01342 | DeepFilter: An Instrumental Baseline for Accurate and Efficient Process
Monitoring | cs.AI cs.LG | Effective process monitoring is increasingly vital in industrial automation
for ensuring operational safety, necessitating both high accuracy and
efficiency. Although Transformers have demonstrated success in various fields,
their canonical form based on the self-attention mechanism is inadequate for
process monitoring due to two primary limitations: (1) the step-wise
correlations captured by the self-attention mechanism struggle to capture
discriminative patterns in monitoring logs because individual steps lack
semantics, thus compromising accuracy; (2) the quadratic computational
complexity of self-attention hampers efficiency. To address these issues, we
propose DeepFilter, a Transformer-style framework for process monitoring. The
core innovation is an efficient filtering layer that excels at capturing
long-term and periodic patterns with reduced complexity. Equipped with the global filtering
layer, DeepFilter enhances both accuracy and efficiency, meeting the stringent
demands of process monitoring. Experimental results on real-world process
monitoring datasets validate DeepFilter's superiority in terms of accuracy and
efficiency compared to existing state-of-the-art models.
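One plausible realization of such a global filtering layer is frequency-domain multiplication via the FFT, which mixes all time steps in O(n log n) rather than the O(n^2) of self-attention. The sketch below is our own illustration under that assumption; the layer name, shapes, and NumPy implementation are not from the paper.

```python
import numpy as np

def global_filter_layer(x, w_freq):
    """Apply a learnable frequency-domain filter to a time series.

    x:      (seq_len, d) real-valued monitoring-log features
    w_freq: (seq_len // 2 + 1, d) complex filter weights (learnable)

    FFT -> elementwise multiply -> inverse FFT runs in O(n log n),
    versus O(n^2) for pairwise self-attention.
    """
    X = np.fft.rfft(x, axis=0)                    # to frequency domain
    Y = X * w_freq                                # global (all-step) mixing per frequency
    return np.fft.irfft(Y, n=x.shape[0], axis=0)  # back to time domain

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 8))
w = np.ones((33, 8), dtype=complex)  # all-ones filter acts as the identity
y = global_filter_layer(x, w)
```

With an all-ones filter the layer reduces to the identity, which is a convenient sanity check; a trained layer would learn per-frequency weights instead.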
|
2501.01344 | Machine Learning for Modeling Wireless Radio Metrics with Crowdsourced
Data and Local Environment Features | cs.LG | This paper presents a suite of machine learning models, CRC-ML-Radio Metrics,
designed for modeling RSRP, RSRQ, and RSSI wireless radio metrics in 4G
environments. These models utilize crowdsourced data with local environmental
features to enhance prediction accuracy across both indoor (at elevation) and
outdoor urban settings. They achieve RMSE performance of 9.76 to 11.69 dB for
RSRP, 2.90 to 3.23 dB for RSRQ, and 9.50 to 10.36 dB for RSSI, evaluated on
over 300,000 data points in the Toronto, Montreal, and Vancouver areas. These
results demonstrate the robustness and adaptability of the models, supporting
precise network planning and quality of service optimization in complex
Canadian urban environments.
|
2501.01346 | Large Vision-Language Model Alignment and Misalignment: A Survey Through
the Lens of Explainability | cs.CV cs.CL | Large Vision-Language Models (LVLMs) have demonstrated remarkable
capabilities in processing both visual and textual information. However, the
critical challenge of alignment between visual and textual representations is
not fully understood. This survey presents a comprehensive examination of
alignment and misalignment in LVLMs through an explainability lens. We first
examine the fundamentals of alignment, exploring its representational and
behavioral aspects, training methodologies, and theoretical foundations. We
then analyze misalignment phenomena across three semantic levels: object,
attribute, and relational misalignment. Our investigation reveals that
misalignment emerges from challenges at multiple levels: the data level, the
model level, and the inference level. We provide a comprehensive review of
existing mitigation strategies, categorizing them into parameter-frozen and
parameter-tuning approaches. Finally, we outline promising future research
directions, emphasizing the need for standardized evaluation protocols and
in-depth explainability studies.
|
2501.01347 | AdaptVC: High Quality Voice Conversion with Adaptive Learning | cs.SD cs.CL eess.AS | The goal of voice conversion is to transform the speech of a source speaker
to sound like that of a reference speaker while preserving the original
content. A key challenge is to extract disentangled linguistic content from the
source and voice style from the reference. While existing approaches leverage
various methods to isolate the two, generalization still requires further
attention, especially for robustness in zero-shot scenarios. In this paper, we
achieve successful disentanglement of content and speaker features by tuning
self-supervised speech features with adapters. The adapters are trained to
dynamically encode nuanced features from rich self-supervised features, and the
decoder fuses them to produce speech that accurately resembles the reference
with minimal loss of content. Moreover, we leverage a conditional flow matching
decoder with cross-attention speaker conditioning to further boost the
synthesis quality and efficiency. Subjective and objective evaluations in a
zero-shot scenario demonstrate that the proposed method outperforms existing
models in speech quality and similarity to the reference speech.
|
2501.01349 | Rethinking Relation Extraction: Beyond Shortcuts to Generalization with
a Debiased Benchmark | cs.AI | Benchmarks are crucial for evaluating machine learning algorithm performance,
facilitating comparison and identifying superior solutions. However, biases
within datasets can lead models to learn shortcut patterns, resulting in
inaccurate assessments and hindering real-world applicability. This paper
addresses the issue of entity bias in relation extraction tasks, where models
tend to rely on entity mentions rather than context. We propose DREB, a
debiased relation extraction benchmark that breaks the pseudo-correlation between
entity mentions and relation types through entity replacement. DREB utilizes
Bias Evaluator and PPL Evaluator to ensure low bias and high naturalness,
providing a reliable and accurate assessment of model generalization in entity
bias scenarios. To establish a new baseline on DREB, we introduce MixDebias, a
debiasing method combining data-level and model training-level techniques.
MixDebias effectively improves model performance on DREB while maintaining
performance on the original dataset. Extensive experiments demonstrate the
effectiveness and robustness of MixDebias compared to existing methods,
highlighting its potential for improving the generalization ability of relation
extraction models. We will release DREB and MixDebias publicly.
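The entity-replacement idea behind DREB can be illustrated with a minimal sketch: swap each annotated entity mention for a random same-type entity, so a model can no longer exploit memorized entity-relation correlations. The pools, types, and function here are hypothetical; the actual benchmark additionally uses its Bias and PPL Evaluators.

```python
import random

# Hypothetical pools of same-type replacement entities (illustrative only).
ENTITY_POOL = {
    "PERSON": ["Alex Morgan", "Chris Doe", "Sam Lee"],
    "ORG": ["Acme Corp", "Globex", "Initech"],
}

def replace_entities(sentence, entities, rng=random.Random(0)):
    """Replace each annotated entity mention with a random same-type
    entity, breaking the mention-relation shortcut while keeping the
    relational context intact."""
    for mention, etype in entities:
        candidates = [e for e in ENTITY_POOL[etype] if e != mention]
        sentence = sentence.replace(mention, rng.choice(candidates))
    return sentence

s = "Steve Jobs founded Apple."
out = replace_entities(s, [("Steve Jobs", "PERSON"), ("Apple", "ORG")])
```

The relation-bearing context ("founded") survives the replacement, while the original entity mentions do not.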
|
2501.01353 | Privacy Preservation in MIMO-OFDM Localization Systems: A Beamforming
Approach | eess.SP cs.IT math.IT | We investigate an uplink MIMO-OFDM localization scenario where a legitimate
base station (BS) aims to localize a user equipment (UE) using pilot signals
transmitted by the UE, while an unauthorized BS attempts to localize the UE by
eavesdropping on these pilots, posing a risk to the UE's location privacy. To
enhance legitimate localization performance while protecting the UE's privacy,
we formulate an optimization problem regarding the beamformers at the UE,
aiming to minimize the Cram\'er-Rao bound (CRB) for legitimate localization
while constraining the CRB for unauthorized localization above a threshold. A
penalty dual decomposition optimization framework is employed to solve the
problem, leading to a novel beamforming approach for location privacy
preservation. Numerical results confirm the effectiveness of the proposed
approach and demonstrate its superiority over existing benchmarks.
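A schematic form of such a privacy-constrained beamforming problem, written in our own notation (W is the UE beamformer, gamma the privacy threshold, P_max the power budget; the paper's exact formulation may differ):

```latex
\begin{aligned}
\min_{\mathbf{W}} \quad & \mathrm{CRB}_{\mathrm{BS}}(\mathbf{W})
  && \text{(legitimate localization error bound)} \\
\text{s.t.} \quad & \mathrm{CRB}_{\mathrm{Eve}}(\mathbf{W}) \ge \gamma
  && \text{(privacy: keep the eavesdropper's bound high)} \\
& \|\mathbf{W}\|_F^2 \le P_{\max}
  && \text{(transmit power budget)}
\end{aligned}
```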
|
2501.01359 | Smoothing traffic flow through automated vehicle control with optimal
parameter selection | eess.SY cs.SY | Stop-and-go traffic waves are known for reducing the efficiency of
transportation systems by increasing traffic oscillations and energy
consumption. In this study, we develop an approach to synthesize a class of
additive feedback controllers for automated vehicles (AVs) to smooth nonlinear
mixed traffic flow, including both AVs and human-driven vehicles (HVs). Unlike
recent explicit AV controllers that rely on strict assumptions such as
time-varying equilibrium traffic speed, our proposed AV controller requires
only local traffic information, such as inter-vehicle spacing and relative
speed, which are readily available through AV onboard sensors. Essentially, it
allows a controlled AV to track a subtler version of the perturbed speed
profile resulting from its preceding vehicle, thereby enabling smoother traffic
flow. Additionally, we provide a method for selecting the optimal control
parameters to achieve traffic-smoothing effects efficiently. These unique
features of the developed AV controller ensure much higher implementability. We
demonstrate the effectiveness of the proposed approach through simulations of
two distinct traffic scenarios with varying levels of oscillation. The results
show that AVs using the proposed controller are capable of effectively reducing
traffic oscillations and lowering vehicle fuel consumption by up to 46.78\% and
2.74\%, respectively, for a platoon of 10 vehicles. The traffic-smoothing
effect of the controller is more pronounced at higher penetration rates of AVs.
While the performance of the proposed approach is slightly inferior to
that of the most recent additive AV controller, it offers greater
implementability and provides an efficient method for selecting optimal control
parameters.
|
2501.01366 | ViGiL3D: A Linguistically Diverse Dataset for 3D Visual Grounding | cs.CV cs.AI cs.CL | 3D visual grounding (3DVG) involves localizing entities in a 3D scene
referred to by natural language text. Such models are useful for embodied AI
and scene retrieval applications, which involve searching for objects or
patterns using natural language descriptions. While recent works have focused
on LLM-based scaling of 3DVG datasets, these datasets do not capture the full
range of potential prompts which could be specified in the English language. To
ensure that we are scaling up and testing against a useful and representative
set of prompts, we propose a framework for linguistically analyzing 3DVG
prompts and introduce Visual Grounding with Diverse Language in 3D (ViGiL3D), a
diagnostic dataset for evaluating visual grounding methods against a diverse
set of language patterns. We evaluate existing open-vocabulary 3DVG methods to
demonstrate that these methods are not yet proficient in understanding and
identifying the targets of more challenging, out-of-distribution prompts,
toward real-world applications.
|
2501.01367 | Contrastive Learning from Exploratory Actions: Leveraging Natural
Interactions for Preference Elicitation | cs.RO cs.AI cs.HC cs.LG | People have a variety of preferences for how robots behave. To understand and
reason about these preferences, robots aim to learn a reward function that
describes how aligned robot behaviors are with a user's preferences. Good
representations of a robot's behavior can significantly reduce the time and
effort required for a user to teach the robot their preferences. Specifying
these representations -- what "features" of the robot's behavior matter to
users -- remains a difficult problem: features learned from raw data lack
semantic meaning and features learned from user data require users to engage in
tedious labeling processes. Our key insight is that users tasked with
customizing a robot are intrinsically motivated to produce labels through
exploratory search; they explore behaviors that they find interesting and
ignore behaviors that are irrelevant. To harness this novel data source of
exploratory actions, we propose contrastive learning from exploratory actions
(CLEA) to learn trajectory features that are aligned with features that users
care about. We learned CLEA features from exploratory actions users performed
in an open-ended signal design activity (N=25) with a Kuri robot, and evaluated
CLEA features through a second user study with a different set of users (N=42).
CLEA features outperformed self-supervised features when eliciting user
preferences over four metrics: completeness, simplicity, minimality, and
explainability.
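One common way to instantiate such contrastive learning over trajectory embeddings is an InfoNCE-style loss: pull an explored behavior toward a similar explored behavior and push it away from ignored behaviors. The sketch below is our illustration; the embedding dimensions and exact loss form are assumptions, not necessarily CLEA's objective.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalized trajectory
    embeddings: -log softmax score of the positive pair."""
    pos = anchor @ positive / temperature
    negs = negatives @ anchor / temperature           # (k,) negative similarities
    logits = np.concatenate([[pos], negs])
    return -pos + np.log(np.sum(np.exp(logits)))

def normalize(v):
    return v / np.linalg.norm(v)

a = normalize(np.array([1.0, 0.0]))   # explored behavior (anchor)
p = normalize(np.array([0.9, 0.1]))   # similar explored behavior (positive)
n = np.stack([normalize(np.array([0.0, 1.0]))])  # ignored behavior (negative)
loss_good = info_nce(a, p, n)                 # aligned pair -> small loss
loss_bad = info_nce(a, n[0], np.stack([p]))   # mismatched pair -> large loss
```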
|
2501.01368 | Test-time Controllable Image Generation by Explicit Spatial Constraint
Enforcement | cs.CV | Recent text-to-image generation favors various forms of spatial conditions,
e.g., masks, bounding boxes, and key points. However, the majority of the prior
art requires form-specific annotations to fine-tune the original model, leading
to poor test-time generalizability. Meanwhile, existing training-free methods
work well only with simplified prompts and spatial conditions. In this work, we
propose a novel yet generic test-time controllable generation method that aims
at natural text prompts and complex conditions. Specifically, we decouple
spatial conditions into semantic and geometric conditions and then enforce
their consistency during the image-generation process individually. As for the
former, we aim to bridge the gap between the semantic condition and text
prompts, as well as the gap between such condition and the attention map from
diffusion models. To achieve this, we propose to first complete the prompt
w.r.t. semantic condition, and then remove the negative impact of distracting
prompt words by measuring their statistics in attention maps as well as
distances in word space w.r.t. this condition. To further cope with the complex
geometric conditions, we introduce a geometric transform module, in which
Region-of-Interests will be identified in attention maps and further used to
translate category-wise latents w.r.t. geometric condition. More importantly,
we propose a diffusion-based latents-refill method to explicitly remove the
impact of latents at the RoI, reducing the artifacts on generated images.
Experiments on the COCO-Stuff dataset showcase a 30$\%$ relative boost compared to
SOTA training-free methods on layout consistency evaluation metrics.
|
2501.01370 | Embedding-based Approaches to Hyperpartisan News Detection | cs.LG cs.CL | In this paper, we describe our systems in which the objective is to determine
whether a given news article could be considered as hyperpartisan.
Hyperpartisan news is news that takes an extremely polarized political
standpoint with an intention of creating political divide among the public. We
attempted several approaches, including n-grams, sentiment analysis, as well as
sentence and document representation using pre-trained ELMo. Our best system
using pre-trained ELMo with Bidirectional LSTM achieved an accuracy of 83%
through 10-fold cross-validation without much hyperparameter tuning.
|
2501.01371 | CLIP-UP: CLIP-Based Unanswerable Problem Detection for Visual Question
Answering | cs.CV | Recent Vision-Language Models (VLMs) have demonstrated remarkable
capabilities in visual understanding and reasoning, and in particular on
multiple-choice Visual Question Answering (VQA). Still, these models can make
distinctly unnatural errors, for example, providing (wrong) answers to
unanswerable VQA questions, such as questions asking about objects that do not
appear in the image. To address this issue, we propose CLIP-UP: CLIP-based
Unanswerable Problem detection, a novel lightweight method for equipping VLMs
with the ability to withhold answers to unanswerable questions. By leveraging
CLIP to extract question-image alignment information, CLIP-UP requires only
efficient training of a few additional layers, while keeping the original VLMs'
weights unchanged. Tested across LLaVA models, CLIP-UP achieves
state-of-the-art results on the MM-UPD benchmark for assessing unanswerability
in multiple-choice VQA, while preserving the original performance on other
tasks.
|
2501.01372 | ScarNet: A Novel Foundation Model for Automated Myocardial Scar
Quantification from LGE in Cardiac MRI | eess.IV cs.AI cs.CV | Background: Late Gadolinium Enhancement (LGE) imaging is the gold standard
for assessing myocardial fibrosis and scarring, with left ventricular (LV) LGE
extent predicting major adverse cardiac events (MACE). Despite its importance,
routine LGE-based LV scar quantification is hindered by labor-intensive manual
segmentation and inter-observer variability. Methods: We propose ScarNet, a
hybrid model combining a transformer-based encoder from the Medical Segment
Anything Model (MedSAM) with a convolution-based U-Net decoder, enhanced by
tailored attention blocks. ScarNet was trained on 552 ischemic cardiomyopathy
patients with expert segmentations of myocardial and scar boundaries and tested
on 184 separate patients. Results: ScarNet achieved robust scar segmentation in
184 test patients, yielding a median Dice score of 0.912 (IQR: 0.863--0.944),
significantly outperforming MedSAM (median Dice = 0.046, IQR: 0.043--0.047) and
nnU-Net (median Dice = 0.638, IQR: 0.604--0.661). ScarNet demonstrated lower
bias (-0.63%) and coefficient of variation (4.3%) compared to MedSAM (bias:
-13.31%, CoV: 130.3%) and nnU-Net (bias: -2.46%, CoV: 20.3%). In Monte Carlo
simulations with noise perturbations, ScarNet achieved significantly higher
scar Dice (0.892 \pm 0.053, CoV = 5.9%) than MedSAM (0.048 \pm 0.112, CoV =
233.3%) and nnU-Net (0.615 \pm 0.537, CoV = 28.7%). Conclusion: ScarNet
outperformed MedSAM and nnU-Net in accurately segmenting myocardial and scar
boundaries in LGE images. The model exhibited robust performance across diverse
image qualities and scar patterns.
|
2501.01375 | Iris Recognition for Infants | cs.CV | Non-invasive, efficient, physical token-less, accurate and stable
identification methods for newborns may prevent baby swapping at birth, limit
baby abductions and improve post-natal health monitoring across geographies,
within the context of both the formal (i.e., hospitals) and informal (i.e.,
humanitarian and fragile settings) health sectors. This paper explores the
feasibility of applying iris recognition to build biometric identifiers for
4-6 week old infants. We (a) collected near infrared (NIR) iris images from 17
infants using a specially-designed NIR iris sensor; (b) evaluated six iris
recognition methods to assess readiness of the state-of-the-art iris
recognition to be applied to newborns and infants; (c) proposed a new
segmentation model that correctly detects iris texture within infant iris
images, and coupled it with several iris texture encoding approaches to offer,
to the best of our knowledge, the first fully operational infant iris recognition
system; and, (d) trained a StyleGAN-based model to synthesize iris images
mimicking samples acquired from infants to deliver to the research community
privacy-safe infant iris images. The proposed system, incorporating the
specially-designed iris sensor and segmenter, and applied to the collected
infant iris samples, achieved Equal Error Rate (EER) of 3\% and Area Under ROC
Curve (AUC) of 99\%, compared to EER$\geq$20\% and AUC$\leq$88\% obtained for
state of the art adult iris recognition systems. This suggests that it may be
feasible to design methods that successfully extract biometric features from
infant irises.
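The reported EER can be computed from genuine (same-iris) and impostor (different-iris) comparison scores as the operating point where the false rejection rate equals the false acceptance rate. A minimal sketch on synthetic scores (not the paper's data):

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal Error Rate: threshold where FRR (genuine scores below
    threshold) equals FAR (impostor scores at or above threshold).
    Higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))       # closest crossing point
    return (frr[i] + far[i]) / 2.0

genuine = np.array([0.9, 0.8, 0.85, 0.95])   # same-iris comparison scores
impostor = np.array([0.1, 0.2, 0.15, 0.3])   # different-iris scores
eer = compute_eer(genuine, impostor)
```

Perfectly separated score distributions, as here, yield an EER of 0; overlapping distributions push the EER toward 50%.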
|
2501.01377 | Training Medical Large Vision-Language Models with Abnormal-Aware
Feedback | cs.CL cs.AI cs.CV cs.LG | Existing Medical Large Vision-Language Models (Med-LVLMs), which encapsulate
extensive medical knowledge, demonstrate excellent capabilities in
understanding medical images and responding to human queries based on these
images. However, there remain challenges in visual localization in medical
images, which is crucial for abnormality detection and interpretation. To
address these issues, we propose UMed-LVLM, a novel model designed to unveil
medical abnormalities. Specifically, we collect a Medical Abnormalities
Unveiling (MAU) dataset and propose a two-stage training method for UMed-LVLM.
To collect the MAU dataset, we propose a prompting method that uses GPT-4V to
generate diagnoses based on identified abnormal areas in medical
images. Moreover, the two-stage training method includes Abnormal-Aware
Instruction Tuning and Abnormal-Aware Rewarding, comprising Abnormal
Localization Rewarding and Vision Relevance Rewarding. Experimental results
demonstrate that our UMed-LVLM surpasses existing Med-LVLMs in identifying and
understanding medical abnormalities. In addition, this work shows that enhancing
the abnormality detection capabilities of Med-LVLMs significantly improves
their understanding of medical images and generalization capability.
|
2501.01383 | Electrical networks and data analysis in phylogenetics | math.CO cs.IT math-ph math.IT math.MP q-bio.PE | A classic problem in data analysis is studying the systems of subsets defined
by either a similarity or a dissimilarity function on $X$ which is either
observed directly or derived from a data set. For an electrical network there
are two functions on the set of the nodes defined by the resistance matrix and
the response matrix either of which defines the network completely. We argue
that these functions should be viewed as a similarity and a dissimilarity
function on the set of the nodes; moreover, they are related via the covariance
mapping also known as the Farris transform or the Gromov product. We will
explore the properties of electrical networks from this point of view. It has
been known for a while that the resistance matrix defines a metric on the nodes
of the electrical networks. Moreover for a circular electrical network this
metric obeys the Kalmanson property as it was shown recently. We will call such
a metric an electrical Kalmanson metric. The main result of this paper is a
complete description of the electrical Kalmanson metrics in the set of all
Kalmanson metrics in terms of the geometry of the positive Isotropic
Grassmannian whose connection to the theory of electrical networks was
discovered earlier. One important area of applications where Kalmanson metrics
are actively used is the theory of phylogenetic networks which are a
generalization of phylogenetic trees. Our results allow us to use in
phylogenetics the powerful methods of reconstruction of the minimal graphs of
electrical networks and possibly open the door into data analysis for the
methods of the theory of cluster algebras.
|
2501.01384 | OmniChat: Enhancing Spoken Dialogue Systems with Scalable Synthetic Data
for Diverse Scenarios | cs.CL cs.HC cs.SD eess.AS | With the rapid development of large language models, researchers have created
increasingly advanced spoken dialogue systems that can naturally converse with
humans. However, these systems still struggle to handle the full complexity of
real-world conversations, including audio events, musical contexts, and
emotional expressions, mainly because current dialogue datasets are constrained
in both scale and scenario diversity. In this paper, we propose leveraging
synthetic data to enhance the dialogue models across diverse scenarios. We
introduce ShareChatX, the first comprehensive, large-scale dataset for spoken
dialogue that spans diverse scenarios. Based on this dataset, we introduce
OmniChat, a multi-turn dialogue system with a heterogeneous feature fusion
module, designed to optimize feature selection in different dialogue contexts.
In addition, we explored critical aspects of training dialogue systems using
synthetic data. Through comprehensive experimentation, we determined the ideal
balance between synthetic and real data, achieving state-of-the-art results on
the real-world dialogue dataset DailyTalk. We also highlight the crucial
importance of synthetic data in tackling diverse, complex dialogue scenarios,
especially those involving audio and music. For more details, please visit our
demo page at \url{https://sharechatx.github.io/}.
|
2501.01389 | Optimal Strategy Revision in Population Games: A Mean Field Game Theory
Perspective | cs.MA cs.GT | This paper investigates the design of optimal strategy revision in Population
Games (PG) by establishing its connection to finite-state Mean Field Games
(MFG). Specifically, by linking Evolutionary Dynamics (ED) -- which models
agent decision-making in PG -- to the MFG framework, we demonstrate that
optimal strategy revision can be derived by solving the forward Fokker-Planck
(FP) equation and the backward Hamilton-Jacobi (HJ) equation, both central
components of the MFG framework. Furthermore, we show that the resulting
optimal strategy revision satisfies two key properties: positive correlation
and Nash stationarity, which are essential for ensuring convergence to the Nash
equilibrium. This convergence is then rigorously analyzed and established.
Additionally, we discuss how different design objectives for the optimal
strategy revision can recover existing ED models previously reported in the PG
literature. Numerical examples are provided to illustrate the effectiveness and
improved convergence properties of the optimal strategy revision design.
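Schematically, a finite-state MFG couples a backward HJ equation for the value function u with a forward FP (Kolmogorov) equation for the population distribution m. In generic notation (transition rates lambda_ij(a), running cost c_i, optimal controls a^*; not necessarily the paper's exact system):

```latex
\begin{aligned}
&\text{(HJ, backward)} &&
 -\dot{u}_i(t) = \min_{a}\Big[ c_i\big(a, m(t)\big)
   + \sum_{j \ne i} \lambda_{ij}(a)\,\big(u_j(t) - u_i(t)\big) \Big],
 \quad u_i(T) = g_i, \\
&\text{(FP, forward)} &&
 \dot{m}_i(t) = \sum_{j \ne i} m_j(t)\,\lambda_{ji}(a_j^{*})
   - m_i(t) \sum_{j \ne i} \lambda_{ij}(a_i^{*}),
 \quad m(0) = m_0.
\end{aligned}
```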
|
2501.01392 | ProjectedEx: Enhancing Generation in Explainable AI for Prostate Cancer | eess.IV cs.CV | Prostate cancer, a growing global health concern, necessitates precise
diagnostic tools, with Magnetic Resonance Imaging (MRI) offering
high-resolution soft tissue imaging that significantly enhances diagnostic
accuracy. Recent advancements in explainable AI and representation learning
have significantly improved prostate cancer diagnosis by enabling automated and
precise lesion classification. However, existing explainable AI methods,
particularly those based on frameworks like generative adversarial networks
(GANs), are predominantly developed for natural image generation, and their
application to medical imaging often leads to suboptimal performance due to the
unique characteristics and complexity of medical images. To address these
challenges, our paper introduces three key contributions. First, we propose
ProjectedEx, a generative framework that provides interpretable,
multi-attribute explanations, effectively linking medical image features to
classifier decisions. Second, we enhance the encoder module by incorporating
feature pyramids, which enables multiscale feedback to refine the latent space
and improves the quality of generated explanations. Additionally, we conduct
comprehensive experiments on both the generator and classifier, demonstrating
the clinical relevance and effectiveness of ProjectedEx in enhancing
interpretability and supporting the adoption of AI in medical settings. Code
will be released at https://github.com/Richardqiyi/ProjectedEx
|
2501.01393 | Learning 3D Garment Animation from Trajectories of A Piece of Cloth | cs.CV cs.GR | Garment animation is ubiquitous in various applications, such as virtual
reality, gaming, and film production. Recently, learning-based approaches have
obtained compelling performance in animating diverse garments under versatile
scenarios. Nevertheless, to mimic the deformations of observed garments,
data-driven methods require large-scale garment data, which are both
resource-intensive and time-consuming to collect. In addition, forcing models
to match the dynamics of observed garment animation may hinder their potential
to generalize to unseen cases. In this paper, instead of using garment-wise
supervised learning, we adopt a disentangled scheme to learn how to animate
observed garments: (1) learning constitutive behaviors from the observed
cloth; and (2) dynamically animating various garments constrained by the
learned constitutive laws.
Specifically, we propose Energy Unit network (EUNet) to model the constitutive
relations in the format of energy. Without the priors from analytical physics
models and differentiable simulation engines, EUNet is able to directly capture
the constitutive behaviors from the observed piece of cloth and uniformly
describes the change of energy caused by deformations, such as stretching and
bending. We further apply the pre-trained EUNet to animate various garments
based on energy optimizations. The disentangled scheme alleviates the need for
garment data and enables us to utilize the dynamics of a piece of cloth for
animating garments. Experiments show that while EUNet effectively delivers the
energy gradients due to the deformations, models constrained by EUNet achieve
more stable and physically plausible performance compared with those trained
in a garment-wise supervised manner. Code is available at
https://github.com/ftbabi/EUNet_NeurIPS2024.git .
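The energy-optimization step can be illustrated on a toy analytic energy: descend the gradient of a spring (stretch) energy over a pinned chain of vertices until the deformation relaxes. Everything here (the quadratic energy, 1D chain, step size) is a stand-in for the learned EUNet energy, not the paper's model.

```python
import numpy as np

def energy(x, rest_len=1.0, k=10.0):
    """Stand-in for a learned energy: sum of spring (stretch) energies
    along a chain of 1D vertices with rest length rest_len."""
    stretch = np.diff(x) - rest_len
    return 0.5 * k * np.sum(stretch ** 2)

def energy_grad(x, rest_len=1.0, k=10.0):
    """Analytic gradient of the spring energy w.r.t. vertex positions."""
    stretch = np.diff(x) - rest_len
    g = np.zeros_like(x)
    g[:-1] -= k * stretch
    g[1:] += k * stretch
    return g

# Animate by descending the energy, as in optimization-based simulation.
x = np.array([0.0, 0.5, 2.5, 3.0])   # chain with over/under-stretched edges
for _ in range(500):
    g = energy_grad(x)
    g[0] = 0.0                        # pin the first vertex (boundary condition)
    x -= 0.01 * g
```

At convergence every edge returns to its rest length, so the pinned chain relaxes to evenly spaced vertices.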
|
2501.01394 | A Unified Hyperparameter Optimization Pipeline for Transformer-Based
Time Series Forecasting Models | cs.LG cs.AI | Transformer-based models for time series forecasting (TSF) have attracted
significant attention in recent years due to their effectiveness and
versatility. However, these models often require extensive hyperparameter
optimization (HPO) to achieve the best possible performance, and a unified
pipeline for HPO in transformer-based TSF remains lacking. In this paper, we
present one such pipeline and conduct extensive experiments on several
state-of-the-art (SOTA) transformer-based TSF models. These experiments are
conducted on standard benchmark datasets to evaluate and compare the
performance of different models, generating practical insights and examples.
Our pipeline is generalizable beyond transformer-based architectures and can be
applied to other SOTA models, such as Mamba and TimeMixer, as demonstrated in
our experiments. The goal of this work is to provide valuable guidance to both
industry practitioners and academic researchers in efficiently identifying
optimal hyperparameters suited to their specific domain applications. The code
and complete experimental results are available on GitHub.
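The pipeline itself is not reproduced here, but the core loop of any HPO setup can be sketched as follows; the search space, the toy objective, and all names are illustrative stand-ins, not the authors' code.

```python
import itertools

def grid_search(objective, space):
    """Exhaustively evaluate every hyperparameter configuration and
    return the one with the lowest objective value."""
    names = list(space)
    best_cfg, best_score = None, float("inf")
    for values in itertools.product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        # In practice: train the TSF model with cfg, return validation loss.
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy stand-in for "train a transformer-based forecaster, report validation loss".
space = {"d_model": [64, 128, 256], "lr": [1e-4, 1e-3], "n_heads": [2, 4, 8]}
toy_objective = lambda cfg: abs(cfg["d_model"] - 128) + 1000 * cfg["lr"] + cfg["n_heads"]
best_cfg, best_score = grid_search(toy_objective, space)
```

Random or Bayesian samplers slot into the same interface; only the way configurations are drawn changes.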
|
2501.01402 | Best Transition Matrix Estimation or Best Label Noise Robustness
Classifier? Two Possible Methods to Enhance the Performance of T-revision | cs.LG | Label noise refers to incorrect labels in a dataset caused by human errors or
collection defects, which is common in real-world applications and can
significantly reduce the accuracy of models. This report explores how to
estimate noise transition matrices and construct deep learning classifiers that
are robust against label noise. In cases where the transition matrix is known,
we apply forward correction and importance reweighting methods to correct the
impact of label noise using the transition matrix. When the transition matrix
is unknown or inaccurate, we use the anchor point assumption and T-Revision
series methods to estimate or correct the noise matrix. In this study, we
further improved the T-Revision method by developing T-Revision-Alpha and
T-Revision-Softmax to enhance stability and robustness. Additionally, we
designed and implemented two baseline classifiers, a Multi-Layer Perceptron
(MLP) and ResNet-18, based on the cross-entropy loss function. We compared the
performance of these methods on predicting clean labels and estimating
transition matrices using the FashionMNIST dataset with known noise transition
matrices. For the CIFAR-10 dataset, where the noise transition matrix is
unknown, we estimated the noise matrix and evaluated the ability of the methods
to predict clean labels.
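As a concrete illustration of the forward correction step (a minimal sketch, not the report's code): the model's clean-label posterior is pushed through the noise transition matrix before the negative log-likelihood of the observed noisy label is taken.

```python
import math

def forward_corrected_nll(clean_probs, T, noisy_label):
    """Forward correction: map the clean-label posterior through the noise
    transition matrix T, where T[i][j] = P(noisy=j | clean=i), then take
    the negative log-likelihood of the observed noisy label."""
    k = len(T)
    noisy_probs = [sum(clean_probs[i] * T[i][j] for i in range(k))
                   for j in range(k)]
    return -math.log(noisy_probs[noisy_label])

# 2-class example with 20% symmetric label noise (values are illustrative).
T = [[0.8, 0.2],
     [0.2, 0.8]]
p_clean = [0.9, 0.1]  # model's posterior over clean labels
loss = forward_corrected_nll(p_clean, T, noisy_label=0)
```

Minimizing this loss over noisy data is, in expectation, equivalent to minimizing the clean-label loss when T is correct, which is why accurate transition matrix estimation matters.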
|
2501.01406 | nnY-Net: Swin-NeXt with Cross-Attention for 3D Medical Images
Segmentation | cs.CV | This paper presents a novel 3D medical image segmentation model structure
called nnY-Net. This name comes from the fact that our model adds a
cross-attention module at the bottom of the U-net structure to form a Y
structure. We integrate the advantages of the two latest SOTA models, MedNeXt
and SwinUNETR, and use Swin Transformer as the encoder and ConvNeXt as the
decoder to innovatively design the Swin-NeXt structure. Our model uses the
lowest-level feature map of the encoder as Key and Value and uses patient
features such as pathology and treatment information as Query to calculate the
attention weights in a Cross Attention module. Moreover, we simplify some pre-
and post-processing as well as data augmentation methods in 3D image
segmentation based on the dynUnet and nnU-net frameworks, and integrate our
proposed Swin-NeXt with Cross-Attention structure into them. Last, we
construct a DiceFocalCELoss to improve the training efficiency for the uneven
data convergence of voxel classification.
|
2501.01407 | Nested Attention: Semantic-aware Attention Values for Concept
Personalization | cs.CV cs.GR cs.LG | Personalizing text-to-image models to generate images of specific subjects
across diverse scenes and styles is a rapidly advancing field. Current
approaches often face challenges in maintaining a balance between identity
preservation and alignment with the input text prompt. Some methods rely on a
single textual token to represent a subject, which limits expressiveness, while
others employ richer representations but disrupt the model's prior, diminishing
prompt alignment. In this work, we introduce Nested Attention, a novel
mechanism that injects a rich and expressive image representation into the
model's existing cross-attention layers. Our key idea is to generate
query-dependent subject values, derived from nested attention layers that learn
to select relevant subject features for each region in the generated image. We
integrate these nested layers into an encoder-based personalization method, and
show that they enable high identity preservation while adhering to input text
prompts. Our approach is general and can be trained on various domains.
Additionally, its prior preservation allows us to combine multiple personalized
subjects from different domains in a single image.
|
2501.01409 | On Unifying Video Generation and Camera Pose Estimation | cs.CV cs.AI | Inspired by the emergent 3D capabilities in image generators, we explore
whether video generators similarly exhibit 3D awareness. Using
structure-from-motion (SfM) as a benchmark for 3D tasks, we investigate if
intermediate features from OpenSora, a video generation model, can support
camera pose estimation. We first examine native 3D awareness in video
generation features by routing raw intermediate outputs to SfM-prediction
modules like DUSt3R. Then, we explore the impact of fine-tuning on camera pose
estimation to enhance 3D awareness. Results indicate that while video generator
features have limited inherent 3D awareness, task-specific supervision
significantly boosts their accuracy for camera pose estimation, resulting in
competitive performance. The proposed unified model, named JOG3R, produces
camera pose estimates with competitive quality without degrading video
generation quality.
|
2501.01411 | Maximally Extendable Product Codes are Good Coboundary Expanders | cs.IT math.IT quant-ph | We investigate the coboundary expansion property of product codes called
product expansion, which plays an important role in the recent constructions of
good quantum LDPC codes and classical locally testable codes. Prior research
revealed that this property is equivalent to agreement testability and robust
testability for products of two codes of linear distance. However, for products
of more than two codes, product expansion is a strictly stronger property. In
this paper, we prove that the collection of random codes over a sufficiently
large field has good product expansion. We believe that in the case of four
codes, these ideas can be used to construct good quantum locally testable codes
in a way similar to the current constructions using only products of two codes.
|
2501.01414 | Deep Discrete Encoders: Identifiable Deep Generative Models for Rich
Data with Discrete Latent Layers | stat.ML cs.LG stat.ME | In the era of generative AI, deep generative models (DGMs) with latent
representations have gained tremendous popularity. Despite their impressive
empirical performance, the statistical properties of these models remain
underexplored. DGMs are often overparametrized, non-identifiable, and
uninterpretable black boxes, raising serious concerns when deploying them in
high-stakes applications. Motivated by this, we propose an interpretable deep
generative modeling framework for rich data types with discrete latent layers,
called Deep Discrete Encoders (DDEs). A DDE is a directed graphical model with
multiple binary latent layers. Theoretically, we propose transparent
identifiability conditions for DDEs, which imply progressively smaller sizes of
the latent layers as they go deeper. Identifiability ensures consistent
parameter estimation and inspires an interpretable design of the deep
architecture. Computationally, we propose a scalable estimation pipeline of a
layerwise nonlinear spectral initialization followed by a penalized stochastic
approximation EM algorithm. This procedure can efficiently estimate models with
exponentially many latent components. Extensive simulation studies validate our
theoretical results and demonstrate the proposed algorithms' excellent
performance. We apply DDEs to three diverse real datasets for hierarchical
topic modeling, image representation learning, response time modeling in
educational testing, and obtain interpretable findings.
|
2501.01416 | Hierarchical Alignment-enhanced Adaptive Grounding Network for
Generalized Referring Expression Comprehension | cs.CV | In this work, we address the challenging task of Generalized Referring
Expression Comprehension (GREC). Compared to the classic Referring Expression
Comprehension (REC) that focuses on single-target expressions, GREC extends the
scope to a more practical setting by further encompassing no-target and
multi-target expressions. Existing REC methods face challenges in handling the
complex cases encountered in GREC, primarily due to their fixed output and
limitations in multi-modal representations. To address these issues, we propose
a Hierarchical Alignment-enhanced Adaptive Grounding Network (HieA2G) for GREC,
which can flexibly deal with various types of referring expressions. First, a
Hierarchical Multi-modal Semantic Alignment (HMSA) module is proposed to
incorporate three levels of alignments, including word-object, phrase-object,
and text-image alignment. It enables hierarchical cross-modal interactions
across multiple levels to achieve comprehensive and robust multi-modal
understanding, greatly enhancing grounding ability for complex cases. Then, to
address the varying number of target objects in GREC, we introduce an Adaptive
Grounding Counter (AGC) to dynamically determine the number of output targets.
Additionally, an auxiliary contrastive loss is employed in AGC to enhance
object-counting ability by pulling in multi-modal features with the same
counting and pushing away those with different counting. Extensive experimental
results show that HieA2G achieves new state-of-the-art performance on the
challenging GREC task and also the other 4 tasks, including REC, Phrase
Grounding, Referring Expression Segmentation (RES), and Generalized Referring
Expression Segmentation (GRES), demonstrating the remarkable superiority and
generalizability of the proposed HieA2G.
|
2501.01420 | A Multi-task Supervised Compression Model for Split Computing | cs.CV cs.LG eess.IV | Split computing ($\neq$ split learning) is a promising approach to deploying
deep learning models on resource-constrained edge computing systems, where weak
sensor (mobile) devices are wirelessly connected to stronger edge servers
through channels with limited communication capacity. State-of-the-art work on
split computing presents methods for single tasks such as image classification,
object detection, or semantic segmentation. The application of existing methods
to multitask problems degrades model accuracy and/or significantly increases
runtime latency. In this study, we propose Ladon, the first multi-task-head
supervised compression model for multi-task split computing. Experimental
results show that the multi-task supervised compression model either
outperformed or rivaled strong lightweight baseline models in terms of
predictive performance for ILSVRC 2012, COCO 2017, and PASCAL VOC 2012 datasets
while learning compressed representations at its early layers. Furthermore, our
models reduced end-to-end latency (by up to 95.4%) and energy consumption of
mobile devices (by up to 88.2%) in multi-task split computing scenarios.
|
2501.01421 | R-SCoRe: Revisiting Scene Coordinate Regression for Robust Large-Scale
Visual Localization | cs.CV | Learning-based visual localization methods that use scene coordinate
regression (SCR) offer the advantage of smaller map sizes. However, on datasets
with complex illumination changes or image-level ambiguities, it remains a less
robust alternative to feature matching methods. This work aims to close the
gap. We introduce a covisibility graph-based global encoding learning and data
augmentation strategy, along with a depth-adjusted reprojection loss to
facilitate implicit triangulation. Additionally, we revisit the network
architecture and local feature extraction module. Our method achieves
state-of-the-art on challenging large-scale datasets without relying on network
ensembles or 3D supervision. On Aachen Day-Night, we are 10$\times$ more
accurate than previous SCR methods with similar map sizes and require at least
5$\times$ smaller map sizes than any other SCR method while still delivering
superior accuracy. Code will be available at: https://github.com/cvg/scrstudio .
|
2501.01422 | Multi-Modal Video Feature Extraction for Popularity Prediction | cs.CV cs.AI cs.LG | This work aims to predict the popularity of short videos using the videos
themselves and their related features. Popularity is measured by four key
engagement metrics: view count, like count, comment count, and share count.
This study employs video classification models with different architectures and
training methods as backbone networks to extract video modality features.
Meanwhile, the cleaned video captions are incorporated into a carefully
designed prompt framework, along with the video, as input for video-to-text
generation models, which generate detailed text-based video content
understanding. These texts are then encoded into vectors using a pre-trained
BERT model. Based on the six sets of vectors mentioned above, a neural network
is trained for each of the four prediction metrics. Moreover, the study
conducts data mining and feature engineering based on the video and tabular
data, constructing practical features such as the total frequency of hashtag
appearances, the total frequency of mention appearances, video duration, frame
count, frame rate, and total time online. Multiple machine learning models are
trained, and the most stable model, XGBoost, is selected. Finally, the
predictions from the neural network and XGBoost models are averaged to obtain
the final result.
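The hand-crafted features and the final model blend described above can be sketched as follows; the field names, regex patterns, and inputs are illustrative assumptions, not the study's actual pipeline.

```python
import re

def caption_features(caption, duration_s, frame_count):
    """Toy versions of the engineered features named in the abstract:
    hashtag/mention frequencies plus simple video statistics."""
    return {
        "hashtag_count": len(re.findall(r"#\w+", caption)),
        "mention_count": len(re.findall(r"@\w+", caption)),
        "duration_s": duration_s,
        "frame_rate": frame_count / duration_s if duration_s else 0.0,
    }

def blend(nn_pred, xgb_pred):
    """Final prediction: plain average of the two models' outputs."""
    return (nn_pred + xgb_pred) / 2

feats = caption_features("#dance #viral thanks @alice!", duration_s=10.0,
                         frame_count=300)
final = blend(nn_pred=120.0, xgb_pred=100.0)  # e.g. predicted view counts
```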
|
2501.01423 | Reconstruction vs. Generation: Taming Optimization Dilemma in Latent
Diffusion Models | cs.CV cs.LG | Latent diffusion models with Transformer architectures excel at generating
high-fidelity images. However, recent studies reveal an optimization dilemma in
this two-stage design: while increasing the per-token feature dimension in
visual tokenizers improves reconstruction quality, it requires substantially
larger diffusion models and more training iterations to achieve comparable
generation performance. Consequently, existing systems often settle for
sub-optimal solutions, either producing visual artifacts due to information
loss within tokenizers or failing to converge fully due to expensive
computation costs. We argue that this dilemma stems from the inherent
difficulty in learning unconstrained high-dimensional latent spaces. To address
this, we propose aligning the latent space with pre-trained vision foundation
models when training the visual tokenizers. Our proposed VA-VAE (Vision
foundation model Aligned Variational AutoEncoder) significantly expands the
reconstruction-generation frontier of latent diffusion models, enabling faster
convergence of Diffusion Transformers (DiT) in high-dimensional latent spaces.
To exploit the full potential of VA-VAE, we build an enhanced DiT baseline with
improved training strategies and architecture designs, termed LightningDiT. The
integrated system achieves state-of-the-art (SOTA) performance on ImageNet
256x256 generation with an FID score of 1.35 while demonstrating remarkable
training efficiency by reaching an FID score of 2.11 in just 64
epochs--representing an over 21 times convergence speedup compared to the
original DiT. Models and codes are available at:
https://github.com/hustvl/LightningDiT.
|
2501.01424 | Object-level Visual Prompts for Compositional Image Generation | cs.CV cs.AI cs.GR | We introduce a method for composing object-level visual prompts within a
text-to-image diffusion model. Our approach addresses the task of generating
semantically coherent compositions across diverse scenes and styles, similar to
the versatility and expressiveness offered by text prompts. A key challenge in
this task is to preserve the identity of the objects depicted in the input
visual prompts, while also generating diverse compositions across different
images. To address this challenge, we introduce a new KV-mixed cross-attention
mechanism, in which keys and values are learned from distinct visual
representations. The keys are derived from an encoder with a small bottleneck
for layout control, whereas the values come from a larger bottleneck encoder
that captures fine-grained appearance details. By mixing keys and values from
these complementary sources, our model preserves the identity of the visual
prompts while supporting flexible variations in object arrangement, pose, and
composition. During inference, we further propose object-level compositional
guidance to improve the method's identity preservation and layout correctness.
Results show that our technique produces diverse scene compositions that
preserve the unique characteristics of each visual prompt, expanding the
creative potential of text-to-image generation.
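A minimal, pure-Python sketch of the idea that keys and values can come from distinct representations (here, hypothetical "layout" and "appearance" encoders); this is an illustration of the mechanism's shape, not the paper's implementation, which operates inside a diffusion model's cross-attention layers.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def kv_mixed_attention(queries, keys, values):
    """Cross-attention where keys and values are drawn from different
    encodings of the visual prompt: keys steer *where* attention goes,
    values determine *what* content is injected."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# A query aligned with the first (layout) key mostly retrieves the first
# (appearance) value row.
out = kv_mixed_attention(queries=[[10.0, 0.0]],
                         keys=[[1.0, 0.0], [0.0, 1.0]],    # layout encoder
                         values=[[1.0, 0.0], [0.0, 1.0]])  # appearance encoder
```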
|
2501.01425 | Free-Form Motion Control: A Synthetic Video Generation Dataset with
Controllable Camera and Object Motions | cs.CV | Controlling the movements of dynamic objects and the camera within generated
videos is a meaningful yet challenging task. Due to the lack of datasets with
comprehensive motion annotations, existing algorithms can not simultaneously
control the motions of both camera and objects, resulting in limited
controllability over generated contents. To address this issue and facilitate
the research in this field, we introduce a Synthetic Dataset for Free-Form
Motion Control (SynFMC). The proposed SynFMC dataset includes diverse objects
and environments and covers various motion patterns according to specific
rules, simulating common and complex real-world scenarios. The complete 6D pose
information facilitates models learning to disentangle the motion effects from
objects and the camera in a video. To validate the effectiveness and
generalization of SynFMC, we further propose a method, Free-Form Motion Control
(FMC). FMC enables independent or simultaneous control of object and camera
movements, producing high-fidelity videos. Moreover, it is compatible with
various personalized text-to-image (T2I) models for different content styles.
Extensive experiments demonstrate that the proposed FMC outperforms previous
methods across multiple scenarios.
|
2501.01426 | Unifying Specialized Visual Encoders for Video Language Models | cs.CV cs.CL cs.LG | The recent advent of Large Language Models (LLMs) has ushered sophisticated
reasoning capabilities into the realm of video through Video Large Language
Models (VideoLLMs). However, VideoLLMs currently rely on a single vision
encoder for all of their visual processing, which limits the amount and type of
visual information that can be conveyed to the LLM. Our method, MERV,
Multi-Encoder Representation of Videos, instead leverages multiple frozen
visual encoders to create a unified representation of a video, providing the
VideoLLM with a comprehensive set of specialized visual knowledge.
Spatio-temporally aligning the features from each encoder allows us to tackle a
wider range of open-ended and multiple-choice video understanding questions and
outperform prior state-of-the-art works. MERV is up to 3.7% better in accuracy
than Video-LLaVA across the standard suite of video understanding benchmarks,
while also having a better Video-ChatGPT score. We also improve upon SeViLA,
the previous best on zero-shot Perception Test accuracy, by 2.2%. MERV
introduces minimal extra parameters and trains faster than equivalent
single-encoder methods while parallelizing the visual processing. Finally, we
provide qualitative evidence that MERV successfully captures domain knowledge
from each of its encoders. Our results offer promising directions in utilizing
multiple vision encoders for comprehensive video understanding.
|
2501.01427 | VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion
Control | cs.CV | Despite significant advancements in video generation, inserting a given
object into videos remains a challenging task. The difficulty lies in
preserving the appearance details of the reference object and accurately
modeling coherent motions at the same time. In this paper, we propose
VideoAnydoor, a zero-shot video object insertion framework with high-fidelity
detail preservation and precise motion control. Starting from a text-to-video
model, we utilize an ID extractor to inject the global identity and leverage a
box sequence to control the overall motion. To preserve the detailed appearance
and meanwhile support fine-grained motion control, we design a pixel warper. It
takes the reference image with arbitrary key-points and the corresponding
key-point trajectories as inputs. It warps the pixel details according to the
trajectories and fuses the warped features with the diffusion U-Net, thus
improving detail preservation and supporting users in manipulating the motion
trajectories. In addition, we propose a training strategy involving both videos
and static images with a weighted loss to enhance insertion quality.
VideoAnydoor demonstrates significant superiority over existing methods and
naturally supports various downstream applications (e.g., talking head
generation, video virtual try-on, multi-region editing) without task-specific
fine-tuning.
|
2501.01428 | GPT4Scene: Understand 3D Scenes from Videos with Vision-Language Models | cs.CV | In recent years, 2D Vision-Language Models (VLMs) have made significant
strides in image-text understanding tasks. However, their performance in 3D
spatial comprehension, which is critical for embodied intelligence, remains
limited. Recent advances have leveraged 3D point clouds and multi-view images
as inputs, yielding promising results. However, we propose exploring a purely
vision-based solution inspired by human perception, which merely relies on
visual cues for 3D spatial understanding. This paper empirically investigates
the limitations of VLMs in 3D spatial knowledge, revealing that their primary
shortcoming lies in the lack of global-local correspondence between the scene
and individual frames. To address this, we introduce GPT4Scene, a novel visual
prompting paradigm in VLM training and inference that helps build the
global-local relationship, significantly improving the 3D spatial understanding
of indoor scenes. Specifically, GPT4Scene constructs a 3D Bird's Eye View (BEV)
image from the video and marks consistent object IDs across both frames and the
BEV image. The model then inputs the concatenated BEV image and video frames
with markers. In zero-shot evaluations, GPT4Scene improves performance over
closed-source VLMs like GPT-4o. Additionally, we prepare a processed video
dataset consisting of 165K text annotations to fine-tune open-source VLMs,
achieving state-of-the-art performance on all 3D understanding tasks.
Surprisingly, after training with the GPT4Scene paradigm, VLMs consistently
improve during inference, even without visual prompting and BEV image as
explicit correspondence. It demonstrates that the proposed paradigm helps VLMs
develop an intrinsic ability to understand 3D scenes, which paves the way for a
noninvasive approach to extending pre-trained VLMs for 3D scene understanding.
|
2501.01429 | Item Association Factorization Mixed Markov Chains for Sequential
Recommendation | cs.IR | Sequential recommendation refers to recommending the next item of interest
for a specific user based on his/her historical behavior sequence up to a
certain time. While previous research has extensively examined Markov
chain-based sequential recommendation models, the majority of these studies
have focused on the user's historical behavior sequence but have paid little
attention to the overall correlation between items. This study introduces a
sequential recommendation algorithm known as Item Association Factorization
Mixed Markov Chains, which incorporates association information between items
using an item association graph, integrating it with user behavior sequence
information. Our experimental findings from the four public datasets
demonstrate that the newly introduced algorithm significantly enhances the
recommendation ranking results without substantially increasing the parameter
count. Additionally, research on tuning the prior balancing parameters
underscores the significance of incorporating item association information
across different datasets.
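One simple way to realize a mixed chain of the kind described above is a convex combination of sequence-based and association-based transition probabilities; the toy items, probabilities, and the uniform weighting below are illustrative assumptions, not the paper's exact formulation.

```python
def mixed_transition_scores(last_item, P_seq, P_assoc, lam=0.5):
    """Score next-item candidates by mixing sequence transitions with
    item-association transitions: P = lam*P_seq + (1-lam)*P_assoc."""
    return {j: lam * P_seq[last_item][j] + (1 - lam) * P_assoc[last_item][j]
            for j in P_seq[last_item]}

# Toy chains over candidate items {"b", "c"} after item "a".
P_seq = {"a": {"b": 0.7, "c": 0.3}}    # from user behavior sequences
P_assoc = {"a": {"b": 0.2, "c": 0.8}}  # from the item association graph
scores = mixed_transition_scores("a", P_seq, P_assoc, lam=0.5)
top = max(scores, key=scores.get)
```

The balancing parameter `lam` plays the role of the prior balancing parameters whose tuning the abstract discusses.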
|
2501.01430 | TERA: A Simulation Environment for Terrain Excavation Robot Autonomy | cs.RO | Developing excavation autonomy is challenging given the environments where
excavators operate, the complexity of physical interaction and the degrees of
freedom of operation of the excavator itself. Simulation is a useful tool to
build parts of the autonomy without the complexity of experimentation.
Traditional excavator simulators are geared towards high fidelity interactions
between the joints or with the terrain but do not incorporate other
challenges, such as perception, required for end-to-end autonomy. A complete
simulator should be capable of supporting real-time operation while providing
high fidelity simulation of the excavator(s), the environment, and their
interaction. In this paper we present TERA (Terrain Excavation Robot Autonomy),
a simulator geared towards autonomous excavator applications based on Unity3D
and AGX that provides the extensibility and scalability required to study full
autonomy. It provides the ability to configure the excavator and the
environment per the user requirements. We also demonstrate realistic dynamics
by incorporating a time-varying model that introduces variations in the
system's responses. The simulator is then evaluated with different scenarios
such as track deformation, velocities on different terrains, similarity of the
system with the real excavator and the overall path error to show the
capabilities of the simulation.
|
2501.01431 | CSI Compression using Channel Charting | cs.IT cs.LG eess.SP math.IT | Reaping the benefits of multi-antenna communication systems in frequency
division duplex (FDD) requires channel state information (CSI) reporting from
mobile users to the base station (BS). Over the last decades, the amount of CSI
to be collected has become very challenging owing to the dramatic increase of
the number of antennas at BSs. To mitigate the overhead associated with CSI
reporting, compressed CSI techniques have been proposed with the idea of
recovering the original CSI at the BS from its compressed version sent by the
mobile users. Channel charting is an unsupervised dimensionality reduction
method that consists in building a radio-environment map from CSIs. Such a
method can be considered in the context of the CSI compression problem, since a
chart location is, by definition, a low-dimensional representation of the CSI.
In this paper, the performance of channel charting for a task-based CSI
compression application is studied. A comparison of the proposed method against
baselines on realistic synthetic data is presented, showing promising results.
|
2501.01432 | Survey on safe robot control via learning | cs.RO cs.AI | Control systems are critical to modern technological infrastructure, spanning
industries from aerospace to healthcare. This survey explores the landscape of
safe robot learning, investigating methods that balance high-performance
control with rigorous safety constraints. By examining classical control
techniques, learning-based approaches, and embedded system design, the research
seeks to understand how robotic systems can be developed to prevent hazardous
states while maintaining optimal performance across complex operational
environments.
|
2501.01433 | Mathematical Definition and Systematization of Puzzle Rules | cs.AI math.HO | While logic puzzles have engaged individuals through problem-solving and
critical thinking, the creation of new puzzle rules has largely relied on
ad-hoc processes. Pencil puzzles, such as Slitherlink and Sudoku, represent a
prominent subset of these games, celebrated for their intellectual challenges
rooted in combinatorial logic and spatial reasoning. Despite extensive research
into solving techniques and automated problem generation, a unified framework
for systematic and scalable rule design has been lacking. Here, we introduce a
mathematical framework for defining and systematizing pencil puzzle rules. This
framework formalizes grid elements, their positional relationships, and
iterative composition operations, allowing for the incremental construction of
structures that form the basis of puzzle rules. Furthermore, we establish a
formal method to describe constraints and domains for each structure, ensuring
solvability and coherence. Applying this framework, we successfully formalized
the rules of well-known Nikoli puzzles, including Slitherlink and Sudoku,
demonstrating the formal representation of a significant portion (approximately
one-fourth) of existing puzzles. These results validate the potential of the
framework to systematize and innovate puzzle rule design, establishing a
pathway to automated rule generation. By providing a mathematical foundation
for puzzle rule creation, this framework opens avenues for computers,
potentially enhanced by AI, to design novel puzzle rules tailored to player
preferences, expanding the scope of puzzle diversity. Beyond its direct
application to pencil puzzles, this work illustrates how mathematical
frameworks can bridge recreational mathematics and algorithmic design, offering
tools for broader exploration in logic-based systems, with potential
applications in educational game design, personalized learning, and
computational creativity.
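The "structures plus constraints" view can be made concrete with a small example; the following is a sketch assuming a 4x4 Sudoku, using an all-different constraint applied to row, column, and box structures. It illustrates the flavor of the formalism, not the paper's actual definitions.

```python
def all_different(values):
    """Constraint primitive: filled cells in a structure are pairwise distinct
    (None marks an empty cell)."""
    filled = [v for v in values if v is not None]
    return len(filled) == len(set(filled))

def sudoku_ok(grid, n=4):
    """Check an n x n Sudoku grid (n a perfect square) against the
    all-different constraint over its row, column, and box structures."""
    b = int(n ** 0.5)
    rows = grid
    cols = [[grid[r][c] for r in range(n)] for c in range(n)]
    boxes = [[grid[br * b + i][bc * b + j] for i in range(b) for j in range(b)]
             for br in range(b) for bc in range(b)]
    return all(all_different(s) for s in rows + cols + boxes)

ok = sudoku_ok([[1, 2, 3, 4], [3, 4, 1, 2], [2, 1, 4, 3], [4, 3, 2, 1]])
bad = sudoku_ok([[2, 2, 3, 4], [3, 4, 1, 2], [2, 1, 4, 3], [4, 3, 2, 1]])
```

Other pencil puzzles swap in different structures (lines, loops, regions) and different constraint predicates over them.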
|
2501.01435 | Fundamental Risks in the Current Deployment of General-Purpose AI
Models: What Have We (Not) Learnt From Cybersecurity? | cs.CR cs.AI | General Purpose AI - such as Large Language Models (LLMs) - have seen rapid
deployment in a wide range of use cases. Most surprisingly, they have made
their way from plain language models, to chat-bots, all the way to an almost
``operating system''-like status that can control decisions and logic of an
application. Tool-use, Microsoft Copilot/Office integration, and OpenAI's
Altera are just a few examples of increased autonomy, data access, and
execution capabilities. These methods come with a range of cybersecurity
challenges. We highlight some of the work we have done in terms of evaluation
as well as outline future opportunities and challenges.
|
2501.01437 | On the reconstruction limits of complex networks | stat.AP cs.IT math.IT physics.data-an | Network reconstruction consists in retrieving the hidden interaction
structure of a system from observations. Many reconstruction algorithms have
been proposed, although less research has been devoted to describe their
theoretical limitations. In this work, we adopt an information-theoretic
perspective and define the reconstructability: the fraction of structural
information recoverable from data. The reconstructability depends on the true
data generating (TDG) model which is shown to set the reconstruction limit: any
algorithm can perform, on average, at best like the TDG model. We show that the
reconstructability is related to various performance measures, such as the
probability of error and the Jaccard similarity. In an empirical context where
the TDG model is unknown, we introduce the reconstruction index as an
approximation of the reconstructability. We find that performing model
selection is crucial for the validity of the reconstruction index as a proxy of
the reconstructability of empirical time series and networks.
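As a concrete instance of one of the performance measures mentioned above, the Jaccard similarity between the true and reconstructed edge sets can be computed as follows; the edge lists are toy data, and the undirected-edge convention is an assumption.

```python
def jaccard_edges(true_edges, predicted_edges):
    """Jaccard similarity |T ∩ P| / |T ∪ P| between the true and
    reconstructed edge sets, treating edges as undirected."""
    t = set(map(frozenset, true_edges))
    p = set(map(frozenset, predicted_edges))
    if not t and not p:
        return 1.0
    return len(t & p) / len(t | p)

# Two shared edges, one missed, one spurious -> 2 / 4 = 0.5.
sim = jaccard_edges([(1, 2), (2, 3), (3, 4)],
                    [(2, 1), (3, 4), (1, 4)])
```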
|
2501.01438 | Optimizing DC Servo Motor Speed Performance Using a PID Controller
Combined with a Neural Network | cs.RO | DC motors have been widely used in many industrial applications, from small
jointed robots with multiple degrees of freedom to household appliances and
transportation vehicles such as electric cars and trains. The main function of
these motors is to ensure stable positioning performance and speed for
mechanical systems based on pre-designed control methods. However, achieving
optimal speed performance for servo motors faces many challenges due to the
impact of internal and external loads, which affect output stability. To
optimize the speed performance of DC Servo motors, a control method combining
PID controllers and artificial neural networks has been proposed. Traditional
PID controllers have the advantage of a simple structure and effective control
capability in many systems, but they face difficulties when dealing with
nonlinear and uncertain changes. The neural network is integrated to adjust the
PID parameters in real time, helping the system adapt to different operating
conditions. Simulation and experimental results have demonstrated that the
proposed method significantly improves the speed tracking capability and
stability of the motor while ensuring quick response, zero steady-state error,
and eliminating overshoot. This method offers high potential for application in
servo motor control systems requiring high precision and performance.
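A minimal sketch of the adaptive-gain idea: an MIT-rule-style update stands in for the neural network that tunes the PID gains online, and a first-order motor model stands in for the plant (all parameters and rates here are illustrative, not the paper's).

```python
class AdaptivePID:
    """PID controller whose kp/ki gains are nudged online by a simple
    MIT-rule-like update, a schematic stand-in for the paper's neural
    network tuner. Gains, rates, and plant values are illustrative."""

    def __init__(self, kp=1.0, ki=0.5, kd=0.01, lr=1e-7):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lr = lr                    # adaptation rate (plays the NN's role)
        self.integral = 0.0
        self.prev_err = None            # avoid a derivative kick at t = 0

    def step(self, setpoint, measurement, dt):
        err = setpoint - measurement
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Nudge kp and ki along their error regressors while error persists.
        self.kp += self.lr * err * err
        self.ki += self.lr * err * self.integral
        self.prev_err = err
        return u


def simulate(target=100.0, steps=40000, dt=1e-3, gain=2.0, tau=0.5):
    """First-order DC-motor speed model: tau * dw/dt = -w + gain * u."""
    pid, w = AdaptivePID(), 0.0
    for _ in range(steps):
        u = pid.step(target, w, dt)
        w += dt * (-w + gain * u) / tau
    return w, pid
```

Because the integral term drives steady-state error to zero while the adaptation raises the gains only while error persists, the toy loop settles on the target speed, mirroring the zero steady-state-error behavior claimed above.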
|
2501.01439 | Probabilistic Mission Design in Neuro-Symbolic Systems | cs.AI cs.RO | Advanced Air Mobility (AAM) is a growing field that demands accurate modeling
of legal concepts and restrictions in navigating intelligent vehicles. In
addition, any implementation of AAM needs to face the challenges posed by
inherently dynamic and uncertain human-inhabited spaces robustly. Nevertheless,
the employment of Unmanned Aircraft Systems (UAS) beyond visual line of sight
(BVLOS) is an enticing task that promises to significantly enhance today's
logistics and emergency response capabilities. To tackle these challenges, we
present a probabilistic and neuro-symbolic architecture to encode legal
frameworks and expert knowledge over uncertain spatial relations and noisy
perception in an interpretable and adaptable fashion. More specifically, we
demonstrate Probabilistic Mission Design (ProMis), a system architecture that
links geospatial and sensory data with declarative, Hybrid Probabilistic Logic
Programs (HPLP) to reason over the agent's state space and its legality. As a
result, ProMis generates Probabilistic Mission Landscapes (PML), which quantify
the agent's belief that a set of mission conditions is satisfied across its
navigation space. Extending prior work on ProMis' reasoning capabilities and
computational characteristics, we show its integration with potent machine
learning models such as Large Language Models (LLM) and Transformer-based
vision models. Hence, our experiments underpin the application of ProMis with
multi-modal input data and how our method applies to many important AAM
scenarios.
|
2501.01441 | Explanatory Debiasing: Involving Domain Experts in the Data Generation
Process to Mitigate Representation Bias in AI Systems | cs.HC cs.AI | Representation bias is one of the most common types of biases in artificial
intelligence (AI) systems, causing AI models to perform poorly on
underrepresented data segments. Although AI practitioners use various methods
to reduce representation bias, their effectiveness is often constrained by
insufficient domain knowledge in the debiasing process. To address this gap,
this paper introduces a set of generic design guidelines for effectively
involving domain experts in representation debiasing. We instantiated our
proposed guidelines in a healthcare-focused application and evaluated them
through a comprehensive mixed-methods user study with 35 healthcare experts.
Our findings show that involving domain experts can reduce representation bias
without compromising model accuracy. Based on our findings, we also offer
recommendations for developers to build robust debiasing systems guided by our
generic design guidelines, ensuring more effective inclusion of domain experts
in the debiasing process.
|
2501.01443 | Feedback Design and Implementation for Integrated Posture Manipulation
and Thrust Vectoring | cs.RO | This MS thesis outlines my contributions to the closed-loop control and
system integration of two robotic platforms: 1) Aerobat, a flapping wing robot
stabilized by air jets, and 2) Harpy, a bipedal robot equipped with dual
thrusters. Both systems share a common theme of the integration of posture
manipulation and thrust vectoring to achieve stability and controlled movement.
For Aerobat, I developed the software and control architecture that enabled its
first untethered flights. The control system combines flapping wing dynamics
with multiple air jet stabilization to maintain roll, pitch and yaw stability.
These results were published in the IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS). For Harpy, I implemented a closed-loop
control framework that incorporates active thruster-assisted frontal-dynamics
stabilization. My work led to preliminary untethered dynamic walking. This
approach demonstrates how thrust-assisted stability can enhance locomotion in
legged robots, a direction that has not been explored before.
|
2501.01447 | Analyzing Country-Level Vaccination Rates and Determinants of Practical
Capacity to Administer COVID-19 Vaccines | econ.GN cs.LG econ.EM q-fin.EC stat.AP | The COVID-19 vaccine development, manufacturing, transportation, and
administration proved an extreme logistics operation of global magnitude.
Global vaccination levels, however, remain a key concern in preventing the
emergence of new strains and minimizing the impact of the pandemic's disruption
of daily life. In this paper, country-level vaccination rates are analyzed
through a queuing framework to extract service rates that represent the
practical capacity of a country to administer vaccines. These rates are further
characterized through regression and interpretable machine learning methods
with country-level demographic, governmental, and socio-economic variates.
Model results show that participation in multi-governmental collaborations such
as COVAX may improve the ability to vaccinate. Similarly, improved
transportation and accessibility variates such as roads per area for low-income
countries and rail lines per area for high-income countries can improve rates.
It was also found that for low-income countries specifically, improvements in
basic and health infrastructure (as measured through spending on healthcare,
number of doctors and hospital beds per 100k, population percent with access to
electricity, life expectancy, and vehicles per 1000 people) resulted in higher
vaccination rates. Of the high-income countries, those with larger 65-plus
populations struggled to vaccinate at high rates, indicating potential
accessibility issues for the elderly. This study finds that improving basic and
health infrastructure, focusing on accessibility in the last mile, particularly
for the elderly, and fostering global partnerships can improve logistical
operations of such a scale. Such structural impediments and inequities in
global health care must be addressed in preparation for future global public
health crises.
|
2501.01449 | LS-GAN: Human Motion Synthesis with Latent-space GANs | cs.CV cs.AI | Human motion synthesis conditioned on textual input has gained significant
attention in recent years due to its potential applications in various domains
such as gaming, film production, and virtual reality. Conditioned motion
synthesis takes a text input and outputs a 3D motion corresponding to the text.
While previous works have explored motion synthesis using raw motion data and
latent space representations with diffusion models, these approaches often
suffer from high training and inference times. In this paper, we introduce a
novel framework that utilizes Generative Adversarial Networks (GANs) in the
latent space to enable faster training and inference while achieving results
comparable to those of the state-of-the-art diffusion methods. We perform
experiments on the HumanML3D, HumanAct12 benchmarks and demonstrate that a
remarkably simple GAN in the latent space achieves an FID of 0.482 with more
than a 91% reduction in FLOPs compared to the latent diffusion model. Our work opens
up new possibilities for efficient and high-quality motion synthesis using
latent space GANs.
|
2501.01450 | Real-Time Computational Visual Aberration Correcting Display Through
High-Contrast Inverse Blurring | eess.IV cs.CV | This paper presents a framework for developing a live vision-correcting
display (VCD) to address refractive visual aberrations without the need for
traditional vision correction devices like glasses or contact lenses,
particularly in scenarios where wearing them may be inconvenient. We achieve
this correction through deconvolution of the displayed image using a point
spread function (PSF) associated with the viewer's eye. We address ringing
artefacts using a masking technique applied to the prefiltered image. We also
enhance the display's contrast and reduce color distortion by operating in the
YUV/YCbCr color space, where deconvolution is performed solely on the luma
(brightness) channel. Finally, we introduce a technique to calculate a
real-time PSF that adapts based on the viewer's spherical coordinates relative
to the screen. This ensures that the PSF remains accurate and undistorted even
when the viewer observes the display from an angle relative to the screen
normal, thereby providing consistent visual correction regardless of the
viewing angle. The results of our display demonstrate significant improvements
in visual clarity, achieving a structural similarity index (SSIM) of 83.04%,
highlighting the effectiveness of our approach.
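The core "inverse blurring" step can be sketched with generic Wiener deconvolution applied to a single (luma) channel; the regularization constant `k` is a stand-in for whatever ringing control the paper uses, not its actual prefilter.

```python
import numpy as np

def wiener_deconvolve(channel, psf, k=1e-6):
    """Frequency-domain inverse blurring of one image channel (e.g., the
    luma plane). Wiener regularization k damps frequencies where the PSF
    response is weak; a generic sketch, not the paper's prefilter."""
    H = np.fft.fft2(psf, s=channel.shape)        # PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + k)        # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * G))
```

Applying the filter only to the luma plane, as the abstract describes, keeps chroma untouched and so limits color distortion.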
|
2501.01451 | Human-AI Teaming Using Large Language Models: Boosting Brain-Computer
Interfacing (BCI) and Brain Research | cs.HC cs.AI | Recently, there is an increasing interest in using artificial intelligence
(AI) to automate aspects of the research process, or even autonomously conduct
the full research cycle from idea generation, over data analysis, to composing
and evaluation of scientific manuscripts. Examples of working AI scientist
systems have been demonstrated for computer science tasks and running molecular
biology labs. While some approaches aim for full autonomy of the scientific AI,
others rather aim for leveraging human-AI teaming. Here, we address how to
adapt such approaches for boosting Brain-Computer Interface (BCI) development,
as well as brain research and neuroscience at large. We argue that at this
time, a strong emphasis on human-AI teaming, in contrast to a fully autonomous
AI BCI researcher, will be the most promising way forward. We introduce the
collaborative workspaces concept for human-AI teaming based on a set of
Janusian design principles, looking both ways, to the human as well as to the
AI side. Based on these principles, we present ChatBCI, a Python-based toolbox
for enabling human-AI collaboration based on interaction with Large Language
Models (LLMs), designed for BCI research and development projects. We show how
ChatBCI was successfully used in a concrete BCI project on advancing motor
imagery decoding from EEG signals. Our approach can be straightforwardly
extended to broad neurotechnological and neuroscientific topics, and may by
design facilitate human expert knowledge transfer to scientific AI systems in
general.
|
2501.01453 | Geometry Matters: Benchmarking Scientific ML Approaches for Flow
Prediction around Complex Geometries | cs.LG physics.flu-dyn | Rapid yet accurate simulations of fluid dynamics around complex geometries are
critical in a variety of engineering and scientific applications, including
aerodynamics and biomedical flows. However, while scientific machine learning
(SciML) has shown promise, most studies are constrained to simple geometries,
leaving complex, real-world scenarios underexplored. This study addresses this
gap by benchmarking diverse SciML models, including neural operators and vision
transformer-based foundation models, for fluid flow prediction over intricate
geometries. Using a high-fidelity dataset of steady-state flows across various
geometries, we evaluate the impact of geometric representations -- Signed
Distance Fields (SDF) and binary masks -- on model accuracy, scalability, and
generalization. Central to this effort is the introduction of a novel, unified
scoring framework that integrates metrics for global accuracy, boundary layer
fidelity, and physical consistency to enable a robust, comparative evaluation
of model performance. Our findings demonstrate that foundation models
significantly outperform neural operators, particularly in data-limited
scenarios, and that SDF representations yield superior results with sufficient
training data. Despite these advancements, all models struggle with
out-of-distribution generalization, highlighting a critical challenge for
future SciML applications. By advancing both evaluation methodologies and
modeling capabilities, this work paves the way for robust and scalable ML
solutions for fluid dynamics across complex geometries.
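The two geometric representations under comparison can be illustrated for a toy circular obstacle (generic code, not the benchmark's implementation).

```python
import numpy as np

def circle_sdf(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed Distance Field of a circular obstacle: negative inside the
    solid, zero on the boundary, positive in the fluid. An illustrative
    example of the SDF inputs the study compares against binary masks."""
    return np.hypot(x - cx, y - cy) - r

def circle_mask(x, y, cx=0.0, cy=0.0, r=1.0):
    """Binary-mask counterpart: 1 inside the solid, 0 in the fluid."""
    return (np.hypot(x - cx, y - cy) <= r).astype(float)
```

Unlike the mask, the SDF carries distance-to-wall information everywhere in the domain, which is one plausible reason it helps models resolve boundary-layer behavior given enough training data.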
|
2501.01454 | A Fourfold Pathogen Reference Ontology Suite | q-bio.OT cs.AI cs.LO | Infectious diseases remain a critical global health challenge, and the
integration of standardized ontologies plays a vital role in managing related
data. The Infectious Disease Ontology (IDO) and its extensions, such as the
Coronavirus Infectious Disease Ontology (CIDO), are essential for organizing
and disseminating information related to infectious diseases. The COVID-19
pandemic highlighted the need for updating IDO and its virus-specific
extensions. There is an additional need to update IDO extensions specific to
bacterial, fungal, and parasitic infectious diseases. We adopt the "hub and
spoke" methodology to generate pathogen-specific extensions of IDO: Virus
Infectious Disease Ontology (VIDO), Bacteria Infectious Disease Ontology
(BIDO), Mycosis Infectious Disease Ontology (MIDO), and Parasite Infectious
Disease Ontology (PIDO). The creation of pathogen-specific reference ontologies
advances modularization and reusability of infectious disease data within the
IDO ecosystem. Future work will focus on further refining these ontologies,
creating new extensions, and developing application ontologies based on them,
in line with ongoing efforts to standardize biological and biomedical
terminologies for improved data sharing and analysis.
|
2501.01456 | SS-CTML: Self-Supervised Cross-Task Mutual Learning for CT Image
Reconstruction | eess.IV cs.CV cs.LG | Supervised deep-learning (SDL) techniques with paired training datasets have
been widely studied for X-ray computed tomography (CT) image reconstruction.
However, due to the difficulties of obtaining paired training datasets in
clinical routine, SDL methods are still far from common use in clinical
practice. In recent years, self-supervised deep-learning (SSDL) techniques
have shown great potential for the studies of CT image reconstruction. In this
work, we propose a self-supervised cross-task mutual learning (SS-CTML)
framework for CT image reconstruction. Specifically, sparse-view and
limited-view sinograms are first extracted from a full-view scanned sinogram,
which results in three individual reconstruction tasks: full-view CT (FVCT),
sparse-view CT (SVCT), and limited-view CT (LVCT) reconstruction. Then, three
neural networks are constructed, one for each task. Considering that the
ultimate goal of all three tasks is to reconstruct high-quality CT images, we
construct a set of cross-task mutual learning objectives so that the three
networks can be optimized in a self-supervised manner by learning from each
other. Clinical datasets are
adopted to evaluate the effectiveness of the proposed framework. Experimental
results demonstrate that the SS-CTML framework can obtain promising CT image
reconstruction performance in terms of both quantitative and qualitative
measurements.
|
2501.01457 | Reinforcing Thinking through Reasoning-Enhanced Reward Models | cs.LG cs.AI cs.CL | Large Language Models (LLMs) exhibit great potential in complex multi-step
reasoning through inference-time thinking but still struggle with deciding when
to stop thinking due to limited self-awareness about their knowledge
boundaries. While human preference alignment has shown extraordinary promise,
its expensive labeling cost hinders adherence to scaling laws. Language model
self-critique, an alternative to using human-labeled reasoning data, is
questioned for its inherent biases. This work addresses these challenges
by distilling the LLM's own reasoning processes into synthetic behavioral data,
eliminating the need for manual labeling of intermediate steps. Building on
this concept, we propose Distillation-Reinforcement-Reasoning (DRR), a
three-step framework that leverages the LLM's inherent behaviors as external
feedback by first generating behavioral data using the Reasoner (LLM) to
reflect its reasoning capabilities, then training a lightweight discriminative
reward model (DM) on behavioral data, and finally deploying the DM at inference
time to assist the Reasoner's decision-making. Experiments on multiple
benchmarks show that the DRR framework outperforms self-critique approaches
without relying on additional complex data annotation. Benefiting from
lightweight design, ease of replication, and adaptability, DRR is applicable to
a wide range of LLM-centric tasks.
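The inference-time role of the discriminative reward model (DM) can be sketched as a score-and-retry loop; all function names and signatures below are illustrative assumptions, not the DRR implementation.

```python
def drr_inference(reasoner, reward_model, prompt, max_rounds=3, tau=0.5):
    """Schematic DM-assisted decision loop: the Reasoner proposes a chain,
    the reward model scores it, and the loop stops once a candidate clears
    the acceptance threshold tau (otherwise the best-scoring one is kept).
    Hypothetical names; an illustrative reading of the abstract."""
    best, best_score = None, float("-inf")
    for _ in range(max_rounds):
        candidate = reasoner(prompt)
        score = reward_model(prompt, candidate)
        if score > best_score:
            best, best_score = candidate, score
        if score >= tau:                 # DM signals "good enough: stop thinking"
            break
    return best
```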
|
2501.01458 | GAN-TAT: A Novel Framework Using Protein Interaction Networks in
Druggable Gene Identification | cs.LG cs.AI q-bio.QM | Identifying druggable genes is essential for developing effective
pharmaceuticals. With the availability of extensive, high-quality data,
computational methods have become a significant asset. Protein Interaction
Network (PIN) is valuable but challenging to implement due to its high
dimensionality and sparsity. Previous methods relied on indirect integration,
leading to resolution loss. This study proposes GAN-TAT, a framework utilizing
an advanced graph embedding technology, ImGAGN, to directly integrate PIN for
druggable gene inference work. Tested on three Pharos datasets, GAN-TAT
achieved the highest AUC-ROC score of 0.951 on Tclin. Further evaluation shows
that GAN-TAT's predictions are supported by clinical evidence, highlighting its
potential practical applications in pharmacogenomics. This research represents
a methodological advance in the direct utilization of PINs, opening up new
solutions for drug target development. The source code of GAN-TAT
is available at (https://github.com/george-yuanji-wang/GAN-TAT).
|
2501.01460 | GDSR: Global-Detail Integration through Dual-Branch Network with Wavelet
Losses for Remote Sensing Image Super-Resolution | eess.IV cs.CV cs.LG | In recent years, deep neural networks, including Convolutional Neural
Networks, Transformers, and State Space Models, have achieved significant
progress in Remote Sensing Image (RSI) Super-Resolution (SR). However, existing
SR methods typically overlook the complementary relationship between global and
local dependencies. These methods either focus on capturing local information
or prioritize global information, which results in models that are unable to
effectively capture both global and local features simultaneously. Moreover,
their computational cost becomes prohibitive when applied to large-scale RSIs.
To address these challenges, we introduce the novel application of Receptance
Weighted Key Value (RWKV) to RSI-SR, which captures long-range dependencies
with linear complexity. To simultaneously model global and local features, we
propose the Global-Detail dual-branch structure, GDSR, which performs SR
reconstruction by paralleling RWKV and convolutional operations to handle
large-scale RSIs. Furthermore, we introduce the Global-Detail Reconstruction
Module (GDRM) as an intermediary between the two branches to bridge their
complementary roles. In addition, we propose Wavelet Loss, a loss function that
effectively captures high-frequency detail information in images, thereby
enhancing the visual quality of SR, particularly in terms of detail
reconstruction. Extensive experiments on several benchmarks, including AID,
AID_CDM, RSSRD-QH, and RSSRD-QH_CDM, demonstrate that GDSR outperforms the
state-of-the-art Transformer-based method HAT by an average of 0.05 dB in PSNR,
while using only 63% of its parameters and 51% of its FLOPs, achieving an
inference speed 2.9 times faster. Furthermore, the Wavelet Loss shows excellent
generalization across various architectures, providing a novel perspective for
RSI-SR enhancement.
|
2501.01462 | Pan-infection Foundation Framework Enables Multiple Pathogen Prediction | cs.LG cs.AI q-bio.GN | Host-response-based diagnostics can improve the accuracy of diagnosing
bacterial and viral infections, thereby reducing inappropriate antibiotic
prescriptions. However, the existing cohorts with limited sample size and
coarse infection types are unable to support the exploration of an accurate
and generalizable diagnostic model. Here, we curate the largest infection
host-response transcriptome data, including 11,247 samples across 89 blood
transcriptome datasets from 13 countries and 21 platforms. We build a
diagnostic model for pathogen prediction starting from a pan-infection
foundation model (AUC = 0.97) trained on the pan-infection dataset. Then, we utilize
knowledge distillation to efficiently transfer the insights from this "teacher"
model to four lightweight pathogen "student" models, i.e., staphylococcal
infection (AUC = 0.99), streptococcal infection (AUC = 0.94), HIV infection
(AUC = 0.93), and RSV infection (AUC = 0.94), as well as a sepsis "student"
model (AUC = 0.99). The proposed knowledge distillation framework not only
facilitates the diagnosis of pathogens using pan-infection data, but also
enables an across-disease study from pan-infection to sepsis. Moreover, the
framework enables high-degree lightweight design of diagnostic models, which is
expected to be adaptively deployed in clinical settings.
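The teacher-student transfer can be illustrated with a generic Hinton-style distillation objective, a sketch of the technique named in the abstract rather than the authors' exact loss.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation objective: a temperature-softened KL
    term transfers the teacher's class-probability structure, blended with
    ordinary cross-entropy on the hard labels. Illustrative only."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * T * T * kl + (1.0 - alpha) * ce))
```

A student that matches the teacher drives the KL term to zero, which is why lightweight pathogen-specific students can inherit much of the pan-infection teacher's accuracy.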
|
2501.01463 | Goal Recognition using Actor-Critic Optimization | cs.LG cs.AI cs.MA | Goal Recognition aims to infer an agent's goal from a sequence of
observations. Existing approaches often rely on manually engineered domains and
discrete representations. Deep Recognition using Actor-Critic Optimization
(DRACO) is a novel approach based on deep reinforcement learning that overcomes
these limitations by providing two key contributions. First, it is the first
goal recognition algorithm that learns a set of policy networks from
unstructured data and uses them for inference. Second, DRACO introduces new
metrics for assessing goal hypotheses through continuous policy
representations. DRACO achieves state-of-the-art performance for goal
recognition in discrete settings while not using the structured inputs used by
existing approaches. Moreover, it outperforms these approaches in more
challenging, continuous settings at substantially reduced costs in both
computing and memory. Together, these results showcase the robustness of the
new algorithm, bridging traditional goal recognition and deep reinforcement
learning.
|
2501.01464 | Estimation of 3T MR images from 1.5T images regularized with Physics
based Constraint | eess.IV cs.CV cs.LG physics.med-ph | Limited accessibility to high field MRI scanners (such as 7T, 11T) has
motivated the development of post-processing methods to improve low field
images. Several existing post-processing methods have shown the feasibility to
improve 3T images to produce 7T-like images [3,18]. It has been observed that
improving lower-field (LF, <=1.5T) images comes with additional challenges due
to their poor image quality: the function mapping 1.5T to higher-field (HF, 3T)
images is more complex than the function relating 3T and 7T images [10]. Except
for [10], no method has addressed improving <=1.5T MRI images. Further, most of
the existing methods [3,18], including [10], require example images and often
rely on pixel-to-pixel correspondences between LF and HF images, which are
usually inaccurate for <=1.5T images. The focus of this paper is to develop an
unsupervised framework for quality improvement of 1.5T images that avoids the
expensive requirements of example images and associated image registration. The
LF and HF images are assumed to be related by a linear
transformation (LT). The unknown HF image and unknown LT are estimated in
alternate minimization framework. Further, a physics based constraint is
proposed that provides an additional non-linear function relating LF and HF
images in order to achieve the desired high contrast in estimated HF image. The
experimental results demonstrate that the proposed approach provides processed
1.5T images, i.e., estimated 3T-like images with improved image quality, and
compares favorably with existing methods addressing similar problems. The
improvement in image quality is also shown to provide better tissue
segmentation and volume quantification as compared to scanner acquired 1.5T
images.
|
2501.01465 | Tech Report: Divide and Conquer 3D Real-Time Reconstruction for Improved
IGS | eess.IV cs.CV | Tracking surgical modifications based on endoscopic videos is technically
feasible and of great clinical advantages; however, it still remains
challenging. This report presents a modular pipeline to divide and conquer the
clinical challenges in the process. The pipeline integrates frame selection,
depth estimation, and 3D reconstruction components, allowing for flexibility
and adaptability in incorporating new methods. Recent advancements, including
the integration of Depth-Anything V2 and EndoDAC for depth estimation, as well
as improvements in the Iterative Closest Point (ICP) alignment process, are
detailed. Experiments conducted on the Hamlyn dataset demonstrate the
effectiveness of the integrated methods. System capability and limitations are
both discussed.
|
2501.01470 | Balance-aware Sequence Sampling Makes Multi-modal Learning Better | cs.LG cs.AI | To address the modality imbalance caused by data heterogeneity, existing
multi-modal learning (MML) approaches primarily focus on balancing this
difference from the perspective of optimization objectives. However, almost all
existing methods ignore the impact of sample sequences, i.e., an inappropriate
training order tends to trigger learning bias in the model, further
exacerbating modality imbalance. In this paper, we propose Balance-aware
Sequence Sampling (BSS) to enhance the robustness of MML. Specifically, we
first define a multi-perspective measurer to evaluate the balance degree of
each sample. Via the evaluation, we employ a heuristic scheduler based on
curriculum learning (CL) that incrementally provides training subsets,
progressing from balanced to imbalanced samples to rebalance MML. Moreover,
considering that sample balance may evolve as the model capability increases,
we propose a learning-based probabilistic sampling method to dynamically update
the training sequences at the epoch level, further improving MML performance.
Extensive experiments on widely used datasets demonstrate the superiority of
our method compared with state-of-the-art (SOTA) MML approaches.
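The heuristic scheduler can be sketched as follows, assuming a per-sample balance score is available; the function names and the staging rule are illustrative, not the BSS implementation.

```python
def curriculum_subsets(samples, balance_score, n_stages=3):
    """Curriculum-style scheduler in the spirit of the abstract: order
    samples from most to least balanced, then release growing training
    subsets stage by stage (balanced samples first). Illustrative only."""
    ordered = sorted(samples, key=balance_score, reverse=True)
    for stage in range(1, n_stages + 1):
        # Each stage exposes a prefix of the balance-ordered sample list.
        yield ordered[: max(1, len(ordered) * stage // n_stages)]
```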
|
2501.01472 | Augmented Contrastive Clustering with Uncertainty-Aware Prototyping for
Time Series Test Time Adaptation | cs.LG cs.AI | Test-time adaptation (TTA) aims to adapt pre-trained deep neural networks using
solely online unlabelled test data during inference. Although TTA has shown
promise in visual applications, its potential in time series contexts remains
largely unexplored. Existing TTA methods, originally designed for visual tasks,
may not effectively handle the complex temporal dynamics of real-world time
series data, resulting in suboptimal adaptation performance. To address this
gap, we propose Augmented Contrastive Clustering with Uncertainty-aware
Prototyping (ACCUP), a straightforward yet effective TTA method for time series
data. Initially, our approach employs an augmentation ensemble on the time series
data to capture diverse temporal information and variations, incorporating
uncertainty-aware prototypes to distill essential characteristics.
Additionally, we introduce an entropy comparison scheme to selectively acquire
more confident predictions, enhancing the reliability of pseudo labels.
Furthermore, we utilize augmented contrastive clustering to enhance feature
discriminability and mitigate error accumulation from noisy pseudo labels,
promoting cohesive clustering within the same class while facilitating clear
separation between different classes. Extensive experiments conducted on three
real-world time series datasets and an additional visual dataset demonstrate
the effectiveness and generalization potential of the proposed method,
advancing the underexplored realm of TTA for time series data.
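The entropy comparison scheme admits a short generic rendering: of two candidate predictions per sample, keep the lower-entropy (more confident) one as the pseudo label. Argument names are ours, not ACCUP's.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of each probability row (natural log)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def entropy_select(probs_a, probs_b, preds_a, preds_b):
    """Per-sample entropy comparison: between two candidate predictions
    (e.g., from the original and an augmented view), keep the one whose
    probability vector has lower entropy. A generic sketch."""
    keep_a = entropy(probs_a) <= entropy(probs_b)
    return np.where(keep_a, preds_a, preds_b)
```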
|
2501.01473 | Unraveling Indirect In-Context Learning Using Influence Functions | cs.LG cs.AI | This work introduces a novel paradigm for generalized In-Context Learning
(ICL), termed Indirect In-Context Learning. In Indirect ICL, we explore
demonstration selection strategies tailored for two distinct real-world
scenarios: Mixture of Tasks and Noisy Demonstrations. We systematically
evaluate the effectiveness of Influence Functions (IFs) as a selection tool for
these settings, highlighting the potential for IFs to better capture the
informativeness of examples within the demonstration pool. For the Mixture of
Tasks setting, demonstrations are drawn from 28 diverse tasks, including MMLU,
BigBench, StrategyQA, and CommonsenseQA. We demonstrate that combining
BertScore-Recall (BSR) with an IF surrogate model can significantly improve
performance, leading to average absolute accuracy gains of 0.37\% and 1.45\%
for 3-shot and 5-shot setups when compared to traditional ICL metrics. In the
Noisy Demonstrations setting, we examine scenarios where demonstrations might
be mislabeled. Our experiments show that reweighting traditional ICL selectors
(BSR and Cosine Similarity) with IF-based selectors boosts accuracy by an
average of 2.90\% for Cosine Similarity and 2.94\% for BSR on noisy GLUE
benchmarks. In sum, we propose a robust framework for demonstration selection
that generalizes beyond traditional ICL, offering valuable insights into the
role of IFs for Indirect ICL.
|
2501.01477 | A Survey of Deep Learning Methods in Protein Bioinformatics and its
Impact on Protein Design | q-bio.BM cs.AI | Proteins are sequences of amino acids that serve as the basic building blocks
of living organisms. Despite rapidly growing databases documenting structural
and functional information for various protein sequences, our understanding of
proteins remains limited because of the large possible sequence space and the
complex inter- and intra-molecular forces. Deep learning, which is
characterized by its ability to learn relevant features directly from large
datasets, has demonstrated remarkable performance in fields such as computer
vision and natural language processing. It has also been increasingly applied
in recent years to the data-rich domain of protein sequences with great
success, most notably with AlphaFold2's breakout performance in protein
structure prediction. The performance improvements achieved by deep learning
unlock new possibilities in the field of protein bioinformatics, including
protein design, one of the most difficult but useful tasks. In this paper, we
broadly categorize problems in protein bioinformatics into three main
categories: 1) structural prediction, 2) functional prediction, and 3) protein
design, and review the progress achieved from using deep learning methodologies
in each of them. We expand on the main challenges of the protein design problem
and highlight how advances in structural and functional prediction have
directly contributed to design tasks. Finally, we conclude by identifying
important topics and future research directions.
|
2501.01478 | Enhancing Reasoning through Process Supervision with Monte Carlo Tree
Search | cs.AI cs.CL cs.LG | Large language models (LLMs) have demonstrated their remarkable capacity
across a variety of tasks. However, reasoning remains a challenge for LLMs. To
improve LLMs' reasoning ability, process supervision has proven to be better
than outcome supervision. In this work, we study using Monte Carlo Tree Search
(MCTS) to generate process supervision data with LLMs themselves for training
them. We sample reasoning steps with an LLM and assign each step a score that
captures its "relative correctness," and the LLM is then trained by minimizing
weighted log-likelihood of generating the reasoning steps. This
generate-then-train process is repeated iteratively until convergence. Our
experimental results demonstrate that the proposed methods considerably improve
the performance of LLMs on two mathematical reasoning datasets. Furthermore,
models trained on one dataset also exhibit improved performance on the other,
showing the transferability of the enhanced reasoning ability.
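The weighted log-likelihood objective described in this abstract can be sketched as follows; the function name and the way per-step scores are supplied are assumptions for illustration, not the authors' code:

```python
def weighted_step_loss(step_log_probs, step_scores):
    """Weighted negative log-likelihood over reasoning steps.

    step_log_probs: log-likelihood of each sampled reasoning step
    under the current model.
    step_scores: per-step "relative correctness" weights, e.g.
    estimated from MCTS rollouts (hypothetical interface).
    """
    assert len(step_log_probs) == len(step_scores)
    # Steps judged more correct contribute more to the training signal.
    return -sum(w * lp for lp, w in zip(step_log_probs, step_scores))

# Toy example: two steps, the second judged half as correct.
loss = weighted_step_loss([-0.1, -2.0], [1.0, 0.5])
```

In practice the log-probabilities would come from the LLM itself and the loss would be minimized with a gradient-based optimizer; this sketch only shows the shape of the objective.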
|
2501.01480 | CORAL: Concept Drift Representation Learning for Co-evolving Time-series | cs.LG cs.AI | In the realm of time series analysis, tackling the phenomenon of concept
drift poses a significant challenge. Concept drift -- characterized by the
evolving statistical properties of time series data -- affects the reliability
and accuracy of conventional analysis models. This is particularly evident in
co-evolving scenarios where interactions among variables are crucial. This
paper presents CORAL, a simple yet effective method that models time series as
an evolving ecosystem to learn representations of concept drift. CORAL employs
a kernel-induced self-representation learning to generate a representation
matrix, encapsulating the inherent dynamics of co-evolving time series. This
matrix serves as a key tool for identification and adaptation to concept drift
by observing its temporal variations. Furthermore, CORAL effectively identifies
prevailing patterns and offers insights into emerging trends through pattern
evolution analysis. Our empirical evaluation of CORAL across various datasets
demonstrates its effectiveness in handling the complexities of concept drift.
This approach introduces a novel perspective in the theoretical domain of
co-evolving time series analysis, enhancing adaptability and accuracy in the
face of dynamic data environments, and can be easily integrated into most deep
learning backbones.
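Kernel-induced self-representation can be sketched in closed form; the RBF kernel and ridge penalty below are generic choices for illustration, not necessarily CORAL's exact formulation:

```python
import numpy as np

def self_representation(X, gamma=1.0, lam=0.1):
    """Solve min_C ||phi(X) - phi(X) C||_F^2 + lam ||C||_F^2 in kernel form.

    X: (n_series, n_timesteps) window of co-evolving time series.
    Returns the representation matrix C = (K + lam*I)^{-1} K,
    where K is an RBF kernel over the series.
    """
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    return np.linalg.solve(K + lam * np.eye(len(X)), K)

# Drift can then be flagged by tracking ||C_t - C_{t-1}||_F across windows.
C = self_representation(np.random.default_rng(0).normal(size=(5, 20)))
```

The representation matrix encodes how each series is expressed in terms of the others, so its temporal variation is a natural drift signal.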
|
2501.01481 | Unleashing Correlation and Continuity for Hyperspectral Reconstruction
from RGB Images | eess.IV cs.CV | Reconstructing Hyperspectral Images (HSI) from RGB images can yield high
spatial resolution HSI at a lower cost, demonstrating significant application
potential. This paper reveals that local correlation and global continuity of
the spectral characteristics are crucial for HSI reconstruction tasks.
Therefore, we fully explore these inter-spectral relationships and propose a
Correlation and Continuity Network (CCNet) for HSI reconstruction from RGB
images. For local spectral correlation, we introduce the Group-wise
Spectral Correlation Modeling (GrSCM) module, which efficiently establishes
spectral band similarity within a localized range. For global spectral
continuity, we design the Neighborhood-wise Spectral Continuity Modeling (NeSCM)
module, which employs memory units to recursively model the progressive
variation characteristics at the global level. In order to explore the inherent
complementarity of these two modules, we design the Patch-wise Adaptive Fusion
(PAF) module to efficiently integrate global continuity features into the
spectral features in a patch-wise adaptive manner. These innovations enhance
the quality of reconstructed HSI. We perform comprehensive comparison and
ablation experiments on the mainstream datasets NTIRE2022 and NTIRE2020 for the
spectral reconstruction task. Compared to the current advanced spectral
reconstruction algorithms, our designed algorithm achieves State-Of-The-Art
(SOTA) performance.
|
2501.01482 | An unsupervised method for MRI recovery: Deep image prior with
structured sparsity | eess.IV cs.CV cs.LG eess.SP | Objective: To propose and validate an unsupervised MRI reconstruction method
that does not require fully sampled k-space data. Materials and Methods: The
proposed method, deep image prior with structured sparsity (DISCUS), extends
the deep image prior (DIP) by introducing group sparsity to frame-specific code
vectors, enabling the discovery of a low-dimensional manifold for capturing
temporal variations. DISCUS was validated using four studies: (I) simulation
of a dynamic Shepp-Logan phantom to demonstrate its manifold discovery
capabilities, (II) comparison with compressed sensing and DIP-based methods
using simulated single-shot late gadolinium enhancement (LGE) image series from
six distinct digital cardiac phantoms in terms of normalized mean square error
(NMSE) and structural similarity index measure (SSIM), (III) evaluation on
retrospectively undersampled single-shot LGE data from eight patients, and (IV)
evaluation on prospectively undersampled single-shot LGE data from eight
patients, assessed via blind scoring from two expert readers. Results: DISCUS
outperformed competing methods, demonstrating superior reconstruction quality
in terms of NMSE and SSIM (Studies I--III) and expert reader scoring (Study
IV). Discussion: An unsupervised image reconstruction method is presented and
validated on simulated and measured data. These developments can benefit
applications where acquiring fully sampled data is challenging.
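Group sparsity on frame-specific code vectors can be expressed as an L2,1 penalty; this sketch assumes codes are stacked as a (frames x dimensions) matrix and is illustrative only:

```python
import numpy as np

def group_sparsity_penalty(Z):
    """L2,1 norm over code dimensions shared across frames.

    Z: (n_frames, code_dim) frame-specific code vectors. Each column
    (one latent dimension across all frames) is a group; the penalty
    drives whole columns to zero, so all frames share a small set of
    active dimensions, i.e. a low-dimensional manifold.
    """
    return float(np.sum(np.linalg.norm(Z, axis=0)))

# Only the first latent dimension is active across both frames.
penalty = group_sparsity_penalty(np.array([[3.0, 0.0], [4.0, 0.0]]))
```

Added to a DIP-style reconstruction loss, this term couples the per-frame codes and encourages them to trace out a shared low-dimensional manifold.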
|
2501.01483 | Embedding Similarity Guided License Plate Super Resolution | eess.IV cs.CV | Super-resolution (SR) techniques play a pivotal role in enhancing the quality
of low-resolution images, particularly for applications such as security and
surveillance, where accurate license plate recognition is crucial. This study
proposes a novel framework that combines pixel-based loss with embedding
similarity learning to address the unique challenges of license plate
super-resolution (LPSR). The introduced pixel and embedding consistency loss
(PECL) integrates a Siamese network and applies contrastive loss to force
embedding similarities to improve perceptual and structural fidelity. By
effectively balancing pixel-wise accuracy with embedding-level consistency, the
framework achieves superior alignment of fine-grained features between
high-resolution (HR) and super-resolved (SR) license plates. Extensive
experiments on the CCPD dataset validate the efficacy of the proposed
framework, demonstrating consistent improvements over state-of-the-art methods
in terms of PSNR_RGB, PSNR_Y and optical character recognition (OCR) accuracy.
These results highlight the potential of embedding similarity learning to
advance both perceptual quality and task-specific performance in extreme
super-resolution scenarios.
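The pixel and embedding consistency idea can be sketched as a pixel loss plus a contrastive term on Siamese embeddings; the weighting, margin, and function interface below are placeholder assumptions, not the paper's values:

```python
import numpy as np

def pecl_sketch(sr_img, hr_img, sr_emb, hr_emb,
                alpha=0.5, margin=1.0, same_plate=True):
    """Pixel loss + contrastive embedding loss (illustrative sketch).

    sr_img / hr_img: super-resolved and ground-truth images.
    sr_emb / hr_emb: Siamese-network embeddings of the two images.
    """
    pixel = float(np.mean(np.abs(sr_img - hr_img)))  # L1 pixel term
    d = float(np.linalg.norm(sr_emb - hr_emb))
    # Contrastive term: pull matching pairs together, push others apart.
    emb = d ** 2 if same_plate else max(0.0, margin - d) ** 2
    return pixel + alpha * emb

loss = pecl_sketch(np.zeros((2, 2)), np.ones((2, 2)),
                   np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```

Balancing `alpha` trades off pixel-wise fidelity against embedding-level consistency, which is the tension the abstract describes.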
|
2501.01484 | Sequencing Silicates in the IRS Debris Disk Catalog I: Methodology for
Unsupervised Clustering | astro-ph.EP astro-ph.IM cs.LG | Debris disks, which consist of dust, planetesimals, planets, and gas, offer a
unique window into the mineralogical composition of their parent bodies,
especially during the critical phase of terrestrial planet formation spanning
10 to a few hundred million years. Observations from the $\textit{Spitzer}$
Space Telescope have unveiled thousands of debris disks, yet systematic studies
remain scarce, let alone those with unsupervised clustering techniques. This
study introduces $\texttt{CLUES}$ (CLustering UnsupErvised with Sequencer), a
novel, non-parametric, fully-interpretable machine-learning spectral analysis
tool designed to analyze and classify the spectral data of debris disks.
$\texttt{CLUES}$ combines multiple unsupervised clustering methods with
multi-scale distance measures to discern new groupings and trends, offering
insights into compositional diversity and geophysical processes within these
disks. Our analysis allows us to explore a vast parameter space in debris disk
mineralogy and also offers broader applications in fields such as
protoplanetary disks and solar system objects. This paper details the
methodology, implementation, and initial results of $\texttt{CLUES}$, setting
the stage for more detailed follow-up studies focusing on debris disk
mineralogy and demographics.
|
2501.01496 | ORACLE: A Real-Time, Hierarchical, Deep-Learning Photometric Classifier
for the LSST | astro-ph.IM astro-ph.HE cs.AI cs.LG | We present ORACLE, the first hierarchical deep-learning model for real-time,
context-aware classification of transient and variable astrophysical phenomena.
ORACLE is a recurrent neural network with Gated Recurrent Units (GRUs), and has
been trained using a custom hierarchical cross-entropy loss function to provide
high-confidence classifications along an observationally-driven taxonomy with
as little as a single photometric observation. Contextual information for each
object, including host galaxy photometric redshift, offset, ellipticity and
brightness, is concatenated to the light curve embedding and used to make a
final prediction. Training on $\sim$0.5M events from the Extended LSST
Astronomical Time-Series Classification Challenge, we achieve a top-level
(Transient vs Variable) macro-averaged precision of 0.96 using only 1 day of
photometric observations after the first detection, in addition to contextual
information for each event; this increases to $>$0.99 once 64 days of the
light curve has been obtained, and 0.83 at 1024 days after first detection for
19-way classification (including supernova sub-types, active galactic nuclei,
variable stars, microlensing events, and kilonovae). We also compare ORACLE
with other state-of-the-art classifiers and report comparable performance for
the 19-way classification task, in addition to delivering accurate top-level
classifications much earlier. The code and model weights used in this work are
publicly available at our associated GitHub repository
(https://github.com/uiucsn/ELAsTiCC-Classification).
|
2501.01502 | Block components of generalized quaternion group codes | cs.IT math.CO math.IT | Codes in the generalized quaternion group algebra $\mathbb{F}_q[Q_{4n}]$ are
considered. Restricting to char$\,\mathbb{F}_q \nmid 4n$, the structure of an
arbitrary code $C \subseteq \mathbb{F}_q[Q_{4n}]$ is described via the
Wedderburn decomposition. Moreover, it is known that in this case every code $C
\subseteq \mathbb{F}_q[Q_{4n}]$ has a generating idempotent $\lambda \in
\mathbb{F}_q[Q_{4n}]$. Given the generating idempotent of a code $C$ we
determine the different components in its decomposition $C \cong
\bigoplus_{j=1}^{r+s}C_j \oplus \bigoplus_{i=1}^{k+t}C'_{i}.$ Afterwards we
apply this result to describe the blocks of codes induced by cyclic group
codes.
|