| id | title | categories | abstract |
|---|---|---|---|
2501.16050
|
Skeleton-Guided-Translation: A Benchmarking Framework for Code
Repository Translation with Fine-Grained Quality Evaluation
|
cs.SE cs.AI
|
The advancement of large language models has intensified the need to
modernize enterprise applications and migrate legacy systems to secure,
versatile languages. However, existing code translation benchmarks primarily
focus on individual functions, overlooking the complexities involved in
translating entire repositories, such as maintaining inter-module coherence and
managing dependencies. While some recent repository-level translation
benchmarks attempt to address these challenges, they still face limitations,
including poor maintainability and overly coarse evaluation granularity, which
make them less developer-friendly. We introduce Skeleton-Guided-Translation, a
framework for repository-level Java to C# code translation with fine-grained
quality evaluation. It uses a two-step process: first translating the
repository's structural "skeletons", then translating the full repository
guided by these skeletons. Building on this, we present TRANSREPO-BENCH, a
benchmark of high-quality open-source Java repositories and their corresponding
C# skeletons, including matching unit tests and build configurations. Our unit
tests are fixed and can be applied across multiple or incremental translations
without manual adjustments, enhancing automation and scalability in
evaluations. Additionally, we develop fine-grained evaluation metrics that
assess translation quality at the individual test case level, addressing
traditional binary metrics' inability to distinguish when build failures cause
all tests to fail. Evaluations using TRANSREPO-BENCH highlight key challenges
and advance more accurate repository-level code translation.
|
2501.16061
|
The Unbearable Lightness of Prompting: A Critical Reflection on the
Environmental Impact of genAI use in Design Education
|
cs.HC cs.AI
|
Design educators are finding ways to support students in skillfully using
GenAI tools in their practices while encouraging the critical scrutiny of the
ethical and social issues around these technologies. However, the issue of
environmental sustainability remains unaddressed. There is a lack both of
resources to grasp the environmental costs of genAI in education and of
shared practices for engaging with the issue. This paper critically reflects on
the energy costs of using genAI in design education, using a workshop held in
2023 with 49 students as a motivating example. Through this reflection, we
develop a set of five alternative stances, with related actions, that support
the conscious use of genAI in design education. The work contributes to the
field of design and HCI by bringing together ways for educators to reflect on
their practices, informing the future development of educational programs
around genAI.
|
2501.16065
|
CLIP-FGDI: Exploiting Vision-Language Model for Generalizable Person
Re-Identification
|
cs.CV
|
Vision-language models, known for their robust cross-modal capabilities, have
been extensively applied in various computer vision tasks. In this paper, we
explore the use of CLIP (Contrastive Language-Image Pretraining), a
vision-language model pretrained on large-scale image-text pairs to align
visual and textual features, for acquiring fine-grained and domain-invariant
representations in generalizable person re-identification. The adaptation of
CLIP to the task presents two primary challenges: learning more fine-grained
features to enhance discriminative ability, and learning more domain-invariant
features to improve the model's generalization capabilities. To mitigate the
first challenge and thereby enhance the ability to learn fine-grained features, a
three-stage strategy is proposed to boost the accuracy of text descriptions.
Initially, the image encoder is trained to effectively adapt to person
re-identification tasks. In the second stage, the features extracted by the
image encoder are used to generate textual descriptions (i.e., prompts) for
each image. Finally, the text encoder with the learned prompts is employed to
guide the training of the final image encoder. To enhance the model's
generalization capabilities to unseen domains, a bidirectional guiding method
is introduced to learn domain-invariant image features. Specifically,
domain-invariant and domain-relevant prompts are generated, and both positive
(pulling together image features and domain-invariant prompts) and negative
(pushing apart image features and domain-relevant prompts) views are used to
train the image encoder. Collectively, these strategies contribute to the
development of an innovative CLIP-based framework for learning fine-grained
generalized features in person re-identification.
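The bidirectional pull/push idea described above can be sketched as a toy objective; this is an illustrative reconstruction, not the paper's actual loss, and all names here (`bidirectional_guiding_loss`, `inv_prompt`, `rel_prompt`) are hypothetical:

```python
import numpy as np

def bidirectional_guiding_loss(img, inv_prompt, rel_prompt):
    """Toy pull/push objective: pull the image feature toward the
    domain-invariant prompt feature and push it away from the
    domain-relevant one (all vectors L2-normalized first)."""
    img = img / np.linalg.norm(img)
    inv = inv_prompt / np.linalg.norm(inv_prompt)
    rel = rel_prompt / np.linalg.norm(rel_prompt)
    pull = 1.0 - float(img @ inv)        # small when aligned with the invariant prompt
    push = max(0.0, float(img @ rel))    # penalized when close to the domain prompt
    return pull + push
```

An image feature identical to the domain-invariant prompt and orthogonal to the domain-relevant prompt yields zero loss, which is the regime the bidirectional guidance aims for.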
|
2501.16070
|
Generalizing Egocentric Temporal Neighborhoods to probe for spatial
correlations in temporal networks and infer their topology
|
physics.soc-ph cs.SI
|
Motifs are thought to be fundamental components of social face-to-face
interaction temporal networks. However, the motifs previously considered are
either limited to a handful of nodes and edges, or do not include triangles,
which are thought to be of critical relevance to understand the dynamics of
social systems. Thus, we introduce a new class of motifs that include these
triangles, are not limited in their number of nodes or edges, and yet can be
mined efficiently in any temporal network. Referring to these motifs as the
edge-centered motifs, we show analytically how they subsume the Egocentric
Temporal Neighborhoods motifs of [A. Longa, G. Cencetti, B. Lepri, and A.
Passerini, An efficient procedure for mining egocentric temporal motifs, Data
Mining and Knowledge Discovery 36, 355 (2022)]. We also confirm in empirical
data that the edge-centered motifs bring relevant information with respect to
the Egocentric motifs by using a principle of maximum entropy. Then, we show
how mining for the edge-centered motifs in a network can be used to probe for
spatial correlations in the underlying dynamics that have produced that
network. We deduce an approximate formula for the distribution of the
edge-centered motifs in empirical networks of social face-to-face interactions.
In the last section of this paper, we explore how the statistics of the
edge-centered motifs can be used to infer the complete topology of the network
they were sampled from. This calls for new mathematical developments, which we
inaugurate here under the name of graph tiling theory.
|
2501.16073
|
Challenging Assumptions in Learning Generic Text Style Embeddings
|
cs.LG cs.CL
|
Recent advancements in language representation learning primarily emphasize
language modeling for deriving meaningful representations, often neglecting
style-specific considerations. This study addresses this gap by creating
generic, sentence-level style embeddings crucial for style-centric tasks. Our
approach is grounded on the premise that low-level text style changes can
compose any high-level style. We hypothesize that applying this concept to
representation learning enables the development of versatile text style
embeddings. By fine-tuning a general-purpose text encoder using contrastive
learning and standard cross-entropy loss, we aim to capture these low-level
style shifts, anticipating that they offer insights applicable to high-level
text styles. The outcomes prompt us to reconsider the underlying assumptions, as
the results do not always show that the learned style representations capture
high-level text styles.
|
2501.16075
|
PISCO: Pretty Simple Compression for Retrieval-Augmented Generation
|
cs.CL cs.AI cs.IR
|
Retrieval-Augmented Generation (RAG) pipelines enhance Large Language Models
(LLMs) by retrieving relevant documents, but they face scalability issues due
to high inference costs and limited context size. Document compression is a
practical solution, but current soft compression methods suffer from accuracy
losses and require extensive pretraining. In this paper, we introduce PISCO, a
novel method that achieves a 16x compression rate with minimal accuracy loss
(0-3%) across diverse RAG-based question-answering (QA) tasks. Unlike existing
approaches, PISCO requires no pretraining or annotated data, relying solely on
sequence-level knowledge distillation from document-based questions. With the
ability to fine-tune a 7-10B LLM in 48 hours on a single A100 GPU, PISCO offers
a highly efficient and scalable solution. We present comprehensive experiments
showing that PISCO outperforms existing compression models by 8% in accuracy.
|
2501.16076
|
Minimizing Polarization and Disagreement in the Friedkin-Johnsen Model
with Unknown Innate Opinions
|
cs.SI
|
The bulk of the literature on opinion optimization in social networks adopts
the Friedkin-Johnsen (FJ) opinion dynamics model, in which the innate opinions
of all nodes are known: this is an unrealistic assumption. In this paper, we
study opinion optimization under the FJ model without the full knowledge of
innate opinions. Specifically, we borrow from the literature a series of
objective functions, aimed at minimizing polarization and/or disagreement, and
we tackle the budgeted optimization problem, where we can query the innate
opinions of only a limited number of nodes. Given the complexity of our
problem, we propose a framework based on three steps: (1) select the limited
number of nodes we query, (2) reconstruct the innate opinions of all nodes
based on those queried, and (3) optimize the objective function with the
reconstructed opinions. For each step of the framework, we present and
systematically evaluate several effective strategies. A key contribution of our
work is a rigorous error propagation analysis that quantifies how
reconstruction errors in innate opinions impact the quality of the final
solutions. Our experiments on various synthetic and real-world datasets show
that we can effectively minimize polarization and disagreement even if we have
quite limited information about innate opinions.
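For readers unfamiliar with the FJ model, the equilibrium opinions and the two objectives can be computed directly; a minimal sketch assuming the standard formulation $z = (I + L)^{-1} s$ with an unweighted adjacency matrix (function names are illustrative):

```python
import numpy as np

def fj_equilibrium(A, s):
    """Equilibrium opinions z = (I + L)^{-1} s of the Friedkin-Johnsen
    model, with L the Laplacian of adjacency matrix A and s the innate
    opinions."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.solve(np.eye(len(s)) + L, s)

def polarization_disagreement(A, s):
    """Polarization (variance of equilibrium opinions) and disagreement
    (weighted squared opinion differences across edges, z^T L z)."""
    z = fj_equilibrium(A, s)
    L = np.diag(A.sum(axis=1)) - A
    zc = z - z.mean()
    return float(zc @ zc), float(z @ L @ z)

# Toy example: a 3-node path with opposing innate opinions at the ends.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
s = np.array([1., 0., -1.])
pol, dis = polarization_disagreement(A, s)
```

Querying only some entries of `s` and reconstructing the rest before this computation is exactly the setting the paper studies; errors in the reconstructed `s` propagate linearly through the solve.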
|
2501.16077
|
RelCAT: Advancing Extraction of Clinical Inter-Entity Relationships from
Unstructured Electronic Health Records
|
cs.CL
|
This study introduces RelCAT (Relation Concept Annotation Toolkit), an
interactive tool, library, and workflow designed to classify relations between
entities extracted from clinical narratives. Building upon the CogStack MedCAT
framework, RelCAT addresses the challenge of capturing complete clinical
relations dispersed within text. The toolkit implements state-of-the-art
machine learning models such as BERT and Llama along with proven evaluation and
training methods. We demonstrate a dataset annotation tool (built within
MedCATTrainer), model training, and evaluate our methodology on both openly
available gold-standard and real-world UK National Health Service (NHS)
hospital clinical datasets. We perform extensive experimentation and a
comparative analysis of the various publicly available models with varied
approaches selected for model fine-tuning. Finally, we achieve macro F1-scores
of 0.977 on the gold-standard n2c2, surpassing the previous state-of-the-art
performance, and achieve performance of >=0.93 F1 on our NHS gathered datasets.
|
2501.16078
|
Integration of LLM Quality Assurance into an NLG System
|
cs.CL
|
In this paper, we present a system that uses a Large Language Model (LLM) to
perform grammar and spelling correction as a component of Quality Assurance
(QA) for texts generated by NLG systems, which is important for text production
in real-world scenarios. Evaluating the results of the system on
work-in-progress sports news texts in three languages, we show that it is able
to deliver acceptable corrections.
|
2501.16080
|
Generating Spatial Synthetic Populations Using Wasserstein Generative
Adversarial Network: A Case Study with EU-SILC Data for Helsinki and
Thessaloniki
|
cs.LG cs.MA
|
Using agent-based social simulations can enhance our understanding of urban
planning, public health, and economic forecasting. Realistic synthetic
populations with numerous attributes strengthen these simulations. The
Wasserstein Generative Adversarial Network, trained on census data like
EU-SILC, can create robust synthetic populations. These methods, aided by
external statistics or EU-SILC weights, generate spatial synthetic populations
for agent-based models. The increased access to high-quality micro-data has
sparked interest in synthetic populations, which preserve demographic profiles
and analytical strength while ensuring privacy and preventing discrimination.
This study uses national data from Finland and Greece for Helsinki and
Thessaloniki to explore balanced spatial synthetic population generation.
Results show challenges related to balancing data with or without aggregated
statistics for the target population and the general under-representation of
fringe profiles by deep generative methods. The latter can lead to
discrimination in agent-based simulations.
|
2501.16081
|
Combating Interference for Over-the-Air Federated Learning: A
Statistical Approach via RIS
|
cs.IT eess.SP math.IT
|
Over-the-air computation (AirComp) integrates analog communication with
task-oriented computation, serving as a key enabling technique for
communication-efficient federated learning (FL) over wireless networks.
However, owing to its analog characteristics, AirComp-enabled FL (AirFL) is
vulnerable to both unintentional and intentional interference. In this paper,
we aim to attain robustness in AirComp aggregation against interference via
reconfigurable intelligent surface (RIS) technology to artificially reconstruct
wireless environments. Concretely, we establish performance objectives tailored
for interference suppression in wireless FL systems, aiming to achieve unbiased
gradient estimation and reduce its mean square error (MSE). Oriented at these
objectives, we introduce the concept of phase-manipulated favorable propagation
and channel hardening for AirFL, which relies on the adjustment of RIS phase
shifts to realize statistical interference elimination and reduce the error
variance of gradient estimation. Building upon this concept, we propose two
robust aggregation schemes of power control and RIS phase shifts design, both
ensuring unbiased gradient estimation in the presence of interference.
Theoretical analysis of the MSE and FL convergence affirms the
anti-interference capability of the proposed schemes. It is observed that
computation and interference errors diminish by an order of
$\mathcal{O}\left(\frac{1}{N}\right)$ where $N$ is the number of RIS elements,
and the ideal convergence rate without interference can be asymptotically
achieved by increasing $N$. Numerical results confirm the analytical results
and validate the superior performance of the proposed schemes over existing
baselines.
|
2501.16085
|
ARFlow: Autoregressive Flow with Hybrid Linear Attention
|
cs.CV
|
Flow models are effective at progressively generating realistic images, but
they generally struggle to capture long-range dependencies during the
generation process as they compress all the information from previous time
steps into a single corrupted image. To address this limitation, we propose
integrating autoregressive modeling -- known for its excellence in modeling
complex, high-dimensional joint probability distributions -- into flow models.
During training, at each step, we construct causally-ordered sequences by
sampling multiple images from the same semantic category and applying different
levels of noise, where images with higher noise levels serve as causal
predecessors to those with lower noise levels. This design enables the model to
learn broader category-level variations while maintaining proper causal
relationships in the flow process. During generation, the model
autoregressively conditions on the previously generated images from earlier
denoising steps, forming a contextual and coherent generation trajectory.
Additionally, we design a customized hybrid linear attention mechanism tailored
to our modeling approach to enhance computational efficiency. Our approach,
termed ARFlow, achieves an FID of 14.08 on ImageNet at 128x128 under 400k
training steps without classifier-free guidance, reaching an FID of 4.34 with
classifier-free guidance scale 1.5, significantly outperforming the previous
flow-based model SiT's 9.17 FID. Extensive ablation studies demonstrate the
effectiveness of our modeling strategy and chunk-wise attention design.
|
2501.16086
|
Value-oriented forecast reconciliation for renewables in electricity
markets
|
stat.ML cs.LG
|
Forecast reconciliation is considered an effective method for achieving
coherence and improving forecast accuracy. However, the value of reconciled
forecasts in downstream decision-making tasks has been mostly overlooked. In a
multi-agent setup with heterogeneous loss functions, this oversight may lead to
unfair outcomes, hence resulting in conflicts during the reconciliation
process. To address this, we propose a value-oriented forecast reconciliation
approach that focuses on the forecast value for individual agents. Fairness is
ensured through the use of a Nash bargaining framework. Specifically, we model
this problem as a cooperative bargaining game, where each agent aims to
optimize their own gain while contributing to the overall reconciliation
process. We then present a primal-dual algorithm for parameter estimation based
on empirical risk minimization. From an application perspective, we consider an
aggregated wind energy trading problem, where profits are distributed using a
weighted allocation rule. We demonstrate the effectiveness of our approach
through several numerical experiments, showing that it consistently results in
increased profits for all agents involved.
|
2501.16093
|
STAR: Stepwise Task Augmentation and Relation Learning for Aspect
Sentiment Quad Prediction
|
cs.CL cs.AI
|
Aspect-based sentiment analysis (ABSA) aims to identify four sentiment
elements, including aspect term, aspect category, opinion term, and sentiment
polarity. These elements construct the complete picture of sentiments. The most
challenging task, aspect sentiment quad prediction (ASQP), predicts these
elements simultaneously, hindered by difficulties in accurately coupling
different sentiment elements. A key challenge is insufficient annotated data
that limits the capability of models in semantic understanding and reasoning
about quad prediction. To address this, we propose stepwise task augmentation
and relation learning (STAR), a strategy inspired by human reasoning. STAR
constructs auxiliary data to learn quadruple relationships incrementally by
augmenting with pairwise and overall relation tasks derived from training data.
By encouraging the model to infer causal relationships among sentiment elements
without requiring additional annotations, STAR effectively enhances quad
prediction. Extensive experiments demonstrate the proposed STAR exhibits
superior performance on four benchmark datasets.
|
2501.16098
|
Multi-Agent Meta-Offline Reinforcement Learning for Timely UAV Path
Planning and Data Collection
|
cs.MA
|
Multi-agent reinforcement learning (MARL) has been widely adopted in
high-performance computing and complex data-driven decision-making in the
wireless domain. However, conventional MARL schemes face many obstacles in
real-world scenarios. First, most MARL algorithms are online, which might be
unsafe and impractical. Second, MARL algorithms are environment-specific,
meaning network configuration changes require model retraining. This letter
proposes a novel meta-offline MARL algorithm that combines conservative
Q-learning (CQL) and model agnostic meta-learning (MAML). CQL enables offline
training by leveraging pre-collected datasets, while MAML ensures scalability
and adaptability to dynamic network configurations and objectives. We propose
two algorithm variants: independent training (M-I-MARL) and centralized
training decentralized execution (M-CTDE-MARL). Simulation results show that
the proposed algorithm outperforms conventional schemes, especially the CTDE
variant, which achieves 50% faster convergence in dynamic scenarios than the
benchmarks. The proposed framework enhances scalability, robustness, and
adaptability in wireless communication systems by optimizing UAV trajectories
and scheduling policies.
|
2501.16099
|
An Air-Gap Element for the Isogeometric Space-Time-Simulation of
Electric Machines
|
math.NA cs.CE cs.NA
|
Space-time methods promise more efficient time-domain simulations, in
particular of electrical machines. However, most approaches require the motion
to be known in advance so that it can be included in the space-time mesh. To
overcome this problem, this paper proposes to use the well-known air-gap
element for the rotor-stator coupling of an isogeometric machine model. First,
we derive the solution in the air-gap region and then employ it to couple the
rotor and stator. This coupling is angle dependent and we show how to
efficiently update the coupling matrices to a different angle, avoiding
expensive quadrature. Finally, the resulting time-dependent problem is solved
in a space-time setting. The spatial discretization using isogeometric analysis
is particularly suitable for coupling via the air-gap element, as NURBS can
exactly represent the geometry of the air-gap. Furthermore, the model including
the air-gap element can be seamlessly transferred to the space-time setting.
While the air-gap element itself is well known in the literature, the originality
of this work lies in its application to isogeometric analysis and space-time methods.
|
2501.16100
|
Automated Detection of Sport Highlights from Audio and Video Sources
|
cs.CV cs.AI cs.LG
|
This study presents a novel Deep Learning-based and lightweight approach for
the automated detection of sports highlights (HLs) from audio and video
sources. HL detection is a key task in sports video analysis, traditionally
requiring significant human effort. Our solution leverages Deep Learning (DL)
models trained on relatively small datasets of audio Mel-spectrograms and
grayscale video frames, achieving promising accuracy rates of 89% and 83% for
audio and video detection, respectively. The use of small datasets, combined
with simple architectures, demonstrates the practicality of our method for fast
and cost-effective deployment. Furthermore, an ensemble model combining both
modalities shows improved robustness against false positives and false
negatives. The proposed methodology offers a scalable solution for automated HL
detection across various types of sports video content, reducing the need for
manual intervention. Future work will focus on enhancing model architectures
and extending this approach to broader scene-detection tasks in media analysis.
|
2501.16101
|
3D Reconstruction of non-visible surfaces of objects from a Single Depth
View -- Comparative Study
|
cs.RO cs.CV
|
Scene and object reconstruction is an important problem in robotics, in
particular in planning collision-free trajectories or in object manipulation.
This paper compares two strategies for the reconstruction of non-visible parts
of the object surface from a single RGB-D camera view. The first method,
DeepSDF, predicts the Signed Distance Transform to the object surface for a
given point in 3D space. The second method, MirrorNet, reconstructs the
occluded parts of objects by generating images from the other side of the
observed object. Experiments performed with objects from the ShapeNet dataset
show that the view-dependent MirrorNet is faster and has smaller reconstruction
errors in most categories.
|
2501.16103
|
Static Batching of Irregular Workloads on GPUs: Framework and
Application to Efficient MoE Model Inference
|
cs.DC cs.LG
|
It has long been a problem to arrange and execute irregular workloads on
massively parallel devices. We propose a general framework for statically
batching irregular workloads into a single kernel with a runtime task mapping
mechanism on GPUs. We further apply this framework to Mixture-of-Experts (MoE)
model inference and implement an optimized and efficient CUDA kernel. Our MoE
kernel achieves up to 91% of the peak Tensor Core throughput on NVIDIA H800 GPU
and 95% on NVIDIA H20 GPU.
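The static-batching idea, precomputing a tile-to-task map so one fused launch covers tasks of irregular sizes, can be mimicked in plain Python. This is a CPU sketch of the scheduling logic only, not the authors' CUDA implementation, and all names (`TILE`, `build_tile_map`, `fused_launch`) are hypothetical:

```python
import numpy as np

TILE = 4  # rows handled per "block" in the fused launch (illustrative)

def build_tile_map(task_sizes):
    """Static schedule: a list of (task id, row offset) pairs, one per
    tile, covering every task regardless of its size."""
    return [(task, off)
            for task, size in enumerate(task_sizes)
            for off in range(0, size, TILE)]

def fused_launch(tile_map, inputs):
    """Single loop over tiles, standing in for one GPU kernel launch;
    each 'block' looks up which task it serves at runtime."""
    outputs = [np.zeros_like(x) for x in inputs]
    for task, off in tile_map:
        end = min(off + TILE, len(inputs[task]))
        outputs[task][off:end] = 2.0 * inputs[task][off:end]  # per-task work
    return outputs

sizes = [6, 3, 9]  # e.g., token counts routed to three experts
xs = [np.ones(s) for s in sizes]
ys = fused_launch(build_tile_map(sizes), xs)
```

Because the tile map is built once before launch, the irregular per-expert workloads in MoE inference can share a single kernel instead of one launch per expert.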
|
2501.16106
|
Towards Explainable Multimodal Depression Recognition for Clinical
Interviews
|
cs.CL
|
Multimodal depression recognition for clinical interviews (MDRC)
has recently attracted considerable attention. Existing MDRC studies mainly
focus on improving task performance and have achieved significant development.
However, for clinical applications, model transparency is critical, and
previous works ignore the interpretability of decision-making processes. To
address this issue, we propose an Explainable Multimodal Depression Recognition
for Clinical Interviews (EMDRC) task, which aims to provide evidence for
depression recognition by summarizing symptoms and uncovering underlying
causes. Given an interviewer-participant interaction scenario, the goal of
EMDRC is to produce a structured summary of the participant's symptoms based on
the eight-item Patient Health Questionnaire depression scale (PHQ-8) and to predict their
depression severity. To tackle the EMDRC task, we construct a new dataset based
on an existing MDRC dataset. Moreover, we utilize the PHQ-8 and propose a
PHQ-aware multimodal multi-task learning framework, which captures the
utterance-level symptom-related semantic information to help generate
dialogue-level summary. Experiment results on our annotated dataset demonstrate
the superiority of our proposed methods over baseline systems on the EMDRC
task.
|
2501.16110
|
Using Generative Models to Produce Realistic Populations of UK
Windstorms
|
physics.ao-ph cs.LG
|
This study evaluates the potential of generative models, trained on
historical ERA5 reanalysis data, for simulating windstorms over the UK. Four
generative models, including a standard GAN, a WGAN-GP, a U-net diffusion
model, and a diffusion-GAN were assessed based on their ability to replicate
spatial and statistical characteristics of windstorms. Different models have
distinct strengths and limitations. The standard GAN displayed broader
variability and limited alignment on the PCA dimensions. The WGAN-GP had a more
balanced performance but occasionally misrepresented extreme events. The U-net
diffusion model produced high-quality spatial patterns but consistently
underestimated windstorm intensities. The diffusion-GAN performed better than
the other models in general but overestimated extremes. An ensemble approach
combining the strengths of these models could potentially improve their overall
reliability. This study provides a foundation for such generative models in
meteorological research and could potentially be applied in windstorm analysis
and risk assessment.
|
2501.16111
|
Options-Aware Dense Retrieval for Multiple-Choice Question Answering
|
cs.IR
|
Long-context multiple-choice question answering tasks require robust
reasoning over extensive text sources. Since most of the pre-trained
transformer models are restricted to processing only a few hundred words at a
time, successful completion of such tasks often relies on the identification of
evidence spans, such as sentences, that provide supporting evidence for
selecting the correct answer. Prior research in this domain has predominantly
utilized pre-trained dense retrieval models, given the absence of supervision
to fine-tune the retrieval process. This paper proposes a novel method called
Options-Aware Dense Retrieval (OADR) to address these challenges. OADR uses an
innovative approach to fine-tuning retrieval by leveraging query-options
embeddings, which aim to mimic the embeddings of the oracle query (i.e., the
query paired with the correct answer) for enhanced identification of supporting
evidence. Through experiments conducted on the QuALITY benchmark dataset, we
demonstrate that our proposed model surpasses existing baselines in terms of
performance and accuracy.
|
2501.16112
|
Survey: Understanding the Challenges of Machine Learning Experts Using Named
Entity Recognition Tools
|
cs.IR cs.CL
|
This paper presents a survey based on Kasunic's survey research methodology
to identify the criteria used by Machine Learning (ML) experts to evaluate
Named Entity Recognition (NER) tools and frameworks. Comparison and selection
of NER tools and frameworks is a critical step in leveraging NER for
Information Retrieval to support the development of Clinical Practice
Guidelines. In addition, this study examines the main challenges faced by ML
experts when choosing suitable NER tools and frameworks. Using Nunamaker's
methodology, the article begins with an introduction to the topic,
contextualizes the research, reviews the state-of-the-art in science and
technology, and identifies challenges for an expert survey on NER tools and
frameworks. This is followed by a description of the survey's design and
implementation. The paper concludes with an evaluation of the survey results
and the insights gained, ending with a summary and conclusions.
|
2501.16113
|
Fixed-size clusters $k$-Means
|
cs.LG
|
We present a $k$-means-based clustering algorithm that optimizes the mean
square error for given cluster sizes. A straightforward application is
balanced clustering, where the sizes of all clusters are equal. In the
$k$-means assignment phase, the algorithm solves an assignment problem using
the Hungarian algorithm, which makes the assignment phase time complexity
$O(n^3)$ yet still enables clustering of datasets of more than 5000 points.
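The assignment phase can be sketched with SciPy's `linear_sum_assignment` as the Hungarian solver: each cluster is expanded into as many identical "slots" as its prescribed size, and points are matched to slots at minimum squared distance. The function name and initialization below are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fixed_size_kmeans(X, sizes, iters=20, seed=0):
    """k-means with prescribed cluster sizes: the assignment phase
    expands cluster j into sizes[j] slots and matches points to slots
    with the Hungarian algorithm (O(n^3) per assignment)."""
    rng = np.random.default_rng(seed)
    k, n = len(sizes), len(X)
    assert sum(sizes) == n
    centers = X[rng.choice(n, size=k, replace=False)]
    slot_owner = np.repeat(np.arange(k), sizes)  # slot index -> cluster id
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        cost = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        _, cols = linear_sum_assignment(cost[:, slot_owner])  # n x n slot costs
        labels = slot_owner[cols]
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```

With equal entries in `sizes`, this reduces to balanced clustering: every cluster receives exactly `n / k` points by construction.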
|
2501.16117
|
A Unified Analysis of Stochastic Gradient Descent with Arbitrary Data
Permutations and Beyond
|
cs.LG
|
We aim to provide a unified convergence analysis for permutation-based
Stochastic Gradient Descent (SGD), where data examples are permuted before each
epoch. By examining the relations among permutations, we categorize existing
permutation-based SGD algorithms into four categories: Arbitrary Permutations,
Independent Permutations (including Random Reshuffling), One Permutation
(including Incremental Gradient, Shuffle One and Nice Permutation) and
Dependent Permutations (including GraBs; Lu et al., 2022; Cooper et al., 2023).
Existing unified analyses failed to encompass the Dependent Permutations
category due to the inter-epoch dependencies in its permutations. In this work,
we propose a general assumption that captures the inter-epoch permutation
dependencies. Using the general assumption, we develop a unified framework for
permutation-based SGD with arbitrary permutations of examples, incorporating
all the aforementioned representative algorithms. Furthermore, we adapt our
framework on example ordering in SGD for client ordering in Federated Learning
(FL). Specifically, we develop a unified framework for
regularized-participation FL with arbitrary permutations of clients.
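The difference between the permutation categories is easy to see on a toy quadratic objective; a hedged sketch, with illustrative names and hyperparameters, where only `perm_fn` changes between categories:

```python
import numpy as np

def permuted_sgd(xs, perm_fn, epochs=200, lr=0.05):
    """SGD on f(w) = (1/n) * sum_i (w - x_i)^2 / 2, where perm_fn(epoch)
    returns the order in which examples are visited that epoch."""
    w = 0.0
    for e in range(epochs):
        for i in perm_fn(e):
            w -= lr * (w - xs[i])  # gradient of the i-th component at w
    return w

xs = np.array([-1.0, 0.0, 3.0])  # the minimizer of f is the mean, 2/3
n = len(xs)
rng = np.random.default_rng(0)

w_rr  = permuted_sgd(xs, lambda e: rng.permutation(n))  # Random Reshuffling
w_one = permuted_sgd(xs, lambda e: np.arange(n))        # Incremental Gradient
```

Both orderings converge to a neighborhood of the minimizer whose radius scales with the step size; the unified analysis in the paper covers such schemes, plus the dependent-permutation ones, under a single assumption.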
|
2501.16120
|
Copyright and Competition: Estimating Supply and Demand with
Unstructured Data
|
econ.EM cs.LG stat.AP stat.ML
|
Copyright policies play a pivotal role in protecting the intellectual
property of creators and companies in creative industries. The advent of
cost-reducing technologies, such as generative AI, in these industries calls
for renewed attention to the role of these policies. This paper studies product
positioning and competition in a market of creatively differentiated products
and the competitive and welfare effects of copyright protection. A common
feature of products with creative elements is that their key attributes (e.g.,
images and text) are unstructured and thus high-dimensional. We focus on a
stylized design product, fonts, and use data from the world's largest online
marketplace for fonts. We use neural network embeddings to quantify
unstructured attributes and measure the visual similarity. We show that this
measure closely aligns with actual human perception. Based on this measure, we
empirically find that competition occurs locally in the visual characteristics
space. We then develop a structural model for supply and demand that integrates
the embeddings. Through counterfactual analyses, we find that local copyright
protection can enhance consumer welfare when products are relocated, and the
interplay between copyright and cost-reducing technologies is essential in
determining an optimal policy for social welfare. We believe that the embedding
analysis and empirical models introduced in this paper are applicable to a
range of industries where unstructured data captures essential features of
products and markets.
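As an illustration of the embedding-based approach, similarity between two products can be scored directly on their embedding vectors (cosine similarity is one common choice; the paper's exact measure may differ):

```python
import numpy as np

def visual_similarity(e1, e2):
    """Cosine similarity between two product-embedding vectors."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))
```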
|
2501.16123
|
From #Dr00gtiktok to #harmreduction: Exploring Substance Use Hashtags on
TikTok
|
cs.CL
|
The rise of TikTok as a primary source of information for youth, combined
with its unique short-form video format, creates urgent questions about how
substance use content manifests and spreads on the platform. This paper
provides the first in-depth exploration of substance use-related content on
TikTok, covering all major substance categories as classified by the Drug
Enforcement Agency. Through social network analysis and qualitative coding, we
examined 2,333 hashtags across 39,509 videos, identified 16 distinct
hashtag communities and analyzed their interconnections and thematic content.
Our analysis revealed a highly interconnected small-world network where
recovery-focused hashtags like #addiction, #recovery, and #sober serve as
central bridges between communities. Through manual coding of 351
representative videos, we found that Recovery Advocacy content (33.9%) and
Satirical content (28.2%) dominate, while direct substance depiction appears in
only 26% of videos, with active use shown in just 6.5% of them. This suggests
TikTok functions primarily as a recovery support platform rather than a space
promoting substance use. We found strong alignment between hashtag communities
and video content, indicating organic community formation rather than attempts
to evade content moderation. Our findings inform how platforms can balance
content moderation with preserving valuable recovery support communities, while
also providing insights for the design of social media-based recovery
interventions.
|
2501.16125
|
SampleLLM: Optimizing Tabular Data Synthesis in Recommendations
|
cs.IR
|
Tabular data synthesis is crucial in machine learning, yet existing general
methods-primarily based on statistical or deep learning models-are highly
data-dependent and often fall short in recommender systems. This limitation
arises from their difficulty in capturing complex distributions and
understanding feature relationships from sparse and limited data, along with
their inability to grasp semantic feature relations. Recently, Large Language
Models (LLMs) have shown potential in generating synthetic data samples through
few-shot learning and semantic understanding. However, they often suffer from
inconsistent distribution and lack of diversity due to their inherent
distribution disparity with the target dataset. To address these challenges and
enhance tabular data synthesis for recommendation tasks, we propose a novel
two-stage framework named SampleLLM to improve the quality of LLM-based tabular
data synthesis for recommendations by ensuring better distribution alignment.
In the first stage, SampleLLM employs LLMs with Chain-of-Thought prompts and
diverse exemplars to generate data that closely aligns with the target dataset
distribution, even when input samples are limited. The second stage uses an
advanced feature attribution-based importance sampling method to refine feature
relationships within the synthesized data, reducing any distribution biases
introduced by the LLM. Experimental results on three recommendation datasets,
two general datasets, and online deployment illustrate that SampleLLM
significantly surpasses existing methods for recommendation tasks and holds
promise for a broader range of tabular data scenarios.
|
2501.16128
|
Graphene-Assisted Chemical Stabilization of Liquid Metal Nano Droplets
for Liquid Metal Based Energy Storage
|
eess.SY cond-mat.mtrl-sci cs.SY
|
Energy storage devices with liquid metal electrodes have attracted interest
in recent years due to their potential for mechanical resilience, self-healing,
dendrite-free operation, and fast reaction kinetics. Gallium alloys like
Eutectic Gallium Indium (EGaIn) are appealing due to their low melting point
and high theoretical specific capacity. However, EGaIn electrodes are unstable
in highly alkaline electrolytes due to Gallium oxide dissolution. In this
letter, this bottleneck is addressed by introducing chemically stable films in
which nanoscale droplets of EGaIn are coated with trace amounts of graphene
oxide (GO). It is demonstrated that a GO to EGaIn weight ratio as low as 0.01
provides enough protection for a thin film formed by the GO-EGaIn nanocomposite
against significantly acidic or alkaline environments (pH 1-14). It is shown
that GO coating significantly enhances the surface stability in such
environments, thus improving the energy storage capacity by over 10x.
Microstructural analysis confirms GO-EGaIn composite stability and enhanced
electrochemical performance. Utilizing this, a thin film supercapacitor is
fabricated. Results indicate that when coating the EGaIn with a GO to EGaIn
ratio of 0.001, the areal capacitance improves by 10 times, reaching 20.02 mF
cm$^{-2}$.
This breakthrough paves the way for advanced liquid metal-based thin film
electrodes, promising significant improvements in energy storage applications.
|
2501.16130
|
ReFill: Reinforcement Learning for Fill-In Minimization
|
cs.LG
|
Efficiently solving sparse linear systems $Ax=b$, where $A$ is a large,
sparse, symmetric positive semi-definite matrix, is a core challenge in
scientific computing, machine learning, and optimization. A major bottleneck in
Gaussian elimination for these systems is fill-in, the creation of non-zero
entries that increase memory and computational cost. Minimizing fill-in is
NP-hard, and existing heuristics like Minimum Degree and Nested Dissection
offer limited adaptability across diverse problem instances.
We introduce \textit{ReFill}, a reinforcement learning framework enhanced by
Graph Neural Networks (GNNs) to learn adaptive ordering strategies for fill-in
minimization. ReFill trains a GNN-based heuristic to predict efficient
elimination orders, outperforming traditional heuristics by dynamically
adapting to the structure of input matrices. Experiments demonstrate that
ReFill outperforms strong heuristics in reducing fill-in, highlighting the
untapped potential of learning-based methods for this well-studied classical
problem.
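The quantity being minimized can be counted directly on the graph of the matrix's sparsity pattern: eliminating a vertex connects all of its remaining neighbours pairwise, and each newly created edge is one fill-in entry. A small sketch of that count for a given elimination order (the function itself is illustrative, not ReFill's implementation):

```python
def fill_in(adj, order):
    """Count fill edges created by eliminating vertices in `order`.
    adj: dict vertex -> set of neighbours (undirected graph of the
    sparsity pattern of a symmetric matrix)."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    fill = 0
    for v in order:
        nbrs = list(g[v])
        # eliminating v connects all its remaining neighbours pairwise
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in g[a]:
                    g[a].add(b)
                    g[b].add(a)
                    fill += 1
        for u in nbrs:
            g[u].discard(v)
        del g[v]
    return fill
```

On a star graph, eliminating the hub first creates a clique among the leaves, while eliminating the leaves first creates no fill at all, which is exactly the kind of ordering sensitivity a learned heuristic can exploit.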
|
2501.16135
|
Evaluation of NMT-Assisted Grammar Transfer for a Multi-Language
Configurable Data-to-Text System
|
cs.CL
|
One approach for multilingual data-to-text generation is to translate
grammatical configurations upfront from the source language into each target
language. These configurations are then used by a surface realizer and in
document planning stages to generate output. In this paper, we describe a
rule-based NLG implementation of this approach where the configuration is
translated by Neural Machine Translation (NMT) combined with a one-time human
review, and introduce a cross-language grammar dependency model to create a
multilingual NLG system that generates text from the source data, scaling the
generation phase without a human in the loop. Additionally, we introduce a
method for human post-editing evaluation on the automatically translated text.
Our evaluation on the SportSett:Basketball dataset shows that our NLG system
performs well, underlining its grammatical correctness in translation tasks.
|
2501.16138
|
Quantifying the Self-Interest Level of Markov Social Dilemmas
|
cs.GT cs.MA
|
This paper introduces a novel method for estimating the self-interest level
of computationally intractable Markov social dilemmas. We extend the concept of
self-interest level from normal-form games to Markov games, providing a
quantitative measure of the minimum reward exchange required to incentivize
cooperation by aligning individual and collective interests. We demonstrate our
method on three environments from the Melting Pot suite, which represent either
common-pool resources or public goods. Our results show that the proposed
method successfully identifies a threshold at which learning agents transition
from selfish to cooperative equilibria in a Markov social dilemma. This work
contributes to the fields of Cooperative AI and multiagent reinforcement
learning by providing a practical tool for analysing complex, multistep social
dilemmas. Our findings offer insights into how reward structures can promote or
hinder cooperation in challenging multiagent scenarios, with potential
applications in areas such as mechanism design.
|
2501.16142
|
Towards General-Purpose Model-Free Reinforcement Learning
|
cs.LG cs.AI
|
Reinforcement learning (RL) promises a framework for near-universal
problem-solving. In practice however, RL algorithms are often tailored to
specific benchmarks, relying on carefully tuned hyperparameters and algorithmic
choices. Recently, powerful model-based RL methods have shown impressive
general results across benchmarks but come at the cost of increased complexity
and slow run times, limiting their broader applicability. In this paper, we
attempt to find a unifying model-free deep RL algorithm that can address a
diverse class of domains and problem settings. To achieve this, we leverage
model-based representations that approximately linearize the value function,
taking advantage of the denser task objectives used by model-based RL while
avoiding the costs associated with planning or simulated trajectories. We
evaluate our algorithm, MR.Q, on a variety of common RL benchmarks with a
single set of hyperparameters and show a competitive performance against
domain-specific and general baselines, providing a concrete step towards
building general-purpose model-free deep RL algorithms.
|
2501.16145
|
Capacity-Achieving Input Distribution of the Additive Uniform Noise
Channel With Peak Amplitude and Cost Constraint
|
cs.IT math.IT
|
Under which condition is quantization optimal? We address this question in
the context of the additive uniform noise channel under peak amplitude and
power constraints. We compute analytically the capacity-achieving input
distribution as a function of the noise level, the average power constraint and
the exponent of the power constraint. We find that when the cost constraint is
tight and the cost function is concave, the capacity-achieving input
distribution is discrete, whereas when the cost function is convex, the support
of the capacity-achieving input distribution spans the entire interval.
|
2501.16146
|
Toward Efficient Generalization in 3D Human Pose Estimation via a
Canonical Domain Approach
|
cs.CV cs.AI
|
Recent advancements in deep learning methods have significantly improved the
performance of 3D Human Pose Estimation (HPE). However, performance degradation
caused by domain gaps between source and target domains remains a major
challenge to generalization, necessitating extensive data augmentation and/or
fine-tuning for each specific target domain. To address this issue more
efficiently, we propose a novel canonical domain approach that maps both the
source and target domains into a unified canonical domain, alleviating the need
for additional fine-tuning in the target domain. To construct the canonical
domain, we introduce a canonicalization process to generate a novel canonical
2D-3D pose mapping that ensures 2D-3D pose consistency and simplifies 2D-3D
pose patterns, enabling more efficient training of lifting networks. The
canonicalization of both domains is achieved through the following steps: (1)
in the source domain, the lifting network is trained within the canonical
domain; (2) in the target domain, input 2D poses are canonicalized prior to
inference by leveraging the properties of perspective projection and known
camera intrinsics. Consequently, the trained network can be directly applied to
the target domain without requiring additional fine-tuning. Experiments
conducted with various lifting networks and publicly available datasets (e.g.,
Human3.6M, Fit3D, MPI-INF-3DHP) demonstrate that the proposed method
substantially improves generalization capability across datasets while using
the same data volume.
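The target-domain canonicalization step relies on known camera intrinsics; mapping pixel keypoints into normalized camera coordinates by undoing the intrinsics is the standard first operation of that kind. A sketch under that assumption (the paper's full canonicalization involves more than this):

```python
import numpy as np

def canonicalize_2d(keypoints_uv, K):
    """Map pixel keypoints (n, 2) to normalized camera coordinates
    by undoing the known 3x3 intrinsics matrix K."""
    n = keypoints_uv.shape[0]
    homo = np.hstack([keypoints_uv, np.ones((n, 1))])  # homogeneous coords
    return (np.linalg.inv(K) @ homo.T).T[:, :2]
```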
|
2501.16147
|
Efficient Portrait Matte Creation With Layer Diffusion and Connectivity
Priors
|
cs.CV
|
Learning effective deep portrait matting models requires training data of
both high quality and large quantity. Neither quality nor quantity can be
easily met for portrait matting, however. Since the most accurate ground-truth
portrait mattes are acquired in front of the green screen, it is almost
impossible to harvest a large-scale portrait matting dataset in reality. This
work shows that one can leverage text prompts and the recent Layer Diffusion
model to generate high-quality portrait foregrounds and extract latent portrait
mattes. However, the portrait mattes cannot be readily used due to
significant generation artifacts. Inspired by the connectivity priors observed
in portrait images, that is, the border of portrait foregrounds always appears
connected, a connectivity-aware approach is introduced to refine portrait
mattes. Building on this, a large-scale portrait matting dataset is created,
termed LD-Portrait-20K, with $20,051$ portrait foregrounds and high-quality
alpha mattes. Extensive experiments demonstrated the value of the
LD-Portrait-20K dataset, with models trained on it significantly outperforming
those trained on other datasets. In addition, comparisons with the chroma
keying algorithm and an ablation study on dataset capacity further confirmed
the effectiveness of the proposed matte creation approach. Further, the dataset
also contributes to state-of-the-art video portrait matting, implemented by
simple video segmentation and a trimap-based image matting model trained on
this dataset.
|
2501.16150
|
AI Agents for Computer Use: A Review of Instruction-based Computer
Control, GUI Automation, and Operator Assistants
|
cs.AI cs.HC cs.SY eess.SY
|
Instruction-based computer control agents (CCAs) execute complex action
sequences on personal computers or mobile devices to fulfill tasks using the
same graphical user interfaces as a human user would, provided instructions in
natural language. This review offers a comprehensive overview of the emerging
field of instruction-based computer control, examining available agents --
their taxonomy, development, and respective resources -- and emphasizing the
shift from manually designed, specialized agents to leveraging foundation
models such as large language models (LLMs) and vision-language models (VLMs).
We formalize the problem and establish a taxonomy of the field to analyze
agents from three perspectives: (a) the environment perspective, analyzing
computer environments; (b) the interaction perspective, describing observation
spaces (e.g., screenshots, HTML) and action spaces (e.g., mouse and keyboard
actions, executable code); and (c) the agent perspective, focusing on the core
principle of how an agent acts and learns to act. Our framework encompasses
both specialized and foundation agents, facilitating their comparative analysis
and revealing how prior solutions in specialized agents, such as an environment
learning step, can guide the development of more capable foundation agents.
Additionally, we review current CCA datasets and CCA evaluation methods and
outline the challenges to deploying such agents in a productive setting. In
total, we review and classify 86 CCAs and 33 related datasets. By highlighting
trends, limitations, and future research directions, this work presents a
comprehensive foundation to obtain a broad understanding of the field and push
its future development.
|
2501.16153
|
MILP initialization for solving parabolic PDEs with PINNs
|
cs.LG
|
Physics-Informed Neural Networks (PINNs) are a powerful deep learning method
capable of providing solutions and parameter estimations of physical systems.
Given the complexity of their neural network structure, the convergence speed
is still limited compared to numerical methods, mainly when used in
applications that model realistic systems. The network initialization follows a
random distribution of the initial weights, as in the case of traditional
neural networks, which could lead to severe model convergence bottlenecks. To
overcome this problem, we follow current studies that deal with optimal initial
weights in traditional neural networks. In this paper, we use a convex
optimization model to improve the initialization of the weights in PINNs and
accelerate convergence. We investigate two optimization models as a first
training step, defined as pre-training, one involving only the boundaries and
one including physics. The optimization is focused on the first layer of the
neural network part of the PINN model, while the other weights are randomly
initialized. We test the methods using a practical application of the heat
diffusion equation to model the temperature distribution of power transformers.
The PINN model with boundary pre-training is the fastest converging method at
the current stage.
|
2501.16154
|
AdaCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive
Chain-of-Thought
|
cs.CL cs.AI
|
Large language models (LLMs) have shown impressive multilingual capabilities
through pretraining on diverse corpora. While these models show strong
reasoning abilities, their performance varies significantly across languages
due to uneven training data distribution. Existing approaches using machine
translation, extensive multilingual pretraining, and cross-lingual tuning
face scalability challenges and often fail to capture nuanced reasoning
processes across languages. In this paper, we introduce AdaCoT (Adaptive
Chain-of-Thought), a framework that enhances multilingual reasoning by
dynamically routing thought processes through intermediary "thinking languages"
before generating target-language responses. AdaCoT leverages a
language-agnostic core and incorporates an adaptive, reward-based mechanism for
selecting optimal reasoning pathways without requiring additional pretraining.
Our comprehensive evaluation across multiple benchmarks demonstrates
substantial improvements in both factual reasoning quality and cross-lingual
consistency, with particularly strong performance gains in low-resource
language settings. The results suggest that adaptive reasoning paths can
effectively bridge the performance gap between high and low-resource languages
while maintaining cultural and linguistic nuances.
|
2501.16159
|
Comprehensive Benchmarking Environment for Worker Flexibility in
Flexible Job Shop Scheduling Problems
|
cs.NE
|
In Production Scheduling, the Flexible Job Shop Scheduling Problem (FJSSP)
aims to optimize a sequence of operations and assign each to an eligible
machine with varying processing times. For integration of the workforce, each
machine also requires a worker to be present to process an operation which
additionally affects the processing times. The resulting problem is called
Flexible Job Shop Scheduling Problem with Worker Flexibility (FJSSP-W). The
FJSSP has been approached with various problem representations, including Mixed
Integer Linear Programming (MILP), Constrained Programming (CP), and
Simulation-based Optimization (SBO). In the latter area in particular, there
exists a large number of specialized Evolutionary Algorithms (EA) like Particle
Swarm Optimization (PSO) or Genetic Algorithms (GA). Yet, the solvers are often
developed for single use cases only, and validated on a few selected test
instances, let alone compared with results from solvers using other problem
representations. While suitable approaches do also exist, the design of the
FJSSP-W instances is not standardized and the algorithms are hardly comparable.
This calls for a systematic benchmarking environment that provides a
comprehensive set of FJSSP(-W) instances and supports targeted algorithm
development. It will facilitate the comparison of algorithmic performance in
the face of different problem characteristics. The present paper presents a
collection of 402 commonly accepted FJSSP instances and proposes an approach to
extend these with worker flexibility. In addition, we present a detailed
procedure for the evaluation of scheduling algorithms on these problem sets and
provide suitable model representations for this purpose. We provide complexity
characteristics for all presented instances as well as baseline results of
common commercial solvers to facilitate the validation of new algorithmic
developments.
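One minimal way to represent an FJSSP-W instance is to attach a processing time to each eligible (machine, worker) pair per operation; the format below is a hypothetical sketch, not the standardized representation the paper proposes:

```python
# Each job is an ordered list of operations; each operation maps its
# eligible (machine, worker) pairs to processing times (hypothetical data).
instance = {
    "jobs": [
        # job 0: two operations in sequence
        [{("M1", "W1"): 4, ("M2", "W1"): 6},
         {("M1", "W2"): 3}],
        # job 1: a single operation
        [{("M2", "W2"): 5}],
    ],
}

def processing_time(instance, job, op, machine, worker):
    """Return the processing time for an eligible assignment, else None."""
    return instance["jobs"][job][op].get((machine, worker))
```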
|
2501.16164
|
MetaDecorator: Generating Immersive Virtual Tours through Multimodality
|
cs.HC cs.AI cs.ET cs.MM
|
MetaDecorator is a framework that empowers users to personalize virtual
spaces. By leveraging text-driven prompts and image synthesis techniques,
MetaDecorator adorns static panoramas captured by 360° imaging devices,
transforming them into uniquely styled and visually appealing environments.
This significantly enhances the realism and engagement of virtual tours
compared to traditional offerings. Beyond the core framework, we also discuss
the integration of Large Language Models (LLMs) and haptics in the VR
application to provide a more immersive experience.
|
2501.16167
|
A Dynamic Similarity Index for Assessing Voltage Source Behaviour in
Power Systems
|
eess.SY cs.SY
|
Due to the fundamental transition to a power electronic dominated power
system, the increasing diversity of dynamic elements underscores the need to
assess their similarity to mature electrical engineering models. This article
addresses the concept of the Dynamic Similarity Index (DSI) for its use in,
power electronics-dominated networks. The DSI is a multipurpose tool developed
to be used by different stakeholders (e.g., converter manufacturers and system
operators). Such an index is calculated per frequency, which serves to
anticipate potential differences in particular frequency ranges of interest
between the model under study and the reference model. Within the scope of this
study, the dynamic similarity of inverter-based generators to an ideal voltage
source behind an impedance is assessed, due to the relevance of this
fundamental circuit in the representation of generation units in power system
studies. The article presents two potential applications based on this
mathematical framework. First, it allows manufacturers to evaluate control
performance against a reference model; second, it enables operators to
diagnose buses with voltage vulnerability based on a user-defined reference
Short-Circuit Ratio (SCR) value. The DSI results for these two case studies are
validated using Matlab Simulink simulations.
|
2501.16168
|
Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity
|
cs.LG cs.DC math.OC stat.ML
|
Asynchronous Stochastic Gradient Descent (Asynchronous SGD) is a cornerstone
method for parallelizing learning in distributed machine learning. However, its
performance suffers under arbitrarily heterogeneous computation times across
workers, leading to suboptimal time complexity and inefficiency as the number
of workers scales. While several Asynchronous SGD variants have been proposed,
recent findings by Tyurin & Richtárik (NeurIPS 2023) reveal that none achieve
optimal time complexity, leaving a significant gap in the literature. In this
paper, we propose Ringmaster ASGD, a novel Asynchronous SGD method designed to
address these limitations and tame the inherent challenges of Asynchronous SGD.
We establish, through rigorous theoretical analysis, that Ringmaster ASGD
achieves optimal time complexity under arbitrarily heterogeneous and
dynamically fluctuating worker computation times. This makes it the first
Asynchronous SGD method to meet the theoretical lower bounds for time
complexity in such scenarios.
|
2501.16171
|
Separate This, and All of these Things Around It: Music Source
Separation via Hyperellipsoidal Queries
|
eess.AS cs.IR cs.LG cs.SD
|
Music source separation is an audio-to-audio retrieval task of extracting one
or more constituent components, or composites thereof, from a musical audio
mixture. Each of these constituent components is often referred to as a "stem"
in literature. Historically, music source separation has been dominated by a
stem-based paradigm, leading to most state-of-the-art systems being either a
collection of single-stem extraction models, or a tightly coupled system with a
fixed, difficult-to-modify, set of supported stems. Combined with the limited
data availability, advances in music source separation have thus been mostly
limited to the "VDBO" set of stems: \textit{vocals}, \textit{drum},
\textit{bass}, and the catch-all \textit{others}. Recent work in music source
separation has begun to challenge the fixed-stem paradigm, moving towards
models able to extract any musical sound as long as this target type of sound
could be specified to the model as an additional query input. We generalize
this idea to a \textit{query-by-region} source separation system, specifying
the target based on the query regardless of how many sound sources or which
sound classes are contained within it. To do so, we propose the use of
hyperellipsoidal regions as queries to allow for an intuitive yet easily
parametrizable approach to specifying both the target (location) as well as its
spread. Evaluation of the proposed system on the MoisesDB dataset demonstrated
state-of-the-art performance of the proposed system both in terms of
signal-to-noise ratios and retrieval metrics.
|
2501.16173
|
Will Systems of LLM Agents Cooperate: An Investigation into a Social
Dilemma
|
cs.MA cs.GT
|
As autonomous agents become more prevalent, understanding their collective
behaviour in strategic interactions is crucial. This study investigates the
emergent cooperative tendencies of systems of Large Language Model (LLM) agents
in a social dilemma. Unlike previous research where LLMs output individual
actions, we prompt state-of-the-art LLMs to generate complete strategies for
iterated Prisoner's Dilemma. Using evolutionary game theory, we simulate
populations of agents with different strategic dispositions (aggressive,
cooperative, or neutral) and observe their evolutionary dynamics. Our findings
reveal that different LLMs exhibit distinct biases affecting the relative
success of aggressive versus cooperative strategies. This research provides
insights into the potential long-term behaviour of systems of deployed
LLM-based autonomous agents and highlights the importance of carefully
considering the strategic environments in which they operate.
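Evolutionary dynamics over a population of strategy types are commonly simulated with replicator dynamics; a minimal sketch using a standard Prisoner's Dilemma payoff matrix (the LLM-generated strategies themselves are not modeled here):

```python
import numpy as np

def replicator_step(x, payoff, dt=0.1):
    """One Euler step of replicator dynamics:
    dx_i/dt = x_i * ((A x)_i - x^T A x)."""
    fitness = payoff @ x
    avg = x @ fitness
    return x + dt * x * (fitness - avg)

# Row player's payoff for (Cooperate, Defect) in a Prisoner's Dilemma
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = np.array([0.5, 0.5])  # initial population shares
for _ in range(500):
    x = replicator_step(x, A)
```

In the unmodified one-shot game, defection strictly dominates and takes over the population; the distinct biases the paper reports arise because iterated-game strategies generated by different LLMs shift these payoffs.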
|
2501.16174
|
Measuring Heterogeneity in Machine Learning with Distributed Energy
Distance
|
stat.ML cs.AI cs.DC cs.LG
|
In distributed and federated learning, heterogeneity across data sources
remains a major obstacle to effective model aggregation and convergence. We
focus on feature heterogeneity and introduce energy distance as a sensitive
measure for quantifying distributional discrepancies. While we show that energy
distance is robust for detecting data distribution shifts, its direct use in
large-scale systems can be prohibitively expensive. To address this, we develop
Taylor approximations that preserve key theoretical quantitative properties
while reducing computational overhead. Through simulation studies, we show how
accurately capturing feature discrepancies boosts convergence in distributed
learning. Finally, we propose a novel application of energy distance to assign
penalty weights for aligning predictions across heterogeneous nodes, ultimately
enhancing coordination in federated and distributed settings.
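The empirical energy distance between two samples has a closed form; a direct O(n·m) sketch of the quantity whose cost the paper's Taylor approximations reduce:

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between samples x (n, d) and y (m, d):
    2 E||X-Y|| - E||X-X'|| - E||Y-Y'||, estimated over all pairs."""
    def mean_pair_dist(a, b):
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff**2).sum(-1)).mean()
    return 2 * mean_pair_dist(x, y) - mean_pair_dist(x, x) - mean_pair_dist(y, y)
```

It is zero for identical samples and grows with distributional shift, which is what makes it usable as a penalty weight across heterogeneous nodes.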
|
2501.16177
|
BAG: Body-Aligned 3D Wearable Asset Generation
|
cs.CV cs.AI cs.GR
|
While recent advancements have shown remarkable progress in general 3D shape
generation models, the challenge of leveraging these approaches to
automatically generate wearable 3D assets remains unexplored. To this end, we
present BAG, a Body-aligned Asset Generation method to output 3D wearable assets
that can be automatically dressed on given 3D human bodies. This is achieved by
controlling the 3D generation process using human body shape and pose
information. Specifically, we first build a general single-image to consistent
multiview image diffusion model, and train it on the large Objaverse dataset to
achieve diversity and generalizability. Then we train a ControlNet to guide the
multiview generator to produce body-aligned multiview images. The control
signal utilizes the multiview 2D projections of the target human body, where
pixel values represent the XYZ coordinates of the body surface in a canonical
space. The body-conditioned multiview diffusion generates body-aligned
multiview images, which are then fed into a native 3D diffusion model to
produce the 3D shape of the asset. Finally, by recovering the similarity
transformation using multiview silhouette supervision and addressing asset-body
penetration with physics simulators, the 3D asset can be accurately fitted onto
the target human body. Experimental results demonstrate significant advantages
over existing methods in terms of image prompt-following capability, shape
diversity, and shape quality. Our project page is available at
https://bag-3d.github.io/.
|
2501.16178
|
SWIFT: Mapping Sub-series with Wavelet Decomposition Improves Time
Series Forecasting
|
cs.LG stat.ML
|
In recent work on time-series prediction, Transformers and even large
language models have garnered significant attention due to their strong
capabilities in sequence modeling. However, in practical deployments,
time-series prediction often requires operation in resource-constrained
environments, such as edge devices, which are unable to handle the
computational overhead of large models. To address such scenarios, some
lightweight models have been proposed, but they exhibit poor performance on
non-stationary sequences. In this paper, we propose $\textit{SWIFT}$, a
lightweight model that is not only powerful, but also efficient in deployment
and inference for Long-term Time Series Forecasting (LTSF). Our model is based
on three key points: (i) Utilizing wavelet transform to perform lossless
downsampling of time series. (ii) Achieving cross-band information fusion with
a learnable filter. (iii) Using only one shared linear layer or one shallow MLP
for sub-series mapping. We conduct comprehensive experiments, and the results
show that $\textit{SWIFT}$ achieves state-of-the-art (SOTA) performance on
multiple datasets, offering a promising method for edge computing and
deployment in this task. Moreover, it is noteworthy that the number of
parameters in $\textit{SWIFT-Linear}$ is only 25\% of what it would be with a
single-layer linear model for time-domain prediction. Our code is available at
https://github.com/LancelotXWX/SWIFT.
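Lossless wavelet downsampling can be illustrated with a one-level Haar transform, which splits a series into half-length low- and high-frequency sub-series and reconstructs it exactly (the specific wavelet SWIFT uses is an assumption here):

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: lossless split of a length-2n series
    into approximation (low-band) and detail (high-band) sub-series."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-frequency band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-frequency band
    return a, d

def haar_reconstruct(a, d):
    """Exact inverse of haar_decompose."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```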
|
2501.16181
|
Can summarization approximate simplification? A gold standard comparison
|
cs.CL
|
This study explores the overlap between text summarization and simplification
outputs. While summarization evaluation methods are streamlined, simplification
lacks cohesion, prompting the question: how closely can abstractive
summarization resemble gold-standard simplification? We address this by
applying two BART-based BRIO summarization methods to the Newsela corpus,
comparing outputs with manually annotated simplifications and achieving a top
ROUGE-L score of 0.654. This provides insight into where summarization and
simplification outputs converge and differ.
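ROUGE-L scores a candidate against a reference via their longest common subsequence; a minimal F1 variant of the metric reported above:

```python
def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 between two token lists, via LCS length."""
    c, r = candidate, reference
    # dynamic-programming LCS table
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c, 1):
        for j, rt in enumerate(r, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ct == rt else max(dp[i-1][j], dp[i][j-1])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```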
|
2501.16182
|
The Linear Attention Resurrection in Vision Transformer
|
cs.CV cs.AI
|
Vision Transformers (ViTs) have recently taken computer vision by storm.
However, the softmax attention underlying ViTs comes with a quadratic
complexity in time and memory, hindering the application of ViTs to
high-resolution images. We revisit the attention design and propose a linear
attention method to address the limitation, which doesn't sacrifice ViT's core
advantage of capturing global representation like existing methods (e.g. local
window attention of Swin). We further investigate the key difference between
linear attention and softmax attention. Our empirical results suggest that
linear attention lacks a fundamental property of concentrating the distribution
of the attention matrix. Inspired by this observation, we introduce a local
concentration module to enhance linear attention. By incorporating enhanced
linear global attention and local window attention, we propose a new ViT
architecture, dubbed L$^2$ViT. Notably, L$^2$ViT can effectively capture both
global interactions and local representations while enjoying linear
computational complexity. Extensive experiments demonstrate the strong
performance of L$^2$ViT. On image classification, L$^2$ViT achieves 84.4% Top-1
accuracy on ImageNet-1K without any extra training data or label. By further
pre-training on ImageNet-22k, it attains 87.0% when fine-tuned with resolution
384$^2$. For downstream tasks, L$^2$ViT delivers favorable performance as a
backbone on object detection as well as semantic segmentation.
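The complexity gap the abstract describes comes from reassociating the attention product. A minimal NumPy sketch follows; the kernel feature map `phi` is a common choice from the linear-attention literature, not necessarily the one used in L$^2$ViT.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: the N x N score matrix costs O(N^2)."""
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized attention: computing phi(Q) (phi(K)^T V) instead of
    (phi(Q) phi(K)^T) V costs O(N d^2), linear in sequence length N."""
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V               # d x d_v, independent of N
    Z = Qp @ Kp.sum(axis=0)     # per-query normalizer
    return (Qp @ KV) / Z[:, None]
```

Because the row-wise softmax is replaced by a factored non-negative kernel, the attention weights no longer concentrate sharply, which is the property the paper's local concentration module is designed to restore.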
|
2501.16184
|
Cryptographic Compression
|
cs.CR cs.IT math.IT
|
We introduce a protocol called ENCORE which simultaneously compresses and
encrypts data in a one-pass process that can be implemented efficiently and
possesses a number of desirable features as a streaming encoder/decoder.
Motivated by the observation that both lossless compression and encryption
consist of performing an invertible transformation whose output is close to a
uniform distribution over bit streams, we show that these can be done
simultaneously, at least for ``typical'' data with a stable distribution, i.e.,
approximated reasonably well by the output of a Markov model. The strategy is
to transform the data into a dyadic distribution whose Huffman encoding is
close to uniform, and then store the transformations made to said data in a
compressed secondary stream interwoven into the first with a user-defined
encryption protocol. The result is an encoding which we show exhibits a
modified version of Yao's ``next-bit test'' while requiring many fewer bits of
entropy than standard encryption. Numerous open questions remain, particularly
regarding results that we suspect can be strengthened considerably.
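A toy illustration of the core observation, assuming a source that is already dyadic (the protocol's transformation step, which makes real data approximately dyadic, is omitted here):

```python
import random

# Canonical Huffman code for a dyadic source: probabilities
# 1/2, 1/4, 1/8, 1/8 get codeword lengths 1, 2, 3, 3.
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
PROBS = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

random.seed(0)
symbols = random.choices(list(PROBS), weights=list(PROBS.values()), k=50_000)
bits = "".join(CODE[s] for s in symbols)

# For a dyadic source this encoding emits i.i.d. fair bits: the output
# is statistically close to uniform, the property the protocol exploits.
ones_fraction = bits.count("1") / len(bits)
```

Each bit of the Huffman output is a fair coin flip for this source, so the compressed stream is already close to the uniform bit streams that encryption aims for.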
|
2501.16186
|
Learn to Optimize Resource Allocation under QoS Constraint of AR
|
cs.LG
|
This paper studies the uplink and downlink power allocation for interactive
augmented reality (AR) services, where live video captured by an AR device is
uploaded to the network edge and then the augmented video is subsequently
downloaded. By modeling the AR transmission process as a tandem queuing system,
we derive an upper bound for the probabilistic quality of service (QoS)
requirement concerning end-to-end latency and reliability. The resource
allocation with the QoS constraints results in a functional optimization
problem. To address it, we design a deep neural network to learn the power
allocation policy, leveraging the structure of optimal power allocation to
enhance learning performance. Simulation results demonstrate that the proposed
method effectively reduces transmit powers while meeting the QoS requirement.
|
2501.16191
|
Raiders of the Lost Dependency: Fixing Dependency Conflicts in Python
using LLMs
|
cs.SE cs.AI
|
Fixing Python dependency issues is a tedious and error-prone task for
developers, who must manually identify and resolve environment dependencies and
version constraints of third-party modules and Python interpreters. Researchers
have attempted to automate this process by relying on large knowledge graphs
and database lookup tables. However, these traditional approaches face
limitations due to the variety of dependency error types, large sets of
possible module versions, and conflicts among transitive dependencies. This
study explores the potential of using large language models (LLMs) to
automatically fix dependency issues in Python programs. We introduce PLLM
(pronounced "plum"), a novel technique that employs retrieval-augmented
generation (RAG) to help an LLM infer Python versions and required modules for
a given Python file. PLLM builds a testing environment that iteratively (1)
prompts the LLM for module combinations, (2) tests the suggested changes, and
(3) provides feedback (error messages) to the LLM to refine the fix. This
feedback cycle leverages natural language processing (NLP) to intelligently
parse and interpret build error messages. We benchmark PLLM on the Gistable
HG2.9K dataset, a collection of challenging single-file Python gists. We
compare PLLM against two state-of-the-art automatic dependency inference
approaches, namely PyEGo and ReadPyE, w.r.t. the ability to resolve dependency
issues. Our results indicate that PLLM can fix more dependency issues than the
two baselines, with +218 (+15.97%) more fixes over ReadPyE and +281 (+21.58%)
over PyEGo. Our deeper analyses suggest that PLLM is particularly beneficial
for projects with many dependencies and for specific third-party numerical and
machine-learning modules. Our findings demonstrate the potential of LLM-based
approaches to iteratively resolve Python dependency issues.
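The three-step feedback cycle can be sketched as below. This is a hypothetical reconstruction: `ask_llm` stands in for the RAG-backed LLM call and `run_in_env` for the containerized build-and-test step; neither name comes from the PLLM paper.

```python
def fix_dependencies(source, ask_llm, run_in_env, max_rounds=5):
    """Iterate: (1) prompt for module combinations, (2) test the
    suggestion, (3) feed parsed error messages back into the prompt."""
    feedback = None
    for _ in range(max_rounds):
        candidate = ask_llm(source, feedback)          # (1) suggest versions
        ok, error_log = run_in_env(source, candidate)  # (2) test the fix
        if ok:
            return candidate
        feedback = error_log                           # (3) refine next round
    return None
```

The loop terminates either with a working module combination or after a bounded number of rounds, mirroring the iterative refinement described above.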
|
2501.16193
|
Posting Patterns of Members of Parental Subreddits
|
cs.SI
|
Online forums (e.g., Reddit) are used by many parents to discuss their
challenges, needs, and receive support. While studies have investigated the
contents of posts made to popular parental subreddits revealing the family
health concerns being expressed, little is known about parents' posting
patterns or other issues they engage in. In this study, we explore the posting
activity of users of 55 parental subreddits. Exploring posts made by these
users (667K) across Reddit (34M posts) reveals that over 85% of posters are not
one-time users of Reddit and actively engage with the community. Studying
cross-posting patterns also reveals the use of subreddits dedicated to other
topics such as relationship and health advice (e.g., r/AskDocs,
r/relationship_advice) by this population. As a result, for a comprehensive
understanding of the type of information posters share and seek, future work
should investigate sub-communities outside of parental-specific ones. Finally,
we expand the list of parental subreddits, compiling a total of 115 subreddits
that could be utilized in future studies of parental concerns.
|
2501.16201
|
Enhancing and Exploring Mild Cognitive Impairment Detection with
W2V-BERT-2.0
|
eess.AS cs.CL cs.SD
|
This study explores a multi-lingual audio self-supervised learning model for
detecting mild cognitive impairment (MCI) using the TAUKADIAL cross-lingual
dataset. While speech transcription-based detection with BERT models is
effective, limitations exist due to a lack of transcriptions and temporal
information. To address these issues, the study utilizes features directly from
speech utterances with W2V-BERT-2.0. We propose a visualization method to
detect essential layers of the model for MCI classification and design a
specific inference logic considering the characteristics of MCI. The experiment
shows competitive results, and the proposed inference logic significantly
contributes to the improvements from the baseline. We also conduct detailed
analysis which reveals the challenges related to speaker bias in the features
and the sensitivity of MCI classification accuracy to the data split, providing
valuable insights for future research.
|
2501.16207
|
From Informal to Formal -- Incorporating and Evaluating LLMs on Natural
Language Requirements to Verifiable Formal Proofs
|
cs.AI cs.CL cs.PL
|
Research in AI-based formal mathematical reasoning has shown an unstoppable
growth trend. These studies have excelled in mathematical competitions
like IMO and have made significant progress. This paper focuses on formal
verification, an immediate application scenario of formal reasoning, and breaks
it down into sub-tasks. We constructed 18k high-quality instruction-response
pairs across five formal specification languages (Coq, Lean4, Dafny, ACSL, and
TLA+) by distilling GPT-4o and evaluated ten open-source LLMs on them,
including the recently popular DeepSeek-R1. We also fine-tuned several 7-8B
small models to achieve performance comparable to DeepSeek-R1-671B.
Interestingly,
we observed that fine-tuning with formal data also enhances mathematics,
reasoning, and coding capabilities. Fine-tuned models are released at
https://huggingface.co/fm-universe.
|
2501.16209
|
Solving Turbulent Rayleigh-B\'enard Convection using Fourier Neural
Operators
|
physics.flu-dyn cs.LG
|
We train Fourier Neural Operator (FNO) surrogate models for Rayleigh-B\'enard
Convection (RBC), a model for convection processes that occur in nature and
industrial settings. We compare the prediction accuracy and model properties of
FNO surrogates to two popular surrogates used in fluid dynamics: the Dynamic
Mode Decomposition (DMD) and the Linearly-Recurrent Autoencoder Network
(LRAN). We regard
Direct Numerical Simulations (DNS) of the RBC equations as the ground truth on
which the models are trained and evaluated in different settings. The FNO
performs favorably when compared to the DMD and LRAN and its predictions are
fast and highly accurate for this task. Additionally, we show its zero-shot
super-resolution ability for the convection dynamics. The FNO model has a high
potential to be used in downstream tasks such as flow control in RBC.
|
2501.16210
|
New Frontiers in Fighting Misinformation
|
cs.SI
|
Despite extensive research and development of tools and technologies for
misinformation tracking and detection, we often find ourselves largely on the
losing side of the battle against misinformation. In an era where
misinformation poses a substantial threat to public discourse, trust in
information sources, and societal and political stability, it is imperative
that we regularly revisit and reorient our work strategies. While we have made
significant strides in understanding how and why misinformation spreads, we
must now broaden our focus and explore how technology can help realise new
approaches to address this complex challenge more efficiently.
|
2501.16211
|
UDBE: Unsupervised Diffusion-based Brightness Enhancement in Underwater
Images
|
cs.CV cs.AI eess.IV
|
Activities in underwater environments are paramount in several scenarios,
which drives the continuous development of underwater image enhancement
techniques. A major challenge in this domain is the depth at which images are
captured, with increasing depth resulting in a darker environment. Most
existing methods for underwater image enhancement focus on noise removal and
color adjustment, with few works dedicated to brightness enhancement. This work
introduces a novel unsupervised learning approach to underwater image
enhancement using a diffusion model. Our method, called UDBE, is based on
conditional diffusion to maintain the brightness details of the unpaired input
images. The input image is combined with a color map and a Signal-Noise
Relation map (SNR) to ensure stable training and prevent color distortion in
the output images. The results demonstrate that our approach achieves an
impressive accuracy rate in the datasets UIEB, SUIM and RUIE, well-established
underwater image benchmarks. Additionally, the experiments validate the
robustness of our approach, regarding the image quality metrics PSNR, SSIM,
UIQM, and UISM, indicating the good performance of the brightness enhancement
process. The source code is available here: https://github.com/gusanagy/UDBE.
|
2501.16212
|
An FPGA-Based Neuro-Fuzzy Sensor for Personalized Driving Assistance
|
cs.RO cs.LG
|
Advanced driving-assistance systems (ADAS) are intended to automatize driver
tasks, as well as improve driving and vehicle safety. This work proposes an
intelligent neuro-fuzzy sensor for driving style (DS) recognition, suitable for
ADAS enhancement. The development of the driving style intelligent sensor uses
naturalistic driving data from the SHRP2 study, which includes data from a CAN
bus, inertial measurement unit, and front radar. The system has been
successfully implemented using a field-programmable gate array (FPGA) device of
the Xilinx Zynq programmable system-on-chip (PSoC). It can mimic the typical
timing parameters of a group of drivers as well as tune these typical
parameters to model individual DSs. The neuro-fuzzy intelligent sensor provides
high-speed real-time active ADAS implementation and is able to personalize its
behavior into safe margins without driver intervention. In particular, the
personalization procedure of the time headway (THW) parameter for an ACC in
steady car following was developed, achieving a performance of 0.53
microseconds. This performance fulfilled the requirements of cutting-edge
active ADAS specifications.
|
2501.16214
|
Provence: efficient and robust context pruning for retrieval-augmented
generation
|
cs.CL cs.IR
|
Retrieval-augmented generation improves various aspects of large language
models (LLMs) generation, but suffers from computational overhead caused by
long contexts as well as the propagation of irrelevant retrieved information
into generated responses. Context pruning deals with both aspects, by removing
irrelevant parts of retrieved contexts before LLM generation. Existing context
pruning approaches are however limited, and do not provide a universal model
that would be both efficient and robust in a wide range of scenarios, e.g.,
when contexts contain a variable amount of relevant information or vary in
length, or when evaluated on various domains. In this work, we close this gap
and introduce Provence (Pruning and Reranking Of retrieVEd relevaNt ContExts),
an efficient and robust context pruner for Question Answering, which
dynamically detects the needed amount of pruning for a given context and can be
used out-of-the-box for various domains. The three key ingredients of Provence
are formulating the context pruning task as sequence labeling, unifying context
pruning capabilities with context reranking, and training on diverse data. Our
experimental results show that Provence enables context pruning with negligible
to no drop in performance, in various domains and settings, at almost no cost
in a standard RAG pipeline. We also conduct a deeper analysis alongside various
ablations to provide insights into training context pruners for future work.
|
2501.16215
|
Enhancing Visual Inspection Capability of Multi-Modal Large Language
Models on Medical Time Series with Supportive Conformalized and Interpretable
Small Specialized Models
|
cs.AI cs.LG eess.SP
|
Large language models (LLMs) exhibit remarkable capabilities in visual
inspection of medical time-series data, achieving proficiency comparable to
human clinicians. However, their broad scope limits domain-specific precision,
and proprietary weights hinder fine-tuning for specialized datasets. In
contrast, small specialized models (SSMs) excel in targeted tasks but lack the
contextual reasoning required for complex clinical decision-making. To address
these challenges, we propose ConMIL (Conformalized Multiple Instance Learning),
a decision-support SSM that integrates seamlessly with LLMs. By using Multiple
Instance Learning (MIL) to identify clinically significant signal segments and
conformal prediction for calibrated set-valued outputs, ConMIL enhances LLMs'
interpretative capabilities for medical time-series analysis. Experimental
results demonstrate that ConMIL significantly improves the performance of
state-of-the-art LLMs, such as ChatGPT4.0 and Qwen2-VL-7B. Specifically,
ConMIL-supported Qwen2-VL-7B achieves 94.92% and 96.82% precision for
confident samples in arrhythmia detection and sleep staging, compared to
standalone LLM accuracy of 46.13% and 13.16%. These findings highlight the
potential of ConMIL to bridge task-specific precision and broader contextual
reasoning, enabling more reliable and interpretable AI-driven clinical decision
support.
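ConMIL's calibrated set-valued outputs rest on split conformal prediction; a generic sketch follows (the nonconformity score used here is the standard 1 minus the true-class probability, not necessarily the paper's exact choice).

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: calibrate a nonconformity threshold
    on held-out data so that prediction sets cover the true class with
    probability at least 1 - alpha."""
    n = len(cal_labels)
    scores = 1 - cal_probs[np.arange(n), cal_labels]  # 1 - p(true class)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # A class enters the set when its nonconformity is within the threshold.
    return [np.where(1 - p <= q)[0] for p in test_probs]
```

Singleton sets correspond to the "confident samples" reported above; larger sets flag cases the downstream LLM should treat with more caution.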
|
2501.16218
|
Active Hypothesis Testing for Quantum Detection of Phase-Shift Keying
Coherent States
|
quant-ph cs.IT cs.SY eess.SY math.IT
|
This paper explores the quantum detection of Phase-Shift Keying (PSK)-coded
coherent states through the lens of active hypothesis testing, focusing on a
Dolinar-like receiver with constraints on displacement amplitude and energy.
With coherent state slicing, we formulate the problem as a controlled sensing
task in which observation kernels have parameters shrinking with sample size.
The constrained open-loop error exponent and a corresponding upper bound on the
Bayesian error probability are proven. Surprisingly, the exponent-optimal
open-loop policy for binary PSK with high dark counts is not simply
time-sharing. This work serves as a first step towards obtaining analytical
insights through the active hypothesis testing framework for designing
resource-constrained quantum communication receivers.
|
2501.16220
|
DBRouting: Routing End User Queries to Databases for Answerability
|
cs.CL
|
Enterprise level data is often distributed across multiple sources and
identifying the correct set-of data-sources with relevant information for a
knowledge request is a fundamental challenge. In this work, we define the novel
task of routing an end-user query to the appropriate data-source, where the
data-sources are databases. We synthesize datasets by extending existing
datasets designed for NL-to-SQL semantic parsing. We create baselines on these
datasets by using open-source LLMs, using both pre-trained and task specific
embeddings fine-tuned using the training data. With these baselines we
demonstrate that open-source LLMs perform better than embedding based approach,
but suffer from token length limitations. Embedding based approaches benefit
from task specific fine-tuning, more so when there is availability of data in
terms of database specific questions for training. We further find that the
task becomes more difficult (i) with an increase in the number of data-sources,
(ii) with data-sources closer in terms of their domains, (iii) with databases
lacking the external domain knowledge required to interpret their entities
and (iv) with ambiguous and complex queries requiring more fine-grained
understanding of the data-sources or logical reasoning for routing to an
appropriate source. This calls for the need for developing more sophisticated
solutions to better address the task.
|
2501.16221
|
Automatic Calibration of a Multi-Camera System with Limited Overlapping
Fields of View for 3D Surgical Scene Reconstruction
|
cs.CV
|
The purpose of this study is to develop an automated and accurate external
camera calibration method for multi-camera systems used in 3D surgical scene
reconstruction (3D-SSR), eliminating the need for operator intervention or
specialized expertise. The method specifically addresses the problem of limited
overlapping fields of view caused by significant variations in optical zoom
levels and camera locations. We contribute a novel, fast, and fully automatic
calibration method based on the projection of multi-scale markers (MSMs) using
a ceiling-mounted projector. MSMs consist of 2D patterns projected at varying
scales, ensuring accurate extraction of well-distributed point correspondences
across significantly different viewpoints and zoom levels. Validation is
performed using both synthetic and real data captured in a mock-up OR, with
comparisons to traditional manual marker-based methods as well as markerless
calibration methods. The method achieves accuracy comparable to manual,
operator-dependent calibration methods while exhibiting higher robustness under
conditions of significant differences in zoom levels. Additionally, we show
that state-of-the-art Structure-from-Motion (SfM) pipelines are ineffective in
3D-SSR settings, even when additional texture is projected onto the OR floor.
The use of a ceiling-mounted entry-level projector proves to be an effective
alternative to operator-dependent, traditional marker-based methods, paving the
way for fully automated 3D-SSR.
|
2501.16222
|
SPECIAL: Zero-shot Hyperspectral Image Classification With CLIP
|
cs.CV
|
Hyperspectral image (HSI) classification aims at categorizing each pixel in
an HSI into a specific land cover class, which is crucial for applications like
remote sensing, environmental monitoring, and agriculture. Although deep
learning-based HSI classification methods have achieved significant
advancements, existing methods still rely on manually labeled data for
training, which is both time-consuming and labor-intensive. To address this
limitation, we introduce a novel zero-shot hyperspectral image classification
framework based on CLIP (SPECIAL), aiming to eliminate the need for manual
annotations. The SPECIAL framework consists of two main stages: (1) CLIP-based
pseudo-label generation, and (2) noisy label learning. In the first stage, HSI
is spectrally interpolated to produce RGB bands. These bands are subsequently
classified using CLIP, resulting in noisy pseudo-labels that are accompanied by
confidence scores. To improve the quality of these labels, we propose a scaling
strategy that fuses predictions from multiple spatial scales. In the second
stage, spectral information and a label refinement technique are incorporated
to mitigate label noise and further enhance classification accuracy.
Experimental results on three benchmark datasets demonstrate that our SPECIAL
outperforms existing methods in zero-shot HSI classification, showing its
potential for more practical applications. The code is available at
https://github.com/LiPang/SPECIAL.
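The first stage's spectral interpolation to RGB can be sketched as plain linear interpolation at nominal R, G, B wavelengths; the band centers and wavelengths below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hsi_to_rgb(cube, band_nm, rgb_nm=(650.0, 550.0, 450.0)):
    """Linearly interpolate an HSI cube (H, W, B) along the spectral
    axis at three nominal R, G, B wavelengths."""
    out = np.empty(cube.shape[:2] + (3,))
    for i, nm in enumerate(rgb_nm):
        j = np.clip(np.searchsorted(band_nm, nm), 1, len(band_nm) - 1)
        t = (nm - band_nm[j - 1]) / (band_nm[j] - band_nm[j - 1])
        out[..., i] = (1 - t) * cube[..., j - 1] + t * cube[..., j]
    return out

bands = np.linspace(400.0, 700.0, 31)   # assumed band centers in nm
cube = np.full((4, 4, 31), 0.7)         # flat spectra -> flat RGB
rgb = hsi_to_rgb(cube, bands)
```

The resulting three-band image is what gets fed to CLIP to produce the noisy pseudo-labels in stage one.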
|
2501.16224
|
Language-Based Bayesian Optimization Research Assistant (BORA)
|
cs.LG cs.AI
|
Many important scientific problems involve multivariate optimization coupled
with slow and laborious experimental measurements. These complex,
high-dimensional searches can be defined by non-convex optimization landscapes
that resemble needle-in-a-haystack surfaces, leading to entrapment in local
minima. Contextualizing optimizers with human domain knowledge is a powerful
approach to guide searches to localized fruitful regions. However, this
approach is susceptible to human confirmation bias and it is also challenging
for domain experts to keep track of the rapidly expanding scientific
literature. Here, we propose the use of Large Language Models (LLMs) for
contextualizing Bayesian optimization (BO) via a hybrid optimization framework
that intelligently and economically blends stochastic inference with domain
knowledge-based insights from the LLM, which is used to suggest new,
better-performing areas of the search space for exploration. Our method fosters
user engagement by offering real-time commentary on the optimization progress,
explaining the reasoning behind the search strategies. We validate the
effectiveness of our approach on synthetic benchmarks with up to 15 independent
variables and demonstrate the ability of LLMs to reason in four real-world
experimental tasks where context-aware suggestions boost optimization
performance substantially.
|
2501.16226
|
The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
|
stat.ML cond-mat.dis-nn cs.LG
|
Self-distillation (SD), a technique where a model refines itself from its own
predictions, has garnered attention as a simple yet powerful approach in
machine learning. Despite its widespread use, the mechanisms underlying its
effectiveness remain unclear. In this study, we investigate the efficacy of
hyperparameter-tuned multi-stage SD in binary classification tasks with noisy
labeled Gaussian mixture data, utilizing replica theory. Our findings reveal
that the primary driver of SD's performance improvement is denoising through
hard pseudo-labels, with the most notable gains observed in moderately sized
datasets. We also demonstrate the efficacy of practical heuristics, such as
early stopping for extracting meaningful signal and bias fixation for
imbalanced data. These results provide both theoretical guarantees and
practical insights, advancing our understanding and application of SD in noisy
settings.
|
2501.16227
|
PDC-ViT : Source Camera Identification using Pixel Difference
Convolution and Vision Transformer
|
cs.CV
|
Source camera identification has emerged as a vital solution to unlock
incidents involving critical cases like terrorism, violence, and other criminal
activities. The ability to trace the origin of an image/video can aid law
enforcement agencies in gathering evidence and constructing the timeline of
events. Moreover, identifying the owner of a certain device narrows down the
area of search in a criminal investigation where smartphone devices are
involved. This paper proposes a new pixel-based method for source camera
identification, integrating Pixel Difference Convolution (PDC) with a Vision
Transformer network (ViT), named PDC-ViT. The PDC acts as the backbone for
feature extraction, exploiting Angular PDC (APDC) and Radial PDC (RPDC). These
techniques enhance the capability to capture subtle variations in
pixel information, which are crucial for distinguishing between different
source cameras. The second part of the methodology focuses on classification,
which is based on a Vision Transformer network. Unlike traditional methods that
utilize image patches directly for training the classification network, the
proposed approach uniquely inputs PDC features into the Vision Transformer
network. To demonstrate the effectiveness of the PDC-ViT approach, it has been
assessed on five different datasets, which include various image contents and
video scenes. The method has also been compared with state-of-the-art source
camera identification methods. Experimental results demonstrate the
effectiveness and superiority of the proposed system in terms of accuracy and
robustness when compared to its competitors. For example, our proposed PDC-ViT
has achieved an accuracy of 94.30%, 84%, 94.22% and 92.29% using the Vision
dataset, Daxing dataset, Socrates dataset and QUFVD dataset, respectively.
|
2501.16235
|
Echoes of Discord: Forecasting Hater Reactions to Counterspeech
|
cs.CL
|
Hate speech (HS) erodes the inclusiveness of online users and propagates
negativity and division. Counterspeech has been recognized as a way to mitigate
the harmful consequences. While some research has investigated the impact of
user-generated counterspeech on social media platforms, few have examined and
modeled haters' reactions toward counterspeech, despite the immediate
alteration of haters' attitudes being an important aspect of counterspeech.
This study fills the gap by analyzing the impact of counterspeech from the
hater's perspective, focusing on whether the counterspeech leads the hater to
reenter the conversation and if the reentry is hateful. We compile the Reddit
Echoes of Hate dataset (ReEco), which consists of triple-turn conversations
featuring haters' reactions, to assess the impact of counterspeech. To predict
haters' behaviors, we employ two strategies: a two-stage reaction predictor and
a three-way classifier. The linguistic analysis sheds insights on the language
of counterspeech to hate eliciting different haters' reactions. Experimental
results demonstrate that the 3-way classification model outperforms the
two-stage reaction predictor, which first predicts reentry and then determines
the reentry type. We conclude the study with an assessment showing the most
common errors identified by the best-performing model.
|
2501.16237
|
Application of Structured State Space Models to High energy physics with
locality-sensitive hashing
|
cs.LG physics.ins-det
|
Modern high-energy physics (HEP) experiments are increasingly challenged by
the vast size and complexity of their datasets, particularly regarding
large-scale point cloud processing and long sequences. In this study, to
address these challenges, we explore the application of structured state space
models (SSMs), proposing one of the first trials to integrate
locality-sensitive hashing into either a hybrid or pure Mamba model. Our
results demonstrate that
pure SSMs could serve as powerful backbones for HEP problems involving tasks
for long sequence data with local inductive bias. By integrating
locality-sensitive hashing into Mamba blocks, we achieve significant
improvements over traditional backbones in key HEP tasks, surpassing them in
inference speed and physics metrics while reducing computational overhead. In
key tests, our approach demonstrated promising results, presenting a viable
alternative to traditional transformer backbones by significantly reducing
FLOPS while maintaining robust performance.
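Locality-sensitive hashing itself is standard; a random-hyperplane sketch shows how points with similar direction end up in shared buckets, the local grouping described above (the integration into Mamba blocks is beyond this snippet).

```python
import numpy as np

def lsh_buckets(points, n_planes=8, seed=0):
    """Random-hyperplane LSH: points with high cosine similarity tend
    to land in the same bucket."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(points.shape[1], n_planes))
    bits = (points @ planes > 0).astype(int)   # one sign bit per plane
    keys = bits @ (1 << np.arange(n_planes))   # pack bits into an int key
    buckets = {}
    for i, k in enumerate(keys):
        buckets.setdefault(int(k), []).append(i)
    return buckets
```

Hashing is scale-invariant by construction (only the sign of each projection matters), so a point and any positive multiple of it always share a bucket.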
|
2501.16239
|
Distilling foundation models for robust and efficient models in digital
pathology
|
cs.CV
|
In recent years, the advent of foundation models (FM) for digital pathology
has relied heavily on scaling the pre-training datasets and the model size,
yielding large and powerful models. While it resulted in improving the
performance on diverse downstream tasks, it also introduced increased
computational cost and inference time. In this work, we explore the
distillation of a large foundation model into a smaller one, reducing the
number of parameters by several orders of magnitude. Leveraging distillation
techniques, our distilled model, H0-mini, achieves nearly comparable
performance to large FMs at a significantly reduced inference cost. It is
evaluated on several public benchmarks, achieving 3rd place on the HEST
benchmark and 5th place on the EVA benchmark. Additionally, a robustness
analysis conducted on the PLISM dataset demonstrates that our distilled model
reaches excellent robustness to variations in staining and scanning conditions,
significantly outperforming other state-of-the art models. This opens new
perspectives to design lightweight and robust models for digital pathology,
without compromising on performance.
|
2501.16241
|
Phase Transitions in Large Language Models and the $O(N)$ Model
|
cs.LG cs.CL hep-th physics.data-an
|
Large language models (LLMs) exhibit unprecedentedly rich scaling behaviors.
In physics, scaling behavior is closely related to phase transitions, critical
phenomena, and field theory. To investigate the phase transition phenomena in
LLMs, we reformulated the Transformer architecture as an $O(N)$ model. Our
study reveals two distinct phase transitions corresponding to the temperature
used in text generation and the model's parameter size, respectively. The first
phase transition enables us to estimate the internal dimension of the model,
while the second phase transition is of \textit{higher-depth} and signals the
emergence of new capabilities. As an application, the energy of the $O(N)$
model can be used to evaluate whether an LLM's parameters are sufficient to
learn the training data.
|
2501.16243
|
Accelerating Quantum Reinforcement Learning with a Quantum Natural
Policy Gradient Based Approach
|
quant-ph cs.AI stat.ML
|
We address the problem of quantum reinforcement learning (QRL) under
model-free settings with quantum oracle access to the Markov Decision Process
(MDP). This paper introduces a Quantum Natural Policy Gradient (QNPG)
algorithm, which replaces the random sampling used in classical Natural Policy
Gradient (NPG) estimators with a deterministic gradient estimation approach,
enabling seamless integration into quantum systems. While this modification
introduces a bounded bias in the estimator, the bias decays exponentially with
increasing truncation levels. This paper demonstrates that the proposed QNPG
algorithm achieves a sample complexity of
$\tilde{\mathcal{O}}(\epsilon^{-1.5})$ for queries to the quantum oracle,
significantly improving the classical lower bound of
$\tilde{\mathcal{O}}(\epsilon^{-2})$ for queries to the MDP.
|
2501.16245
|
SP-IMPact: A Framework for Static Partitioning Interference Mitigation
and Performance Analysis
|
cs.DC cs.PF cs.SY eess.SY
|
Modern embedded systems are evolving toward complex, heterogeneous
architectures to accommodate increasingly demanding applications. Driven by
SWAP-C constraints, this shift has led to consolidating multiple systems onto
single hardware platforms. Static Partitioning Hypervisors offer a promising
solution to partition hardware resources and provide spatial isolation between
critical workloads. However, shared resources like the Last-Level Cache and
system bus can introduce temporal interference between virtual machines (VMs),
negatively impacting performance and predictability. Over the past decade,
academia and industry have developed interference mitigation techniques, such
as cache partitioning and memory bandwidth reservation. However, configuring
these techniques is complex and time-consuming. Cache partitioning requires
balancing cache sections across VMs, while memory bandwidth reservation needs
tuning bandwidth budgets and periods. Testing all configurations is impractical
and often leads to suboptimal results. Moreover, understanding how these
techniques interact is limited, as their combined use can produce compounded or
conflicting effects on performance. Static analysis tools estimating worst-case
execution times offer guidance for configuring mitigation techniques but often
fail to capture the complexity of modern multi-core systems. They typically
focus on limited shared resources while neglecting others, such as IOMMUs and
interrupt controllers. To address these challenges, we present SP-IMPact, an
open-source framework for analyzing and guiding interference mitigation
configurations. SP-IMPact supports (i) cache coloring and (ii) memory bandwidth
reservation, while evaluating their interactions and cumulative impact. By
providing insights on real hardware, SP-IMPact helps optimize configurations
for mixed-criticality systems, ensuring performance and predictability.
|
2501.16246
|
CLISC: Bridging CLIP and SAM by Enhanced CAM for Unsupervised Brain
Tumor Segmentation
|
cs.CV
|
Brain tumor segmentation is important for diagnosis of the tumor, and current
deep-learning methods rely on a large set of annotated images for training,
with high annotation costs. Unsupervised segmentation promises to avoid human
annotations, but its performance is often limited. In this study, we
present a novel unsupervised segmentation approach that leverages the
capabilities of foundation models, and it consists of three main steps: (1) A
vision-language model (i.e., CLIP) is employed to obtain image-level
pseudo-labels for training a classification network. Class Activation Mapping
(CAM) is then employed to extract Regions of Interest (ROIs), where an adaptive
masking-based data augmentation is used to enhance ROI identification. (2) The
ROIs are used to generate bounding box and point prompts for the Segment
Anything Model (SAM) to obtain segmentation pseudo-labels. (3) A 3D
segmentation network is trained with the SAM-derived pseudo-labels, where
low-quality pseudo-labels are filtered out in a self-learning process based on
the similarity between the SAM's output and the network's prediction.
Evaluation on the BraTS2020 dataset demonstrates that our approach obtained an
average Dice Similarity Score (DSC) of 85.60%, outperforming five
state-of-the-art unsupervised segmentation methods by more than 10 percentage
points. Besides, our approach outperforms directly using SAM for zero-shot
inference, and its performance is close to fully supervised learning.
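The self-learning filter in step (3) can be sketched in a few lines: keep a training sample only if the SAM pseudo-label and the network's own prediction agree under the Dice Similarity Coefficient. This is an illustrative sketch assuming binary masks; the function names and the threshold value are hypothetical, not the paper's exact criterion.

```python
import numpy as np

def dice_score(a, b, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def filter_pseudo_labels(sam_masks, net_preds, threshold=0.7):
    """Keep only the indices of samples whose SAM pseudo-label agrees
    with the network's prediction (Dice above a threshold)."""
    keep = []
    for i, (m, p) in enumerate(zip(sam_masks, net_preds)):
        if dice_score(m, p) >= threshold:
            keep.append(i)
    return keep
```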
|
2501.16247
|
Zero-Shot Decision Tree Construction via Large Language Models
|
cs.LG cs.CL
|
This paper introduces a novel algorithm for constructing decision trees using
large language models (LLMs) in a zero-shot manner based on Classification and
Regression Trees (CART) principles. Traditional decision tree induction methods
rely heavily on labeled data to recursively partition data using criteria such
as information gain or the Gini index. In contrast, we propose a method that
uses the pre-trained knowledge embedded in LLMs to build decision trees without
requiring training data. Our approach leverages LLMs to perform operations
essential for decision tree construction, including attribute discretization,
probability calculation, and Gini index computation based on the probabilities.
We show that these zero-shot decision trees can outperform baseline zero-shot
methods and achieve competitive performance compared to supervised data-driven
decision trees on tabular datasets. The decision trees constructed via this
method provide transparent and interpretable models, addressing data scarcity
while preserving interpretability. This work establishes a new baseline in
low-data machine learning, offering a principled, knowledge-driven alternative
to data-driven tree construction.
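The CART-style split selection described above can be sketched as follows. This is an illustrative reconstruction: `best_attribute` and its input format are assumptions, and in the zero-shot setting the branch weights and class probabilities would be elicited from an LLM rather than estimated from labeled data.

```python
def gini(probs):
    """Gini impurity of a class-probability vector."""
    return 1.0 - sum(p * p for p in probs)

def best_attribute(attr_branch_probs):
    """Choose the split attribute that minimizes the weighted Gini index.

    attr_branch_probs: attribute -> list of (branch_weight, class_probs)
    pairs. In the zero-shot setting both quantities come from LLM
    queries instead of data-driven estimation."""
    def weighted_gini(branches):
        return sum(w * gini(p) for w, p in branches)
    return min(attr_branch_probs, key=lambda a: weighted_gini(attr_branch_probs[a]))
```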
|
2501.16249
|
Lightweight Weighted Average Ensemble Model for Pneumonia Detection in
Chest X-Ray Images
|
eess.IV cs.AI cs.CV
|
Pneumonia is a leading cause of illness and death in children, underscoring
the need for early and accurate detection. In this study, we propose a novel
lightweight ensemble model for detecting pneumonia in children using chest
X-ray images. This ensemble model integrates two pre-trained convolutional
neural networks (CNNs), MobileNetV2 and NASNetMobile, selected for their
balance of computational efficiency and accuracy. These models were fine-tuned
on a pediatric chest X-ray dataset and combined to enhance classification
performance. Our proposed ensemble model achieved a classification accuracy of
98.63%, significantly outperforming individual models such as MobileNetV2
(97.10%) and NASNetMobile (96.25%) in terms of accuracy, precision, recall, and
F1 score. Moreover, the ensemble model outperformed state-of-the-art
architectures, including ResNet50, InceptionV3, and DenseNet201, while
maintaining computational efficiency. The proposed lightweight ensemble model
presents a highly effective and resource-efficient solution for pneumonia
detection, making it particularly suitable for deployment in
resource-constrained settings.
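The weighted-average combination itself reduces to a one-liner over the two networks' softmax outputs. A minimal sketch; the probabilities and weights below are hypothetical illustrations, not the paper's fitted values.

```python
import numpy as np

def weighted_average_ensemble(prob_a, prob_b, w_a=0.5, w_b=0.5):
    """Combine two classifiers' class probabilities by weighted averaging."""
    assert abs(w_a + w_b - 1.0) < 1e-9, "weights should sum to 1"
    return w_a * np.asarray(prob_a) + w_b * np.asarray(prob_b)

# Hypothetical softmax outputs for one chest X-ray (normal vs. pneumonia).
p_mobilenet = np.array([0.30, 0.70])
p_nasnet = np.array([0.40, 0.60])
ensemble = weighted_average_ensemble(p_mobilenet, p_nasnet, 0.6, 0.4)
prediction = int(ensemble.argmax())  # index 1 corresponds to pneumonia
```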
|
2501.16250
|
Runtime Analysis of the Compact Genetic Algorithm on the LeadingOnes
Benchmark
|
cs.NE
|
The compact genetic algorithm (cGA) is one of the simplest
estimation-of-distribution algorithms (EDAs). Next to the univariate marginal
distribution algorithm (UMDA) -- another simple EDA --, the cGA has been
subject to extensive mathematical runtime analyses, often showcasing a similar
or even superior performance to competing approaches. Surprisingly though, to
date and in contrast to the UMDA and many other heuristics, we lack a
rigorous runtime analysis of the cGA on the LeadingOnes benchmark -- one of the
most studied theory benchmarks in the domain of evolutionary computation.
We fill this gap in the literature by conducting a formal runtime analysis of
the cGA on LeadingOnes. For the cGA's single parameter -- called the
hypothetical population size -- chosen at least polylogarithmically larger than the
problem size, we prove that the cGA samples the optimum of LeadingOnes with
high probability within a number of function evaluations quasi-linear in the
problem size and linear in the hypothetical population size. For the best
hypothetical population size, our result matches, up to polylogarithmic
factors, the typical quadratic runtime that many randomized search heuristics
exhibit on LeadingOnes. Our analysis exhibits some noteworthy differences in
the working principles of the two algorithms which were not visible in previous
works.
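A minimal sketch of the algorithm under analysis: the cGA samples two offspring from a frequency vector and shifts each frequency by 1/K toward the winner, with the standard margins [1/n, 1 - 1/n]. The parameter values used below are illustrative, not the ones from the theorem.

```python
import random

def leading_ones(x):
    """LeadingOnes: number of consecutive 1-bits at the start of x."""
    n = 0
    for bit in x:
        if bit != 1:
            break
        n += 1
    return n

def cga(n, K, max_evals, rng):
    """Compact GA with hypothetical population size K on a length-n
    bitstring: shift frequencies by 1/K toward the better sample,
    capped at the margins [1/n, 1 - 1/n]."""
    p = [0.5] * n
    best, best_fit, evals = None, -1, 0
    while evals < max_evals and best_fit < n:
        x = [1 if rng.random() < q else 0 for q in p]
        y = [1 if rng.random() < q else 0 for q in p]
        fx, fy = leading_ones(x), leading_ones(y)
        evals += 2
        winner, loser = (x, y) if fx >= fy else (y, x)
        if max(fx, fy) > best_fit:
            best, best_fit = winner, max(fx, fy)
        for i in range(n):
            if winner[i] != loser[i]:
                step = 1.0 / K if winner[i] == 1 else -1.0 / K
                p[i] = min(1.0 - 1.0 / n, max(1.0 / n, p[i] + step))
    return best, best_fit
```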
|
2501.16254
|
Multi-Agent Geospatial Copilots for Remote Sensing Workflows
|
cs.LG
|
We present GeoLLM-Squad, a geospatial Copilot that introduces the novel
multi-agent paradigm to remote sensing (RS) workflows. Unlike existing
single-agent approaches that rely on monolithic large language models (LLM),
GeoLLM-Squad separates agentic orchestration from geospatial task-solving, by
delegating RS tasks to specialized sub-agents. Built on the open-source AutoGen
and GeoLLM-Engine frameworks, our work enables the modular integration of
diverse applications, spanning urban monitoring, forestry protection, climate
analysis, and agriculture studies. Our results demonstrate that while
single-agent systems struggle to scale with increasing RS task complexity,
GeoLLM-Squad maintains robust performance, achieving a 17% improvement in
agentic correctness over state-of-the-art baselines. Our findings highlight the
potential of multi-agent AI in advancing RS workflows.
|
2501.16255
|
A foundation model for human-AI collaboration in medical literature
mining
|
cs.CL
|
Systematic literature review is essential for evidence-based medicine,
requiring comprehensive analysis of clinical trial publications. However, the
application of artificial intelligence (AI) models for medical literature
mining has been limited by insufficient training and evaluation across broad
therapeutic areas and diverse tasks. Here, we present LEADS, an AI foundation
model for study search, screening, and data extraction from medical literature.
The model is trained on 633,759 instruction data points in LEADSInstruct,
curated from 21,335 systematic reviews, 453,625 clinical trial publications,
and 27,015 clinical trial registries. We showed that LEADS demonstrates
consistent improvements over four cutting-edge generic large language models
(LLMs) on six tasks. Furthermore, LEADS enhances expert workflows by providing
supportive references following expert requests, streamlining processes while
maintaining high-quality results. A study with 16 clinicians and medical
researchers from 14 different institutions revealed that experts collaborating
with LEADS achieved a recall of 0.81, compared to 0.77 for experts working alone, in
study selection, with a time savings of 22.6%. In data extraction tasks,
experts using LEADS achieved an accuracy of 0.85 versus 0.80 without using
LEADS, alongside a 26.9% time savings. These findings highlight the potential
of specialized medical literature foundation models to outperform generic
models, delivering significant quality and efficiency benefits when integrated
into expert workflows for medical literature mining.
|
2501.16256
|
Improving DBMS Scheduling Decisions with Fine-grained Performance
Prediction on Concurrent Queries -- Extended
|
cs.DB cs.LG
|
Query scheduling is a critical task that directly impacts query performance
in database management systems (DBMS). Deeply integrated schedulers, which
require changes to DBMS internals, are usually customized for a specific engine
and can take months to implement. In contrast, non-intrusive schedulers make
coarse-grained decisions, such as controlling query admission and re-ordering
query execution, without requiring modifications to DBMS internals. They
require much less engineering effort and can be applied across a wide range of
DBMS engines, offering immediate benefits to end users. However, most existing
non-intrusive scheduling systems rely on simplified cost models and heuristics
that cannot accurately model query interactions under concurrency and different
system states, possibly leading to suboptimal scheduling decisions.
This work introduces IconqSched, a new, principled non-intrusive scheduler
that optimizes the execution order and timing of queries to reduce the total
end-to-end runtime as experienced by the user (query queuing time plus system
runtime). Unlike previous approaches, IconqSched features a novel fine-grained
predictor, Iconq, which treats the DBMS as a black box and accurately estimates
the system runtime of concurrently executed queries under different system
states. Using these predictions, IconqSched is able to capture system runtime
variations across different query mixes and system loads. It then employs a
greedy scheduling algorithm to effectively determine which queries to submit
and when to submit them. We compare IconqSched to other schedulers in terms of
end-to-end runtime using real workload traces. On Postgres, IconqSched reduces
end-to-end runtime by 16.2%-28.2% on average and 33.6%-38.9% in the tail.
Similarly, on Redshift, it reduces end-to-end runtime by 10.3%-14.1% on average
and 14.9%-22.2% in the tail.
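The shape of a predictor-driven greedy decision, which query to submit and whether to defer, can be sketched generically. This is not IconqSched's actual algorithm: `predict_runtime` stands in for a learned concurrency-aware predictor such as Iconq, and the single-step rule and slowdown limit are assumptions made for illustration.

```python
def greedy_schedule_step(queued, running, predict_runtime, slowdown_limit=2.0):
    """One greedy decision: among queued queries, pick the one whose
    admission yields the smallest predicted runtime for the new mix;
    defer everything (return None) if even the best admission would
    slow the current mix down by more than slowdown_limit."""
    if not queued:
        return None
    baseline = predict_runtime(running)
    best_q = min(queued, key=lambda q: predict_runtime(running | {q}))
    if baseline > 0 and predict_runtime(running | {best_q}) > slowdown_limit * baseline:
        return None
    return best_q
```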
|
2501.16265
|
Training Dynamics of In-Context Learning in Linear Attention
|
cs.LG
|
While attention-based models have demonstrated the remarkable ability of
in-context learning, the theoretical understanding of how these models acquired
this ability through gradient descent training is still preliminary. Towards
answering this question, we study the gradient descent dynamics of multi-head
linear self-attention trained for in-context linear regression. We examine two
parametrizations of linear self-attention: one with the key and query weights
merged as a single matrix (common in theoretical studies), and one with
separate key and query matrices (closer to practical settings). For the merged
parametrization, we show the training dynamics has two fixed points and the
loss trajectory exhibits a single, abrupt drop. We derive an analytical
time-course solution for a certain class of datasets and initialization. For
the separate parametrization, we show the training dynamics has exponentially
many fixed points and the loss exhibits saddle-to-saddle dynamics, which we
reduce to scalar ordinary differential equations. During training, the model
implements principal component regression in context with the number of
principal components increasing over training time. Overall, we characterize
how in-context learning abilities evolve during gradient descent training of
linear attention, revealing dynamics of abrupt acquisition versus progressive
improvements in models with different parametrizations.
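A minimal instance of the merged-parametrization setting: one linear self-attention layer (no softmax) over in-context regression tokens [x_i, y_i] plus a query token [x_q, 0]. The hand-set weights below implement the well-known "one gradient step" construction; they are an illustrative solution, not the trained dynamics analyzed in the paper, and the data is chosen so the prediction is exact.

```python
import numpy as np

def linear_attention(tokens, W_kq, W_v):
    """Single-head linear self-attention (no softmax):
    out = (T W_kq T^T) (T W_v) for token matrix T."""
    return (tokens @ W_kq @ tokens.T) @ (tokens @ W_v)

d, n = 3, 6
w_true = np.array([1.0, -2.0, 0.5])
# Context inputs chosen so that X^T X = n * I (makes the example exact).
X = np.sqrt(d) * np.tile(np.eye(d), (n // d, 1))
y = X @ w_true
x_q = np.array([1.0, 1.0, 1.0])

# Tokens: context rows [x_i, y_i], final query row [x_q, 0].
tokens = np.vstack([np.hstack([X, y[:, None]]),
                    np.hstack([x_q, 0.0])])

# Merged key-query matrix attends via input dot products; the value
# map reads out the y-component of each token.
W_kq = np.zeros((d + 1, d + 1)); W_kq[:d, :d] = np.eye(d) / n
W_v = np.zeros((d + 1, 1)); W_v[d, 0] = 1.0

pred = linear_attention(tokens, W_kq, W_v)[-1, 0]  # equals x_q . w_true here
```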
|
2501.16271
|
From Molecules to Mixtures: Learning Representations of Olfactory
Mixture Similarity using Inductive Biases
|
cs.LG cs.AI
|
Olfaction -- how molecules are perceived as odors to humans -- remains poorly
understood. Recently, the principal odor map (POM) was introduced to digitize
the olfactory properties of single compounds. However, smells in real life are
not pure single molecules, but complex mixtures of molecules, whose
representations remain relatively under-explored. In this work, we introduce
POMMix, an extension of the POM to represent mixtures. Our representation
builds upon the symmetries of the problem space in a hierarchical manner: (1)
graph neural networks for building molecular embeddings, (2) attention
mechanisms for aggregating molecular representations into mixture
representations, and (3) cosine prediction heads to encode olfactory perceptual
distance in the mixture embedding space. POMMix achieves state-of-the-art
predictive performance across multiple datasets. We also evaluate the
generalizability of the representation on multiple splits when applied to
unseen molecules and mixture sizes. Our work advances the effort to digitize
olfaction, and highlights the synergy of domain expertise and deep learning in
crafting expressive representations in low-data regimes.
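Steps (2) and (3) of the hierarchy can be sketched abstractly: attention pooling of per-molecule embeddings into a mixture embedding (permutation-invariant by construction) and a cosine head on mixture embeddings. The pooling form and names are illustrative assumptions, not POMMix's exact architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_embedding(mol_embs, attn_vec):
    """Aggregate per-molecule embeddings (rows) into one mixture
    embedding via attention pooling; scores are dot products with a
    learned query vector, so the result is order-invariant."""
    weights = softmax(mol_embs @ attn_vec)
    return weights @ mol_embs

def cosine_similarity(u, v, eps=1e-12):
    """Cosine head: perceptual similarity as angle in embedding space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))
```

Permutation invariance is the key inductive bias here: a mixture is a set of molecules, so shuffling the rows must not change its embedding.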
|
2501.16273
|
Return of the Encoder: Maximizing Parameter Efficiency for SLMs
|
cs.CL cs.AI cs.CV
|
The dominance of large decoder-only language models has overshadowed
encoder-decoder architectures, despite their fundamental efficiency advantages
in sequence processing. For small language models (SLMs) - those with 1 billion
parameters or fewer - our systematic analysis across GPU, CPU, and NPU
platforms reveals that encoder-decoder architectures achieve 47% lower
first-token latency and 4.7x higher throughput compared to decoder-only models
on edge devices. These gains may be attributed to encoder-decoder's one-time
input processing and efficient separation of understanding and generation
phases.
We introduce a novel knowledge distillation framework that enables
encoder-decoder models to leverage capabilities from large scalable
decoder-only teachers while preserving their architectural advantages,
achieving up to 6 average performance points improvement across diverse tasks,
with significant gains in asymmetric sequence tasks where input and output
distributions can benefit from different processing approaches.
When combined with modern advances like Rotary Positional Embeddings (RoPE)
and Vision encoders, our systematic investigation demonstrates that
encoder-decoder architectures provide a more practical path toward deploying
capable language models in resource-constrained environments. Our findings
challenge the prevailing trend toward decoder-only scaling, showing that
architectural choices become increasingly crucial as parameter budgets
decrease, particularly for on-device and edge deployments where computational
efficiency is paramount.
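The abstract does not detail the distillation framework; as background, the standard soft-label KD loss that decoder-only-teacher to encoder-decoder-student setups typically build on is shown below (temperature T and the T^2 scaling follow the classic Hinton et al. recipe; treating this as the paper's exact loss would be an assumption).

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label knowledge distillation: T^2-scaled KL divergence from
    the temperature-softened teacher distribution to the student's."""
    def log_softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    log_t = log_softmax(np.asarray(teacher_logits) / T)
    log_s = log_softmax(np.asarray(student_logits) / T)
    t = np.exp(log_t)
    return float((t * (log_t - log_s)).sum(axis=-1).mean() * T * T)
```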
|
2501.16274
|
What is Formal Verification without Specifications? A Survey on mining
LTL Specifications
|
cs.FL cs.AI cs.LO
|
Virtually all verification techniques using formal methods rely on the
availability of a formal specification, which describes the design requirements
precisely. However, formulating specifications remains a manual task that is
notoriously challenging and error-prone. To address this bottleneck in formal
verification, recent research has thus focused on automatically generating
specifications for formal verification from examples of (desired and undesired)
system behavior. In this survey, we list and compare recent advances in mining
specifications in Linear Temporal Logic (LTL), the de facto standard
specification language for reactive systems. Several approaches have been
designed for learning LTL formulas, which address different aspects and
settings of specification design. Moreover, the approaches rely on a diverse
range of techniques such as constraint solving, neural network training,
enumerative search, etc. We survey the current state-of-the-art techniques and
compare them for the convenience of formal methods practitioners.
|
2501.16276
|
URAG: Implementing a Unified Hybrid RAG for Precise Answers in
University Admission Chatbots -- A Case Study at HCMUT
|
cs.CL cs.IR
|
With the rapid advancement of Artificial Intelligence, particularly in
Natural Language Processing, Large Language Models (LLMs) have become pivotal
in educational question-answering systems, especially university admission
chatbots. Concepts such as Retrieval-Augmented Generation (RAG) and other
advanced techniques have been developed to enhance these systems by integrating
specific university data, enabling LLMs to provide informed responses on
admissions and academic counseling. However, these enhanced RAG techniques
often involve high operational costs and require the training of complex,
specialized modules, which poses challenges for practical deployment.
Additionally, in the educational context, it is crucial to provide accurate
answers to prevent misinformation, a task that LLM-based systems find
challenging without appropriate strategies and methods. In this paper, we
introduce the Unified RAG (URAG) Framework, a hybrid approach that
significantly improves the accuracy of responses, particularly for critical
queries. Experimental results demonstrate that URAG enhances our in-house,
lightweight model to perform comparably to state-of-the-art commercial models.
Moreover, to validate its practical applicability, we conducted a case study at
our educational institution, which received positive feedback and acclaim. This
study not only proves the effectiveness of URAG but also highlights its
feasibility for real-world implementation in educational settings.
|
2501.16282
|
Brain-Adapter: Enhancing Neurological Disorder Analysis with
Adapter-Tuning Multimodal Large Language Models
|
eess.IV cs.AI cs.CV
|
Understanding brain disorders is crucial for accurate clinical diagnosis and
treatment. Recent advances in Multimodal Large Language Models (MLLMs) offer a
promising approach to interpreting medical images with the support of text
descriptions. However, previous research has primarily focused on 2D medical
images, leaving richer spatial information of 3D images under-explored, and
single-modality-based methods are limited by overlooking the critical clinical
information contained in other modalities. To address this issue, this paper
proposes Brain-Adapter, a novel approach that incorporates an extra bottleneck
layer to learn new knowledge and instill it into the original pre-trained
knowledge. The major idea is to incorporate a lightweight bottleneck layer to
train fewer parameters while capturing essential information and utilize a
Contrastive Language-Image Pre-training (CLIP) strategy to align multimodal
data within a unified representation space. Extensive experiments demonstrated
the effectiveness of our approach in integrating multimodal data to
significantly improve the diagnosis accuracy without high computational costs,
highlighting the potential to enhance real-world diagnostic workflows.
|
2501.16287
|
A Unified Representation of Density-Power-Based Divergences Reducible to
M-Estimation
|
cs.IT math.IT math.ST stat.ML stat.TH
|
Density-power-based divergences are known to provide robust inference
procedures against outliers, and their extensions have been widely studied. A
characteristic of successful divergences is that the estimation problem can be
reduced to M-estimation. In this paper, we define a norm-based Bregman density
power divergence (NB-DPD) -- density-power-based divergence with functional
flexibility within the framework of Bregman divergences that can be reduced to
M-estimation. We show that, by specifying the function $\phi_\gamma$, NB-DPD
reduces to well-known divergences, such as the density power divergence and the
$\gamma$-divergence. Furthermore, by examining the combinations of functions
$\phi_\gamma$ corresponding to existing divergences, we show that a new
divergence connecting these existing divergences can be derived. Finally, we
show that the redescending property, one of the key indicators of robustness,
holds only for the $\gamma$-divergence.
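For the discrete case, the classical density power divergence that NB-DPD generalizes can be written down directly (the form of Basu et al., 1998; the $\phi_\gamma$-based NB-DPD itself is not reproduced here). It vanishes exactly when the two distributions coincide.

```python
import numpy as np

def density_power_divergence(g, f, alpha):
    """Density power divergence d_alpha(g, f) between discrete
    distributions g and f (Basu et al., 1998), alpha > 0:
    sum f^(1+a) - (1 + 1/a) sum g f^a + (1/a) sum g^(1+a)."""
    g, f = np.asarray(g, float), np.asarray(f, float)
    return float(np.sum(f ** (1 + alpha))
                 - (1 + 1 / alpha) * np.sum(g * f ** alpha)
                 + (1 / alpha) * np.sum(g ** (1 + alpha)))
```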
|
2501.16288
|
Upside Down Reinforcement Learning with Policy Generators
|
cs.LG cs.AI
|
Upside Down Reinforcement Learning (UDRL) is a promising framework for
solving reinforcement learning problems which focuses on learning
command-conditioned policies. In this work, we extend UDRL to the task of
learning a command-conditioned generator of deep neural network policies. We
accomplish this using Hypernetworks - a variant of Fast Weight Programmers,
which learn to decode input commands representing a desired expected return
into command-specific weight matrices. Our method, dubbed Upside Down
Reinforcement Learning with Policy Generators (UDRLPG), streamlines comparable
techniques by removing the need for an evaluator or critic to update the
weights of the generator. To counteract the increased variance in last returns
caused by not having an evaluator, we decouple the sampling probability of the
buffer from the absolute number of policies in it, which, together with a
simple weighting strategy, improves the empirical convergence of the algorithm.
Compared with existing algorithms, UDRLPG achieves competitive performance and
high returns, sometimes outperforming more complex architectures. Our
experiments show that a trained generator can generalize to create policies
that achieve unseen returns zero-shot. The proposed method appears to be
effective in mitigating some of the challenges associated with learning highly
multimodal functions. Altogether, we believe that UDRLPG represents a promising
step forward in achieving greater empirical sample efficiency in RL. A full
implementation of UDRLPG is publicly available at
https://github.com/JacopoD/udrlpg_
|
2501.16289
|
Multi-view Structural Convolution Network for Domain-Invariant Point
Cloud Recognition of Autonomous Vehicles
|
cs.CV
|
Point cloud representation has recently become a research hotspot in the
field of computer vision and has been utilized for autonomous vehicles.
However, adapting deep learning networks for point cloud data recognition is
challenging due to the variability in datasets and sensor technologies. This
variability underscores the necessity for adaptive techniques to maintain
accuracy under different conditions. In this paper, we present the Multi-View
Structural Convolution Network (MSCN) designed for domain-invariant point cloud
recognition. MSCN comprises Structural Convolution Layers (SCL) that extract
local context geometric features from point clouds and Structural Aggregation
Layers (SAL) that extract and aggregate both local and overall context features
from point clouds. Additionally, our MSCN enhances feature representation
robustness by training with unseen domain point clouds derived from source
domain point clouds. This method acquires domain-invariant features and
exhibits robust, consistent performance across various point cloud datasets,
ensuring compatibility with diverse sensor configurations without the need for
parameter adjustments. This highlights MSCN's potential to significantly
improve the reliability of domain-invariant feature learning across different
environments. Our code is available at https://github.com/MLMLab/MSCN.
|
2501.16295
|
Mixture-of-Mamba: Enhancing Multi-Modal State-Space Models with
Modality-Aware Sparsity
|
cs.LG cs.AI cs.CL cs.CV
|
State Space Models (SSMs) have emerged as efficient alternatives to
Transformers for sequential modeling, but their inability to leverage
modality-specific features limits their performance in multi-modal pretraining.
Here, we propose Mixture-of-Mamba, a novel SSM architecture that introduces
modality-aware sparsity through modality-specific parameterization of the Mamba
block. Building on Mixture-of-Transformers (W. Liang et al. arXiv:2411.04996;
2024), we extend the benefits of modality-aware sparsity to SSMs while
preserving their computational efficiency. We evaluate Mixture-of-Mamba across
three multi-modal pretraining settings: Transfusion (interleaved text and
continuous image tokens with diffusion loss), Chameleon (interleaved text and
discrete image tokens), and an extended three-modality framework incorporating
speech. Mixture-of-Mamba consistently reaches the same loss values at earlier
training steps with significantly reduced computational costs. In the
Transfusion setting, Mixture-of-Mamba achieves equivalent image loss using only
34.76% of the training FLOPs at the 1.4B scale. In the Chameleon setting,
Mixture-of-Mamba reaches similar image loss with just 42.50% of the FLOPs at
the 1.4B scale, and similar text loss with just 65.40% of the FLOPs. In the
three-modality setting, MoM matches speech loss at 24.80% of the FLOPs at the
1.4B scale. Our ablation study highlights the synergistic effects of decoupling
projection components, where joint decoupling yields greater gains than
individual modifications. These results establish modality-aware sparsity as a
versatile and effective design principle, extending its impact from
Transformers to SSMs and setting new benchmarks in multi-modal pretraining. Our
code can be accessed at https://github.com/Weixin-Liang/Mixture-of-Mamba
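The core of modality-aware sparsity, routing each token through the projection weights of its own modality while keeping one shared computation pattern, can be sketched generically. This stands in for the Mamba block's modality-specific projections; the names and shapes are illustrative.

```python
import numpy as np

def modality_aware_projection(tokens, modality_ids, weights):
    """Project each token with the weight matrix of its own modality
    (e.g. 0 = text, 1 = image, 2 = speech) instead of one shared matrix."""
    out = np.empty((tokens.shape[0], weights[0].shape[1]))
    for m, W in enumerate(weights):
        mask = modality_ids == m
        out[mask] = tokens[mask] @ W
    return out
```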
|
2501.16296
|
Entanglement-Assisted Coding for Arbitrary Linear Computations Over a
Quantum MAC
|
cs.IT cs.NI eess.SP math.IT quant-ph
|
We study a linear computation problem over a quantum multiple access channel
(LC-QMAC), where $S$ servers share an entangled state and separately store
classical data streams $W_1,\cdots, W_S$ over a finite field $\mathbb{F}_d$. A
user aims to compute $K$ linear combinations of these data streams, represented
as $Y = \mathbf{V}_1 W_1 + \mathbf{V}_2 W_2 + \cdots + \mathbf{V}_S W_S \in
\mathbb{F}_d^{K \times 1}$. To this end, each server encodes its classical
information into its local quantum subsystem and transmits it to the user, who
retrieves the desired computations via quantum measurements. In this work, we
propose an achievable scheme for LC-QMAC based on the stabilizer formalism and
the ideas from entanglement-assisted quantum error-correcting codes (EAQECC).
Specifically, given any linear computation matrix, we construct a
self-orthogonal matrix that can be implemented using the stabilizer formalism.
Also, we apply precoding matrices to minimize the number of auxiliary qudits
required. Our scheme achieves more computations per qudit, i.e., a higher
computation rate, compared to the best-known methods in the literature, and
attains the capacity in certain cases.
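The user's target computation $Y = \mathbf{V}_1 W_1 + \cdots + \mathbf{V}_S W_S$ over $\mathbb{F}_d$ is itself classical and easy to state in code; the quantum scheme (stabilizer formalism, EAQECC-style encoding) that delivers it via measurements is not reproduced here. A minimal sketch, assuming a prime $d$:

```python
import numpy as np

def linear_combinations(V_list, W_list, d):
    """Compute Y = V_1 W_1 + ... + V_S W_S with all entries reduced
    modulo a prime d, i.e. the target computation over F_d."""
    Y = sum(np.asarray(V) @ np.asarray(W) for V, W in zip(V_list, W_list))
    return np.mod(Y, d)
```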
|
2501.16297
|
FALCON: Resolving Visual Redundancy and Fragmentation in High-resolution
Multimodal Large Language Models via Visual Registers
|
cs.CV
|
The incorporation of high-resolution visual input equips multimodal large
language models (MLLMs) with enhanced visual perception capabilities for
real-world tasks. However, most existing high-resolution MLLMs rely on a
cropping-based approach to process images, which leads to fragmented visual
encoding and a sharp increase in redundant tokens. To tackle these issues, we
propose the FALCON model. FALCON introduces a novel visual register technique
to simultaneously: 1) Eliminate redundant tokens at the stage of visual
encoding. To directly address the visual redundancy present in the output of
vision encoder, we propose a Register-based Representation Compacting
(ReCompact) mechanism. This mechanism introduces a set of learnable visual
registers designed to adaptively aggregate essential information while
discarding redundancy. It enables the encoder to produce a more compact visual
representation with a minimal number of output tokens, thus eliminating the
need for an additional compression module. 2) Ensure continuity in visual
encoding. To address the potential encoding errors caused by fragmented visual
inputs, we develop a Register Interactive Attention (ReAtten) module. This
module facilitates effective and efficient information exchange across
sub-images by enabling interactions between visual registers. It ensures the
continuity of visual semantics throughout the encoding. We conduct
comprehensive experiments with FALCON on high-resolution benchmarks across a
wide range of scenarios. FALCON demonstrates superior performance with a
remarkable 9-fold and 16-fold reduction in visual tokens.
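The ReCompact idea, a small set of learnable registers attending to many patch tokens so the output length is fixed by the register count rather than the image resolution, can be sketched as plain cross-attention. This is a generic sketch, not FALCON's implementation; the learnable registers are represented by an ordinary matrix here.

```python
import numpy as np

def register_compaction(registers, patches):
    """Cross-attention from R registers to N patch tokens: the output
    has R rows regardless of N, compacting the visual encoding."""
    d = patches.shape[1]
    scores = registers @ patches.T / np.sqrt(d)      # (R, N)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # rows sum to 1
    return attn @ patches                            # (R, d)
```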
|
2501.16298
|
Uncoded Download in Lagrange-Coded Elastic Computing with Straggler
Tolerance
|
cs.IT cs.DC math.IT
|
Coded elastic computing, introduced by Yang et al. in 2018, is a technique
designed to mitigate the impact of elasticity in cloud computing systems, where
machines can be preempted or be added during computing rounds. This approach
utilizes maximum distance separable (MDS) coding for both storage and download
in matrix-matrix multiplications. The proposed scheme is unable to tolerate
stragglers and has high encoding complexity and upload cost. In 2023, we
addressed these limitations by employing uncoded storage and Lagrange-coded
download. However, it results in a large storage size. To address the
challenges of storage size and upload cost, in this paper, we focus on
Lagrange-coded elastic computing based on uncoded download. We propose a new
class of elastic computing schemes, using Lagrange-coded storage with uncoded
download (LCSUD). Our proposed schemes address both elasticity and straggler
challenges while achieving lower storage size, reduced encoding complexity, and
upload cost compared to existing methods.
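The Lagrange-coded storage step can be illustrated over a small prime field: each machine stores one evaluation of the degree-(K-1) polynomial interpolating the data, so any K stored evaluations suffice to recover it. This is a generic Lagrange-coding sketch under that assumption; LCSUD's actual placement and uncoded-download rules are not reproduced.

```python
def lagrange_eval(points, values, x, p):
    """Evaluate the Lagrange polynomial through (points[i], values[i])
    at x, over the prime field F_p (points must be distinct mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(points, values)):
        num, den = 1, 1
        for j, xj in enumerate(points):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def lagrange_encode(data, betas, alphas, p):
    """Machine s stores f(alphas[s]), where f is the polynomial with
    f(betas[i]) = data[i]; any len(data) shares recover the data."""
    return [lagrange_eval(betas, data, a, p) for a in alphas]
```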
|
2501.16300
|
Large Models in Dialogue for Active Perception and Anomaly Detection
|
cs.CV cs.AI
|
Autonomous aerial monitoring is an important task aimed at gathering
information from areas that may not be easily accessible by humans. At the same
time, this task often requires recognizing anomalies from a significant
distance, or ones not previously encountered. In this paper, we propose a
novel framework that leverages the advanced capabilities provided by Large
Language Models (LLMs) to actively collect information and perform anomaly
detection in novel scenes. To this end, we propose an LLM-based model dialogue
approach, in which two deep learning models engage in a dialogue to actively
control a drone to increase perception and anomaly detection accuracy. We
conduct our experiments in a high fidelity simulation environment where an LLM
is provided with a predetermined set of natural language movement commands
mapped into executable code functions. Additionally, we deploy a multimodal
Visual Question Answering (VQA) model charged with the task of visual question
answering and captioning. By engaging the two models in conversation, the LLM
asks exploratory questions while simultaneously flying a drone into different
parts of the scene, providing a novel way to implement active perception. By
leveraging the LLM's reasoning ability, we produce an improved, detailed description
of the scene going beyond existing static perception approaches. In addition to
information gathering, our approach is utilized for anomaly detection and our
results demonstrate the proposed method's effectiveness in informing and
alerting about potential hazards.
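A toy sketch of the model-dialogue loop (stub models throughout: the command list, scene contents, and hazard keyword are illustrative assumptions, not the paper's LLM/VQA setup or simulation environment):

```python
# The "LLM" picks a movement command and a question each step; the "VQA"
# model answers from the current view; observations accumulate in a history.
COMMANDS = ["move_forward", "turn_left", "turn_right", "ascend"]

SCENE = {0: "an empty field", 1: "a parked truck", 2: "smoke near a building"}

def vqa_answer(position, question):
    """Stub VQA model: caption whatever is at the drone's position."""
    return SCENE.get(position, "nothing notable")

def llm_step(step, history):
    """Stub LLM policy: cycle through commands and ask a fixed question."""
    return COMMANDS[step % len(COMMANDS)], "What do you see here?"

def explore(n_steps=3):
    position, history = 0, []
    for step in range(n_steps):
        command, question = llm_step(step, history)
        position += 1                      # pretend the command moved us
        answer = vqa_answer(position, question)
        history.append((command, answer))
    # flag an anomaly if any observation mentions a hazard keyword
    alert = any("smoke" in a for _, a in history)
    return history, alert
```

The real system replaces the stubs with an LLM emitting executable movement code and a multimodal VQA model grounded in simulator views.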
|
2501.16302
|
Matryoshka Re-Ranker: A Flexible Re-Ranking Architecture With
Configurable Depth and Width
|
cs.CL
|
Large language models (LLMs) provide powerful foundations to perform
fine-grained text re-ranking. However, they are often prohibitively expensive in practice
due to constraints on computation bandwidth. In this work, we propose a
\textbf{flexible} architecture called \textbf{Matryoshka Re-Ranker}, which is
designed to facilitate \textbf{runtime customization} of model layers and
sequence lengths at each layer based on users' configurations. Consequently,
the LLM-based re-rankers can be made applicable across various real-world
situations. The increased flexibility may come at the cost of precision loss.
To address this problem, we introduce a suite of techniques to optimize the
performance. First, we propose \textbf{cascaded self-distillation}, where each
sub-architecture learns to preserve a precise re-ranking performance from its
super components, whose predictions can be exploited as smooth and informative
teacher signals. Second, we design a \textbf{factorized compensation
mechanism}, where two collaborative Low-Rank Adaptation modules, vertical and
horizontal, are jointly employed to compensate for the precision loss resulting
from arbitrary combinations of layer and sequence compression. We perform
comprehensive experiments based on the passage and document retrieval datasets
from MSMARCO, along with all public datasets from the BEIR benchmark. In our
experiments, Matryoshka Re-Ranker substantially outperforms the existing
methods, while effectively preserving its superior performance across various
forms of compression and different application scenarios.
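A toy sketch of the configurable depth-and-width idea (the layer update, salience-based token pruning, and pooled scoring are all illustrative assumptions, not the paper's LLM architecture or its distillation/compensation techniques):

```python
import numpy as np

class MatryoshkaRanker:
    """Each 'layer' mixes token features, then prunes the sequence to a
    per-layer width budget; the final score pools the surviving tokens.
    Depth and widths are chosen at runtime, not at training time."""
    def __init__(self, dim=8, n_layers=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = [rng.standard_normal((dim, dim)) / np.sqrt(dim)
                  for _ in range(n_layers)]
        self.v = rng.standard_normal(dim)

    def score(self, tokens, depth, widths):
        h = tokens
        for layer in range(depth):
            h = np.tanh(h @ self.W[layer])          # cheap layer update
            keep = min(widths[layer], h.shape[0])   # width budget here
            sal = np.abs(h @ self.v)                # token salience proxy
            h = h[np.argsort(-sal)[:keep]]          # prune to the budget
        return float(h.mean(axis=0) @ self.v)       # pooled relevance score
```

The same weights serve every (depth, widths) configuration, which is what makes runtime customization possible; the paper's contribution is keeping the cheap configurations accurate.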
|
2501.16303
|
RAPID: Retrieval-Augmented Parallel Inference Drafting for Text-Based
Video Event Retrieval
|
cs.CL cs.IR
|
Retrieving events from videos using text queries has become increasingly
challenging due to the rapid growth of multimedia content. Existing methods for
text-based video event retrieval often focus heavily on object-level
descriptions, overlooking the crucial role of contextual information. This
limitation is especially apparent when queries lack sufficient context, such as
missing location details or ambiguous background elements. To address these
challenges, we propose a novel system called RAPID (Retrieval-Augmented
Parallel Inference Drafting), which leverages advancements in Large Language
Models (LLMs) and prompt-based learning to semantically correct and enrich user
queries with relevant contextual information. These enriched queries are then
processed through parallel retrieval, followed by an evaluation step to select
the most relevant results based on their alignment with the original query.
Through extensive experiments on our custom-developed dataset, we demonstrate
that RAPID significantly outperforms traditional retrieval methods,
particularly for contextually incomplete queries. Our system was validated for
both speed and accuracy through participation in the Ho Chi Minh City AI
Challenge 2024, where it successfully retrieved events from over 300 hours of
video. Further evaluation comparing RAPID with the baseline proposed by the
competition organizers demonstrated its superior effectiveness, highlighting
the strength and robustness of our approach.
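A minimal sketch of the enrich-retrieve-select pipeline (the corpus, the rule-based `enrich` stand-in for LLM drafting, and the word-overlap retriever are all illustrative assumptions, not RAPID's actual components):

```python
from concurrent.futures import ThreadPoolExecutor

CORPUS = {
    "clip1": "a street parade at night in the city center with lanterns",
    "clip2": "fireworks over the river during a festival",
    "clip3": "a quiet beach at sunrise",
}

def enrich(query):
    """Stand-in for the LLM enrichment step: emit several context-augmented
    drafts of the user query (the real system prompts an LLM for these)."""
    return [query, query + " at night", query + " in the city center"]

def retrieve(draft):
    """Toy lexical retriever: rank clips by word overlap with the draft."""
    words = set(draft.lower().split())
    return max(CORPUS, key=lambda k: len(words & set(CORPUS[k].split())))

def rapid(query):
    drafts = enrich(query)
    with ThreadPoolExecutor() as pool:          # parallel inference drafting
        hits = list(pool.map(retrieve, drafts))
    # keep the hit whose clip text aligns best with the ORIGINAL query
    qwords = set(query.lower().split())
    return max(hits, key=lambda k: len(qwords & set(CORPUS[k].split())))
```

The final selection step scores candidates against the original query rather than the enriched drafts, so hallucinated context in a draft cannot override the user's intent.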
|
2501.16306
|
Graph Neural Network Based Hybrid Beamforming Design in Wideband
Terahertz MIMO-OFDM Systems
|
eess.SP cs.LG cs.NI
|
6G wireless technology is projected to adopt higher and wider frequency
bands, enabled by highly directional beamforming. However, the vast bandwidths
available also make the impact of beam squint in massive multiple-input
multiple-output (MIMO) systems non-negligible. Traditional approaches such as
adding a true-time-delay line (TTD) on each antenna are costly due to the
massive antenna arrays required. This paper puts forth a signal processing
alternative, specifically adapted to the multicarrier structure of OFDM
systems, through an innovative application of Graph Neural Networks (GNNs) to
optimize hybrid beamforming. By integrating two types of graph nodes to
represent the analog and the digital beamforming matrices efficiently, our
approach not only reduces the computational and memory burdens but also
achieves high spectral efficiency performance, approaching that of all digital
beamforming. The GNN runtime and memory requirement are at a fraction of the
processing time and resource consumption of traditional signal processing
methods, hence enabling real-time adaptation of hybrid beamforming.
Furthermore, the proposed GNN exhibits strong resilience to beam squint,
achieving almost constant spectral efficiency even as the system bandwidth
increases at higher carrier frequencies.
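A toy sketch of one message-passing round between the two node types (feature dimensions, the mean aggregation, and the tanh update are illustrative assumptions, not the paper's GNN or its beamforming objective):

```python
import numpy as np

def hetero_gnn_layer(analog_feats, digital_feats, Wa, Wd, Wad, Wda):
    """Analog-beamformer nodes and digital-beamformer nodes exchange
    aggregated messages, then each side updates its own features."""
    msg_to_analog = digital_feats.mean(axis=0) @ Wda   # digital -> analog
    msg_to_digital = analog_feats.mean(axis=0) @ Wad   # analog -> digital
    new_a = np.tanh(analog_feats @ Wa + msg_to_analog)
    new_d = np.tanh(digital_feats @ Wd + msg_to_digital)
    return new_a, new_d
```

Representing the analog and digital matrices as two node sets keeps the parameter count tied to the feature dimension rather than the antenna count, which is the source of the memory savings claimed above.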
|
2501.16309
|
Evaluating The Performance of Using Large Language Models to Automate
Summarization of CT Simulation Orders in Radiation Oncology
|
physics.med-ph cs.AI
|
Purpose: This study aims to use a large language model (LLM) to automate the
generation of summaries from CT simulation orders and evaluate its
performance.
Materials and Methods: A total of 607 CT simulation orders for patients were
collected from the Aria database at our institution. A locally hosted Llama 3.1
405B model, accessed via the Application Programming Interface (API) service,
was used to extract keywords from the CT simulation orders and generate
summaries. The downloaded CT simulation orders were categorized into seven
groups based on treatment modalities and disease sites. For each group, a
customized instruction prompt was developed collaboratively with therapists to
guide the Llama 3.1 405B model in generating summaries. The ground truth for
the corresponding summaries was manually derived by carefully reviewing each CT
simulation order and subsequently verified by therapists. The accuracy of the
LLM-generated summaries was evaluated by therapists using the verified ground
truth as a reference.
Results: About 98% of the LLM-generated summaries aligned with the manually
generated ground truth in terms of accuracy. Our evaluations showed an improved
consistency in format and enhanced readability of the LLM-generated summaries
compared to the corresponding therapist-generated summaries. This automated
approach demonstrated consistent performance across all groups, regardless of
modality or disease site.
Conclusions: This study demonstrated the high precision and consistency of
the Llama 3.1 405B model in extracting keywords and summarizing CT simulation
orders, suggesting that LLMs have great potential to help with this task,
reduce the workload of therapists and improve workflow efficiency.
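A minimal sketch of the per-group prompt assembly step (the group names and instruction text are hypothetical placeholders; the study's actual prompts were developed with therapists, and the API call to the locally hosted model is omitted):

```python
# Hypothetical modality/site groups and instructions, for illustration only.
GROUP_INSTRUCTIONS = {
    "SBRT-lung": "Extract immobilization device, breath-hold technique, "
                 "and scan range; summarize in one sentence.",
    "3D-breast": "Extract laterality, arm position, and wire placement; "
                 "summarize in one sentence.",
}

def build_prompt(group, order_text):
    """Assemble the customized instruction prompt for one CT simulation
    order, ready to send to the LLM endpoint."""
    if group not in GROUP_INSTRUCTIONS:
        raise KeyError(f"no instruction prompt defined for group {group!r}")
    return (f"{GROUP_INSTRUCTIONS[group]}\n\n"
            f"CT simulation order:\n{order_text}\n\nSummary:")
```

Keeping one instruction block per treatment group, as the study does, lets the same model handle heterogeneous orders without per-order prompt engineering.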
|