| id | title | categories | abstract |
|---|---|---|---|
2501.05465
|
Small Language Models (SLMs) Can Still Pack a Punch: A survey
|
cs.CL
|
As foundation AI models continue to increase in size, an important question
arises: is massive scale the only path forward? This survey of about 160
papers presents a family of Small Language Models (SLMs) in the 1 to 8 billion
parameter range and demonstrates that smaller models can perform as well as,
or even outperform, large models. We explore task-agnostic, general-purpose
SLMs, task-specific SLMs, and techniques to create SLMs that can guide the
community to build models while balancing performance, efficiency, scalability,
and cost. Furthermore, we define and characterize SLMs' effective sizes,
representing increased capability with respect to LLMs.
|
2501.05468
|
LatteReview: A Multi-Agent Framework for Systematic Review Automation
Using Large Language Models
|
cs.CL
|
Systematic literature reviews and meta-analyses are essential for
synthesizing research insights, but they remain time-intensive and
labor-intensive due to the iterative processes of screening, evaluation, and
data extraction. This paper introduces and evaluates LatteReview, a
Python-based framework that leverages large language models (LLMs) and
multi-agent systems to automate key elements of the systematic review process.
Designed to streamline workflows while maintaining rigor, LatteReview utilizes
modular agents for tasks such as title and abstract screening, relevance
scoring, and structured data extraction. These agents operate within
orchestrated workflows, supporting sequential and parallel review rounds,
dynamic decision-making, and iterative refinement based on user feedback.
LatteReview's architecture integrates LLM providers, enabling compatibility
with both cloud-based and locally hosted models. The framework supports
features such as Retrieval-Augmented Generation (RAG) for incorporating
external context, multimodal reviews, Pydantic-based validation for structured
inputs and outputs, and asynchronous programming for handling large-scale
datasets. The framework is available in a public GitHub repository, with
detailed documentation and an installable package.
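The agent pattern the abstract describes (modular screening agents with structured, validated outputs, composed into sequential review rounds) can be sketched in a few lines. The sketch below is purely illustrative: it uses stdlib dataclasses in place of Pydantic and a keyword heuristic in place of an LLM call, and none of the names come from LatteReview's actual API.

```python
from dataclasses import dataclass

@dataclass
class ReviewDecision:
    """Structured output of one screening agent (stands in for a Pydantic model)."""
    score: float   # relevance score in [0, 1]
    include: bool  # screening verdict

    def __post_init__(self):
        # Validation of structured output, in the spirit of Pydantic-based checks.
        if not 0.0 <= self.score <= 1.0:
            raise ValueError("score must lie in [0, 1]")

def keyword_screening_agent(abstract: str, keywords: set, threshold: float = 0.5) -> ReviewDecision:
    """Toy title/abstract screening: score = fraction of keywords present."""
    words = set(abstract.lower().split())
    score = len(keywords & words) / len(keywords) if keywords else 0.0
    return ReviewDecision(score=score, include=score >= threshold)

def sequential_review(abstracts, keywords, threshold=0.5):
    """One sequential review round: every abstract passes through the same agent."""
    return [keyword_screening_agent(a, keywords, threshold) for a in abstracts]

decisions = sequential_review(
    ["large language models automate systematic review screening",
     "a field study of soil moisture sensors"],
    keywords={"language", "models", "review"},
)
print([d.include for d in decisions])  # [True, False]
```

Parallel rounds would simply run several such agents over the same inputs and aggregate their `ReviewDecision`s, which is where structured outputs pay off.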
|
2501.05470
|
RTLSquad: Multi-Agent Based Interpretable RTL Design
|
cs.AR cs.AI cs.SE
|
Optimizing Register-Transfer Level (RTL) code is crucial for improving
hardware PPA performance. Large Language Models (LLMs) offer new approaches for
automatic RTL code generation and optimization. However, existing methods often
lack decision interpretability (sufficient, understandable justification for
decisions), making it difficult for hardware engineers to trust the generated
results, thus preventing these methods from being integrated into the design
process. To address this, we propose RTLSquad, a novel LLM-Based Multi-Agent
system for interpretable RTL code generation. RTLSquad divides the design
process into exploration, implementation, and verification & evaluation stages
managed by specialized agent squads, generating optimized RTL code through
inter-agent collaboration, and providing decision interpretability through the
communication process. Experiments show that RTLSquad excels in generating
functionally correct RTL code and optimizing PPA performance, while also having
the capability to provide decision paths, demonstrating the practical value of
our system.
|
2501.05471
|
Found in Translation: semantic approaches for enhancing AI
interpretability in face verification
|
cs.CV cs.AI cs.HC cs.LG
|
The increasing complexity of machine learning models in computer vision,
particularly in face verification, requires the development of explainable
artificial intelligence (XAI) to enhance interpretability and transparency.
This study extends previous work by integrating semantic concepts derived from
human cognitive processes into XAI frameworks to bridge the comprehension gap
between model outputs and human understanding. We propose a novel approach
combining global and local explanations, using semantic features defined by
user-selected facial landmarks to generate similarity maps and textual
explanations via large language models (LLMs). The methodology was validated
through quantitative experiments and user feedback, demonstrating improved
interpretability. Results indicate that our semantic-based approach,
particularly the most detailed set, offers a more nuanced understanding of
model decisions than traditional methods. User studies highlight a preference
for our semantic explanations over traditional pixel-based heatmaps, emphasizing
the benefits of human-centric interpretability in AI. This work contributes to
the ongoing efforts to create XAI frameworks that align AI model behaviour
with human cognitive processes, fostering trust and acceptance in critical
applications.
|
2501.05472
|
The 2nd Place Solution from the 3D Semantic Segmentation Track in the
2024 Waymo Open Dataset Challenge
|
cs.CV cs.LG cs.RO
|
3D semantic segmentation is one of the most crucial tasks in driving
perception. The ability of a learning-based model to accurately perceive dense
3D surroundings often ensures the safe operation of autonomous vehicles.
However, existing LiDAR-based 3D semantic segmentation databases consist of
sequentially acquired LiDAR scans that are long-tailed and lack training
diversity. In this report, we introduce MixSeg3D, a sophisticated combination
of the strong point cloud segmentation model with advanced 3D data mixing
strategies. Specifically, our approach integrates the MinkUNet family with
LaserMix and PolarMix, two scene-scale data augmentation methods that blend
LiDAR point clouds along the ego-scene's inclination and azimuth directions.
Through empirical experiments, we demonstrate the superiority of MixSeg3D over
the baseline and prior arts. Our team achieved 2nd place in the 3D semantic
segmentation track of the 2024 Waymo Open Dataset Challenge.
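The azimuth-direction mixing the report leans on can be illustrated with a minimal numpy sketch in the spirit of PolarMix (a toy stand-in, not the authors' or the original PolarMix implementation): points of one scan whose azimuth falls inside a chosen sector are swapped with the matching sector of another scan.

```python
import numpy as np

def azimuth_mix(cloud_a, cloud_b, start_deg=0.0, end_deg=180.0):
    """Swap the azimuth sector [start_deg, end_deg) between two LiDAR scans.

    cloud_a, cloud_b: (N, 3) xyz arrays in the ego frame.
    Returns: points of A outside the sector plus points of B inside it.
    """
    def in_sector(cloud):
        # Azimuth of each point around the ego vehicle, in [0, 360).
        az = np.degrees(np.arctan2(cloud[:, 1], cloud[:, 0])) % 360.0
        return (az >= start_deg) & (az < end_deg)

    keep_a = cloud_a[~in_sector(cloud_a)]
    take_b = cloud_b[in_sector(cloud_b)]
    return np.concatenate([keep_a, take_b], axis=0)

a = np.array([[1.0, 0.1, 0.0],    # azimuth ~6 deg  -> inside [0, 180), replaced
              [1.0, -0.1, 0.0]])  # azimuth ~354 deg -> outside, kept
b = np.array([[0.0, 1.0, 0.0],    # azimuth 90 deg  -> inside, taken from B
              [0.0, -1.0, 0.0]])  # azimuth 270 deg -> outside, ignored
mixed = azimuth_mix(a, b)
print(mixed.shape)  # (2, 3)
```

A full augmentation would carry the per-point semantic labels through the same masks; inclination-based mixing (as in LaserMix) works analogously on the elevation angle.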
|
2501.05473
|
Implicit Guidance and Explicit Representation of Semantic Information in
Points Cloud: A Survey
|
cs.CV
|
Point clouds, a prominent method of 3D representation, are extensively
utilized across industries such as autonomous driving, surveying, electricity,
architecture, and gaming, and have been rigorously investigated for their
accuracy and resilience. The extraction of semantic information from scenes
enhances both human understanding and machine perception. By integrating
semantic information from two-dimensional scenes with three-dimensional point
clouds, researchers aim to improve the precision and efficiency of various
tasks. This paper provides a comprehensive review of the diverse applications
and recent advancements in the integration of semantic information within point
clouds. We explore the dual roles of semantic information in point clouds,
encompassing both implicit guidance and explicit representation, across
traditional and emerging tasks. Additionally, we offer a comparative analysis
of publicly available datasets tailored to specific tasks and present notable
observations. In conclusion, we discuss several challenges and potential issues
that may arise in the future when fully utilizing semantic information in point
clouds, providing our perspectives on these obstacles. The articles related to
semantic-based point cloud tasks, classified and organized, together with
continuously updated results from different fields, can be accessed at
https://github.com/Jasmine-tjy/Semantic-based-Point-Cloud-Tasks.
|
2501.05474
|
Modality-Invariant Bidirectional Temporal Representation Distillation
Network for Missing Multimodal Sentiment Analysis
|
cs.CL cs.AI cs.LG cs.SD eess.AS
|
Multimodal Sentiment Analysis (MSA) integrates diverse modalities (text,
audio, and video) to comprehensively analyze and understand individuals'
emotional states. However, the real-world prevalence of incomplete data poses
significant challenges to MSA, mainly due to the randomness with which
modalities go missing. Moreover, the heterogeneity issue in multimodal data has yet to be
effectively addressed. To tackle these challenges, we introduce the
Modality-Invariant Bidirectional Temporal Representation Distillation Network
(MITR-DNet) for Missing Multimodal Sentiment Analysis. MITR-DNet employs a
distillation approach, wherein a complete modality teacher model guides a
missing modality student model, ensuring robustness when modalities are
missing. Simultaneously, we developed the Modality-Invariant Bidirectional
Temporal Representation Learning Module (MIB-TRL) to mitigate heterogeneity.
|
2501.05475
|
Retrieval-Augmented Generation by Evidence Retroactivity in LLMs
|
cs.CL cs.AI cs.IR
|
Retrieval-augmented generation has gained significant attention due to its
ability to integrate relevant external knowledge, enhancing the accuracy and
reliability of the LLMs' responses. Most existing methods apply a dynamic,
multi-step retrieval-and-generation process to address complex multi-hop
questions by decomposing them into sub-problems. However, these methods rely on
a unidirectional forward reasoning paradigm, where errors from insufficient
reasoning steps or inherent flaws in current retrieval systems are
irreversible, potentially derailing the entire reasoning chain. For the first
time, this work introduces Retroactive Retrieval-Augmented Generation
(RetroRAG), a novel framework to build a retroactive reasoning paradigm.
RetroRAG revises and updates the evidence, redirecting the reasoning chain to
the correct direction. RetroRAG constructs an evidence-collation-discovery
framework to search, generate, and refine credible evidence. It synthesizes
inferential evidence related to the key entities in the question from the
existing source knowledge and formulates search queries to uncover additional
information. As new evidence is found, RetroRAG continually updates and
organizes this information, enhancing its ability to locate further necessary
evidence. Paired with an Answerer to generate and evaluate outputs, RetroRAG is
capable of refining its reasoning process iteratively until a reliable answer
is obtained. Empirical evaluations show that RetroRAG significantly outperforms
existing methods.
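The retroactive idea (evidence collected earlier can be revised rather than frozen into a forward-only chain) can be sketched as a toy retrieve-and-revise loop over a small knowledge store. This is illustrative only; RetroRAG itself pairs retrieval with an LLM-based Answerer and query generation.

```python
def retro_rag(seed_entities, knowledge, max_rounds=10):
    """Toy retroactive retrieval loop (illustrative, not the authors' system).

    knowledge: {entity: (fact, linked_entities)}. Evidence for an entity is
    overwritten whenever that entity is retrieved again, so earlier steps can
    be revised instead of irreversibly derailing the reasoning chain.
    """
    evidence = {}
    frontier = list(seed_entities)
    for _ in range(max_rounds):
        if not frontier:
            break
        entity = frontier.pop(0)
        fact, linked = knowledge.get(entity, (None, ()))
        if fact is None:
            continue
        evidence[entity] = fact  # revises any earlier evidence for this entity
        frontier += [e for e in linked if e not in evidence]
    return evidence

# A small multi-hop chain: the question's key entity leads to further evidence.
knowledge = {
    "question": ("X founded Y", {"Y"}),
    "Y": ("Y is headquartered in Z", {"Z"}),
    "Z": ("Z is a city", set()),
}
ev = retro_rag(["question"], knowledge)
print(sorted(ev))  # ['Y', 'Z', 'question']
```

An Answerer component would consume `evidence` after each round and decide whether the collected facts already support a reliable answer.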
|
2501.05476
|
IntegrityAI at GenAI Detection Task 2: Detecting Machine-Generated
Academic Essays in English and Arabic Using ELECTRA and Stylometry
|
cs.CL cs.AI
|
Recent research has investigated the problem of detecting machine-generated
essays for academic purposes. To address this challenge, this research utilizes
pre-trained, transformer-based models fine-tuned on Arabic and English academic
essays with stylometric features. Custom models based on ELECTRA for English
and AraELECTRA for Arabic were trained and evaluated using a benchmark dataset.
The proposed models achieved excellent results, with an F1-score of 99.7%,
ranking 2nd out of 26 teams in the English subtask, and 98.4%, finishing 1st
out of 23 teams in the Arabic one.
|
2501.05478
|
Language and Planning in Robotic Navigation: A Multilingual Evaluation
of State-of-the-Art Models
|
cs.CL cs.AI cs.CV cs.LG cs.RO
|
Large Language Models (LLMs) such as GPT-4, trained on huge amounts of data
spanning multiple domains, exhibit significant reasoning,
understanding, and planning capabilities across various tasks. This study
presents the first-ever work in Arabic language integration within the
Vision-and-Language Navigation (VLN) domain in robotics, an area that has been
notably underexplored in existing research. We perform a comprehensive
evaluation of state-of-the-art multi-lingual Small Language Models (SLMs),
including GPT-4o mini, Llama 3 8B, and Phi-3 medium 14B, alongside the
Arabic-centric LLM, Jais. Our approach utilizes the NavGPT framework, a pure
LLM-based instruction-following navigation agent, to assess the impact of
language on navigation reasoning through zero-shot sequential action prediction
using the R2R dataset. Through comprehensive experiments, we demonstrate that
our framework is capable of high-level planning for navigation tasks when
provided with instructions in both English and Arabic. However, certain models
struggled with reasoning and planning in the Arabic language due to inherent
limitations in their capabilities, sub-optimal performance, and parsing issues.
These findings highlight the importance of enhancing planning and reasoning
capabilities in language models for effective navigation, emphasizing this as a
key area for further development while also unlocking the potential of
Arabic-language models for impactful real-world applications.
|
2501.05479
|
Practical Design and Benchmarking of Generative AI Applications for
Surgical Billing and Coding
|
cs.CL cs.LG
|
Background: Healthcare has many manual processes that can benefit from
automation and augmentation with Generative Artificial Intelligence (AI),
including the medical billing and coding process. However, current foundational Large
Language Models (LLMs) perform poorly when tasked with generating accurate
International Classification of Diseases, 10th edition, Clinical Modification
(ICD-10-CM) and Current Procedural Terminology (CPT) codes. Additionally, there
are many security and financial challenges in the application of generative AI
to healthcare. We present a strategy for developing generative AI tools in
healthcare, specifically for medical billing and coding, that balances
accuracy, accessibility, and patient privacy.
Methods: We fine-tune the PHI-3 Mini and PHI-3 Medium LLMs using
institutional data and compare the results against the PHI-3 base model, a
PHI-3 RAG application, and GPT-4o. We use the post-operative surgical report as
input and the patient's billing claim (the associated ICD-10, CPT, and
Modifier codes) as the target result. Performance is measured by accuracy of code
generation, proportion of invalid codes, and the fidelity of the billing claim
format.
Results: Both fine-tuned models performed as well as or better than GPT-4o. The
Phi-3 Medium fine-tuned model showed the best performance (ICD-10 Recall and
Precision: 72%, 72%; CPT Recall and Precision: 77%, 79%; Modifier Recall and
Precision: 63%, 64%). The Phi-3 Medium fine-tuned model only fabricated 1% of
ICD-10 codes and 0.6% of CPT codes generated.
Conclusions: Our study shows that a small model that is fine-tuned on
domain-specific data for specific tasks using a simple set of open-source tools
and minimal technological and monetary requirements performs as well as the
larger contemporary consumer models.
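The per-claim recall and precision figures quoted above are set comparisons between predicted and reference billing codes; a minimal sketch (the exact scoring protocol is our assumption, since the abstract does not specify it):

```python
def code_set_metrics(predicted, actual):
    """Recall and precision over sets of billing codes for one claim.

    Treats codes as an unordered set: recall = share of true codes recovered,
    precision = share of generated codes that are correct.
    """
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)
    recall = tp / len(actual) if actual else 1.0
    precision = tp / len(predicted) if predicted else 1.0
    return recall, precision

# Hypothetical example: one CPT code matches, one is wrong, one true code missed.
r, p = code_set_metrics(predicted=["99213", "J18.9"], actual=["J18.9", "99214"])
print(r, p)  # 0.5 0.5
```

The fabrication rate reported in the abstract would be computed separately, as the share of generated codes that do not exist in the ICD-10-CM or CPT code sets at all.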
|
2501.05480
|
The \textit{Questio de aqua et terra}: A Computational Authorship
Verification Study
|
cs.CL cs.DL
|
The Questio de aqua et terra is a cosmological treatise traditionally
attributed to Dante Alighieri. However, the authenticity of this text is
controversial, due to discrepancies with Dante's established works and to the
absence of contemporary references. This study investigates the authenticity of
the Questio via computational authorship verification (AV), a class of
techniques which combine supervised machine learning and stylometry. We build a
family of AV systems and assemble a corpus of 330 13th- and 14th-century Latin
texts, which we use to comparatively evaluate the AV systems through
leave-one-out cross-validation. Our best-performing system achieves high
verification accuracy (F1=0.970) despite the heterogeneity of the corpus in
terms of textual genre. The key contribution to the accuracy of this system is
shown to come from Distributional Random Oversampling (DRO), a technique
specially tailored to text classification which is here used for the first time
in AV.
The application of the AV system to the Questio returns a highly confident
prediction concerning its authenticity. These findings contribute to the debate
on the authorship of the Questio, and highlight DRO's potential in the
application of AV to cultural heritage.
|
2501.05482
|
HP-BERT: A framework for longitudinal study of Hinduphobia on social
media via LLMs
|
cs.CL cs.SI
|
During the COVID-19 pandemic, community tensions intensified, fuelling
Hinduphobic sentiments and discrimination against individuals of Hindu descent
within India and worldwide. Large language models (LLMs) have become prominent
in natural language processing (NLP) tasks and social media analysis, enabling
longitudinal studies of platforms like X (formerly Twitter) for specific issues
during COVID-19. We present an abuse detection and sentiment analysis framework
that offers a longitudinal analysis of Hinduphobia on X (Twitter) during and
after the COVID-19 pandemic. This framework assesses the prevalence and
intensity of Hinduphobic discourse, capturing elements such as derogatory jokes
and racist remarks through sentiment analysis and abuse detection from
pre-trained and fine-tuned LLMs. Additionally, we curate and publish a
"Hinduphobic COVID-19 X (Twitter) Dataset" of 8,000 tweets annotated for
Hinduphobic abuse detection, which is used to fine-tune a BERT model, resulting
in the development of the Hinduphobic BERT (HP-BERT) model. We then further
fine-tune HP-BERT using the SenWave dataset for multi-label sentiment analysis.
Our study encompasses approximately 27.4 million tweets from six countries:
Australia, Brazil, India, Indonesia, Japan, and the United Kingdom.
Our findings reveal a strong correlation between spikes in COVID-19 cases and
surges in Hinduphobic rhetoric, highlighting how political narratives,
misinformation, and targeted jokes contributed to communal polarisation. These
insights provide valuable guidance for developing strategies to mitigate
communal tensions in future crises, both locally and globally. We advocate
implementing automated monitoring and removal of such content on social media
to curb divisive discourse.
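The reported link between spikes in COVID-19 cases and surges in Hinduphobic rhetoric is a correlation between two time series; a minimal sketch with purely hypothetical weekly counts (not the paper's data):

```python
import numpy as np

# Hypothetical weekly series, for illustration only (not the study's 27.4M-tweet data):
cases    = np.array([10, 40, 200, 150, 60, 30], dtype=float)  # reported COVID-19 cases
rhetoric = np.array([ 2,  9,  48,  35, 14,  6], dtype=float)  # tweets flagged as abusive

# Pearson correlation coefficient between the two series; values near +1
# indicate that surges in rhetoric track spikes in cases.
r = np.corrcoef(cases, rhetoric)[0, 1]
print(round(r, 2))
```

In a longitudinal study the flagged-tweet series would come from the HP-BERT classifier applied per week and per country before correlating against case counts.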
|
2501.05483
|
Human Grasp Generation for Rigid and Deformable Objects with Decomposed
VQ-VAE
|
cs.RO cs.GR
|
Generating realistic human grasps is crucial yet challenging for object
manipulation in computer graphics and robotics. Current methods often struggle
to generate detailed and realistic grasps with full finger-object interaction,
as they typically rely on encoding the entire hand and estimating both posture
and position in a single step. Additionally, simulating object deformation
during grasp generation is still difficult, as modeling such deformation
requires capturing the comprehensive relationship among points of the object's
surface. To address these limitations, we propose an improved Decomposed
Vector-Quantized Variational Autoencoder (DVQ-VAE-2), which decomposes the hand
into distinct parts and encodes them separately. This part-aware architecture
allows for more precise management of hand-object interactions. Furthermore, we
introduce a dual-stage decoding strategy that first predicts the grasp type
under skeletal constraints and then identifies the optimal grasp position,
enhancing both the realism and adaptability of the model to unseen
interactions. In addition, we introduce a new Mesh UFormer as the backbone
network to extract the hierarchical structural representations from the mesh
and propose a new normal vector-guided position encoding to simulate the
hand-object deformation. In experiments, our model achieves a relative
improvement of approximately 14.1% in grasp quality compared to
state-of-the-art methods across four widely used benchmarks. Our comparisons
with other backbone networks show relative improvements of 2.23% in Hand-object
Contact Distance and 5.86% in Quality Index on deformable and rigid object
based datasets, respectively. Our source code and model are available at
https://github.com/florasion/D-VQVAE.
|
2501.05484
|
Tuning-Free Long Video Generation via Global-Local Collaborative
Diffusion
|
cs.CV
|
Creating high-fidelity, coherent long videos is a sought-after aspiration.
While recent video diffusion models have shown promising potential, they still
grapple with spatiotemporal inconsistencies and high computational resource
demands. We propose GLC-Diffusion, a tuning-free method for long video
generation. It models the long video denoising process by establishing
denoising trajectories through Global-Local Collaborative Denoising to ensure
overall content consistency and temporal coherence between frames.
Additionally, we introduce a Noise Reinitialization strategy which combines
local noise shuffling with frequency fusion to improve global content
consistency and visual diversity. Further, we propose a Video Motion
Consistency Refinement (VMCR) module that computes the gradient of pixel-wise
and frequency-wise losses to enhance visual consistency and temporal
smoothness. Extensive experiments, including quantitative and qualitative
evaluations on videos of varying lengths (e.g., 3× and 6×
longer), demonstrate that our method effectively integrates with existing video
diffusion models, producing coherent, high-fidelity long videos superior to
previous approaches.
|
2501.05485
|
S2 Chunking: A Hybrid Framework for Document Segmentation Through
Integrated Spatial and Semantic Analysis
|
cs.CL cs.IR cs.LG
|
Document chunking is a critical task in natural language processing (NLP)
that involves dividing a document into meaningful segments. Traditional methods
often rely solely on semantic analysis, ignoring the spatial layout of
elements, which is crucial for understanding relationships in complex
documents. This paper introduces a novel hybrid approach that combines layout
structure, semantic analysis, and spatial relationships to enhance the cohesion
and accuracy of document chunks. By leveraging bounding box information (bbox)
and text embeddings, our method constructs a weighted graph representation of
document elements, which is then clustered using spectral clustering.
Experimental results demonstrate that this approach outperforms traditional
methods, particularly in documents with diverse layouts such as reports,
articles, and multi-column designs. The proposed method also ensures that no
chunk exceeds a specified token length, making it suitable for use cases where
token limits are critical (e.g., language models with input size limitations).
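The graph-then-spectral-clustering step can be sketched in pure numpy: build a weighted adjacency that mixes bounding-box proximity with embedding similarity, then bisect the elements by the sign of the Fiedler vector. The weighting scheme below is our assumption, not the paper's exact formulation.

```python
import numpy as np

def spectral_bisect(bboxes, embeddings, alpha=0.5):
    """Split document elements into two chunks via the graph Laplacian.

    bboxes: list of (x0, y0, x1, y1); embeddings: (N, d) array.
    Edge weight = alpha * spatial proximity + (1 - alpha) * cosine similarity.
    """
    centers = np.array([[(x0 + x1) / 2, (y0 + y1) / 2] for x0, y0, x1, y1 in bboxes])
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    spatial = np.exp(-d / (d.max() + 1e-9))            # near elements -> weight ~1
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    semantic = (e @ e.T + 1) / 2                       # cosine mapped to [0, 1]
    w = alpha * spatial + (1 - alpha) * semantic
    np.fill_diagonal(w, 0.0)
    lap = np.diag(w.sum(axis=1)) - w                   # unnormalized Laplacian
    _, vecs = np.linalg.eigh(lap)                      # eigenvalues ascending
    fiedler = vecs[:, 1]                               # 2nd-smallest eigenvector
    return fiedler >= 0                                # boolean chunk assignment

bboxes = [(0, 0, 10, 2), (0, 3, 10, 5),        # two text lines near the page top
          (0, 50, 10, 52), (0, 53, 10, 55)]    # two lines far below
emb = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
labels = spectral_bisect(bboxes, emb)
print(labels)
```

A production version would recurse on any chunk whose token count exceeds the limit, and use a library implementation (e.g. scikit-learn's `SpectralClustering`) for more than two clusters.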
|
2501.05486
|
Towards an Ontology of Traceable Impact Management in the Food Supply
Chain
|
physics.soc-ph cs.AI
|
The pursuit of quality improvements and accountability in food supply
chains, especially as they relate to food-related outcomes such as hunger,
has become increasingly vital, necessitating a comprehensive approach that
encompasses product quality and its impact on various stakeholders and their
communities. Such an approach offers numerous benefits in increasing product
quality and eliminating superfluous measurements while appraising and
alleviating the broader societal and environmental repercussions. A traceable
impact management model (TIMM) provides an impact structure and a reporting
mechanism that identifies each stakeholder's role in the total impact of food
production and consumption stages.
The model aims to increase traceability's utility in understanding the impact
of changes on communities affected by food production and consumption, aligning
with current and future government requirements, and addressing the needs of
communities and consumers. This holistic approach is further supported by an
ontological model that forms the logical foundation and a unified terminology.
By proposing a holistic and integrated solution across multiple stakeholders,
the model emphasizes quality and the extensive impact of championing
accountability, sustainability, and responsible practices with global
traceability.
With these combined efforts, the food supply chain moves toward a global
tracking and tracing process that not only ensures product quality but also
addresses its impact on a broader scale, fostering accountability,
sustainability, and responsible food production and consumption.
|
2501.05487
|
The Future of AI: Exploring the Potential of Large Concept Models
|
cs.CL
|
The field of Artificial Intelligence (AI) continues to drive transformative
innovations, with significant progress in conversational interfaces, autonomous
vehicles, and intelligent content creation. Since the launch of ChatGPT in late
2022, the rise of Generative AI has marked a pivotal era, with the term Large
Language Models (LLMs) becoming a ubiquitous part of daily life. LLMs have
demonstrated exceptional capabilities in tasks such as text summarization, code
generation, and creative writing. However, these models are inherently limited
by their token-level processing, which restricts their ability to perform
abstract reasoning, conceptual understanding, and efficient generation of
long-form content. To address these limitations, Meta has introduced Large
Concept Models (LCMs), representing a significant shift from traditional
token-based frameworks. LCMs use concepts as foundational units of
understanding, enabling more sophisticated semantic reasoning and context-aware
decision-making. Given the limited academic research on this emerging
technology, our study aims to bridge the knowledge gap by collecting,
analyzing, and synthesizing existing grey literature to provide a comprehensive
understanding of LCMs. Specifically, we (i) identify and describe the features
that distinguish LCMs from LLMs, (ii) explore potential applications of LCMs
across multiple domains, and (iii) propose future research directions and
practical strategies to advance LCM development and adoption.
|
2501.05488
|
EndoDINO: A Foundation Model for GI Endoscopy
|
eess.IV cs.CV
|
In this work, we present EndoDINO, a foundation model for GI endoscopy tasks
that achieves strong generalizability by pre-training on a well-curated image
dataset sampled from the largest known GI endoscopy video dataset in the
literature. Specifically, we pre-trained ViT models with 1B, 307M, and 86M
parameters using datasets ranging from 100K to 10M curated images. Using
EndoDINO as a frozen feature encoder, we achieved state-of-the-art performance
in anatomical landmark classification, polyp segmentation, and Mayo endoscopic
scoring (MES) for ulcerative colitis with only simple decoder heads.
|
2501.05490
|
Interpretable deep learning illuminates multiple structures fluorescence
imaging: a path toward trustworthy artificial intelligence in microscopy
|
q-bio.SC cs.AI eess.IV
|
Live-cell imaging of multiple subcellular structures is essential for
understanding subcellular dynamics. However, conventional multi-color
sequential fluorescence microscopy suffers from significant imaging delays and
permits separate labeling of only a limited number of subcellular structures, resulting in
substantial limitations for real-time live-cell research applications. Here, we
present the Adaptive Explainable Multi-Structure Network (AEMS-Net), a
deep-learning framework that enables simultaneous prediction of two subcellular
structures from a single image. The model normalizes staining intensity and
prioritizes critical image features by integrating attention mechanisms and
brightness adaptation layers. Leveraging the Kolmogorov-Arnold representation
theorem, our model decomposes learned features into interpretable univariate
functions, enhancing the explainability of complex subcellular morphologies. We
demonstrate that AEMS-Net allows real-time recording of interactions between
mitochondria and microtubules, requiring only half the conventional
sequential-channel imaging procedures. Notably, this approach achieves over 30%
improvement in imaging quality compared to traditional deep learning methods,
establishing a new paradigm for long-term, interpretable live-cell imaging that
advances the ability to explore subcellular dynamics.
|
2501.05493
|
Monotonic Learning in the PAC Framework: A New Perspective
|
cs.LG
|
Monotone learning refers to learning processes in which expected performance
consistently improves as more training data is introduced. Non-monotone
behavior of machine learning has been the topic of a series of recent works,
with various proposals that ensure monotonicity by applying transformations or
wrappers on learning algorithms. In this work, from a different perspective, we
tackle the topic of monotone learning within the framework of Probably
Approximately Correct (PAC) learning theory. Following the mechanism that
estimates sample complexity of a PAC-learnable problem, we derive a performance
lower bound for that problem, and prove the monotonicity of that bound as the
sample sizes increase. By calculating the lower bound distribution, we are able
to prove that given a PAC-learnable problem with a hypothesis space that is
either of finite size or of finite VC dimension, any learning algorithm based
on Empirical Risk Minimization (ERM) is monotone if training samples are
independent and identically distributed (i.i.d.). We further carry out
experiments on two concrete machine learning problems, one with a finite
hypothesis set and the other with finite VC dimension, and compare the
empirical risk distributions with the estimated theoretical bound. The
results of the comparison confirm the
monotonicity of learning for the two PAC-learnable problems.
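For a finite hypothesis space, the kind of bound this argument rests on is the standard Hoeffding-plus-union-bound uniform convergence result (shown here as a generic statement, not necessarily the authors' exact derivation):

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size m,
% uniformly over a finite hypothesis space \mathcal{H}:
\sup_{h \in \mathcal{H}}
  \left| \operatorname{err}(h) - \widehat{\operatorname{err}}(h) \right|
  \;\le\; \sqrt{\frac{\ln|\mathcal{H}| + \ln(2/\delta)}{2m}}
```

The right-hand side is strictly decreasing in the sample size m, so the performance lower bound it implies for an ERM learner improves monotonically as more i.i.d. training data arrives, which is the monotonicity phenomenon the paper formalizes.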
|
2501.05494
|
Mathematical Modeling and Machine Learning for Predicting Shade-Seeking
Behavior in Cows Under Heat Stress
|
cs.LG
|
In this paper we develop a mathematical model combined with machine learning
techniques to predict shade-seeking behavior in cows exposed to heat stress.
The approach integrates advanced mathematical features, such as time-averaged
thermal indices and accumulated heat stress metrics, obtained by mathematical
analysis of data from a farm in Titaguas (Valencia, Spain), collected during
the summer of 2023. Two predictive models, Random Forests and Neural Networks,
are compared for accuracy, robustness, and interpretability. The Random Forest
model is highlighted for its balance between precision and explainability,
achieving an RMSE of 14.97. The methodology also employs 5-fold
cross-validation to ensure robustness under real-world conditions. This work
not only advances the mathematical modeling of animal behavior but also
provides useful insights for mitigating heat stress in livestock through
data-driven tools.
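The 5-fold cross-validation protocol can be sketched model-agnostically in numpy (the trivial mean predictor below stands in for the paper's Random Forest; nothing here reproduces their RMSE of 14.97):

```python
import numpy as np

def kfold_rmse(x, y, fit, predict, k=5, seed=0):
    """k-fold cross-validated RMSE for any fit/predict pair (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])                    # train on k-1 folds
        pred = predict(model, x[test])                     # evaluate on held-out fold
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))

# Toy stand-in model: always predict the training mean of the target.
fit = lambda x, y: y.mean()
predict = lambda m, x: np.full(len(x), m)

x = np.arange(100, dtype=float).reshape(-1, 1)  # hypothetical thermal-index feature
y = 2 * x[:, 0] + 1                             # hypothetical shade-seeking score
print(kfold_rmse(x, y, fit, predict) > 0)  # True: the mean predictor has nonzero error
```

Swapping in a real regressor only requires replacing the `fit`/`predict` pair, which is what makes the same protocol reusable across the Random Forest and Neural Network models compared in the paper.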
|
2501.05495
|
LSEBMCL: A Latent Space Energy-Based Model for Continual Learning
|
cs.LG cs.AI cs.CL
|
Continual learning has become essential in many practical applications such
as online news summaries and product classification. The primary challenge is
known as catastrophic forgetting, a phenomenon where a model inadvertently
discards previously learned knowledge when it is trained on new tasks. Existing
solutions involve storing exemplars from previous classes, regularizing
parameters during the fine-tuning process, or assigning different model
parameters to each task. The proposed solution LSEBMCL (Latent Space
Energy-Based Model for Continual Learning) in this work is to use energy-based
models (EBMs) to prevent catastrophic forgetting by sampling data points from
previous tasks when training on new ones. The EBM is a machine learning model
that associates an energy value with each input data point. The proposed method
uses an EBM layer as an outer-generator in the continual learning framework for
NLP tasks. The study demonstrates the efficacy of EBM in NLP tasks, achieving
state-of-the-art results in all experiments.
|
2501.05496
|
FedSA: A Unified Representation Learning via Semantic Anchors for
Prototype-based Federated Learning
|
cs.LG cs.AI
|
Prototype-based federated learning has emerged as a promising approach that
shares lightweight prototypes to transfer knowledge among clients with data
heterogeneity in a model-agnostic manner. However, existing methods often
collect prototypes directly from local models, which inevitably introduce
inconsistencies into representation learning due to the biased data
distributions and differing model architectures among clients. In this paper,
we identify that both statistical and model heterogeneity create a vicious
cycle of representation inconsistency, classifier divergence, and skewed
prototype alignment, which negatively impacts the performance of clients. To
break the vicious cycle, we propose a novel framework named Federated Learning
via Semantic Anchors (FedSA) to decouple the generation of prototypes from
local representation learning. We introduce a novel perspective that uses
simple yet effective semantic anchors serving as prototypes to guide local
models in learning consistent representations. By incorporating semantic
anchors, we further propose anchor-based regularization with margin-enhanced
contrastive learning and anchor-based classifier calibration to correct feature
extractors and calibrate classifiers across clients, achieving intra-class
compactness and inter-class separability of prototypes while ensuring
consistent decision boundaries. We then update the semantic anchors with these
consistent and discriminative prototypes, which iteratively encourage clients
to collaboratively learn a unified data representation with robust
generalization. Extensive experiments under both statistical and model
heterogeneity settings show that FedSA significantly outperforms existing
prototype-based FL methods on various classification tasks.
|
2501.05497
|
Spatial Information Integration in Small Language Models for Document
Layout Generation and Classification
|
cs.CL cs.AI cs.IR
|
Document layout understanding is a field of study that analyzes the spatial
arrangement of information in a document in order to understand its structure
and layout. Models such as LayoutLM (and its subsequent iterations) can
understand semi-structured documents with state-of-the-art (SotA) results;
however, the lack of open
semi-structured data is a limitation in itself. While semi-structured data is
common in everyday life (balance sheets, purchase orders, receipts), there is a
lack of public datasets for training machine learning models for this type of
document. In this investigation, we propose a method to generate new, synthetic
layout information that can help overcome this data shortage. According to
our results, the proposed method performs better than LayoutTransformer,
another popular layout generation method. We also show that, in some scenarios,
text classification can improve when supported by bounding box information.
|
2501.05498
|
Generative Flow Networks: Theory and Applications to Structure Learning
|
cs.LG
|
Without any assumptions about data generation, multiple causal models may
explain our observations equally well. To avoid selecting a single arbitrary
model that could result in unsafe decisions if it does not match reality, it is
therefore essential to maintain a notion of epistemic uncertainty about our
possible candidates. This thesis studies the problem of structure learning from
a Bayesian perspective, approximating the posterior distribution over the
structure of a causal model, represented as a directed acyclic graph (DAG),
given data. It introduces Generative Flow Networks (GFlowNets), a novel class
of probabilistic models designed for modeling distributions over discrete and
compositional objects such as graphs. They treat generation as a sequential
decision making problem, constructing samples of a target distribution defined
up to a normalization constant piece by piece. In the first part of this
thesis, we present the mathematical foundations of GFlowNets, their connections
to existing domains of machine learning and statistics such as variational
inference and reinforcement learning, and their extensions beyond discrete
problems. In the second part of this thesis, we show how GFlowNets can
approximate the posterior distribution over DAG structures of causal Bayesian
Networks, along with the parameters of its causal mechanisms, given
observational and experimental data.
|
2501.05499
|
Generalization of Urban Wind Environment Using Fourier Neural Operator
Across Different Wind Directions and Cities
|
cs.LG cs.CE physics.flu-dyn
|
Simulation of urban wind environments is crucial for urban planning,
pollution control, and renewable energy utilization. However, the computational
requirements of high-fidelity computational fluid dynamics (CFD) methods make
them impractical for real cities. To address these limitations, this study
investigates the effectiveness of the Fourier Neural Operator (FNO) model in
predicting urban wind conditions under different wind directions and urban
layouts. By training the model on velocity data from large eddy simulations,
we evaluate the performance of the model under different urban
configurations and wind conditions. The results show that the FNO model can
provide accurate predictions while significantly reducing the computational
time by 99%. Our innovative approach of dividing the wind field into smaller
spatial blocks for training improves the ability of the FNO model to capture
wind frequency features effectively. The signed distance function (SDF) data
also provides important
spatial building information, enhancing the model's ability to recognize
physical boundaries and generate more realistic predictions. The proposed FNO
approach enhances the AI model's generalizability for different wind directions
and urban layouts.
|
2501.05501
|
Strategy Masking: A Method for Guardrails in Value-based Reinforcement
Learning Agents
|
cs.AI cs.LG cs.MA
|
The use of reward functions to structure AI learning and decision making is
core to the current reinforcement learning paradigm; however, without careful
design of reward functions, agents can learn to solve problems in ways that may
be considered "undesirable" or "unethical." Without thorough understanding of
the incentives a reward function creates, it can be difficult to impose
principled yet general control mechanisms over its behavior. In this paper, we
study methods for constructing guardrails for AI agents that use reward
functions to learn decision making. We introduce a novel approach, which we
call strategy masking, to explicitly learn and then suppress undesirable AI
agent behavior. We apply our method to study lying in AI agents and show that
it can be used to effectively modify agent behavior by suppressing lying
post-training without compromising agent ability to perform effectively.
|
2501.05502
|
Shrink the longest: improving latent space isotropy with simplicial
geometry
|
cs.LG
|
Although transformer-based models have been dominating the field of deep
learning, various studies of their embedding space have shown that they suffer
from "representation degeneration problem": embeddings tend to be distributed
in a narrow cone, making the latent space highly anisotropic. Increasing the
isotropy has shown to improve performance in downstream tasks both in static
and contextual language models. However, most approaches either add
inference overhead or require a substantial amount of data for model
reparametrization. We propose a novel regularization technique based on
simplicial geometry to improve the isotropy of latent representations. The core
idea of our method is based on maximizing the persistent entropy of barcodes
obtained using Vietoris-Rips filtration from contextual embeddings in the
underlying latent space. We demonstrate that the method leads to an increase in
downstream performance while significantly lowering the anisotropy during
fine-tuning by exploiting existing geometric structures instead of
reparametrization.
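For intuition, the persistent entropy of a barcode is the Shannon entropy of its normalized bar lifetimes. A minimal sketch, assuming the lifetimes have already been computed (e.g., from a Vietoris-Rips filtration); names are illustrative:

```python
import math

def persistent_entropy(lifetimes):
    """Shannon entropy of barcode lifetimes normalized to probabilities.

    Higher values indicate more uniform bar lengths; the regularizer
    described in the abstract maximizes this quantity to encourage a
    more isotropic latent space.
    """
    total = sum(lifetimes)
    probs = (l / total for l in lifetimes)
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two equal bars give the maximum entropy for n = 2: log(2) ~= 0.693
print(persistent_entropy([1.0, 1.0]))
```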
|
2501.05503
|
The more polypersonal the better -- a short look on space geometry of
fine-tuned layers
|
cs.CL cs.LG
|
The interpretation of deep learning models is a rapidly growing field, with
particular interest in language models. There are various approaches to this
task, including training simpler models to replicate neural network predictions
and analyzing the latent space of the model. The latter method allows us to not
only identify patterns in the model's decision-making process, but also
understand the features of its internal structure. In this paper, we analyze
the changes in the internal representation of the BERT model when it is trained
with additional grammatical modules and data containing new grammatical
structures (polypersonality). We find that adding a single grammatical layer
causes the model to separate the new and old grammatical systems within itself,
improving the overall performance on perplexity metrics.
|
2501.05510
|
OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video
Understanding?
|
cs.CV cs.AI
|
Temporal Awareness, the ability to reason dynamically based on the timestamp
when a question is raised, is the key distinction between offline and online
video LLMs. Unlike offline models, which rely on complete videos for static,
post hoc analysis, online models process video streams incrementally and
dynamically adapt their responses based on the timestamp at which the question
is posed. Despite its significance, temporal awareness has not been adequately
evaluated in existing benchmarks. To fill this gap, we present OVO-Bench
(Online-VideO-Benchmark), a novel video benchmark that emphasizes the
importance of timestamps for advanced online video understanding capability
benchmarking. OVO-Bench evaluates the ability of video LLMs to reason and
respond to events occurring at specific timestamps under three distinct
scenarios: (1) Backward tracing: trace back to past events to answer the
question. (2) Real-time understanding: understand and respond to events as they
unfold at the current timestamp. (3) Forward active responding: delay the
response until sufficient future information becomes available to answer the
question accurately. OVO-Bench comprises 12 tasks, featuring 644 unique videos
and approximately 2,800 human-curated fine-grained meta-annotations with
precise timestamps. We combine automated generation pipelines with human
curation. With these high-quality samples, we further developed an evaluation
pipeline to systematically query video LLMs along the video timeline.
Evaluations of nine Video-LLMs reveal that, despite advancements on traditional
benchmarks, current models struggle with online video understanding, showing a
significant gap compared to human agents. We hope OVO-Bench will drive progress
in video LLMs and inspire future research in online video reasoning. Our
benchmark and code can be accessed at https://github.com/JoeLeelyf/OVO-Bench.
|
2501.05515
|
Neural Architecture Codesign for Fast Physics Applications
|
cs.LG cond-mat.mtrl-sci hep-ex physics.ins-det
|
We develop a pipeline to streamline neural architecture codesign for physics
applications to reduce the need for ML expertise when designing models for
novel tasks. Our method employs neural architecture search and network
compression in a two-stage approach to discover hardware efficient models. This
approach consists of a global search stage that explores a wide range of
architectures while considering hardware constraints, followed by a local
search stage that fine-tunes and compresses the most promising candidates. We
exceed baseline performance on various tasks and show further speedup through
model compression techniques such as quantization-aware training and neural
network pruning. We synthesize the optimal models to high-level synthesis code
for FPGA
deployment with the hls4ml library. Additionally, our hierarchical search space
provides greater flexibility in optimization, which can easily extend to other
tasks and domains. We demonstrate this with two case studies: Bragg peak
finding in materials science and jet classification in high energy physics,
achieving models with improved accuracy, smaller latencies, or reduced resource
utilization relative to the baseline models.
|
2501.05530
|
Outlyingness Scores with Cluster Catch Digraphs
|
stat.ML cs.LG
|
This paper introduces two novel outlyingness scores (OSs) based on Cluster
Catch Digraphs (CCDs): Outbound Outlyingness Score (OOS) and Inbound
Outlyingness Score (IOS). These scores enhance the interpretability of outlier
detection results. Both OSs employ graph-, density-, and distribution-based
techniques, tailored to high-dimensional data with varying cluster shapes and
intensities. OOS evaluates the outlyingness of a point relative to its nearest
neighbors, while IOS assesses the total "influence" a point receives from
others within its cluster. Both OSs effectively identify global and local
outliers and are invariant to data collinearity. Moreover, IOS is robust to
masking problems. With extensive Monte Carlo simulations, we compare the
performance of both OSs with CCD-based, traditional, and state-of-the-art
outlier detection methods. Both OSs exhibit substantial overall improvements
over the CCD-based methods in both artificial and real-world data sets,
particularly with IOS, which delivers the best overall performance among all
the methods, especially in high-dimensional settings.
Keywords: Outlier detection, Outlyingness score, Graph-based clustering,
Cluster catch digraphs, High-dimensional data.
|
2501.05534
|
OmniJet-${\alpha_{ C}}$: Learning point cloud calorimeter simulations
using generative transformers
|
hep-ph cs.LG hep-ex physics.ins-det
|
We show the first use of generative transformers for generating calorimeter
showers as point clouds in a high-granularity calorimeter. Using the tokenizer
and generative part of the OmniJet-${\alpha}$ model, we represent the hits in
the detector as sequences of integers. This model allows variable-length
sequences, which means that it supports realistic shower development and does
not need to be conditioned on the number of hits. Since the tokenization
represents the showers as point clouds, the model learns the geometry of the
showers without being restricted to any particular voxel grid.
|
2501.05541
|
Customizable LLM-Powered Chatbot for Behavioral Science Research
|
cs.LG
|
The rapid advancement of Artificial Intelligence has resulted in the advent
of Large Language Models (LLMs) with the capacity to produce text that closely
resembles human communication. These models have been seamlessly integrated
into diverse applications, enabling interactive and responsive communication
across multiple platforms. The potential utility of chatbots transcends these
traditional applications, particularly in research contexts, wherein they can
offer valuable insights and facilitate the design of innovative experiments. In
this study, we present a Customizable LLM-Powered Chatbot (CLPC), a web-based
chatbot system designed to assist in behavioral science research. The system is
meticulously designed to function as an experimental instrument rather than a
conventional chatbot, necessitating users to input a username and experiment
code upon access. This setup facilitates precise data cross-referencing,
thereby augmenting the integrity and applicability of the data collected for
research purposes. The system can easily be expanded to accommodate new basic
events as needed, and it allows researchers to integrate their own logging
events without implementing a separate logging mechanism. It is worth noting
that our system was built primarily to assist behavioral science research, but
it is not limited to that domain: it can easily be adapted to support
information retrieval research or interaction with chatbot agents in general.
|
2501.05548
|
Switched Optimal Control with Dwell Time Constraints
|
math.OC cs.SY eess.SY
|
This paper presents an embedding-based approach for solving switched optimal
control problems (SOCPs) with dwell time constraints. At first, an embedded
optimal control problem (EOCP) is defined by replacing the discrete switching
signal with a continuous embedded variable that can take intermediate values
between the discrete modes. While embedding enables solutions of SOCPs via
conventional techniques, optimal solutions of EOCPs often involve nonexistent
modes and thus may not be feasible for the SOCP. In the modified EOCP (MEOCP),
a concave function is added to the cost function to enforce a bang-bang
solution in the embedded variable, which results in feasible solutions for the
SOCP. However, the MEOCP cannot guarantee the satisfaction of dwell-time
constraints.
In this paper, an MEOCP is combined with a filter layer to remove switching
times that violate the dwell time constraint. Insertion gradients are used to
minimize the effect of the filter on the optimal cost.
|
2501.05550
|
Emergent weight morphologies in deep neural networks
|
cs.LG cond-mat.dis-nn
|
Whether deep neural networks can exhibit emergent behaviour is not only
relevant for understanding how deep learning works; it is also pivotal for
estimating potential security risks of increasingly capable artificial
intelligence systems. Here, we show that training deep neural networks gives
rise to emergent weight morphologies independent of the training data.
Specifically, in analogy to condensed matter physics, we derive a theory that
predicts that the homogeneous state of deep neural networks is unstable in a
way that leads to the emergence of periodic channel structures. We verified these
structures by performing numerical experiments on a variety of data sets. Our
work demonstrates emergence in the training of deep neural networks, which
impacts the achievable performance of deep neural networks.
|
2501.05552
|
The dynamics of meaning through time: Assessment of Large Language
Models
|
cs.CL cs.AI
|
Understanding how large language models (LLMs) grasp the historical context
of concepts and their semantic evolution is essential in advancing artificial
intelligence and linguistic studies. This study aims to evaluate the
capabilities of various LLMs in capturing temporal dynamics of meaning,
specifically how they interpret terms across different time periods. We analyze
a diverse set of terms from multiple domains, using tailored prompts and
measuring responses through both objective metrics (e.g., perplexity and word
count) and subjective human expert evaluations. Our comparative analysis
includes prominent models like ChatGPT, GPT-4, Claude, Bard, Gemini, and Llama.
Findings reveal marked differences in each model's handling of historical
context and semantic shifts, highlighting both strengths and limitations in
temporal semantic understanding. These insights offer a foundation for refining
LLMs to better address the evolving nature of language, with implications for
historical text analysis, AI design, and applications in digital humanities.
|
2501.05554
|
LLMQuoter: Enhancing RAG Capabilities Through Efficient Quote Extraction
From Large Contexts
|
cs.CL cs.AI
|
We introduce LLMQuoter, a lightweight, distillation-based model designed to
enhance Retrieval Augmented Generation (RAG) by extracting the most relevant
textual evidence for downstream reasoning tasks. Built on the LLaMA-3B
architecture and fine-tuned with Low-Rank Adaptation (LoRA) on a 15,000-sample
subset of HotpotQA, LLMQuoter adopts a "quote-first-then-answer" strategy,
efficiently identifying key quotes before passing curated snippets to reasoning
models. This workflow reduces cognitive overhead and outperforms full-context
approaches like Retrieval-Augmented Fine-Tuning (RAFT), achieving over 20-point
accuracy gains across both small and large language models. By leveraging
knowledge distillation from a high-performing teacher model, LLMQuoter achieves
competitive results in a resource-efficient fine-tuning setup. It democratizes
advanced RAG capabilities, delivering significant performance improvements
without requiring extensive model retraining. Our results highlight the
potential of distilled quote-based reasoning to streamline complex workflows,
offering a scalable and practical solution for researchers and practitioners
alike.
|
2501.05555
|
Improving Zero-Shot Object-Level Change Detection by Incorporating
Visual Correspondence
|
cs.CV cs.AI
|
Detecting object-level changes between two images across possibly different
views is a core task in many applications that involve visual inspection or
camera surveillance. Existing change-detection approaches suffer from three
major limitations: (1) lack of evaluation on image pairs that contain no
changes, leading to unreported false positive rates; (2) lack of
correspondences (i.e., localizing the regions before and after a change); and
(3) poor zero-shot generalization across different domains. To address these
issues, we introduce a novel method that leverages change correspondences (a)
during training to improve change detection accuracy, and (b) at test time, to
minimize false positives. That is, we harness the supervision labels of where
an object is added or removed to supervise change detectors, improving their
accuracy over previous work by a large margin. Our work is also the first to
predict correspondences between pairs of detected changes using estimated
homography and the Hungarian algorithm. Our model demonstrates superior
performance over existing methods, achieving state-of-the-art results in change
detection and change correspondence accuracy across both in-distribution and
zero-shot benchmarks.
|
2501.05558
|
Quantum Simplicial Neural Networks
|
cs.NE
|
Graph Neural Networks (GNNs) excel at learning from graph-structured data but
are limited to modeling pairwise interactions, insufficient for capturing
higher-order relationships present in many real-world systems. Topological Deep
Learning (TDL) has allowed for systematic modeling of hierarchical higher-order
interactions by relying on combinatorial topological spaces such as simplicial
complexes. In parallel, Quantum Neural Networks (QNNs) have been introduced to
leverage quantum mechanics for enhanced computational and learning power. In
this work, we present the first Quantum Topological Deep Learning Model:
Quantum Simplicial Networks (QSNs), being QNNs operating on simplicial
complexes. QSNs are a stack of Quantum Simplicial Layers, which are inspired by
the Ising model to encode higher-order structures into quantum states.
Experiments on synthetic classification tasks show that QSNs can outperform
classical simplicial TDL models in accuracy and efficiency, demonstrating the
potential of combining quantum computing with TDL for processing data on
combinatorial topological spaces.
|
2501.05559
|
Soup to go: mitigating forgetting during continual learning with model
averaging
|
cs.LG cs.AI
|
In continual learning, where task data arrives in a sequence, fine-tuning on
later tasks will often lead to performance degradation on earlier tasks. This
is especially pronounced when these tasks come from diverse domains. In this
setting, how can we mitigate catastrophic forgetting of earlier tasks and
retain what the model has learned with minimal computational expense? Inspired
by other merging methods and L2-regression, we propose Sequential Fine-tuning
with Averaging (SFA), a method that merges currently training models with
earlier checkpoints during the course of training. SOTA approaches typically
maintain a data buffer of past tasks or impose a penalty at each gradient step.
In contrast, our method achieves comparable results without the need to store
past data, or multiple copies of parameters for each gradient step.
Furthermore, our method outperforms common merging techniques such as Task
Arithmetic, TIES Merging, and WiSE-FT, as well as other penalty methods like L2
and Elastic Weight Consolidation. In turn, our method offers insight into the
benefits of merging partially-trained models during training across both image
and language domains.
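The core merging step, averaging the weights of the model currently being fine-tuned with earlier task checkpoints, can be sketched as follows. This is a simplified illustration over plain weight dictionaries; the paper's actual procedure, schedule, and weighting may differ:

```python
def average_checkpoints(checkpoints):
    """Element-wise mean of a list of weight dictionaries.

    In an SFA-style loop, `checkpoints` would contain the model being
    fine-tuned on the current task plus checkpoints saved after earlier
    tasks, so the merge pulls parameters back toward past solutions.
    """
    keys = checkpoints[0].keys()
    n = len(checkpoints)
    return {k: sum(c[k] for c in checkpoints) / n for k in keys}

# Merging the current model with one earlier checkpoint:
merged = average_checkpoints([{"w": 2.0, "b": 0.0}, {"w": 4.0, "b": 1.0}])
print(merged)  # {'w': 3.0, 'b': 0.5}
```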
|
2501.05563
|
Prediction-Assisted Online Distributed Deep Learning Workload Scheduling
in GPU Clusters
|
cs.DC cs.LG
|
The recent explosive growth of deep learning (DL) models has necessitated a
compelling need for efficient job scheduling for distributed deep learning
training with mixed parallelisms (DDLwMP) in GPU clusters. This paper proposes
an adaptive shortest-remaining-processing-time-first (A-SRPT) scheduling
algorithm, a novel prediction-assisted online scheduling approach designed to
mitigate the challenges associated with DL cluster scheduling. By modeling each
job as a graph corresponding to heterogeneous Deep Neural Network (DNN) models
and their associated distributed training configurations, A-SRPT strategically
assigns jobs to the available GPUs, thereby minimizing inter-server
communication overhead. Observing that most DDLwMP jobs recur, A-SRPT
incorporates a random forest regression model to predict training iterations.
Crucially, A-SRPT maps the complex scheduling problem into a single-machine
instance, which is addressed optimally by a preemptive
"shortest-remaining-processing-time-first" strategy. This optimized solution
serves as a guide for actual job scheduling within the GPU clusters, leading to
a theoretically provable competitive scheduling efficiency. We conduct
extensive real-world testbed and simulation experiments to verify our proposed
algorithms.
|
2501.05564
|
Analog Bayesian neural networks are insensitive to the shape of the
weight distribution
|
cs.LG cs.AR stat.ML
|
Recent work has demonstrated that Bayesian neural networks (BNNs) trained
with mean field variational inference (MFVI) can be implemented in analog
hardware, promising orders of magnitude energy savings compared to the standard
digital implementations. However, while Gaussians are typically used as the
variational distribution in MFVI, it is difficult to precisely control the
shape of the noise distributions produced by sampling analog devices. This
paper introduces a method for MFVI training using real device noise as the
variational distribution. Furthermore, we demonstrate empirically that the
predictive distributions from BNNs with the same weight means and variances
converge to the same distribution, regardless of the shape of the variational
distribution. This result suggests that analog device designers do not need to
consider the shape of the device noise distribution when implementing BNNs
that perform MFVI in hardware.
|
2501.05566
|
Vision-Language Models for Autonomous Driving: CLIP-Based Dynamic Scene
Understanding
|
cs.CV cs.AI cs.CY
|
Scene understanding is essential for enhancing driver safety, generating
human-centric explanations for Automated Vehicle (AV) decisions, and leveraging
Artificial Intelligence (AI) for retrospective driving video analysis. This
study developed a dynamic scene retrieval system using Contrastive
Language-Image Pretraining (CLIP) models, which can be optimized for real-time
deployment on edge devices. The proposed system outperforms state-of-the-art
in-context learning methods, including the zero-shot capabilities of GPT-4o,
particularly in complex scenarios. By conducting frame-level analysis on the
Honda Scenes Dataset, which contains a collection of about 80 hours of
annotated driving videos capturing diverse real-world road and weather
conditions, our study highlights the robustness of CLIP models in learning
visual concepts from natural language supervision. Results also showed that
fine-tuning the CLIP models, such as ViT-L/14 and ViT-B/32, significantly
improved scene classification, achieving a top F1 score of 91.1%. These results
demonstrate the ability of the system to deliver rapid and precise scene
recognition, which can be used to meet the critical requirements of Advanced
Driver Assistance Systems (ADAS). This study shows the potential of CLIP models
to provide scalable and efficient frameworks for dynamic scene understanding
and classification. Furthermore, this work lays the groundwork for advanced
autonomous vehicle technologies by fostering a deeper understanding of driver
behavior, road conditions, and safety-critical scenarios, marking a significant
step toward smarter, safer, and more context-aware autonomous driving systems.
|
2501.05567
|
Approximate Supervised Object Distance Estimation on Unmanned Surface
Vehicles
|
cs.CV cs.AI
|
Unmanned surface vehicles (USVs) and boats are increasingly important in
maritime operations, yet their deployment is limited due to costly sensors and
complexity. LiDAR, radar, and depth cameras are either costly, yield sparse
point clouds or are noisy, and require extensive calibration. Here, we
introduce a novel approach for approximate distance estimation in USVs using
supervised object detection. We collected a dataset comprising images with
manually annotated bounding boxes and corresponding distance measurements.
Leveraging this data, we propose a specialized branch of an object detection
model, not only to detect objects but also to predict their distances from the
USV. This method offers a cost-efficient and intuitive alternative to
conventional distance measurement techniques, aligning more closely with human
estimation capabilities. We demonstrate its application in a marine assistance
system that alerts operators to nearby objects such as boats, buoys, or other
waterborne hazards.
|
2501.05580
|
Physics-Driven Learning for Inverse Problems in Quantum Chromodynamics
|
hep-lat cs.LG hep-ph nucl-th
|
The integration of deep learning techniques and physics-driven designs is
reforming the way we address inverse problems, in which accurate physical
properties are extracted from complex data sets. This is particularly relevant
for quantum chromodynamics (QCD), the theory of strong interactions, with its
inherent limitations in observational data and demanding computational
approaches. This perspective highlights advances and potential of
physics-driven learning methods, focusing on predictions of physical quantities
towards QCD physics, and drawing connections to machine learning (ML). It is
shown that the fusion of ML and physics can lead to more efficient and reliable
problem-solving strategies. Key ideas of ML, methodology of embedding physics
priors, and generative models as inverse modelling of physical probability
distributions are introduced. Specific applications cover first-principle
lattice calculations, and QCD physics of hadrons, neutron stars, and heavy-ion
collisions. These examples provide a structured and concise overview of how
incorporating prior knowledge such as symmetry, continuity and equations into
deep learning designs can address diverse inverse problems across different
physical sciences.
|
2501.05583
|
Learned Discrepancy Reconstruction and Benchmark Dataset for Magnetic
Particle Imaging
|
math.NA cs.LG cs.NA
|
Magnetic Particle Imaging (MPI) is an emerging imaging modality based on the
magnetic response of superparamagnetic iron oxide nanoparticles to achieve
high-resolution and real-time imaging without harmful radiation. One key
challenge in the MPI image reconstruction task arises from its underlying noise
model, which does not fulfill the implicit Gaussian assumptions that are made
when applying traditional reconstruction approaches. To address this challenge,
we introduce the Learned Discrepancy Approach, a novel learning-based
reconstruction method for inverse problems that includes a learned discrepancy
function. It enhances traditional techniques by incorporating an invertible
neural network to explicitly model problem-specific noise distributions. This
approach does not rely on implicit Gaussian noise assumptions, making it
especially suited to handle the sophisticated noise model in MPI and also
applicable to other inverse problems. To further advance MPI reconstruction
techniques, we introduce the MPI-MNIST dataset - a large collection of
simulated MPI measurements derived from the MNIST dataset of handwritten
digits. The dataset includes noise-perturbed measurements generated from
state-of-the-art model-based system matrices and measurements of a preclinical
MPI scanner device. This provides a realistic and flexible environment for
algorithm testing. Validated against the MPI-MNIST dataset, our method
demonstrates significant improvements in reconstruction quality in terms of
structural similarity when compared to classical reconstruction techniques.
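A minimal sketch of the core idea: replace the implicit-Gaussian L2 discrepancy with one adapted to an empirical noise distribution. The paper models the full noise distribution with an invertible neural network; the linear whitening below is a simplified stand-in for illustration only.

```python
import numpy as np

def mahalanobis_discrepancy(residual, noise_samples, eps=1e-6):
    """Noise-adapted discrepancy: whiten the residual with the empirical
    noise covariance before taking the squared norm.

    Classical least-squares reconstruction scores a residual by ||r||^2,
    which implicitly assumes white Gaussian noise.  Estimating the noise
    covariance from calibration samples and whitening first is a linear
    stand-in for a learned (invertible-network) noise model.
    """
    cov = np.cov(noise_samples, rowvar=False)
    cov += eps * np.eye(cov.shape[0])
    L = np.linalg.cholesky(np.linalg.inv(cov))
    z = L.T @ residual  # whitened residual
    return float(z @ z)
```

For white noise this reduces to the plain squared norm; for correlated noise it scores "typical" noise residuals correctly where L2 would not.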
|
2501.05588
|
Enforcing Fundamental Relations via Adversarial Attacks on Input
Parameter Correlations
|
cs.LG hep-ex
|
Correlations between input parameters play a crucial role in many scientific
classification tasks, since these are often related to fundamental laws of
nature. For example, in high energy physics, one of the common deep learning
use-cases is the classification of signal and background processes in particle
collisions. In many such cases, the fundamental principles of the correlations
between observables are often better understood than the actual distributions
of the observables themselves. In this work, we present a new adversarial
attack algorithm called Random Distribution Shuffle Attack (RDSA), emphasizing
the correlations between observables in the network rather than individual
feature characteristics. Correct application of the proposed novel attack can
result in a significant improvement in classification performance -
particularly in the context of data augmentation - when using the generated
adversaries within adversarial training. Since correlations between input
features are also crucial in many other disciplines, we demonstrate RDSA's
effectiveness on six classification tasks, including two particle collision
challenges (using CERN Open Data), hand-written digit recognition (MNIST784),
human activity recognition (HAR), weather forecasting (Rain in Australia), and
ICU patient mortality (MIMIC-IV), demonstrating a general use case beyond
fundamental physics for this new type of adversarial attack algorithm.
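One plausible reading of the correlation-focused attack, sketched below: independently permuting selected feature columns across samples preserves each column's marginal distribution while destroying its correlation with the other features. This is an illustration of the idea, not the paper's exact algorithm.

```python
import numpy as np

def rdsa_shuffle(X, feature_indices, rng=None):
    """Correlation-breaking shuffle (illustrative reading of RDSA).

    Permuting each selected column independently across samples keeps
    every column's marginal distribution intact but removes its
    correlation with the remaining features, so the adversaries differ
    from the real data only in inter-feature correlations.
    """
    if rng is None:
        rng = np.random.default_rng()
    X_adv = X.copy()
    for j in feature_indices:
        X_adv[:, j] = rng.permutation(X_adv[:, j])
    return X_adv
```

Such adversaries can then be mixed into training as data augmentation, as the abstract describes.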
|
2501.05590
|
Negative Ties Highlight Hidden Extremes in Social Media Polarization
|
physics.soc-ph cs.SI
|
Human interactions in the online world comprise a combination of positive and
negative exchanges. These diverse interactions can be captured using signed
network representations, where edges take positive or negative weights to
indicate the sentiment of the interaction between individuals. Signed networks
offer valuable insights into online political polarization by capturing
antagonistic interactions and ideological divides on social media platforms.
This study analyzes polarization on Menéame, a Spanish social media platform that
facilitates engagement with news stories through comments and voting. Using a
dual-method approach -- Signed Hamiltonian Eigenvector Embedding for Proximity
(SHEEP) for signed networks and Correspondence Analysis (CA) for unsigned
networks -- we investigate how including negative ties enhances the
understanding of structural polarization levels across different conversation
topics on the platform. We find that the unsigned Menéame network accurately
delineates ideological communities, but negative ties are necessary for
detecting extreme users who engage in antagonistic behaviors. We also show that
far-left users are more likely to use negative interactions to engage across
ideological lines, while far-right users interact primarily with users similar
to themselves.
|
2501.05591
|
Session-Level Dynamic Ad Load Optimization using Offline Robust
Reinforcement Learning
|
cs.LG
|
Session-level dynamic ad load optimization aims to personalize the density
and types of delivered advertisements in real time during a user's online
session by dynamically balancing user experience quality and ad monetization.
Traditional causal learning-based approaches struggle with key technical
challenges, especially in handling confounding bias and distribution shifts. In
this paper, we develop an offline deep Q-network (DQN)-based framework that
effectively mitigates confounding bias in dynamic systems and demonstrates more
than 80% offline gains compared to the best causal learning-based production
baseline. Moreover, to improve the framework's robustness against unanticipated
distribution shifts, we further enhance our framework with a novel offline
robust dueling DQN approach. This approach achieves more stable rewards on
multiple OpenAI-Gym datasets as perturbations increase, and provides an
additional 5% offline gains on real-world ad delivery data.
Deployed across multiple production systems, our approach has achieved
outsized topline gains. Post-launch online A/B tests have shown double-digit
improvements in the engagement-ad score trade-off efficiency, significantly
enhancing our platform's capability to serve both consumers and advertisers.
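For reference, the aggregation step that defines a dueling DQN head can be written in a few lines; the offline training and robustness machinery from the abstract is not shown.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage makes the value/advantage
    decomposition identifiable, stabilizing the state-value estimate;
    this head is one ingredient of a robust dueling DQN.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean(axis=-1, keepdims=True)
```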
|
2501.05593
|
Bounds on Box Codes
|
cs.IT math.CO math.IT
|
Let $n_q(M,d)$ be the minimum length of a $q$-ary code of size $M$ and
minimum distance $d$. Bounding $n_q(M,d)$ is a fundamental problem that lies at
the heart of coding theory. This work considers a generalization $n^\bx_q(M,d)$
of $n_q(M,d)$ corresponding to codes in which codewords have \emph{protected}
and \emph{unprotected} entries; where (analogs of) distance and of length are
measured with respect to protected entries only. Such codes, here referred to
as \emph{box codes}, have seen prior studies in the context of bipartite graph
covering. Upper and lower bounds on $n^\bx_q(M,d)$ are presented.
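Based on the abstract's description, the distance analog for box codes counts disagreements over protected coordinates only; a small sketch:

```python
from itertools import combinations

def box_distance(u, v, protected):
    """Analog of Hamming distance for box codes: disagreements are
    counted over the protected coordinates only."""
    return sum(1 for i in protected if u[i] != v[i])

def box_min_distance(code, protected):
    """Minimum pairwise protected-coordinate distance over a code."""
    return min(box_distance(u, v, protected)
               for u, v in combinations(code, 2))
```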
|
2501.05601
|
Exploring Large Language Models for Translating Romanian Computational
Problems into English
|
cs.CL
|
Recent studies have suggested that large language models (LLMs) underperform
on mathematical and computer science tasks when these problems are translated
from Romanian into English, compared to their original Romanian format.
Accurate translation is critical for applications ranging from automatic
translations in programming competitions to the creation of high-quality
educational materials, as well as minimizing errors or fraud in human
translations. This study shows that robust large language models (LLMs) can
maintain or even enhance their performance in translating less common languages
when given well-structured prompts. Our findings suggest that LLMs, with
appropriate supervision, can be reliably used for the automatic translation of
IOI (International Olympiad in Informatics)-style tasks. We evaluate several
translation methods across multiple LLMs, including OpenRoLLM, Llama 3.1 8B,
Llama 3.2 3B and GPT-4o, assessing their translation accuracy and performance
stability through repeated runs. Additionally, we augment the OJI (Romanian
County-Level Informatics Olympiad) Romanian dataset with accurate English
translations, enhancing its utility for future LLM training and evaluation.
Through detailed syntactic and semantic analyses, we confirm that with human
oversight, LLMs can serve as a viable solution for multilingual
problem-solving. We also compare the translation quality of LLMs against human
translators, as evaluated by a certified expert, underscoring the potential of
LLMs in real-world scenarios.
|
2501.05605
|
Advancing Personalized Learning Analysis via an Innovative Domain
Knowledge Informed Attention-based Knowledge Tracing Method
|
cs.LG cs.AI cs.CY
|
Emerging Knowledge Tracing (KT) models, particularly deep learning and
attention-based Knowledge Tracing, have shown great potential in realizing
personalized learning analysis via prediction of students' future performance
based on their past interactions. The existing methods mainly focus on
immediate past interactions or individual concepts without accounting for
dependencies between knowledge concepts, referred to as knowledge concept
routes, which can be critical to advancing the understanding of students'
learning outcomes. To address this, we propose an innovative
attention-based method by effectively incorporating the domain knowledge of
knowledge concept routes in the given curriculum. Additionally, we leverage
XES3G5M dataset, a benchmark dataset with rich auxiliary information for
knowledge concept routes, to evaluate and compare the performance of our
proposed method to the seven State-of-the-art (SOTA) deep learning models.
|
2501.05606
|
Harmonizing Metadata of Language Resources for Enhanced Querying and
Accessibility
|
cs.CL cs.IR
|
This paper addresses the harmonization of metadata from diverse repositories
of language resources (LRs). Leveraging linked data and RDF techniques, we
integrate data from multiple sources into a unified model based on DCAT and
META-SHARE OWL ontology. Our methodology supports text-based search, faceted
browsing, and advanced SPARQL queries through Linghub, a newly developed
portal. Real user queries from the Corpora Mailing List (CML) were evaluated to
assess Linghub's capability to satisfy actual user needs. Results indicate that
while some limitations persist, many user requests can be successfully
addressed. The study highlights significant metadata issues and advocates for
adherence to open vocabularies and standards to enhance metadata harmonization.
This initial research underscores the importance of API-based access to LRs,
promoting machine usability and data subset extraction for specific purposes,
paving the way for more efficient and standardized LR utilization.
|
2501.05610
|
Towards Probabilistic Inference of Human Motor Intentions by Assistive
Mobile Robots Controlled via a Brain-Computer Interface
|
cs.RO cs.ET cs.HC cs.LG
|
Assistive mobile robots are a transformative technology that helps persons
with disabilities regain the ability to move freely. Although autonomous
wheelchairs significantly reduce user effort, they still require human input to
allow users to maintain control and adapt to changing environments. Brain
Computer Interface (BCI) stands out as a highly user-friendly option that does
not require physical movement. Current BCI systems can understand whether users
want to accelerate or decelerate, but they implement these changes in discrete
speed steps rather than allowing for smooth, continuous velocity adjustments.
This limitation prevents the systems from mimicking the natural, fluid speed
changes seen in human self-paced motion. The authors aim to address this
limitation by redesigning the perception-action cycle in a BCI controlled
robotic system: improving how the robotic agent interprets the user's motion
intentions (world state) and implementing these actions in a way that better
reflects natural physical properties of motion, such as inertia and damping.
The scope of this paper focuses on the perception aspect. We asked and answered
a normative question "what computation should the robotic agent carry out to
optimally perceive incomplete or noisy sensory observations?" Empirical EEG
data were collected, and probabilistic representation that served as world
state distributions were learned and evaluated in a Generative Adversarial
Network framework. The ROS framework was established that connected with a
Gazebo environment containing a digital twin of an indoor space and a virtual
model of a robotic wheelchair. Signal processing and statistical analyses were
implemented to identify the most discriminative features in the
spatial-spectral-temporal dimensions, which are then used to construct the
world model for the robotic agent to interpret user motion intentions as a
Bayesian observer.
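The normative computation the abstract names can be sketched as a single Bayesian update over motion intentions. In the paper the likelihood of the noisy EEG observation is learned (in a GAN framework); the fixed likelihood vector below is a hypothetical stand-in that shows the computation only.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One step of a Bayesian observer:
    posterior over intentions is proportional to likelihood times prior."""
    posterior = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
    return posterior / posterior.sum()

# Hypothetical intention classes and one observation's likelihoods.
intentions = ["accelerate", "hold", "decelerate"]
posterior = bayes_update([1 / 3, 1 / 3, 1 / 3], [0.6, 0.3, 0.1])
```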
|
2501.05611
|
Bit-depth color recovery via off-the-shelf super-resolution models
|
eess.IV cs.CV
|
Advancements in imaging technology have enabled hardware to support 10 to 16
bits per channel, facilitating precise manipulation in applications like image
editing and video processing. While deep neural networks promise to recover
high bit-depth representations, existing methods often rely on scale-invariant
image information, limiting performance in certain scenarios. In this paper, we
introduce a novel approach that integrates a super-resolution architecture to
extract detailed a priori information from images. By leveraging interpolated
data generated during the super-resolution process, our method achieves
pixel-level recovery of fine-grained color details. Additionally, we
demonstrate that spatial features learned through the super-resolution process
significantly contribute to the recovery of detailed color depth information.
Experiments on benchmark datasets demonstrate that our approach outperforms
state-of-the-art methods, highlighting the potential of super-resolution for
high-fidelity color restoration.
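For context, the zero-parameter classical baseline that learned bit-depth recovery improves on is bit replication; it preserves black and white exactly but cannot recover fine-grained color detail from spatial context the way the method above does.

```python
import numpy as np

def bit_replicate(x8):
    """Classical 8-to-16-bit expansion by bit replication.

    Copying the 8 source bits into the low byte maps 0 -> 0 and
    255 -> 65535 exactly, i.e. it preserves the dynamic range endpoints,
    but it invents no new color information.
    """
    x = x8.astype(np.uint16)
    return (x << np.uint16(8)) | x
```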
|
2501.05614
|
Watermarking Graph Neural Networks via Explanations for Ownership
Protection
|
cs.CR cs.AI
|
Graph Neural Networks (GNNs) are the mainstream method to learn pervasive
graph data and are widely deployed in industry, making their intellectual
property valuable. However, protecting GNNs from unauthorized use remains a
challenge. Watermarking, which embeds ownership information into a model, is a
potential solution. However, existing watermarking methods have two key
limitations: First, almost all of them focus on non-graph data, with
watermarking GNNs for complex graph data largely unexplored. Second, the de
facto backdoor-based watermarking methods pollute training data and induce
ownership ambiguity through intentional misclassification. Our
explanation-based watermarking inherits the strengths of backdoor-based methods
(e.g., robust to watermark removal attacks), but avoids data pollution and
eliminates intentional misclassification. In particular, our method learns to
embed the watermark in GNN explanations such that this unique watermark is
statistically distinct from other potential solutions, and ownership claims
must show statistical significance to be verified. We theoretically prove that,
even with full knowledge of our method, locating the watermark is an NP-hard
problem. Empirically, our method manifests robustness to removal attacks like
fine-tuning and pruning. By addressing these challenges, our approach marks a
significant advancement in protecting GNN intellectual property.
|
2501.05628
|
Concerns and Values in Human-Robot Interactions: A Focus on Social
Robotics
|
cs.RO cs.HC
|
Robots, as AI with physical instantiation, inhabit our social and physical
world, where their actions have both social and physical consequences, posing
challenges for researchers when designing social robots. This study starts with
a scoping review to identify discussions and potential concerns arising from
interactions with robotic systems. Two focus groups of technology ethics
experts then validated a comprehensive list of key topics and values in
human-robot interaction (HRI) literature. These insights were integrated into
the HRI Value Compass web tool, to help HRI researchers identify ethical values
in robot design. The tool was evaluated in a pilot study. This work benefits
the HRI community by highlighting key concerns in human-robot interactions and
providing an instrument to help researchers design robots that align with human
values, ensuring future robotic systems adhere to these values in social
applications.
|
2501.05629
|
The Impact of Model Scaling on Seen and Unseen Language Performance
|
cs.CL cs.AI
|
The rapid advancement of Large Language Models (LLMs), particularly those
trained on multilingual corpora, has intensified the need for a deeper
understanding of their performance across a diverse range of languages and
model sizes. Our research addresses this critical need by studying the
performance and scaling behavior of multilingual LLMs in text classification
and machine translation tasks across 204 languages. We systematically examine
both seen and unseen languages across three model families of varying sizes in
zero-shot and few-shot settings. Our findings show significant differences in
scaling behavior between zero-shot and two-shot scenarios, with striking
disparities in performance between seen and unseen languages. Model scale has
little effect on zero-shot performance, which remains mostly flat. However, in
two-shot settings, larger models show clear linear improvements in multilingual
text classification. For translation tasks, however, only the instruction-tuned
model showed clear benefits from scaling. Our analysis also suggests that
overall resource levels, not just the proportions of pretraining languages, are
better predictors of model performance, shedding light on what drives
multilingual LLM effectiveness.
|
2501.05631
|
HFMF: Hierarchical Fusion Meets Multi-Stream Models for Deepfake
Detection
|
cs.CV
|
The rapid progress in deep generative models has led to the creation of
incredibly realistic synthetic images that are becoming increasingly difficult
to distinguish from real-world data. The widespread use of Variational Models,
Diffusion Models, and Generative Adversarial Networks has made it easier to
generate convincing fake images and videos, which poses significant challenges
for detecting and mitigating the spread of misinformation. As a result,
developing effective methods for detecting AI-generated fakes has become a
pressing concern. In our research, we propose HFMF, a comprehensive two-stage
deepfake detection framework that leverages both hierarchical cross-modal
feature fusion and multi-stream feature extraction to enhance detection
performance against imagery produced by state-of-the-art generative AI models.
The first component of our approach integrates vision Transformers and
convolutional nets through a hierarchical feature fusion mechanism. The second
component of our framework combines object-level information and a fine-tuned
convolutional net model. We then fuse the outputs from both components via an
ensemble deep neural net, enabling robust classification performances. We
demonstrate that our architecture achieves superior performance across diverse
dataset benchmarks while maintaining calibration and interpretability.
|
2501.05633
|
Regularized Top-$k$: A Bayesian Framework for Gradient Sparsification
|
cs.LG cs.IT eess.SP math.IT
|
Error accumulation is effective for gradient sparsification in distributed
settings: initially-unselected gradient entries are eventually selected as
their accumulated error exceeds a certain level. The accumulation essentially
behaves as a scaling of the learning rate for the selected entries. Although
this property prevents the slow-down of lateral movements in distributed
gradient descent, it can deteriorate convergence in some settings. This work
proposes a novel sparsification scheme that controls the learning rate scaling
of error accumulation. The development of this scheme follows two major steps:
first, gradient sparsification is formulated as an inverse probability
(inference) problem, and the Bayesian optimal sparsification mask is derived as
a maximum-a-posteriori estimator. Using the prior distribution inherited from
Top-$k$, we derive a new sparsification algorithm which can be interpreted as a
regularized form of Top-$k$. We call this algorithm regularized Top-$k$
(RegTop-$k$). It utilizes past aggregated gradients to evaluate posterior
statistics of the next aggregation. It then prioritizes the local accumulated
gradient entries based on these posterior statistics. We validate our
derivation through numerical experiments. In distributed linear regression, it
is observed that while Top-$k$ remains at a fixed distance from the global
optimum, RegTop-$k$ converges to the global optimum at significantly higher
compression ratios. We further demonstrate the generalization of this
observation by employing RegTop-$k$ in distributed training of ResNet-18 on
CIFAR-10, where it noticeably outperforms Top-$k$.
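The baseline that RegTop-$k$ regularizes, plain Top-$k$ with error accumulation, fits in a few lines; unselected entries are carried in an error buffer until their accumulated magnitude gets them selected, which effectively rescales their learning rate (the behavior RegTop-$k$ controls).

```python
import numpy as np

def topk_error_feedback(grad, error, k):
    """Top-k gradient sparsification with error accumulation.

    Returns the sparse update and the new error buffer; the buffer
    holds the mass of unselected entries for future rounds.
    """
    acc = grad + error
    idx = np.argpartition(np.abs(acc), -k)[-k:]
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]
    return sparse, acc - sparse
```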
|
2501.05635
|
Enhancing Unsupervised Graph Few-shot Learning via Set Functions and
Optimal Transport
|
cs.LG
|
Graph few-shot learning has garnered significant attention for its ability to
rapidly adapt to downstream tasks with limited labeled data, sparking
considerable interest among researchers. Recent advancements in graph few-shot
learning models have exhibited superior performance across diverse
applications. Despite their successes, several limitations still exist. First,
existing models in the meta-training phase predominantly focus on
instance-level features within tasks, neglecting crucial set-level features
essential for distinguishing between different categories. Second, these models
often utilize query sets directly on classifiers trained with support sets
containing only a few labeled examples, overlooking potential distribution
shifts between these sets and leading to suboptimal performance. Finally,
previous models typically necessitate abundant labeled data from base
classes to extract transferable knowledge, which is typically infeasible in
real-world scenarios. To address these issues, we propose a novel model named
STAR, which leverages Set funcTions and optimAl tRansport for enhancing
unsupervised graph few-shot learning. Specifically, STAR utilizes expressive
set functions to obtain set-level features in an unsupervised manner and
employs optimal transport principles to align the distributions of support and
query sets, thereby mitigating distribution shift effects. Theoretical analysis
demonstrates that STAR can capture more task-relevant information and enhance
generalization capabilities. Empirically, extensive experiments across multiple
datasets validate the effectiveness of STAR. Our code can be found here.
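Distribution alignment via entropy-regularized optimal transport can be sketched with Sinkhorn iterations; STAR uses optimal transport in this spirit to align support and query sets, though its exact formulation may differ from this generic version.

```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns a transport plan whose row and column marginals match the
    two input distributions a and b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    K = np.exp(-np.asarray(cost, dtype=float) / reg)
    u = np.ones(len(a))
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```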
|
2501.05636
|
Identifying rich clubs in spatiotemporal interaction networks
|
cs.SI physics.soc-ph
|
Spatial networks are widely used in various fields to represent and analyze
interactions or relationships between locations or spatially distributed
entities. There is a network science concept known as the 'rich club'
phenomenon, which describes the tendency of 'rich' nodes to form densely
interconnected sub-networks. Although there are established methods to quantify
topological, weighted, and temporal rich clubs individually, there is limited
research on measuring the rich club effect in spatially-weighted temporal
networks, which could be particularly useful for studying dynamic spatial
interaction networks. To address this gap, we introduce the spatially-weighted
temporal rich club (WTRC), a metric that quantifies the strength and
consistency of connections between rich nodes in a spatiotemporal network.
Additionally, we present a unified rich club framework that distinguishes the
WTRC effect from other rich club effects, providing a way to measure
topological, weighted, and temporal rich club effects together. Through two
case studies of human mobility networks at different spatial scales, we
demonstrate how the WTRC is able to identify significant weighted temporal rich
club effects, whereas the unweighted equivalent in the same network either
fails to detect a rich club effect or inaccurately estimates its significance.
In each case study, we explore the spatial layout and temporal variations
revealed by the WTRC analysis, showcasing its particular value in studying
spatiotemporal interaction networks. This research offers new insights into the
study of spatiotemporal networks, with critical implications for applications
such as transportation, redistricting, and epidemiology.
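The topological base case that the unified framework generalizes is the classical rich-club coefficient, sketched below; the WTRC itself additionally needs edge weights, spatial embedding, and time slices, which are not shown.

```python
import numpy as np

def rich_club_coefficient(adj, k):
    """Unweighted rich-club coefficient phi(k) for an undirected network:
    the edge density among nodes of degree greater than k."""
    adj = np.asarray(adj)
    deg = adj.sum(axis=1)
    rich = np.where(deg > k)[0]
    n = len(rich)
    if n < 2:
        return 0.0
    edges = adj[np.ix_(rich, rich)].sum() / 2  # each edge counted twice
    return 2.0 * edges / (n * (n - 1))
```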
|
2501.05639
|
Scaling Safe Multi-Agent Control for Signal Temporal Logic
Specifications
|
cs.MA cs.RO
|
Existing methods for safe multi-agent control using logic specifications like
Signal Temporal Logic (STL) often face scalability issues. This is because they
rely either on single-agent perspectives or on Mixed Integer Linear Programming
(MILP)-based planners, which are complex to optimize. These methods have proven
to be computationally expensive and inefficient when dealing with a large
number of agents. To address these limitations, we present a new scalable
approach to multi-agent control in this setting. Our method treats the
relationships between agents using a graph structure rather than in terms of a
single-agent perspective. Moreover, it combines a multi-agent collision
avoidance controller with a Graph Neural Network (GNN) based planner, models
the system in a decentralized fashion, and trains on STL-based objectives to
generate safe and efficient plans for multiple agents, thereby optimizing the
satisfaction of complex temporal specifications while also facilitating
multi-agent collision avoidance. Our experiments show that our approach
significantly outperforms existing methods that use a state-of-the-art
MILP-based planner in terms of scalability and performance. The project website
is https://jeappen.com/mastl-gcbf-website/ and the code is at
https://github.com/jeappen/mastl-gcbf .
|
2501.05640
|
Automating Date Format Detection for Data Visualization
|
cs.CL
|
Data preparation, specifically date parsing, is a significant bottleneck in
analytic workflows. To address this, we present two algorithms, one based on
minimum entropy and the other on natural language modeling, which automatically
derive date formats from string data. These algorithms achieve over 90%
accuracy on a large corpus of data columns, streamlining the data preparation
process within visualization environments. The minimum entropy approach is
particularly fast, providing interactive feedback. Our methods simplify date
format extraction, making them suitable for integration into data visualization
tools and databases.
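The shape of the problem can be shown with a simple parse-success heuristic standing in for the paper's minimum-entropy and language-model scorers: try candidate formats against the whole column and let the values vote. The candidate list here is hypothetical; a real tool would enumerate many more formats.

```python
from datetime import datetime

# Hypothetical candidate formats for illustration.
CANDIDATES = ["%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y", "%b %d, %Y"]

def detect_date_format(strings, candidates=CANDIDATES):
    """Pick the candidate format that parses the most column values.

    Ambiguous single values (e.g. "03/04/2021") are decided by how the
    rest of the column votes."""
    def hits(fmt):
        n = 0
        for s in strings:
            try:
                datetime.strptime(s, fmt)
                n += 1
            except ValueError:
                pass
        return n
    return max(candidates, key=hits)
```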
|
2501.05643
|
Iconicity in Large Language Models
|
cs.CL cs.AI
|
Lexical iconicity, a direct relation between a word's meaning and its form,
is an important aspect of every natural language, most commonly manifesting
through sound-meaning associations. Since Large language models' (LLMs') access
to both meaning and sound of text is only mediated (meaning through textual
context, sound through written representation, further complicated by
tokenization), we might expect that the encoding of iconicity in LLMs would be
either insufficient or significantly different from human processing. This
study addresses this hypothesis by having GPT-4 generate highly iconic
pseudowords in artificial languages. To verify that these words actually carry
iconicity, we had their meanings guessed by Czech and German participants
(n=672) and subsequently by LLM-based participants (generated by GPT-4 and
Claude 3.5 Sonnet). The results revealed that humans can guess the meanings of
pseudowords in the generated iconic language more accurately than words in
distant natural languages and that LLM-based participants are even more
successful than humans in this task. This core finding is accompanied by
several additional analyses concerning the universality of the generated
language and the cues that both human and LLM-based participants utilize.
|
2501.05644
|
Interpretable Enzyme Function Prediction via Residue-Level Detection
|
q-bio.BM cs.LG
|
Predicting multiple functions labeled with Enzyme Commission (EC) numbers
from the enzyme sequence is of great significance but remains a challenge due
to its sparse multi-label classification nature, i.e., each enzyme is typically
associated with only a few labels out of more than 6000 possible EC numbers.
However, existing machine learning algorithms generally learn a fixed global
representation for each enzyme to classify all functions, thereby they lack
interpretability and the fine-grained information of some function-specific
local residue fragments may be overwhelmed. Here we present an attention-based
framework, namely ProtDETR (Protein Detection Transformer), by casting enzyme
function prediction as a detection problem. It uses a set of learnable
functional queries to adaptively extract different local representations from
the sequence of residue-level features for predicting different EC numbers.
ProtDETR not only significantly outperforms existing deep learning-based enzyme
function prediction methods, but also provides a new interpretable perspective
on automatically detecting different local regions for identifying different
functions through cross-attentions between queries and residue-level features.
Code is available at https://github.com/yangzhao1230/ProtDETR.
|
2501.05646
|
Efficient Representations for High-Cardinality Categorical Variables in
Machine Learning
|
cs.LG cs.AI
|
High-cardinality categorical variables pose significant challenges in
machine learning, particularly in terms of computational efficiency and model
interpretability. Traditional one-hot encoding often results in
high-dimensional sparse feature spaces, increasing the risk of overfitting and
reducing scalability. This paper introduces novel encoding techniques,
including means encoding, low-rank encoding, and multinomial logistic
regression encoding, to address these challenges. These methods leverage
sufficient representations to generate compact and informative embeddings of
categorical data. We conduct rigorous theoretical analyses and empirical
validations on diverse datasets, demonstrating significant improvements in
model performance and computational efficiency compared to baseline methods.
The proposed techniques are particularly effective in domains requiring
scalable solutions for large datasets, paving the way for more robust and
efficient applications in machine learning.
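The abstract names "means encoding" without details; the standard smoothed target-encoding variant below is one plausible instantiation. Each category's mean is shrunk toward the global mean, with rare categories shrunk the most, which curbs overfitting on a high-cardinality column.

```python
import numpy as np

def means_encode(categories, target, smoothing=10.0):
    """Smoothed means (target) encoding: replace each category by a
    shrinkage estimate of its target mean."""
    target = np.asarray(target, dtype=float)
    global_mean = target.mean()
    enc = {}
    for c in set(categories):
        vals = target[[i for i, ci in enumerate(categories) if ci == c]]
        n = len(vals)
        enc[c] = (n * vals.mean() + smoothing * global_mean) / (n + smoothing)
    return [enc[c] for c in categories]
```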
|
2501.05647
|
Collaboration of Large Language Models and Small Recommendation Models
for Device-Cloud Recommendation
|
cs.IR cs.AI cs.CL cs.DC
|
Large Language Models (LLMs) for Recommendation (LLM4Rec) is a promising
research direction that has demonstrated exceptional performance in this field.
However, its inability to capture real-time user preferences greatly limits the
practical application of LLM4Rec because (i) LLMs are costly to train and infer
frequently, and (ii) LLMs struggle to access real-time data (its large number
of parameters poses an obstacle to deployment on devices). Fortunately, small
recommendation models (SRMs) can effectively supplement these shortcomings of
the LLM4Rec paradigm by consuming minimal resources for frequent training and
inference, and by conveniently accessing real-time data on devices.
In light of this, we designed the Device-Cloud LLM-SRM Collaborative
Recommendation Framework (LSC4Rec) under a device-cloud collaboration setting.
LSC4Rec aims to integrate the advantages of both LLMs and SRMs, as well as the
benefits of cloud and edge computing, achieving a complementary synergy. We
enhance the practicability of LSC4Rec by designing three strategies:
collaborative training, collaborative inference, and intelligent request.
During training, LLM generates candidate lists to enhance the ranking ability
of SRM in collaborative scenarios and enables SRM to update adaptively to
capture real-time user interests. During inference, LLM and SRM are deployed on
the cloud and on the device, respectively. LLM generates candidate lists and
initial ranking results based on user behavior, and the SRM produces reranking results
based on the candidate list, with final results integrating both LLM's and
SRM's scores. The device determines whether a new candidate list is needed by
comparing the consistency of the LLM's and SRM's sorted lists. Our
comprehensive and extensive experimental analysis validates the effectiveness
of each strategy in LSC4Rec.
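The device-side "intelligent request" decision can be sketched as a ranking-consistency check. The abstract says the device compares the consistency of the LLM's and SRM's sorted lists without fixing a metric; top-k overlap is one simple choice, and both `k` and `threshold` below are illustrative.

```python
def needs_new_candidates(llm_ranking, srm_ranking, k=3, threshold=0.5):
    """Request a fresh LLM candidate list when the two rankings diverge.

    Consistency is measured as the overlap of the two top-k item sets;
    a low overlap triggers a new request to the cloud-side LLM."""
    overlap = len(set(llm_ranking[:k]) & set(srm_ranking[:k])) / k
    return overlap < threshold
```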
|
2501.05651
|
A Practical Cross-Layer Approach for ML-Driven Storage Placement in
Warehouse-Scale Computers
|
cs.DC cs.LG
|
Storage systems account for a major portion of the total cost of ownership
(TCO) of warehouse-scale computers, and thus have a major impact on the overall
system's efficiency. Machine learning (ML)-based methods for solving key
problems in storage system efficiency, such as data placement, have shown
significant promise. However, there are few known practical deployments of such
methods. Studying this problem in the context of real-world hyperscale data
center deployments at Google, we identify a number of challenges that we
believe cause this lack of practical adoption. Specifically, prior work assumes
a monolithic model that resides entirely within the storage layer, an
unrealistic assumption in real-world data center deployments. We propose a
cross-layer approach that moves ML out of the storage system and performs it in
the application running on top of it, co-designed with a scheduling algorithm
at the storage layer that consumes predictions from these application-level
models. This approach combines small, interpretable models with a co-designed
heuristic that adapts to different online environments. We build a
proof-of-concept of this approach in a production distributed computation
framework at Google. Evaluations in a test deployment and large-scale
simulation studies using production traces show improvements of as much as
3.47x in TCO savings compared to state-of-the-art baselines. We believe this
work represents a significant step towards more practical ML-driven storage
placement in warehouse-scale computers.
|
2501.05655
|
Downlink Performance of Cell-Free Massive MIMO for LEO Satellite
Mega-Constellation
|
eess.SP cs.IT cs.SY eess.SY math.IT
|
Low-earth orbit (LEO) satellite communication (SatCom) has emerged as a
promising technology for improving wireless connectivity in global areas.
Cell-free massive multiple-input multiple-output (CF-mMIMO), an architecture
recently proposed for next-generation networks, has yet to be fully explored
for LEO satellites. In this paper, we investigate the downlink performance of a
CF-mMIMO LEO SatCom network, where many satellite access points (SAPs)
simultaneously serve the corresponding ground user terminals (UTs). Using tools
from stochastic geometry, we model the locations of SAPs and UTs on surfaces of
concentric spheres using Poisson point processes (PPPs) and present expressions
based on linear minimum-mean-square-error (LMMSE) channel estimation and
conjugate beamforming. Then, we derive the coverage probabilities in both
fading and non-fading scenarios, with significant system parameters such as the
Nakagami fading parameter, number of UTs, number of SAPs, orbital altitude, and
service range brought by the dome angle. Finally, the analytical model is
verified by extensive Monte Carlo simulations. Simulation results show that
stronger line-of-sight (LoS) effects and a more comprehensive service range of
the UT bring higher coverage probability despite existing multi-user
interference. Moreover, we found that there exist optimal numbers of UTs for
different orbital altitudes and dome angles, which provides valuable system
design insights.
|
2501.05656
|
Evidential Deep Learning for Uncertainty Quantification and
Out-of-Distribution Detection in Jet Identification using Deep Neural
Networks
|
hep-ex cs.LG
|
Current methods commonly used for uncertainty quantification (UQ) in deep
learning (DL) models utilize Bayesian methods which are computationally
expensive and time-consuming. In this paper, we provide a detailed study of UQ
based on evidential deep learning (EDL) for deep neural network models designed
to identify jets in high energy proton-proton collisions at the Large Hadron
Collider and explore its utility in anomaly detection. EDL is a DL approach
that treats learning as an evidence acquisition process designed to provide
confidence (or epistemic uncertainty) about test data. Using publicly available
datasets for jet classification benchmarking, we explore hyperparameter
optimizations for EDL applied to the challenge of UQ for jet identification. We
also investigate how the uncertainty is distributed for each jet class, how
this method can be implemented for the detection of anomalies, how the
uncertainty compares with Bayesian ensemble methods, and how the uncertainty
maps onto latent spaces for the models. Our studies uncover some pitfalls of
EDL applied to anomaly detection and a more effective way to quantify
uncertainty from EDL as compared with the foundational EDL setup. These studies
illustrate a methodological approach to interpreting EDL in jet classification
models, providing new insights on how EDL quantifies uncertainty and detects
out-of-distribution data which may lead to improved EDL methods for DL models
applied to classification tasks.
|
2501.05660
|
Fully Decentralized Computation Offloading in Priority-Driven Edge
Computing Systems
|
cs.IT cs.GT cs.SY eess.SY math.IT
|
We develop a novel framework for fully decentralized offloading policy design
in multi-access edge computing (MEC) systems. The system comprises $N$
power-constrained user equipments (UEs) assisted by an edge server (ES) to
process incoming tasks. Tasks are labeled with urgency flags, and in this
paper, we classify them under three urgency levels, namely, high, moderate, and
low urgency. We formulate the problem of designing computation decisions for
the UEs within a large population noncooperative game framework, where each UE
selfishly decides on how to split task execution between its local onboard
processor and the ES. We employ the weighted average age of information (AoI)
metric to quantify information freshness at the UEs. Increased onboard
processing consumes more local power, while increased offloading may incur a
higher average AoI due to other UEs' packets being
offloaded to the same ES. Thus, we use the mean-field game (MFG) formulation to
compute approximate decentralized Nash equilibrium offloading and local
computation policies for the UEs to balance between the information freshness
and local power consumption. Finally, we provide a projected gradient
descent-based algorithm to numerically assess the merits of our approach.
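As a rough illustration of the final step above, a projected gradient descent update for a single UE's local/offload split can be sketched as follows. The quadratic cost, its weights, and the server-load factor are hypothetical stand-ins, not the paper's AoI/power model:

```python
# Illustrative projected gradient descent for one UE's offloading split.
# x in [0, 1] is the fraction processed locally. The cost below is assumed:
# alpha*x^2 models local power use, beta*(1-x)^2*load models AoI degradation
# from offloading under server load. Names and cost form are illustrative.

def project(x, lo=0.0, hi=1.0):
    """Project a scalar onto the feasible interval [lo, hi]."""
    return max(lo, min(hi, x))

def offloading_pgd(alpha=1.0, beta=2.0, load=1.5, lr=0.1, steps=200):
    x = 0.5  # initial local-processing fraction
    for _ in range(steps):
        grad = 2 * alpha * x - 2 * beta * load * (1 - x)  # d(cost)/dx
        x = project(x - lr * grad)  # gradient step, then projection
    return x
```

For this assumed cost the closed-form optimum is beta*load / (alpha + beta*load), so the iteration can be checked against it directly.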
|
2501.05661
|
TAMER: A Test-Time Adaptive MoE-Driven Framework for EHR Representation
Learning
|
cs.LG
|
We propose TAMER, a Test-time Adaptive MoE-driven framework for EHR
Representation learning. TAMER combines a Mixture-of-Experts (MoE) with
Test-Time Adaptation (TTA) to address two critical challenges in EHR modeling:
patient population heterogeneity and distribution shifts. The MoE component
handles diverse patient subgroups, while TTA enables real-time adaptation to
evolving health status distributions when new patient samples are introduced.
Extensive experiments across four real-world EHR datasets demonstrate that
TAMER consistently improves predictive performance for both mortality and
readmission risk tasks when combined with diverse EHR modeling backbones. TAMER
offers a promising approach for dynamic and personalized EHR-based predictions
in practical clinical settings. Code is publicly available at
https://github.com/yhzhu99/TAMER.
|
2501.05662
|
Cascaded Self-Evaluation Augmented Training for Efficient Multimodal
Large Language Models
|
cs.CL cs.AI
|
Efficient Multimodal Large Language Models (EMLLMs) have rapidly advanced
recently. Incorporating Chain-of-Thought (CoT) reasoning and step-by-step
self-evaluation has improved their performance. However, limited parameters
often hinder EMLLMs from effectively using self-evaluation during inference.
Key challenges include synthesizing evaluation data, determining its quantity,
optimizing training and inference strategies, and selecting appropriate
prompts.
To address these issues, we introduce Self-Evaluation Augmented Training
(SEAT). SEAT uses more powerful EMLLMs for CoT reasoning, data selection, and
evaluation generation, then trains EMLLMs with the synthesized data. However,
handling long prompts and maintaining CoT reasoning quality are problematic.
Therefore, we propose Cascaded Self-Evaluation Augmented Training (Cas-SEAT),
which breaks down lengthy prompts into shorter, task-specific cascaded prompts
and reduces costs for resource-limited settings. During data synthesis, we
employ open-source 7B-parameter EMLLMs and annotate a small dataset with short
prompts.
Experiments demonstrate that Cas-SEAT significantly boosts EMLLMs'
self-evaluation abilities, improving performance by 19.68%, 55.57%, and 46.79%
on the MathVista, Math-V, and We-Math datasets, respectively. Additionally, our
Cas-SEAT Dataset serves as a valuable resource for future research in enhancing
EMLLM self-evaluation.
|
2501.05663
|
Learning to Measure Quantum Neural Networks
|
quant-ph cs.AI cs.ET cs.LG cs.NE
|
The rapid progress in quantum computing (QC) and machine learning (ML) has
attracted growing attention, prompting extensive research into quantum machine
learning (QML) algorithms to solve diverse and complex problems. Designing
high-performance QML models demands expert-level proficiency, which remains a
significant obstacle to the broader adoption of QML. A few major hurdles
include crafting effective data encoding techniques and parameterized quantum
circuits, both of which are crucial to the performance of QML models.
Additionally, the measurement phase is frequently overlooked: most current QML
models rely on pre-defined measurement protocols that often fail to account for
the specific problem being addressed. We introduce a novel approach that makes
the observable of the quantum system, specifically the Hermitian matrix,
learnable. Our method features an end-to-end differentiable learning
framework, where the parameterized observable is trained jointly with the
ordinary quantum circuit parameters. Using numerical simulations, we show
that the proposed method can identify observables for variational quantum
circuits that lead to improved outcomes, such as higher classification
accuracy, thereby boosting the overall performance of QML models.
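A minimal sketch of the core idea, assuming a plain state-vector simulation: any complex matrix parameterizes a Hermitian observable via H = (A + A†)/2, and its expectation value on a normalized state is real, so it can serve as a differentiable model output. The dimensions and the random state are illustrative:

```python
import numpy as np

# Learnable-observable sketch: a free complex matrix A yields a Hermitian
# matrix H = (A + A^dagger)/2 whose expectation value <psi|H|psi> is real
# and could be trained alongside circuit parameters. Sizes are illustrative.

rng = np.random.default_rng(0)
d = 4  # 2-qubit Hilbert space dimension

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = 0.5 * (A + A.conj().T)  # Hermitian parameterization

psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)  # normalized state vector

expval = np.vdot(psi, H @ psi)  # <psi|H|psi>; imaginary part vanishes
```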
|
2501.05667
|
TransPlace: Transferable Circuit Global Placement via Graph Neural
Network
|
cs.LG cs.AI cs.AR
|
Global placement, a critical step in designing the physical layout of
computer chips, is essential to optimize chip performance. Prior global
placement methods optimize each circuit design individually from scratch. Their
neglect of transferable knowledge limits solution efficiency and chip
performance as circuit complexity drastically increases. This study presents
TransPlace, a global placement framework that learns to place millions of
mixed-size cells in continuous space. TransPlace introduces i) Netlist Graph to
efficiently model netlist topology, ii) Cell-flow and relative position
encoding to learn SE(2)-invariant representation, iii) a tailored graph neural
network architecture for informed parameterization of placement knowledge, and
iv) a two-stage strategy for coarse-to-fine placement. Compared to
state-of-the-art placement methods, TransPlace, trained on a few high-quality
placements, can place unseen circuits with 1.2x speedup while reducing
congestion by 30%, timing by 9%, and wirelength by 5%.
|
2501.05669
|
LPRnet: A self-supervised registration network for LiDAR and
photogrammetric point clouds
|
cs.CV eess.IV
|
LiDAR and photogrammetry are active and passive remote sensing techniques for
point cloud acquisition, respectively, offering complementary advantages but
producing heterogeneous data. Due to the fundamental differences in sensing mechanisms,
spatial distributions and coordinate systems, their point clouds exhibit
significant discrepancies in density, precision, noise, and overlap. Coupled
with the lack of ground truth for large-scale scenes, integrating the
heterogeneous point clouds is a highly challenging task. This paper proposes a
self-supervised registration network based on a masked autoencoder, focusing on
heterogeneous LiDAR and photogrammetric point clouds. At its core, the method
introduces a multi-scale masked training strategy to extract robust features
from heterogeneous point clouds under self-supervision. To further enhance
registration performance, a rotation-translation embedding module is designed
to effectively capture the key features essential for accurate rigid
transformations. Building upon the robust representations, a transformer-based
architecture seamlessly integrates local and global features, fostering precise
alignment across diverse point cloud datasets. The proposed method demonstrates
strong feature extraction capabilities for both LiDAR and photogrammetric point
clouds, addressing the challenges of acquiring ground truth at the scene level.
Experiments conducted on two real-world datasets validate the effectiveness of
the proposed method in solving heterogeneous point cloud registration problems.
|
2501.05673
|
Network Diffuser for Placing-Scheduling Service Function Chains with
Inverse Demonstration
|
cs.NI cs.AI
|
Network services are increasingly managed by considering chained-up virtual
network functions and relevant traffic flows, known as the Service Function
Chains (SFCs). To deal with sequential arrivals of SFCs in an online fashion,
we must consider two closely-coupled problems - an SFC placement problem that
maps SFCs to servers/links in the network and an SFC scheduling problem that
determines when each SFC is executed. Solving the whole SFC problem targeting
these two optimizations jointly is extremely challenging. In this paper, we
propose a novel network diffuser using conditional generative modeling for this
SFC placing-scheduling optimization. Recent advances in generative AI and
diffusion models have made it possible to generate high-quality images/videos
and decision trajectories from language description. We formulate the SFC
optimization as a problem of generating a state sequence for planning and
perform graph diffusion on the state trajectories to enable extraction of SFC
decisions, with SFC optimization constraints and objectives as conditions. To
address the lack of demonstration data due to NP-hardness and exponential
problem space of the SFC optimization, we also propose a novel and somewhat
maverick approach: rather than solving instances of this difficult
optimization, we start with randomly generated solutions as input, and then
determine appropriate SFC optimization problems that render these solutions
feasible. This inverse demonstration enables us to obtain sufficient expert
demonstrations, i.e., problem-solution pairs, through further optimization. In
our numerical evaluations, the proposed network diffuser outperforms learning
and heuristic baselines, by $\sim$20\% improvement in SFC reward and $\sim$50\%
reduction in SFC waiting time and blocking rate.
|
2501.05675
|
Synergizing Large Language Models and Task-specific Models for Time
Series Anomaly Detection
|
cs.AI cs.LG
|
In anomaly detection, methods based on large language models (LLMs) can
incorporate expert knowledge by reading professional documents, while
task-specific small models excel at extracting normal data patterns and
detecting value fluctuations from training data of target applications.
Inspired by the human nervous system, where the brain stores expert knowledge
and the peripheral nervous system and spinal cord handle specific tasks like
withdrawal and knee-jerk reflexes, we propose CoLLaTe, a framework designed to
facilitate collaboration between LLMs and task-specific models, leveraging the
strengths of both models for anomaly detection.
In particular, we first formulate the collaboration process and identify two
key challenges in the collaboration:
(1) the misalignment between the expression domains of the LLMs and
task-specific small models, and (2) error accumulation arising from the
predictions of both models.
To address these challenges, we then introduce two key components in CoLLaTe:
a model alignment module and a collaborative loss function. Through theoretical
analysis and experimental validation, we demonstrate that these components
effectively mitigate the identified challenges and achieve better performance
than both LLM-based and task-specific models.
|
2501.05680
|
EXION: Exploiting Inter- and Intra-Iteration Output Sparsity for
Diffusion Models
|
cs.AR cs.AI cs.LG
|
Over the past few years, diffusion models have emerged as novel AI solutions,
generating diverse multi-modal outputs from text prompts. Despite their
capabilities, they face challenges in computing, such as excessive latency and
energy consumption due to their iterative architecture. Although prior works
specialized in transformer acceleration can be applied, the iterative nature of
diffusion models remains unresolved. In this paper, we present EXION, the first
SW-HW co-designed diffusion accelerator that solves the computation challenges
by exploiting the unique inter- and intra-iteration output sparsity in
diffusion models. To this end, we propose two SW-level optimizations. First, we
introduce the FFN-Reuse algorithm that identifies and skips redundant
computations in FFN layers across different iterations (inter-iteration
sparsity). Second, we use a modified eager prediction method that employs
two-step leading-one detection to accurately predict the attention score,
skipping unnecessary computations within an iteration (intra-iteration
sparsity). We also introduce a novel data compaction mechanism named ConMerge,
which can enhance HW utilization by condensing and merging sparse matrices into
compact forms. Finally, it has a dedicated HW architecture that supports the
above sparsity-inducing algorithms, translating high output sparsity into
improved energy efficiency and performance. To verify the feasibility of
EXION, we first demonstrate that it has no impact on accuracy in various types
of multi-modal diffusion models. We then instantiate EXION in both server- and
edge-level settings and compare its performance against GPUs with similar
specifications. Our evaluation shows that EXION achieves dramatic improvements
in performance and energy efficiency by 3.2-379.3x and 45.1-3067.6x compared to
a server GPU and by 42.6-1090.9x and 196.9-4668.2x compared to an edge GPU.
|
2501.05684
|
Data driven discovery of human mobility models
|
physics.soc-ph cs.NE
|
Human mobility is a fundamental aspect of social behavior, with broad
applications in transportation, urban planning, and epidemic modeling. However,
for decades new mathematical formulas to model mobility phenomena have been
scarce and usually discovered by analogy to physical processes, such as the
gravity model and the radiation model. These sporadic discoveries are often
thought to rely on intuition and luck in fitting empirical data. Here, we
propose a systematic approach that leverages symbolic regression to
automatically discover interpretable models from human mobility data. Our
approach finds several well-known formulas, such as the distance decay effect
and classical gravity models, as well as previously unknown ones, such as an
exponential-power-law decay that can be explained by the maximum entropy
principle. By relaxing the constraints on the complexity of model expressions,
we further show how key variables of human mobility are progressively
incorporated into the model, making this framework a powerful tool for
revealing the underlying mathematical structures of complex social phenomena
directly from observational data.
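Two of the functional forms mentioned above can be written down directly; the exponents and coefficients here are illustrative placeholders, not values fitted by the paper:

```python
import math

# Toy versions of two mobility laws of the kind symbolic regression can
# recover from flow data: the classical gravity model and an
# exponential-power-law distance decay. Parameter values are illustrative.

def gravity_flow(m_i, m_j, d_ij, beta=2.0):
    """Gravity model: flow proportional to mass product over distance^beta."""
    return m_i * m_j / d_ij ** beta

def exp_power_law_decay(d, a=0.1, b=0.5):
    """Exponential-power-law distance decay: exp(-a * d**b)."""
    return math.exp(-a * d ** b)
```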
|
2501.05686
|
Deep Reversible Consistency Learning for Cross-modal Retrieval
|
cs.CV cs.MM
|
Cross-modal retrieval (CMR) typically involves learning common
representations to directly measure similarities between multimodal samples.
Most existing CMR methods commonly assume multimodal samples in pairs and
employ joint training to learn common representations, limiting the flexibility
of CMR. Although some methods adopt independent training strategies for each
modality to improve flexibility in CMR, they utilize the randomly initialized
orthogonal matrices to guide representation learning, which is suboptimal since
they assume inter-class samples are independent of each other, limiting the
potential of semantic alignments between sample representations and
ground-truth labels. To address these issues, we propose a novel method termed
Deep Reversible Consistency Learning (DRCL) for cross-modal retrieval. DRCL
includes two core modules, i.e., Selective Prior Learning (SPL) and Reversible
Semantic Consistency learning (RSC). More specifically, SPL first learns a
transformation weight matrix on each modality and selects the best one based on
the quality score as the Prior, which greatly avoids blind selection of priors
learned from low-quality modalities. Then, RSC employs a Modality-invariant
Representation Recasting mechanism (MRR) to recast the potential
modality-invariant representations from sample semantic labels by the
generalized inverse matrix of the prior. Since labels are devoid of
modal-specific information, we utilize the recast features to guide the
representation learning, thus maintaining semantic consistency to the fullest
extent possible. In addition, a feature augmentation mechanism (FA) is
introduced in RSC to encourage the model to learn over a wider data
distribution for diversity. Finally, extensive experiments conducted on five
widely used datasets and comparisons with 15 state-of-the-art baselines
demonstrate the effectiveness and superiority of our DRCL.
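The label-to-representation recasting step can be sketched with a linear prior and its Moore-Penrose generalized inverse. The dimensions, the one-hot labels, and the linear map are illustrative assumptions rather than DRCL's exact formulation:

```python
import numpy as np

# Sketch of recasting modality-invariant representations from semantic labels
# via a generalized inverse, in the spirit of the MRR mechanism. All sizes
# and the linear prior map are illustrative assumptions.

rng = np.random.default_rng(1)
n, d, c = 5, 8, 3                      # samples, feature dim, classes
W = rng.standard_normal((d, c))        # prior: representation -> label scores
Y = np.eye(c)[rng.integers(0, c, n)]   # one-hot semantic labels (n x c)

W_pinv = np.linalg.pinv(W)             # Moore-Penrose inverse (c x d)
Z_recast = Y @ W_pinv                  # recast representations (n x d)

# When W has full column rank (d >= c), W_pinv @ W = I, so the recast
# features reproduce the labels exactly under the prior map.
```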
|
2501.05687
|
UniQ: Unified Decoder with Task-specific Queries for Efficient Scene
Graph Generation
|
cs.CV
|
Scene Graph Generation(SGG) is a scene understanding task that aims at
identifying object entities and reasoning their relationships within a given
image. In contrast to prevailing two-stage methods based on a large object
detector (e.g., Faster R-CNN), one-stage methods integrate a fixed-size set of
learnable queries to jointly reason relational triplets <subject, predicate,
object>. This paradigm demonstrates robust performance with significantly
reduced parameters and computational overhead. However, the challenge in
one-stage methods stems from the issue of weak entanglement, wherein entities
involved in relationships require both coupled features shared within triplets
and decoupled visual features. Previous methods either adopt a single decoder
for coupled triplet feature modeling or multiple decoders for separate visual
feature extraction but fail to consider both. In this paper, we introduce UniQ,
a Unified decoder with task-specific Queries architecture, where task-specific
queries generate decoupled visual features for subjects, objects, and
predicates respectively, and unified decoder enables coupled feature modeling
within relational triplets. Experimental results on the Visual Genome dataset
demonstrate that UniQ has superior performance to both one-stage and two-stage
methods.
|
2501.05688
|
eKalibr: Dynamic Intrinsic Calibration for Event Cameras From First
Principles of Events
|
cs.CV cs.RO
|
The bio-inspired event camera has garnered extensive research attention in
recent years, owing to its significant potential derived from its high dynamic
range and low latency characteristics. Similar to the standard camera, the
event camera requires precise intrinsic calibration to facilitate further
high-level visual applications, such as pose estimation and mapping. While
several calibration methods for event cameras have been proposed, most of them
are either (i) engineering-driven, heavily relying on conventional image-based
calibration pipelines, or (ii) inconvenient, requiring complex instrumentation.
To this end, we propose an accurate and convenient intrinsic calibration method
for event cameras, named eKalibr, which builds upon a carefully designed
event-based circle grid pattern recognition algorithm. To extract target
patterns from events, we perform event-based normal flow estimation to identify
potential events generated by circle edges, and cluster them spatially.
Subsequently, event clusters associated with the same grid circles are matched
and grouped using normal flows, for subsequent time-varying ellipse estimation.
Fitted ellipse centers are time-synchronized, for final grid pattern
recognition. We conducted extensive experiments to evaluate the performance of
eKalibr in terms of pattern extraction and intrinsic calibration. The
implementation of eKalibr is open-sourced at
(https://github.com/Unsigned-Long/eKalibr) to benefit the research community.
|
2501.05690
|
Overcoming Language Priors for Visual Question Answering Based on
Knowledge Distillation
|
cs.CV cs.CL
|
Previous studies have pointed out that visual question answering (VQA) models
are prone to relying on language priors for answer predictions. In this
context, predictions often depend on linguistic shortcuts rather than a
comprehensive grasp of multimodal knowledge, which diminishes their
generalization ability. In this paper, we propose a novel method, namely, KDAR,
leveraging knowledge distillation to address the prior-dependency dilemmas
within the VQA task. Specifically, the regularization effect facilitated by
soft labels from a well-trained teacher is employed to penalize overfitting to
the most common answers. The soft labels, which serve a regularization role,
also provide semantic guidance that narrows the range of candidate answers.
Additionally, we design an adaptive sample-wise reweighting learning strategy
to further mitigate bias by dynamically adjusting the importance of each
sample. Experimental results demonstrate that our method enhances performance
in both OOD and IID settings. Our method achieves state-of-the-art performance
on the VQA-CPv2 out-of-distribution (OOD) benchmark, significantly
outperforming previous state-of-the-art approaches.
|
2501.05700
|
Linguistic Entity Masking to Improve Cross-Lingual Representation of
Multilingual Language Models for Low-Resource Languages
|
cs.CL
|
Multilingual Pre-trained Language models (multiPLMs), trained on the Masked
Language Modelling (MLM) objective, are commonly used for cross-lingual
tasks such as bitext mining. However, the performance of these models is still
suboptimal for low-resource languages (LRLs). To improve the language
representation of a given multiPLM, it is possible to further pre-train it.
This is known as continual pre-training. Previous research has shown that
continual pre-training with MLM and subsequently with Translation Language
Modelling (TLM) improves the cross-lingual representation of multiPLMs.
However, during masking, both MLM and TLM give equal weight to all tokens in
the input sequence, irrespective of the linguistic properties of the tokens. In
this paper, we introduce a novel masking strategy, Linguistic Entity Masking
(LEM) to be used in the continual pre-training step to further improve the
cross-lingual representations of existing multiPLMs. In contrast to MLM and
TLM, LEM limits masking to the linguistic entity types nouns, verbs and named
entities, which hold a higher prominence in a sentence. Secondly, we limit
masking to a single token within the linguistic entity span, thus keeping more
context, whereas in MLM and TLM, tokens are masked randomly. We evaluate the
effectiveness of LEM using three downstream tasks, namely bitext mining,
parallel data curation and code-mixed sentiment analysis using three
low-resource language pairs English-Sinhala, English-Tamil, and Sinhala-Tamil.
Experiment results show that continually pre-training a multiPLM with LEM
outperforms a multiPLM continually pre-trained with MLM+TLM for all three
tasks.
|
2501.05707
|
Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) have achieved remarkable performance in recent
years but are fundamentally limited by the underlying training data. To improve
models beyond the training data, recent works have explored how LLMs can be
used to generate synthetic data for autonomous self-improvement. However,
successive steps of self-improvement can reach a point of diminishing returns.
In this work, we propose a complementary approach towards self-improvement
where finetuning is applied to a multiagent society of language models. A group
of language models, all starting from the same base model, are independently
specialized by updating each one using data generated through multiagent
interactions among the models. By training each model on independent sets of
data, we illustrate how this approach enables specialization across models and
diversification over the set of models. As a result, our overall system is able
to preserve diverse reasoning chains and autonomously improve over many more
rounds of fine-tuning than single-agent self-improvement methods. We
quantitatively illustrate the efficacy of the approach across a wide suite of
reasoning tasks.
|
2501.05708
|
Differential Properties of Information in Jump-diffusion Channels
|
cs.IT math.IT
|
We propose a channel model based on jump-diffusion processes and study the
differential properties of entropy and mutual information. By utilizing the
Kramers-Moyal and Kolmogorov-Feller equations, we express the mutual
information between the input and the output in series and integral forms, in
terms of Fisher-type information and mismatched KL divergence. We extend de
Bruijn's identity and the I-MMSE relation to encompass general Markov
processes.
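For reference, the classical Gaussian-channel forms of the two identities being generalized read as follows (standard statements, not the paper's jump-diffusion extensions):

```latex
% de Bruijn's identity, for Z ~ N(0,1) independent of X,
% with J(.) denoting Fisher information:
\frac{\mathrm{d}}{\mathrm{d}t}\, h\!\left(X + \sqrt{t}\,Z\right)
  = \frac{1}{2}\, J\!\left(X + \sqrt{t}\,Z\right),
% and the I-MMSE relation:
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\,
  I\!\left(X;\, \sqrt{\mathrm{snr}}\,X + Z\right)
  = \frac{1}{2}\, \mathrm{mmse}(\mathrm{snr}).
```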
|
2501.05710
|
EmotiCrafter: Text-to-Emotional-Image Generation based on
Valence-Arousal Model
|
cs.CV
|
Recent research shows that emotions can enhance users' cognition and
influence information communication. While research on visual emotion analysis
is extensive, limited work has been done on helping users generate emotionally
rich image content. Existing work on emotional image generation relies on
discrete emotion categories, making it challenging to capture complex and
subtle emotional nuances accurately. Additionally, these methods struggle to
control the specific content of generated images based on text prompts. In this
work, we introduce the new task of continuous emotional image content
generation (C-EICG) and present EmotiCrafter, an emotional image generation
model that generates images based on text prompts and Valence-Arousal values.
Specifically, we propose a novel emotion-embedding mapping network that embeds
Valence-Arousal values into textual features, enabling the capture of specific
emotions in alignment with intended input prompts. Additionally, we introduce a
loss function to enhance emotion expression. The experimental results show that
our method effectively generates images representing specific emotions with the
desired content and outperforms existing techniques.
|
2501.05711
|
From My View to Yours: Ego-Augmented Learning in Large Vision Language
Models for Understanding Exocentric Daily Living Activities
|
cs.CV
|
Large Vision Language Models (LVLMs) have demonstrated impressive
capabilities in video understanding, yet their adoption for Activities of Daily
Living (ADL) remains limited by their inability to capture fine-grained
interactions and spatial relationships. This limitation is particularly evident
in ADL tasks, where understanding detailed human-object interaction and
human-centric motion is crucial for applications such as elderly monitoring and
cognitive assessment. To address this, we aim to leverage the complementary
nature of egocentric views to enhance LVLM's understanding of exocentric ADL
videos. Consequently, we propose an online ego2exo distillation approach to
learn ego-augmented exo representations in LVLMs. While effective, this
approach requires paired ego-exo training data, which is impractical to collect
for real-world ADL scenarios. Consequently, we develop EgoMimic, a
skeleton-guided method that can generate mimicked ego views from exocentric
videos. We find that the exo representations of our ego-augmented LVLMs
successfully learn to extract ego-perspective cues, demonstrated through
comprehensive evaluation on six ADL benchmarks and our proposed
EgoPerceptionMCQ benchmark designed specifically to assess egocentric
understanding from exocentric videos. Code, models, and data will be
open-sourced at https://github.com/dominickrei/EgoExo4ADL.
|
2501.05712
|
Multi-Step Reasoning in Korean and the Emergent Mirage
|
cs.CL
|
We introduce HRMCR (HAE-RAE Multi-Step Commonsense Reasoning), a benchmark
designed to evaluate large language models' ability to perform multi-step
reasoning in culturally specific contexts, focusing on Korean. The questions
are automatically generated via templates and algorithms, requiring LLMs to
integrate Korean cultural knowledge into sequential reasoning steps. Consistent
with prior observations on emergent abilities, our experiments reveal that
models trained on fewer than \(2 \cdot 10^{25}\) training FLOPs struggle to
solve any questions, showing near-zero performance. Beyond this threshold,
performance improves sharply. State-of-the-art models (e.g., O1) still score
under 50\%, underscoring the difficulty of our tasks. Notably, stepwise
analysis suggests the observed emergent behavior may stem from compounding
errors across multiple steps rather than reflecting a genuinely new capability.
We publicly release the benchmark and commit to regularly updating the dataset
to prevent contamination.
|
2501.05714
|
How to Enable Effective Cooperation Between Humans and NLP Models: A
Survey of Principles, Formalizations, and Beyond
|
cs.CL cs.AI cs.HC
|
With the advancement of large language models (LLMs), intelligent models have
evolved from mere tools to autonomous agents with their own goals and
strategies for cooperating with humans. This evolution has birthed a novel
paradigm in NLP, i.e., human-model cooperation, that has yielded remarkable
progress in numerous NLP tasks in recent years. In this paper, we take the
first step to present a thorough review of human-model cooperation, exploring
its principles, formalizations, and open challenges. In particular, we
introduce a new taxonomy that provides a unified perspective to summarize
existing approaches. Also, we discuss potential frontier areas and their
corresponding challenges. We regard our work as an entry point, paving the way
for more breakthrough research in this regard.
|
2501.05715
|
Non-intrusive Data-driven ADI-based Low-rank Balanced Truncation
|
eess.SY cs.SY
|
In this short note, a non-intrusive data-driven formulation of ADI-based
low-rank balanced truncation is provided. The proposed algorithm only requires
transfer function samples at the mirror images of ADI shifts. If some shifts
are used in both approximating the controllability Gramian and the
observability Gramian, then samples of the transfer function's derivative at
these shifts are also needed to enforce Hermite interpolation in the Loewner
framework. It is noted that ADI-based low-rank balanced truncation can be
viewed as a two-step process. The first step involves constructing an
interpolant of the original model at the mirror images of the ADI shifts, which
can be done non-intrusively within the Loewner framework. The second step
involves reducing this interpolant using low-rank factors of Gramians
associated with the interpolation data through the balanced square-root
algorithm. This second step does not require any system information, making the
overall process non-intrusive with the only required information being samples
of the transfer function and/or its derivative at the mirror images of ADI
shifts. Furthermore, it is shown that when the order of the reduced model in
ADI-based low-rank balanced truncation is selected to match the numerical rank
of the low-rank factors of the Gramians, it effectively reduces to standard
interpolation at the mirror images of the ADI shifts. An illustrative example is
provided to explain the proposed approach.
|
2501.05717
|
Zero-shot Shark Tracking and Biometrics from Aerial Imagery
|
cs.CV cs.AI q-bio.QM
|
The recent widespread adoption of drones for studying marine animals provides
opportunities for deriving biological information from aerial imagery. The
large scale of imagery data acquired from drones is well suited for machine
learning (ML) analysis. Development of ML models for analyzing marine animal
aerial imagery has followed the classical paradigm of training, testing, and
deploying a new model for each dataset, requiring significant time, human
effort, and ML expertise. We introduce Frame Level ALIgment and tRacking
(FLAIR), which leverages the video understanding of Segment Anything Model 2
(SAM2) and the vision-language capabilities of Contrastive Language-Image
Pre-training (CLIP). FLAIR takes a drone video as input and outputs
segmentation masks of the species of interest across the video. Notably, FLAIR
leverages a zero-shot approach, eliminating the need for labeled data, training
a new model, or fine-tuning an existing model to generalize to other species.
With a dataset of 18,000 drone images of Pacific nurse sharks, we trained
state-of-the-art object detection models to compare against FLAIR. We show that
FLAIR massively outperforms these object detectors and performs competitively
against two human-in-the-loop methods for prompting SAM2, achieving a Dice
score of 0.81. FLAIR readily generalizes to other shark species without
additional human effort and can be combined with novel heuristics to
automatically extract relevant information including length and tailbeat
frequency. FLAIR has significant potential to accelerate aerial imagery
analysis workflows, requiring markedly less human effort and expertise than
traditional machine learning workflows, while achieving superior accuracy. By
reducing the effort required for aerial imagery analysis, FLAIR allows
scientists to spend more time interpreting results and deriving insights about
marine ecosystems.
|
2501.05718
|
Performance Analysis of Perturbation-enhanced SC decoders
|
cs.IT math.IT
|
In this paper, we analyze the delay probability of the first error position
in perturbation-enhanced Successive cancellation (SC) decoding for polar codes.
Our findings reveal that, asymptotically, an SC decoder's performance does not
degrade after one perturbation, and it improves with a probability of
$\frac{1}{2}$. This analysis explains the sustained performance gains of
perturbation-enhanced SC decoding as code length increases.
|
2501.05723
|
Robot Error Awareness Through Human Reactions: Implementation,
Evaluation, and Recommendations
|
cs.RO cs.HC
|
Effective error detection is crucial to prevent task disruption and maintain
user trust. Traditional methods often rely on task-specific models or user
reporting, which can be inflexible or slow. Recent research suggests social
signals, naturally exhibited by users in response to robot errors, can enable
more flexible, timely error detection. However, most studies rely on post hoc
analysis, leaving their real-time effectiveness uncertain and lacking
user-centric evaluation. In this work, we developed a proactive error detection
system that combines user behavioral signals (facial action units and speech),
user feedback, and error context for automatic error detection. In a study (N =
28), we compared our proactive system to a status quo reactive approach.
Results show our system 1) reliably and flexibly detects errors, 2) detects
errors faster than the reactive approach, and 3) is perceived more favorably by
users than the reactive one. We discuss recommendations for enabling robot
error awareness in future HRI systems.
|
2501.05727
|
Enabling Scalable Oversight via Self-Evolving Critic
|
cs.CL cs.AI cs.LG
|
Despite their remarkable performance, the development of Large Language
Models (LLMs) faces a critical challenge in scalable oversight: providing
effective feedback for tasks where human evaluation is difficult or where LLMs
outperform humans. While there is growing interest in using LLMs for critique,
current approaches still rely on human annotations or more powerful models,
leaving the issue of enhancing critique capabilities without external
supervision unresolved. We introduce SCRIT (Self-evolving CRITic), a framework
that enables genuine self-evolution of critique abilities. Technically, SCRIT
self-improves by training on synthetic data, generated by a contrastive-based
self-critic that uses reference solutions for step-by-step critique, and a
self-validation mechanism that ensures critique quality through correction
outcomes. Implemented with Qwen2.5-72B-Instruct, one of the most powerful LLMs,
SCRIT achieves up to a 10.3\% improvement on critique-correction and error
identification benchmarks. Our analysis reveals that SCRIT's performance scales
positively with data and model size, outperforms alternative approaches, and
benefits critically from its self-validation component.
|
2501.05728
|
Super-class guided Transformer for Zero-Shot Attribute Classification
|
cs.CV
|
Attribute classification is crucial for identifying specific characteristics
within image regions. Vision-Language Models (VLMs) have been effective in
zero-shot tasks by leveraging their general knowledge from large-scale
datasets. Recent studies demonstrate that transformer-based models with
class-wise queries can effectively address zero-shot multi-label
classification. However, poor utilization of the relationship between seen and
unseen attributes limits the model's generalizability. Additionally, attribute
classification generally involves many attributes, making it difficult to
maintain the model's scalability. To address these issues, we propose
Super-class guided transFormer (SugaFormer), a novel framework that leverages
super-classes to enhance scalability and generalizability for zero-shot
attribute classification. SugaFormer employs Super-class Query Initialization
(SQI) to reduce the number of queries, utilizing common semantic information
from super-classes, and incorporates Multi-context Decoding (MD) to handle
diverse visual cues. To strengthen generalizability, we introduce two knowledge
transfer strategies that utilize VLMs. During training, Super-class guided
Consistency Regularization (SCR) aligns the model's features with VLMs using
super-class guided prompts, and during inference, Zero-shot Retrieval-based
Score Enhancement (ZRSE) refines predictions for unseen attributes. Extensive
experiments demonstrate that SugaFormer achieves state-of-the-art performance
across three widely-used attribute classification benchmarks under zero-shot
and cross-dataset transfer settings. Our code is available at
https://github.com/mlvlab/SugaFormer.
|