id string | submitter string | authors string | title string | comments string | journal-ref string | doi string | report-no string | categories string | license string | abstract string | versions list | update_date timestamp[s] | authors_parsed list | prompt string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.09793 | Shailendra Joshi | Shailendra P. Joshi, Ashley Bucsek, Darren C. Pagan, Samantha Daly,
Suraj Ravindran, Jaime Marian, Miguel A. Bessa, Surya R. Kalidindi, Nikhil C.
Admal, Celia Reina, Somnath Ghosh, Jorge Vinals, and Ellad B. Tadmor | Integrated Experiment and Simulation Co-Design: A Key Infrastructure for
Predictive Mesoscale Materials Modeling | null | null | null | null | cond-mat.mtrl-sci physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | The design of structural & functional materials for specialized applications
is being fueled by rapid advancements in materials synthesis, characterization,
manufacturing, with sophisticated computational materials modeling frameworks
that span a wide spectrum of length & time scales in the mesoscale between
atomistic & continuum approaches. This is leading towards a systems-based
design methodology that will replace traditional empirical approaches,
embracing the principles of the Materials Genome Initiative. However, several
gaps remain in this framework as it relates to advanced structural
materials: (1) limited availability & access to high-fidelity experimental &
computational datasets, (2) lack of co-design of experiments & simulation aimed
at computational model validation, (3) lack of on-demand access to verified and
validated codes for simulation and for experimental analyses, & (4) limited
opportunities for workforce training and educational outreach. These
shortcomings stifle major innovations in structural materials design. This
paper describes plans for a community-driven research initiative that addresses
current gaps based on best-practice recommendations of leaders in mesoscale
modeling, experimentation & cyberinfrastructure obtained at an NSF-sponsored
workshop dedicated to this topic. The proposal is to create a hub for Mesoscale
Experimentation and Simulation co-Operation (hMESO) that will (I) provide
curation and sharing of models, data, & codes, (II) foster co-design of
experiments for model validation with systematic uncertainty quantification, &
(III) provide a platform for education & workforce development. It will engage
experimental & computational experts in mesoscale mechanics and plasticity,
along with mathematicians and computer scientists with expertise in algorithms,
data science, machine learning, & large-scale cyberinfrastructure initiatives.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 19:55:34 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Joshi",
"Shailendra P.",
""
],
[
"Bucsek",
"Ashley",
""
],
[
"Pagan",
"Darren C.",
""
],
[
"Daly",
"Samantha",
""
],
[
"Ravindran",
"Suraj",
""
],
[
"Marian",
"Jaime",
""
],
[
"Bessa",
"Miguel A.",
""
],... | TITLE: Integrated Experiment and Simulation Co-Design: A Key Infrastructure for
Predictive Mesoscale Materials Modeling
ABSTRACT: The design of structural & functional materials for specialized applications
is being fueled by rapid advancements in materials synthesis, characterization,
manufacturing, with sophisticated computational materials modeling frameworks
that span a wide spectrum of length & time scales in the mesoscale between
atomistic & continuum approaches. This is leading towards a systems-based
design methodology that will replace traditional empirical approaches,
embracing the principles of the Materials Genome Initiative. However, several
gaps remain in this framework as it relates to advanced structural
materials: (1) limited availability & access to high-fidelity experimental &
computational datasets, (2) lack of co-design of experiments & simulation aimed
at computational model validation, (3) lack of on-demand access to verified and
validated codes for simulation and for experimental analyses, & (4) limited
opportunities for workforce training and educational outreach. These
shortcomings stifle major innovations in structural materials design. This
paper describes plans for a community-driven research initiative that addresses
current gaps based on best-practice recommendations of leaders in mesoscale
modeling, experimentation & cyberinfrastructure obtained at an NSF-sponsored
workshop dedicated to this topic. The proposal is to create a hub for Mesoscale
Experimentation and Simulation co-Operation (hMESO) that will (I) provide
curation and sharing of models, data, & codes, (II) foster co-design of
experiments for model validation with systematic uncertainty quantification, &
(III) provide a platform for education & workforce development. It will engage
experimental & computational experts in mesoscale mechanics and plasticity,
along with mathematicians and computer scientists with expertise in algorithms,
data science, machine learning, & large-scale cyberinfrastructure initiatives.
|
2503.09797 | Benjamin Towle | Benjamin Towle, Xin Chen, Ke Zhou | SeqSAM: Autoregressive Multiple Hypothesis Prediction for Medical Image
Segmentation using SAM | Accepted to ISBI 2025 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Pre-trained segmentation models are a powerful and flexible tool for
segmenting images. Recently, this trend has extended to medical imaging. Yet,
often these methods only produce a single prediction for a given image,
neglecting inherent uncertainty in medical images, due to unclear object
boundaries and errors caused by the annotation tool. Multiple Choice Learning
is a technique for generating multiple masks, through multiple learned
prediction heads. However, this cannot readily be extended to producing more
outputs than its initial pre-training hyperparameters, as the sparse,
winner-takes-all loss function makes it easy for one prediction head to become
overly dominant, thus not guaranteeing the clinical relevancy of each mask
produced. We introduce SeqSAM, a sequential, RNN-inspired approach to
generating multiple masks, which uses a bipartite matching loss for ensuring
the clinical relevancy of each mask, and can produce an arbitrary number of
masks. We show notable improvements in quality of each mask produced across two
publicly available datasets. Our code is available at
https://github.com/BenjaminTowle/SeqSAM.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:01:52 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Towle",
"Benjamin",
""
],
[
"Chen",
"Xin",
""
],
[
"Zhou",
"Ke",
""
]
] | TITLE: SeqSAM: Autoregressive Multiple Hypothesis Prediction for Medical Image
Segmentation using SAM
ABSTRACT: Pre-trained segmentation models are a powerful and flexible tool for
segmenting images. Recently, this trend has extended to medical imaging. Yet,
often these methods only produce a single prediction for a given image,
neglecting inherent uncertainty in medical images, due to unclear object
boundaries and errors caused by the annotation tool. Multiple Choice Learning
is a technique for generating multiple masks, through multiple learned
prediction heads. However, this cannot readily be extended to producing more
outputs than its initial pre-training hyperparameters, as the sparse,
winner-takes-all loss function makes it easy for one prediction head to become
overly dominant, thus not guaranteeing the clinical relevancy of each mask
produced. We introduce SeqSAM, a sequential, RNN-inspired approach to
generating multiple masks, which uses a bipartite matching loss for ensuring
the clinical relevancy of each mask, and can produce an arbitrary number of
masks. We show notable improvements in quality of each mask produced across two
publicly available datasets. Our code is available at
https://github.com/BenjaminTowle/SeqSAM.
|
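The SeqSAM abstract above hinges on a bipartite matching loss that pairs each predicted mask with a distinct reference mask. Below is a minimal sketch of such a loss, assuming a soft-Dice cost and Hungarian matching via SciPy; the shapes and the cost choice are illustrative assumptions, not the authors' implementation.

```python
import torch
from scipy.optimize import linear_sum_assignment

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice similarity between two (H, W) mask tensors in [0, 1]."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def bipartite_matching_loss(pred_masks, gt_masks):
    """Match each predicted mask to a distinct reference mask (Hungarian
    algorithm) and average the Dice loss over matched pairs.

    pred_masks: (P, H, W) sigmoid outputs; gt_masks: (G, H, W) binary masks.
    """
    P, G = pred_masks.shape[0], gt_masks.shape[0]
    # Build the P x G cost matrix: 1 - Dice, so better overlap = lower cost.
    cost = torch.zeros(P, G)
    for i in range(P):
        for j in range(G):
            cost[i, j] = 1.0 - soft_dice(pred_masks[i], gt_masks[j])
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    # Recompute the loss on matched pairs so gradients flow to predictions.
    return torch.stack(
        [1.0 - soft_dice(pred_masks[i], gt_masks[j]) for i, j in zip(rows, cols)]
    ).mean()

# Toy usage: 4 mask hypotheses scored against 2 annotator masks.
preds = torch.rand(4, 64, 64, requires_grad=True)
gts = (torch.rand(2, 64, 64) > 0.5).float()
loss = bipartite_matching_loss(preds, gts)
loss.backward()
```

Because every hypothesis is matched to its own reference, no single prediction head can dominate the loss, which is the failure mode the abstract attributes to winner-takes-all training.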
2503.09803 | Enes Özeren | Enes Özeren and Arka Bhowmick | Evaluating the Impact of Synthetic Data on Object Detection Tasks in
Autonomous Driving | 7 pages, 4 figures, 3 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The increasing applications of autonomous driving systems necessitate
large-scale, high-quality datasets to ensure robust performance across diverse
scenarios. Synthetic data has emerged as a viable solution to augment
real-world datasets due to its cost-effectiveness, availability of precise
ground-truth labels, and the ability to model specific edge cases. However,
synthetic data may introduce distributional differences and biases that could
impact model performance in real-world settings. To evaluate the utility and
limitations of synthetic data, we conducted controlled experiments using
multiple real-world datasets and a synthetic dataset generated by BIT
Technology Solutions GmbH. Our study spans two sensor modalities, camera and
LiDAR, and investigates both 2D and 3D object detection tasks. We compare
models trained on real, synthetic, and mixed datasets, analyzing their
robustness and generalization capabilities. Our findings demonstrate that the
use of a combination of real and synthetic data improves the robustness and
generalization of object detection models, underscoring the potential of
synthetic data in advancing autonomous driving technologies.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:13:33 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Özeren",
"Enes",
""
],
[
"Bhowmick",
"Arka",
""
]
] | TITLE: Evaluating the Impact of Synthetic Data on Object Detection Tasks in
Autonomous Driving
ABSTRACT: The increasing applications of autonomous driving systems necessitate
large-scale, high-quality datasets to ensure robust performance across diverse
scenarios. Synthetic data has emerged as a viable solution to augment
real-world datasets due to its cost-effectiveness, availability of precise
ground-truth labels, and the ability to model specific edge cases. However,
synthetic data may introduce distributional differences and biases that could
impact model performance in real-world settings. To evaluate the utility and
limitations of synthetic data, we conducted controlled experiments using
multiple real-world datasets and a synthetic dataset generated by BIT
Technology Solutions GmbH. Our study spans two sensor modalities, camera and
LiDAR, and investigates both 2D and 3D object detection tasks. We compare
models trained on real, synthetic, and mixed datasets, analyzing their
robustness and generalization capabilities. Our findings demonstrate that the
use of a combination of real and synthetic data improves the robustness and
generalization of object detection models, underscoring the potential of
synthetic data in advancing autonomous driving technologies.
|
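The study above compares models trained on real, synthetic, and mixed datasets. A hedged sketch of how a mixed training set could be assembled in PyTorch follows; `synth_fraction` is an assumed knob for illustration, not the paper's protocol.

```python
import torch
from torch.utils.data import ConcatDataset, Subset, DataLoader, TensorDataset

def mixed_dataset(real_ds, synth_ds, synth_fraction=0.5, seed=0):
    """Combine a real dataset with a random subset of a synthetic one.

    synth_fraction is the number of synthetic samples to add, expressed
    as a fraction of the real dataset's size.
    """
    g = torch.Generator().manual_seed(seed)
    n_synth = min(len(synth_ds), int(synth_fraction * len(real_ds)))
    idx = torch.randperm(len(synth_ds), generator=g)[:n_synth].tolist()
    return ConcatDataset([real_ds, Subset(synth_ds, idx)])

# Toy usage with stand-in tensors for images and labels.
real = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 2, (100,)))
synth = TensorDataset(torch.randn(500, 3, 32, 32), torch.randint(0, 2, (500,)))
loader = DataLoader(mixed_dataset(real, synth, synth_fraction=0.5),
                    batch_size=16, shuffle=True)
```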
2503.09807 | Qingwu Liu | Qingwu Liu, Nicolas Saunier and Guillaume-Alexandre Bilodeau | How good are deep learning methods for automated road safety analysis
using video data? An experimental study | This paper is accepted by TRB Annual Meeting 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image-based multi-object detection (MOD) and multi-object tracking (MOT) are
advancing at a fast pace. A variety of 2D and 3D MOD and MOT methods have been
developed for monocular and stereo cameras. Road safety analysis can benefit
from those advancements. As crashes are rare events, surrogate measures of
safety (SMoS) have been developed for safety analyses. (Semi-)Automated safety
analysis methods extract road user trajectories to compute safety indicators,
for example, Time-to-Collision (TTC) and Post-encroachment Time (PET). Inspired
by the success of deep learning in MOD and MOT, we investigate three MOT
methods, including one based on a stereo-camera, using the annotated KITTI
traffic video dataset. Two post-processing steps, IDsplit and SS, are developed
to improve the tracking results and investigate the factors influencing the
TTC. The experimental results show that, despite some advantages in terms of
the numbers of interactions or similarity to the TTC distributions, all the
tested methods systematically over-estimate the number of interactions and
under-estimate the TTC: they report more interactions and more severe
interactions, making the road user interactions appear less safe than they are.
Further efforts will be directed towards testing more methods and more data, in
particular from roadside sensors, to verify the results and improve the
performance.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:17:50 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Liu",
"Qingwu",
""
],
[
"Saunier",
"Nicolas",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
]
] | TITLE: How good are deep learning methods for automated road safety analysis
using video data? An experimental study
ABSTRACT: Image-based multi-object detection (MOD) and multi-object tracking (MOT) are
advancing at a fast pace. A variety of 2D and 3D MOD and MOT methods have been
developed for monocular and stereo cameras. Road safety analysis can benefit
from those advancements. As crashes are rare events, surrogate measures of
safety (SMoS) have been developed for safety analyses. (Semi-)Automated safety
analysis methods extract road user trajectories to compute safety indicators,
for example, Time-to-Collision (TTC) and Post-encroachment Time (PET). Inspired
by the success of deep learning in MOD and MOT, we investigate three MOT
methods, including one based on a stereo-camera, using the annotated KITTI
traffic video dataset. Two post-processing steps, IDsplit and SS, are developed
to improve the tracking results and investigate the factors influencing the
TTC. The experimental results show that, despite some advantages in terms of
the numbers of interactions or similarity to the TTC distributions, all the
tested methods systematically over-estimate the number of interactions and
under-estimate the TTC: they report more interactions and more severe
interactions, making the road user interactions appear less safe than they are.
Further efforts will be directed towards testing more methods and more data, in
particular from roadside sensors, to verify the results and improve the
performance.
|
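The abstract above centers on surrogate safety indicators such as Time-to-Collision (TTC). A small sketch of TTC under the common constant-velocity motion model is shown below; the collision `radius` is an assumed parameter, not a value from the paper.

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=2.0):
    """Constant-velocity Time-to-Collision between two road users.

    p1, p2: current positions (m); v1, v2: velocities (m/s).
    Returns the earliest t >= 0 at which the users come within `radius`
    metres of each other, or None if they never do under this motion model.
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    # Solve |dp + dv*t|^2 = radius^2, a quadratic in t.
    a = dv @ dv
    b = 2.0 * (dp @ dv)
    c = dp @ dp - radius**2
    if a == 0.0:
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else (0.0 if c <= 0.0 else None)

# Two users converging on a crossing point: prints about 3.6 s.
print(time_to_collision(p1=[0, 0], v1=[5, 0], p2=[20, -10], v2=[0, 2.5]))
```

Over-counting interactions, as the abstract reports, corresponds to this function returning finite TTCs for trajectory pairs that a human analyst would not flag.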
2503.09808 | Chenjun Li | Chenjun Li, Laurin Lux, Alexander H. Berger, Martin J. Menten, Mert R.
Sabuncu, Johannes C. Paetzold | Fine-tuning Vision Language Models with Graph-based Knowledge for
Explainable Medical Image Analysis | 11 pages, 3 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Accurate staging of Diabetic Retinopathy (DR) is essential for guiding timely
interventions and preventing vision loss. However, current staging models are
hardly interpretable, and most public datasets contain no clinical reasoning or
interpretation beyond image-level labels. In this paper, we present a novel
method that integrates graph representation learning with vision-language
models (VLMs) to deliver explainable DR diagnosis. Our approach leverages
optical coherence tomography angiography (OCTA) images by constructing
biologically informed graphs that encode key retinal vascular features such as
vessel morphology and spatial connectivity. A graph neural network (GNN) then
performs DR staging while integrated gradients highlight critical nodes and
edges and their individual features that drive the classification decisions. We
collect this graph-based knowledge which attributes the model's prediction to
physiological structures and their characteristics. We then transform it into
textual descriptions for VLMs. We perform instruction-tuning with these textual
descriptions and the corresponding image to train a student VLM. This final
agent can classify the disease and explain its decision in a human
interpretable way solely based on a single image input. Experimental
evaluations on both proprietary and public datasets demonstrate that our method
not only improves classification accuracy but also offers more clinically
interpretable results. An expert study further demonstrates that our method
provides more accurate diagnostic explanations and paves the way for precise
localization of pathologies in OCTA images.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:19:07 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Chenjun",
""
],
[
"Lux",
"Laurin",
""
],
[
"Berger",
"Alexander H.",
""
],
[
"Menten",
"Martin J.",
""
],
[
"Sabuncu",
"Mert R.",
""
],
[
"Paetzold",
"Johannes C.",
""
]
] | TITLE: Fine-tuning Vision Language Models with Graph-based Knowledge for
Explainable Medical Image Analysis
ABSTRACT: Accurate staging of Diabetic Retinopathy (DR) is essential for guiding timely
interventions and preventing vision loss. However, current staging models are
hardly interpretable, and most public datasets contain no clinical reasoning or
interpretation beyond image-level labels. In this paper, we present a novel
method that integrates graph representation learning with vision-language
models (VLMs) to deliver explainable DR diagnosis. Our approach leverages
optical coherence tomography angiography (OCTA) images by constructing
biologically informed graphs that encode key retinal vascular features such as
vessel morphology and spatial connectivity. A graph neural network (GNN) then
performs DR staging while integrated gradients highlight critical nodes and
edges and their individual features that drive the classification decisions. We
collect this graph-based knowledge which attributes the model's prediction to
physiological structures and their characteristics. We then transform it into
textual descriptions for VLMs. We perform instruction-tuning with these textual
descriptions and the corresponding image to train a student VLM. This final
agent can classify the disease and explain its decision in a human
interpretable way solely based on a single image input. Experimental
evaluations on both proprietary and public datasets demonstrate that our method
not only improves classification accuracy but also offers more clinically
interpretable results. An expert study further demonstrates that our method
provides more accurate diagnostic explanations and paves the way for precise
localization of pathologies in OCTA images.
|
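A key step in the abstract above is turning graph-based attributions into textual descriptions for instruction-tuning a VLM. A hedged sketch of that conversion follows; the node fields (`tortuosity`, `diameter_um`) and the phrasing are invented for illustration, not taken from the paper.

```python
def graph_attributions_to_text(nodes, top_k=3):
    """Render the most influential graph nodes as a textual explanation.

    `nodes` is a list of dicts with an integrated-gradients attribution
    score and a few vascular features per node (names are illustrative).
    """
    top = sorted(nodes, key=lambda n: abs(n["attribution"]), reverse=True)[:top_k]
    lines = []
    for n in top:
        lines.append(
            f"Vessel segment {n['id']} (tortuosity {n['tortuosity']:.2f}, "
            f"diameter {n['diameter_um']:.0f} um) contributed "
            f"{'positively' if n['attribution'] > 0 else 'negatively'} "
            f"to the predicted stage."
        )
    return " ".join(lines)

# Toy node attributions as a GNN-explainability step might produce them.
nodes = [
    {"id": 7, "attribution": 0.41, "tortuosity": 1.32, "diameter_um": 18},
    {"id": 2, "attribution": -0.09, "tortuosity": 1.05, "diameter_um": 42},
    {"id": 11, "attribution": 0.27, "tortuosity": 1.48, "diameter_um": 12},
]
print(graph_attributions_to_text(nodes, top_k=2))
```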
2503.09811 | Agata Fronczak | Maciej J. Mrowinski, Aleksandra Buczek, Agata Fronczak | Exploring the dynamics of self-citations and their role in shaping
scientific impact | 10 pages, 6 figures | null | null | null | cs.DL cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Understanding the mechanisms driving the distribution of scientific citations
is a key challenge in assessing the scientific impact of authors. We
investigate the influence of the preferential attachment rule (PAR) in this
process by analyzing individual citation events from the DBLP dataset, enabling
us to estimate the probability of citations being assigned preferentially. Our
findings reveal that, for the aggregated dataset, PAR dominates the citation
distribution process, with approximately 70% of citations adhering to this
mechanism. However, analysis at the individual level shows significant
variability, with some authors experiencing a greater prevalence of
preferential citations, particularly in the context of external citations. In
contrast, self-citations exhibit notably different behaviour, with only 20%
following PAR. We also demonstrate that the prominence of PAR increases with an
author's citability (average citations per paper), suggesting that more citable
authors are preferentially cited, while less-cited authors experience more
random citation patterns. Furthermore, we show that self-citations may
influence bibliometric indexes. Our results emphasise the distinct dynamics of
self-citations compared to external citations, raising questions about the
mechanisms driving self-citation patterns. These findings provide new insights
into citation behaviours and highlight the limitations of existing approaches
in capturing the nuances of scientific impact.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:20:45 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Mrowinski",
"Maciej J.",
""
],
[
"Buczek",
"Aleksandra",
""
],
[
"Fronczak",
"Agata",
""
]
] | TITLE: Exploring the dynamics of self-citations and their role in shaping
scientific impact
ABSTRACT: Understanding the mechanisms driving the distribution of scientific citations
is a key challenge in assessing the scientific impact of authors. We
investigate the influence of the preferential attachment rule (PAR) in this
process by analyzing individual citation events from the DBLP dataset, enabling
us to estimate the probability of citations being assigned preferentially. Our
findings reveal that, for the aggregated dataset, PAR dominates the citation
distribution process, with approximately 70% of citations adhering to this
mechanism. However, analysis at the individual level shows significant
variability, with some authors experiencing a greater prevalence of
preferential citations, particularly in the context of external citations. In
contrast, self-citations exhibit notably different behaviour, with only 20%
following PAR. We also demonstrate that the prominence of PAR increases with an
author's citability (average citations per paper), suggesting that more citable
authors are preferentially cited, while less-cited authors experience more
random citation patterns. Furthermore, we show that self-citations may
influence bibliometric indexes. Our results emphasise the distinct dynamics of
self-citations compared to external citations, raising questions about the
mechanisms driving self-citation patterns. These findings provide new insights
into citation behaviours and highlight the limitations of existing approaches
in capturing the nuances of scientific impact.
|
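The abstract above estimates the probability that citations are assigned preferentially. One way to make that concrete is a mixture model, p * k_i / sum(k) + (1 - p) / N, fit by maximum likelihood; the sketch below simulates events with a known share and recovers it approximately. The mixture form is an assumption of this sketch, not necessarily the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_par_probability(events):
    """Maximum-likelihood estimate of the preferential-attachment share.

    Each event is (chosen_index, counts): the citation target picked and
    the citation counts of all candidates at that moment. The mixture
    model is p * k_i / sum(k) + (1 - p) / N.
    """
    def neg_log_lik(p):
        ll = 0.0
        for chosen, counts in events:
            k = np.asarray(counts, float)
            prob = p * k[chosen] / k.sum() + (1 - p) / len(k)
            ll += np.log(max(prob, 1e-300))
        return -ll
    res = minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), method="bounded")
    return res.x

# Simulate 5000 events with a true PAR share of 0.7, then recover it.
rng = np.random.default_rng(0)
counts = np.ones(200)
events = []
for _ in range(5000):
    if rng.random() < 0.7:
        chosen = rng.choice(200, p=counts / counts.sum())
    else:
        chosen = rng.integers(200)
    events.append((int(chosen), counts.copy()))
    counts[chosen] += 1
print(f"estimated PAR share: {estimate_par_probability(events):.2f}")
```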
2503.09819 | Yuwei Zhang | Yuwei Zhang, Jayanth Srinivasa, Gaowen Liu, Jingbo Shang | Attention Reveals More Than Tokens: Training-Free Long-Context Reasoning
with Attention-guided Retrieval | Work in progress | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) often exhibit substantially shorter effective
context lengths than their claimed capacities, especially when handling complex
reasoning tasks that require integrating information from multiple parts of a
long context and performing multi-step reasoning. Although Chain-of-Thought
(CoT) prompting has shown promise in reducing task complexity, our empirical
analysis reveals that it does not fully resolve this limitation. Through
controlled experiments, we identify poor recall of implicit facts as the
primary cause of failure, which significantly hampers reasoning performance.
Interestingly, we observe that the internal attention weights from the
generated CoT tokens can effectively ground implicit facts, even when these
facts are not explicitly recalled. Building on this insight, we propose a novel
training-free algorithm, Attrieval, which leverages attention weights to
retrieve relevant facts from the long context and incorporates them into the
reasoning process. Additionally, we find that selecting context tokens from CoT
tokens further improves performance. Our results demonstrate that Attrieval
enhances long-context reasoning capability notably on both synthetic and
real-world QA datasets with various models.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:34:14 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zhang",
"Yuwei",
""
],
[
"Srinivasa",
"Jayanth",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Shang",
"Jingbo",
""
]
] | TITLE: Attention Reveals More Than Tokens: Training-Free Long-Context Reasoning
with Attention-guided Retrieval
ABSTRACT: Large Language Models (LLMs) often exhibit substantially shorter effective
context lengths than their claimed capacities, especially when handling complex
reasoning tasks that require integrating information from multiple parts of a
long context and performing multi-step reasoning. Although Chain-of-Thought
(CoT) prompting has shown promise in reducing task complexity, our empirical
analysis reveals that it does not fully resolve this limitation. Through
controlled experiments, we identify poor recall of implicit facts as the
primary cause of failure, which significantly hampers reasoning performance.
Interestingly, we observe that the internal attention weights from the
generated CoT tokens can effectively ground implicit facts, even when these
facts are not explicitly recalled. Building on this insight, we propose a novel
training-free algorithm, Attrieval, which leverages attention weights to
retrieve relevant facts from the long context and incorporates them into the
reasoning process. Additionally, we find that selecting context tokens from CoT
tokens further improves performance. Our results demonstrate that Attrieval
enhances long-context reasoning capability notably on both synthetic and
real-world QA datasets with various models.
|
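The abstract above leverages attention weights from generated CoT tokens to retrieve facts from a long context. Below is a toy sketch of that idea using a random attention matrix; how attention is aggregated over layers and heads, and the chunking of the context, are assumptions of the sketch rather than details from the paper.

```python
import numpy as np

def attention_guided_retrieval(attn, chunk_spans, top_k=2):
    """Rank context chunks by the attention mass that generated tokens
    place on them, then return the top-k chunk indices.

    attn: (num_generated_tokens, num_context_tokens) attention weights,
    e.g. averaged over layers and heads. chunk_spans: [(start, end), ...]
    token ranges of candidate facts in the context.
    """
    scores = []
    for start, end in chunk_spans:
        # Average over generated tokens and over the tokens in the chunk.
        scores.append(attn[:, start:end].mean())
    order = np.argsort(scores)[::-1][:top_k]
    return order.tolist(), [float(scores[i]) for i in order]

# Toy attention: 10 generated tokens over a 50-token context, with extra
# mass on tokens 20-30 (an "implicit fact" the CoT never states outright).
rng = np.random.default_rng(1)
attn = rng.random((10, 50))
attn[:, 20:30] += 1.0
attn /= attn.sum(axis=1, keepdims=True)  # rows are attention distributions
chunks = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50)]
print(attention_guided_retrieval(attn, chunks))  # chunk 2 ranks first
```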
2503.09820 | Mohamed Elnoor | Mohamed Elnoor, Kasun Weerakoon, Gershom Seneviratne, Jing Liang,
Vignesh Rajagopal and Dinesh Manocha | Vi-LAD: Vision-Language Attention Distillation for Socially-Aware Robot
Navigation in Dynamic Environments | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce Vision-Language Attention Distillation (Vi-LAD), a novel
approach for distilling socially compliant navigation knowledge from a large
Vision-Language Model (VLM) into a lightweight transformer model for real-time
robotic navigation. Unlike traditional methods that rely on expert
demonstrations or human-annotated datasets, Vi-LAD performs knowledge
distillation and fine-tuning at the intermediate layer representation level
(i.e., attention maps) by leveraging the backbone of a pre-trained
vision-action model. These attention maps highlight key navigational regions in
a given scene, which serve as implicit guidance for socially aware motion
planning. Vi-LAD fine-tunes a transformer-based model using intermediate
attention maps extracted from the pre-trained vision-action model, combined
with attention-like semantic maps constructed from a large VLM. To achieve
this, we introduce a novel attention-level distillation loss that fuses
knowledge from both sources, generating augmented attention maps with enhanced
social awareness. These refined attention maps are then utilized as a
traversability costmap within a socially aware model predictive controller
(MPC) for navigation. We validate our approach through real-world experiments
on a Husky wheeled robot, demonstrating significant improvements over
state-of-the-art (SOTA) navigation methods. Our results show up to 14.2%-50%
improvement in success rate, which highlights the effectiveness of Vi-LAD in
enabling socially compliant and efficient robot navigation.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:38:23 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Elnoor",
"Mohamed",
""
],
[
"Weerakoon",
"Kasun",
""
],
[
"Seneviratne",
"Gershom",
""
],
[
"Liang",
"Jing",
""
],
[
"Rajagopal",
"Vignesh",
""
],
[
"Manocha",
"Dinesh",
""
]
] | TITLE: Vi-LAD: Vision-Language Attention Distillation for Socially-Aware Robot
Navigation in Dynamic Environments
ABSTRACT: We introduce Vision-Language Attention Distillation (Vi-LAD), a novel
approach for distilling socially compliant navigation knowledge from a large
Vision-Language Model (VLM) into a lightweight transformer model for real-time
robotic navigation. Unlike traditional methods that rely on expert
demonstrations or human-annotated datasets, Vi-LAD performs knowledge
distillation and fine-tuning at the intermediate layer representation level
(i.e., attention maps) by leveraging the backbone of a pre-trained
vision-action model. These attention maps highlight key navigational regions in
a given scene, which serve as implicit guidance for socially aware motion
planning. Vi-LAD fine-tunes a transformer-based model using intermediate
attention maps extracted from the pre-trained vision-action model, combined
with attention-like semantic maps constructed from a large VLM. To achieve
this, we introduce a novel attention-level distillation loss that fuses
knowledge from both sources, generating augmented attention maps with enhanced
social awareness. These refined attention maps are then utilized as a
traversability costmap within a socially aware model predictive controller
(MPC) for navigation. We validate our approach through real-world experiments
on a Husky wheeled robot, demonstrating significant improvements over
state-of-the-art (SOTA) navigation methods. Our results show up to 14.2%-50%
improvement in success rate, which highlights the effectiveness of Vi-LAD in
enabling socially compliant and efficient robot navigation.
|
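The abstract above fuses a vision-action teacher's attention maps with VLM-derived semantic maps in an attention-level distillation loss. A hedged sketch follows; the additive fusion weight `alpha` and the KL objective are assumptions of this sketch, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(student_attn, teacher_attn, vlm_map, alpha=0.5):
    """Distill a fused target attention map into a student model.

    All maps are (B, H, W) and normalized to sum to 1 per sample. The
    target mixes the vision-action teacher's attention with a VLM-derived
    semantic map; alpha sets the mix.
    """
    target = alpha * teacher_attn + (1 - alpha) * vlm_map
    s = student_attn.flatten(1).clamp_min(1e-8).log()   # log-probabilities
    return F.kl_div(s, target.flatten(1), reduction="batchmean")

def normalize(x):
    return x / x.flatten(1).sum(dim=1).view(-1, 1, 1)

b, h, w = 4, 16, 16
logits = torch.randn(b, h * w, requires_grad=True)
student = torch.softmax(logits, dim=1).view(b, h, w)
loss = attention_distillation_loss(student,
                                   normalize(torch.rand(b, h, w)),
                                   normalize(torch.rand(b, h, w)))
loss.backward()
print(float(loss))
```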
2503.09826 | Wenyi Lian | Wenyi Lian, Joakim Lindblad, Patrick Micke, Nataša Sladoje | Isolated Channel Vision Transformers: From Single-Channel Pretraining to
Multi-Channel Finetuning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs) have achieved remarkable success in standard RGB
image processing tasks. However, applying ViTs to multi-channel imaging (MCI)
data, e.g., for medical and remote sensing applications, remains a challenge.
In particular, MCI data often consist of layers acquired from different
modalities. Directly training ViTs on such data can obscure complementary
information and impair the performance. In this paper, we introduce a simple
yet effective pretraining framework for large-scale MCI datasets. Our method,
named Isolated Channel ViT (IC-ViT), patchifies image channels individually and
thereby enables pretraining for multimodal multi-channel tasks. We show that
this channel-wise patchifying is a key technique for MCI processing. More
importantly, one can pretrain the IC-ViT on single channels and finetune it on
downstream multi-channel datasets. This pretraining framework captures
dependencies between patches as well as channels and produces robust feature
representation. Experiments on various tasks and benchmarks, including JUMP-CP
and CHAMMI for cell microscopy imaging, and So2Sat-LCZ42 for satellite imaging,
show that the proposed IC-ViT delivers 4-14 percentage points of performance
improvement over existing channel-adaptive approaches. Further, its efficient
training makes it a suitable candidate for large-scale pretraining of
foundation models on heterogeneous data.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 20:45:02 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Lian",
"Wenyi",
""
],
[
"Lindblad",
"Joakim",
""
],
[
"Micke",
"Patrick",
""
],
[
"Sladoje",
"Nataša",
""
]
] | TITLE: Isolated Channel Vision Transformers: From Single-Channel Pretraining to
Multi-Channel Finetuning
ABSTRACT: Vision Transformers (ViTs) have achieved remarkable success in standard RGB
image processing tasks. However, applying ViTs to multi-channel imaging (MCI)
data, e.g., for medical and remote sensing applications, remains a challenge.
In particular, MCI data often consist of layers acquired from different
modalities. Directly training ViTs on such data can obscure complementary
information and impair the performance. In this paper, we introduce a simple
yet effective pretraining framework for large-scale MCI datasets. Our method,
named Isolated Channel ViT (IC-ViT), patchifies image channels individually and
thereby enables pretraining for multimodal multi-channel tasks. We show that
this channel-wise patchifying is a key technique for MCI processing. More
importantly, one can pretrain the IC-ViT on single channels and finetune it on
downstream multi-channel datasets. This pretraining framework captures
dependencies between patches as well as channels and produces robust feature
representation. Experiments on various tasks and benchmarks, including JUMP-CP
and CHAMMI for cell microscopy imaging, and So2Sat-LCZ42 for satellite imaging,
show that the proposed IC-ViT delivers 4-14 percentage points of performance
improvement over existing channel-adaptive approaches. Further, its efficient
training makes it a suitable candidate for large-scale pretraining of
foundation models on heterogeneous data.
|
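The abstract above rests on channel-wise patchifying: each channel is tokenized independently with a shared embedding, so single-channel pretraining transfers to any channel count. A minimal sketch of such a layer, with patch size and embedding width chosen arbitrarily:

```python
import torch
import torch.nn as nn

class IsolatedChannelPatchify(nn.Module):
    """Patchify each image channel independently with a shared embedding,
    so a C-channel image yields C times as many tokens as a joint
    RGB-style patchification would (a sketch of the channel-isolation idea).
    """
    def __init__(self, patch=16, dim=128):
        super().__init__()
        # A conv with in_channels=1 is applied to every channel separately.
        self.proj = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        x = x.reshape(b * c, 1, h, w)          # treat channels as a batch
        tok = self.proj(x)                     # (B*C, dim, H/p, W/p)
        tok = tok.flatten(2).transpose(1, 2)   # (B*C, N, dim)
        return tok.reshape(b, c * tok.shape[1], -1)  # (B, C*N, dim)

# Pretrain on single-channel inputs, then reuse on 5-channel data.
layer = IsolatedChannelPatchify()
print(layer(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 16, 128])
print(layer(torch.randn(2, 5, 64, 64)).shape)  # torch.Size([2, 80, 128])
```

The same weights serve both calls, which is what lets a model pretrained on single channels be finetuned on arbitrary multi-channel data.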
2503.09850 | Ali Eslamian | Ali Eslamian, Qiang Cheng | TabNSA: Native Sparse Attention for Efficient Tabular Data Learning | 5 pages, 4 tables | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tabular data poses unique challenges for deep learning due to its
heterogeneous features and lack of inherent spatial structure. This paper
introduces TabNSA, a novel deep learning architecture leveraging Native Sparse
Attention (NSA) specifically for efficient tabular data processing. TabNSA
incorporates a dynamic hierarchical sparse strategy, combining coarse-grained
feature compression with fine-grained feature selection to preserve both global
context awareness and local precision. By dynamically focusing on relevant
subsets of features, TabNSA effectively captures intricate feature
interactions. Extensive experiments demonstrate that TabNSA consistently
outperforms existing methods, including both deep learning architectures and
ensemble decision trees, achieving state-of-the-art performance across various
benchmark datasets.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 21:13:41 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Eslamian",
"Ali",
""
],
[
"Cheng",
"Qiang",
""
]
] | TITLE: TabNSA: Native Sparse Attention for Efficient Tabular Data Learning
ABSTRACT: Tabular data poses unique challenges for deep learning due to its
heterogeneous features and lack of inherent spatial structure. This paper
introduces TabNSA, a novel deep learning architecture leveraging Native Sparse
Attention (NSA) specifically for efficient tabular data processing. TabNSA
incorporates a dynamic hierarchical sparse strategy, combining coarse-grained
feature compression with fine-grained feature selection to preserve both global
context awareness and local precision. By dynamically focusing on relevant
subsets of features, TabNSA effectively captures intricate feature
interactions. Extensive experiments demonstrate that TabNSA consistently
outperforms existing methods, including both deep learning architectures and
ensemble decision trees, achieving state-of-the-art performance across various
benchmark datasets.
|
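The abstract above combines coarse-grained feature compression with fine-grained feature selection. The rough sketch below shows that hierarchical sparse pattern on tabular rows; it is illustrative only and not the TabNSA architecture.

```python
import torch
import torch.nn as nn

class SparseFeatureAttention(nn.Module):
    """Rough sketch of a hierarchical sparse strategy for tabular rows:
    a coarse, compressed summary of all feature embeddings plus a
    fine-grained selection of the top-k highest-scoring features.
    """
    def __init__(self, dim=32, k=4):
        super().__init__()
        self.embed = nn.Linear(1, dim)   # embed each scalar feature
        self.score = nn.Linear(dim, 1)   # relevance score per feature
        self.out = nn.Linear(2 * dim, 1)
        self.k = k

    def forward(self, x):                # x: (B, F) numeric features
        e = self.embed(x.unsqueeze(-1))  # (B, F, dim)
        coarse = e.mean(dim=1)           # compressed global context
        idx = self.score(e).squeeze(-1).topk(self.k, dim=1).indices
        fine = torch.gather(             # keep only the selected features
            e, 1, idx.unsqueeze(-1).expand(-1, -1, e.shape[-1])
        ).mean(dim=1)
        return self.out(torch.cat([coarse, fine], dim=-1))

model = SparseFeatureAttention()
print(model(torch.randn(8, 20)).shape)   # torch.Size([8, 1])
```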
2503.09852 | An Yang | An Yang, Chenyu Liu, Pengcheng Xia, Jun Du | StyleSpeaker: Audio-Enhanced Fine-Grained Style Modeling for
Speech-Driven 3D Facial Animation | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech-driven 3D facial animation is challenging due to the diversity in
speaking styles and the limited availability of 3D audio-visual data. Speech
predominantly dictates the coarse motion trends of the lip region, while
specific styles determine the details of lip motion and the overall facial
expressions. Prior works lack fine-grained learning in style modeling and do
not adequately consider style biases across varying speech conditions, which
reduce the accuracy of style modeling and hamper the adaptation capability to
unseen speakers. To address this, we propose a novel framework, StyleSpeaker,
which explicitly extracts speaking styles based on speaker characteristics
while accounting for style biases caused by different speeches. Specifically,
we utilize a style encoder to capture speakers' styles from facial motions and
enhance them according to motion preferences elicited by varying speech
conditions. The enhanced styles are then integrated into the coarse motion
features via a style infusion module, which employs a set of style primitives
to learn fine-grained style representation. Throughout training, we maintain
this set of style primitives to comprehensively model the entire style space.
Hence, StyleSpeaker possesses robust style modeling capability for seen
speakers and can rapidly adapt to unseen speakers without fine-tuning.
Additionally, we design a trend loss and a local contrastive loss to improve
the synchronization between synthesized motions and speeches. Extensive
qualitative and quantitative experiments on three public datasets demonstrate
that our method outperforms existing state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 21:18:20 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Yang",
"An",
""
],
[
"Liu",
"Chenyu",
""
],
[
"Xia",
"Pengcheng",
""
],
[
"Du",
"Jun",
""
]
] | TITLE: StyleSpeaker: Audio-Enhanced Fine-Grained Style Modeling for
Speech-Driven 3D Facial Animation
ABSTRACT: Speech-driven 3D facial animation is challenging due to the diversity in
speaking styles and the limited availability of 3D audio-visual data. Speech
predominantly dictates the coarse motion trends of the lip region, while
specific styles determine the details of lip motion and the overall facial
expressions. Prior works lack fine-grained learning in style modeling and do
not adequately consider style biases across varying speech conditions, which
reduce the accuracy of style modeling and hamper the adaptation capability to
unseen speakers. To address this, we propose a novel framework, StyleSpeaker,
which explicitly extracts speaking styles based on speaker characteristics
while accounting for style biases caused by different speeches. Specifically,
we utilize a style encoder to capture speakers' styles from facial motions and
enhance them according to motion preferences elicited by varying speech
conditions. The enhanced styles are then integrated into the coarse motion
features via a style infusion module, which employs a set of style primitives
to learn fine-grained style representation. Throughout training, we maintain
this set of style primitives to comprehensively model the entire style space.
Hence, StyleSpeaker possesses robust style modeling capability for seen
speakers and can rapidly adapt to unseen speakers without fine-tuning.
Additionally, we design a trend loss and a local contrastive loss to improve
the synchronization between synthesized motions and speeches. Extensive
qualitative and quantitative experiments on three public datasets demonstrate
that our method outperforms existing state-of-the-art approaches.
|
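The abstract above maintains a set of style primitives that a speaker's style vector attends over. A hedged sketch of such an infusion module follows; the shapes and the additive fusion into motion features are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class StylePrimitiveInfusion(nn.Module):
    """Sketch of a style-infusion step: a speaker's style vector attends
    over a learned bank of style primitives, and the resulting fine-grained
    style code is added to coarse motion features.
    """
    def __init__(self, dim=64, n_primitives=16):
        super().__init__()
        self.primitives = nn.Parameter(torch.randn(n_primitives, dim))
        self.query = nn.Linear(dim, dim)

    def forward(self, style, motion):   # style: (B, dim), motion: (B, T, dim)
        q = self.query(style)           # (B, dim)
        attn = torch.softmax(q @ self.primitives.T / q.shape[-1] ** 0.5, dim=-1)
        style_code = attn @ self.primitives            # (B, dim)
        return motion + style_code.unsqueeze(1)        # broadcast over time

infuse = StylePrimitiveInfusion()
out = infuse(torch.randn(2, 64), torch.randn(2, 100, 64))
print(out.shape)  # torch.Size([2, 100, 64])
```

Keeping the primitive bank fixed across speakers is what would let unseen speakers be expressed as new attention mixtures without fine-tuning.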
2503.09860 | Nahid Ul Islam | Nahid Ul Islam, DongAo Ma, Jiaxuan Pang, Shivasakthi Senthil Velan,
Michael Gotway, Jianming Liang | Foundation X: Integrating Classification, Localization, and Segmentation
through Lock-Release Pretraining Strategy for Chest X-ray Analysis | Accepted by WACV 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing robust and versatile deep-learning models is essential for
enhancing diagnostic accuracy and guiding clinical interventions in medical
imaging, but it requires a large amount of annotated data. The advancement of
deep learning has facilitated the creation of numerous medical datasets with
diverse expert-level annotations. Aggregating these datasets can maximize data
utilization and address the inadequacy of labeled data. However, the
heterogeneity of expert-level annotations across tasks such as classification,
localization, and segmentation presents a significant challenge for learning
from these datasets. To this end, we introduce Foundation X, an end-to-end
framework that utilizes diverse expert-level annotations from numerous public
datasets to train a foundation model capable of multiple tasks including
classification, localization, and segmentation. To address the challenges of
annotation and task heterogeneity, we propose a Lock-Release pretraining
strategy to enhance the cyclic learning from multiple datasets, combined with
the student-teacher learning paradigm, ensuring the model retains general
knowledge for all tasks while preventing overfitting to any single task. To
demonstrate the effectiveness of Foundation X, we trained a model using 11
chest X-ray datasets, covering annotations for classification, localization,
and segmentation tasks. Our experimental results show that Foundation X
achieves notable performance gains through extensive annotation utilization,
excels in cross-dataset and cross-task learning, and further enhances
performance in organ localization and segmentation tasks. All code and
pretrained models are publicly accessible at
https://github.com/jlianglab/Foundation_X.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 21:45:13 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Islam",
"Nahid Ul",
""
],
[
"Ma",
"DongAo",
""
],
[
"Pang",
"Jiaxuan",
""
],
[
"Velan",
"Shivasakthi Senthil",
""
],
[
"Gotway",
"Michael",
""
],
[
"Liang",
"Jianming",
""
]
] | TITLE: Foundation X: Integrating Classification, Localization, and Segmentation
through Lock-Release Pretraining Strategy for Chest X-ray Analysis
ABSTRACT: Developing robust and versatile deep-learning models is essential for
enhancing diagnostic accuracy and guiding clinical interventions in medical
imaging, but it requires a large amount of annotated data. The advancement of
deep learning has facilitated the creation of numerous medical datasets with
diverse expert-level annotations. Aggregating these datasets can maximize data
utilization and address the inadequacy of labeled data. However, the
heterogeneity of expert-level annotations across tasks such as classification,
localization, and segmentation presents a significant challenge for learning
from these datasets. To this end, we introduce Foundation X, an end-to-end
framework that utilizes diverse expert-level annotations from numerous public
datasets to train a foundation model capable of multiple tasks including
classification, localization, and segmentation. To address the challenges of
annotation and task heterogeneity, we propose a Lock-Release pretraining
strategy to enhance the cyclic learning from multiple datasets, combined with
the student-teacher learning paradigm, ensuring the model retains general
knowledge for all tasks while preventing overfitting to any single task. To
demonstrate the effectiveness of Foundation X, we trained a model using 11
chest X-ray datasets, covering annotations for classification, localization,
and segmentation tasks. Our experimental results show that Foundation X
achieves notable performance gains through extensive annotation utilization,
excels in cross-dataset and cross-task learning, and further enhances
performance in organ localization and segmentation tasks. All code and
pretrained models are publicly accessible at
https://github.com/jlianglab/Foundation_X.
|
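The abstract above describes a Lock-Release pretraining strategy for cycling through heterogeneous datasets. One plausible reading is sketched below as control flow with a stand-in training step; `train_one_epoch` is hypothetical, and the lock/release schedule here is a guess, not the paper's algorithm.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool):
    for p in module.parameters():
        p.requires_grad = trainable

def lock_release_cycle(backbone, heads, datasets, train_one_epoch):
    """Train each task head with the shared backbone locked, then release
    the backbone and train jointly, so task-specific heads adapt without
    immediately overwriting shared knowledge.
    """
    for task, ds in datasets.items():
        set_trainable(backbone, False)   # lock: only the head adapts
        train_one_epoch(backbone, heads[task], ds)
    for task, ds in datasets.items():
        set_trainable(backbone, True)    # release: shared weights update
        train_one_epoch(backbone, heads[task], ds)

# Stub usage with a no-op training step to show the control flow.
backbone = nn.Linear(8, 8)
heads = {"cls": nn.Linear(8, 2), "seg": nn.Linear(8, 4)}
lock_release_cycle(backbone, heads, {"cls": None, "seg": None},
                   lambda b, h, ds: None)
```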
2503.09868 | Phil Travis | Phil Travis, Jacob Bortnik, Troy Carter | Machine-learned trends in mirror configurations in the Large Plasma
Device | 16 pages, 11 figures | null | null | null | physics.plasm-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study demonstrates the efficacy of ML-based trend inference using data
from the Large Plasma Device (LAPD). The LAPD is a flexible basic plasma
science device with a high discharge repetition rate (0.25-1 Hz) and
reproducible plasmas capable of collecting high-spatial-resolution probe
measurements. A diverse dataset is collected through random sampling of LAPD
operational parameters, including the magnetic field strength and profile,
fueling settings, and the discharge voltage. Neural-network (NN) ensembles with uncertainty
quantification are trained to predict time-averaged ion saturation current
($I_\text{sat}$ -- proportional to density and the square root of electron
temperature) at any position within the dataset domain. Model-inferred trends,
such as the effects of introducing mirrors or changing the discharge voltage,
are consistent with current understanding. In addition, axial variation is
optimized via comprehensive search over $I_\text{sat}$ predictions.
Experimental validation of these optimized machine parameters demonstrates
qualitative agreement, with quantitative differences attributable to Langmuir
probe variation and cathode conditions. This investigation demonstrates, using
ML techniques, a new way of extracting insight from experiments and novel
optimization of plasmas. The code and data used in this study are made freely
available.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 22:00:48 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Travis",
"Phil",
""
],
[
"Bortnik",
"Jacob",
""
],
[
"Carter",
"Troy",
""
]
] | TITLE: Machine-learned trends in mirror configurations in the Large Plasma
Device
ABSTRACT: This study demonstrates the efficacy of ML-based trend inference using data
from the Large Plasma Device (LAPD). The LAPD is a flexible basic plasma
science device with a high discharge repetition rate (0.25-1 Hz) and
reproducible plasmas capable of collecting high-spatial-resolution probe
measurements. A diverse dataset is collected through random sampling of LAPD
operational parameters, including the magnetic field strength and profile,
fueling settings, and the discharge voltage. Neural-network (NN) ensembles with uncertainty
quantification are trained to predict time-averaged ion saturation current
($I_\text{sat}$ -- proportional to density and the square root of electron
temperature) at any position within the dataset domain. Model-inferred trends,
such as the effects of introducing mirrors or changing the discharge voltage,
are consistent with current understanding. In addition, axial variation is
optimized via comprehensive search over $I_\text{sat}$ predictions.
Experimental validation of these optimized machine parameters demonstrates
qualitative agreement, with quantitative differences attributable to Langmuir
probe variation and cathode conditions. This investigation demonstrates, using
ML techniques, a new way of extracting insight from experiments and novel
optimization of plasmas. The code and data used in this study are made freely
available.
|
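The abstract above trains NN ensembles with uncertainty quantification to predict $I_\text{sat}$. A compact sketch of bootstrap-ensemble prediction with the ensemble spread as the uncertainty proxy is shown below; the toy 1-D profile stands in for real LAPD probe data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, y_train, X_query, n_models=5):
    """Train an ensemble of small NNs on bootstrap resamples and report
    the mean prediction with the ensemble spread as an uncertainty proxy.
    """
    rng = np.random.default_rng(0)
    preds = []
    for i in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=i)
        net.fit(X_train[idx], y_train[idx])
        preds.append(net.predict(X_query))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy stand-in for Isat as a function of a single setting: a smooth profile.
X = np.random.default_rng(1).uniform(-1, 1, size=(200, 1))
y = np.exp(-4 * X[:, 0] ** 2) + 0.05 * np.random.default_rng(2).normal(size=200)
mean, std = ensemble_predict(X, y, np.linspace(-1, 1, 5).reshape(-1, 1))
print(np.round(mean, 2), np.round(std, 3))
```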
2503.09873 | Shoaib Meraj Sami | Shoaib Meraj Sami, Md Mahedi Hasan, Nasser M. Nasrabadi, Raghuveer Rao | FDCT: Frequency-Aware Decomposition and Cross-Modal Token-Alignment for
Multi-Sensor Target Classification | 12 pages Accepted in the IEEE Transactions on Aerospace and
Electronic Systems | null | 10.1109/TAES.2025.3550474 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In automatic target recognition (ATR) systems, sensors may fail to capture
discriminative, fine-grained detail features due to environmental conditions,
noise created by CMOS chips, occlusion, parallaxes, and sensor misalignment.
Therefore, multi-sensor image fusion is an effective choice to overcome these
constraints. However, multi-modal image sensors are heterogeneous and have
domain and granularity gaps. In addition, the multi-sensor images can be
misaligned due to intricate background clutters, fluctuating illumination
conditions, and uncontrolled sensor settings. In this paper, to overcome these
issues, we decompose, align, and fuse multiple image sensor data for target
classification. We extract the domain-specific and domain-invariant features
from each sensor data. We propose to develop a shared unified discrete token
(UDT) space between sensors to reduce the domain and granularity gaps.
Additionally, we develop an alignment module to overcome the misalignment
between multi-sensors and emphasize the discriminative representation of the
UDT space. In the alignment module, we introduce sparsity constraints to
provide a better cross-modal representation of the UDT space and robustness
against various sensor settings. We achieve superior classification performance
compared to single-modality classifiers and several state-of-the-art
multi-modal fusion algorithms on four multi-sensor ATR datasets.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 22:12:35 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Sami",
"Shoaib Meraj",
""
],
[
"Hasan",
"Md Mahedi",
""
],
[
"Nasrabadi",
"Nasser M.",
""
],
[
"Rao",
"Raghuveer",
""
]
] | TITLE: FDCT: Frequency-Aware Decomposition and Cross-Modal Token-Alignment for
Multi-Sensor Target Classification
ABSTRACT: In automatic target recognition (ATR) systems, sensors may fail to capture
discriminative, fine-grained detail features due to environmental conditions,
noise created by CMOS chips, occlusion, parallaxes, and sensor misalignment.
Therefore, multi-sensor image fusion is an effective choice to overcome these
constraints. However, multi-modal image sensors are heterogeneous and have
domain and granularity gaps. In addition, the multi-sensor images can be
misaligned due to intricate background clutters, fluctuating illumination
conditions, and uncontrolled sensor settings. In this paper, to overcome these
issues, we decompose, align, and fuse multiple image sensor data for target
classification. We extract the domain-specific and domain-invariant features
from each sensor data. We propose to develop a shared unified discrete token
(UDT) space between sensors to reduce the domain and granularity gaps.
Additionally, we develop an alignment module to overcome the misalignment
between multi-sensors and emphasize the discriminative representation of the
UDT space. In the alignment module, we introduce sparsity constraints to
provide a better cross-modal representation of the UDT space and robustness
against various sensor settings. We achieve superior classification performance
compared to single-modality classifiers and several state-of-the-art
multi-modal fusion algorithms on four multi-sensor ATR datasets.
|
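The abstract above builds a shared unified discrete token (UDT) space between sensors. A hedged sketch using vector quantization against one shared codebook with a straight-through gradient follows; the codebook size, and the choice of a plain VQ quantizer at all, are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class SharedDiscreteTokens(nn.Module):
    """Sketch of a unified discrete token space: features from any sensor
    are snapped to their nearest entry in one shared codebook, so
    heterogeneous modalities are expressed over a common vocabulary.
    """
    def __init__(self, n_codes=256, dim=64):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(n_codes, dim))

    def forward(self, z):                        # z: (B, N, dim)
        d = torch.cdist(z, self.codebook.unsqueeze(0).expand(z.shape[0], -1, -1))
        idx = d.argmin(dim=-1)                   # nearest code per token
        q = self.codebook[idx]                   # (B, N, dim)
        # Straight-through estimator: gradients bypass the argmin.
        return z + (q - z).detach(), idx

udt = SharedDiscreteTokens()
tokens_cam, ids_cam = udt(torch.randn(2, 10, 64))   # e.g. camera features
tokens_ir, ids_ir = udt(torch.randn(2, 10, 64))     # e.g. infrared features
print(tokens_cam.shape, ids_cam.shape)
```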
2503.09896 | Ping Chen Dr. | Ping Chen, David Hinote, Guoqing Chen | A Rule Based Solution to Co-reference Resolution in Clinical Text | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Objective: The aim of this study was to build an effective co-reference
resolution system tailored for the biomedical domain. Materials and Methods:
Experimental materials used in this study were provided by the 2011 i2b2 Natural
Language Processing Challenge. The 2011 i2b2 challenge involves coreference
resolution in medical documents. Concept mentions have been annotated in
clinical texts, and the mentions that co-refer in each document are to be
linked by coreference chains. Normally, there are two ways of constructing a
system to automatically discover co-referent links. One is to manually build
rules for co-reference resolution, and the other category of approaches is to
use machine learning systems to learn automatically from training datasets and
then perform the resolution task on testing datasets. Results: Experiments show
the existing co-reference resolution systems are able to find some of the
co-referent links, and our rule-based system performs well, finding the majority
of the co-referent links. Our system achieved 89.6% overall performance on
multiple medical datasets. Conclusion: The experiment results show that
manually crafted rules based on observation of training data is a valid way to
accomplish high performance in this coreference resolution task for the
critical biomedical domain.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 23:29:08 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Chen",
"Ping",
""
],
[
"Hinote",
"David",
""
],
[
"Chen",
"Guoqing",
""
]
] | TITLE: A Rule Based Solution to Co-reference Resolution in Clinical Text
ABSTRACT: Objective: The aim of this study was to build an effective co-reference
resolution system tailored for the biomedical domain. Materials and Methods:
Experimental materials used in this study were provided by the 2011 i2b2 Natural
Language Processing Challenge. The 2011 i2b2 challenge involves coreference
resolution in medical documents. Concept mentions have been annotated in
clinical texts, and the mentions that co-refer in each document are to be
linked by coreference chains. Normally, there are two ways of constructing a
system to automatically discover co-referent links. One is to manually build
rules for co-reference resolution, and the other category of approaches is to
use machine learning systems to learn automatically from training datasets and
then perform the resolution task on testing datasets. Results: Experiments show
the existing co-reference resolution systems are able to find some of the
co-referent links, and our rule-based system performs well, finding the majority
of the co-referent links. Our system achieved 89.6% overall performance on
multiple medical datasets. Conclusion: The experiment results show that
manually crafted rules based on observation of training data is a valid way to
accomplish high performance in this coreference resolution task for the
critical biomedical domain.
|
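The abstract above argues for manually crafted co-reference rules. Below is a deliberately simple example of one such high-precision rule, normalized exact-string matching, to show the flavor of a rule-based pass; a real clinical system would layer many more rules on top.

```python
from collections import defaultdict

def normalize(mention):
    """One very simple rule: case-fold and drop leading articles."""
    words = mention.lower().split()
    return " ".join(w for w in words if w not in {"the", "a", "an"})

def exact_match_chains(mentions):
    """Link mentions whose normalized strings match, the kind of
    high-precision rule a hand-crafted system might start from.
    Returns coreference chains as lists of mention indices.
    """
    groups = defaultdict(list)
    for i, m in enumerate(mentions):
        groups[normalize(m)].append(i)
    return [idx for idx in groups.values() if len(idx) > 1]

mentions = ["the patient", "Mr. Smith", "patient", "his chest pain",
            "The Patient", "chest pain"]
print(exact_match_chains(mentions))  # [[0, 2, 4]]
```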
2503.09902 | Zahra Abbasiantaeb | Zahra Abbasiantaeb, Simon Lupart, Leif Azzopardi, Jeffery Dalton,
Mohammad Aliannejadi | Conversational Gold: Evaluating Personalized Conversational Search
System using Gold Nuggets | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | The rise of personalized conversational search systems has been driven by
advancements in Large Language Models (LLMs), enabling these systems to
retrieve and generate answers for complex information needs. However, the
automatic evaluation of responses generated by Retrieval Augmented Generation
(RAG) systems remains an understudied challenge. In this paper, we introduce a
new resource for assessing the retrieval effectiveness and relevance of
response generated by RAG systems, using a nugget-based evaluation framework.
Built upon the foundation of TREC iKAT 2023, our dataset extends to the TREC
iKAT 2024 collection, which includes 17 conversations and 20,575 relevance
passage assessments, together with 2,279 extracted gold nuggets, and 62
manually written gold answers from NIST assessors. While maintaining the core
structure of its predecessor, this new collection enables a deeper exploration
of generation tasks in conversational settings. Key improvements in iKAT 2024
include: (1) "gold nuggets" -- concise, essential pieces of information
extracted from relevant passages of the collection -- which serve as a
foundation for automatic response evaluation; (2) manually written answers to
provide a gold standard for response evaluation; (3) unanswerable questions to
evaluate model hallucination; (4) expanded user personas, providing richer
contextual grounding; and (5) a transition from Personal Text Knowledge Base
(PTKB) ranking to PTKB classification and selection. Built on this resource, we
provide a framework for long-form answer generation evaluation, involving
nuggets extraction and nuggets matching, linked to retrieval. This establishes
a solid resource for advancing research in personalized conversational search
and long-form answer generation. Our resources are publicly available at
https://github.com/irlabamsterdam/CONE-RAG.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 23:44:10 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Abbasiantaeb",
"Zahra",
""
],
[
"Lupart",
"Simon",
""
],
[
"Azzopardi",
"Leif",
""
],
[
"Dalton",
"Jeffery",
""
],
[
"Aliannejadi",
"Mohammad",
""
]
] | TITLE: Conversational Gold: Evaluating Personalized Conversational Search
System using Gold Nuggets
ABSTRACT: The rise of personalized conversational search systems has been driven by
advancements in Large Language Models (LLMs), enabling these systems to
retrieve and generate answers for complex information needs. However, the
automatic evaluation of responses generated by Retrieval Augmented Generation
(RAG) systems remains an understudied challenge. In this paper, we introduce a
new resource for assessing the retrieval effectiveness and relevance of
response generated by RAG systems, using a nugget-based evaluation framework.
Built upon the foundation of TREC iKAT 2023, our dataset extends to the TREC
iKAT 2024 collection, which includes 17 conversations and 20,575 relevance
passage assessments, together with 2,279 extracted gold nuggets, and 62
manually written gold answers from NIST assessors. While maintaining the core
structure of its predecessor, this new collection enables a deeper exploration
of generation tasks in conversational settings. Key improvements in iKAT 2024
include: (1) "gold nuggets" -- concise, essential pieces of information
extracted from relevant passages of the collection -- which serve as a
foundation for automatic response evaluation; (2) manually written answers to
provide a gold standard for response evaluation; (3) unanswerable questions to
evaluate model hallucination; (4) expanded user personas, providing richer
contextual grounding; and (5) a transition from Personal Text Knowledge Base
(PTKB) ranking to PTKB classification and selection. Built on this resource, we
provide a framework for long-form answer generation evaluation, involving
nuggets extraction and nuggets matching, linked to retrieval. This establishes
a solid resource for advancing research in personalized conversational search
and long-form answer generation. Our resources are publicly available at
https://github.com/irlabamsterdam/CONE-RAG.
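As an illustration of the nugget-matching step described above, here is a minimal sketch of scoring a response against gold nuggets via embedding similarity; the model choice, threshold, and function name are assumptions for illustration, not the authors' implementation.

# Minimal nugget-recall sketch, assuming a sentence-transformers model;
# the 0.7 threshold and model name are illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def nugget_recall(response: str, gold_nuggets: list, threshold: float = 0.7) -> float:
    """Fraction of gold nuggets semantically covered by the response."""
    resp_emb = model.encode(response, convert_to_tensor=True)
    nugget_embs = model.encode(gold_nuggets, convert_to_tensor=True)
    sims = util.cos_sim(nugget_embs, resp_emb).squeeze(-1)  # one score per nugget
    matched = (sims >= threshold).sum().item()
    return matched / len(gold_nuggets)

print(nugget_recall("Amsterdam is the capital of the Netherlands.",
                    ["Amsterdam is the Dutch capital",
                     "It lies in North Holland"]))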
|
2503.09903 | Ti Nguyen | Ti Ti Nguyen, Thanh-Dung Le, Vu Nguyen Ha, Hong-fu Chou, Geoffrey
Eappen, Duc-Dung Tran, Hung Nguyen-Kha, Prabhu Thiruvasagam, Luis M.
Garces-Socarras, Jorge L. Gonzalez-Rios, Juan C. Merlano-Duncan, Symeon
Chatzinotas | A Semantic-Loss Function Modeling Framework With Task-Oriented Machine
Learning Perspectives | 6 pages, 11 figures | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | The integration of machine learning (ML) has significantly enhanced the
capabilities of Earth Observation (EO) systems by enabling the extraction of
actionable insights from complex datasets. However, the performance of
data-driven EO applications is heavily influenced by the data collection and
transmission processes, where limited satellite bandwidth and latency
constraints can hinder the full transmission of original data to the receivers.
To address this issue, adopting the concepts of Semantic Communication (SC)
offers a promising solution by prioritizing the transmission of essential data
semantics over raw information. Implementing SC for EO systems requires a
thorough understanding of the impact of data processing and communication
channel conditions on semantic loss at the processing center. This work
proposes a novel data-fitting framework to empirically model the semantic loss
using real-world EO datasets and domain-specific insights. The framework
quantifies two primary types of semantic loss: (1) source coding loss, assessed
via a data quality indicator measuring the impact of processing on raw source
data, and (2) transmission loss, evaluated by comparing practical transmission
performance against the Shannon limit. Semantic losses are estimated by
evaluating the accuracy of EO applications using four task-oriented ML models,
EfficientViT, MobileViT, ResNet50-DINO, and ResNet8-KD, on lossy image datasets
under varying channel conditions and compression ratios. These results underpin
a framework for efficient semantic-loss modeling in bandwidth-constrained EO
scenarios, enabling more reliable and effective operations.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 23:45:11 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Nguyen",
"Ti Ti",
""
],
[
"Le",
"Thanh-Dung",
""
],
[
"Ha",
"Vu Nguyen",
""
],
[
"Chou",
"Hong-fu",
""
],
[
"Eappen",
"Geoffrey",
""
],
[
"Tran",
"Duc-Dung",
""
],
[
"Nguyen-Kha",
"Hung",
""
],
[
"... | TITLE: A Semantic-Loss Function Modeling Framework With Task-Oriented Machine
Learning Perspectives
ABSTRACT: The integration of machine learning (ML) has significantly enhanced the
capabilities of Earth Observation (EO) systems by enabling the extraction of
actionable insights from complex datasets. However, the performance of
data-driven EO applications is heavily influenced by the data collection and
transmission processes, where limited satellite bandwidth and latency
constraints can hinder the full transmission of original data to the receivers.
To address this issue, adopting the concepts of Semantic Communication (SC)
offers a promising solution by prioritizing the transmission of essential data
semantics over raw information. Implementing SC for EO systems requires a
thorough understanding of the impact of data processing and communication
channel conditions on semantic loss at the processing center. This work
proposes a novel data-fitting framework to empirically model the semantic loss
using real-world EO datasets and domain-specific insights. The framework
quantifies two primary types of semantic loss: (1) source coding loss, assessed
via a data quality indicator measuring the impact of processing on raw source
data, and (2) transmission loss, evaluated by comparing practical transmission
performance against the Shannon limit. Semantic losses are estimated by
evaluating the accuracy of EO applications using four task-oriented ML models,
EfficientViT, MobileViT, ResNet50-DINO, and ResNet8-KD, on lossy image datasets
under varying channel conditions and compression ratios. These results underpin
a framework for efficient semantic-loss modeling in bandwidth-constrained EO
scenarios, enabling more reliable and effective operations.
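In the spirit of the empirical semantic-loss modeling described above, the following sketch fits task-accuracy degradation against compression ratio; the exponential functional form and the sample points are assumptions, not the paper's fitted model.

# Illustrative data-fitting of accuracy drop vs. compression ratio.
import numpy as np
from scipy.optimize import curve_fit

def semantic_loss(r, a, b, c):
    # hypothesized form: loss grows as compression ratio r increases
    return a * (1.0 - np.exp(-b * r)) + c

ratios = np.array([2, 4, 8, 16, 32], dtype=float)    # compression ratios (assumed)
acc_drop = np.array([0.01, 0.03, 0.07, 0.15, 0.28])  # measured accuracy drop (assumed)

params, _ = curve_fit(semantic_loss, ratios, acc_drop, p0=[0.3, 0.05, 0.0])
print("fitted (a, b, c):", params)
print("predicted loss at ratio 24:", semantic_loss(24.0, *params))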
|
2503.09905 | Allison Andreyev | Allison Andreyev | Quantization for OpenAI's Whisper Models: A Comparative Analysis | 7 pages | null | null | null | cs.SD cs.CL cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated speech recognition (ASR) models have gained prominence for
applications such as captioning, speech translation, and live transcription.
This paper studies Whisper and two model variants: one optimized for live
speech streaming and another for offline transcription. Notably, these models
have been found to generate hallucinated content, reducing transcription
reliability. Furthermore, larger model variants exhibit increased latency and
pose challenges for deployment on resource-constrained devices. This study
analyzes the similarities and differences between three Whisper models,
qualitatively examining their distinct capabilities. Next, this study
quantifies the impact of model quantization on latency and evaluates its
viability for edge deployment. Using the open source LibriSpeech dataset, this
paper evaluates the word error rate (WER) along with latency analysis of
whispercpp using 3 quantization methods (INT4, INT5, INT8). Results show that
quantization reduces latency by 19% and model size by 45%, while preserving
transcription accuracy. These findings provide insights into the optimal use
cases of different Whisper models and edge device deployment possibilities. All
code, datasets, and implementation details are available in a public GitHub
repository: https://github.com/allisonandreyev/WhisperQuantization.git
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 23:50:35 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Andreyev",
"Allison",
""
]
] | TITLE: Quantization for OpenAI's Whisper Models: A Comparative Analysis
ABSTRACT: Automated speech recognition (ASR) models have gained prominence for
applications such as captioning, speech translation, and live transcription.
This paper studies Whisper and two model variants: one optimized for live
speech streaming and another for offline transcription. Notably, these models
have been found to generate hallucinated content, reducing transcription
reliability. Furthermore, larger model variants exhibit increased latency and
pose challenges for deployment on resource-constrained devices. This study
analyzes the similarities and differences between three Whisper models,
qualitatively examining their distinct capabilities. Next, this study
quantifies the impact of model quantization on latency and evaluates its
viability for edge deployment. Using the open source LibriSpeech dataset, this
paper evaluates the word error rate (WER) along with latency analysis of
whispercpp using 3 quantization methods (INT4, INT5, INT8). Results show that
quantization reduces latency by 19% and model size by 45%, while preserving
transcription accuracy. These findings provide insights into the optimal use
cases of different Whisper models and edge device deployment possibilities. All
code, datasets, and implementation details are available in a public GitHub
repository: https://github.com/allisonandreyev/WhisperQuantization.git
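A minimal harness for the WER-plus-latency comparison described above might look like the following; the transcripts are toy stand-ins and transcribe() is a hypothetical placeholder, not the repository's API.

# WER comparison sketch using jiwer; transcripts below are illustrative.
import time
import jiwer

references = ["the quick brown fox jumps over the lazy dog"]
baseline_hyps = ["the quick brown fox jumps over the lazy dog"]
quantized_hyps = ["the quick brown fox jumps over a lazy dog"]

print("baseline WER: ", jiwer.wer(references, baseline_hyps))
print("quantized WER:", jiwer.wer(references, quantized_hyps))

# latency measurement pattern (fn would be a real transcription call)
def timed(fn, *args):
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0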
|
2503.09911 | Kohei Hayashi | Kohei Hayashi, Masanori Koyama, Julian Jorge Andrade Guerreiro | Inter-environmental world modeling for continuous and compositional
dynamics | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Various world model frameworks are being developed today based on
autoregressive frameworks that rely on discrete representations of actions and
observations, and these frameworks are succeeding in constructing interactive
generative models for the target environment of interest. Meanwhile, humans
demonstrate remarkable generalization abilities to combine experiences in
multiple environments to mentally simulate and learn to control agents in
diverse environments. Inspired by this human capability, we introduce World
modeling through Lie Action (WLA), an unsupervised framework that learns
continuous latent action representations to simulate across environments. WLA
learns a control interface with high controllability and predictive ability by
simultaneously modeling the dynamics of multiple environments using Lie group
theory and an object-centric autoencoder. On synthetic benchmarks and real-world
datasets, we demonstrate that WLA can be trained using only video frames and,
with minimal or no action labels, can quickly adapt to new environments with
novel action sets.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 00:02:54 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Hayashi",
"Kohei",
""
],
[
"Koyama",
"Masanori",
""
],
[
"Guerreiro",
"Julian Jorge Andrade",
""
]
] | TITLE: Inter-environmental world modeling for continuous and compositional
dynamics
ABSTRACT: Various world model frameworks are being developed today based on
autoregressive frameworks that rely on discrete representations of actions and
observations, and these frameworks are succeeding in constructing interactive
generative models for the target environment of interest. Meanwhile, humans
demonstrate remarkable generalization abilities to combine experiences in
multiple environments to mentally simulate and learn to control agents in
diverse environments. Inspired by this human capability, we introduce World
modeling through Lie Action (WLA), an unsupervised framework that learns
continuous latent action representations to simulate across environments. WLA
learns a control interface with high controllability and predictive ability by
simultaneously modeling the dynamics of multiple environments using Lie group
theory and an object-centric autoencoder. On synthetic benchmarks and real-world
datasets, we demonstrate that WLA can be trained using only video frames and,
with minimal or no action labels, can quickly adapt to new environments with
novel action sets.
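To make the Lie-group idea concrete, here is a toy sketch of a continuous action vector acting on a latent state through the matrix exponential; the dimensions, the linear map to the Lie algebra, and the skew-symmetric parameterization are assumptions, not WLA's architecture.

# Continuous latent action via the exponential map, batched in PyTorch.
import torch

latent_dim = 16
action_dim = 4

# map a continuous action vector to a Lie-algebra element (a matrix), then
# exponentiate to obtain a group element that transforms the latent state
to_algebra = torch.nn.Linear(action_dim, latent_dim * latent_dim)

def apply_action(z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
    A = to_algebra(a).view(-1, latent_dim, latent_dim)
    A = A - A.transpose(-1, -2)          # skew-symmetric -> element of so(n)
    G = torch.matrix_exp(A)              # group element in SO(n)
    return torch.einsum("bij,bj->bi", G, z)

z = torch.randn(8, latent_dim)
a = torch.randn(8, action_dim)
print(apply_action(z, a).shape)          # torch.Size([8, 16])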
|
2503.09929 | Weiwei Zhou | Weiwei Zhou, Chenkun Ling, Zefeng Cai | Emotion Recognition with CLIP and Sequential Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human emotion recognition plays a crucial role in facilitating seamless
interactions between humans and computers. In this paper, we present our
innovative methodology for tackling the Valence-Arousal (VA) Estimation
Challenge, the Expression Recognition Challenge, and the Action Unit (AU)
Detection Challenge, all within the framework of the 8th Workshop and
Competition on Affective Behavior Analysis in-the-wild (ABAW).
Our approach introduces a novel framework aimed at enhancing continuous
emotion recognition. This is achieved by fine-tuning the CLIP model with the
Aff-Wild2 dataset, which provides annotated expression labels. The result is a
fine-tuned model that serves as an efficient visual feature extractor,
significantly improving its robustness. To further boost the performance of
continuous emotion recognition, we incorporate Temporal Convolutional Network
(TCN) modules alongside Transformer Encoder modules into our system
architecture. The integration of these advanced components allows our model to
outperform baseline performance, demonstrating its ability to recognize human
emotions with greater accuracy and efficiency.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 01:02:06 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zhou",
"Weiwei",
""
],
[
"Ling",
"Chenkun",
""
],
[
"Cai",
"Zefeng",
""
]
] | TITLE: Emotion Recognition with CLIP and Sequential Learning
ABSTRACT: Human emotion recognition plays a crucial role in facilitating seamless
interactions between humans and computers. In this paper, we present our
innovative methodology for tackling the Valence-Arousal (VA) Estimation
Challenge, the Expression Recognition Challenge, and the Action Unit (AU)
Detection Challenge, all within the framework of the 8th Workshop and
Competition on Affective Behavior Analysis in-the-wild (ABAW).
Our approach introduces a novel framework aimed at enhancing continuous
emotion recognition. This is achieved by fine-tuning the CLIP model with the
Aff-Wild2 dataset, which provides annotated expression labels. The result is a
fine-tuned model that serves as an efficient visual feature extractor,
significantly improving its robustness. To further boost the performance of
continuous emotion recognition, we incorporate Temporal Convolutional Network
(TCN) modules alongside Transformer Encoder modules into our system
architecture. The integration of these advanced components allows our model to
outperform baseline performance, demonstrating its ability to recognize human
emotions with greater accuracy and efficiency.
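A rough sketch of the pipeline described above, with frozen CLIP visual features fed to a temporal head mixing a TCN-style convolution with a Transformer encoder, is shown below; feature dimensions and layer sizes are assumptions, not the authors' configuration.

# CLIP-feature temporal head: TCN conv followed by a Transformer encoder.
import torch
import torch.nn as nn

class TemporalHead(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_outputs=2):  # e.g. valence/arousal
        super().__init__()
        self.tcn = nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.out = nn.Linear(hidden, num_outputs)

    def forward(self, clip_feats):               # (batch, time, feat_dim)
        x = self.tcn(clip_feats.transpose(1, 2)).transpose(1, 2)
        x = self.encoder(x)
        return self.out(x)                       # per-frame predictions

head = TemporalHead()
print(head(torch.randn(2, 32, 512)).shape)       # torch.Size([2, 32, 2])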
|
2503.09938 | Dongliang Zhou | Sen Wang, Dongliang Zhou, Liang Xie, Chao Xu, Ye Yan, Erwei Yin | PanoGen++: Domain-Adapted Text-Guided Panoramic Environment Generation
for Vision-and-Language Navigation | This paper was accepted by Neural Networks | null | 10.1016/j.neunet.2025.107320 | null | cs.CV cs.MM cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-and-language navigation (VLN) tasks require agents to navigate
three-dimensional environments guided by natural language instructions,
offering substantial potential for diverse applications. However, the scarcity
of training data impedes progress in this field. This paper introduces
PanoGen++, a novel framework that addresses this limitation by generating
varied and pertinent panoramic environments for VLN tasks. PanoGen++
incorporates pre-trained diffusion models with domain-specific fine-tuning,
employing parameter-efficient techniques such as low-rank adaptation to
minimize computational costs. We investigate two settings for environment
generation: masked image inpainting and recursive image outpainting. The former
maximizes novel environment creation by inpainting masked regions based on
textual descriptions, while the latter facilitates agents' learning of spatial
relationships within panoramas. Empirical evaluations on room-to-room (R2R),
room-for-room (R4R), and cooperative vision-and-dialog navigation (CVDN)
datasets reveal significant performance enhancements: a 2.44% increase in
success rate on the R2R test leaderboard, a 0.63% improvement on the R4R
validation unseen set, and a 0.75-meter enhancement in goal progress on the
CVDN validation unseen set. PanoGen++ augments the diversity and relevance of
training environments, resulting in improved generalization and efficacy in VLN
tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 01:16:58 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Sen",
""
],
[
"Zhou",
"Dongliang",
""
],
[
"Xie",
"Liang",
""
],
[
"Xu",
"Chao",
""
],
[
"Yan",
"Ye",
""
],
[
"Yin",
"Erwei",
""
]
] | TITLE: PanoGen++: Domain-Adapted Text-Guided Panoramic Environment Generation
for Vision-and-Language Navigation
ABSTRACT: Vision-and-language navigation (VLN) tasks require agents to navigate
three-dimensional environments guided by natural language instructions,
offering substantial potential for diverse applications. However, the scarcity
of training data impedes progress in this field. This paper introduces
PanoGen++, a novel framework that addresses this limitation by generating
varied and pertinent panoramic environments for VLN tasks. PanoGen++
incorporates pre-trained diffusion models with domain-specific fine-tuning,
employing parameter-efficient techniques such as low-rank adaptation to
minimize computational costs. We investigate two settings for environment
generation: masked image inpainting and recursive image outpainting. The former
maximizes novel environment creation by inpainting masked regions based on
textual descriptions, while the latter facilitates agents' learning of spatial
relationships within panoramas. Empirical evaluations on room-to-room (R2R),
room-for-room (R4R), and cooperative vision-and-dialog navigation (CVDN)
datasets reveal significant performance enhancements: a 2.44% increase in
success rate on the R2R test leaderboard, a 0.63% improvement on the R4R
validation unseen set, and a 0.75-meter enhancement in goal progress on the
CVDN validation unseen set. PanoGen++ augments the diversity and relevance of
training environments, resulting in improved generalization and efficacy in VLN
tasks.
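For the masked-inpainting setting described above, a hedged sketch with a LoRA-adapted diffusion model could look like the following; the checkpoint name and LoRA path are placeholders, and this is not the authors' exact PanoGen++ pipeline.

# Text-guided panorama inpainting with a LoRA-adapted diffusion model.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/vln-domain-lora")   # domain-adapted weights (assumed path)

panorama = Image.open("pano.png").convert("RGB")
mask = Image.open("mask.png").convert("L")          # white = region to repaint

result = pipe(prompt="a hotel corridor with wooden doors",
              image=panorama, mask_image=mask).images[0]
result.save("pano_inpainted.png")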
|
2503.09941 | Wenyu Chen | Mu Chen, Wenyu Chen, Mingchuan Yang, Yuan Zhang, Tao Han, Xinchi Li,
Yunlong Li, Huaici Zhao | TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points
for 3D Environment Awareness | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D semantic occupancy has rapidly become a research focus in the fields of
robotics and autonomous driving environment perception due to its ability to
provide more realistic geometric perception and its closer integration with
downstream tasks. By performing occupancy prediction of the 3D space in the
environment, the ability and robustness of scene understanding can be
effectively improved. However, existing occupancy prediction tasks are
primarily modeled using voxel or point cloud-based approaches: voxel-based
network structures often suffer from the loss of spatial information due to the
voxelization process, while point cloud-based methods, although better at
retaining spatial location information, face limitations in representing
volumetric structural details. To address this issue, we propose a dual-modal
prediction method based on 3D Gaussian sets and sparse points, which balances
both spatial location and volumetric structural information, achieving higher
accuracy in semantic occupancy prediction. Specifically, our method adopts a
Transformer-based architecture, taking 3D Gaussian sets, sparse points, and
queries as inputs. Through the multi-layer structure of the Transformer, the
enhanced queries and 3D Gaussian sets jointly contribute to the semantic
occupancy prediction, and an adaptive fusion mechanism integrates the semantic
outputs of both modalities to generate the final prediction results.
Additionally, to further improve accuracy, we dynamically refine the point
cloud at each layer, allowing for more precise location information during
occupancy prediction. We conducted experiments on the Occ3D-nuScenes dataset,
and the experimental results demonstrate the superior performance of the
proposed method on IoU-based metrics.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 01:35:04 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Chen",
"Mu",
""
],
[
"Chen",
"Wenyu",
""
],
[
"Yang",
"Mingchuan",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Han",
"Tao",
""
],
[
"Li",
"Xinchi",
""
],
[
"Li",
"Yunlong",
""
],
[
"Zhao",
"Huaici",
... | TITLE: TGP: Two-modal occupancy prediction with 3D Gaussian and sparse points
for 3D Environment Awareness
ABSTRACT: 3D semantic occupancy has rapidly become a research focus in the fields of
robotics and autonomous driving environment perception due to its ability to
provide more realistic geometric perception and its closer integration with
downstream tasks. By performing occupancy prediction of the 3D space in the
environment, the ability and robustness of scene understanding can be
effectively improved. However, existing occupancy prediction tasks are
primarily modeled using voxel or point cloud-based approaches: voxel-based
network structures often suffer from the loss of spatial information due to the
voxelization process, while point cloud-based methods, although better at
retaining spatial location information, face limitations in representing
volumetric structural details. To address this issue, we propose a dual-modal
prediction method based on 3D Gaussian sets and sparse points, which balances
both spatial location and volumetric structural information, achieving higher
accuracy in semantic occupancy prediction. Specifically, our method adopts a
Transformer-based architecture, taking 3D Gaussian sets, sparse points, and
queries as inputs. Through the multi-layer structure of the Transformer, the
enhanced queries and 3D Gaussian sets jointly contribute to the semantic
occupancy prediction, and an adaptive fusion mechanism integrates the semantic
outputs of both modalities to generate the final prediction results.
Additionally, to further improve accuracy, we dynamically refine the point
cloud at each layer, allowing for more precise location information during
occupancy prediction. We conducted experiments on the Occ3D-nuScenes dataset,
and the experimental results demonstrate the superior performance of the
proposed method on IoU-based metrics.
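The adaptive fusion step described above can be illustrated with a toy gate that blends per-voxel semantic logits from the two branches; the shapes and the gating design are assumptions for illustration only.

# Learned gate blending Gaussian-branch and point-branch logits.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, num_classes=18):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * num_classes, num_classes), nn.Sigmoid())

    def forward(self, gauss_logits, point_logits):   # both (voxels, classes)
        w = self.gate(torch.cat([gauss_logits, point_logits], dim=-1))
        return w * gauss_logits + (1 - w) * point_logits

fuse = AdaptiveFusion()
print(fuse(torch.randn(100, 18), torch.randn(100, 18)).shape)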
|
2503.09950 | Yuxiang Fu | Yuxiang Fu, Qi Yan, Lele Wang, Ke Li, Renjie Liao | MoFlow: One-Step Flow Matching for Human Trajectory Forecasting via
Implicit Maximum Likelihood Estimation based Distillation | Accepted to CVPR 2025 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of human trajectory forecasting, which
aims to predict the inherently multi-modal future movements of humans based on
their past trajectories and other contextual cues. We propose a novel motion
prediction conditional flow matching model, termed MoFlow, to predict $K$-shot
future trajectories for all agents in a given scene. We design a novel flow
matching loss function that not only ensures at least one of the $K$ sets of
future trajectories is accurate but also encourages all $K$ sets of future
trajectories to be diverse and plausible. Furthermore, by leveraging the
implicit maximum likelihood estimation (IMLE), we propose a novel distillation
method for flow models that only requires samples from the teacher model.
Extensive experiments on the real-world datasets, including SportVU NBA games,
ETH-UCY, and SDD, demonstrate that both our teacher flow model and the
IMLE-distilled student model achieve state-of-the-art performance. These models
can generate diverse trajectories that are physically and socially plausible.
Moreover, our one-step student model is $\textbf{100}$ times faster than the
teacher flow model during sampling. The code, model, and data are available at
our project page: https://moflow-imle.github.io
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 01:53:05 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Fu",
"Yuxiang",
""
],
[
"Yan",
"Qi",
""
],
[
"Wang",
"Lele",
""
],
[
"Li",
"Ke",
""
],
[
"Liao",
"Renjie",
""
]
] | TITLE: MoFlow: One-Step Flow Matching for Human Trajectory Forecasting via
Implicit Maximum Likelihood Estimation based Distillation
ABSTRACT: In this paper, we address the problem of human trajectory forecasting, which
aims to predict the inherently multi-modal future movements of humans based on
their past trajectories and other contextual cues. We propose a novel motion
prediction conditional flow matching model, termed MoFlow, to predict $K$-shot
future trajectories for all agents in a given scene. We design a novel flow
matching loss function that not only ensures at least one of the $K$ sets of
future trajectories is accurate but also encourages all $K$ sets of future
trajectories to be diverse and plausible. Furthermore, by leveraging the
implicit maximum likelihood estimation (IMLE), we propose a novel distillation
method for flow models that only requires samples from the teacher model.
Extensive experiments on the real-world datasets, including SportVU NBA games,
ETH-UCY, and SDD, demonstrate that both our teacher flow model and the
IMLE-distilled student model achieve state-of-the-art performance. These models
can generate diverse trajectories that are physically and socially plausible.
Moreover, our one-step student model is $\textbf{100}$ times faster than the
teacher flow model during sampling. The code, model, and data are available at
our project page: https://moflow-imle.github.io
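The "at least one of $K$ accurate, all $K$ plausible" objective described above can be sketched as a best-of-K loss with an averaged regularizer; the 0.1 weighting and the squared-error form are assumptions, not MoFlow's exact loss.

# Best-of-K flow-matching-style loss sketch.
import torch

def k_shot_loss(pred, target, lam: float = 0.1):
    # pred: (batch, K, horizon, 2) candidate trajectory fields
    # target: (batch, horizon, 2) ground-truth field
    err = ((pred - target.unsqueeze(1)) ** 2).mean(dim=(2, 3))  # (batch, K)
    best = err.min(dim=1).values.mean()      # at least one candidate accurate
    spread = err.mean()                      # keep every candidate reasonable
    return best + lam * spread

pred = torch.randn(4, 20, 12, 2)
target = torch.randn(4, 12, 2)
print(k_shot_loss(pred, target))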
|
2503.09959 | Jiansheng Li | Jiansheng Li, Haotian Song, Jinni Zhou, Qiang Nie and Yi Cai | RMG: Real-Time Expressive Motion Generation with Self-collision
Avoidance for 6-DOF Companion Robotic Arms | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The six-degree-of-freedom (6-DOF) robotic arm has gained widespread
application in human-coexisting environments. While previous research has
predominantly focused on functional motion generation, the critical aspect of
expressive motion in human-robot interaction remains largely unexplored. This
paper presents a novel real-time motion generation planner that enhances
interactivity by creating expressive robotic motions between arbitrary start
and end states within predefined time constraints. Our approach involves three
key contributions: first, we develop a mapping algorithm to construct an
expressive motion dataset derived from human dance movements; second, we train
motion generation models in both Cartesian and joint spaces using this dataset;
third, we introduce an optimization algorithm that guarantees smooth,
collision-free motion while maintaining the intended expressive style.
Experimental results demonstrate the effectiveness of our method, which can
generate expressive and generalized motions in under 0.5 seconds while
satisfying all specified constraints.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 02:02:01 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Jiansheng",
""
],
[
"Song",
"Haotian",
""
],
[
"Zhou",
"Jinni",
""
],
[
"Nie",
"Qiang",
""
],
[
"Cai",
"Yi",
""
]
] | TITLE: RMG: Real-Time Expressive Motion Generation with Self-collision
Avoidance for 6-DOF Companion Robotic Arms
ABSTRACT: The six-degree-of-freedom (6-DOF) robotic arm has gained widespread
application in human-coexisting environments. While previous research has
predominantly focused on functional motion generation, the critical aspect of
expressive motion in human-robot interaction remains largely unexplored. This
paper presents a novel real-time motion generation planner that enhances
interactivity by creating expressive robotic motions between arbitrary start
and end states within predefined time constraints. Our approach involves three
key contributions: first, we develop a mapping algorithm to construct an
expressive motion dataset derived from human dance movements; second, we train
motion generation models in both Cartesian and joint spaces using this dataset;
third, we introduce an optimization algorithm that guarantees smooth,
collision-free motion while maintaining the intended expressive style.
Experimental results demonstrate the effectiveness of our method, which can
generate expressive and generalized motions in under 0.5 seconds while
satisfying all specified constraints.
|
2503.09960 | Muhammad Shahbaz Khan | Muhammad Hassan Jamal, Abdulwahab Alazeb, Shahid Allah Bakhsh, Wadii
Boulila, Syed Aziz Shah, Aizaz Ahmad Khattak and Muhammad Shahbaz Khan | Optimizing Fire Safety: Reducing False Alarms Using Advanced Machine
Learning Techniques | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fire safety practices are important to reduce the extent of destruction
caused by fire. While smoke alarms help save lives, firefighters struggle with
the increasing number of false alarms. This paper presents a precise and
efficient Weighted ensemble model for decreasing false alarms. It estimates the
density, computes weights according to the high and low-density regions,
forwards the high-region weights to KNN and the low-region weights to XGBoost,
and combines the predictions. The proposed model is effective at reducing response
time, increasing fire safety, and minimizing the damage that fires cause. A
specifically designed dataset for smoke detection is utilized to test the
proposed model. In addition, a variety of ML models, such as Logistic
Regression (LR), Decision Tree (DT), Random Forest (RF), Naïve Bayes (NB),
K-Nearest Neighbour (KNN), Support Vector Machine (SVM), Extreme Gradient
Boosting (XGBoost), Adaptive Boosting (ADAB), have also been utilized. To
maximize the use of the smoke detection dataset, all the algorithms utilize the
SMOTE re-sampling technique. After evaluating the assessment criteria, this
paper presents a concise summary of the comprehensive findings obtained by
comparing the outcomes of all models.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 02:07:14 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Jamal",
"Muhammad Hassan",
""
],
[
"Alazeb",
"Abdulwahab",
""
],
[
"Bakhsh",
"Shahid Allah",
""
],
[
"Boulila",
"Wadii",
""
],
[
"Shah",
"Syed Aziz",
""
],
[
"Khattak",
"Aizaz Ahmad",
""
],
[
"Khan",
"Muhammad... | TITLE: Optimizing Fire Safety: Reducing False Alarms Using Advanced Machine
Learning Techniques
ABSTRACT: Fire safety practices are important to reduce the extent of destruction
caused by fire. While smoke alarms help save lives, firefighters struggle with
the increasing number of false alarms. This paper presents a precise and
efficient Weighted ensemble model for decreasing false alarms. It estimates the
density, computes weights according to the high and low-density regions,
forwards the high-region weights to KNN and the low-region weights to XGBoost,
and combines the predictions. The proposed model is effective at reducing response
time, increasing fire safety, and minimizing the damage that fires cause. A
specifically designed dataset for smoke detection is utilized to test the
proposed model. In addition, a variety of ML models, such as Logistic
Regression (LR), Decision Tree (DT), Random Forest (RF), Naïve Bayes (NB),
K-Nearest Neighbour (KNN), Support Vector Machine (SVM), Extreme Gradient
Boosting (XGBoost), Adaptive Boosting (ADAB), have also been utilized. To
maximize the use of the smoke detection dataset, all the algorithms utilize the
SMOTE re-sampling technique. After evaluating the assessment criteria, this
paper presents a concise summary of the comprehensive findings obtained by
comparing the outcomes of all models.
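A hedged reconstruction of the routing idea described above: estimate sample density, send high-density points to KNN and low-density points to XGBoost, after SMOTE re-sampling. The stand-in data, the median-density routing rule, and the hyperparameters are assumptions.

import numpy as np
from sklearn.neighbors import KernelDensity, KNeighborsClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

X, y = np.random.rand(500, 6), np.random.randint(0, 2, 500)   # stand-in data
X, y = SMOTE(random_state=0).fit_resample(X, y)               # balance classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier().fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=100).fit(X_tr, y_tr)

kde = KernelDensity().fit(X_tr)
dens = kde.score_samples(X_te)
high = dens >= np.median(dens)                                # routing rule (assumed)

pred = np.where(high, knn.predict(X_te), xgb.predict(X_te))
print("routed accuracy:", (pred == y_te).mean())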
|
2503.09964 | Usman Naseem | Bhavik Chandna, Mariam Aboujenane, Usman Naseem | ExtremeAIGC: Benchmarking LMM Vulnerability to AI-Generated Extremist
Content | Preprint | null | null | null | cs.CR cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Multimodal Models (LMMs) are increasingly vulnerable to AI-generated
extremist content, including photorealistic images and text, which can be used
to bypass safety mechanisms and generate harmful outputs. However, existing
datasets for evaluating LMM robustness offer limited exploration of extremist
content, often lacking AI-generated images, diverse image generation models,
and comprehensive coverage of historical events, which hinders a complete
assessment of model vulnerabilities. To fill this gap, we introduce
ExtremeAIGC, a benchmark dataset and evaluation framework designed to assess
LMM vulnerabilities against such content. ExtremeAIGC simulates real-world
events and malicious use cases by curating diverse text- and image-based
examples crafted using state-of-the-art image generation techniques. Our study
reveals alarming weaknesses in LMMs, demonstrating that even cutting-edge
safety measures fail to prevent the generation of extremist material. We
systematically quantify the success rates of various attack strategies,
exposing critical gaps in current defenses and emphasizing the need for more
robust mitigation strategies.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 02:10:29 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Chandna",
"Bhavik",
""
],
[
"Aboujenane",
"Mariam",
""
],
[
"Naseem",
"Usman",
""
]
] | TITLE: ExtremeAIGC: Benchmarking LMM Vulnerability to AI-Generated Extremist
Content
ABSTRACT: Large Multimodal Models (LMMs) are increasingly vulnerable to AI-generated
extremist content, including photorealistic images and text, which can be used
to bypass safety mechanisms and generate harmful outputs. However, existing
datasets for evaluating LMM robustness offer limited exploration of extremist
content, often lacking AI-generated images, diverse image generation models,
and comprehensive coverage of historical events, which hinders a complete
assessment of model vulnerabilities. To fill this gap, we introduce
ExtremeAIGC, a benchmark dataset and evaluation framework designed to assess
LMM vulnerabilities against such content. ExtremeAIGC simulates real-world
events and malicious use cases by curating diverse text- and image-based
examples crafted using state-of-the-art image generation techniques. Our study
reveals alarming weaknesses in LMMs, demonstrating that even cutting-edge
safety measures fail to prevent the generation of extremist material. We
systematically quantify the success rates of various attack strategies,
exposing critical gaps in current defenses and emphasizing the need for more
robust mitigation strategies.
|
2503.09969 | Nathan Drenkow | Nathan Drenkow and Mitchell Pavlak and Keith Harrigian and Ayah
Zirikly and Adarsh Subbaswamy and Mathias Unberath | Detecting Dataset Bias in Medical AI: A Generalized and
Modality-Agnostic Auditing Framework | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Data-driven AI is establishing itself at the center of evidence-based
medicine. However, reports of shortcomings and unexpected behavior are growing
due to AI's reliance on association-based learning. A major reason for this
behavior: latent bias in machine learning datasets can be amplified during
training and/or hidden during testing. We present a data modality-agnostic
auditing framework for generating targeted hypotheses about sources of bias
which we refer to as Generalized Attribute Utility and Detectability-Induced
bias Testing (G-AUDIT) for datasets. Our method examines the relationship
between task-level annotations and data properties including protected
attributes (e.g., race, age, sex) and environment and acquisition
characteristics (e.g., clinical site, imaging protocols). G-AUDIT automatically
quantifies the extent to which the observed data attributes may enable shortcut
learning, or in the case of testing data, hide predictions made based on
spurious associations. We demonstrate the broad applicability and value of our
method by analyzing large-scale medical datasets for three distinct modalities
and learning tasks: skin lesion classification in images, stigmatizing language
classification in Electronic Health Records (EHR), and mortality prediction for
ICU tabular data. In each setting, G-AUDIT successfully identifies subtle
biases commonly overlooked by traditional qualitative methods that focus
primarily on social and ethical objectives, underscoring its practical value in
exposing dataset-level risks and supporting the downstream development of
reliable AI systems. Our method paves the way for achieving deeper
understanding of machine learning datasets throughout the AI development
life-cycle from initial prototyping all the way to regulation, and creates
opportunities to reduce model bias, enabling safer and more trustworthy AI
systems.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 02:16:48 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Drenkow",
"Nathan",
""
],
[
"Pavlak",
"Mitchell",
""
],
[
"Harrigian",
"Keith",
""
],
[
"Zirikly",
"Ayah",
""
],
[
"Subbaswamy",
"Adarsh",
""
],
[
"Unberath",
"Mathias",
""
]
] | TITLE: Detecting Dataset Bias in Medical AI: A Generalized and
Modality-Agnostic Auditing Framework
ABSTRACT: Data-driven AI is establishing itself at the center of evidence-based
medicine. However, reports of shortcomings and unexpected behavior are growing
due to AI's reliance on association-based learning. A major reason for this
behavior: latent bias in machine learning datasets can be amplified during
training and/or hidden during testing. We present a data modality-agnostic
auditing framework for generating targeted hypotheses about sources of bias
which we refer to as Generalized Attribute Utility and Detectability-Induced
bias Testing (G-AUDIT) for datasets. Our method examines the relationship
between task-level annotations and data properties including protected
attributes (e.g., race, age, sex) and environment and acquisition
characteristics (e.g., clinical site, imaging protocols). G-AUDIT automatically
quantifies the extent to which the observed data attributes may enable shortcut
learning, or in the case of testing data, hide predictions made based on
spurious associations. We demonstrate the broad applicability and value of our
method by analyzing large-scale medical datasets for three distinct modalities
and learning tasks: skin lesion classification in images, stigmatizing language
classification in Electronic Health Records (EHR), and mortality prediction for
ICU tabular data. In each setting, G-AUDIT successfully identifies subtle
biases commonly overlooked by traditional qualitative methods that focus
primarily on social and ethical objectives, underscoring its practical value in
exposing dataset-level risks and supporting the downstream development of
reliable AI systems. Our method paves the way for achieving deeper
understanding of machine learning datasets throughout the AI development
life-cycle from initial prototyping all the way to regulation, and creates
opportunities to reduce model bias, enabling safer and more trustworthy AI
systems.
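A simplified illustration of a detectability check in the spirit of G-AUDIT: if a protected or acquisition attribute predicts the task label well above chance, the dataset may enable shortcut learning. The synthetic data and the AUC threshold below are assumptions, not the paper's procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
site = rng.integers(0, 3, 1000)                                # e.g., clinical site id
label = (site == 2).astype(int) ^ (rng.random(1000) < 0.2)     # site leaks into label

auc = cross_val_score(LogisticRegression(), site.reshape(-1, 1), label,
                      scoring="roc_auc", cv=5).mean()
print(f"attribute->label AUC = {auc:.2f}"
      + ("  (possible shortcut)" if auc > 0.6 else ""))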
|
2503.09974 | Jiaqi Wu | Jiaqi Wu, Junbiao Pang, Qingming Huang | Uncertainty-aware Long-tailed Weights Model the Utility of Pseudo-labels
for Semi-supervised Learning | arXiv admin note: text overlap with arXiv:2408.04150 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current Semi-supervised Learning (SSL) adopts the pseudo-labeling strategy
and further filters pseudo-labels based on confidence thresholds. However, this
mechanism has notable drawbacks: 1) setting a reasonable threshold is an open
problem that significantly influences the selection of high-quality
pseudo-labels; and 2) deep models often exhibit the over-confidence phenomenon
which makes the confidence value an unreliable indicator for assessing the
quality of pseudo-labels due to the scarcity of labeled data. In this paper, we
propose an Uncertainty-aware Ensemble Structure (UES) to assess the utility of
pseudo-labels for unlabeled samples. We further model the utility of
pseudo-labels as long-tailed weights to avoid the open problem of setting the
threshold. Concretely, the long-tailed weighting ensures that even unreliable
pseudo-labels still contribute to enhancing the model's robustness. Moreover,
UES is lightweight and architecture-agnostic, easily
extending to various computer vision tasks, including classification and
regression. Experimental results demonstrate that combining the proposed method
with DualPose leads to a 3.47% improvement in Percentage of Correct Keypoints
(PCK) on the Sniffing dataset with 100 data points (30 labeled), a 7.29%
improvement in PCK on the FLIC dataset with 100 data points (50 labeled), and a
3.91% improvement in PCK on the LSP dataset with 200 data points (100 labeled).
Furthermore, when combined with FixMatch, the proposed method achieves a 0.2%
accuracy improvement on the CIFAR-10 dataset with 40 labeled data points and a
0.26% accuracy improvement on the CIFAR-100 dataset with 400 labeled data
points.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 02:21:04 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wu",
"Jiaqi",
""
],
[
"Pang",
"Junbiao",
""
],
[
"Huang",
"Qingming",
""
]
] | TITLE: Uncertainty-aware Long-tailed Weights Model the Utility of Pseudo-labels
for Semi-supervised Learning
ABSTRACT: Current Semi-supervised Learning (SSL) adopts the pseudo-labeling strategy
and further filters pseudo-labels based on confidence thresholds. However, this
mechanism has notable drawbacks: 1) setting a reasonable threshold is an open
problem that significantly influences the selection of high-quality
pseudo-labels; and 2) deep models often exhibit the over-confidence phenomenon
which makes the confidence value an unreliable indicator for assessing the
quality of pseudo-labels due to the scarcity of labeled data. In this paper, we
propose an Uncertainty-aware Ensemble Structure (UES) to assess the utility of
pseudo-labels for unlabeled samples. We further model the utility of
pseudo-labels as long-tailed weights to avoid the open problem of setting the
threshold. Concretely, the long-tailed weighting ensures that even unreliable
pseudo-labels still contribute to enhancing the model's robustness. Moreover,
UES is lightweight and architecture-agnostic, easily
extending to various computer vision tasks, including classification and
regression. Experimental results demonstrate that combining the proposed method
with DualPose leads to a 3.47% improvement in Percentage of Correct Keypoints
(PCK) on the Sniffing dataset with 100 data points (30 labeled), a 7.29%
improvement in PCK on the FLIC dataset with 100 data points (50 labeled), and a
3.91% improvement in PCK on the LSP dataset with 200 data points (100 labeled).
Furthermore, when combined with FixMatch, the proposed method achieves a 0.2%
accuracy improvement on the CIFAR-10 dataset with 40 labeled data points and a
0.26% accuracy improvement on the CIFAR-100 dataset with 400 labeled data
points.
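A toy rendering of the idea that every pseudo-label receives a nonzero, long-tailed weight derived from ensemble disagreement, instead of a hard confidence threshold; the weighting function below is an assumed stand-in for UES, not the paper's formulation.

import torch

def pseudo_label_weights(ensemble_probs: torch.Tensor) -> torch.Tensor:
    # ensemble_probs: (models, batch, classes) softmax outputs
    mean_p = ensemble_probs.mean(dim=0)
    var = ensemble_probs.var(dim=0).sum(dim=-1)        # disagreement per sample
    w = 1.0 / (1.0 + var)                              # heavy tail, never zero
    return w / w.sum()                                 # normalized sample weights

probs = torch.softmax(torch.randn(5, 8, 10), dim=-1)
print(pseudo_label_weights(probs))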
|
2503.09978 | Jiacheng Xie | Jiacheng Xie, Hua-Chieh Shao, Yunxiang Li, Shunyu Yan, Chenyang Shen,
Jing Wang, You Zhang | A Conditional Point Cloud Diffusion Model for Deformable Liver Motion
Tracking Via a Single Arbitrarily-Angled X-ray Projection | 25 pages, 7 figures | null | null | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deformable liver motion tracking using a single X-ray projection enables
real-time motion monitoring and treatment intervention. We introduce a
conditional point cloud diffusion model-based framework for accurate and robust
liver motion tracking from arbitrarily angled single X-ray projections
(PCD-Liver), which estimates volumetric liver motion by solving deformable
vector fields (DVFs) of a prior liver surface point cloud based on a single
X-ray image. The model is patient-specific and consists of two main components:
a rigid alignment model to estimate the liver's overall shifts and a
conditional point cloud diffusion model that further corrects for liver surface
deformations. Conditioned on motion-encoded features extracted from a single
X-ray projection via a geometry-informed feature pooling layer, the diffusion
model iteratively solves detailed liver surface DVFs in a projection
angle-agnostic manner. The liver surface motion estimated by PCD-Liver serves
as a boundary condition for a U-Net-based biomechanical model to infer internal
liver motion and localize liver tumors. A dataset of ten liver cancer patients
was used for evaluation. The accuracy of liver point cloud motion estimation
was assessed using root mean square error (RMSE) and 95th-percentile Hausdorff
distance (HD95), while liver tumor localization error was quantified using
center-of-mass error (COME). The mean (standard deviation) RMSE, HD95, and COME
of the prior liver or tumor before motion estimation were 8.86(1.51) mm,
10.88(2.56) mm, and 9.41(3.08) mm, respectively. After PCD-Liver motion
estimation, the corresponding values improved to 3.59(0.28) mm, 4.29(0.62) mm,
and 3.45(0.96) mm. Under highly noisy conditions, PCD-Liver maintained stable
performance. This study presents an accurate and robust framework for
deformable liver motion estimation and tumor localization in image-guided
radiotherapy.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 02:27:26 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Xie",
"Jiacheng",
""
],
[
"Shao",
"Hua-Chieh",
""
],
[
"Li",
"Yunxiang",
""
],
[
"Yan",
"Shunyu",
""
],
[
"Shen",
"Chenyang",
""
],
[
"Wang",
"Jing",
""
],
[
"Zhang",
"You",
""
]
] | TITLE: A Conditional Point Cloud Diffusion Model for Deformable Liver Motion
Tracking Via a Single Arbitrarily-Angled X-ray Projection
ABSTRACT: Deformable liver motion tracking using a single X-ray projection enables
real-time motion monitoring and treatment intervention. We introduce a
conditional point cloud diffusion model-based framework for accurate and robust
liver motion tracking from arbitrarily angled single X-ray projections
(PCD-Liver), which estimates volumetric liver motion by solving deformable
vector fields (DVFs) of a prior liver surface point cloud based on a single
X-ray image. The model is patient-specific and consists of two main components:
a rigid alignment model to estimate the liver's overall shifts and a
conditional point cloud diffusion model that further corrects for liver surface
deformations. Conditioned on motion-encoded features extracted from a single
X-ray projection via a geometry-informed feature pooling layer, the diffusion
model iteratively solves detailed liver surface DVFs in a projection
angle-agnostic manner. The liver surface motion estimated by PCD-Liver serves
as a boundary condition for a U-Net-based biomechanical model to infer internal
liver motion and localize liver tumors. A dataset of ten liver cancer patients
was used for evaluation. The accuracy of liver point cloud motion estimation
was assessed using root mean square error (RMSE) and 95th-percentile Hausdorff
distance (HD95), while liver tumor localization error was quantified using
center-of-mass error (COME). The mean (standard deviation) RMSE, HD95, and COME
of the prior liver or tumor before motion estimation were 8.86(1.51) mm,
10.88(2.56) mm, and 9.41(3.08) mm, respectively. After PCD-Liver motion
estimation, the corresponding values improved to 3.59(0.28) mm, 4.29(0.62) mm,
and 3.45(0.96) mm. Under highly noisy conditions, PCD-Liver maintained stable
performance. This study presents an accurate and robust framework for
deformable liver motion estimation and tumor localization in image-guided
radiotherapy.
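The three point-cloud error metrics named above (RMSE, HD95, COME) can be implemented generically as follows; point correspondence handling in the actual study may differ, and the random clouds are placeholders.

import numpy as np
from scipy.spatial.distance import cdist

def rmse(pred, gt):                    # paired points, shape (N, 3), in mm
    return float(np.sqrt(((pred - gt) ** 2).sum(axis=1).mean()))

def hd95(pred, gt):                    # 95th-percentile symmetric Hausdorff
    d = cdist(pred, gt)
    return float(max(np.percentile(d.min(axis=1), 95),
                     np.percentile(d.min(axis=0), 95)))

def come(pred, gt):                    # center-of-mass error
    return float(np.linalg.norm(pred.mean(axis=0) - gt.mean(axis=0)))

p, g = np.random.rand(200, 3) * 50, np.random.rand(200, 3) * 50
print(rmse(p, g), hd95(p, g), come(p, g))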
|
2503.09994 | Yunxiao Wang | Yunxiao Wang, Meng Liu, Rui Shao, Haoyu Zhang, Bin Wen, Fan Yang,
Tingting Gao, Di Zhang, Liqiang Nie | TIME: Temporal-sensitive Multi-dimensional Instruction Tuning and
Benchmarking for Video-LLMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video large language models have achieved remarkable performance in tasks
such as video question answering; however, their temporal understanding remains
suboptimal. To address this limitation, we curate a dedicated instruction
fine-tuning dataset that focuses on enhancing temporal comprehension across
five key dimensions. In order to reduce reliance on costly temporal
annotations, we introduce a multi-task prompt fine-tuning approach that
seamlessly integrates temporal-sensitive tasks into existing instruction
datasets without requiring additional annotations. Furthermore, we develop a
novel benchmark for temporal-sensitive video understanding that not only fills
the gaps in dimension coverage left by existing benchmarks but also rigorously
filters out potential shortcuts, ensuring a more accurate evaluation. Extensive
experimental results demonstrate that our approach significantly enhances the
temporal understanding of video-LLMs while avoiding reliance on shortcuts.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 03:05:11 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Yunxiao",
""
],
[
"Liu",
"Meng",
""
],
[
"Shao",
"Rui",
""
],
[
"Zhang",
"Haoyu",
""
],
[
"Wen",
"Bin",
""
],
[
"Yang",
"Fan",
""
],
[
"Gao",
"Tingting",
""
],
[
"Zhang",
"Di",
""
... | TITLE: TIME: Temporal-sensitive Multi-dimensional Instruction Tuning and
Benchmarking for Video-LLMs
ABSTRACT: Video large language models have achieved remarkable performance in tasks
such as video question answering; however, their temporal understanding remains
suboptimal. To address this limitation, we curate a dedicated instruction
fine-tuning dataset that focuses on enhancing temporal comprehension across
five key dimensions. In order to reduce reliance on costly temporal
annotations, we introduce a multi-task prompt fine-tuning approach that
seamlessly integrates temporal-sensitive tasks into existing instruction
datasets without requiring additional annotations. Furthermore, we develop a
novel benchmark for temporal-sensitive video understanding that not only fills
the gaps in dimension coverage left by existing benchmarks but also rigorously
filters out potential shortcuts, ensuring a more accurate evaluation. Extensive
experimental results demonstrate that our approach significantly enhances the
temporal understanding of video-LLMs while avoiding reliance on shortcuts.
|
2503.10009 | Bowen Zhang | Bowen Zhang, Pengcheng Luo | OR-LLM-Agent: Automating Modeling and Solving of Operations Research
Optimization Problem with Reasoning Large Language Model | 11 pages, 6 figures | null | null | null | cs.AI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Operations Research (OR) has been widely applied in various fields such as
resource allocation, production planning, and supply chain management. However,
addressing real-world OR problems requires OR experts to perform mathematical
modeling and programmers to develop solution algorithms. This traditional
method, heavily reliant on experts, is costly and has long development cycles,
severely limiting the widespread adoption of OR techniques. Few have considered
using Artificial Intelligence (AI) to replace professionals to achieve fully
automated solutions for OR problems. We propose OR-LLM-Agent, the first AI
agent that enables end-to-end automation for solving real-world OR problems.
OR-LLM-Agent leverages the Chain-of-Thought (CoT) reasoning capabilities of
Large Language Models (LLMs) to translate natural language problem descriptions
into formal mathematical models and automatically generate Gurobi solver code.
In OR-LLM-Agent, OR-CodeAgent is designed to automate code execution and repair
within a sandbox environment, facilitating the derivation of the final
solution. Due to the lack of dedicated benchmark datasets for evaluating the
automated solving of OR problems, we construct a benchmark dataset comprising
83 real-world OR problems described in natural language. We conduct comparative
experiments with state-of-the-art (SOTA) reasoning LLMs, including GPT-o3-mini,
DeepSeek-R1, and Gemini 2.0 Flash Thinking. The OR-LLM-Agent achieved the
highest pass rate of 100% and the highest solution accuracy of 85%,
demonstrating the feasibility of automated OR problem-solving. Data and code
are publicly available at https://github.com/bwz96sco/or_llm_agent.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 03:40:50 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zhang",
"Bowen",
""
],
[
"Luo",
"Pengcheng",
""
]
] | TITLE: OR-LLM-Agent: Automating Modeling and Solving of Operations Research
Optimization Problem with Reasoning Large Language Model
ABSTRACT: Operations Research (OR) has been widely applied in various fields such as
resource allocation, production planning, and supply chain management. However,
addressing real-world OR problems requires OR experts to perform mathematical
modeling and programmers to develop solution algorithms. This traditional
method, heavily reliant on experts, is costly and has long development cycles,
severely limiting the widespread adoption of OR techniques. Few have considered
using Artificial Intelligence (AI) to replace professionals to achieve fully
automated solutions for OR problems. We propose OR-LLM-Agent, the first AI
agent that enables end-to-end automation for solving real-world OR problems.
OR-LLM-Agent leverages the Chain-of-Thought (CoT) reasoning capabilities of
Large Language Models (LLMs) to translate natural language problem descriptions
into formal mathematical models and automatically generate Gurobi solver code.
In OR-LLM-Agent, OR-CodeAgent is designed to automate code execution and repair
within a sandbox environment, facilitating the derivation of the final
solution. Due to the lack of dedicated benchmark datasets for evaluating the
automated solving of OR problems, we construct a benchmark dataset comprising
83 real-world OR problems described in natural language. We conduct comparative
experiments with state-of-the-art (SOTA) reasoning LLMs, including GPT-o3-mini,
DeepSeek-R1, and Gemini 2.0 Flash Thinking. The OR-LLM-Agent achieved the
highest pass rate of 100% and the highest solution accuracy of 85%,
demonstrating the feasibility of automated OR problem-solving. Data and code
are publicly available at https://github.com/bwz96sco/or_llm_agent.
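A skeleton of the execute-and-repair loop that OR-CodeAgent is described as performing: run LLM-generated solver code in a subprocess and feed errors back for another attempt. llm_generate() is a hypothetical stand-in for a real LLM call; no specific model API or the authors' sandbox is implied.

import subprocess
import tempfile
from typing import Optional

def llm_generate(problem: str, error: Optional[str] = None) -> str:
    raise NotImplementedError("plug in an LLM client here")  # hypothetical hook

def solve_with_repair(problem: str, max_attempts: int = 3) -> str:
    error = None
    for _ in range(max_attempts):
        code = llm_generate(problem, error)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        run = subprocess.run(["python", f.name], capture_output=True,
                             text=True, timeout=120)
        if run.returncode == 0:
            return run.stdout            # solver output, e.g. from Gurobi
        error = run.stderr               # repair hint for the next attempt
    raise RuntimeError("no working solution within the attempt budget")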
|
2503.10034 | Hao Xiang | Hao Xiang, Zhaoliang Zheng, Xin Xia, Seth Z. Zhao, Letian Gao, Zewei
Zhou, Tianhui Cai, Yun Zhang, Jiaqi Ma | V2X-ReaLO: An Open Online Framework and Dataset for Cooperative
Perception in Reality | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cooperative perception enabled by Vehicle-to-Everything (V2X) communication
holds significant promise for enhancing the perception capabilities of
autonomous vehicles, allowing them to overcome occlusions and extend their
field of view. However, existing research predominantly relies on simulated
environments or static datasets, leaving the feasibility and effectiveness of
V2X cooperative perception especially for intermediate fusion in real-world
scenarios largely unexplored. In this work, we introduce V2X-ReaLO, an open
online cooperative perception framework deployed on real vehicles and smart
infrastructure that integrates early, late, and intermediate fusion methods
within a unified pipeline and provides the first practical demonstration of
online intermediate fusion's feasibility and performance under genuine
real-world conditions. Additionally, we present an open benchmark dataset
specifically designed to assess the performance of online cooperative
perception systems. This new dataset extends the V2X-Real dataset to dynamic,
synchronized ROS bags and provides 25,028 test frames with 6,850 annotated key
frames in challenging urban scenarios. By enabling real-time assessments of
perception accuracy and communication latency under dynamic conditions,
V2X-ReaLO sets a new benchmark for advancing and optimizing cooperative
perception systems in real-world applications. The codes and datasets will be
released to further advance the field.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 04:31:20 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Xiang",
"Hao",
""
],
[
"Zheng",
"Zhaoliang",
""
],
[
"Xia",
"Xin",
""
],
[
"Zhao",
"Seth Z.",
""
],
[
"Gao",
"Letian",
""
],
[
"Zhou",
"Zewei",
""
],
[
"Cai",
"Tianhui",
""
],
[
"Zhang",
"Yun",... | TITLE: V2X-ReaLO: An Open Online Framework and Dataset for Cooperative
Perception in Reality
ABSTRACT: Cooperative perception enabled by Vehicle-to-Everything (V2X) communication
holds significant promise for enhancing the perception capabilities of
autonomous vehicles, allowing them to overcome occlusions and extend their
field of view. However, existing research predominantly relies on simulated
environments or static datasets, leaving the feasibility and effectiveness of
V2X cooperative perception especially for intermediate fusion in real-world
scenarios largely unexplored. In this work, we introduce V2X-ReaLO, an open
online cooperative perception framework deployed on real vehicles and smart
infrastructure that integrates early, late, and intermediate fusion methods
within a unified pipeline and provides the first practical demonstration of
online intermediate fusion's feasibility and performance under genuine
real-world conditions. Additionally, we present an open benchmark dataset
specifically designed to assess the performance of online cooperative
perception systems. This new dataset extends the V2X-Real dataset to dynamic,
synchronized ROS bags and provides 25,028 test frames with 6,850 annotated key
frames in challenging urban scenarios. By enabling real-time assessments of
perception accuracy and communication latency under dynamic conditions,
V2X-ReaLO sets a new benchmark for advancing and optimizing cooperative
perception systems in real-world applications. The codes and datasets will be
released to further advance the field.
|
2503.10040 | Seunghun Lee | Dongik Lee, Valentin Stanev, Xiaohang Zhang, Mijeong Kang, Ichiro
Takeuchi, and Seunghun Lee | Rapid analysis of point-contact Andreev reflection spectra via machine
learning with adaptive data augmentation | 18 pages, 3 figures | null | null | null | cond-mat.supr-con cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Delineating the superconducting order parameters is a pivotal task in
investigating superconductivity for probing pairing mechanisms, as well as
their symmetry and topology. Point-contact Andreev reflection (PCAR)
measurement is a simple yet powerful tool for identifying the order parameters.
The PCAR spectra exhibit significant variations depending on the type of the
order parameter in a superconductor, including its magnitude
($\mathit{\Delta}$), as well as temperature, interfacial quality, Fermi
velocity mismatch, and other factors. The information on the order parameter
can be obtained by finding the combination of these parameters, generating a
theoretical spectrum that fits a measured experimental spectrum. However, due
to the complexity of the spectra and the high dimensionality of parameters,
extracting the fitting parameters is often time-consuming and labor-intensive.
In this study, we employ a convolutional neural network (CNN) algorithm to
create models for rapid and automated analysis of PCAR spectra of various
superconductors with different pairing symmetries (conventional $s$-wave,
chiral $p_x+ip_y$-wave, and $d_{x^2-y^2}$-wave). The training datasets are
generated based on the Blonder-Tinkham-Klapwijk (BTK) theory and further
modified and augmented by selectively incorporating noise and peaks according
to the bias voltages. This approach not only replicates the experimental
spectra but also brings the model's attention to important features within the
spectra. The optimized models provide fitting parameters for experimentally
measured spectra in less than 100 ms per spectrum. Our approaches and findings
pave the way for rapid and automated spectral analysis which will help
accelerate research on superconductors with complex order parameters.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 04:45:38 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Lee",
"Dongik",
""
],
[
"Stanev",
"Valentin",
""
],
[
"Zhang",
"Xiaohang",
""
],
[
"Kang",
"Mijeong",
""
],
[
"Takeuchi",
"Ichiro",
""
],
[
"Lee",
"Seunghun",
""
]
] | TITLE: Rapid analysis of point-contact Andreev reflection spectra via machine
learning with adaptive data augmentation
ABSTRACT: Delineating the superconducting order parameters is a pivotal task in
investigating superconductivity for probing pairing mechanisms, as well as
their symmetry and topology. Point-contact Andreev reflection (PCAR)
measurement is a simple yet powerful tool for identifying the order parameters.
The PCAR spectra exhibit significant variations depending on the type of the
order parameter in a superconductor, including its magnitude
($\mathit{\Delta}$), as well as temperature, interfacial quality, Fermi
velocity mismatch, and other factors. The information on the order parameter
can be obtained by finding the combination of these parameters, generating a
theoretical spectrum that fits a measured experimental spectrum. However, due
to the complexity of the spectra and the high dimensionality of parameters,
extracting the fitting parameters is often time-consuming and labor-intensive.
In this study, we employ a convolutional neural network (CNN) algorithm to
create models for rapid and automated analysis of PCAR spectra of various
superconductors with different pairing symmetries (conventional $s$-wave,
chiral $p_x+ip_y$-wave, and $d_{x^2-y^2}$-wave). The training datasets are
generated based on the Blonder-Tinkham-Klapwijk (BTK) theory and further
modified and augmented by selectively incorporating noise and peaks according
to the bias voltages. This approach not only replicates the experimental
spectra but also brings the model's attention to important features within the
spectra. The optimized models provide fitting parameters for experimentally
measured spectra in less than 100 ms per spectrum. Our approaches and findings
pave the way for rapid and automated spectral analysis which will help
accelerate research on superconductors with complex order parameters.
|
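The PCAR entry above trains a CNN to regress fit parameters directly from conductance spectra. Below is a minimal sketch of that setup in PyTorch, assuming random tensors as stand-ins for BTK-simulated spectra and a toy three-parameter target (e.g., Delta, barrier strength Z, broadening Gamma); the real training data would come from the BTK model plus the paper's noise/peak augmentation.

```python
import torch
import torch.nn as nn

# Toy stand-in data: each "spectrum" is conductance sampled at 256 bias
# voltages; targets are three fit parameters. Real data would be
# BTK-simulated and augmented as described in the abstract.
N_BIAS, N_PARAMS = 256, 3
spectra = torch.rand(512, 1, N_BIAS)
params = torch.rand(512, N_PARAMS)

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8),
    nn.Flatten(),
    nn.Linear(32 * 8, N_PARAMS),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                     # a few toy epochs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(spectra), params)
    loss.backward()
    opt.step()
```

Once trained, a single forward pass replaces the iterative BTK fit, which is what makes the sub-100 ms per-spectrum figure plausible.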
2503.10045 | Yaoting Jiang | Meng Wang, Zi Yang, Ruifeng Zhao, Yaoting Jiang | CPLOYO: A Pulmonary Nodule Detection Model with Multi-Scale Feature
Fusion and Nonlinear Feature Learning | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of Internet of Things (IoT) technology in pulmonary nodule
detection significantly enhances the intelligence and real-time capabilities of
the detection system. Currently, lung nodule detection primarily focuses on the
identification of solid nodules, but different types of lung nodules correspond
to various forms of lung cancer. Multi-type detection contributes to improving
the overall lung cancer detection rate and enhancing the cure rate. To achieve
high sensitivity in nodule detection, targeted improvements were made to the
YOLOv8 model. Firstly, the C2f\_RepViTCAMF module was introduced to augment the
C2f module in the backbone, thereby enhancing detection accuracy for small lung
nodules and achieving a lightweight model design. Secondly, the MSCAF module
was incorporated to reconstruct the feature fusion section of the model,
improving detection accuracy for lung nodules of varying scales. Furthermore,
the KAN network was integrated into the model. By leveraging the KAN network's
powerful nonlinear feature learning capability, detection accuracy for small
lung nodules was further improved, and the model's generalization ability was
enhanced. Tests conducted on the LUNA16 dataset demonstrate that the improved
model outperforms the original model as well as other mainstream models such as
YOLOv9 and RT-DETR across various evaluation metrics.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 04:51:57 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Meng",
""
],
[
"Yang",
"Zi",
""
],
[
"Zhao",
"Ruifeng",
""
],
[
"Jiang",
"Yaoting",
""
]
] | TITLE: CPLOYO: A Pulmonary Nodule Detection Model with Multi-Scale Feature
Fusion and Nonlinear Feature Learning
ABSTRACT: The integration of Internet of Things (IoT) technology in pulmonary nodule
detection significantly enhances the intelligence and real-time capabilities of
the detection system. Currently, lung nodule detection primarily focuses on the
identification of solid nodules, but different types of lung nodules correspond
to various forms of lung cancer. Multi-type detection contributes to improving
the overall lung cancer detection rate and enhancing the cure rate. To achieve
high sensitivity in nodule detection, targeted improvements were made to the
YOLOv8 model. Firstly, the C2f\_RepViTCAMF module was introduced to augment the
C2f module in the backbone, thereby enhancing detection accuracy for small lung
nodules and achieving a lightweight model design. Secondly, the MSCAF module
was incorporated to reconstruct the feature fusion section of the model,
improving detection accuracy for lung nodules of varying scales. Furthermore,
the KAN network was integrated into the model. By leveraging the KAN network's
powerful nonlinear feature learning capability, detection accuracy for small
lung nodules was further improved, and the model's generalization ability was
enhanced. Tests conducted on the LUNA16 dataset demonstrate that the improved
model outperforms the original model as well as other mainstream models such as
YOLOv9 and RT-DETR across various evaluation metrics.
|
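The CPLOYO entry above builds on YOLOv8. The paper's custom C2f_RepViTCAMF, MSCAF, and KAN modules are not public, so the sketch below only shows the stock baseline such work starts from, using the ultralytics package; `luna16_nodules.yaml` is a hypothetical dataset config you would write yourself (2D images rendered from LUNA16 CT slices with nodule boxes).

```python
# Baseline YOLOv8 fine-tuning; the paper's architectural modifications
# would be patched into the model definition before this step.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                 # stock small model
model.train(data="luna16_nodules.yaml", epochs=100, imgsz=640)
metrics = model.val()                      # mAP and related metrics
```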
2503.10052 | Minjun Kim | Minje Kim, Minjun Kim, Xu Yang | DTA: Dual Temporal-channel-wise Attention for Spiking Neural Networks | Accepted by IEEE/CVF Winter Conference on Applications of Computer
Vision (WACV) 2025 | null | null | null | cs.CV cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) present a more energy-efficient alternative to
Artificial Neural Networks (ANNs) by harnessing spatio-temporal dynamics and
event-driven spikes. Effective utilization of temporal information is crucial
for SNNs, leading to the exploration of attention mechanisms to enhance this
capability. Conventional attention operations either apply an identical
operation or employ non-identical operations across target dimensions. We
identify that
these approaches provide distinct perspectives on temporal information. To
leverage the strengths of both operations, we propose a novel Dual
Temporal-channel-wise Attention (DTA) mechanism that integrates both
identical/non-identical attention strategies. To the best of our knowledge,
this is the first attempt to concentrate on both the correlation and dependency
of temporal-channel using both identical and non-identical attention
operations. Experimental results demonstrate that the DTA mechanism achieves
state-of-the-art performance on both static datasets (CIFAR10, CIFAR100,
ImageNet-1k) and a dynamic dataset (CIFAR10-DVS), elevating spike representation
and capturing complex temporal-channel relationships. We open-source our code:
https://github.com/MnJnKIM/DTA-SNN.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 05:09:48 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Kim",
"Minje",
""
],
[
"Kim",
"Minjun",
""
],
[
"Yang",
"Xu",
""
]
] | TITLE: DTA: Dual Temporal-channel-wise Attention for Spiking Neural Networks
ABSTRACT: Spiking Neural Networks (SNNs) present a more energy-efficient alternative to
Artificial Neural Networks (ANNs) by harnessing spatio-temporal dynamics and
event-driven spikes. Effective utilization of temporal information is crucial
for SNNs, leading to the exploration of attention mechanisms to enhance this
capability. Conventional attention operations either apply an identical
operation or employ non-identical operations across target dimensions. We
identify that
these approaches provide distinct perspectives on temporal information. To
leverage the strengths of both operations, we propose a novel Dual
Temporal-channel-wise Attention (DTA) mechanism that integrates both
identical/non-identical attention strategies. To the best of our knowledge,
this is the first attempt to concentrate on both the correlation and dependency
of temporal-channel using both identical and non-identical attention
operations. Experimental results demonstrate that the DTA mechanism achieves
state-of-the-art performance on both static datasets (CIFAR10, CIFAR100,
ImageNet-1k) and a dynamic dataset (CIFAR10-DVS), elevating spike representation
and capturing complex temporal-channel relationships. We open-source our code:
https://github.com/MnJnKIM/DTA-SNN.
|
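The DTA entry above combines an identical-operation branch and a non-identical-operation branch over the temporal and channel dimensions. The toy module below is one possible reading of that idea on a (batch, time, channel) tensor: one branch scores both dimensions with the *same* linear gate, the other with *different* per-dimension gates, and the two gates multiply the input. This is an illustrative sketch, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ToyDualAttention(nn.Module):
    def __init__(self, T, C):
        super().__init__()
        self.shared = nn.Linear(1, 1)          # identical op for both dims
        self.t_branch = nn.Linear(T, T)        # non-identical: per-dim ops
        self.c_branch = nn.Linear(C, C)

    def forward(self, x):                      # x: (B, T, C)
        t_stat = x.mean(dim=2, keepdim=True)   # (B, T, 1) temporal summary
        c_stat = x.mean(dim=1, keepdim=True)   # (B, 1, C) channel summary
        ident = torch.sigmoid(self.shared(t_stat)) * \
                torch.sigmoid(self.shared(c_stat.transpose(1, 2)).transpose(1, 2))
        nonid = torch.sigmoid(self.t_branch(t_stat.squeeze(2))).unsqueeze(2) * \
                torch.sigmoid(self.c_branch(c_stat.squeeze(1))).unsqueeze(1)
        return x * ident * nonid

x = torch.rand(4, 10, 32)                      # 10 timesteps, 32 channels
print(ToyDualAttention(10, 32)(x).shape)       # torch.Size([4, 10, 32])
```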
2503.10055 | Donghyun Kim | Donghyun Kim, Hyunah Ko, Chanyoung Kim, Seong Jae Hwang | Fourier Decomposition for Explicit Representation of 3D Point Cloud
Attributes | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While 3D point clouds are widely utilized across various vision applications,
their irregular and sparse nature makes them challenging to handle. In response,
numerous encoding approaches have been proposed to capture the rich semantic
information of point clouds. Yet, a critical limitation persists: a lack of
consideration for colored point clouds, which are more capable 3D
representations as they contain diverse attributes: color and geometry. While
existing methods handle these attributes separately on a per-point basis, this
leads to a limited receptive field and restricted ability to capture
relationships across multiple points. To address this, we pioneer a point cloud
encoding methodology that leverages 3D Fourier decomposition to disentangle
color and geometric features while extending the receptive field through
spectral-domain operations. Our analysis confirms that this encoding approach
effectively separates feature components, where the amplitude uniquely captures
color attributes and the phase encodes geometric structure, thereby enabling
independent learning and utilization of both attributes. Furthermore, the
spectral-domain properties of these components naturally aggregate local
features while considering multiple points' information. We validate our point
cloud encoding approach on point cloud classification and style transfer tasks,
achieving state-of-the-art results on the DensePoint dataset with improvements
via a proposed amplitude-based data augmentation strategy.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 05:13:40 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Kim",
"Donghyun",
""
],
[
"Ko",
"Hyunah",
""
],
[
"Kim",
"Chanyoung",
""
],
[
"Hwang",
"Seong Jae",
""
]
] | TITLE: Fourier Decomposition for Explicit Representation of 3D Point Cloud
Attributes
ABSTRACT: While 3D point clouds are widely utilized across various vision applications,
their irregular and sparse nature makes them challenging to handle. In response,
numerous encoding approaches have been proposed to capture the rich semantic
information of point clouds. Yet, a critical limitation persists: a lack of
consideration for colored point clouds, which are more capable 3D
representations as they contain diverse attributes: color and geometry. While
existing methods handle these attributes separately on a per-point basis, this
leads to a limited receptive field and restricted ability to capture
relationships across multiple points. To address this, we pioneer a point cloud
encoding methodology that leverages 3D Fourier decomposition to disentangle
color and geometric features while extending the receptive field through
spectral-domain operations. Our analysis confirms that this encoding approach
effectively separates feature components, where the amplitude uniquely captures
color attributes and the phase encodes geometric structure, thereby enabling
independent learning and utilization of both attributes. Furthermore, the
spectral-domain properties of these components naturally aggregate local
features while considering multiple points' information. We validate our point
cloud encoding approach on point cloud classification and style transfer tasks,
achieving state-of-the-art results on the DensePoint dataset with improvements
via a proposed amplitude-based data augmentation strategy.
|
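The Fourier-decomposition entry above rests on splitting a 3D transform into amplitude (color) and phase (geometry). The sketch below shows the bare decomposition on one color channel of a colored point cloud rasterized onto a 32^3 voxel grid; the voxelization step is an assumption for illustration, not the paper's exact pipeline.

```python
import numpy as np

# Rasterize one color attribute of a random colored point cloud.
grid = np.zeros((32, 32, 32))
pts = np.random.rand(500, 3)                  # xyz in [0, 1)
red = np.random.rand(500)                     # one color attribute per point
idx = np.minimum((pts * 32).astype(int), 31)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = red

# 3D Fourier transform split into amplitude and phase components.
spec = np.fft.fftn(grid)
amplitude, phase = np.abs(spec), np.angle(spec)

# The split is lossless: recombining amplitude and phase recovers the
# original voxel grid up to floating-point error.
recon = np.fft.ifftn(amplitude * np.exp(1j * phase)).real
print(np.allclose(recon, grid))               # True
```

Because the split is exact, the two components can be learned and manipulated independently, which is what enables the paper's amplitude-based augmentation.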
2503.10057 | Ho Hin Lee | Ho Hin Lee, Alberto Santamaria-Pang, Jameson Merkov, Matthew Lungren,
Ivan Tarapov | Multi-Modal Mamba Modeling for Survival Prediction (M4Survive): Adapting
Joint Foundation Model Representations | 10 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate survival prediction in oncology requires integrating diverse imaging
modalities to capture the complex interplay of tumor biology. Traditional
single-modality approaches often fail to leverage the complementary insights
provided by radiological and pathological assessments. In this work, we
introduce M4Survive (Multi-Modal Mamba Modeling for Survival Prediction), a
novel framework that learns joint foundation model representations using
efficient adapter networks. Our approach dynamically fuses heterogeneous
embeddings from a foundation model repository (e.g., MedImageInsight,
BiomedCLIP, Prov-GigaPath, UNI2-h), creating a correlated latent space
optimized for survival risk estimation. By leveraging Mamba-based adapters,
M4Survive enables efficient multi-modal learning while preserving computational
efficiency. Experimental evaluations on benchmark datasets demonstrate that our
approach outperforms both unimodal and traditional static multi-modal baselines
in survival prediction accuracy. This work underscores the potential of
foundation model-driven multi-modal fusion in advancing precision oncology and
predictive analytics.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 05:18:32 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Lee",
"Ho Hin",
""
],
[
"Santamaria-Pang",
"Alberto",
""
],
[
"Merkov",
"Jameson",
""
],
[
"Lungren",
"Matthew",
""
],
[
"Tarapov",
"Ivan",
""
]
] | TITLE: Multi-Modal Mamba Modeling for Survival Prediction (M4Survive): Adapting
Joint Foundation Model Representations
ABSTRACT: Accurate survival prediction in oncology requires integrating diverse imaging
modalities to capture the complex interplay of tumor biology. Traditional
single-modality approaches often fail to leverage the complementary insights
provided by radiological and pathological assessments. In this work, we
introduce M4Survive (Multi-Modal Mamba Modeling for Survival Prediction), a
novel framework that learns joint foundation model representations using
efficient adapter networks. Our approach dynamically fuses heterogeneous
embeddings from a foundation model repository (e.g., MedImageInsight,
BiomedCLIP, Prov-GigaPath, UNI2-h), creating a correlated latent space
optimized for survival risk estimation. By leveraging Mamba-based adapters,
M4Survive enables efficient multi-modal learning while preserving computational
efficiency. Experimental evaluations on benchmark datasets demonstrate that our
approach outperforms both unimodal and traditional static multi-modal baselines
in survival prediction accuracy. This work underscores the potential of
foundation model-driven multi-modal fusion in advancing precision oncology and
predictive analytics.
|
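The M4Survive entry above fuses heterogeneous foundation-model embeddings through adapters into one latent space for risk estimation. A minimal sketch of that shape, with plain linear adapters standing in for the paper's Mamba-based ones and a scalar head in place of a full survival objective:

```python
import torch
import torch.nn as nn

class ToyAdapterFusion(nn.Module):
    """Per-modality adapters map embeddings of different widths into a
    shared latent space; a small head produces a scalar risk score."""
    def __init__(self, dims, latent=256):
        super().__init__()
        self.adapters = nn.ModuleList(nn.Linear(d, latent) for d in dims)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(latent, 1))

    def forward(self, embeddings):             # list of (B, d_i) tensors
        z = sum(a(e) for a, e in zip(self.adapters, embeddings))
        return self.head(z).squeeze(-1)        # (B,) risk scores

# e.g., a radiology embedding (1024-d) and a pathology embedding (1536-d)
model = ToyAdapterFusion([1024, 1536])
risk = model([torch.rand(8, 1024), torch.rand(8, 1536)])
print(risk.shape)                              # torch.Size([8])
```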
2503.10071 | Sunzida Siddique | Mohd Ariful Haque, Justin Williams, Sunzida Siddique, Md. Hujaifa
Islam, Hasmot Ali, Kishor Datta Gupta, and Roy George | Advanced Tool Learning and Selection System (ATLASS): A Closed-Loop
Framework Using LLM | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The combination of LLM agents with external tools enables models to solve
complex tasks beyond their knowledge base. Human-designed tools are inflexible
and restricted to solutions within the scope of pre-existing tools created by
experts. To address this problem, we propose ATLASS, an advanced tool learning
and selection system designed as a closed-loop framework. It enables the LLM to
solve problems by dynamically generating external tools on demand. In this
framework, agents play a crucial role in orchestrating tool selection,
execution, and refinement, ensuring adaptive problem-solving capabilities. The
operation of ATLASS follows three phases: The first phase, Understanding Tool
Requirements, involves the Agents determining whether tools are required and
specifying their functionality; the second phase, Tool Retrieval/Generation,
involves the Agents retrieving or generating tools based on their availability;
and the third phase, Task Solving, involves combining all the component tools
necessary to complete the initial task. The Tool Dataset stores the generated
tools, ensuring reusability and minimizing inference cost. Current LLM-based
tool generation systems have difficulty creating complex tools that need APIs
or external packages. In ATLASS, we solve the problem by automatically setting
up the environment, fetching relevant API documentation online, and using a
Python interpreter to create a reliable, versatile tool that works in a wider
range of situations. OpenAI GPT-4.0 is used as the LLM agent, and safety and
ethical concerns are handled through human feedback before executing generated
code. By addressing the limitations of predefined toolsets and enhancing
adaptability, ATLASS serves as a real-world solution that empowers users with
dynamically generated tools for complex problem-solving.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 05:39:00 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Haque",
"Mohd Ariful",
""
],
[
"Williams",
"Justin",
""
],
[
"Siddique",
"Sunzida",
""
],
[
"Islam",
"Md. Hujaifa",
""
],
[
"Ali",
"Hasmot",
""
],
[
"Gupta",
"Kishor Datta",
""
],
[
"George",
"Roy",
""
]... | TITLE: Advanced Tool Learning and Selection System (ATLASS): A Closed-Loop
Framework Using LLM
ABSTRACT: The combination of LLM agents with external tools enables models to solve
complex tasks beyond their knowledge base. Human-designed tools are inflexible
and restricted to solutions within the scope of pre-existing tools created by
experts. To address this problem, we propose ATLASS, an advanced tool learning
and selection system designed as a closed-loop framework. It enables the LLM to
solve problems by dynamically generating external tools on demand. In this
framework, agents play a crucial role in orchestrating tool selection,
execution, and refinement, ensuring adaptive problem-solving capabilities. The
operation of ATLASS follows three phases: The first phase, Understanding Tool
Requirements, involves the Agents determining whether tools are required and
specifying their functionality; the second phase, Tool Retrieval/Generation,
involves the Agents retrieving or generating tools based on their availability;
and the third phase, Task Solving, involves combining all the component tools
necessary to complete the initial task. The Tool Dataset stores the generated
tools, ensuring reusability and minimizing inference cost. Current LLM-based
tool generation systems have difficulty creating complex tools that need APIs
or external packages. In ATLASS, we solve the problem by automatically setting
up the environment, fetching relevant API documentation online, and using a
Python interpreter to create a reliable, versatile tool that works in a wider
range of situations. OpenAI GPT-4.0 is used as the LLM agent, and safety and
ethical concerns are handled through human feedback before executing generated
code. By addressing the limitations of predefined toolsets and enhancing
adaptability, ATLASS serves as a real-world solution that empowers users with
dynamically generated tools for complex problem-solving.
|
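The ATLASS entry above describes a three-phase closed loop. The control-flow skeleton below makes the loop concrete; every helper is a hypothetical stub (the LLM calls, environment setup, and API fetching are elided), not the authors' API.

```python
def understand(task):
    """Phase 1: decide whether tools are needed and specify them."""
    return [{"name": "csv_summary", "spec": "summarize a CSV file"}]

def retrieve_or_generate(spec, tool_dataset):
    """Phase 2: reuse a stored tool if available, else generate one."""
    if spec["name"] in tool_dataset:
        return tool_dataset[spec["name"]]
    code = f"def {spec['name']}(path): ...  # LLM-generated body"
    tool_dataset[spec["name"]] = code          # persist for reuse
    return code

def solve(task, tools):
    """Phase 3: compose the tools to finish the original task."""
    return f"solved {task!r} with {len(tools)} tool(s)"

tool_dataset = {}                              # the persisted Tool Dataset
task = "report column means of experiments.csv"
tools = [retrieve_or_generate(s, tool_dataset) for s in understand(task)]
print(solve(task, tools))
```

The Tool Dataset dictionary is what turns the loop into a closed one: tools generated for one task are retrieved, not regenerated, the next time.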
2503.10092 | Xintong Dong | Xintong Dong, Wenshuo Yu, Jun Lin, Zhenbo Guo, Hongzhou Wang, Jianhao
Yang | Light-weighted foundation model for seismic data processing based on
representative and non-redundant pre-training dataset | null | null | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the fields of computer vision (CV) and remote sensing (RS), foundational
models typically follow the "big data + large model parameters" paradigm.
However, the application of this strategy in seismic data processing faces
several challenges: seismic data is difficult to obtain and the scarcity of
publicly available datasets makes it difficult to construct large-scale
datasets. Additionally, the high computational cost associated with a large
number of model parameters restricts widespread research in this domain.
Therefore, we propose a lightweight seismic processing foundational model
paradigm (SPFM), which aims to overcome the limitations of traditional methods
by data engineering and network architecture innovation. Specifically, we
propose an innovative dataset construction strategy that generates more seismic
data by data augmentation techniques, including collecting publicly available
field data and using generative diffusion models (GDM) for data enhancement.
Furthermore, we optimize the data distribution by employing dimensionality
reduction, cluster analysis, and stratified sampling methods, reducing
redundant information while preserving important seismic features, thus
constructing a comprehensive dataset. In terms of network architecture design,
we introduce the selective structured state-space model (Mamba) structure,
which effectively captures global features of seismic data and alleviates the
quadratic growth of computational complexity inherent in Transformer-based
models, thereby improving computational efficiency. This model, pre-trained
with only four A800 GPUs, outperforms traditional methods across multiple
tasks, including denoising, interpolation, frequency-band extrapolation, and
resolution enhancement. The lightweight paradigm provides a solution for
seismic data processing, advancing the generalization and accessibility of
seismic data processing.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 06:40:33 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Dong",
"Xintong",
""
],
[
"Yu",
"Wenshuo",
""
],
[
"Lin",
"Jun",
""
],
[
"Guo",
"Zhenbo",
""
],
[
"Wang",
"Hongzhou",
""
],
[
"Yang",
"Jianhao",
""
]
] | TITLE: Light-weighted foundation model for seismic data processing based on
representative and non-redundant pre-training dataset
ABSTRACT: In the fields of computer vision (CV) and remote sensing (RS), foundational
models typically follow the "big data + large model parameters" paradigm.
However, the application of this strategy in seismic data processing faces
several challenges: seismic data is difficult to obtain and the scarcity of
publicly available datasets makes it difficult to construct large-scale
datasets. Additionally, the high computational cost associated with a large
number of model parameters restricts widespread research in this domain.
Therefore, we propose a lightweight seismic processing foundational model
paradigm (SPFM), which aims to overcome the limitations of traditional methods
by data engineering and network architecture innovation. Specifically, we
propose an innovative dataset construction strategy that generates more seismic
data by data augmentation techniques, including collecting publicly available
field data and using generative diffusion models (GDM) for data enhancement.
Furthermore, we optimize the data distribution by employing dimensionality
reduction, cluster analysis, and stratified sampling methods, reducing
redundant information while preserving important seismic features, thus
constructing a comprehensive dataset. In terms of network architecture design,
we introduce the selective structured state-space model (Mamba) structure,
which effectively captures global features of seismic data and alleviates the
quadratic growth of computational complexity inherent in Transformer-based
models, thereby improving computational efficiency. This model, pre-trained
with only four A800 GPUs, outperforms traditional methods across multiple
tasks, including denoising, interpolation, frequency-band extrapolation, and
resolution enhancement. The lightweight paradigm provides a solution for
seismic data processing, advancing the generalization and accessibility of
seismic data processing.
|
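The seismic-foundation-model entry above reduces redundancy through dimensionality reduction, clustering, and stratified sampling. A minimal sketch of that recipe with scikit-learn, using random features as stand-ins for real seismic-patch descriptors:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 128))      # fake patch descriptors

# Project, cluster, then sample a fixed quota per cluster so the kept
# subset covers all modes without keeping near-duplicates.
z = PCA(n_components=16).fit_transform(features)
labels = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(z)

keep, per_cluster = [], 40                     # stratified quota
for c in range(50):
    members = np.flatnonzero(labels == c)
    keep.extend(rng.choice(members, size=min(per_cluster, members.size),
                           replace=False))
print(len(keep))                               # ~2000 representative patches
```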
2503.10115 | Hanlin Pan | Hanlin Pan, Kunpeng Liu, Wanfu Gao | Reconsidering Feature Structure Information and Latent Space Alignment
in Partial Multi-label Feature Selection | 9 pages, 6 figures, accepted at AAAI 25 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of partial multi-label feature selection is to select the most
representative feature subset, where the data comes from partial multi-label
datasets that have label ambiguity issues. For label disambiguation, previous
methods mainly focus on utilizing the information inside the labels and the
relationship between the labels and features. However, the information existing
in the feature space is rarely considered, especially in partial multi-label
scenarios where the noise is considered to be concentrated in the label space
while the feature information is correct. This paper proposes a method based on
latent space alignment, which uses the information mined in feature space to
disambiguate in latent space through the structural consistency between labels
and features. In addition, previous methods overestimate the consistency of
features and labels in the latent space after convergence. We comprehensively
consider the similarity of latent space projections to feature space and label
space, and propose a new feature selection term. This method also significantly
improves the positive label identification ability of the selected features.
Comprehensive experiments demonstrate the superiority of the proposed method.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 07:21:29 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Pan",
"Hanlin",
""
],
[
"Liu",
"Kunpeng",
""
],
[
"Gao",
"Wanfu",
""
]
] | TITLE: Reconsidering Feature Structure Information and Latent Space Alignment
in Partial Multi-label Feature Selection
ABSTRACT: The purpose of partial multi-label feature selection is to select the most
representative feature subset, where the data comes from partial multi-label
datasets that have label ambiguity issues. For label disambiguation, previous
methods mainly focus on utilizing the information inside the labels and the
relationship between the labels and features. However, the information existing
in the feature space is rarely considered, especially in partial multi-label
scenarios where the noise is considered to be concentrated in the label space
while the feature information is correct. This paper proposes a method based on
latent space alignment, which uses the information mined in feature space to
disambiguate in latent space through the structural consistency between labels
and features. In addition, previous methods overestimate the consistency of
features and labels in the latent space after convergence. We comprehensively
consider the similarity of latent space projections to feature space and label
space, and propose a new feature selection term. This method also significantly
improves the positive label identification ability of the selected features.
Comprehensive experiments demonstrate the superiority of the proposed method.
|
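The feature-selection entry above scores features through a learned projection into a latent space. The snippet below shows only the generic final step such embedded methods share: given a learned weight matrix W (features x latent) from any latent-alignment objective, rank features by their row norms. W is random here, and the paper's actual objective is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 20))                 # 100 features, 20 latent dims

scores = np.linalg.norm(W, axis=1)             # one relevance score per feature
top_k = np.argsort(scores)[::-1][:10]          # indices of the 10 best features
print(top_k)
```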
2503.10120 | Bingchen Li | Bingchen Li, Xin Li, Yiting Lu, Zhibo Chen | Hybrid Agents for Image Restoration | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing Image Restoration (IR) studies typically focus on task-specific or
universal modes individually, relying on the mode selection of users and
lacking the cooperation between multiple task-specific/universal restoration
modes. This leads to insufficient interaction for non-expert users and
limits their restoration capability for complicated real-world applications. In
this work, we present HybridAgent, intending to incorporate multiple
restoration modes into a unified image restoration model and achieve
intelligent and efficient user interaction through our proposed hybrid agents.
Concretely, we propose the hybrid rule of fast, slow, and feedback restoration
agents. Here, the slow restoration agent optimizes the powerful multimodal
large language model (MLLM) with our proposed instruction-tuning dataset to
identify degradations within images with ambiguous user prompts and invokes
proper restoration tools accordingly. The fast restoration agent is designed
based on a lightweight large language model (LLM) via in-context learning to
understand the user prompts with simple and clear requirements, which can
obviate the unnecessary time/resource costs of MLLM. Moreover, we introduce the
mixed distortion removal mode for our HybridAgents, which is crucial but not
considered in previous agent-based works. It can effectively prevent the error
propagation of step-by-step image restoration and largely improve the
efficiency of the agent system. We validate the effectiveness of HybridAgent
with both synthetic and real-world IR tasks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 07:28:33 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Bingchen",
""
],
[
"Li",
"Xin",
""
],
[
"Lu",
"Yiting",
""
],
[
"Chen",
"Zhibo",
""
]
] | TITLE: Hybrid Agents for Image Restoration
ABSTRACT: Existing Image Restoration (IR) studies typically focus on task-specific or
universal modes individually, relying on the mode selection of users and
lacking the cooperation between multiple task-specific/universal restoration
modes. This leads to insufficient interaction for non-expert users and
limits their restoration capability for complicated real-world applications. In
this work, we present HybridAgent, intending to incorporate multiple
restoration modes into a unified image restoration model and achieve
intelligent and efficient user interaction through our proposed hybrid agents.
Concretely, we propose the hybrid rule of fast, slow, and feedback restoration
agents. Here, the slow restoration agent optimizes the powerful multimodal
large language model (MLLM) with our proposed instruction-tuning dataset to
identify degradations within images with ambiguous user prompts and invokes
proper restoration tools accordingly. The fast restoration agent is designed
based on a lightweight large language model (LLM) via in-context learning to
understand the user prompts with simple and clear requirements, which can
obviate the unnecessary time/resource costs of MLLM. Moreover, we introduce the
mixed distortion removal mode for our HybridAgents, which is crucial but not
considered in previous agent-based works. It can effectively prevent the error
propagation of step-by-step image restoration and largely improve the
efficiency of the agent system. We validate the effectiveness of HybridAgent
with both synthetic and real-world IR tasks.
|
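The HybridAgent entry above routes clear prompts to a fast lightweight-LLM agent and ambiguous ones to a slow MLLM agent. The toy router below illustrates the dispatch decision only; the keyword heuristic and both agent names are assumptions for illustration, since the paper's routing is learned, not rule-based.

```python
# Toy fast/slow dispatch on prompt clarity.
VAGUE_MARKERS = ("somehow", "fix this", "make it better", "weird")

def route(prompt: str) -> str:
    vague = any(m in prompt.lower() for m in VAGUE_MARKERS)
    return "slow_mllm_agent" if vague else "fast_llm_agent"

print(route("Remove gaussian noise, sigma=25"))   # fast_llm_agent
print(route("This photo looks weird, fix this"))  # slow_mllm_agent
```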
2503.10129 | Namal Jayasuriya | Namal Jayasuriya, Yi Guo, Wen Hu, Oula Ghannoum | Deep Learning-Based Direct Leaf Area Estimation using Two RGBD Datasets
for Model Development | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Estimation of a single leaf area can be a measure of crop growth and a
phenotypic trait to breed new varieties. It has also been used to measure leaf
area index and total leaf area. Some studies have used hand-held cameras, image
processing, 3D reconstruction, and unsupervised learning-based methods to
estimate the leaf area in plant images. Deep learning works well for object
detection and segmentation tasks; however, direct area estimation of objects
has not been explored. This work investigates deep learning-based leaf area
estimation, for RGBD images taken using a mobile camera setup in real-world
scenarios. A dataset for attached leaves captured with a top angle view and a
dataset for detached single leaves were collected for model development and
testing. First, image processing-based area estimation was tested on manually
segmented leaves. Then a Mask R-CNN-based model was investigated, and modified
to accept RGBD images and to estimate the leaf area. The detached-leaf data set
was then mixed with the attached-leaf plant data set to estimate the single
leaf area for plant images, and another network design with two backbones was
proposed: one for segmentation and the other for area estimation. Instead of
trying all possibilities or random values, an agile approach was used in
hyperparameter tuning. The final model was cross-validated with 5 folds and
tested with two unseen datasets: detached and attached leaves. The F1 score
with 90% IoA for the segmentation result on unseen detached-leaf data was 1.0,
while the R-squared of area estimation was 0.81. For unseen plant data
segmentation, the F1 score with 90% IoA was 0.59, while the R-squared score was
0.57. The research suggests using attached leaves with ground truth area to
improve the results.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 07:39:09 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Jayasuriya",
"Namal",
""
],
[
"Guo",
"Yi",
""
],
[
"Hu",
"Wen",
""
],
[
"Ghannoum",
"Oula",
""
]
] | TITLE: Deep Learning-Based Direct Leaf Area Estimation using Two RGBD Datasets
for Model Development
ABSTRACT: Estimation of a single leaf area can be a measure of crop growth and a
phenotypic trait to breed new varieties. It has also been used to measure leaf
area index and total leaf area. Some studies have used hand-held cameras, image
processing, 3D reconstruction, and unsupervised learning-based methods to
estimate the leaf area in plant images. Deep learning works well for object
detection and segmentation tasks; however, direct area estimation of objects
has not been explored. This work investigates deep learning-based leaf area
estimation, for RGBD images taken using a mobile camera setup in real-world
scenarios. A dataset for attached leaves captured with a top angle view and a
dataset for detached single leaves were collected for model development and
testing. First, image processing-based area estimation was tested on manually
segmented leaves. Then a Mask R-CNN-based model was investigated, and modified
to accept RGBD images and to estimate the leaf area. The detached-leaf data set
was then mixed with the attached-leaf plant data set to estimate the single
leaf area for plant images, and another network design with two backbones was
proposed: one for segmentation and the other for area estimation. Instead of
trying all possibilities or random values, an agile approach was used in
hyperparameter tuning. The final model was cross-validated with 5 folds and
tested with two unseen datasets: detached and attached leaves. The F1 score
with 90% IoA for the segmentation result on unseen detached-leaf data was 1.0,
while the R-squared of area estimation was 0.81. For unseen plant data
segmentation, the F1 score with 90% IoA was 0.59, while the R-squared score was
0.57. The research suggests using attached leaves with ground truth area to
improve the results.
|
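The leaf-area entry above first tests classical image-processing area estimation before the deep models. Under a pinhole camera model, a pixel at depth Z covers roughly (Z / f)^2 of real-world area (f = focal length in pixels), so leaf area is the sum of that quantity over the segmentation mask. A minimal sketch with synthetic inputs standing in for a real RGBD frame:

```python
import numpy as np

H, W, f = 480, 640, 600.0                      # image size, focal length (px)
depth = np.full((H, W), 0.5)                   # aligned depth map: 0.5 m (toy)
mask = np.zeros((H, W), dtype=bool)
mask[100:300, 200:400] = True                  # fake leaf segmentation

pixel_area = (depth / f) ** 2                  # m^2 imaged by each pixel
leaf_area_m2 = pixel_area[mask].sum()
print(f"{leaf_area_m2 * 1e4:.1f} cm^2")        # ~277.8 cm^2 for this toy case
```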
2503.10149 | Zhenxuan Zeng | Zhenxuan Zeng, Qiao Wu, Xiyu Zhang, Lin Yuanbo Wu, Pei An, Jiaqi Yang,
Ji Wang, Peng Wang | Unlocking Generalization Power in LiDAR Point Cloud Registration | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world environments, a LiDAR point cloud registration method with
robust generalization capabilities (across varying distances and datasets) is
crucial for ensuring safety in autonomous driving and other LiDAR-based
applications. However, current methods fall short in achieving this level of
generalization. To address these limitations, we propose UGP, a pruned
framework designed to enhance generalization power for LiDAR point cloud
registration. The core insight in UGP is the elimination of cross-attention
mechanisms to improve generalization, allowing the network to concentrate on
intra-frame feature extraction. Additionally, we introduce a progressive
self-attention module to reduce ambiguity in large-scale scenes and integrate
Bird's Eye View (BEV) features to incorporate semantic information about scene
elements. Together, these enhancements significantly boost the network's
generalization performance. We validated our approach through various
generalization experiments in multiple outdoor scenes. In cross-distance
generalization experiments on KITTI and nuScenes, UGP achieved state-of-the-art
mean Registration Recall rates of 94.5% and 91.4%, respectively. In
cross-dataset generalization from nuScenes to KITTI, UGP achieved a
state-of-the-art mean Registration Recall of 90.9%. Code will be available at
https://github.com/peakpang/UGP.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 08:20:59 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zeng",
"Zhenxuan",
""
],
[
"Wu",
"Qiao",
""
],
[
"Zhang",
"Xiyu",
""
],
[
"Wu",
"Lin Yuanbo",
""
],
[
"An",
"Pei",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Wang",
"Ji",
""
],
[
"Wang",
"Peng",
""
... | TITLE: Unlocking Generalization Power in LiDAR Point Cloud Registration
ABSTRACT: In real-world environments, a LiDAR point cloud registration method with
robust generalization capabilities (across varying distances and datasets) is
crucial for ensuring safety in autonomous driving and other LiDAR-based
applications. However, current methods fall short in achieving this level of
generalization. To address these limitations, we propose UGP, a pruned
framework designed to enhance generalization power for LiDAR point cloud
registration. The core insight in UGP is the elimination of cross-attention
mechanisms to improve generalization, allowing the network to concentrate on
intra-frame feature extraction. Additionally, we introduce a progressive
self-attention module to reduce ambiguity in large-scale scenes and integrate
Bird's Eye View (BEV) features to incorporate semantic information about scene
elements. Together, these enhancements significantly boost the network's
generalization performance. We validated our approach through various
generalization experiments in multiple outdoor scenes. In cross-distance
generalization experiments on KITTI and nuScenes, UGP achieved state-of-the-art
mean Registration Recall rates of 94.5% and 91.4%, respectively. In
cross-dataset generalization from nuScenes to KITTI, UGP achieved a
state-of-the-art mean Registration Recall of 90.9%. Code will be available at
https://github.com/peakpang/UGP.
|
2503.10152 | Shenghao Fu | Shenghao Fu, Junkai Yan, Qize Yang, Xihan Wei, Xiaohua Xie, Wei-Shi
Zheng | A Hierarchical Semantic Distillation Framework for Open-Vocabulary
Object Detection | Accepted to TMM 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary object detection (OVD) aims to detect objects beyond the
training annotations, where detectors are usually aligned to a pre-trained
vision-language model, e.g., CLIP, to inherit its generalizable recognition
ability so that detectors can recognize new or novel objects. However, previous
works directly align the feature space with CLIP and fail to learn the semantic
knowledge effectively. In this work, we propose a hierarchical semantic
distillation framework named HD-OVD to construct a comprehensive distillation
process, which exploits generalizable knowledge from the CLIP model in three
aspects. In the first hierarchy of HD-OVD, the detector learns fine-grained
instance-wise semantics from the CLIP image encoder by modeling relations among
single objects in the visual space. Besides, we introduce text space
novel-class-aware classification to help the detector assimilate the highly
generalizable class-wise semantics from the CLIP text encoder, representing the
second hierarchy. Lastly, abundant image-wise semantics containing multi-object
and their contexts are also distilled by an image-wise contrastive
distillation. Benefiting from the elaborated semantic distillation in triple
hierarchies, our HD-OVD inherits generalizable recognition ability from CLIP at
the instance, class, and image levels. Thus, we boost the novel AP on the OV-COCO
dataset to 46.4% with a ResNet50 backbone, which outperforms others by a clear
margin. We also conduct extensive ablation studies to analyze how each
component works.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 08:27:18 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Fu",
"Shenghao",
""
],
[
"Yan",
"Junkai",
""
],
[
"Yang",
"Qize",
""
],
[
"Wei",
"Xihan",
""
],
[
"Xie",
"Xiaohua",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] | TITLE: A Hierarchical Semantic Distillation Framework for Open-Vocabulary
Object Detection
ABSTRACT: Open-vocabulary object detection (OVD) aims to detect objects beyond the
training annotations, where detectors are usually aligned to a pre-trained
vision-language model, e.g., CLIP, to inherit its generalizable recognition
ability so that detectors can recognize new or novel objects. However, previous
works directly align the feature space with CLIP and fail to learn the semantic
knowledge effectively. In this work, we propose a hierarchical semantic
distillation framework named HD-OVD to construct a comprehensive distillation
process, which exploits generalizable knowledge from the CLIP model in three
aspects. In the first hierarchy of HD-OVD, the detector learns fine-grained
instance-wise semantics from the CLIP image encoder by modeling relations among
single objects in the visual space. Besides, we introduce text space
novel-class-aware classification to help the detector assimilate the highly
generalizable class-wise semantics from the CLIP text encoder, representing the
second hierarchy. Lastly, abundant image-wise semantics containing multi-object
and their contexts are also distilled by an image-wise contrastive
distillation. Benefiting from the elaborated semantic distillation in triple
hierarchies, our HD-OVD inherits generalizable recognition ability from CLIP at
the instance, class, and image levels. Thus, we boost the novel AP on the OV-COCO
dataset to 46.4% with a ResNet50 backbone, which outperforms others by a clear
margin. We also conduct extensive ablation studies to analyze how each
component works.
|
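The HD-OVD entry above distills CLIP knowledge into the detector at three levels. The first, instance-wise, hierarchy reduces at its core to aligning detector region embeddings with CLIP embeddings of the same boxes; the loss below is a generic cosine-alignment sketch of that step, not HD-OVD's full triple-hierarchy objective.

```python
import torch
import torch.nn.functional as F

def instance_distill_loss(det_emb, clip_emb):
    """Cosine-alignment distillation between detector region embeddings
    and frozen CLIP image embeddings for the same boxes."""
    det = F.normalize(det_emb, dim=-1)
    tgt = F.normalize(clip_emb, dim=-1)
    return (1.0 - (det * tgt).sum(dim=-1)).mean()

det_emb = torch.randn(16, 512, requires_grad=True)  # detector RoI features
clip_emb = torch.randn(16, 512)                     # frozen CLIP targets
loss = instance_distill_loss(det_emb, clip_emb)
loss.backward()                                     # gradients flow to detector
print(float(loss))
```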
2503.10154 | Junghyo Jo | Yechan Lim, Sangwon Lee, Junghyo Jo | Data augmentation using diffusion models to enhance inverse Ising
inference | null | null | null | null | physics.data-an cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying model parameters from observed configurations poses a fundamental
challenge in data science, especially with limited data. Recently, diffusion
models have emerged as a novel paradigm in generative machine learning, capable
of producing new samples that closely mimic observed data. These models learn
the gradient of model probabilities, bypassing the need for cumbersome
calculations of partition functions across all possible configurations. We
explore whether diffusion models can enhance parameter inference by augmenting
small datasets. Our findings demonstrate this potential through a synthetic
task involving inverse Ising inference and a real-world application of
reconstructing missing values in neural activity data. This study serves as a
proof-of-concept for using diffusion models for data augmentation in
physics-related problems, thereby opening new avenues in data science.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 08:29:17 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Lim",
"Yechan",
""
],
[
"Lee",
"Sangwon",
""
],
[
"Jo",
"Junghyo",
""
]
] | TITLE: Data augmentation using diffusion models to enhance inverse Ising
inference
ABSTRACT: Identifying model parameters from observed configurations poses a fundamental
challenge in data science, especially with limited data. Recently, diffusion
models have emerged as a novel paradigm in generative machine learning, capable
of producing new samples that closely mimic observed data. These models learn
the gradient of model probabilities, bypassing the need for cumbersome
calculations of partition functions across all possible configurations. We
explore whether diffusion models can enhance parameter inference by augmenting
small datasets. Our findings demonstrate this potential through a synthetic
task involving inverse Ising inference and a real-world application of
reconstructing missing values in neural activity data. This study serves as a
proof-of-concept for using diffusion models for data augmentation in
physics-related problems, thereby opening new avenues in data science.
|
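The diffusion-augmentation entry above uses inverse Ising inference as its synthetic testbed. The standard pseudo-likelihood approach sidesteps the partition function by regressing each spin on all others: for +/-1 spins, P(s_i = 1 | rest) = sigmoid(2 * sum_j J_ij s_j), so logistic weights equal 2 J_ij. The sketch below runs that baseline on Gibbs-sampled data; the paper's contribution is to augment the sample set with diffusion-model draws before this inference step, which is omitted here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_samples = 8, 4000
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)                         # symmetric couplings, no self-term

# Gibbs sampling: draw spin configurations from the Ising model.
s = rng.choice([-1, 1], size=n).astype(float)
samples = []
for t in range(n_samples * 5):
    i = t % n
    p_up = 1 / (1 + np.exp(-2 * J[i] @ s))     # conditional of spin i
    s[i] = 1.0 if rng.random() < p_up else -1.0
    if t % 5 == 4:
        samples.append(s.copy())
X = np.array(samples)

# Pseudo-likelihood inference: one logistic regression per spin.
J_hat = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    clf = LogisticRegression(C=10.0).fit(X[:, others], X[:, i])
    J_hat[i, others] = clf.coef_[0] / 2        # undo the factor of 2
off = ~np.eye(n, dtype=bool)
print(np.corrcoef(J[off], J_hat[off])[0, 1])   # close to 1 with enough samples
```

Shrinking `n_samples` makes the recovered couplings degrade quickly, which is exactly the small-data regime the diffusion augmentation targets.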
2503.10166 | Pengfei Luo | Pengfei Luo, Jingbo Zhou, Tong Xu, Yuan Xia, Linli Xu, Enhong Chen | ImageScope: Unifying Language-Guided Image Retrieval via Large
Multimodal Model Collective Reasoning | WWW 2025 | null | null | null | cs.IR cs.AI cs.MM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the proliferation of images in online content, language-guided image
retrieval (LGIR) has emerged as a research hotspot over the past decade,
encompassing a variety of subtasks with diverse input forms. While the
development of large multimodal models (LMMs) has significantly facilitated
these tasks, existing approaches often address them in isolation, requiring the
construction of separate systems for each task. This not only increases system
complexity and maintenance costs, but also exacerbates challenges stemming from
language ambiguity and complex image content, making it difficult for retrieval
systems to provide accurate and reliable results. To this end, we propose
ImageScope, a training-free, three-stage framework that leverages collective
reasoning to unify LGIR tasks. The key insight behind the unification lies in
the compositional nature of language, which transforms diverse LGIR tasks into
a generalized text-to-image retrieval process, along with the reasoning of LMMs
serving as a universal verification to refine the results. To be specific, in
the first stage, we improve the robustness of the framework by synthesizing
search intents across varying levels of semantic granularity using
chain-of-thought (CoT) reasoning. In the second and third stages, we then
reflect on retrieval results by verifying predicate propositions locally, and
performing pairwise evaluations globally. Experiments conducted on six LGIR
datasets demonstrate that ImageScope outperforms competitive baselines.
Comprehensive evaluations and ablation studies further confirm the
effectiveness of our design.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 08:43:24 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Luo",
"Pengfei",
""
],
[
"Zhou",
"Jingbo",
""
],
[
"Xu",
"Tong",
""
],
[
"Xia",
"Yuan",
""
],
[
"Xu",
"Linli",
""
],
[
"Chen",
"Enhong",
""
]
] | TITLE: ImageScope: Unifying Language-Guided Image Retrieval via Large
Multimodal Model Collective Reasoning
ABSTRACT: With the proliferation of images in online content, language-guided image
retrieval (LGIR) has emerged as a research hotspot over the past decade,
encompassing a variety of subtasks with diverse input forms. While the
development of large multimodal models (LMMs) has significantly facilitated
these tasks, existing approaches often address them in isolation, requiring the
construction of separate systems for each task. This not only increases system
complexity and maintenance costs, but also exacerbates challenges stemming from
language ambiguity and complex image content, making it difficult for retrieval
systems to provide accurate and reliable results. To this end, we propose
ImageScope, a training-free, three-stage framework that leverages collective
reasoning to unify LGIR tasks. The key insight behind the unification lies in
the compositional nature of language, which transforms diverse LGIR tasks into
a generalized text-to-image retrieval process, along with the reasoning of LMMs
serving as a universal verification to refine the results. To be specific, in
the first stage, we improve the robustness of the framework by synthesizing
search intents across varying levels of semantic granularity using
chain-of-thought (CoT) reasoning. In the second and third stages, we then
reflect on retrieval results by verifying predicate propositions locally, and
performing pairwise evaluations globally. Experiments conducted on six LGIR
datasets demonstrate that ImageScope outperforms competitive baselines.
Comprehensive evaluations and ablation studies further confirm the
effectiveness of our design.
|
2503.10195 | Daqing Guo | Hongze Sun, Jun Wang, Wuque Cai, Duo Chen, Qianqian Liao, Jiayi He,
Yan Cui, Dezhong Yao, Daqing Guo | ST-FlowNet: An Efficient Spiking Neural Network for Event-Based Optical
Flow Estimation | 12 pages, 5 figures, 5 tables; This work has been submitted for
possible publication | null | null | null | cs.CV cs.NE q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) have emerged as a promising tool for
event-based optical flow estimation tasks due to their ability to leverage
spatio-temporal information and low-power capabilities. However, the
performance of SNN models is often constrained, limiting their application in
real-world scenarios. In this work, we address this gap by proposing a novel
neural network architecture, ST-FlowNet, specifically tailored for optical flow
estimation from event-based data. The ST-FlowNet architecture integrates
ConvGRU modules to facilitate cross-modal feature augmentation and temporal
alignment of the predicted optical flow, improving the network's ability to
capture complex motion dynamics. Additionally, to overcome the challenges
associated with training SNNs, we introduce a novel approach to derive SNN
models from pre-trained artificial neural networks (ANNs) through ANN-to-SNN
conversion or our proposed BISNN method. Notably, the BISNN method alleviates
the complexities involved in biological parameter selection, further enhancing
the robustness of SNNs in optical flow estimation tasks. Extensive evaluations
on three benchmark event-based datasets demonstrate that the SNN-based
ST-FlowNet model outperforms state-of-the-art methods, delivering superior
performance in accurate optical flow estimation across a diverse range of
dynamic visual scenes. Furthermore, the inherent energy efficiency of SNN
models is highlighted, establishing a compelling advantage for their practical
deployment. Overall, our work presents a novel framework for optical flow
estimation using SNNs and event-based data, contributing to the advancement of
neuromorphic vision applications.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:28:42 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Sun",
"Hongze",
""
],
[
"Wang",
"Jun",
""
],
[
"Cai",
"Wuque",
""
],
[
"Chen",
"Duo",
""
],
[
"Liao",
"Qianqian",
""
],
[
"He",
"Jiayi",
""
],
[
"Cui",
"Yan",
""
],
[
"Yao",
"Dezhong",
""
... | TITLE: ST-FlowNet: An Efficient Spiking Neural Network for Event-Based Optical
Flow Estimation
ABSTRACT: Spiking Neural Networks (SNNs) have emerged as a promising tool for
event-based optical flow estimation tasks due to their ability to leverage
spatio-temporal information and low-power capabilities. However, the
performance of SNN models is often constrained, limiting their application in
real-world scenarios. In this work, we address this gap by proposing a novel
neural network architecture, ST-FlowNet, specifically tailored for optical flow
estimation from event-based data. The ST-FlowNet architecture integrates
ConvGRU modules to facilitate cross-modal feature augmentation and temporal
alignment of the predicted optical flow, improving the network's ability to
capture complex motion dynamics. Additionally, to overcome the challenges
associated with training SNNs, we introduce a novel approach to derive SNN
models from pre-trained artificial neural networks (ANNs) through ANN-to-SNN
conversion or our proposed BISNN method. Notably, the BISNN method alleviates
the complexities involved in biological parameter selection, further enhancing
the robustness of SNNs in optical flow estimation tasks. Extensive evaluations
on three benchmark event-based datasets demonstrate that the SNN-based
ST-FlowNet model outperforms state-of-the-art methods, delivering superior
performance in accurate optical flow estimation across a diverse range of
dynamic visual scenes. Furthermore, the inherent energy efficiency of SNN
models is highlighted, establishing a compelling advantage for their practical
deployment. Overall, our work presents a novel framework for optical flow
estimation using SNNs and event-based data, contributing to the advancement of
neuromorphic vision applications.
|
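The ST-FlowNet entry above integrates ConvGRU modules for temporal alignment. A ConvGRU replaces the dense products in a GRU with convolutions so the hidden state keeps its spatial layout across event frames; below is the generic textbook cell, not the authors' exact module.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """GRU gating with convolutions: update/reset gates and candidate
    state are all computed by conv layers over the spatial feature map."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):                   # x: (B,Cin,H,W), h: (B,Ch,H,W)
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde       # gated state update

cell = ConvGRUCell(in_ch=8, hid_ch=16)
h = torch.zeros(2, 16, 32, 32)
for _ in range(5):                             # unroll over 5 event frames
    h = cell(torch.rand(2, 8, 32, 32), h)
print(h.shape)                                 # torch.Size([2, 16, 32, 32])
```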
2503.10198 | Xiangjie Kong | Xiangjie Kong, Zhenghao Chen, Weiyao Liu, Kaili Ning, Lechao Zhang,
Syauqie Muhammad Marier, Yichen Liu, Yuhao Chen, Feng Xia | Deep Learning for Time Series Forecasting: A Survey | null | Int. J. Mach. Learn. & Cyber. (2025) | 10.1007/s13042-025-02560-w | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series forecasting (TSF) has long been a crucial task in both industry
and daily life. Most classical statistical models may have certain limitations
when applied to practical scenarios in fields such as energy, healthcare,
traffic, meteorology, and economics, especially when high accuracy is required.
With the continuous development of deep learning, numerous new models have
emerged in the field of time series forecasting in recent years. However,
existing surveys have not provided a unified summary of the wide range of model
architectures in this field, nor have they given detailed summaries of works in
feature extraction and datasets. To address this gap, in this review, we
comprehensively study the previous works and summarize the general paradigms of
Deep Time Series Forecasting (DTSF) in terms of model architectures. Besides,
we take an innovative approach by focusing on the composition of time series
and systematically explain important feature extraction methods. Additionally,
we provide an overall compilation of datasets from various domains in existing
works. Finally, we systematically emphasize the significant challenges faced
and future research directions in this field.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:32:01 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Kong",
"Xiangjie",
""
],
[
"Chen",
"Zhenghao",
""
],
[
"Liu",
"Weiyao",
""
],
[
"Ning",
"Kaili",
""
],
[
"Zhang",
"Lechao",
""
],
[
"Marier",
"Syauqie Muhammad",
""
],
[
"Liu",
"Yichen",
""
],
[
"C... | TITLE: Deep Learning for Time Series Forecasting: A Survey
ABSTRACT: Time series forecasting (TSF) has long been a crucial task in both industry
and daily life. Most classical statistical models may have certain limitations
when applied to practical scenarios in fields such as energy, healthcare,
traffic, meteorology, and economics, especially when high accuracy is required.
With the continuous development of deep learning, numerous new models have
emerged in the field of time series forecasting in recent years. However,
existing surveys have not provided a unified summary of the wide range of model
architectures in this field, nor have they given detailed summaries of works in
feature extraction and datasets. To address this gap, in this review, we
comprehensively study the previous works and summarize the general paradigms of
Deep Time Series Forecasting (DTSF) in terms of model architectures. Besides,
we take an innovative approach by focusing on the composition of time series
and systematically explain important feature extraction methods. Additionally,
we provide an overall compilation of datasets from various domains in existing
works. Finally, we systematically emphasize the significant challenges faced
and future research directions in this field.
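As a concrete illustration of one common DTSF paradigm discussed in this
survey, the sketch below shows a minimal recurrent forecaster in PyTorch; the
model, window length, and horizon are illustrative assumptions, not a specific
model from the surveyed literature.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal sequence-to-vector forecaster: encode a lookback window,
    then predict the next `horizon` steps in one shot."""
    def __init__(self, n_features: int, horizon: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, n_features)
        _, (h, _) = self.encoder(x)
        return self.head(h[-1])              # (batch, horizon)

model = LSTMForecaster(n_features=1, horizon=24)
x = torch.randn(8, 96, 1)                     # 8 series, 96-step lookback
print(model(x).shape)                          # torch.Size([8, 24])
```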
|
2503.10210 | Jialong Wu | Jialong Wu, Marco Braun, Dominic Spata, Matthias Rottmann | TARS: Traffic-Aware Radar Scene Flow Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene flow provides crucial motion information for autonomous driving. Recent
LiDAR scene flow models utilize the rigid-motion assumption at the instance
level, assuming objects are rigid bodies. However, these instance-level methods
are not suitable for sparse radar point clouds. In this work, we present a
novel $\textbf{T}$raffic-$\textbf{A}$ware $\textbf{R}$adar $\textbf{S}$cene
flow estimation method, named $\textbf{TARS}$, which utilizes the motion
rigidity at the traffic level. To address the challenges in radar scene flow,
we perform object detection and scene flow jointly and boost the latter. We
incorporate the feature map from the object detector, trained with detection
losses, to make radar scene flow aware of the environment and road users.
From this, we construct a Traffic Vector Field (TVF) in the feature space,
enabling a holistic traffic-level scene understanding in our scene flow branch.
When estimating the scene flow, we consider both point-level motion cues from
point neighbors and traffic-level consistency of rigid motion within the space.
TARS outperforms the state of the art on a proprietary dataset and the
View-of-Delft dataset, improving the benchmarks by 23% and 15%, respectively.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:54:08 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wu",
"Jialong",
""
],
[
"Braun",
"Marco",
""
],
[
"Spata",
"Dominic",
""
],
[
"Rottmann",
"Matthias",
""
]
] | TITLE: TARS: Traffic-Aware Radar Scene Flow Estimation
ABSTRACT: Scene flow provides crucial motion information for autonomous driving. Recent
LiDAR scene flow models utilize the rigid-motion assumption at the instance
level, assuming objects are rigid bodies. However, these instance-level methods
are not suitable for sparse radar point clouds. In this work, we present a
novel $\textbf{T}$raffic-$\textbf{A}$ware $\textbf{R}$adar $\textbf{S}$cene
flow estimation method, named $\textbf{TARS}$, which utilizes the motion
rigidity at the traffic level. To address the challenges in radar scene flow,
we perform object detection and scene flow jointly and boost the latter. We
incorporate the feature map from the object detector, trained with detection
losses, to make radar scene flow aware of the environment and road users.
From this, we construct a Traffic Vector Field (TVF) in the feature space,
enabling a holistic traffic-level scene understanding in our scene flow branch.
When estimating the scene flow, we consider both point-level motion cues from
point neighbors and traffic-level consistency of rigid motion within the space.
TARS outperforms the state of the art on a proprietary dataset and the
View-of-Delft dataset, improving the benchmarks by 23% and 15%, respectively.
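The traffic-level rigid-motion consistency described above can be illustrated
with a small numerical check: fit the best rigid transform to a predicted flow
field (Kabsch algorithm) and measure per-point deviation. This is a hedged
sketch of the general idea, not TARS's actual loss; all names are illustrative.

```python
import numpy as np

def rigid_consistency_residual(points, flow):
    """Fit the best rigid transform (R, t) mapping points -> points + flow
    (Kabsch), then return each point's deviation from that rigid motion."""
    src, dst = points, points + flow
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    rigid_flow = (points @ R.T + t) - points
    return np.linalg.norm(flow - rigid_flow, axis=1)

pts = np.random.randn(50, 3)
c, s = np.cos(0.1), np.sin(0.1)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
flow = pts @ Rz.T - pts                        # a truly rigid motion
print(rigid_consistency_residual(pts, flow).max())  # ~0 up to float error
```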
|
2503.10214 | Zhiwu Wang | Zhiwu Wang, Yichen Wu, Renzhen Wang, Haokun Lin, Quanziang Wang, Qian
Zhao, Deyu Meng | Singular Value Fine-tuning for Few-Shot Class-Incremental Learning | 12 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class-Incremental Learning (CIL) aims to prevent catastrophic forgetting of
previously learned classes while sequentially incorporating new ones. The more
challenging Few-shot CIL (FSCIL) setting further complicates this by providing
only a limited number of samples for each new class, increasing the risk of
overfitting in addition to standard CIL challenges. While catastrophic
forgetting has been extensively studied, overfitting in FSCIL, especially with
large foundation models, has received less attention. To fill this gap, we
propose Singular Value Fine-tuning for FSCIL (SVFCL) and compare it with
existing approaches for adapting foundation models to FSCIL, which primarily
build on Parameter Efficient Fine-Tuning (PEFT) methods like prompt tuning and
Low-Rank Adaptation (LoRA). Specifically, SVFCL applies singular value
decomposition to the foundation model weights, keeping the singular vectors
fixed while fine-tuning the singular values for each task, and then merging
them. This simple yet effective approach not only alleviates the forgetting
problem but also mitigates overfitting more effectively while significantly
reducing trainable parameters. Extensive experiments on four benchmark
datasets, along with visualizations and ablation studies, validate the
effectiveness of SVFCL. The code will be made available.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:57:28 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Zhiwu",
""
],
[
"Wu",
"Yichen",
""
],
[
"Wang",
"Renzhen",
""
],
[
"Lin",
"Haokun",
""
],
[
"Wang",
"Quanziang",
""
],
[
"Zhao",
"Qian",
""
],
[
"Meng",
"Deyu",
""
]
] | TITLE: Singular Value Fine-tuning for Few-Shot Class-Incremental Learning
ABSTRACT: Class-Incremental Learning (CIL) aims to prevent catastrophic forgetting of
previously learned classes while sequentially incorporating new ones. The more
challenging Few-shot CIL (FSCIL) setting further complicates this by providing
only a limited number of samples for each new class, increasing the risk of
overfitting in addition to standard CIL challenges. While catastrophic
forgetting has been extensively studied, overfitting in FSCIL, especially with
large foundation models, has received less attention. To fill this gap, we
propose Singular Value Fine-tuning for FSCIL (SVFCL) and compare it with
existing approaches for adapting foundation models to FSCIL, which primarily
build on Parameter Efficient Fine-Tuning (PEFT) methods like prompt tuning and
Low-Rank Adaptation (LoRA). Specifically, SVFCL applies singular value
decomposition to the foundation model weights, keeping the singular vectors
fixed while fine-tuning the singular values for each task, and then merging
them. This simple yet effective approach not only alleviates the forgetting
problem but also mitigates overfitting more effectively while significantly
reducing trainable parameters. Extensive experiments on four benchmark
datasets, along with visualizations and ablation studies, validate the
effectiveness of SVFCL. The code will be made available.
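The core mechanic, decomposing a frozen weight matrix and training only its
singular values, can be sketched in a few lines of PyTorch. This is a minimal
illustration of the idea as stated in the abstract; the per-task merging and
the actual SVFCL implementation details are left out.

```python
import torch
import torch.nn as nn

class SVFLinear(nn.Module):
    """Singular-value fine-tuning of a frozen linear layer: W = U diag(s) V^T.
    U and V stay fixed; only the singular values s are trained per task."""
    def __init__(self, base: nn.Linear):
        super().__init__()
        U, S, Vh = torch.linalg.svd(base.weight.detach(), full_matrices=False)
        self.register_buffer("U", U)          # frozen singular vectors
        self.register_buffer("Vh", Vh)
        self.s = nn.Parameter(S.clone())      # trainable singular values
        self.bias = base.bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W = self.U @ torch.diag(self.s) @ self.Vh
        return nn.functional.linear(x, W, self.bias)

layer = SVFLinear(nn.Linear(128, 64))
n_train = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(n_train)   # 128: 64 singular values + the 64-dim bias
```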
|
2503.10216 | Kaixiang Yang | Kaixiang Yang, Xin Li, Qiang Li, Zhiwei Wang | CoStoDet-DDPM: Collaborative Training of Stochastic and Deterministic
Models Improves Surgical Workflow Anticipation and Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anticipating and recognizing surgical workflows are critical for intelligent
surgical assistance systems. However, existing methods rely on deterministic
decision-making, struggling to generalize across the large anatomical and
procedural variations inherent in real-world surgeries. In this paper, we
introduce an innovative framework that incorporates stochastic modeling through
a denoising diffusion probabilistic model (DDPM) into conventional
deterministic learning for surgical workflow analysis. At the heart of our
approach is a collaborative co-training paradigm: the DDPM branch captures
procedural uncertainties to enrich feature representations, while the task
branch focuses on predicting surgical phases and instrument
usage. Theoretically, we demonstrate that this mutual refinement mechanism
benefits both branches: the DDPM reduces prediction errors in uncertain
scenarios, and the task branch directs the DDPM toward clinically meaningful
representations. Notably, the DDPM branch is discarded during inference,
enabling real-time predictions without sacrificing accuracy. Experiments on the
Cholec80 dataset show that for the anticipation task, our method achieves a 16%
reduction in eMAE compared to state-of-the-art approaches, and for phase
recognition, it improves the Jaccard score by 1.0%. Additionally, on the
AutoLaparo dataset, our method achieves a 1.5% improvement in the Jaccard score
for phase recognition, while also exhibiting robust generalization to
patient-specific variations. Our code and weights are available at
https://github.com/kk42yy/CoStoDet-DDPM.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 09:59:05 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Yang",
"Kaixiang",
""
],
[
"Li",
"Xin",
""
],
[
"Li",
"Qiang",
""
],
[
"Wang",
"Zhiwei",
""
]
] | TITLE: CoStoDet-DDPM: Collaborative Training of Stochastic and Deterministic
Models Improves Surgical Workflow Anticipation and Recognition
ABSTRACT: Anticipating and recognizing surgical workflows are critical for intelligent
surgical assistance systems. However, existing methods rely on deterministic
decision-making, struggling to generalize across the large anatomical and
procedural variations inherent in real-world surgeries. In this paper, we
introduce an innovative framework that incorporates stochastic modeling through
a denoising diffusion probabilistic model (DDPM) into conventional
deterministic learning for surgical workflow analysis. At the heart of our
approach is a collaborative co-training paradigm: the DDPM branch captures
procedural uncertainties to enrich feature representations, while the task
branch focuses on predicting surgical phases and instrument
usage. Theoretically, we demonstrate that this mutual refinement mechanism
benefits both branches: the DDPM reduces prediction errors in uncertain
scenarios, and the task branch directs the DDPM toward clinically meaningful
representations. Notably, the DDPM branch is discarded during inference,
enabling real-time predictions without sacrificing accuracy. Experiments on the
Cholec80 dataset show that for the anticipation task, our method achieves a 16%
reduction in eMAE compared to state-of-the-art approaches, and for phase
recognition, it improves the Jaccard score by 1.0%. Additionally, on the
AutoLaparo dataset, our method achieves a 1.5% improvement in the Jaccard score
for phase recognition, while also exhibiting robust generalization to
patient-specific variations. Our code and weights are available at
https://github.com/kk42yy/CoStoDet-DDPM.
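A minimal sketch of the co-training objective, assuming a backbone, a task
head, and a DDPM branch with hypothetical `add_noise`/`num_steps` helpers (the
paper's actual architecture and losses may differ). The key point is that the
DDPM term shapes the shared features during training and is dropped at
inference.

```python
import torch

def cotraining_step(backbone, task_head, ddpm_branch, x, y, lam=0.1):
    """One co-training step: the task branch predicts phases/instruments,
    while the DDPM branch learns to denoise the shared features.
    At inference, only backbone + task_head are kept."""
    feats = backbone(x)
    task_loss = torch.nn.functional.cross_entropy(task_head(feats), y)

    # Standard DDPM objective on the features: predict the injected noise.
    # `num_steps` and `add_noise` are hypothetical helpers on ddpm_branch.
    t = torch.randint(0, ddpm_branch.num_steps, (feats.size(0),),
                      device=feats.device)
    noise = torch.randn_like(feats)
    noisy = ddpm_branch.add_noise(feats, noise, t)
    ddpm_loss = torch.nn.functional.mse_loss(ddpm_branch(noisy, t), noise)

    return task_loss + lam * ddpm_loss
```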
|
2503.10225 | Zhixuan Li | Zhixuan Li, Hyunse Yoon, Sanghoon Lee, Weisi Lin | Unveiling the Invisible: Reasoning Complex Occlusions Amodally with AURA | 11 pages, 5 figures, 5 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Amodal segmentation aims to infer the complete shape of occluded objects,
even when the occluded region's appearance is unavailable. However, current
amodal segmentation methods lack the capability to interact with users through
text input and struggle to understand or reason about implicit and complex
purposes. While methods like LISA integrate multi-modal large language models
(LLMs) with segmentation for reasoning tasks, they are limited to predicting
only visible object regions and face challenges in handling complex occlusion
scenarios. To address these limitations, we propose a novel task named amodal
reasoning segmentation, aiming to predict the complete amodal shape of occluded
objects while providing answers with elaborations based on user text input. We
develop a generalizable dataset generation pipeline and introduce a new dataset
focusing on daily life scenarios, encompassing diverse real-world occlusions.
Furthermore, we present AURA (Amodal Understanding and Reasoning Assistant), a
novel model with advanced global and spatial-level designs specifically
tailored to handle complex occlusions. Extensive experiments validate AURA's
effectiveness on the proposed dataset. The code, model, and dataset will be
publicly released.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:08:18 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Zhixuan",
""
],
[
"Yoon",
"Hyunse",
""
],
[
"Lee",
"Sanghoon",
""
],
[
"Lin",
"Weisi",
""
]
] | TITLE: Unveiling the Invisible: Reasoning Complex Occlusions Amodally with AURA
ABSTRACT: Amodal segmentation aims to infer the complete shape of occluded objects,
even when the occluded region's appearance is unavailable. However, current
amodal segmentation methods lack the capability to interact with users through
text input and struggle to understand or reason about implicit and complex
purposes. While methods like LISA integrate multi-modal large language models
(LLMs) with segmentation for reasoning tasks, they are limited to predicting
only visible object regions and face challenges in handling complex occlusion
scenarios. To address these limitations, we propose a novel task named amodal
reasoning segmentation, aiming to predict the complete amodal shape of occluded
objects while providing answers with elaborations based on user text input. We
develop a generalizable dataset generation pipeline and introduce a new dataset
focusing on daily life scenarios, encompassing diverse real-world occlusions.
Furthermore, we present AURA (Amodal Understanding and Reasoning Assistant), a
novel model with advanced global and spatial-level designs specifically
tailored to handle complex occlusions. Extensive experiments validate AURA's
effectiveness on the proposed dataset. The code, model, and dataset will be
publicly released.
|
2503.10228 | Andi Nika | Andi Nika, Jonathan N\"other, Debmalya Mandal, Parameswaran
Kamalaruban, Adish Singla and Goran Radanovi\'c | Policy Teaching via Data Poisoning in Learning from Human Preferences | In AISTATS 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study data poisoning attacks in learning from human preferences. More
specifically, we consider the problem of teaching/enforcing a target policy
$\pi^\dagger$ by synthesizing preference data. We seek to understand the
susceptibility of different preference-based learning paradigms to poisoned
preference data by analyzing the number of samples required by the attacker to
enforce $\pi^\dagger$. We first propose a general data poisoning formulation in
learning from human preferences and then study it for two popular paradigms,
namely: (a) reinforcement learning from human feedback (RLHF) that operates by
learning a reward model using preferences; (b) direct preference optimization
(DPO) that directly optimizes policy using preferences. We conduct a
theoretical analysis of the effectiveness of data poisoning in a setting where
the attacker is allowed to augment a pre-existing dataset and also study its
special case where the attacker can synthesize the entire preference dataset
from scratch. As our main results, we provide lower/upper bounds on the number
of samples required to enforce $\pi^\dagger$. Finally, we discuss the
implications of our results in terms of the susceptibility of these learning
paradigms under such data poisoning attacks.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:11:54 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Nika",
"Andi",
""
],
[
"Nöther",
"Jonathan",
""
],
[
"Mandal",
"Debmalya",
""
],
[
"Kamalaruban",
"Parameswaran",
""
],
[
"Singla",
"Adish",
""
],
[
"Radanović",
"Goran",
""
]
] | TITLE: Policy Teaching via Data Poisoning in Learning from Human Preferences
ABSTRACT: We study data poisoning attacks in learning from human preferences. More
specifically, we consider the problem of teaching/enforcing a target policy
$\pi^\dagger$ by synthesizing preference data. We seek to understand the
susceptibility of different preference-based learning paradigms to poisoned
preference data by analyzing the number of samples required by the attacker to
enforce $\pi^\dagger$. We first propose a general data poisoning formulation in
learning from human preferences and then study it for two popular paradigms,
namely: (a) reinforcement learning from human feedback (RLHF) that operates by
learning a reward model using preferences; (b) direct preference optimization
(DPO) that directly optimizes policy using preferences. We conduct a
theoretical analysis of the effectiveness of data poisoning in a setting where
the attacker is allowed to augment a pre-existing dataset and also study its
special case where the attacker can synthesize the entire preference dataset
from scratch. As our main results, we provide lower/upper bounds on the number
of samples required to enforce $\pi^\dagger$. Finally, we discuss the
implications of our results in terms of the susceptibility of these learning
paradigms under such data poisoning attacks.
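In LaTeX, the attack problem sketched in the abstract can be written
schematically as a sample-minimization program; this formalization is an
editorial paraphrase, not the paper's exact notation.

```latex
% Schematic poisoning objective: augment a dataset D with as few synthetic
% preference pairs D' as possible so that the learner outputs \pi^\dagger.
\[
  n^\star(\pi^\dagger) \;=\; \min_{D'} \; |D'|
  \quad \text{s.t.} \quad
  \hat{\pi}\bigl(D \cup D'\bigr) = \pi^\dagger,
\]
% where \hat{\pi}(\cdot) is the policy returned by the learning paradigm
% (RLHF or DPO) on the given preference data; the paper's results bound
% n^\star from above and below.
```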
|
2503.10233 | Laya Mahmoudi | Samira Zangooei, Amirhossein Darmani, Hossein Farahmand Nezhad, Laya
Mahmoudi | ARLED: Leveraging LED-based ARMAN Model for Abstractive Summarization of
Persian Long Documents | 11 pages, 3 tables | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing volume of textual data poses challenges in reading and
comprehending large documents, particularly for scholars who need to extract
useful information from research articles. Automatic text summarization has
emerged as a powerful tool to condense lengthy documents into concise and
informative summaries. Depending on the approach used, text summarization can
be categorized as either extractive or abstractive. While extractive methods
are commonly used due to their simplicity, they often miss important
information. On the other hand, abstractive summarization can generate more
coherent and informative summaries by understanding the underlying meaning of
the text. Abstractive techniques have gained attention in various languages,
and recent advancements have been achieved through pre-training models such as
BERT, BART, and T5. However, the challenge of summarizing long documents
remains, and alternative models like Longformer have been introduced to address
this limitation. In this context, this paper focuses on abstractive
summarization in the Persian language. The authors introduce a new dataset of
300,000 full-text Persian papers obtained from the Ensani website and apply the
ARMAN model, based on the Longformer architecture, to generate summaries. The
experimental results demonstrate promising performance in Persian text
summarization. The paper provides a comprehensive overview of related work,
discusses the methodology, presents the experimental results, and concludes
with future research directions.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:16:46 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zangooei",
"Samira",
""
],
[
"Darmani",
"Amirhossein",
""
],
[
"Nezhad",
"Hossein Farahmand",
""
],
[
"Mahmoudi",
"Laya",
""
]
] | TITLE: ARLED: Leveraging LED-based ARMAN Model for Abstractive Summarization of
Persian Long Documents
ABSTRACT: The increasing volume of textual data poses challenges in reading and
comprehending large documents, particularly for scholars who need to extract
useful information from research articles. Automatic text summarization has
emerged as a powerful tool to condense lengthy documents into concise and
informative summaries. Depending on the approach used, text summarization can
be categorized as either extractive or abstractive. While extractive methods
are commonly used due to their simplicity, they often miss important
information. On the other hand, abstractive summarization can generate more
coherent and informative summaries by understanding the underlying meaning of
the text. Abstractive techniques have gained attention in various languages,
and recent advancements have been achieved through pre-training models such as
BERT, BART, and T5. However, the challenge of summarizing long documents
remains, and alternative models like Longformer have been introduced to address
this limitation. In this context, this paper focuses on abstractive
summarization in the Persian language. The authors introduce a new dataset of
300,000 full-text Persian papers obtained from the Ensani website and apply the
ARMAN model, based on the Longformer architecture, to generate summaries. The
experimental results demonstrate promising performance in Persian text
summarization. The paper provides a comprehensive overview of related work,
discusses the methodology, presents the experimental results, and concludes
with future research directions.
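For readers who want to try an LED/Longformer-style summarizer on Persian
text, a hedged usage sketch with the Hugging Face transformers API follows;
the checkpoint identifier is a placeholder, since the paper's ARMAN-LED
weights may be published under a different name.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint name, not a verified model id.
CKPT = "arman-led-persian-summarization"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSeq2SeqLM.from_pretrained(CKPT)

def summarize(text: str, max_new_tokens: int = 128) -> str:
    # LED/Longformer-style models accept much longer inputs than BERT-era
    # encoders; 4096 tokens is an illustrative cap.
    inputs = tokenizer(text, return_tensors="pt",
                       truncation=True, max_length=4096)
    ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```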
|
2503.10239 | Yifeng Cai | Yifeng Cai, Ziqi Zhang, Mengyu Yao, Junlin Liu, Xiaoke Zhao, Xinyi Fu,
Ruoyu Li, Zhe Li, Xiangqun Chen, Yao Guo, Ding Li | I Can Tell Your Secrets: Inferring Privacy Attributes from Mini-app
Interaction History in Super-apps | Accepted by USENIX Security 2025 | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Super-apps have emerged as comprehensive platforms integrating various
mini-apps to provide diverse services. While super-apps offer convenience and
enriched functionality, they can introduce new privacy risks. This paper
reveals a new privacy leakage source in super-apps: mini-app interaction
history, including mini-app usage history (Mini-H) and operation history
(Op-H). Mini-H refers to the history of mini-apps accessed by users, such as
their frequency and categories. Op-H captures user interactions within
mini-apps, including button clicks, bar drags, and image views. Super-apps can
naturally collect these data without instrumentation due to the web-based
feature of mini-apps. We identify these data types as novel and unexplored
privacy risks through a literature review of 30 papers and an empirical
analysis of 31 super-apps. We design a mini-app interaction history-oriented
inference attack (THEFT) to exploit this new vulnerability. Using THEFT, an
insider threat within the low-privilege business department of the super-app
vendor, acting as the adversary, can achieve more than 95.5% accuracy in
inferring privacy attributes of over 16.1% of users. THEFT only requires a
small training dataset of 200 users from public breached databases on the
Internet. We also engage with super-app vendors and a standards association to
increase industry awareness and commitment to protect this data. Our
contributions are significant in identifying overlooked privacy risks,
demonstrating the effectiveness of a new attack, and influencing industry
practices toward better privacy protection in the super-app ecosystem.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:29:40 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Cai",
"Yifeng",
""
],
[
"Zhang",
"Ziqi",
""
],
[
"Yao",
"Mengyu",
""
],
[
"Liu",
"Junlin",
""
],
[
"Zhao",
"Xiaoke",
""
],
[
"Fu",
"Xinyi",
""
],
[
"Li",
"Ruoyu",
""
],
[
"Li",
"Zhe",
""
... | TITLE: I Can Tell Your Secrets: Inferring Privacy Attributes from Mini-app
Interaction History in Super-apps
ABSTRACT: Super-apps have emerged as comprehensive platforms integrating various
mini-apps to provide diverse services. While super-apps offer convenience and
enriched functionality, they can introduce new privacy risks. This paper
reveals a new privacy leakage source in super-apps: mini-app interaction
history, including mini-app usage history (Mini-H) and operation history
(Op-H). Mini-H refers to the history of mini-apps accessed by users, such as
their frequency and categories. Op-H captures user interactions within
mini-apps, including button clicks, bar drags, and image views. Super-apps can
naturally collect these data without instrumentation due to the web-based
feature of mini-apps. We identify these data types as novel and unexplored
privacy risks through a literature review of 30 papers and an empirical
analysis of 31 super-apps. We design a mini-app interaction history-oriented
inference attack (THEFT) to exploit this new vulnerability. Using THEFT, an
insider threat within the low-privilege business department of the super-app
vendor, acting as the adversary, can achieve more than 95.5% accuracy in
inferring privacy attributes of over 16.1% of users. THEFT only requires a
small training dataset of 200 users from public breached databases on the
Internet. We also engage with super-app vendors and a standards association to
increase industry awareness and commitment to protect this data. Our
contributions are significant in identifying overlooked privacy risks,
demonstrating the effectiveness of a new attack, and influencing industry
practices toward better privacy protection in the super-app ecosystem.
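The attack's core step, training a classifier that maps interaction-history
features to a privacy attribute, can be sketched with scikit-learn on
synthetic data; the features, labels, and model choice below are illustrative
stand-ins, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for THEFT's setting: per-user count features summarizing
# mini-app usage (category frequencies) and operations (clicks/drags/views),
# with a binary privacy attribute as the label.
rng = np.random.default_rng(0)
X = rng.poisson(lam=3.0, size=(200, 40)).astype(float)  # 200 users, 40 counts
y = (X[:, :5].sum(axis=1) > 15).astype(int)             # synthetic attribute

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"attribute-inference accuracy: {clf.score(X_te, y_te):.3f}")
```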
|
2503.10240 | Bogdan Chornomaz | Bogdan Chornomaz, Shay Moran, Tom Waknine | Spherical dimension | null | null | null | null | cs.DM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce and study the spherical dimension, a natural topological
relaxation of the VC dimension that unifies several results in learning theory
where topology plays a key role in the proofs. The spherical dimension is
defined by extending the set of realizable datasets (used to define the VC
dimension) to the continuous space of realizable distributions. In this space,
a shattered set of size d (in the VC sense) is completed into a continuous
object, specifically a d-dimensional sphere of realizable distributions. The
spherical dimension is then defined as the dimension of the largest sphere in
this space. Thus, the spherical dimension is at least the VC dimension.
The spherical dimension serves as a common foundation for leveraging the
Borsuk-Ulam theorem and related topological tools. We demonstrate the utility
of the spherical dimension in diverse applications, including disambiguations
of partial concept classes, reductions from classification to stochastic convex
optimization, stability and replicability, and sample compression schemes.
Perhaps surprisingly, we show that the open question posed by Alon, Hanneke,
Holzman, and Moran (FOCS 2021) of whether there exist non-trivial
disambiguations for halfspaces with margin is equivalent to the basic open
question of whether the VC and spherical dimensions are finite together.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:32:25 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Chornomaz",
"Bogdan",
""
],
[
"Moran",
"Shay",
""
],
[
"Waknine",
"Tom",
""
]
] | TITLE: Spherical dimension
ABSTRACT: We introduce and study the spherical dimension, a natural topological
relaxation of the VC dimension that unifies several results in learning theory
where topology plays a key role in the proofs. The spherical dimension is
defined by extending the set of realizable datasets (used to define the VC
dimension) to the continuous space of realizable distributions. In this space,
a shattered set of size d (in the VC sense) is completed into a continuous
object, specifically a d-dimensional sphere of realizable distributions. The
spherical dimension is then defined as the dimension of the largest sphere in
this space. Thus, the spherical dimension is at least the VC dimension.
The spherical dimension serves as a common foundation for leveraging the
Borsuk-Ulam theorem and related topological tools. We demonstrate the utility
of the spherical dimension in diverse applications, including disambiguations
of partial concept classes, reductions from classification to stochastic convex
optimization, stability and replicability, and sample compression schemes.
Perhaps surprisingly, we show that the open question posed by Alon, Hanneke,
Holzman, and Moran (FOCS 2021) of whether there exist non-trivial
disambiguations for halfspaces with margin is equivalent to the basic open
question of whether the VC and spherical dimensions are finite together.
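Schematically, the definitions can be written as follows; this is an editorial
paraphrase of the abstract, not the paper's formal statement.

```latex
% VC dimension: the largest d for which some d-point dataset is shattered,
% i.e. all of its labelings are realizable by the class H.
% Spherical dimension: pass to the continuous space of realizable
% distributions and take the largest sphere it contains:
\[
  \mathrm{sdim}(H) \;=\; \sup\bigl\{\, d \;:\; \text{the space of realizable
  distributions of } H \text{ contains a topological } d\text{-sphere} \,\bigr\},
\]
% so a shattered set of size d induces such a sphere, giving
% \mathrm{sdim}(H) \ge \mathrm{VC}(H).
```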
|
2503.10247 | Zhijie Zhu | Zhijie Zhu, Lei Fan, Maurice Pagnucco, Yang Song | Interpretable Image Classification via Non-parametric Part Prototype
Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Classifying images with an interpretable decision-making process is a
long-standing problem in computer vision. In recent years, Prototypical Part
Networks have gained traction as an approach for self-explainable neural
networks, due to their ability to mimic human visual reasoning by providing
explanations based on prototypical object parts. However, the quality of the
explanations generated by these methods leaves room for improvement, as the
prototypes usually focus on repetitive and redundant concepts. Leveraging
recent advances in prototype learning, we present a framework for part-based
interpretable image classification that learns a set of semantically
distinctive object parts for each class, and provides diverse and comprehensive
explanations. The core of our method is to learn the part-prototypes in a
non-parametric fashion, through clustering deep features extracted from
foundation vision models that encode robust semantic information. To
quantitatively evaluate the quality of explanations provided by ProtoPNets, we
introduce Distinctiveness Score and Comprehensiveness Score. Through evaluation
on CUB-200-2011, Stanford Cars and Stanford Dogs datasets, we show that our
framework compares favourably against existing ProtoPNets while achieving
better interpretability. Code is available at:
https://github.com/zijizhu/proto-non-param.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 10:46:53 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zhu",
"Zhijie",
""
],
[
"Fan",
"Lei",
""
],
[
"Pagnucco",
"Maurice",
""
],
[
"Song",
"Yang",
""
]
] | TITLE: Interpretable Image Classification via Non-parametric Part Prototype
Learning
ABSTRACT: Classifying images with an interpretable decision-making process is a
long-standing problem in computer vision. In recent years, Prototypical Part
Networks have gained traction as an approach for self-explainable neural
networks, due to their ability to mimic human visual reasoning by providing
explanations based on prototypical object parts. However, the quality of the
explanations generated by these methods leaves room for improvement, as the
prototypes usually focus on repetitive and redundant concepts. Leveraging
recent advances in prototype learning, we present a framework for part-based
interpretable image classification that learns a set of semantically
distinctive object parts for each class, and provides diverse and comprehensive
explanations. The core of our method is to learn the part-prototypes in a
non-parametric fashion, through clustering deep features extracted from
foundation vision models that encode robust semantic information. To
quantitatively evaluate the quality of explanations provided by ProtoPNets, we
introduce Distinctiveness Score and Comprehensiveness Score. Through evaluation
on CUB-200-2011, Stanford Cars and Stanford Dogs datasets, we show that our
framework compares favourably against existing ProtoPNets while achieving
better interpretability. Code is available at:
https://github.com/zijizhu/proto-non-param.
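The non-parametric step, clustering foundation-model patch features into
per-class part prototypes, can be sketched with scikit-learn; the feature
dimension and cluster count below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def part_prototypes(features: np.ndarray, n_parts: int) -> np.ndarray:
    """Non-parametric part prototypes for one class: cluster the patch-level
    deep features of that class's images; centroids act as part prototypes."""
    km = KMeans(n_clusters=n_parts, n_init=10, random_state=0).fit(features)
    return km.cluster_centers_                 # (n_parts, feat_dim)

# Toy stand-in for foundation-model patch features of a single class.
feats = np.random.randn(5000, 768)             # 5000 patches, 768-dim
protos = part_prototypes(feats, n_parts=8)
print(protos.shape)                             # (8, 768)
```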
|
2503.10256 | Yeonjin Chang | Yeonjin Chang, Erqun Dong, Seunghyeon Seo, Nojun Kwak, Kwang Moo Yi | ROODI: Reconstructing Occluded Objects with Denoising Inpainters | Project page: https://yeonjin-chang.github.io/ROODI/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While the quality of novel-view images has improved dramatically with 3D
Gaussian Splatting, extracting specific objects from scenes remains
challenging. Isolating individual 3D Gaussian primitives for each object and
handling occlusions in scenes remain far from being solved. We propose a novel
object extraction method based on two key principles: (1) being object-centric
by pruning irrelevant primitives; and (2) leveraging generative inpainting to
compensate for missing observations caused by occlusions. For pruning, we
analyze the local structure of primitives using K-nearest neighbors, and retain
only relevant ones. For inpainting, we employ an off-the-shelf diffusion-based
inpainter combined with occlusion reasoning, utilizing the 3D representation of
the entire scene. Our findings highlight the crucial synergy between pruning
and inpainting, both of which significantly enhance extraction performance. We
evaluate our method on a standard real-world dataset and introduce a synthetic
dataset for quantitative analysis. Our approach outperforms the
state-of-the-art, demonstrating its effectiveness in object extraction from
complex scenes.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:16:21 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Chang",
"Yeonjin",
""
],
[
"Dong",
"Erqun",
""
],
[
"Seo",
"Seunghyeon",
""
],
[
"Kwak",
"Nojun",
""
],
[
"Yi",
"Kwang Moo",
""
]
] | TITLE: ROODI: Reconstructing Occluded Objects with Denoising Inpainters
ABSTRACT: While the quality of novel-view images has improved dramatically with 3D
Gaussian Splatting, extracting specific objects from scenes remains
challenging. Isolating individual 3D Gaussian primitives for each object and
handling occlusions in scenes remain far from being solved. We propose a novel
object extraction method based on two key principles: (1) being object-centric
by pruning irrelevant primitives; and (2) leveraging generative inpainting to
compensate for missing observations caused by occlusions. For pruning, we
analyze the local structure of primitives using K-nearest neighbors, and retain
only relevant ones. For inpainting, we employ an off-the-shelf diffusion-based
inpainter combined with occlusion reasoning, utilizing the 3D representation of
the entire scene. Our findings highlight the crucial synergy between pruning
and inpainting, both of which significantly enhance extraction performance. We
evaluate our method on a standard real-world dataset and introduce a synthetic
dataset for quantitative analysis. Our approach outperforms the
state-of-the-art, demonstrating its effectiveness in object extraction from
complex scenes.
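A toy version of the K-nearest-neighbor pruning principle: score each Gaussian
center by local density and drop isolated primitives. This sketch only
illustrates the idea; ROODI's actual pruning criterion may differ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def prune_by_local_density(centers: np.ndarray, k: int = 8,
                           keep_ratio: float = 0.8):
    """Score each 3D Gaussian center by its mean distance to its K nearest
    neighbors and drop the most isolated primitives, which are likely
    irrelevant to the target object."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(centers)
    dists, _ = nn.kneighbors(centers)          # first column is the point itself
    score = dists[:, 1:].mean(axis=1)
    keep = score <= np.quantile(score, keep_ratio)
    return centers[keep], keep

centers = np.vstack([np.random.randn(900, 3) * 0.2,       # dense object
                     np.random.uniform(-5, 5, (100, 3))])  # scattered outliers
kept, mask = prune_by_local_density(centers)
print(mask.sum(), "of", len(centers), "primitives kept")
```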
|
2503.10257 | Zeyi Xu | Zeyi Xu, Jinfan Liu, Kuangxu Chen, Ye Chen, Zhangli Hu, Bingbing Ni | AMR-Transformer: Enabling Efficient Long-range Interaction for Complex
Neural Fluid Simulation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurately and efficiently simulating complex fluid dynamics is a challenging
task that has traditionally relied on computationally intensive methods. Neural
network-based approaches, such as convolutional and graph neural networks, have
partially alleviated this burden by enabling efficient local feature
extraction. However, they struggle to capture long-range dependencies due to
limited receptive fields, and Transformer-based models, while providing global
context, incur prohibitive computational costs. To tackle these challenges, we
propose AMR-Transformer, an efficient and accurate neural CFD-solving pipeline
that integrates a novel adaptive mesh refinement scheme with a Navier-Stokes
constraint-aware fast pruning module. This design encourages long-range
interactions between simulation cells and facilitates the modeling of global
fluid wave patterns, such as turbulence and shockwaves. Experiments show that
our approach achieves significant gains in efficiency while preserving critical
details, making it suitable for high-resolution physical simulations with
long-range dependencies. On CFDBench, PDEBench and a new shockwave dataset, our
pipeline demonstrates up to an order-of-magnitude improvement in accuracy over
baseline models. Additionally, compared to ViT, our approach achieves a
reduction in FLOPs of up to 60 times.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:16:42 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Xu",
"Zeyi",
""
],
[
"Liu",
"Jinfan",
""
],
[
"Chen",
"Kuangxu",
""
],
[
"Chen",
"Ye",
""
],
[
"Hu",
"Zhangli",
""
],
[
"Ni",
"Bingbing",
""
]
] | TITLE: AMR-Transformer: Enabling Efficient Long-range Interaction for Complex
Neural Fluid Simulation
ABSTRACT: Accurately and efficiently simulating complex fluid dynamics is a challenging
task that has traditionally relied on computationally intensive methods. Neural
network-based approaches, such as convolutional and graph neural networks, have
partially alleviated this burden by enabling efficient local feature
extraction. However, they struggle to capture long-range dependencies due to
limited receptive fields, and Transformer-based models, while providing global
context, incur prohibitive computational costs. To tackle these challenges, we
propose AMR-Transformer, an efficient and accurate neural CFD-solving pipeline
that integrates a novel adaptive mesh refinement scheme with a Navier-Stokes
constraint-aware fast pruning module. This design encourages long-range
interactions between simulation cells and facilitates the modeling of global
fluid wave patterns, such as turbulence and shockwaves. Experiments show that
our approach achieves significant gains in efficiency while preserving critical
details, making it suitable for high-resolution physical simulations with
long-range dependencies. On CFDBench, PDEBench and a new shockwave dataset, our
pipeline demonstrates up to an order-of-magnitude improvement in accuracy over
baseline models. Additionally, compared to ViT, our approach achieves a
reduction in FLOPs of up to 60 times.
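The refinement idea, splitting cells only where the field varies sharply so
that attention runs over far fewer tokens than a uniform grid, can be
illustrated with a toy 2D pass; the threshold and cell sizes are illustrative,
not the paper's scheme.

```python
import numpy as np

def refine_cells(field: np.ndarray, threshold: float):
    """Toy adaptive-mesh-refinement pass: split a coarse 2x2 cell into 4 fine
    cells only where the local gradient magnitude is large (e.g., near shock
    fronts); each cell is (row, col, size)."""
    gy, gx = np.gradient(field)
    grad = np.hypot(gx, gy)
    cells = []
    H, W = field.shape
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            if grad[i:i + 2, j:j + 2].max() > threshold:   # refine
                cells += [(i + di, j + dj, 1)
                          for di in (0, 1) for dj in (0, 1)]
            else:                                          # keep coarse cell
                cells.append((i, j, 2))
    return cells

x = np.linspace(-3, 3, 64)
field = np.tanh(10 * x)[None, :].repeat(64, axis=0)  # sharp front at x = 0
print(len(refine_cells(field, threshold=0.5)), "cells vs", 64 * 64, "uniform")
```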
|
2503.10259 | Yunpeng Qu | Yunpeng Qu, Kun Yuan, Qizhi Xie, Ming Sun, Chao Zhou, Jian Wang | KVQ: Boosting Video Quality Assessment via Saliency-guided Local
Perception | 11 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video Quality Assessment (VQA), which intends to predict the perceptual
quality of videos, has attracted increasing attention. Due to factors like
motion blur or specific distortions, the quality of different regions in a
video varies. Recognizing the region-wise local quality within a video is
beneficial for assessing global quality and can guide us in adopting
fine-grained enhancement or transcoding strategies. Because annotating
region-wise quality is costly, relevant datasets lack ground-truth constraints,
which further complicates the utilization of local perception.
Inspired by the Human Visual System (HVS) that links global quality to the
local texture of different regions and their visual saliency, we propose a
Kaleidoscope Video Quality Assessment (KVQ) framework, which aims to
effectively assess both saliency and local texture, thereby facilitating the
assessment of global quality. Our framework extracts visual saliency and
allocates attention using Fusion-Window Attention (FWA) while incorporating a
Local Perception Constraint (LPC) to mitigate the reliance of regional texture
perception on neighboring areas. KVQ obtains significant improvements across
multiple scenarios on five VQA benchmarks compared to SOTA methods.
Furthermore, to assess local perception, we establish a new Local Perception
Visual Quality (LPVQ) dataset with region-wise annotations. Experimental
results demonstrate the capability of KVQ in perceiving local distortions. KVQ
models and the LPVQ dataset will be available at
https://github.com/qyp2000/KVQ.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:16:58 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Qu",
"Yunpeng",
""
],
[
"Yuan",
"Kun",
""
],
[
"Xie",
"Qizhi",
""
],
[
"Sun",
"Ming",
""
],
[
"Zhou",
"Chao",
""
],
[
"Wang",
"Jian",
""
]
] | TITLE: KVQ: Boosting Video Quality Assessment via Saliency-guided Local
Perception
ABSTRACT: Video Quality Assessment (VQA), which intends to predict the perceptual
quality of videos, has attracted increasing attention. Due to factors like
motion blur or specific distortions, the quality of different regions in a
video varies. Recognizing the region-wise local quality within a video is
beneficial for assessing global quality and can guide us in adopting
fine-grained enhancement or transcoding strategies. Because annotating
region-wise quality is costly, relevant datasets lack ground-truth constraints,
which further complicates the utilization of local perception.
Inspired by the Human Visual System (HVS) that links global quality to the
local texture of different regions and their visual saliency, we propose a
Kaleidoscope Video Quality Assessment (KVQ) framework, which aims to
effectively assess both saliency and local texture, thereby facilitating the
assessment of global quality. Our framework extracts visual saliency and
allocates attention using Fusion-Window Attention (FWA) while incorporating a
Local Perception Constraint (LPC) to mitigate the reliance of regional texture
perception on neighboring areas. KVQ obtains significant improvements across
multiple scenarios on five VQA benchmarks compared to SOTA methods.
Furthermore, to assess local perception, we establish a new Local Perception
Visual Quality (LPVQ) dataset with region-wise annotations. Experimental
results demonstrate the capability of KVQ in perceiving local distortions. KVQ
models and the LPVQ dataset will be available at
https://github.com/qyp2000/KVQ.
|
2503.10265 | Chang Han Low | Chang Han Low, Ziyue Wang, Tianyi Zhang, Zhitao Zeng, Zhu Zhuo,
Evangelos B. Mazomenos, Yueming Jin | SurgRAW: Multi-Agent Workflow with Chain-of-Thought Reasoning for
Surgical Intelligence | null | null | null | null | cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Integration of Vision-Language Models (VLMs) in surgical intelligence is
hindered by hallucinations, domain knowledge gaps, and limited understanding of
task interdependencies within surgical scenes, undermining clinical
reliability. While recent VLMs demonstrate strong general reasoning and
thinking capabilities, they still lack the domain expertise and task-awareness
required for precise surgical scene interpretation. Although Chain-of-Thought
(CoT) can structure reasoning more effectively, current approaches rely on
self-generated CoT steps, which often exacerbate inherent domain gaps and
hallucinations. To overcome this, we present SurgRAW, a CoT-driven multi-agent
framework that delivers transparent, interpretable insights for most tasks in
robotic-assisted surgery. By employing specialized CoT prompts across five
tasks: instrument recognition, action recognition, action prediction, patient
data extraction, and outcome assessment, SurgRAW mitigates hallucinations
through structured, domain-aware reasoning. Retrieval-Augmented Generation
(RAG) is also integrated to incorporate external medical knowledge to bridge domain gaps
and improve response reliability. Most importantly, a hierarchical agentic
system ensures that CoT-embedded VLM agents collaborate effectively while
understanding task interdependencies, while a panel discussion mechanism
promotes logical consistency. To evaluate our method, we introduce
SurgCoTBench, the first reasoning-based dataset with structured frame-level
annotations. With comprehensive experiments, we demonstrate the effectiveness
of the proposed SurgRAW, with a 29.32% accuracy improvement over baseline VLMs
on 12 robotic procedures, achieving state-of-the-art performance and advancing
explainable, trustworthy, and autonomous surgical assistance.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:23:13 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Low",
"Chang Han",
""
],
[
"Wang",
"Ziyue",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Zeng",
"Zhitao",
""
],
[
"Zhuo",
"Zhu",
""
],
[
"Mazomenos",
"Evangelos B.",
""
],
[
"Jin",
"Yueming",
""
]
] | TITLE: SurgRAW: Multi-Agent Workflow with Chain-of-Thought Reasoning for
Surgical Intelligence
ABSTRACT: Integration of Vision-Language Models (VLMs) in surgical intelligence is
hindered by hallucinations, domain knowledge gaps, and limited understanding of
task interdependencies within surgical scenes, undermining clinical
reliability. While recent VLMs demonstrate strong general reasoning and
thinking capabilities, they still lack the domain expertise and task-awareness
required for precise surgical scene interpretation. Although Chain-of-Thought
(CoT) can structure reasoning more effectively, current approaches rely on
self-generated CoT steps, which often exacerbate inherent domain gaps and
hallucinations. To overcome this, we present SurgRAW, a CoT-driven multi-agent
framework that delivers transparent, interpretable insights for most tasks in
robotic-assisted surgery. By employing specialized CoT prompts across five
tasks: instrument recognition, action recognition, action prediction, patient
data extraction, and outcome assessment, SurgRAW mitigates hallucinations
through structured, domain-aware reasoning. Retrieval-Augmented Generation
(RAG) is also integrated to incorporate external medical knowledge to bridge domain gaps
and improve response reliability. Most importantly, a hierarchical agentic
system ensures that CoT-embedded VLM agents collaborate effectively while
understanding task interdependencies, while a panel discussion mechanism
promotes logical consistency. To evaluate our method, we introduce
SurgCoTBench, the first reasoning-based dataset with structured frame-level
annotations. With comprehensive experiments, we demonstrate the effectiveness
of the proposed SurgRAW, with a 29.32% accuracy improvement over baseline VLMs
on 12 robotic procedures, achieving state-of-the-art performance and advancing
explainable, trustworthy, and autonomous surgical assistance.
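A minimal orchestration sketch of the hierarchical, CoT-prompted, RAG-assisted
agent flow described above; `llm` and `retrieve` are hypothetical callables,
and the prompts are editorial stand-ins for the paper's task-specific CoT
prompts.

```python
# The five tasks named in the abstract, each handled by a CoT-prompted agent.
TASKS = ["instrument recognition", "action recognition", "action prediction",
         "patient data extraction", "outcome assessment"]

def run_surgical_workflow(llm, retrieve, frame_description: str) -> dict:
    """llm: prompt -> str (hypothetical chat callable).
    retrieve: (task, scene) -> str (hypothetical RAG lookup of medical
    knowledge)."""
    answers = {}
    for task in TASKS:
        context = retrieve(task, frame_description)
        prompt = (f"Task: {task}.\nScene: {frame_description}\n"
                  f"Reference knowledge: {context}\n"
                  "Reason step by step, then give a final answer.")
        answers[task] = llm(prompt)
    # Panel discussion: a consistency check across task outputs.
    panel = "\n".join(f"{t}: {a}" for t, a in answers.items())
    answers["panel_review"] = llm(
        "Review these task outputs for logical consistency and resolve any "
        f"contradictions:\n{panel}")
    return answers
```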
|
2503.10269 | Wassim Bouaziz | Wassim Bouaziz, El-Mahdi El-Mhamdi, Nicolas Usunier | Targeted Data Poisoning for Black-Box Audio Datasets Ownership
Verification | Published at ICASSP 2025, 5 pages, 7 figures | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Protecting the use of audio datasets is a major concern for data owners,
particularly with the recent rise of audio deep learning models. While
watermarks can be used to protect the data itself, they do not make it possible
to identify a deep learning model trained on a protected dataset. In this paper,
we adapt to audio data the recently introduced data taggants approach. Data
taggants is a method to verify if a neural network was trained on a protected
image dataset with only top-$k$ prediction access to the model. This method
relies on a targeted data poisoning scheme by discreetly altering a small
fraction (1%) of the dataset so as to induce a harmless behavior on
out-of-distribution data called keys. We evaluate our method on the
Speechcommands and ESC50 datasets and state-of-the-art transformer models,
and show that we can detect the use of the dataset with high confidence without
loss of performance. We also show the robustness of our method against common
data augmentation techniques, making it a practical method to protect audio
datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:25:25 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Bouaziz",
"Wassim",
""
],
[
"El-Mhamdi",
"El-Mahdi",
""
],
[
"Usunier",
"Nicolas",
""
]
] | TITLE: Targeted Data Poisoning for Black-Box Audio Datasets Ownership
Verification
ABSTRACT: Protecting the use of audio datasets is a major concern for data owners,
particularly with the recent rise of audio deep learning models. While
watermarks can be used to protect the data itself, they do not make it possible
to identify a deep learning model trained on a protected dataset. In this paper,
we adapt to audio data the recently introduced data taggants approach. Data
taggants is a method to verify if a neural network was trained on a protected
image dataset with only top-$k$ prediction access to the model. This method
relies on a targeted data poisoning scheme by discreetly altering a small
fraction (1%) of the dataset so as to induce a harmless behavior on
out-of-distribution data called keys. We evaluate our method on the
Speechcommands and ESC50 datasets and state-of-the-art transformer models,
and show that we can detect the use of the dataset with high confidence without
loss of performance. We also show the robustness of our method against common
data augmentation techniques, making it a practical method to protect audio
datasets.
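The 1%-poisoning mechanic can be illustrated on toy audio arrays: perturb a
small fraction of clips with a low-amplitude pattern. The real taggants are
optimized perturbations tied to secret keys; the fixed sinusoid below is only
a stand-in.

```python
import numpy as np

def add_taggants(waveforms: np.ndarray, rng, frac: float = 0.01,
                 eps: float = 1e-3):
    """Imperceptibly perturb ~frac of the training clips. In the real method
    the perturbation is optimized so the trained model reacts predictably to
    secret key inputs; here it is a fixed low-amplitude sinusoid."""
    n = len(waveforms)
    idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    t = np.linspace(0, 1, waveforms.shape[1])
    pattern = eps * np.sin(2 * np.pi * 17 * t)
    poisoned = waveforms.copy()
    poisoned[idx] += pattern                   # harmless, low-amplitude taggant
    return poisoned, idx

clips = np.random.randn(1000, 16000) * 0.1     # 1000 one-second clips @ 16 kHz
poisoned, idx = add_taggants(clips, np.random.default_rng(0))
print(f"{len(idx)} of {len(clips)} clips tagged")
```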
|
2503.10284 | Zhen Zhang | Zhen Zhang, Meihan Liu, Bingsheng He | PyGDA: A Python Library for Graph Domain Adaptation | Under Review | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph domain adaptation has emerged as a promising approach to facilitate
knowledge transfer across different domains. Recently, numerous models have
been proposed to enhance their generalization capabilities in this field.
However, there is still no unified library that brings together existing
techniques and simplifies their implementation. To fill this gap, we introduce
PyGDA, an open-source Python library tailored for graph domain adaptation. As
the first comprehensive library in this area, PyGDA covers more than 20 widely
used graph domain adaptation methods together with different types of graph
datasets. Specifically, PyGDA offers modular components, enabling users to
seamlessly build custom models with a variety of commonly used utility
functions. To handle large-scale graphs, PyGDA includes support for features
such as sampling and mini-batch processing, ensuring efficient computation. In
addition, PyGDA also includes comprehensive performance benchmarks and
a well-documented, user-friendly API for both researchers and practitioners. To
foster convenient accessibility, PyGDA is released under the MIT license at
https://github.com/pygda-team/pygda, and the API documentation is
https://pygda.readthedocs.io/en/stable/.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:52:23 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zhang",
"Zhen",
""
],
[
"Liu",
"Meihan",
""
],
[
"He",
"Bingsheng",
""
]
] | TITLE: PyGDA: A Python Library for Graph Domain Adaptation
ABSTRACT: Graph domain adaptation has emerged as a promising approach to facilitate
knowledge transfer across different domains. Recently, numerous models have
been proposed to enhance their generalization capabilities in this field.
However, there is still no unified library that brings together existing
techniques and simplifies their implementation. To fill this gap, we introduce
PyGDA, an open-source Python library tailored for graph domain adaptation. As
the first comprehensive library in this area, PyGDA covers more than 20 widely
used graph domain adaptation methods together with different types of graph
datasets. Specifically, PyGDA offers modular components, enabling users to
seamlessly build custom models with a variety of commonly used utility
functions. To handle large-scale graphs, PyGDA includes support for features
such as sampling and mini-batch processing, ensuring efficient computation. In
addition, PyGDA also includes comprehensive performance benchmarks and
a well-documented, user-friendly API for both researchers and practitioners. To
foster convenient accessibility, PyGDA is released under the MIT license at
https://github.com/pygda-team/pygda, and the API documentation is
https://pygda.readthedocs.io/en/stable/.
|
2503.10286 | Zhiqi Li | Zhiqi Li, Chengrui Dong, Yiming Chen, Zhangchi Huang, Peidong Liu | VicaSplat: A Single Run is All You Need for 3D Gaussian Splatting and
Camera Estimation from Unposed Video Frames | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present VicaSplat, a novel framework for joint 3D Gaussians reconstruction
and camera pose estimation from a sequence of unposed video frames, which is a
critical yet underexplored task in real-world 3D applications. The core of our
method lies in a novel transformer-based network architecture. In particular,
our model starts with an image encoder that maps each image to a list of visual
tokens. All visual tokens are concatenated with additional inserted learnable
camera tokens. The obtained tokens then fully communicate with each other
within a tailored transformer decoder. The camera tokens causally aggregate
features from visual tokens of different views, and further modulate them
frame-wise to inject view-dependent features. 3D Gaussian splats and camera
pose parameters can then be estimated via different prediction heads.
Experiments show that VicaSplat surpasses baseline methods for multi-view
inputs, and achieves comparable performance to prior two-view approaches.
Remarkably, VicaSplat also demonstrates exceptional cross-dataset
generalization capability on the ScanNet benchmark, achieving superior
performance without any fine-tuning. Project page:
https://lizhiqi49.github.io/VicaSplat.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:56:05 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Zhiqi",
""
],
[
"Dong",
"Chengrui",
""
],
[
"Chen",
"Yiming",
""
],
[
"Huang",
"Zhangchi",
""
],
[
"Liu",
"Peidong",
""
]
] | TITLE: VicaSplat: A Single Run is All You Need for 3D Gaussian Splatting and
Camera Estimation from Unposed Video Frames
ABSTRACT: We present VicaSplat, a novel framework for joint 3D Gaussians reconstruction
and camera pose estimation from a sequence of unposed video frames, which is a
critical yet underexplored task in real-world 3D applications. The core of our
method lies in a novel transformer-based network architecture. In particular,
our model starts with an image encoder that maps each image to a list of visual
tokens. All visual tokens are concatenated with additional inserted learnable
camera tokens. The obtained tokens then fully communicate with each other
within a tailored transformer decoder. The camera tokens causally aggregate
features from visual tokens of different views, and further modulate them
frame-wise to inject view-dependent features. 3D Gaussian splats and camera
pose parameters can then be estimated via different prediction heads.
Experiments show that VicaSplat surpasses baseline methods for multi-view
inputs, and achieves comparable performance to prior two-view approaches.
Remarkably, VicaSplat also demonstrates exceptional cross-dataset
generalization capability on the ScanNet benchmark, achieving superior
performance without any fine-tuning. Project page:
https://lizhiqi49.github.io/VicaSplat.
|
2503.10287 | Hao Zhou | Hao Zhou, Xiaobao Guo, Yuzhe Zhu, Adams Wai-Kin Kong | MACS: Multi-source Audio-to-image Generation with Contextual
Significance and Semantic Alignment | null | null | null | null | cs.SD cs.CV cs.GR eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Propelled by the breakthrough in deep generative models, audio-to-image
generation has emerged as a pivotal cross-modal task that converts complex
auditory signals into rich visual representations. However, previous works only
focus on single-source audio inputs for image generation, ignoring the
multi-source characteristic in natural auditory scenes, thus limiting the
performance in generating comprehensive visual content. To bridge this gap, a
method called MACS is proposed to conduct multi-source audio-to-image
generation. This is the first work that explicitly separates multi-source audio
to capture the rich audio components before image generation. MACS is a
two-stage method. In the first stage, multi-source audio inputs are separated
by a weakly supervised method, where the audio and text labels are semantically
aligned by casting into a common space using the large pre-trained CLAP model.
We introduce a ranking loss to consider the contextual significance of the
separated audio signals. In the second stage, efficient image generation is
achieved by mapping the separated audio signals to the generation condition
using only a trainable adapter and an MLP layer. We preprocess the LLP dataset
as the first full multi-source audio-to-image generation benchmark. The
experiments are conducted on multi-source, mixed-source, and single-source
audio-to-image generation tasks. The proposed MACS outperforms the current
state-of-the-art methods in 17 of the 21 evaluation indexes on all tasks and
delivers superior visual quality. The code will be publicly available.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 11:56:25 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zhou",
"Hao",
""
],
[
"Guo",
"Xiaobao",
""
],
[
"Zhu",
"Yuzhe",
""
],
[
"Kong",
"Adams Wai-Kin",
""
]
] | TITLE: MACS: Multi-source Audio-to-image Generation with Contextual
Significance and Semantic Alignment
ABSTRACT: Propelled by the breakthrough in deep generative models, audio-to-image
generation has emerged as a pivotal cross-modal task that converts complex
auditory signals into rich visual representations. However, previous works only
focus on single-source audio inputs for image generation, ignoring the
multi-source characteristic in natural auditory scenes, thus limiting the
performance in generating comprehensive visual content. To bridge this gap, a
method called MACS is proposed to conduct multi-source audio-to-image
generation. This is the first work that explicitly separates multi-source audio
to capture the rich audio components before image generation. MACS is a
two-stage method. In the first stage, multi-source audio inputs are separated
by a weakly supervised method, where the audio and text labels are semantically
aligned by casting into a common space using the large pre-trained CLAP model.
We introduce a ranking loss to consider the contextual significance of the
separated audio signals. In the second stage, efficient image generation is
achieved by mapping the separated audio signals to the generation condition
using only a trainable adapter and an MLP layer. We preprocess the LLP dataset
as the first full multi-source audio-to-image generation benchmark. The
experiments are conducted on multi-source, mixed-source, and single-source
audio-to-image generation tasks. The proposed MACS outperforms the current
state-of-the-art methods in 17 of the 21 evaluation indexes on all tasks and
delivers superior visual quality. The code will be publicly available.
|
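The ranking loss mentioned in the MACS abstract above, which orders separated sources by contextual significance, can be illustrated with a standard margin-based formulation. The following is a minimal PyTorch sketch assuming a generic pairwise objective; the tensor layout and margin value are illustrative, not the paper's exact design.

import torch
import torch.nn.functional as F

def pairwise_ranking_loss(scores, ranks, margin=0.2):
    # scores: (B, K) predicted significance for K separated audio sources
    # ranks:  (B, K) ground-truth contextual ranks (lower rank = more significant)
    total, pairs = scores.new_zeros(()), 0
    K = scores.shape[1]
    for i in range(K):
        for j in range(i + 1, K):
            # target is +1 when source i should outrank source j, else -1
            target = (ranks[:, i] < ranks[:, j]).float() * 2 - 1
            total = total + F.margin_ranking_loss(
                scores[:, i], scores[:, j], target, margin=margin)
            pairs += 1
    return total / max(pairs, 1)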
2503.10291 | Weiyun Wang | Weiyun Wang, Zhangwei Gao, Lianjie Chen, Zhe Chen, Jinguo Zhu, Xiangyu
Zhao, Yangzhou Liu, Yue Cao, Shenglong Ye, Xizhou Zhu, Lewei Lu, Haodong
Duan, Yu Qiao, Jifeng Dai, Wenhai Wang | VisualPRM: An Effective Process Reward Model for Multimodal Reasoning | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce VisualPRM, an advanced multimodal Process Reward Model (PRM)
with 8B parameters, which improves the reasoning abilities of existing
Multimodal Large Language Models (MLLMs) across different model scales and
families with Best-of-N (BoN) evaluation strategies. Specifically, our model
improves the reasoning performance of three types of MLLMs and four different
model scales. Even when applied to the highly capable InternVL2.5-78B, it
achieves a 5.9-point improvement across seven multimodal reasoning benchmarks.
Experimental results show that our model exhibits superior performance compared
to Outcome Reward Models and Self-Consistency during BoN evaluation. To
facilitate the training of multimodal PRMs, we construct a multimodal process
supervision dataset VisualPRM400K using an automated data pipeline. For the
evaluation of multimodal PRMs, we propose VisualProcessBench, a benchmark with
human-annotated step-wise correctness labels, to measure the abilities of PRMs
to detect erroneous steps in multimodal reasoning tasks. We hope that our work
can inspire more future research and contribute to the development of MLLMs.
Our model, data, and benchmark are released at
https://internvl.github.io/blog/2025-03-13-VisualPRM/.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:03:37 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Weiyun",
""
],
[
"Gao",
"Zhangwei",
""
],
[
"Chen",
"Lianjie",
""
],
[
"Chen",
"Zhe",
""
],
[
"Zhu",
"Jinguo",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Liu",
"Yangzhou",
""
],
[
"Cao",
"Yue"... | TITLE: VisualPRM: An Effective Process Reward Model for Multimodal Reasoning
ABSTRACT: We introduce VisualPRM, an advanced multimodal Process Reward Model (PRM)
with 8B parameters, which improves the reasoning abilities of existing
Multimodal Large Language Models (MLLMs) across different model scales and
families with Best-of-N (BoN) evaluation strategies. Specifically, our model
improves the reasoning performance of three types of MLLMs and four different
model scales. Even when applied to the highly capable InternVL2.5-78B, it
achieves a 5.9-point improvement across seven multimodal reasoning benchmarks.
Experimental results show that our model exhibits superior performance compared
to Outcome Reward Models and Self-Consistency during BoN evaluation. To
facilitate the training of multimodal PRMs, we construct a multimodal process
supervision dataset VisualPRM400K using an automated data pipeline. For the
evaluation of multimodal PRMs, we propose VisualProcessBench, a benchmark with
human-annotated step-wise correctness labels, to measure the abilities of PRMs
to detect erroneous steps in multimodal reasoning tasks. We hope that our work
can inspire more future research and contribute to the development of MLLMs.
Our model, data, and benchmark are released at
https://internvl.github.io/blog/2025-03-13-VisualPRM/.
|
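Best-of-N evaluation with a process reward model, as used by VisualPRM above, amounts to sampling N reasoning chains, scoring each intermediate step with the PRM, aggregating the step scores, and keeping the best chain. The sketch below assumes hypothetical policy.sample and prm.score_steps interfaces and mean aggregation; it illustrates the selection logic only.

def best_of_n(policy, prm, question, n=8):
    # sample n candidate reasoning chains from the MLLM (hypothetical API)
    candidates = [policy.sample(question) for _ in range(n)]

    def chain_score(chain):
        # one correctness score per intermediate step (hypothetical API)
        step_scores = prm.score_steps(question, chain)
        return sum(step_scores) / len(step_scores)  # mean over steps, one common choice

    # keep the chain the PRM rates highest
    return max(candidates, key=chain_score)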
2503.10301 | Moreno La Quatra | Moreno La Quatra, Juan Rafael Orozco-Arroyave, Marco Sabato
Siniscalchi | Bilingual Dual-Head Deep Model for Parkinson's Disease Detection from
Speech | Accepted at ICASSP 2025 - Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses | null | 10.1109/ICASSP49660.2025.10889445 | null | eess.AS cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work aims to tackle the Parkinson's disease (PD) detection problem from
the speech signal in a bilingual setting by proposing an ad-hoc dual-head deep
neural architecture for type-based binary classification. One head is
specialized for diadochokinetic patterns. The other head looks for natural
speech patterns present in continuous spoken utterances. Only one of the two
heads is operative according to the nature of the input. Speech
representations are extracted from self-supervised learning (SSL) models and
wavelet transforms. Adaptive layers, convolutional bottlenecks, and contrastive
learning are exploited to reduce variations across languages. Our solution is
assessed against two distinct datasets, EWA-DB and PC-GITA, which cover the Slovak
and Spanish languages, respectively. Results indicate that conventional models
trained on a single language dataset struggle with cross-linguistic
generalization, and naive combinations of datasets are suboptimal. In contrast,
our model improves generalization on both languages simultaneously.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:23:11 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"La Quatra",
"Moreno",
""
],
[
"Orozco-Arroyave",
"Juan Rafael",
""
],
[
"Siniscalchi",
"Marco Sabato",
""
]
] | TITLE: Bilingual Dual-Head Deep Model for Parkinson's Disease Detection from
Speech
ABSTRACT: This work aims to tackle the Parkinson's disease (PD) detection problem from
the speech signal in a bilingual setting by proposing an ad-hoc dual-head deep
neural architecture for type-based binary classification. One head is
specialized for diadochokinetic patterns. The other head looks for natural
speech patterns present in continuous spoken utterances. Only one of the two
heads is operative according to the nature of the input. Speech
representations are extracted from self-supervised learning (SSL) models and
wavelet transforms. Adaptive layers, convolutional bottlenecks, and contrastive
learning are exploited to reduce variations across languages. Our solution is
assessed against two distinct datasets, EWA-DB and PC-GITA, which cover the Slovak
and Spanish languages, respectively. Results indicate that conventional models
trained on a single language dataset struggle with cross-linguistic
generalization, and naive combinations of datasets are suboptimal. In contrast,
our model improves generalization on both languages simultaneously.
|
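The dual-head routing described above, where only one head is operative depending on whether the input is a diadochokinetic exercise or continuous speech, can be sketched as a shared adapter with two type-specific heads. This is a minimal PyTorch illustration with assumed feature and hidden sizes, not the authors' architecture.

import torch
import torch.nn as nn

class DualHeadClassifier(nn.Module):
    # shared adapter over SSL features, with one head per input type
    def __init__(self, feat_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.adapter = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.ddk_head = nn.Linear(hidden, n_classes)     # diadochokinetic inputs
        self.speech_head = nn.Linear(hidden, n_classes)  # continuous speech inputs

    def forward(self, feats, is_ddk):
        # feats: (B, feat_dim); is_ddk: (B,) bool flag selecting the active head
        h = self.adapter(feats)
        return torch.where(is_ddk.unsqueeze(-1), self.ddk_head(h), self.speech_head(h))

logits = DualHeadClassifier()(torch.randn(4, 768), torch.tensor([True, False, True, False]))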
2503.10305 | Emil Mededovic | Emil Mededovic, Yuli Wu, Henning Konermann, Marcin Kopaczka, Mareike
Schulz, Rene Tolba, Johannes Stegmaier | Eye on the Target: Eye Tracking Meets Rodent Tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing animal behavior from video recordings is crucial for scientific
research, yet manual annotation remains labor-intensive and prone to
subjectivity. Efficient segmentation methods are needed to automate this
process while maintaining high accuracy. In this work, we propose a novel
pipeline that utilizes eye-tracking data from Aria glasses to generate prompt
points, which are then used to produce segmentation masks via a fast zero-shot
segmentation model. Additionally, we apply post-processing to refine the
prompts, leading to improved segmentation quality. Through our approach, we
demonstrate that combining eye-tracking-based annotation with smart prompt
refinement can enhance segmentation accuracy, achieving an improvement of 70.6%
from 38.8 to 66.2 in the Jaccard Index for segmentation on the rats dataset.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:27:42 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Mededovic",
"Emil",
""
],
[
"Wu",
"Yuli",
""
],
[
"Konermann",
"Henning",
""
],
[
"Kopaczka",
"Marcin",
""
],
[
"Schulz",
"Mareike",
""
],
[
"Tolba",
"Rene",
""
],
[
"Stegmaier",
"Johannes",
""
]
] | TITLE: Eye on the Target: Eye Tracking Meets Rodent Tracking
ABSTRACT: Analyzing animal behavior from video recordings is crucial for scientific
research, yet manual annotation remains labor-intensive and prone to
subjectivity. Efficient segmentation methods are needed to automate this
process while maintaining high accuracy. In this work, we propose a novel
pipeline that utilizes eye-tracking data from Aria glasses to generate prompt
points, which are then used to produce segmentation masks via a fast zero-shot
segmentation model. Additionally, we apply post-processing to refine the
prompts, leading to improved segmentation quality. Through our approach, we
demonstrate that combining eye-tracking-based annotation with smart prompt
refinement can enhance segmentation accuracy, achieving an improvement of 70.6%
from 38.8 to 66.2 in the Jaccard Index for segmentation on the rats dataset.
|
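The 70.6% figure above is the relative gain implied by the absolute Jaccard numbers: (66.2 - 38.8) / 38.8 is about 0.706. A minimal mask-level Jaccard computation for checking such scores:

import numpy as np

def jaccard(mask_a, mask_b):
    # intersection-over-union of two boolean segmentation masks
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

print((66.2 - 38.8) / 38.8)  # ~0.706, the 70.6% relative improvement quoted above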
2503.10307 | Martin C\'ifka | Georgy Ponimatkin, Martin C\'ifka, Tom\'a\v{s} Sou\v{c}ek, M\'ed\'eric
Fourmy, Yann Labb\'e, Vladimir Petrik, Josef Sivic | 6D Object Pose Tracking in Internet Videos for Robotic Manipulation | Accepted to ICLR 2025. Project page available at
https://ponimatkin.github.io/wildpose/ | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | We seek to extract a temporally consistent 6D pose trajectory of a
manipulated object from an Internet instructional video. This is a challenging
set-up for current 6D pose estimation methods due to uncontrolled capturing
conditions, subtle but dynamic object motions, and the fact that the exact mesh
of the manipulated object is not known. To address these challenges, we present
the following contributions. First, we develop a new method that estimates the
6D pose of any object in the input image without prior knowledge of the object
itself. The method proceeds by (i) retrieving a CAD model similar to the
depicted object from a large-scale model database, (ii) 6D aligning the
retrieved CAD model with the input image, and (iii) grounding the absolute
scale of the object with respect to the scene. Second, we extract smooth 6D
object trajectories from Internet videos by carefully tracking the detected
objects across video frames. The extracted object trajectories are then
retargeted via trajectory optimization into the configuration space of a
robotic manipulator. Third, we thoroughly evaluate and ablate our 6D pose
estimation method on YCB-V and HOPE-Video datasets as well as a new dataset of
instructional videos manually annotated with approximate 6D object
trajectories. We demonstrate significant improvements over existing
state-of-the-art RGB 6D pose estimation methods. Finally, we show that the 6D
object motion estimated from Internet videos can be transferred to a 7-axis
robotic manipulator both in a virtual simulator and in a real-world set-up. We
also successfully apply our method to egocentric videos taken from
the EPIC-KITCHENS dataset, demonstrating potential for Embodied AI
applications.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:33:34 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Ponimatkin",
"Georgy",
""
],
[
"Cífka",
"Martin",
""
],
[
"Souček",
"Tomáš",
""
],
[
"Fourmy",
"Médéric",
""
],
[
"Labbé",
"Yann",
""
],
[
"Petrik",
"Vladimir",
""
],
[
"Sivic",
"Josef",
""
]
] | TITLE: 6D Object Pose Tracking in Internet Videos for Robotic Manipulation
ABSTRACT: We seek to extract a temporally consistent 6D pose trajectory of a
manipulated object from an Internet instructional video. This is a challenging
set-up for current 6D pose estimation methods due to uncontrolled capturing
conditions, subtle but dynamic object motions, and the fact that the exact mesh
of the manipulated object is not known. To address these challenges, we present
the following contributions. First, we develop a new method that estimates the
6D pose of any object in the input image without prior knowledge of the object
itself. The method proceeds by (i) retrieving a CAD model similar to the
depicted object from a large-scale model database, (ii) 6D aligning the
retrieved CAD model with the input image, and (iii) grounding the absolute
scale of the object with respect to the scene. Second, we extract smooth 6D
object trajectories from Internet videos by carefully tracking the detected
objects across video frames. The extracted object trajectories are then
retargeted via trajectory optimization into the configuration space of a
robotic manipulator. Third, we thoroughly evaluate and ablate our 6D pose
estimation method on YCB-V and HOPE-Video datasets as well as a new dataset of
instructional videos manually annotated with approximate 6D object
trajectories. We demonstrate significant improvements over existing
state-of-the-art RGB 6D pose estimation methods. Finally, we show that the 6D
object motion estimated from Internet videos can be transferred to a 7-axis
robotic manipulator both in a virtual simulator and in a real-world set-up. We
also successfully apply our method to egocentric videos taken from
the EPIC-KITCHENS dataset, demonstrating potential for Embodied AI
applications.
|
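Extracting a smooth rotation track from noisy per-frame 6D estimates is commonly done with spherical interpolation between keyframe poses; the SciPy snippet below is a generic illustration of that step, not the paper's trajectory optimization, and the angles are made-up example data.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

t = np.array([0.0, 1.0, 2.0, 3.0])  # keyframe timestamps
rots = Rotation.from_euler("xyz", [[0, 0, 0], [5, 1, 0], [9, 2, 1], [15, 3, 1]], degrees=True)
slerp = Slerp(t, rots)                      # spherical linear interpolation between keyframes
track = slerp(np.linspace(0.0, 3.0, 31))    # densely resampled, temporally smooth rotations
print(track.as_euler("xyz", degrees=True)[:3])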
2503.10322 | Haoxuan Li | Haoxuan Li, Sixu Yan, Yuhan Li, Xinggang Wang | Towards Fast, Memory-based and Data-Efficient Vision-Language Policy | 11 pages, 7 figures, 6 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Language Models (VLMs) pretrained on Internet-scale vision-language
data have demonstrated the potential to transfer their knowledge to robotic
learning. However, the existing paradigm encounters three critical challenges:
(1) expensive inference cost resulting from large-scale model parameters, (2)
frequent domain shifts caused by mismatched data modalities, and (3) limited
capacity to handle past or future experiences. In this work, we propose
LiteVLP, a lightweight, memory-based, and general-purpose vision-language
policy generation model. LiteVLP is built upon a pre-trained 1B-parameter VLM
and fine-tuned on a tiny-scale and conversation-style robotic dataset. Through
extensive experiments, we demonstrate that LiteVLP outperforms state-of-the-art
vision-language policies on VIMA-Bench with minimal training time. Furthermore,
LiteVLP exhibits superior inference speed while maintaining exceptionally high
accuracy. In long-horizon manipulation tasks, LiteVLP also shows remarkable
memory ability, outperforming the best-performing baseline model by 18.8%.
These results highlight LiteVLP as a promising model for integrating the
intelligence of VLMs into robotic learning.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 12:58:40 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Haoxuan",
""
],
[
"Yan",
"Sixu",
""
],
[
"Li",
"Yuhan",
""
],
[
"Wang",
"Xinggang",
""
]
] | TITLE: Towards Fast, Memory-based and Data-Efficient Vision-Language Policy
ABSTRACT: Vision Language Models (VLMs) pretrained on Internet-scale vision-language
data have demonstrated the potential to transfer their knowledge to robotic
learning. However, the existing paradigm encounters three critical challenges:
(1) expensive inference cost resulting from large-scale model parameters, (2)
frequent domain shifts caused by mismatched data modalities, and (3) limited
capacity to handle past or future experiences. In this work, we propose
LiteVLP, a lightweight, memory-based, and general-purpose vision-language
policy generation model. LiteVLP is built upon a pre-trained 1B-parameter VLM
and fine-tuned on a tiny-scale and conversation-style robotic dataset. Through
extensive experiments, we demonstrate that LiteVLP outperforms state-of-the-art
vision-language policies on VIMA-Bench with minimal training time. Furthermore,
LiteVLP exhibits superior inference speed while maintaining exceptionally high
accuracy. In long-horizon manipulation tasks, LiteVLP also shows remarkable
memory ability, outperforming the best-performing baseline model by 18.8%.
These results highlight LiteVLP as a promising model for integrating the
intelligence of VLMs into robotic learning.
|
2503.10331 | Maxim Popov | Maxim Popov, Regina Kurkova, Mikhail Iumanov, Jaafar Mahmoud, Sergey
Kolyubin | OSMa-Bench: Evaluating Open Semantic Mapping Under Varying Lighting
Conditions | Project page: https://be2rlab.github.io/OSMa-Bench/ | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open Semantic Mapping (OSM) is a key technology in robotic perception,
combining semantic segmentation and SLAM techniques. This paper introduces a
dynamically configurable and highly automated LLM/LVLM-powered pipeline for
evaluating OSM solutions called OSMa-Bench (Open Semantic Mapping Benchmark).
The study focuses on evaluating state-of-the-art semantic mapping algorithms
under varying indoor lighting conditions, a critical challenge in indoor
environments. We introduce a novel dataset with simulated RGB-D sequences and
ground truth 3D reconstructions, facilitating the rigorous analysis of mapping
performance across different lighting conditions. Through experiments on
leading models such as ConceptGraphs, BBQ and OpenScene, we evaluate the
semantic fidelity of object recognition and segmentation. Additionally, we
introduce a Scene Graph evaluation method to analyze the ability of models to
interpret semantic structure. The results provide insights into the robustness
of these models, informing future research directions for developing resilient
and adaptable robotic systems. Our code is available at
https://be2rlab.github.io/OSMa-Bench/.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 13:07:51 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Popov",
"Maxim",
""
],
[
"Kurkova",
"Regina",
""
],
[
"Iumanov",
"Mikhail",
""
],
[
"Mahmoud",
"Jaafar",
""
],
[
"Kolyubin",
"Sergey",
""
]
] | TITLE: OSMa-Bench: Evaluating Open Semantic Mapping Under Varying Lighting
Conditions
ABSTRACT: Open Semantic Mapping (OSM) is a key technology in robotic perception,
combining semantic segmentation and SLAM techniques. This paper introduces a
dynamically configurable and highly automated LLM/LVLM-powered pipeline for
evaluating OSM solutions called OSMa-Bench (Open Semantic Mapping Benchmark).
The study focuses on evaluating state-of-the-art semantic mapping algorithms
under varying indoor lighting conditions, a critical challenge in indoor
environments. We introduce a novel dataset with simulated RGB-D sequences and
ground truth 3D reconstructions, facilitating the rigorous analysis of mapping
performance across different lighting conditions. Through experiments on
leading models such as ConceptGraphs, BBQ and OpenScene, we evaluate the
semantic fidelity of object recognition and segmentation. Additionally, we
introduce a Scene Graph evaluation method to analyze the ability of models to
interpret semantic structure. The results provide insights into the robustness
of these models, informing future research directions for developing resilient
and adaptable robotic systems. Our code is available at
https://be2rlab.github.io/OSMa-Bench/.
|
2503.10350 | Ali Salar | Ali Salar, Qing Liu, Yingli Tian and Guoying Zhao | Enhancing Facial Privacy Protection via Weakening Diffusion Purification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of social media has led to the widespread sharing of
individual portrait images, which pose serious privacy risks due to the
capabilities of automatic face recognition (AFR) systems for mass surveillance.
Hence, protecting facial privacy against unauthorized AFR systems is essential.
Inspired by the generation capability of emerging diffusion models, recent
methods employ diffusion models to generate adversarial face images for privacy
protection. However, they suffer from the diffusion purification effect,
leading to a low protection success rate (PSR). In this paper, we first propose
learning unconditional embeddings to increase the learning capacity for
adversarial modifications and then use them to guide the modification of the
adversarial latent code to weaken the diffusion purification effect. Moreover,
we integrate an identity-preserving structure to maintain structural
consistency between the original and generated images, allowing human observers
to recognize the generated image as having the same identity as the original.
Extensive experiments conducted on two public datasets, i.e., CelebA-HQ and
LADN, demonstrate the superiority of our approach. The protected faces
generated by our method outperform those produced by existing facial privacy
protection approaches in terms of transferability and natural appearance.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 13:27:53 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Salar",
"Ali",
""
],
[
"Liu",
"Qing",
""
],
[
"Tian",
"Yingli",
""
],
[
"Zhao",
"Guoying",
""
]
] | TITLE: Enhancing Facial Privacy Protection via Weakening Diffusion Purification
ABSTRACT: The rapid growth of social media has led to the widespread sharing of
individual portrait images, which pose serious privacy risks due to the
capabilities of automatic face recognition (AFR) systems for mass surveillance.
Hence, protecting facial privacy against unauthorized AFR systems is essential.
Inspired by the generation capability of emerging diffusion models, recent
methods employ diffusion models to generate adversarial face images for privacy
protection. However, they suffer from the diffusion purification effect,
leading to a low protection success rate (PSR). In this paper, we first propose
learning unconditional embeddings to increase the learning capacity for
adversarial modifications and then use them to guide the modification of the
adversarial latent code to weaken the diffusion purification effect. Moreover,
we integrate an identity-preserving structure to maintain structural
consistency between the original and generated images, allowing human observers
to recognize the generated image as having the same identity as the original.
Extensive experiments conducted on two public datasets, i.e., CelebA-HQ and
LADN, demonstrate the superiority of our approach. The protected faces
generated by our method outperform those produced by existing facial privacy
protection approaches in terms of transferability and natural appearance.
|
2503.10356 | Toni Schneidereit | Toni Schneidereit, Stefan Gohrenz, Michael Breu{\ss} | Object detection characteristics in a learning factory environment using
YOLOv8 | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI-based object detection, and efforts to explain and investigate their
characteristics, is a topic of high interest. The impact of, e.g., complex
background structures with similar appearances as the objects of interest, on
the detection accuracy and, beforehand, the necessary dataset composition are
topics of ongoing research. In this paper, we present a systematic
investigation of background influences and different features of the object to
be detected. The latter includes various materials and surfaces, partially
transparent and with shiny reflections in the context of an Industry 4.0
learning factory. Different YOLOv8 models have been trained for each of the
materials on differently sized datasets, where the appearance was the only
changing parameter. In the end, similar characteristics tend to show different
behaviours and sometimes unexpected results. While some background components
tend to be detected, others with the same features are not part of the
detection. Additionally, some more precise conclusions can be drawn from the
results. Therefore, we contribute a challenging dataset with detailed
investigations on 92 trained YOLO models, addressing some issues on the
detection accuracy and possible overfitting.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 13:33:27 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Schneidereit",
"Toni",
""
],
[
"Gohrenz",
"Stefan",
""
],
[
"Breuß",
"Michael",
""
]
] | TITLE: Object detection characteristics in a learning factory environment using
YOLOv8
ABSTRACT: AI-based object detection, and efforts to explain and investigate its
characteristics, are topics of high interest. The impact of, e.g., complex
background structures with appearances similar to the objects of interest on
detection accuracy and, beforehand, the necessary dataset composition are
topics of ongoing research. In this paper, we present a systematic
investigation of background influences and different features of the object to
be detected. The latter includes various materials and surfaces, partially
transparent and with shiny reflections in the context of an Industry 4.0
learning factory. Different YOLOv8 models have been trained for each of the
materials on differently sized datasets, where the appearance was the only
changing parameter. In the end, similar characteristics tend to show different
behaviours and sometimes unexpected results. While some background components
tend to be detected, others with the same features are not part of the
detection. Additionally, some more precise conclusions can be drawn from the
results. Therefore, we contribute a challenging dataset with detailed
investigations on 92 trained YOLO models, addressing some issues on the
detection accuracy and possible overfitting.
|
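Training one YOLOv8 model per material on differently sized datasets, as in the study above, follows the standard ultralytics workflow; the dataset YAML name below is a placeholder for one of the per-material splits.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")                        # pretrained nano variant as starting point
model.train(data="material_A.yaml", epochs=100)   # placeholder per-material dataset config
metrics = model.val()                             # mAP and per-class detection statistics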
2503.10370 | Iman Nematollahi | Iman Nematollahi, Branton DeMoss, Akshay L Chandra, Nick Hawes,
Wolfram Burgard, Ingmar Posner | LUMOS: Language-Conditioned Imitation Learning with World Models | Accepted at the 2025 IEEE International Conference on Robotics and
Automation (ICRA) | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce LUMOS, a language-conditioned multi-task imitation learning
framework for robotics. LUMOS learns skills by practicing them over many
long-horizon rollouts in the latent space of a learned world model and
transfers these skills zero-shot to a real robot. By learning on-policy in the
latent space of the learned world model, our algorithm mitigates policy-induced
distribution shift from which most offline imitation learning methods suffer.
LUMOS learns from unstructured play data with fewer than 1% hindsight language
annotations but is steerable with language commands at test time. We achieve
this coherent long-horizon performance by combining latent planning with both
image- and language-based hindsight goal relabeling during training, and by
optimizing an intrinsic reward defined in the latent space of the world model
over multiple time steps, effectively reducing covariate shift. In experiments
on the difficult long-horizon CALVIN benchmark, LUMOS outperforms prior
learning-based methods with comparable approaches on chained multi-task
evaluations. To the best of our knowledge, we are the first to learn
language-conditioned continuous visuomotor control for a real-world robot
within an offline world model. Videos, dataset and code are available at
http://lumos.cs.uni-freiburg.de.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 13:48:24 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Nematollahi",
"Iman",
""
],
[
"DeMoss",
"Branton",
""
],
[
"Chandra",
"Akshay L",
""
],
[
"Hawes",
"Nick",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Posner",
"Ingmar",
""
]
] | TITLE: LUMOS: Language-Conditioned Imitation Learning with World Models
ABSTRACT: We introduce LUMOS, a language-conditioned multi-task imitation learning
framework for robotics. LUMOS learns skills by practicing them over many
long-horizon rollouts in the latent space of a learned world model and
transfers these skills zero-shot to a real robot. By learning on-policy in the
latent space of the learned world model, our algorithm mitigates policy-induced
distribution shift from which most offline imitation learning methods suffer.
LUMOS learns from unstructured play data with fewer than 1% hindsight language
annotations but is steerable with language commands at test time. We achieve
this coherent long-horizon performance by combining latent planning with both
image- and language-based hindsight goal relabeling during training, and by
optimizing an intrinsic reward defined in the latent space of the world model
over multiple time steps, effectively reducing covariate shift. In experiments
on the difficult long-horizon CALVIN benchmark, LUMOS outperforms prior
learning-based methods with comparable approaches on chained multi-task
evaluations. To the best of our knowledge, we are the first to learn
language-conditioned continuous visuomotor control for a real-world robot
within an offline world model. Videos, dataset and code are available at
http://lumos.cs.uni-freiburg.de.
|
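Hindsight goal relabeling, used above for both image and language goals, treats what a play segment actually achieved as the goal it was pursuing. The sketch below uses hypothetical field names to show the relabeling step only.

def hindsight_relabel(segment, language_table):
    # segment: dict with 'id', 'obs' (frame list) and 'actions' (hypothetical layout)
    goal = {"image": segment["obs"][-1]}          # reached final frame becomes the image goal
    ann = language_table.get(segment.get("id"))   # sparse (<1%) hindsight language labels
    if ann is not None:
        goal["language"] = ann
    return {**segment, "goal": goal}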
2503.10391 | Yufan Deng | Yufan Deng, Xun Guo, Yizhi Wang, Jacob Zhiyuan Fang, Angtian Wang,
Shenghai Yuan, Yiding Yang, Bo Liu, Haibin Huang, Chongyang Ma | CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video generation has witnessed remarkable progress with the advent of deep
generative models, particularly diffusion models. While existing methods excel
in generating high-quality videos from text prompts or single images,
personalized multi-subject video generation remains a largely unexplored
challenge. This task involves synthesizing videos that incorporate multiple
distinct subjects, each defined by separate reference images, while ensuring
temporal and spatial consistency. Current approaches primarily rely on mapping
subject images to keywords in text prompts, which introduces ambiguity and
limits their ability to model subject relationships effectively. In this paper,
we propose CINEMA, a novel framework for coherent multi-subject video
generation by leveraging a Multimodal Large Language Model (MLLM). Our approach
eliminates the need for explicit correspondences between subject images and
text entities, mitigating ambiguity and reducing annotation effort. By
leveraging the MLLM to interpret subject relationships, our method facilitates
scalability, enabling the use of large and diverse datasets for training.
Furthermore, our framework can be conditioned on varying numbers of subjects,
offering greater flexibility in personalized content creation. Through
extensive evaluations, we demonstrate that our approach significantly improves
subject consistency and overall video coherence, paving the way for advanced
applications in storytelling, interactive media, and personalized video
generation.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:07:58 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Deng",
"Yufan",
""
],
[
"Guo",
"Xun",
""
],
[
"Wang",
"Yizhi",
""
],
[
"Fang",
"Jacob Zhiyuan",
""
],
[
"Wang",
"Angtian",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Yang",
"Yiding",
""
],
[
"Liu",
"... | TITLE: CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance
ABSTRACT: Video generation has witnessed remarkable progress with the advent of deep
generative models, particularly diffusion models. While existing methods excel
in generating high-quality videos from text prompts or single images,
personalized multi-subject video generation remains a largely unexplored
challenge. This task involves synthesizing videos that incorporate multiple
distinct subjects, each defined by separate reference images, while ensuring
temporal and spatial consistency. Current approaches primarily rely on mapping
subject images to keywords in text prompts, which introduces ambiguity and
limits their ability to model subject relationships effectively. In this paper,
we propose CINEMA, a novel framework for coherent multi-subject video
generation by leveraging a Multimodal Large Language Model (MLLM). Our approach
eliminates the need for explicit correspondences between subject images and
text entities, mitigating ambiguity and reducing annotation effort. By
leveraging the MLLM to interpret subject relationships, our method facilitates
scalability, enabling the use of large and diverse datasets for training.
Furthermore, our framework can be conditioned on varying numbers of subjects,
offering greater flexibility in personalized content creation. Through
extensive evaluations, we demonstrate that our approach significantly improves
subject consistency and overall video coherence, paving the way for advanced
applications in storytelling, interactive media, and personalized video
generation.
|
2503.10392 | Fengxiang Wang | Fengxiang Wang, Hongzhen Wang, Yulin Wang, Di Wang, Mingshuo Chen,
Haiyan Zhao, Yangang Sun, Shuo Wang, Long Lan, Wenjing Yang, Jing Zhang | RoMA: Scaling up Mamba-based Foundation Models for Remote Sensing | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in self-supervised learning for Vision Transformers (ViTs)
have fueled breakthroughs in remote sensing (RS) foundation models. However,
the quadratic complexity of self-attention poses a significant barrier to
scalability, particularly for large models and high-resolution images. While
the linear-complexity Mamba architecture offers a promising alternative,
existing RS applications of Mamba remain limited to supervised tasks on small,
domain-specific datasets. To address these challenges, we propose RoMA, a
framework that enables scalable self-supervised pretraining of Mamba-based RS
foundation models using large-scale, diverse, unlabeled data. RoMA enhances
scalability for high-resolution images through a tailored auto-regressive
learning strategy, incorporating two key innovations: 1) a rotation-aware
pretraining mechanism combining adaptive cropping with angular embeddings to
handle sparsely distributed objects with arbitrary orientations, and 2)
multi-scale token prediction objectives that address the extreme variations in
object scales inherent to RS imagery. Systematic empirical studies validate
that Mamba adheres to RS data and parameter scaling laws, with performance
scaling reliably as model and data size increase. Furthermore, experiments
across scene classification, object detection, and semantic segmentation tasks
demonstrate that RoMA-pretrained Mamba models consistently outperform ViT-based
counterparts in both accuracy and computational efficiency. The source code and
pretrained models will be released at https://github.com/MiliLab/RoMA.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:09:18 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Fengxiang",
""
],
[
"Wang",
"Hongzhen",
""
],
[
"Wang",
"Yulin",
""
],
[
"Wang",
"Di",
""
],
[
"Chen",
"Mingshuo",
""
],
[
"Zhao",
"Haiyan",
""
],
[
"Sun",
"Yangang",
""
],
[
"Wang",
"S... | TITLE: RoMA: Scaling up Mamba-based Foundation Models for Remote Sensing
ABSTRACT: Recent advances in self-supervised learning for Vision Transformers (ViTs)
have fueled breakthroughs in remote sensing (RS) foundation models. However,
the quadratic complexity of self-attention poses a significant barrier to
scalability, particularly for large models and high-resolution images. While
the linear-complexity Mamba architecture offers a promising alternative,
existing RS applications of Mamba remain limited to supervised tasks on small,
domain-specific datasets. To address these challenges, we propose RoMA, a
framework that enables scalable self-supervised pretraining of Mamba-based RS
foundation models using large-scale, diverse, unlabeled data. RoMA enhances
scalability for high-resolution images through a tailored auto-regressive
learning strategy, incorporating two key innovations: 1) a rotation-aware
pretraining mechanism combining adaptive cropping with angular embeddings to
handle sparsely distributed objects with arbitrary orientations, and 2)
multi-scale token prediction objectives that address the extreme variations in
object scales inherent to RS imagery. Systematic empirical studies validate
that Mamba adheres to RS data and parameter scaling laws, with performance
scaling reliably as model and data size increase. Furthermore, experiments
across scene classification, object detection, and semantic segmentation tasks
demonstrate that RoMA-pretrained Mamba models consistently outperform ViT-based
counterparts in both accuracy and computational efficiency. The source code and
pretrained models will be released at https://github.com/MiliLab/RoMA.
|
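The angular embeddings used for rotation-aware pretraining above can be illustrated with a standard sinusoidal encoding of a crop's rotation angle; this generic construction is an assumption for illustration, not RoMA's exact design.

import torch

def angular_embedding(theta, dim=64):
    # theta: (B,) rotation angles in radians -> (B, dim) sin/cos features
    freqs = 2.0 ** torch.arange(dim // 2, dtype=theta.dtype)
    angles = theta[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

emb = angular_embedding(torch.tensor([0.0, 0.7854]))  # e.g. 0 and ~45 degrees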
2503.10398 | Niklas Houba | Niklas Houba | Deep source separation of overlapping gravitational-wave signals and
non-stationary noise artifacts | 19 pages, 10 figures | null | null | null | astro-ph.IM gr-qc physics.data-an physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Laser Interferometer Space Antenna (LISA) will observe gravitational
waves in the millihertz frequency band, detecting signals from a vast number of
astrophysical sources embedded in instrumental noise. Extracting individual
signals from these overlapping contributions is a fundamental challenge in LISA
data analysis and is traditionally addressed using computationally expensive
stochastic Bayesian techniques. In this work, we present a deep learning-based
framework for blind source separation in LISA data, employing an
encoder-decoder architecture commonly used in digital audio processing to
isolate individual signals within complex mixtures. Our approach enables
signals from massive black-hole binaries, Galactic binaries, and instrumental
glitches to be disentangled directly in a single step, circumventing the need
for sequential source identification and subtraction. By learning clustered
latent space representations, the framework provides a scalable alternative to
conventional methods, with applications in both low-latency event detection and
full-scale global-fit analyses. As a proof of concept, we assess the model's
performance using simulated LISA data in a controlled setting with a limited
number of overlapping sources. The results highlight deep source separation as
a promising tool for LISA, paving the way for future extensions to more complex
datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:19:13 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Houba",
"Niklas",
""
]
] | TITLE: Deep source separation of overlapping gravitational-wave signals and
non-stationary noise artifacts
ABSTRACT: The Laser Interferometer Space Antenna (LISA) will observe gravitational
waves in the millihertz frequency band, detecting signals from a vast number of
astrophysical sources embedded in instrumental noise. Extracting individual
signals from these overlapping contributions is a fundamental challenge in LISA
data analysis and is traditionally addressed using computationally expensive
stochastic Bayesian techniques. In this work, we present a deep learning-based
framework for blind source separation in LISA data, employing an
encoder-decoder architecture commonly used in digital audio processing to
isolate individual signals within complex mixtures. Our approach enables
signals from massive black-hole binaries, Galactic binaries, and instrumental
glitches to be disentangled directly in a single step, circumventing the need
for sequential source identification and subtraction. By learning clustered
latent space representations, the framework provides a scalable alternative to
conventional methods, with applications in both low-latency event detection and
full-scale global-fit analyses. As a proof of concept, we assess the model's
performance using simulated LISA data in a controlled setting with a limited
number of overlapping sources. The results highlight deep source separation as
a promising tool for LISA, paving the way for future extensions to more complex
datasets.
|
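The encoder-decoder separation style borrowed from digital audio processing above reduces to: encode the mixture, predict one mask per source, and decode each masked representation. The following is a minimal PyTorch analogue with assumed channel counts, a schematic rather than the paper's network.

import torch
import torch.nn as nn

class TinySeparator(nn.Module):
    # encoder-mask-decoder separation, schematic analogue of audio separators
    def __init__(self, n_sources=3, channels=64, kernel=16):
        super().__init__()
        self.encoder = nn.Conv1d(1, channels, kernel, stride=kernel // 2)
        self.masker = nn.Conv1d(channels, channels * n_sources, 1)
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel, stride=kernel // 2)
        self.n_sources = n_sources

    def forward(self, mix):                        # mix: (B, 1, T) strain time series
        z = self.encoder(mix)                      # latent representation of the mixture
        masks = self.masker(z).sigmoid()           # one soft mask per source
        masks = masks.view(mix.size(0), self.n_sources, z.size(1), -1)
        # decode each masked latent back to a per-source waveform
        return torch.stack([self.decoder(masks[:, k] * z)
                            for k in range(self.n_sources)], dim=1)

est = TinySeparator()(torch.randn(2, 1, 4096))     # -> (B, n_sources, 1, T)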
2503.10404 | Fabrizio Pittorino | Matteo Gambella, Fabrizio Pittorino, Manuel Roveri | Architecture-Aware Minimization (A$^2$M): How to Find Flat Minima in
Neural Architecture Search | 22 pages, 11 figures, 3 tables | null | null | null | cs.LG cond-mat.dis-nn cs.CV | http://creativecommons.org/licenses/by/4.0/ | Neural Architecture Search (NAS) has become an essential tool for designing
effective and efficient neural networks. In this paper, we investigate the
geometric properties of neural architecture spaces commonly used in
differentiable NAS methods, specifically NAS-Bench-201 and DARTS. By defining
flatness metrics such as neighborhoods and loss barriers along paths in
architecture space, we reveal locality and flatness characteristics analogous
to the well-known properties of neural network loss landscapes in weight space.
In particular, we find that highly accurate architectures cluster together in
flat regions, while suboptimal architectures remain isolated, unveiling the
detailed geometrical structure of the architecture search landscape. Building
on these insights, we propose Architecture-Aware Minimization (A$^2$M), a novel
analytically derived algorithmic framework that explicitly biases, for the
first time, the gradient of differentiable NAS methods towards flat minima in
architecture space. A$^2$M consistently improves generalization over
state-of-the-art DARTS-based algorithms on benchmark datasets including
CIFAR-10, CIFAR-100, and ImageNet16-120, across both NAS-Bench-201 and DARTS
search spaces. Notably, A$^2$M is able to increase the test accuracy, on
average across different differentiable NAS methods, by +3.60\% on CIFAR-10,
+4.60\% on CIFAR-100, and +3.64\% on ImageNet16-120, demonstrating its superior
effectiveness in practice. A$^2$M can be easily integrated into existing
differentiable NAS frameworks, offering a versatile tool for future research
and applications in automated machine learning. We open-source our code at
https://github.com/AI-Tech-Research-Lab/AsquaredM.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:30:17 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Gambella",
"Matteo",
""
],
[
"Pittorino",
"Fabrizio",
""
],
[
"Roveri",
"Manuel",
""
]
] | TITLE: Architecture-Aware Minimization (A$^2$M): How to Find Flat Minima in
Neural Architecture Search
ABSTRACT: Neural Architecture Search (NAS) has become an essential tool for designing
effective and efficient neural networks. In this paper, we investigate the
geometric properties of neural architecture spaces commonly used in
differentiable NAS methods, specifically NAS-Bench-201 and DARTS. By defining
flatness metrics such as neighborhoods and loss barriers along paths in
architecture space, we reveal locality and flatness characteristics analogous
to the well-known properties of neural network loss landscapes in weight space.
In particular, we find that highly accurate architectures cluster together in
flat regions, while suboptimal architectures remain isolated, unveiling the
detailed geometrical structure of the architecture search landscape. Building
on these insights, we propose Architecture-Aware Minimization (A$^2$M), a novel
analytically derived algorithmic framework that explicitly biases, for the
first time, the gradient of differentiable NAS methods towards flat minima in
architecture space. A$^2$M consistently improves generalization over
state-of-the-art DARTS-based algorithms on benchmark datasets including
CIFAR-10, CIFAR-100, and ImageNet16-120, across both NAS-Bench-201 and DARTS
search spaces. Notably, A$^2$M is able to increase the test accuracy, on
average across different differentiable NAS methods, by +3.60\% on CIFAR-10,
+4.60\% on CIFAR-100, and +3.64\% on ImageNet16-120, demonstrating its superior
effectiveness in practice. A$^2$M can be easily integrated into existing
differentiable NAS frameworks, offering a versatile tool for future research
and applications in automated machine learning. We open-source our code at
https://github.com/AI-Tech-Research-Lab/AsquaredM.
|
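Explicitly biasing the gradient toward flat minima in architecture space, as A$^2$M does above, is closely related to sharpness-aware minimization; the sketch below applies a SAM-style perturb-then-descend update to DARTS-like architecture parameters alpha. It is an assumed illustration of the principle, not A$^2$M's analytically derived update.

import torch

def flat_minima_arch_step(alpha, val_loss_fn, lr=3e-4, rho=0.05):
    # alpha: architecture parameters (leaf tensor with requires_grad=True)
    # val_loss_fn: maps alpha to a scalar validation loss
    loss = val_loss_fn(alpha)
    grad = torch.autograd.grad(loss, alpha)[0]
    eps = rho * grad / (grad.norm() + 1e-12)       # step toward the nearby worst case
    sharp_loss = val_loss_fn(alpha + eps)          # loss at the perturbed architecture
    flat_grad = torch.autograd.grad(sharp_loss, alpha)[0]
    with torch.no_grad():
        alpha -= lr * flat_grad                    # descend along the sharpness-aware gradient
    return loss.item()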
2503.10406 | Yijing Lin | Yijing Lin, Mengqi Huang, Shuhan Zhuang, Zhendong Mao | RealGeneral: Unifying Visual Generation via Temporal In-Context Learning
with Video Models | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unifying diverse image generation tasks within a single framework remains a
fundamental challenge in visual generation. While large language models (LLMs)
achieve unification through task-agnostic data and generation, existing visual
generation models fail to meet these principles. Current approaches either rely
on per-task datasets and large-scale training or adapt pre-trained image models
with task-specific modifications, limiting their generalizability. In this
work, we explore video models as a foundation for unified image generation,
leveraging their inherent ability to model temporal correlations. We introduce
RealGeneral, a novel framework that reformulates image generation as a
conditional frame prediction task, analogous to in-context learning in LLMs. To
bridge the gap between video models and condition-image pairs, we propose (1) a
Unified Conditional Embedding module for multi-modal alignment and (2) a
Unified Stream DiT Block with decoupled adaptive LayerNorm and attention mask
to mitigate cross-modal interference. RealGeneral demonstrates effectiveness in
multiple important visual generation tasks, e.g., it achieves a 14.5%
improvement in subject similarity for customized generation and a 10%
enhancement in image quality for the canny-to-image task. Project page:
https://lyne1.github.io/RealGeneral/
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:31:52 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Lin",
"Yijing",
""
],
[
"Huang",
"Mengqi",
""
],
[
"Zhuang",
"Shuhan",
""
],
[
"Mao",
"Zhendong",
""
]
] | TITLE: RealGeneral: Unifying Visual Generation via Temporal In-Context Learning
with Video Models
ABSTRACT: Unifying diverse image generation tasks within a single framework remains a
fundamental challenge in visual generation. While large language models (LLMs)
achieve unification through task-agnostic data and generation, existing visual
generation models fail to meet these principles. Current approaches either rely
on per-task datasets and large-scale training or adapt pre-trained image models
with task-specific modifications, limiting their generalizability. In this
work, we explore video models as a foundation for unified image generation,
leveraging their inherent ability to model temporal correlations. We introduce
RealGeneral, a novel framework that reformulates image generation as a
conditional frame prediction task, analogous to in-context learning in LLMs. To
bridge the gap between video models and condition-image pairs, we propose (1) a
Unified Conditional Embedding module for multi-modal alignment and (2) a
Unified Stream DiT Block with decoupled adaptive LayerNorm and attention mask
to mitigate cross-modal interference. RealGeneral demonstrates effectiveness in
multiple important visual generation tasks, e.g., it achieves a 14.5%
improvement in subject similarity for customized generation and a 10%
enhancement in image quality for the canny-to-image task. Project page:
https://lyne1.github.io/RealGeneral/
|
2503.10410 | Yuwen Du | Yuwen Du, Anning Hu, Zichen Chao, Yifan Lu, Junhao Ge, Genjia Liu,
Weitao Wu, Lanjun Wang, Siheng Chen | RoCo-Sim: Enhancing Roadside Collaborative Perception through Foreground
Simulation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Roadside Collaborative Perception refers to a system where multiple roadside
units collaborate to pool their perceptual data, assisting vehicles in
enhancing their environmental awareness. Existing roadside perception methods
concentrate on model design but overlook data issues like calibration errors,
sparse information, and multi-view consistency, leading to poor performance on
recently published datasets. To significantly enhance roadside collaborative
perception and address critical data issues, we present the first simulation
framework RoCo-Sim for roadside collaborative perception. RoCo-Sim is capable
of generating diverse, multi-view consistent simulated roadside data through
dynamic foreground editing and full-scene style transfer of a single image.
RoCo-Sim consists of four components: (1) Camera Extrinsic Optimization ensures
accurate 3D to 2D projection for roadside cameras; (2) A novel Multi-View
Occlusion-Aware Sampler (MOAS) determines the placement of diverse digital
assets within 3D space; (3) DepthSAM innovatively models foreground-background
relationships from single-frame fixed-view images, ensuring multi-view
consistency of foreground; and (4) Scalable Post-Processing Toolkit generates
more realistic and enriched scenes through style transfer and other
enhancements. RoCo-Sim significantly improves roadside 3D object detection,
outperforming SOTA methods by 83.74 on Rcooper-Intersection and 83.12 on
TUMTraf-V2X for AP70. RoCo-Sim fills a critical gap in roadside perception
simulation. Code and pre-trained models will be released soon:
https://github.com/duyuwen-duen/RoCo-Sim
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:33:42 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Du",
"Yuwen",
""
],
[
"Hu",
"Anning",
""
],
[
"Chao",
"Zichen",
""
],
[
"Lu",
"Yifan",
""
],
[
"Ge",
"Junhao",
""
],
[
"Liu",
"Genjia",
""
],
[
"Wu",
"Weitao",
""
],
[
"Wang",
"Lanjun",
""
... | TITLE: RoCo-Sim: Enhancing Roadside Collaborative Perception through Foreground
Simulation
ABSTRACT: Roadside Collaborative Perception refers to a system where multiple roadside
units collaborate to pool their perceptual data, assisting vehicles in
enhancing their environmental awareness. Existing roadside perception methods
concentrate on model design but overlook data issues like calibration errors,
sparse information, and multi-view consistency, leading to poor performance on
recently published datasets. To significantly enhance roadside collaborative
perception and address critical data issues, we present the first simulation
framework RoCo-Sim for roadside collaborative perception. RoCo-Sim is capable
of generating diverse, multi-view consistent simulated roadside data through
dynamic foreground editing and full-scene style transfer of a single image.
RoCo-Sim consists of four components: (1) Camera Extrinsic Optimization ensures
accurate 3D to 2D projection for roadside cameras; (2) A novel Multi-View
Occlusion-Aware Sampler (MOAS) determines the placement of diverse digital
assets within 3D space; (3) DepthSAM innovatively models foreground-background
relationships from single-frame fixed-view images, ensuring multi-view
consistency of foreground; and (4) Scalable Post-Processing Toolkit generates
more realistic and enriched scenes through style transfer and other
enhancements. RoCo-Sim significantly improves roadside 3D object detection,
outperforming SOTA methods by 83.74 on Rcooper-Intersection and 83.12 on
TUMTraf-V2X for AP70. RoCo-Sim fills a critical gap in roadside perception
simulation. Code and pre-trained models will be released soon:
https://github.com/duyuwen-duen/RoCo-Sim
|
2503.10421 | Zhenwei Wang | Zhenwei Wang, Ruibin Bai, Tiehua Zhang | Towards Constraint-Based Adaptive Hypergraph Learning for Solving
Vehicle Routing: An End-to-End Solution | null | null | null | null | cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | The application of learning-based methods to vehicle routing problems has
emerged as a pivotal area of research in combinatorial optimization. These
problems are characterized by vast solution spaces and intricate constraints,
making traditional approaches such as exact mathematical models or heuristic
methods prone to high computational overhead or reliant on the design of
complex heuristic operators to achieve optimal or near-optimal solutions.
Meanwhile, although some recent learning-based methods can produce good
performance for VRP with straightforward constraint scenarios, they often fail
to effectively handle hard constraints that are common in practice. This study
introduces a novel end-to-end framework that combines constraint-oriented
hypergraphs with reinforcement learning to address vehicle routing problems. A
central innovation of this work is the development of a constraint-oriented
dynamic hyperedge reconstruction strategy within an encoder, which
significantly enhances hypergraph representation learning. Additionally, the
decoder leverages a double-pointer attention mechanism to iteratively generate
solutions. The proposed model is trained by incorporating asynchronous
parameter updates informed by hypergraph constraints and optimizing a dual loss
function comprising a constraint loss and a policy gradient loss. The experimental
results on benchmark datasets demonstrate that the proposed approach not only
eliminates the need for sophisticated heuristic operators but also achieves
substantial improvements in solution quality.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:42:44 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Zhenwei",
""
],
[
"Bai",
"Ruibin",
""
],
[
"Zhang",
"Tiehua",
""
]
] | TITLE: Towards Constraint-Based Adaptive Hypergraph Learning for Solving
Vehicle Routing: An End-to-End Solution
ABSTRACT: The application of learning-based methods to vehicle routing problems has
emerged as a pivotal area of research in combinatorial optimization. These
problems are characterized by vast solution spaces and intricate constraints,
making traditional approaches such as exact mathematical models or heuristic
methods prone to high computational overhead or reliant on the design of
complex heuristic operators to achieve optimal or near-optimal solutions.
Meanwhile, although some recent learning-based methods can produce good
performance for VRP with straightforward constraint scenarios, they often fail
to effectively handle hard constraints that are common in practice. This study
introduces a novel end-to-end framework that combines constraint-oriented
hypergraphs with reinforcement learning to address vehicle routing problems. A
central innovation of this work is the development of a constraint-oriented
dynamic hyperedge reconstruction strategy within an encoder, which
significantly enhances hypergraph representation learning. Additionally, the
decoder leverages a double-pointer attention mechanism to iteratively generate
solutions. The proposed model is trained by incorporating asynchronous
parameter updates informed by hypergraph constraints and optimizing a dual loss
function comprising a constraint loss and a policy gradient loss. The experimental
results on benchmark datasets demonstrate that the proposed approach not only
eliminates the need for sophisticated heuristic operators but also achieves
substantial improvements in solution quality.
|
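The dual loss above, combining a policy gradient term with a constraint term, can be written as a weighted sum; the sketch below assumes hypothetical per-batch quantities (route log-probabilities, tour costs, a baseline, and a differentiable violation measure) produced by the encoder-decoder.

import torch

def dual_loss(log_probs, tour_cost, baseline, violation, lam=1.0):
    # log_probs: (B,) summed log-probabilities of the sampled routes
    # tour_cost / baseline: (B,) route lengths and e.g. greedy-rollout reference costs
    # violation: (B,) differentiable measure of hard-constraint violation
    advantage = (tour_cost - baseline).detach()        # REINFORCE advantage
    policy_loss = (advantage * log_probs).mean()
    constraint_loss = violation.mean()
    return policy_loss + lam * constraint_loss         # weighted dual objective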
2503.10426 | Javad Pourmostafa Roshan Sharami | Bennet van den Broek and Javad Pourmostafa Roshan Sharami | Improving Medical Waste Classification with Hybrid Capsule Networks | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The improper disposal and mismanagement of medical waste pose severe
environmental and public health risks, contributing to greenhouse gas emissions
and the spread of infectious diseases. Efficient and accurate medical waste
classification is crucial for mitigating these risks. We explore the
integration of capsule networks with a pretrained DenseNet model to improve
medical waste classification. To the best of our knowledge, capsule networks
have not yet been applied to this task, making this study the first to assess
their effectiveness.
A diverse dataset of medical waste images collected from multiple public
sources is used to evaluate three model configurations: (1) a pretrained
DenseNet model as a baseline, (2) a pretrained DenseNet with frozen layers
combined with a capsule network, and (3) a pretrained DenseNet with unfrozen
layers combined with a capsule network. Experimental results demonstrate that
incorporating capsule networks improves classification performance, with F1
scores increasing from 0.89 (baseline) to 0.92 (hybrid model with unfrozen
layers). This highlights the potential of capsule networks to address the
spatial limitations of traditional convolutional models and improve
classification robustness.
While the capsule-enhanced model demonstrated improved classification
performance, direct comparisons with prior studies were challenging due to
differences in dataset size and diversity. Previous studies relied on smaller,
domain-specific datasets, which inherently yielded higher accuracy. In
contrast, our study employs a significantly larger and more diverse dataset,
leading to better generalization but introducing additional classification
challenges. This highlights the trade-off between dataset complexity and model
performance.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:49:30 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Broek",
"Bennet van den",
""
],
[
"Sharami",
"Javad Pourmostafa Roshan",
""
]
] | TITLE: Improving Medical Waste Classification with Hybrid Capsule Networks
ABSTRACT: The improper disposal and mismanagement of medical waste pose severe
environmental and public health risks, contributing to greenhouse gas emissions
and the spread of infectious diseases. Efficient and accurate medical waste
classification is crucial for mitigating these risks. We explore the
integration of capsule networks with a pretrained DenseNet model to improve
medical waste classification. To the best of our knowledge, capsule networks
have not yet been applied to this task, making this study the first to assess
their effectiveness.
A diverse dataset of medical waste images collected from multiple public
sources is used to evaluate three model configurations: (1) a pretrained
DenseNet model as a baseline, (2) a pretrained DenseNet with frozen layers
combined with a capsule network, and (3) a pretrained DenseNet with unfrozen
layers combined with a capsule network. Experimental results demonstrate that
incorporating capsule networks improves classification performance, with F1
scores increasing from 0.89 (baseline) to 0.92 (hybrid model with unfrozen
layers). This highlights the potential of capsule networks to address the
spatial limitations of traditional convolutional models and improve
classification robustness.
While the capsule-enhanced model demonstrated improved classification
performance, direct comparisons with prior studies were challenging due to
differences in dataset size and diversity. Previous studies relied on smaller,
domain-specific datasets, which inherently yielded higher accuracy. In
contrast, our study employs a significantly larger and more diverse dataset,
leading to better generalization but introducing additional classification
challenges. This highlights the trade-off between dataset complexity and model
performance.
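A minimal sketch of the hybrid configuration described above, pairing a frozen DenseNet backbone with a capsule-style head. The squash nonlinearity is standard for capsule networks; dynamic routing is omitted, and the class count and dimensions are assumptions, not the authors' exact architecture.

```python
# Frozen DenseNet features -> capsule-style head (simplified sketch).
import torch
import torch.nn as nn
from torchvision.models import densenet121

def squash(s, dim=-1, eps=1e-8):
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class CapsuleHead(nn.Module):
    def __init__(self, in_dim=1024, n_classes=4, caps_dim=16):
        super().__init__()
        self.proj = nn.Linear(in_dim, n_classes * caps_dim)
        self.n_classes, self.caps_dim = n_classes, caps_dim

    def forward(self, feats):
        u = self.proj(feats).view(-1, self.n_classes, self.caps_dim)
        v = squash(u)                      # one capsule vector per class
        return v.norm(dim=-1)              # capsule length ~ class score

backbone = densenet121(weights=None).features
for p in backbone.parameters():            # the "frozen layers" configuration
    p.requires_grad = False

x = torch.randn(2, 3, 224, 224)
feats = nn.functional.adaptive_avg_pool2d(backbone(x), 1).flatten(1)
scores = CapsuleHead()(feats)              # shape (2, 4)
```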
|
2503.10434 | Derun Li | Derun Li, Jianwei Ren, Yue Wang, Xin Wen, Pengxiang Li, Leimeng Xu,
Kun Zhan, Zhongpu Xia, Peng Jia, Xianpeng Lang, Ningyi Xu, Hang Zhao | Finetuning Generative Trajectory Model with Reinforcement Learning from
Human Feedback | 10 pages, 5 figures | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generating human-like and adaptive trajectories is essential for autonomous
driving in dynamic environments. While generative models have shown promise in
synthesizing feasible trajectories, they often fail to capture the nuanced
variability of human driving styles due to dataset biases and distributional
shifts. To address this, we introduce TrajHF, a human feedback-driven
finetuning framework for generative trajectory models, designed to align motion
planning with diverse driving preferences. TrajHF incorporates
a multi-conditional denoiser and reinforcement learning with human feedback to
refine multi-modal trajectory generation beyond conventional imitation
learning. This enables better alignment with human driving preferences while
maintaining safety and feasibility constraints. TrajHF achieves a PDMS of
93.95 on the NavSim benchmark, significantly exceeding other methods. TrajHF
sets a new
paradigm for personalized and adaptable trajectory generation in autonomous
driving.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 14:56:17 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Derun",
""
],
[
"Ren",
"Jianwei",
""
],
[
"Wang",
"Yue",
""
],
[
"Wen",
"Xin",
""
],
[
"Li",
"Pengxiang",
""
],
[
"Xu",
"Leimeng",
""
],
[
"Zhan",
"Kun",
""
],
[
"Xia",
"Zhongpu",
""
... | TITLE: Finetuning Generative Trajectory Model with Reinforcement Learning from
Human Feedback
ABSTRACT: Generating human-like and adaptive trajectories is essential for autonomous
driving in dynamic environments. While generative models have shown promise in
synthesizing feasible trajectories, they often fail to capture the nuanced
variability of human driving styles due to dataset biases and distributional
shifts. To address this, we introduce TrajHF, a human feedback-driven
finetuning framework for generative trajectory models, designed to align motion
planning with diverse driving preferences. TrajHF incorporates
a multi-conditional denoiser and reinforcement learning with human feedback to
refine multi-modal trajectory generation beyond conventional imitation
learning. This enables better alignment with human driving preferences while
maintaining safety and feasibility constraints. TrajHF achieves a PDMS of
93.95 on the NavSim benchmark, significantly exceeding other methods. TrajHF
sets a new
paradigm for personalized and adaptable trajectory generation in autonomous
driving.
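The following toy sketch illustrates the general RLHF-style finetuning loop the abstract describes: sample trajectories from a generative policy, score them with a preference reward, and take a policy-gradient step. The Gaussian waypoint policy and smoothness reward are stand-ins, not TrajHF's denoiser or reward model.

```python
# Toy preference-reward finetuning of a trajectory sampler (illustrative).
import torch
import torch.nn as nn

class TrajectoryPolicy(nn.Module):
    def __init__(self, horizon=8):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(horizon, 2))      # (x, y) waypoints
        self.log_std = nn.Parameter(torch.zeros(horizon, 2))

    def sample(self, n):
        dist = torch.distributions.Normal(self.mu, self.log_std.exp())
        traj = dist.rsample((n,))
        return traj, dist.log_prob(traj).sum(dim=(1, 2))

def preference_reward(traj):               # stand-in for a learned reward model
    return -traj.diff(dim=1).pow(2).sum(dim=(1, 2))   # prefer smooth motion

policy = TrajectoryPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
traj, logp = policy.sample(64)
reward = preference_reward(traj)
loss = -((reward - reward.mean()).detach() * logp).mean()
opt.zero_grad(); loss.backward(); opt.step()
```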
|
2503.10450 | Maarten Perneel | Maarten Perneel, Ines Adriaens, Ben Aernouts, Jan Verwaeren | Consistent multi-animal pose estimation in cattle using dynamic Kalman
filter based tracking | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Over the past decade, studying animal behaviour with the help of computer
vision has become more popular. Replacing human observers by computer vision
lowers the cost of data collection and therefore makes it possible to collect
more extensive datasets. However, the majority of available computer vision
algorithms to study animal behaviour are highly tailored towards a single
research objective, limiting possibilities for data reuse. In this perspective,
pose-estimation in combination with animal tracking offers opportunities to
yield a higher-level representation capturing both the spatial and temporal
components of animal behaviour. Such a higher-level representation makes it
possible to
answer a wide variety of research questions simultaneously, without the need to
develop repeatedly tailored computer vision algorithms. In this paper, we
therefore first address several weaknesses of current pose-estimation
algorithms and thereafter introduce KeySORT (Keypoint Simple and Online
Realtime Tracking). KeySORT deploys an adaptive Kalman filter to construct
tracklets in a bounding-box free manner, significantly improving the temporal
consistency of detected keypoints. In this paper, we focus on pose estimation
in cattle, but our methodology can easily be generalised to any other animal
species. Our test results indicate that our algorithm is able to detect up to
80% of the ground-truth keypoints with high accuracy, with only a limited drop
in performance when daylight recordings are compared to night-vision
recordings.
Moreover, by using KeySORT to construct skeletons, the temporal consistency of
generated keypoint coordinates was largely improved, offering opportunities
with regard to automated behaviour monitoring of animals.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:15:54 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Perneel",
"Maarten",
""
],
[
"Adriaens",
"Ines",
""
],
[
"Aernouts",
"Ben",
""
],
[
"Verwaeren",
"Jan",
""
]
] | TITLE: Consistent multi-animal pose estimation in cattle using dynamic Kalman
filter based tracking
ABSTRACT: Over the past decade, studying animal behaviour with the help of computer
vision has become more popular. Replacing human observers by computer vision
lowers the cost of data collection and therefore makes it possible to collect
more extensive datasets. However, the majority of available computer vision
algorithms to study animal behaviour are highly tailored towards a single
research objective, limiting possibilities for data reuse. In this perspective,
pose-estimation in combination with animal tracking offers opportunities to
yield a higher-level representation capturing both the spatial and temporal
components of animal behaviour. Such a higher-level representation makes it
possible to
answer a wide variety of research questions simultaneously, without the need to
develop repeatedly tailored computer vision algorithms. In this paper, we
therefore first address several weaknesses of current pose-estimation
algorithms and thereafter introduce KeySORT (Keypoint Simple and Online
Realtime Tracking). KeySORT deploys an adaptive Kalman filter to construct
tracklets in a bounding-box free manner, significantly improving the temporal
consistency of detected keypoints. In this paper, we focus on pose estimation
in cattle, but our methodology can easily be generalised to any other animal
species. Our test results indicate that our algorithm is able to detect up to
80% of the ground-truth keypoints with high accuracy, with only a limited drop
in performance when daylight recordings are compared to night-vision
recordings.
Moreover, by using KeySORT to construct skeletons, the temporal consistency of
generated keypoint coordinates was largely improved, offering opportunities
with regard to automated behaviour monitoring of animals.
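For intuition, here is a minimal constant-velocity Kalman filter smoothing one keypoint over three detections, the basic mechanism such bounding-box-free tracklets build on. KeySORT's adaptive noise tuning is not reproduced; matrices and noise levels are illustrative.

```python
# Constant-velocity Kalman filter for a single keypoint (predict/update).
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)      # we observe (x, y) only
Q, R = np.eye(4) * 1e-2, np.eye(2) * 1.0

x, P = np.zeros(4), np.eye(4)
for z in [np.array([10.0, 5.0]), np.array([11.2, 5.4]), np.array([12.1, 5.9])]:
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - H @ x)                          # update with detection
    P = (np.eye(4) - K @ H) @ P
    print("smoothed keypoint:", x[:2])
```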
|
2503.10452 | Wenhao Hu | Wenhao Hu, Jinhao Duan, Chunchen Wei, Li Zhang, Yue Zhang, Kaidi Xu | DynaCode: A Dynamic Complexity-Aware Code Benchmark for Evaluating Large
Language Models in Code Generation | 16 pages, 11 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of large language models (LLMs) has significantly
improved their performance in code generation tasks. However, existing code
benchmarks remain static, consisting of fixed datasets with predefined
problems. This makes them vulnerable to memorization during training, where
LLMs recall specific test cases instead of generalizing to new problems,
leading to data contamination and unreliable evaluation results. To address
these issues, we introduce DynaCode, a dynamic, complexity-aware benchmark that
overcomes the limitations of static datasets. DynaCode evaluates LLMs
systematically using a complexity-aware metric, incorporating both code
complexity and call-graph structures. DynaCode achieves large-scale diversity,
generating up to 189 million unique nested code problems across four distinct
levels of code complexity, referred to as units, and 16 types of call graphs.
Results on 12 of the latest LLMs show an average performance drop of 16.8% to 45.7%
compared to MBPP+, a static code generation benchmark, with performance
progressively decreasing as complexity increases. This demonstrates DynaCode's
ability to effectively differentiate LLMs. Additionally, by leveraging call
graphs, we gain insights into LLM behavior, particularly their preference for
handling subfunction interactions within nested code.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:18:56 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Hu",
"Wenhao",
""
],
[
"Duan",
"Jinhao",
""
],
[
"Wei",
"Chunchen",
""
],
[
"Zhang",
"Li",
""
],
[
"Zhang",
"Yue",
""
],
[
"Xu",
"Kaidi",
""
]
] | TITLE: DynaCode: A Dynamic Complexity-Aware Code Benchmark for Evaluating Large
Language Models in Code Generation
ABSTRACT: The rapid advancement of large language models (LLMs) has significantly
improved their performance in code generation tasks. However, existing code
benchmarks remain static, consisting of fixed datasets with predefined
problems. This makes them vulnerable to memorization during training, where
LLMs recall specific test cases instead of generalizing to new problems,
leading to data contamination and unreliable evaluation results. To address
these issues, we introduce DynaCode, a dynamic, complexity-aware benchmark that
overcomes the limitations of static datasets. DynaCode evaluates LLMs
systematically using a complexity-aware metric, incorporating both code
complexity and call-graph structures. DynaCode achieves large-scale diversity,
generating up to 189 million unique nested code problems across four distinct
levels of code complexity, referred to as units, and 16 types of call graphs.
Results on 12 of the latest LLMs show an average performance drop of 16.8% to 45.7%
compared to MBPP+, a static code generation benchmark, with performance
progressively decreasing as complexity increases. This demonstrates DynaCode's
ability to effectively differentiate LLMs. Additionally, by leveraging call
graphs, we gain insights into LLM behavior, particularly their preference for
handling subfunction interactions within nested code.
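A toy sketch of how nested problems can be synthesized from a call graph, in the spirit of (but not identical to) the benchmark's generator: compose unit functions into a nested target and derive test cases from the reference implementation.

```python
# Synthesize a nested code problem from a chain of "unit" functions.
import random

UNITS = {
    "double": "def double(x):\n    return 2 * x\n",
    "inc":    "def inc(x):\n    return x + 1\n",
}

def make_problem(chain):
    body = "".join(UNITS[u] for u in chain)
    calls = "x"
    for u in chain:                       # nested structure, e.g. inc(double(x))
        calls = f"{u}({calls})"
    body += f"def target(x):\n    return {calls}\n"
    return body

chain = random.sample(list(UNITS), k=2)
src = make_problem(chain)
ns = {}
exec(src, ns)                             # build the reference implementation
tests = [(i, ns["target"](i)) for i in range(5)]
print(chain, tests)                       # a unique problem plus its test cases
```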
|
2503.10464 | Xunzhi Zheng | Xunzhi Zheng and Dan Xu | Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within
Unified Neural Representations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning accurate scene reconstruction without pose priors in neural radiance
fields is challenging due to inherent geometric ambiguity. Recent developments
either rely on correspondence priors for regularization or use off-the-shelf
flow estimators to derive analytical poses. However, the potential for jointly
learning scene geometry, camera poses, and dense flow within a unified neural
representation remains largely unexplored. In this paper, we present Flow-NeRF,
a unified framework that simultaneously optimizes scene geometry, camera poses,
and dense optical flow all on-the-fly. To enable the learning of dense flow
within the neural radiance field, we design and build a bijective mapping for
flow estimation, conditioned on pose. To make the scene reconstruction benefit
from the flow estimation, we develop an effective feature enhancement mechanism
to pass canonical space features to world space representations, significantly
enhancing scene geometry. We validate our model across four important tasks,
i.e., novel view synthesis, depth estimation, camera pose prediction, and dense
optical flow estimation, using several datasets. Our approach surpasses
previous methods in almost all metrics for novel-view synthesis and depth
estimation and yields both qualitatively sound and quantitatively accurate
novel-view flow. Our project page is https://zhengxunzhi.github.io/flownerf/.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:37:11 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Zheng",
"Xunzhi",
""
],
[
"Xu",
"Dan",
""
]
] | TITLE: Flow-NeRF: Joint Learning of Geometry, Poses, and Dense Flow within
Unified Neural Representations
ABSTRACT: Learning accurate scene reconstruction without pose priors in neural radiance
fields is challenging due to inherent geometric ambiguity. Recent developments
either rely on correspondence priors for regularization or use off-the-shelf
flow estimators to derive analytical poses. However, the potential for jointly
learning scene geometry, camera poses, and dense flow within a unified neural
representation remains largely unexplored. In this paper, we present Flow-NeRF,
a unified framework that simultaneously optimizes scene geometry, camera poses,
and dense optical flow all on-the-fly. To enable the learning of dense flow
within the neural radiance field, we design and build a bijective mapping for
flow estimation, conditioned on pose. To make the scene reconstruction benefit
from the flow estimation, we develop an effective feature enhancement mechanism
to pass canonical space features to world space representations, significantly
enhancing scene geometry. We validate our model across four important tasks,
i.e., novel view synthesis, depth estimation, camera pose prediction, and dense
optical flow estimation, using several datasets. Our approach surpasses
previous methods in almost all metrics for novel-view synthesis and depth
estimation and yields both qualitatively sound and quantitatively accurate
novel-view flow. Our project page is https://zhengxunzhi.github.io/flownerf/.
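The pose-conditioned dense flow the paper learns is closely related to the classical geometric relation sketched below: back-project pixels with depth, transform by the relative pose, and re-project. The intrinsics, pose, and constant depth here are toy values, not the paper's pipeline.

```python
# Analytical dense flow induced by depth and a relative camera pose.
import numpy as np

K = np.array([[500, 0, 160], [0, 500, 120], [0, 0, 1]], float)  # intrinsics
R = np.eye(3); t = np.array([0.1, 0.0, 0.0])                    # relative pose
H, W = 240, 320
u, v = np.meshgrid(np.arange(W), np.arange(H))
depth = np.full((H, W), 2.0)                                    # toy depth map

pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T    # 3 x HW
pts = np.linalg.inv(K) @ pix * depth.reshape(-1)                # back-project
pts2 = R @ pts + t[:, None]                                     # move to view 2
proj = K @ pts2
proj = proj[:2] / proj[2]                                       # re-project
flow = proj.T.reshape(H, W, 2) - np.stack([u, v], -1)           # dense flow
print(flow[0, 0])                                               # -> [25., 0.] px
```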
|
2503.10471 | Liming Wu | Liming Wu, Wenbing Huang, Rui Jiao, Jianxing Huang, Liwei Liu, Yipeng
Zhou, Hao Sun, Yang Liu, Fuchun Sun, Yuxiang Ren, Jirong Wen | Siamese Foundation Models for Crystal Structure Prediction | null | null | null | null | cond-mat.mtrl-sci cs.AI | http://creativecommons.org/licenses/by/4.0/ | Crystal Structure Prediction (CSP), which aims to generate stable crystal
structures from compositions, represents a critical pathway for discovering
novel materials. While structure prediction tasks in other domains, such as
proteins, have seen remarkable progress, CSP remains a relatively underexplored
area due to the more complex geometries inherent in crystal structures. In this
paper, we propose Siamese foundation models specifically designed to address
CSP. Our pretrain-finetune framework, named DAO, comprises two complementary
foundation models: DAO-G for structure generation and DAO-P for energy
prediction. Experiments on CSP benchmarks (MP-20 and MPTS-52) demonstrate that
our DAO-G significantly surpasses state-of-the-art (SOTA) methods across all
metrics. Extensive ablation studies further confirm that DAO-G excels in
generating diverse polymorphic structures, and the dataset relaxation and
energy guidance provided by DAO-P are essential for enhancing DAO-G's
performance. When applied to three real-world superconductors
($\text{CsV}_3\text{Sb}_5$, $ \text{Zr}_{16}\text{Rh}_8\text{O}_4$ and
$\text{Zr}_{16}\text{Pd}_8\text{O}_4$) that are known to be challenging to
analyze, our foundation models achieve accurate critical temperature
predictions and structure generation. For instance, on
$\text{CsV}_3\text{Sb}_5$, DAO-G generates a structure close to the
experimental one with an RMSE of 0.0085; DAO-P predicts the $T_c$ value with
high accuracy (2.26 K vs. the ground-truth value of 2.30 K). In contrast,
conventional DFT calculators like Quantum Espresso only successfully derive the
structure of the first superconductor within an acceptable time, while the RMSE
is nearly 8 times larger, and the computation speed is more than 1000 times
slower. These compelling results collectively highlight the potential of our
approach for advancing materials science research and development.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:44:16 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wu",
"Liming",
""
],
[
"Huang",
"Wenbing",
""
],
[
"Jiao",
"Rui",
""
],
[
"Huang",
"Jianxing",
""
],
[
"Liu",
"Liwei",
""
],
[
"Zhou",
"Yipeng",
""
],
[
"Sun",
"Hao",
""
],
[
"Liu",
"Yang",
... | TITLE: Siamese Foundation Models for Crystal Structure Prediction
ABSTRACT: Crystal Structure Prediction (CSP), which aims to generate stable crystal
structures from compositions, represents a critical pathway for discovering
novel materials. While structure prediction tasks in other domains, such as
proteins, have seen remarkable progress, CSP remains a relatively underexplored
area due to the more complex geometries inherent in crystal structures. In this
paper, we propose Siamese foundation models specifically designed to address
CSP. Our pretrain-finetune framework, named DAO, comprises two complementary
foundation models: DAO-G for structure generation and DAO-P for energy
prediction. Experiments on CSP benchmarks (MP-20 and MPTS-52) demonstrate that
our DAO-G significantly surpasses state-of-the-art (SOTA) methods across all
metrics. Extensive ablation studies further confirm that DAO-G excels in
generating diverse polymorphic structures, and the dataset relaxation and
energy guidance provided by DAO-P are essential for enhancing DAO-G's
performance. When applied to three real-world superconductors
($\text{CsV}_3\text{Sb}_5$, $ \text{Zr}_{16}\text{Rh}_8\text{O}_4$ and
$\text{Zr}_{16}\text{Pd}_8\text{O}_4$) that are known to be challenging to
analyze, our foundation models achieve accurate critical temperature
predictions and structure generation. For instance, on
$\text{CsV}_3\text{Sb}_5$, DAO-G generates a structure close to the
experimental one with an RMSE of 0.0085; DAO-P predicts the $T_c$ value with
high accuracy (2.26 K vs. the ground-truth value of 2.30 K). In contrast,
conventional DFT calculators like Quantum Espresso only successfully derive the
structure of the first superconductor within an acceptable time, while the RMSE
is nearly 8 times larger, and the computation speed is more than 1000 times
slower. These compelling results collectively highlight the potential of our
approach for advancing materials science research and development.
|
2503.10486 | Gaurav Kumar Gupta | Gaurav Kumar Gupta and Pranal Pande | LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3
Mini Across Chronic Health Conditions | 12 pages, 3 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are revolutionizing medical diagnostics by
enhancing both disease classification and clinical decision-making. In this
study, we evaluate the performance of two LLM-based diagnostic tools, DeepSeek
R1 and O3 Mini, using a structured dataset of symptoms and diagnoses. We
assessed their predictive accuracy at both the disease and category levels, as
well as the reliability of their confidence scores. DeepSeek R1 achieved a
disease-level accuracy of 76% and an overall accuracy of 82%, outperforming O3
Mini, which attained 72% and 75% respectively. Notably, DeepSeek R1
demonstrated exceptional performance in Mental Health, Neurological Disorders,
and Oncology, where it reached 100% accuracy, while O3 Mini excelled in
Autoimmune Disease classification with 100% accuracy. Both models, however,
struggled with Respiratory Disease classification, recording accuracies of only
40% for DeepSeek R1 and 20% for O3 Mini. Additionally, the analysis of
confidence scores revealed that DeepSeek R1 provided high-confidence
predictions in 92% of cases, compared to 68% for O3 Mini. Ethical
considerations regarding bias, model interpretability, and data privacy are
also discussed to ensure the responsible integration of LLMs into clinical
practice. Overall, our findings offer valuable insights into the strengths and
limitations of LLM-based diagnostic systems and provide a roadmap for future
enhancements in AI-driven healthcare.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 15:54:26 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Gupta",
"Gaurav Kumar",
""
],
[
"Pande",
"Pranal",
""
]
] | TITLE: LLMs in Disease Diagnosis: A Comparative Study of DeepSeek-R1 and O3
Mini Across Chronic Health Conditions
ABSTRACT: Large Language Models (LLMs) are revolutionizing medical diagnostics by
enhancing both disease classification and clinical decision-making. In this
study, we evaluate the performance of two LLM-based diagnostic tools, DeepSeek
R1 and O3 Mini, using a structured dataset of symptoms and diagnoses. We
assessed their predictive accuracy at both the disease and category levels, as
well as the reliability of their confidence scores. DeepSeek R1 achieved a
disease-level accuracy of 76% and an overall accuracy of 82%, outperforming O3
Mini, which attained 72% and 75% respectively. Notably, DeepSeek R1
demonstrated exceptional performance in Mental Health, Neurological Disorders,
and Oncology, where it reached 100% accuracy, while O3 Mini excelled in
Autoimmune Disease classification with 100% accuracy. Both models, however,
struggled with Respiratory Disease classification, recording accuracies of only
40% for DeepSeek R1 and 20% for O3 Mini. Additionally, the analysis of
confidence scores revealed that DeepSeek R1 provided high-confidence
predictions in 92% of cases, compared to 68% for O3 Mini. Ethical
considerations regarding bias, model interpretability, and data privacy are
also discussed to ensure the responsible integration of LLMs into clinical
practice. Overall, our findings offer valuable insights into the strengths and
limitations of LLM-based diagnostic systems and provide a roadmap for future
enhancements in AI-driven healthcare.
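The evaluation protocol amounts to straightforward aggregate statistics; a small sketch with made-up records (the field names are assumptions) shows how the disease-level accuracy, category-level accuracy, and high-confidence share reported above can be computed.

```python
# Aggregate accuracy and confidence statistics over diagnosis records.
preds = [
    {"pred": "asthma",   "true": "copd",     "category_pred": "Respiratory",
     "category_true": "Respiratory", "confidence": 0.95},
    {"pred": "melanoma", "true": "melanoma", "category_pred": "Oncology",
     "category_true": "Oncology",    "confidence": 0.99},
]

disease_acc  = sum(p["pred"] == p["true"] for p in preds) / len(preds)
category_acc = sum(p["category_pred"] == p["category_true"] for p in preds) / len(preds)
high_conf    = sum(p["confidence"] >= 0.9 for p in preds) / len(preds)
print(f"disease: {disease_acc:.0%}, category: {category_acc:.0%}, "
      f"high-confidence: {high_conf:.0%}")
```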
|
2503.10501 | Xudong Tan | Xudong Tan, Peng Ye, Chongjun Tu, Jianjian Cao, Yaoxin Yang, Lin
Zhang, Dongzhan Zhou, Tao Chen | TokenCarve: Information-Preserving Visual Token Compression in
Multimodal Large Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multimodal Large Language Models (MLLMs) are becoming increasingly popular,
while the high computational cost associated with multimodal data input,
particularly from visual tokens, poses a significant challenge. Existing
training-based token compression methods improve inference efficiency but
require costly retraining, while training-free methods struggle to maintain
performance when aggressively reducing token counts. In this study, we reveal
that the performance degradation of MLLMs closely correlates with the
accelerated loss of information in the attention output matrix. This insight
introduces a novel information-preserving perspective, making it possible to
maintain performance even under extreme token compression. Based on this
finding, we propose TokenCarve, a training-free, plug-and-play, two-stage token
compression framework. The first stage employs an
Information-Preservation-Guided Selection (IPGS) strategy to prune
low-information tokens, while the second stage further leverages IPGS to guide
token merging, minimizing information loss. Extensive experiments on 11
datasets and 2 model variants demonstrate the effectiveness of TokenCarve. It
can even reduce the number of visual tokens to 22.2% of the original count,
achieving a 1.23x speedup in inference, a 64% reduction in KV cache storage,
and only a 1.54% drop in accuracy. Our code is available at
https://github.com/ShawnTan86/TokenCarve.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:04:31 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Tan",
"Xudong",
""
],
[
"Ye",
"Peng",
""
],
[
"Tu",
"Chongjun",
""
],
[
"Cao",
"Jianjian",
""
],
[
"Yang",
"Yaoxin",
""
],
[
"Zhang",
"Lin",
""
],
[
"Zhou",
"Dongzhan",
""
],
[
"Chen",
"Tao",
... | TITLE: TokenCarve: Information-Preserving Visual Token Compression in
Multimodal Large Language Models
ABSTRACT: Multimodal Large Language Models (MLLMs) are becoming increasingly popular,
while the high computational cost associated with multimodal data input,
particularly from visual tokens, poses a significant challenge. Existing
training-based token compression methods improve inference efficiency but
require costly retraining, while training-free methods struggle to maintain
performance when aggressively reducing token counts. In this study, we reveal
that the performance degradation of MLLMs closely correlates with the
accelerated loss of information in the attention output matrix. This insight
introduces a novel information-preserving perspective, making it possible to
maintain performance even under extreme token compression. Based on this
finding, we propose TokenCarve, a training-free, plug-and-play, two-stage token
compression framework. The first stage employs an
Information-Preservation-Guided Selection (IPGS) strategy to prune
low-information tokens, while the second stage further leverages IPGS to guide
token merging, minimizing information loss. Extensive experiments on 11
datasets and 2 model variants demonstrate the effectiveness of TokenCarve. It
can even reduce the number of visual tokens to 22.2% of the original count,
achieving a 1.23x speedup in inference, a 64% reduction in KV cache storage,
and only a 1.54% drop in accuracy. Our code is available at
https://github.com/ShawnTan86/TokenCarve.
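A hedged sketch of information-guided two-stage compression in the spirit of the description above: rank visual tokens by the norm of their attention output as an information proxy, keep the top fraction, and merge each pruned token into its most similar kept token. This is not the official IPGS algorithm.

```python
# Prune-then-merge token compression guided by attention-output norms.
import torch

def carve_tokens(tokens, attn_out, keep_ratio=0.25):
    """tokens, attn_out: (N, D); returns a compressed (k, D) token set."""
    scores = attn_out.norm(dim=-1)                    # information proxy
    k = max(1, int(keep_ratio * tokens.size(0)))
    keep = scores.topk(k).indices
    keep_set = set(keep.tolist())
    prune = torch.tensor([i for i in range(tokens.size(0)) if i not in keep_set])
    kept = tokens[keep].clone()
    if prune.numel() > 0:
        sim = tokens[prune] @ kept.T                  # pruned-to-kept similarity
        dst = sim.argmax(dim=1)
        kept.index_add_(0, dst, tokens[prune])        # merge by summation
        counts = torch.bincount(dst, minlength=k) + 1
        kept = kept / counts.unsqueeze(1)             # average each merged group
    return kept

toks = torch.randn(576, 64)                           # e.g. ViT visual tokens
print(carve_tokens(toks, torch.randn(576, 64)).shape) # -> torch.Size([144, 64])
```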
|
2503.10512 | Devjeet Roy | Hooman Shahrokhi, Devjeet Raj Roy, Yan Yan, Venera Arnaoudova and
Janaradhan Rao Doppa | Conformal Prediction Sets for Deep Generative Models via Reduction to
Conformal Regression | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We consider the problem of generating valid and small prediction sets by
sampling outputs (e.g., software code and natural language text) from a
black-box deep generative model for a given input (e.g., textual prompt). The
validity of a prediction set is determined by a user-defined binary
admissibility function depending on the target application. For example, in a
code generation application, one may require at least one program in the set
to pass all test cases. To address this problem, we develop a simple and
effective conformal inference algorithm referred to as Generative Prediction
Sets (GPS). Given a set of calibration examples and black-box access to a deep
generative model, GPS can generate prediction sets with provable guarantees.
The key insight behind GPS is to exploit the inherent structure within the
distribution over the minimum number of samples needed to obtain an admissible
output to develop a simple conformal regression approach over the minimum
number of samples. Experiments on multiple datasets for code and math word
problems using different large language models demonstrate the efficacy of GPS
over state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:16:23 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Shahrokhi",
"Hooman",
""
],
[
"Roy",
"Devjeet Raj",
""
],
[
"Yan",
"Yan",
""
],
[
"Arnaoudova",
"Venera",
""
],
[
"Doppa",
"Janaradhan Rao",
""
]
] | TITLE: Conformal Prediction Sets for Deep Generative Models via Reduction to
Conformal Regression
ABSTRACT: We consider the problem of generating valid and small prediction sets by
sampling outputs (e.g., software code and natural language text) from a
black-box deep generative model for a given input (e.g., textual prompt). The
validity of a prediction set is determined by a user-defined binary
admissibility function depending on the target application. For example, in a
code generation application, one may require at least one program in the set
to pass all test cases. To address this problem, we develop a simple and
effective conformal inference algorithm referred to as Generative Prediction
Sets (GPS). Given a set of calibration examples and black-box access to a deep
generative model, GPS can generate prediction sets with provable guarantees.
The key insight behind GPS is to exploit the inherent structure within the
distribution over the minimum number of samples needed to obtain an admissible
output to develop a simple conformal regression approach over the minimum
number of samples. Experiments on multiple datasets for code and math word
problems using different large language models demonstrate the efficacy of GPS
over state-of-the-art methods.
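A simplified sketch of the stated idea: on calibration prompts, record the minimum number of samples before an admissible output appears, take a conformal quantile of those counts, and use it as the per-prompt sampling budget at test time. The toy generator and admissibility check are placeholders.

```python
# Conformal quantile over "minimum samples to an admissible output".
import math, random

def min_samples_to_admissible(generate, admissible, cap=50):
    for n in range(1, cap + 1):
        if admissible(generate()):
            return n
    return cap

def calibrate(prompts, generate, admissible, alpha=0.1):
    counts = sorted(min_samples_to_admissible(lambda: generate(p), admissible)
                    for p in prompts)
    rank = math.ceil((len(counts) + 1) * (1 - alpha))   # conformal quantile
    return counts[min(rank, len(counts)) - 1]

# toy "model": each sample passes the check with probability 0.4
gen = lambda p: random.random()
ok = lambda s: s < 0.4
budget = calibrate(range(200), gen, ok)
prediction_set = [gen(None) for _ in range(budget)]     # ~1-alpha coverage
print("per-prompt sample budget:", budget)
```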
|
2503.10522 | Zeyue Tian | Zeyue Tian, Yizhu Jin, Zhaoyang Liu, Ruibin Yuan, Xu Tan, Qifeng Chen,
Wei Xue, Yike Guo | AudioX: Diffusion Transformer for Anything-to-Audio Generation | The code and datasets will be available at
https://zeyuet.github.io/AudioX/ | null | null | null | cs.MM cs.CV cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Audio and music generation have emerged as crucial tasks in many
applications, yet existing approaches face significant limitations: they
operate in isolation without unified capabilities across modalities, suffer
from scarce high-quality, multi-modal training data, and struggle to
effectively integrate diverse inputs. In this work, we propose AudioX, a
unified Diffusion Transformer model for Anything-to-Audio and Music Generation.
Unlike previous domain-specific models, AudioX can generate both general audio
and music with high quality, while offering flexible natural language control
and seamless processing of various modalities including text, video, image,
music, and audio. Its key innovation is a multi-modal masked training strategy
that masks inputs across modalities and forces the model to learn from masked
inputs, yielding robust and unified cross-modal representations. To address
data scarcity, we curate two comprehensive datasets: vggsound-caps with 190K
audio captions based on the VGGSound dataset, and V2M-caps with 6 million music
captions derived from the V2M dataset. Extensive experiments demonstrate that
AudioX not only matches or outperforms state-of-the-art specialized models, but
also offers remarkable versatility in handling diverse input modalities and
generation tasks within a unified architecture. The code and datasets will be
available at https://zeyuet.github.io/AudioX/
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:30:59 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Tian",
"Zeyue",
""
],
[
"Jin",
"Yizhu",
""
],
[
"Liu",
"Zhaoyang",
""
],
[
"Yuan",
"Ruibin",
""
],
[
"Tan",
"Xu",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Xue",
"Wei",
""
],
[
"Guo",
"Yike",
""
... | TITLE: AudioX: Diffusion Transformer for Anything-to-Audio Generation
ABSTRACT: Audio and music generation have emerged as crucial tasks in many
applications, yet existing approaches face significant limitations: they
operate in isolation without unified capabilities across modalities, suffer
from scarce high-quality, multi-modal training data, and struggle to
effectively integrate diverse inputs. In this work, we propose AudioX, a
unified Diffusion Transformer model for Anything-to-Audio and Music Generation.
Unlike previous domain-specific models, AudioX can generate both general audio
and music with high quality, while offering flexible natural language control
and seamless processing of various modalities including text, video, image,
music, and audio. Its key innovation is a multi-modal masked training strategy
that masks inputs across modalities and forces the model to learn from masked
inputs, yielding robust and unified cross-modal representations. To address
data scarcity, we curate two comprehensive datasets: vggsound-caps with 190K
audio captions based on the VGGSound dataset, and V2M-caps with 6 million music
captions derived from the V2M dataset. Extensive experiments demonstrate that
AudioX not only matches or outperforms state-of-the-art specialized models, but
also offers remarkable versatility in handling diverse input modalities and
generation tasks within a unified architecture. The code and datasets will be
available at https://zeyuet.github.io/AudioX/
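The multi-modal masked training strategy can be pictured as randomly dropping whole modality inputs during training, forcing the model to rely on whichever inputs survive. A minimal sketch (rates and shapes are illustrative, not the paper's configuration):

```python
# Randomly mask out whole modality embeddings per training sample.
import torch

def mask_modalities(batch, p=0.5):
    """batch: dict of modality name -> (B, T, D) embeddings."""
    out = {}
    for name, x in batch.items():
        drop = (torch.rand(x.size(0), 1, 1) < p).float()   # per-sample mask
        out[name] = x * (1.0 - drop)                       # zero masked inputs
    return out

batch = {"text": torch.randn(4, 16, 256), "video": torch.randn(4, 32, 256)}
masked = mask_modalities(batch)
```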
|
2503.10523 | Yongqi Wang | Jun Yu, Yongqi Wang, Lei Wang, Yang Zheng, Shengfan Xu | Interactive Multimodal Fusion with Temporal Modeling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper presents our method for the estimation of valence-arousal (VA) in
the 8th Affective Behavior Analysis in-the-Wild (ABAW) competition. Our
approach integrates visual and audio information through a multimodal
framework. The visual branch uses a pre-trained ResNet model to extract spatial
features from facial images. The audio branches employ pre-trained VGG models
to extract VGGish and LogMel features from speech signals. These features
undergo temporal modeling using Temporal Convolutional Networks (TCNs). We then
apply cross-modal attention mechanisms, where visual features interact with
audio features through query-key-value attention structures. Finally, the
features are concatenated and passed through a regression layer to predict
valence and arousal. Our method achieves competitive performance on the
Aff-Wild2 dataset, demonstrating effective multimodal fusion for VA estimation
in-the-wild.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:31:56 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Yu",
"Jun",
""
],
[
"Wang",
"Yongqi",
""
],
[
"Wang",
"Lei",
""
],
[
"Zheng",
"Yang",
""
],
[
"Xu",
"Shengfan",
""
]
] | TITLE: Interactive Multimodal Fusion with Temporal Modeling
ABSTRACT: This paper presents our method for the estimation of valence-arousal (VA) in
the 8th Affective Behavior Analysis in-the-Wild (ABAW) competition. Our
approach integrates visual and audio information through a multimodal
framework. The visual branch uses a pre-trained ResNet model to extract spatial
features from facial images. The audio branches employ pre-trained VGG models
to extract VGGish and LogMel features from speech signals. These features
undergo temporal modeling using Temporal Convolutional Networks (TCNs). We then
apply cross-modal attention mechanisms, where visual features interact with
audio features through query-key-value attention structures. Finally, the
features are concatenated and passed through a regression layer to predict
valence and arousal. Our method achieves competitive performance on the
Aff-Wild2 dataset, demonstrating effective multimodal fusion for VA estimation
in-the-wild.
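A compact sketch of the pipeline as described: a temporal convolution per modality, cross-modal attention with visual queries over audio keys/values, then concatenation and a regression head for valence and arousal. Dimensions and head counts are illustrative, not the competition configuration.

```python
# TCN per modality + cross-modal attention + regression head for (V, A).
import torch
import torch.nn as nn

class FusionVA(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.tcn_v = nn.Conv1d(d, d, kernel_size=3, padding=1)   # visual TCN
        self.tcn_a = nn.Conv1d(d, d, kernel_size=3, padding=1)   # audio TCN
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * d, 2)                          # -> (V, A)

    def forward(self, vis, aud):           # both (B, T, d)
        v = self.tcn_v(vis.transpose(1, 2)).transpose(1, 2)
        a = self.tcn_a(aud.transpose(1, 2)).transpose(1, 2)
        fused, _ = self.attn(query=v, key=a, value=a)            # cross-modal
        return self.head(torch.cat([v, fused], dim=-1))          # (B, T, 2)

model = FusionVA()
out = model(torch.randn(4, 32, 128), torch.randn(4, 32, 128))
print(out.shape)                           # torch.Size([4, 32, 2])
```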
|
2503.10529 | Zilu Guo | Zilu Guo, Hongbin Lin, Zhihao Yuan, Chaoda Zheng, Pengshuo Qiu,
Dongzhi Jiang, Renrui Zhang, Chun-Mei Feng, Zhen Li | PiSA: A Self-Augmented Data Engine and Training Strategy for 3D
Understanding with Large Models | Technical Report | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | 3D Multimodal Large Language Models (MLLMs) have recently made substantial
advancements. However, their potential remains untapped, primarily due to the
limited quantity and suboptimal quality of 3D datasets. Current approaches
attempt to transfer knowledge from 2D MLLMs to expand 3D instruction data, but
still face modality and domain gaps. To this end, we introduce PiSA-Engine
(Point-Self-Augmented-Engine), a new framework for generating instruction
point-language datasets enriched with 3D spatial semantics. We observe that
existing 3D MLLMs offer a comprehensive understanding of point clouds for
annotation, while 2D MLLMs excel at cross-validation by providing complementary
information. By integrating holistic 2D and 3D insights from off-the-shelf
MLLMs, PiSA-Engine enables a continuous cycle of high-quality data generation.
We select PointLLM as the baseline and adopt this co-evolution training
framework to develop an enhanced 3D MLLM, termed PointLLM-PiSA. Additionally,
we identify limitations in previous 3D benchmarks, which often feature coarse
language captions and insufficient category diversity, resulting in inaccurate
evaluations. To address this gap, we further introduce PiSA-Bench, a
comprehensive 3D benchmark covering six key aspects with detailed and diverse
labels. Experimental results demonstrate PointLLM-PiSA's state-of-the-art
performance in zero-shot 3D object captioning and generative classification on
our PiSA-Bench, achieving significant improvements of 46.45% (+8.33%) and
63.75% (+16.25%), respectively. We will release the code, datasets, and
benchmark.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:37:26 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Guo",
"Zilu",
""
],
[
"Lin",
"Hongbin",
""
],
[
"Yuan",
"Zhihao",
""
],
[
"Zheng",
"Chaoda",
""
],
[
"Qiu",
"Pengshuo",
""
],
[
"Jiang",
"Dongzhi",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Feng",
"C... | TITLE: PiSA: A Self-Augmented Data Engine and Training Strategy for 3D
Understanding with Large Models
ABSTRACT: 3D Multimodal Large Language Models (MLLMs) have recently made substantial
advancements. However, their potential remains untapped, primarily due to the
limited quantity and suboptimal quality of 3D datasets. Current approaches
attempt to transfer knowledge from 2D MLLMs to expand 3D instruction data, but
still face modality and domain gaps. To this end, we introduce PiSA-Engine
(Point-Self-Augmented-Engine), a new framework for generating instruction
point-language datasets enriched with 3D spatial semantics. We observe that
existing 3D MLLMs offer a comprehensive understanding of point clouds for
annotation, while 2D MLLMs excel at cross-validation by providing complementary
information. By integrating holistic 2D and 3D insights from off-the-shelf
MLLMs, PiSA-Engine enables a continuous cycle of high-quality data generation.
We select PointLLM as the baseline and adopt this co-evolution training
framework to develop an enhanced 3D MLLM, termed PointLLM-PiSA. Additionally,
we identify limitations in previous 3D benchmarks, which often feature coarse
language captions and insufficient category diversity, resulting in inaccurate
evaluations. To address this gap, we further introduce PiSA-Bench, a
comprehensive 3D benchmark covering six key aspects with detailed and diverse
labels. Experimental results demonstrate PointLLM-PiSA's state-of-the-art
performance in zero-shot 3D object captioning and generative classification on
our PiSA-Bench, achieving significant improvements of 46.45% (+8.33%) and
63.75% (+16.25%), respectively. We will release the code, datasets, and
benchmark.
|
2503.10539 | Reshma Rastogi | Reshma Rastogi and Ankush Bisht and Sanjay Kumar and Suresh Chandra | GBSVR: Granular Ball Support Vector Regression | null | null | null | null | cs.LG cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Support Vector Regression (SVR) and its variants are widely used to handle
regression tasks; however, since their solution involves solving an expensive
quadratic programming problem, their application is limited, especially when
dealing with large datasets. Additionally, SVR uses an epsilon-insensitive loss
function, which is sensitive to outliers and can therefore adversely affect its
performance. We propose Granular Ball Support Vector Regression (GBSVR) to
tackle the problem of regression using the granular ball concept. These balls are
useful in simplifying complex data spaces for machine learning tasks, however,
to the best of our knowledge, they have not been sufficiently explored for
regression problems. Granular balls group the data points into balls based on
their proximity and reduce the computational cost in SVR by replacing the large
number of data points with far fewer granular balls. This work also suggests a
discretization method for continuous-valued attributes to facilitate the
construction of granular balls. The effectiveness of the proposed approach is
evaluated on several benchmark datasets and it outperforms existing
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:52:43 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Rastogi",
"Reshma",
""
],
[
"Bisht",
"Ankush",
""
],
[
"Kumar",
"Sanjay",
""
],
[
"Chandra",
"Suresh",
""
]
] | TITLE: GBSVR: Granular Ball Support Vector Regression
ABSTRACT: Support Vector Regression (SVR) and its variants are widely used to handle
regression tasks; however, since their solution involves solving an expensive
quadratic programming problem, their application is limited, especially when
dealing with large datasets. Additionally, SVR uses an epsilon-insensitive loss
function, which is sensitive to outliers and can therefore adversely affect its
performance. We propose Granular Ball Support Vector Regression (GBSVR) to
tackle the problem of regression using the granular ball concept. These balls are
useful in simplifying complex data spaces for machine learning tasks, however,
to the best of our knowledge, they have not been sufficiently explored for
regression problems. Granular balls group the data points into balls based on
their proximity and reduce the computational cost in SVR by replacing the large
number of data points with far fewer granular balls. This work also suggests a
discretization method for continuous-valued attributes to facilitate the
construction of granular balls. The effectiveness of the proposed approach is
evaluated on several benchmark datasets and it outperforms existing
state-of-the-art approaches.
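One plausible reading of the granular-ball construction, sketched below with k-means as the grouping step (an assumption; the paper's own construction may differ): fit a standard SVR on ball centers weighted by ball size, rather than on all the points.

```python
# Replace 5000 points with 50 granular-ball centers, then fit SVR on them.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.2, 5000)   # noisy regression data

n_balls = 50
km = KMeans(n_clusters=n_balls, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_
center_y = np.array([y[km.labels_ == i].mean() for i in range(n_balls)])
sizes = np.bincount(km.labels_, minlength=n_balls)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05)
svr.fit(centers, center_y, sample_weight=sizes)    # 50 balls vs 5000 points
print("R^2 on the full data:", round(svr.score(X, y), 3))
```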
|
2503.10545 | Vastal Srivastava Mr. | Vatsal Srivastava | From Linear to Spline-Based Classification: Developing and Enhancing SMPA
for Noisy Non-Linear Datasets | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building upon the concepts and mechanisms used for the development in Moving
Points Algorithm, we will now explore how non-linear decision boundaries can be
developed for classification tasks. First, we will look at the classification
performance of MPA and some minor developments in the original algorithm. We
then discuss the concepts behind using cubic splines for classification with a
similar learning mechanism and finally analyze training results on synthetic
datasets with known properties.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 16:58:40 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Srivastava",
"Vatsal",
""
]
] | TITLE: From Linear to Spline-Based Classification: Developing and Enhancing SMPA
for Noisy Non-Linear Datasets
ABSTRACT: Building upon the concepts and mechanisms used for the development in Moving
Points Algorithm, we will now explore how non linear decision boundaries can be
developed for classification tasks. First we will look at the classification
performance of MPA and some minor developments in the original algorithm. We
then discuss the concepts behind using cubic splines for classification with a
similar learning mechanism and finally analyze training results on synthetic
datasets with known properties.
|
2503.10560 | Nicolas Pr\"ollochs | Kirill Solovev, Nicolas Pr\"ollochs | References to unbiased sources increase the helpfulness of community
fact-checks | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community-based fact-checking is a promising approach to address
misinformation on social media at scale. However, an understanding of what
makes community-created fact-checks helpful to users is still in its infancy.
In this paper, we analyze the determinants of the helpfulness of
community-created fact-checks. For this purpose, we draw upon a unique dataset
of real-world community-created fact-checks and helpfulness ratings from X's
(formerly Twitter) Community Notes platform. Our empirical analysis implies
that the key determinant of helpfulness in community-based fact-checking is
whether users provide links to external sources to underpin their assertions.
On average, the odds for community-created fact-checks to be perceived as
helpful are 2.70 times higher if they provide links to external sources.
Furthermore, we demonstrate that the helpfulness of community-created
fact-checks varies depending on their level of political bias. Here, we find
that community-created fact-checks linking to high-bias sources (of either
political side) are perceived as significantly less helpful. This suggests that
the rating mechanism on the Community Notes platform successfully penalizes
one-sidedness and politically motivated reasoning. These findings have
important implications for social media platforms, which can utilize our
results to optimize their community-based fact-checking systems.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:12:01 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Solovev",
"Kirill",
""
],
[
"Pröllochs",
"Nicolas",
""
]
] | TITLE: References to unbiased sources increase the helpfulness of community
fact-checks
ABSTRACT: Community-based fact-checking is a promising approach to address
misinformation on social media at scale. However, an understanding of what
makes community-created fact-checks helpful to users is still in its infancy.
In this paper, we analyze the determinants of the helpfulness of
community-created fact-checks. For this purpose, we draw upon a unique dataset
of real-world community-created fact-checks and helpfulness ratings from X's
(formerly Twitter) Community Notes platform. Our empirical analysis implies
that the key determinant of helpfulness in community-based fact-checking is
whether users provide links to external sources to underpin their assertions.
On average, the odds for community-created fact-checks to be perceived as
helpful are 2.70 times higher if they provide links to external sources.
Furthermore, we demonstrate that the helpfulness of community-created
fact-checks varies depending on their level of political bias. Here, we find
that community-created fact-checks linking to high-bias sources (of either
political side) are perceived as significantly less helpful. This suggests that
the rating mechanism on the Community Notes platform successfully penalizes
one-sidedness and politically motivated reasoning. These findings have
important implications for social media platforms, which can utilize our
results to optimize their community-based fact-checking systems.
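An odds ratio like the reported 2.70 is what one reads off a logistic regression coefficient; here is a toy sketch with synthetic data in which the true odds ratio is built in, recovered as the exponentiated coefficient.

```python
# Odds ratio for "cites an external source" via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
has_link = rng.integers(0, 2, 2000)                 # fact-check cites a source?
logit = -0.5 + np.log(2.70) * has_link              # built-in odds ratio 2.70
helpful = rng.random(2000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(C=1e6).fit(has_link.reshape(-1, 1), helpful)
print("estimated odds ratio:", round(float(np.exp(model.coef_[0, 0])), 2))
```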
|
2503.10567 | Nannan Wu | Nannan Wu, Zengqiang Yan, Nong Sang, Li Yu, Chang Wen Chen | FedPCA: Noise-Robust Fair Federated Learning via Performance-Capacity
Analysis | Preprint | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training a model that effectively handles both common and rare data, i.e.,
achieving performance fairness, is crucial in federated learning (FL). While
existing fair FL methods have shown effectiveness, they remain vulnerable to
mislabeled data. Ensuring robustness in fair FL is therefore essential.
However, fairness and robustness inherently compete, which causes robust
strategies to hinder fairness. In this paper, we attribute this competition to
the homogeneity in loss patterns exhibited by rare and mislabeled data clients,
preventing existing loss-based fair and robust FL methods from effectively
distinguishing and handling these two distinct client types. To address this,
we propose performance-capacity analysis, which jointly considers model
performance on each client and its capacity to handle the dataset, measured by
loss and a newly introduced feature dispersion score. This allows mislabeled
clients to be identified by their significantly deviated performance relative
to capacity while preserving rare data clients. Building on this, we introduce
FedPCA, an FL method that robustly achieves fairness. FedPCA first identifies
mislabeled clients via a Gaussian Mixture Model on loss-dispersion pairs, then
applies fairness and robustness strategies in global aggregation and local
training by adjusting client weights and selectively using reliable data.
Extensive experiments on three datasets demonstrate FedPCA's effectiveness in
tackling this complex challenge. Code will be publicly available upon
acceptance.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:18:18 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wu",
"Nannan",
""
],
[
"Yan",
"Zengqiang",
""
],
[
"Sang",
"Nong",
""
],
[
"Yu",
"Li",
""
],
[
"Chen",
"Chang Wen",
""
]
] | TITLE: FedPCA: Noise-Robust Fair Federated Learning via Performance-Capacity
Analysis
ABSTRACT: Training a model that effectively handles both common and rare data, i.e.,
achieving performance fairness, is crucial in federated learning (FL). While
existing fair FL methods have shown effectiveness, they remain vulnerable to
mislabeled data. Ensuring robustness in fair FL is therefore essential.
However, fairness and robustness inherently compete, which causes robust
strategies to hinder fairness. In this paper, we attribute this competition to
the homogeneity in loss patterns exhibited by rare and mislabeled data clients,
preventing existing loss-based fair and robust FL methods from effectively
distinguishing and handling these two distinct client types. To address this,
we propose performance-capacity analysis, which jointly considers model
performance on each client and its capacity to handle the dataset, measured by
loss and a newly introduced feature dispersion score. This allows mislabeled
clients to be identified by their significantly deviated performance relative
to capacity while preserving rare data clients. Building on this, we introduce
FedPCA, an FL method that robustly achieves fairness. FedPCA first identifies
mislabeled clients via a Gaussian Mixture Model on loss-dispersion pairs, then
applies fairness and robustness strategies in global aggregation and local
training by adjusting client weights and selectively using reliable data.
Extensive experiments on three datasets demonstrate FedPCA's effectiveness in
tackling this complex challenge. Code will be publicly available upon
acceptance.
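A sketch of the screening step as described: fit a two-component Gaussian Mixture on per-client (loss, feature-dispersion) pairs and flag the component whose loss deviates most from its capacity. The synthetic client statistics below only illustrate the mechanics.

```python
# Flag mislabeled clients with a GMM over (loss, dispersion) pairs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
clean = np.column_stack([rng.normal(0.4, 0.05, 18),   # loss
                         rng.normal(1.0, 0.10, 18)])  # dispersion score
mislabeled = np.column_stack([rng.normal(1.5, 0.10, 2),
                              rng.normal(1.0, 0.10, 2)])
stats = np.vstack([clean, mislabeled])                # one row per client

gmm = GaussianMixture(n_components=2, random_state=0).fit(stats)
labels = gmm.predict(stats)
bad = labels == labels[np.argmax(stats[:, 0])]        # high-loss component
print("flagged clients:", np.where(bad)[0])           # -> the last two
```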
|
2503.10573 | Afrar Jahin | Afrar Jahin, Arif Hassan Zidan, Yu Bao, Shizhe Liang, Tianming Liu,
Wei Zhang | Unveiling the Mathematical Reasoning in DeepSeek Models: A Comparative
Study of Large Language Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the rapid evolution of Artificial Intelligence (AI), Large Language
Models (LLMs) have reshaped the frontiers of various fields, spanning
healthcare, public health, engineering, science, agriculture, education, arts,
humanities, and mathematical reasoning. Among these advancements, DeepSeek
models have emerged as noteworthy contenders, demonstrating promising
capabilities that set them apart from their peers. While previous studies have
conducted comparative analyses of LLMs, few have delivered a comprehensive
evaluation of mathematical reasoning across a broad spectrum of LLMs. In this
work, we aim to bridge this gap by conducting an in-depth comparative study,
focusing on the strengths and limitations of DeepSeek models in relation to
their leading counterparts. In particular, our study systematically evaluates
the mathematical reasoning performance of two DeepSeek models alongside five
prominent LLMs across three independent benchmark datasets. The findings reveal
several key insights: (1) DeepSeek-R1 consistently achieved the highest
accuracy on two of the three datasets, demonstrating strong mathematical
reasoning capabilities. (2) The distilled variants of LLMs significantly
underperformed compared to their peers, highlighting potential drawbacks of
distillation techniques. (3) In terms of response time, Gemini 2.0 Flash
demonstrated the fastest processing speed, outperforming other models in
efficiency, which is a crucial factor for real-time applications. Beyond these
quantitative assessments, we delve into how architecture, training, and
optimization impact LLMs' mathematical reasoning. Moreover, our study goes
beyond mere performance comparison by identifying key areas for future
advancements in LLM-driven mathematical reasoning. This research enhances our
understanding of LLMs' mathematical reasoning and lays the groundwork for
future advancements.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:23:45 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Jahin",
"Afrar",
""
],
[
"Zidan",
"Arif Hassan",
""
],
[
"Bao",
"Yu",
""
],
[
"Liang",
"Shizhe",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhang",
"Wei",
""
]
] | TITLE: Unveiling the Mathematical Reasoning in DeepSeek Models: A Comparative
Study of Large Language Models
ABSTRACT: With the rapid evolution of Artificial Intelligence (AI), Large Language
Models (LLMs) have reshaped the frontiers of various fields, spanning
healthcare, public health, engineering, science, agriculture, education, arts,
humanities, and mathematical reasoning. Among these advancements, DeepSeek
models have emerged as noteworthy contenders, demonstrating promising
capabilities that set them apart from their peers. While previous studies have
conducted comparative analyses of LLMs, few have delivered a comprehensive
evaluation of mathematical reasoning across a broad spectrum of LLMs. In this
work, we aim to bridge this gap by conducting an in-depth comparative study,
focusing on the strengths and limitations of DeepSeek models in relation to
their leading counterparts. In particular, our study systematically evaluates
the mathematical reasoning performance of two DeepSeek models alongside five
prominent LLMs across three independent benchmark datasets. The findings reveal
several key insights: (1) DeepSeek-R1 consistently achieved the highest
accuracy on two of the three datasets, demonstrating strong mathematical
reasoning capabilities. (2) The distilled variants of LLMs significantly
underperformed compared to their peers, highlighting potential drawbacks of
distillation techniques. (3) In terms of response time, Gemini 2.0 Flash
demonstrated the fastest processing speed, outperforming other models in
efficiency, which is a crucial factor for real-time applications. Beyond these
quantitative assessments, we delve into how architecture, training, and
optimization impact LLMs' mathematical reasoning. Moreover, our study goes
beyond mere performance comparison by identifying key areas for future
advancements in LLM-driven mathematical reasoning. This research enhances our
understanding of LLMs' mathematical reasoning and lays the groundwork for
future advancements.
|
2503.10592 | Hao He | Hao He, Ceyuan Yang, Shanchuan Lin, Yinghao Xu, Meng Wei, Liangke Gui,
Qi Zhao, Gordon Wetzstein, Lu Jiang, Hongsheng Li | CameraCtrl II: Dynamic Scene Exploration via Camera-controlled Video
Diffusion Models | Project page: https://hehao13.github.io/Projects-CameraCtrl-II/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper introduces CameraCtrl II, a framework that enables large-scale
dynamic scene exploration through a camera-controlled video diffusion model.
Previous camera-conditioned video generative models suffer from diminished
video dynamics and limited range of viewpoints when generating videos with
large camera movement. We take an approach that progressively expands the
generation of dynamic scenes -- first enhancing dynamic content within
individual video clips, then extending this capability to create seamless
explorations across broad viewpoint ranges. Specifically, we construct a
dataset featuring a large degree of dynamics with camera parameter annotations
for training while designing a lightweight camera injection module and training
scheme to preserve dynamics of the pretrained models. Building on these
improved single-clip techniques, we enable extended scene exploration by
allowing users to iteratively specify camera trajectories for generating
coherent video sequences. Experiments across diverse scenarios demonstrate that
CameraCtrl II enables camera-controlled dynamic scene synthesis with
substantially wider spatial exploration than previous approaches.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:42:01 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"He",
"Hao",
""
],
[
"Yang",
"Ceyuan",
""
],
[
"Lin",
"Shanchuan",
""
],
[
"Xu",
"Yinghao",
""
],
[
"Wei",
"Meng",
""
],
[
"Gui",
"Liangke",
""
],
[
"Zhao",
"Qi",
""
],
[
"Wetzstein",
"Gordon",
... | TITLE: CameraCtrl II: Dynamic Scene Exploration via Camera-controlled Video
Diffusion Models
ABSTRACT: This paper introduces CameraCtrl II, a framework that enables large-scale
dynamic scene exploration through a camera-controlled video diffusion model.
Previous camera-conditioned video generative models suffer from diminished
video dynamics and limited range of viewpoints when generating videos with
large camera movement. We take an approach that progressively expands the
generation of dynamic scenes -- first enhancing dynamic content within
individual video clips, then extending this capability to create seamless
explorations across broad viewpoint ranges. Specifically, we construct a
dataset featuring a large degree of dynamics with camera parameter annotations
for training while designing a lightweight camera injection module and training
scheme to preserve dynamics of the pretrained models. Building on these
improved single-clip techniques, we enable extended scene exploration by
allowing users to iteratively specify camera trajectories for generating
coherent video sequences. Experiments across diverse scenarios demonstrate that
CameraCtrl II enables camera-controlled dynamic scene synthesis with
substantially wider spatial exploration than previous approaches.
|
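The abstract mentions a "lightweight camera injection module" trained so as to "preserve dynamics of the pretrained models", but gives no architectural detail. Below is a minimal sketch of one plausible realization, assuming flattened per-frame extrinsics and a zero-initialized output projection (a common trick for adding conditioning without disturbing a pretrained model); the paper's actual design, e.g. Plucker ray embeddings, may differ.

```python
# Sketch of a possible camera injection module: per-frame camera parameters
# are embedded by a small MLP and added to the video latents. The output
# projection is zero-initialized, so before fine-tuning the module is a
# no-op and the pretrained model's dynamics are left intact.
import torch
import torch.nn as nn

class CameraInjection(nn.Module):
    def __init__(self, cam_dim: int = 12, latent_dim: int = 320):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(cam_dim, latent_dim),
            nn.SiLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.out = nn.Linear(latent_dim, latent_dim)
        nn.init.zeros_(self.out.weight)   # injected signal starts at zero
        nn.init.zeros_(self.out.bias)

    def forward(self, latents: torch.Tensor, cams: torch.Tensor) -> torch.Tensor:
        # latents: (B, T, C, H, W); cams: (B, T, cam_dim), e.g. flattened [R|t].
        cond = self.out(self.embed(cams))        # (B, T, C)
        return latents + cond[..., None, None]   # broadcast over H and W
```

All dimensions and the choice of raw extrinsics are assumptions for illustration; only the rationale of starting from the unmodified pretrained behavior is implied by the abstract.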
2503.10596 | Tianheng Cheng | Rui Hu, Lianghui Zhu, Yuxuan Zhang, Tianheng Cheng, Lei Liu, Heng Liu,
Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang | GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding | Work in progress. Code: https://github.com/hustvl/GroundingSuite | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Pixel grounding, encompassing tasks such as Referring Expression Segmentation
(RES), has garnered considerable attention due to its immense potential for
bridging the gap between vision and language modalities. However, advancements
in this domain are currently constrained by limitations inherent in existing
datasets, including limited object categories, insufficient textual diversity,
and a scarcity of high-quality annotations. To mitigate these limitations, we
introduce GroundingSuite, which comprises: (1) an automated data annotation
framework leveraging multiple Vision-Language Model (VLM) agents; (2) a
large-scale training dataset encompassing 9.56 million diverse referring
expressions and their corresponding segmentations; and (3) a meticulously
curated evaluation benchmark consisting of 3,800 images. The GroundingSuite
training dataset facilitates substantial performance improvements, enabling
models trained on it to achieve state-of-the-art results: specifically, a cIoU
of 68.9 on gRefCOCO and a gIoU of 55.3 on RefCOCOm. Moreover, the
GroundingSuite annotation framework demonstrates superior efficiency compared
to the current leading data annotation method, i.e., $4.5 \times$ faster than
GLaMM.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 17:43:10 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Hu",
"Rui",
""
],
[
"Zhu",
"Lianghui",
""
],
[
"Zhang",
"Yuxuan",
""
],
[
"Cheng",
"Tianheng",
""
],
[
"Liu",
"Lei",
""
],
[
"Liu",
"Heng",
""
],
[
"Ran",
"Longjin",
""
],
[
"Chen",
"Xiaoxin",
... | TITLE: GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding
ABSTRACT: Pixel grounding, encompassing tasks such as Referring Expression Segmentation
(RES), has garnered considerable attention due to its immense potential for
bridging the gap between vision and language modalities. However, advancements
in this domain are currently constrained by limitations inherent in existing
datasets, including limited object categories, insufficient textual diversity,
and a scarcity of high-quality annotations. To mitigate these limitations, we
introduce GroundingSuite, which comprises: (1) an automated data annotation
framework leveraging multiple Vision-Language Model (VLM) agents; (2) a
large-scale training dataset encompassing 9.56 million diverse referring
expressions and their corresponding segmentations; and (3) a meticulously
curated evaluation benchmark consisting of 3,800 images. The GroundingSuite
training dataset facilitates substantial performance improvements, enabling
models trained on it to achieve state-of-the-art results: specifically, a cIoU
of 68.9 on gRefCOCO and a gIoU of 55.3 on RefCOCOm. Moreover, the
GroundingSuite annotation framework demonstrates superior efficiency compared
to the current leading data annotation method, i.e., $4.5 \times$ faster than
GLaMM.
|
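The reported cIoU and gIoU follow the usual definitions in referring-segmentation benchmarks: gIoU is the mean of per-image IoUs, while cIoU divides the intersection accumulated over the whole set by the accumulated union. A minimal sketch, assuming boolean NumPy masks:

```python
# Sketch of the two mask metrics quoted in the abstract, under their
# conventional definitions; benchmark implementations may differ in edge
# cases (e.g., how empty ground-truth masks are scored).
import numpy as np

def giou_ciou(preds: list[np.ndarray], gts: list[np.ndarray]) -> tuple[float, float]:
    per_image, inter_sum, union_sum = [], 0, 0
    for pred, gt in zip(preds, gts):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        per_image.append(inter / union if union else 1.0)  # empty-vs-empty = correct
        inter_sum += inter
        union_sum += union
    giou = float(np.mean(per_image))                     # mean per-image IoU
    ciou = inter_sum / union_sum if union_sum else 1.0   # dataset-level IoU
    return giou, ciou
```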