id string | submitter string | authors string | title string | comments string | journal-ref string | doi string | report-no string | categories string | license string | abstract string | versions list | update_date timestamp[s] | authors_parsed list | prompt string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2502.04323 | Calvin Osborne | Calvin Osborne and Eliza O'Reilly | The Uniformly Rotated Mondrian Kernel | 22 pages, 4 figures, accepted to 28th International Conference on
Artificial Intelligence and Statistics (AISTATS 2025) | null | null | null | cs.LG math.PR | http://creativecommons.org/licenses/by/4.0/ | Random feature maps are used to decrease the computational cost of kernel
machines in large-scale problems. The Mondrian kernel is one such example of a
fast random feature approximation of the Laplace kernel, generated by a
computationally efficient hierarchical random partition of the input space
known as the Mondrian process. In this work, we study a variation of this
random feature map by applying a uniform random rotation to the input space
before running the Mondrian process to approximate a kernel that is invariant
under rotations. We obtain a closed-form expression for the isotropic kernel
that is approximated, as well as a uniform convergence rate of the uniformly
rotated Mondrian kernel to this limit. To this end, we utilize techniques from
the theory of stationary random tessellations in stochastic geometry and prove
a new result on the geometry of the typical cell of the superposition of
uniformly rotated Mondrian tessellations. Finally, we test the empirical
performance of this random feature map on both synthetic and real-world
datasets, demonstrating its improved performance over the Mondrian kernel on a
dataset that is debiased from the standard coordinate axes.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2025 18:59:24 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 18:50:29 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Osborne",
"Calvin",
""
],
[
"O'Reilly",
"Eliza",
""
]
] | TITLE: The Uniformly Rotated Mondrian Kernel
ABSTRACT: Random feature maps are used to decrease the computational cost of kernel
machines in large-scale problems. The Mondrian kernel is one such example of a
fast random feature approximation of the Laplace kernel, generated by a
computationally efficient hierarchical random partition of the input space
known as the Mondrian process. In this work, we study a variation of this
random feature map by applying a uniform random rotation to the input space
before running the Mondrian process to approximate a kernel that is invariant
under rotations. We obtain a closed-form expression for the isotropic kernel
that is approximated, as well as a uniform convergence rate of the uniformly
rotated Mondrian kernel to this limit. To this end, we utilize techniques from
the theory of stationary random tessellations in stochastic geometry and prove
a new result on the geometry of the typical cell of the superposition of
uniformly rotated Mondrian tessellations. Finally, we test the empirical
performance of this random feature map on both synthetic and real-world
datasets, demonstrating its improved performance over the Mondrian kernel on a
dataset that is debiased from the standard coordinate axes.
|
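The rotation step described in the abstract above can be illustrated directly. The sketch below is my own assumption of how such a preprocessing step might look, not the authors' code: it samples a rotation uniformly at random (Haar measure on SO(d)) via a QR decomposition of a Gaussian matrix and applies it to the inputs; the Mondrian-process feature map itself is not shown, and the data and function names are hypothetical.

```python
import numpy as np

def uniform_random_rotation(d, rng):
    """Sample a rotation uniformly (Haar measure on SO(d)) via QR of a Gaussian matrix."""
    A = rng.standard_normal((d, d))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))      # fix column signs so Q is Haar-distributed on O(d)
    if np.linalg.det(Q) < 0:         # restrict to proper rotations (determinant +1)
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))    # toy inputs
R = uniform_random_rotation(X.shape[1], rng)
X_rotated = X @ R.T                  # rotate inputs before running the Mondrian process
```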
2502.06734 | Bojia Zi | Bojia Zi, Penghui Ruan, Marco Chen, Xianbiao Qi, Shaozhe Hao, Shihao
Zhao, Youze Huang, Bin Liang, Rong Xiao, Kam-Fai Wong | Se\~norita-2M: A High-Quality Instruction-based Dataset for General
Video Editing by Video Specialists | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in video generation have spurred the development of video
editing techniques, which can be divided into inversion-based and end-to-end
methods. However, current video editing methods still suffer from several
challenges. Inversion-based methods, though training-free and flexible, are
time-consuming during inference, struggle with fine-grained editing
instructions, and produce artifacts and jitter. On the other hand, end-to-end
methods, which rely on edited video pairs for training, offer faster inference
speeds but often produce poor editing results due to a lack of high-quality
training video pairs. In this paper, to close the gap in end-to-end methods, we
introduce Se\~norita-2M, a high-quality video editing dataset. Se\~norita-2M
consists of approximately 2 million video editing pairs. It is built by
crafting four high-quality, specialized video editing models, each designed and
trained by our team to achieve state-of-the-art editing results. We also
propose a filtering pipeline to eliminate poorly edited video pairs.
Furthermore, we explore common video editing architectures to identify the most
effective structure based on current pre-trained generative models. Extensive
experiments show that our dataset can help to yield remarkably high-quality
video editing results. More details are available at
https://senorita-2m-dataset.github.io.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 17:58:22 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 07:09:58 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 07:47:48 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zi",
"Bojia",
""
],
[
"Ruan",
"Penghui",
""
],
[
"Chen",
"Marco",
""
],
[
"Qi",
"Xianbiao",
""
],
[
"Hao",
"Shaozhe",
""
],
[
"Zhao",
"Shihao",
""
],
[
"Huang",
"Youze",
""
],
[
"Liang",
"Bin",... | TITLE: Se\~norita-2M: A High-Quality Instruction-based Dataset for General
Video Editing by Video Specialists
ABSTRACT: Recent advancements in video generation have spurred the development of video
editing techniques, which can be divided into inversion-based and end-to-end
methods. However, current video editing methods still suffer from several
challenges. Inversion-based methods, though training-free and flexible, are
time-consuming during inference, struggle with fine-grained editing
instructions, and produce artifacts and jitter. On the other hand, end-to-end
methods, which rely on edited video pairs for training, offer faster inference
speeds but often produce poor editing results due to a lack of high-quality
training video pairs. In this paper, to close the gap in end-to-end methods, we
introduce Se\~norita-2M, a high-quality video editing dataset. Se\~norita-2M
consists of approximately 2 million video editing pairs. It is built by
crafting four high-quality, specialized video editing models, each designed and
trained by our team to achieve state-of-the-art editing results. We also
propose a filtering pipeline to eliminate poorly edited video pairs.
Furthermore, we explore common video editing architectures to identify the most
effective structure based on current pre-trained generative models. Extensive
experiments show that our dataset can help to yield remarkably high-quality
video editing results. More details are available at
https://senorita-2m-dataset.github.io.
|
2502.08377 | Liying Yang | Liying Yang, Chen Liu, Zhenwei Zhu, Ajian Liu, Hui Ma, Jian Nong,
Yanyan Liang | Not All Frame Features Are Equal: Video-to-4D Generation via Decoupling
Dynamic-Static Features | Revised version | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the generation of dynamic 3D objects from a video has shown
impressive results. Existing methods directly optimize Gaussians using whole
information in frames. However, when dynamic regions are interwoven with static
regions within frames, particularly if the static regions account for a large
proportion, existing methods often overlook information in dynamic regions and
are prone to overfitting on static regions. This leads to producing results
with blurry textures. We consider that decoupling dynamic-static features to
enhance dynamic representations can alleviate this issue. Thus, we propose a
dynamic-static feature decoupling module (DSFD). Along the temporal axis, it
regards the regions of current frame features that possess significant
differences relative to reference frame features as dynamic features.
Conversely, the remaining parts are the static features. Then, we acquire
decoupled features driven by dynamic features and current frame features.
Moreover, to further enhance the dynamic representation of decoupled features
from different viewpoints and ensure accurate motion prediction, we design a
temporal-spatial similarity fusion module (TSSF). Along the spatial axes, it
adaptively selects similar information from dynamic regions. Building on the
above, we construct a novel approach, DS4D. Experimental results verify our
method achieves state-of-the-art (SOTA) results in video-to-4D. In addition,
the experiments on a real-world scenario dataset demonstrate its effectiveness
on the 4D scene. Our code will be publicly available.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 13:08:35 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 02:49:03 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Yang",
"Liying",
""
],
[
"Liu",
"Chen",
""
],
[
"Zhu",
"Zhenwei",
""
],
[
"Liu",
"Ajian",
""
],
[
"Ma",
"Hui",
""
],
[
"Nong",
"Jian",
""
],
[
"Liang",
"Yanyan",
""
]
] | TITLE: Not All Frame Features Are Equal: Video-to-4D Generation via Decoupling
Dynamic-Static Features
ABSTRACT: Recently, the generation of dynamic 3D objects from a video has shown
impressive results. Existing methods directly optimize Gaussians using whole
information in frames. However, when dynamic regions are interwoven with static
regions within frames, particularly if the static regions account for a large
proportion, existing methods often overlook information in dynamic regions and
are prone to overfitting on static regions. This leads to producing results
with blurry textures. We consider that decoupling dynamic-static features to
enhance dynamic representations can alleviate this issue. Thus, we propose a
dynamic-static feature decoupling module (DSFD). Along the temporal axis, it
regards the regions of current frame features that possess significant
differences relative to reference frame features as dynamic features.
Conversely, the remaining parts are the static features. Then, we acquire
decoupled features driven by dynamic features and current frame features.
Moreover, to further enhance the dynamic representation of decoupled features
from different viewpoints and ensure accurate motion prediction, we design a
temporal-spatial similarity fusion module (TSSF). Along the spatial axes, it
adaptively selects similar information from dynamic regions. Building on the
above, we construct a novel approach, DS4D. Experimental results verify our
method achieves state-of-the-art (SOTA) results in video-to-4D. In addition,
the experiments on a real-world scenario dataset demonstrate its effectiveness
on the 4D scene. Our code will be publicly available.
|
2502.08590 | Yujie Zhou | Yujie Zhou, Jiazi Bu, Pengyang Ling, Pan Zhang, Tong Wu, Qidong Huang,
Jinsong Li, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Anyi Rao, Jiaqi Wang, Li
Niu | Light-A-Video: Training-free Video Relighting via Progressive Light
Fusion | Project Page: https://bujiazi.github.io/light-a-video.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in image relighting models, driven by large-scale
datasets and pre-trained diffusion models, have enabled the imposition of
consistent lighting. However, video relighting still lags, primarily due to the
excessive training costs and the scarcity of diverse, high-quality video
relighting datasets. A simple application of image relighting models on a
frame-by-frame basis leads to several issues: lighting source inconsistency and
relighted appearance inconsistency, resulting in flickers in the generated
videos. In this work, we propose Light-A-Video, a training-free approach to
achieve temporally smooth video relighting. Adapted from image relighting
models, Light-A-Video introduces two key techniques to enhance lighting
consistency. First, we design a Consistent Light Attention (CLA) module, which
enhances cross-frame interactions within the self-attention layers of the image
relighting model to stabilize the generation of the background lighting source.
Second, leveraging the physical principle of light transport independence, we
apply linear blending between the source video's appearance and the relighted
appearance, using a Progressive Light Fusion (PLF) strategy to ensure smooth
temporal transitions in illumination. Experiments show that Light-A-Video
improves the temporal consistency of relighted video while maintaining the
relighted image quality, ensuring coherent lighting transitions across frames.
Project page: https://bujiazi.github.io/light-a-video.github.io/.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 17:24:19 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 08:38:20 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhou",
"Yujie",
""
],
[
"Bu",
"Jiazi",
""
],
[
"Ling",
"Pengyang",
""
],
[
"Zhang",
"Pan",
""
],
[
"Wu",
"Tong",
""
],
[
"Huang",
"Qidong",
""
],
[
"Li",
"Jinsong",
""
],
[
"Dong",
"Xiaoyi",
... | TITLE: Light-A-Video: Training-free Video Relighting via Progressive Light
Fusion
ABSTRACT: Recent advancements in image relighting models, driven by large-scale
datasets and pre-trained diffusion models, have enabled the imposition of
consistent lighting. However, video relighting still lags, primarily due to the
excessive training costs and the scarcity of diverse, high-quality video
relighting datasets. A simple application of image relighting models on a
frame-by-frame basis leads to several issues: lighting source inconsistency and
relighted appearance inconsistency, resulting in flickers in the generated
videos. In this work, we propose Light-A-Video, a training-free approach to
achieve temporally smooth video relighting. Adapted from image relighting
models, Light-A-Video introduces two key techniques to enhance lighting
consistency. First, we design a Consistent Light Attention (CLA) module, which
enhances cross-frame interactions within the self-attention layers of the image
relighting model to stabilize the generation of the background lighting source.
Second, leveraging the physical principle of light transport independence, we
apply linear blending between the source video's appearance and the relighted
appearance, using a Progressive Light Fusion (PLF) strategy to ensure smooth
temporal transitions in illumination. Experiments show that Light-A-Video
improves the temporal consistency of relighted video while maintaining the
relighted image quality, ensuring coherent lighting transitions across frames.
Project page: https://bujiazi.github.io/light-a-video.github.io/.
|
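The linear blending idea behind the Progressive Light Fusion strategy described above can be sketched as an interpolation whose light weight ramps up over the fusion schedule, so the illumination changes smoothly rather than abruptly. This is a generic illustration under my own assumptions (array layout, linear schedule), not the paper's exact strategy.

```python
import numpy as np

def progressive_light_fusion(source_frames, relit_frames, n_steps=10):
    """Yield increasingly relit versions of a clip; the blend weight ramps over the schedule.

    source_frames, relit_frames: arrays of identical shape, e.g. (T, H, W, C)."""
    for step in range(1, n_steps + 1):
        w = step / n_steps                          # light weight grows linearly toward 1
        yield (1.0 - w) * source_frames + w * relit_frames
```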
2502.10259 | Laura Dodds | Laura Dodds, Tara Boroushaki, Cusuh Ham, Fadel Adib | MITO: A Millimeter-Wave Dataset and Simulator for Non-Line-of-Sight
Perception | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The ability to observe the world is fundamental to reasoning and making
informed decisions on how to interact with the environment. However, optical
perception can often be disrupted due to common occurrences, such as
occlusions, which can pose challenges to existing vision systems. We present
MITO, the first millimeter-wave (mmWave) dataset of diverse, everyday objects,
collected using a UR5 robotic arm with two mmWave radars operating at different
frequencies and an RGB-D camera. Unlike visible light, mmWave signals can
penetrate common occlusions (e.g., cardboard boxes, fabric, plastic), but each
mmWave frame has much lower resolution than a typical camera image. To capture
higher-resolution mmWave images, we leverage the robot's mobility and fuse
frames over the synthesized aperture. MITO captures over 24 million mmWave
frames and uses them to generate 550 high-resolution mmWave (synthetic
aperture) images in line-of-sight and non-line-of-sight (NLOS) settings, as well as
RGB-D images, segmentation masks, and raw mmWave signals, taken from 76
different objects. We develop an open-source simulation tool that can be used
to generate synthetic mmWave images for any 3D triangle mesh. Finally, we
demonstrate the utility of our dataset and simulator for enabling broader NLOS
perception by developing benchmarks for NLOS segmentation and classification.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 16:12:14 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:38:55 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 18:31:32 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Dodds",
"Laura",
""
],
[
"Boroushaki",
"Tara",
""
],
[
"Ham",
"Cusuh",
""
],
[
"Adib",
"Fadel",
""
]
] | TITLE: MITO: A Millimeter-Wave Dataset and Simulator for Non-Line-of-Sight
Perception
ABSTRACT: The ability to observe the world is fundamental to reasoning and making
informed decisions on how to interact with the environment. However, optical
perception can often be disrupted due to common occurrences, such as
occlusions, which can pose challenges to existing vision systems. We present
MITO, the first millimeter-wave (mmWave) dataset of diverse, everyday objects,
collected using a UR5 robotic arm with two mmWave radars operating at different
frequencies and an RGB-D camera. Unlike visible light, mmWave signals can
penetrate common occlusions (e.g., cardboard boxes, fabric, plastic), but each
mmWave frame has much lower resolution than a typical camera image. To capture
higher-resolution mmWave images, we leverage the robot's mobility and fuse
frames over the synthesized aperture. MITO captures over 24 million mmWave
frames and uses them to generate 550 high-resolution mmWave (synthetic
aperture) images in line-of-sight and non-line-of-sight (NLOS) settings, as well as
RGB-D images, segmentation masks, and raw mmWave signals, taken from 76
different objects. We develop an open-source simulation tool that can be used
to generate synthetic mmWave images for any 3D triangle mesh. Finally, we
demonstrate the utility of our dataset and simulator for enabling broader NLOS
perception by developing benchmarks for NLOS segmentation and classification.
|
2502.11234 | Michael Fuest | Michael Fuest, Vincent Tao Hu, Bj\"orn Ommer | MaskFlow: Discrete Flows For Flexible and Efficient Long Video
Generation | Project page: https://compvis.github.io/maskflow/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generating long, high-quality videos remains a challenge due to the complex
interplay of spatial and temporal dynamics and hardware limitations. In this
work, we introduce MaskFlow, a unified video generation framework that combines
discrete representations with flow-matching to enable efficient generation of
high-quality long videos. By leveraging a frame-level masking strategy during
training, MaskFlow conditions on previously generated unmasked frames to
generate videos with lengths ten times beyond that of the training sequences.
MaskFlow does so very efficiently by enabling the use of fast Masked Generative
Model (MGM)-style sampling and can be deployed in both fully autoregressive as
well as full-sequence generation modes. We validate the quality of our method
on the FaceForensics (FFS) and DeepMind Lab (DMLab) datasets and report Fréchet
Video Distance (FVD) competitive with state-of-the-art approaches. We also
provide a detailed analysis on the sampling efficiency of our method and
demonstrate that MaskFlow can be applied to both timestep-dependent and
timestep-independent models in a training-free manner.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 18:59:11 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 16:27:37 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Fuest",
"Michael",
""
],
[
"Hu",
"Vincent Tao",
""
],
[
"Ommer",
"Björn",
""
]
] | TITLE: MaskFlow: Discrete Flows For Flexible and Efficient Long Video
Generation
ABSTRACT: Generating long, high-quality videos remains a challenge due to the complex
interplay of spatial and temporal dynamics and hardware limitations. In this
work, we introduce MaskFlow, a unified video generation framework that combines
discrete representations with flow-matching to enable efficient generation of
high-quality long videos. By leveraging a frame-level masking strategy during
training, MaskFlow conditions on previously generated unmasked frames to
generate videos with lengths ten times beyond that of the training sequences.
MaskFlow does so very efficiently by enabling the use of fast Masked Generative
Model (MGM)-style sampling and can be deployed in both fully autoregressive as
well as full-sequence generation modes. We validate the quality of our method
on the FaceForensics (FFS) and DeepMind Lab (DMLab) datasets and report Fréchet
Video Distance (FVD) competitive with state-of-the-art approaches. We also
provide a detailed analysis on the sampling efficiency of our method and
demonstrate that MaskFlow can be applied to both timestep-dependent and
timestep-independent models in a training-free manner.
|
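The frame-level masking strategy mentioned in the MaskFlow abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction under my own assumptions (tensor layout, mask value, function name), not the MaskFlow implementation: whole frames are masked at random, while the remaining unmasked frames can act as conditioning context for generation.

```python
import torch

def frame_level_mask(video: torch.Tensor, mask_ratio: float, seed: int = 0):
    """Mask whole frames of a (T, C, H, W) clip; kept frames serve as conditioning context."""
    T = video.shape[0]
    g = torch.Generator().manual_seed(seed)
    keep = torch.rand(T, generator=g) >= mask_ratio   # True where the frame stays visible
    masked = video.clone()
    masked[~keep] = 0.0                                # placeholder value for masked frames
    return masked, keep

clip = torch.randn(16, 3, 64, 64)                      # toy 16-frame clip
masked_clip, keep = frame_level_mask(clip, mask_ratio=0.5)
```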
2502.13403 | Heiko Hoffmann | Heiko Hoffmann and Richard Hoffmann | Object-Pose Estimation With Neural Population Codes | null | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Robotic assembly tasks require object-pose estimation, particularly for tasks
that avoid costly mechanical constraints. Object symmetry complicates the
direct mapping of sensory input to object rotation, as the rotation becomes
ambiguous and lacks a unique training target. Some proposed solutions involve
evaluating multiple pose hypotheses against the input or predicting a
probability distribution, but these approaches suffer from significant
computational overhead. Here, we show that representing object rotation with a
neural population code overcomes these limitations, enabling a direct mapping
to rotation and end-to-end learning. As a result, population codes facilitate
fast and accurate pose estimation. On the T-LESS dataset, we achieve inference
in 3.2 milliseconds on an Apple M1 CPU and a Maximum Symmetry-Aware Surface
Distance accuracy of 84.7% using only gray-scale image input, compared to 69.7%
accuracy when directly mapping to pose.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 03:23:43 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 23:24:30 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Hoffmann",
"Heiko",
""
],
[
"Hoffmann",
"Richard",
""
]
] | TITLE: Object-Pose Estimation With Neural Population Codes
ABSTRACT: Robotic assembly tasks require object-pose estimation, particularly for tasks
that avoid costly mechanical constraints. Object symmetry complicates the
direct mapping of sensory input to object rotation, as the rotation becomes
ambiguous and lacks a unique training target. Some proposed solutions involve
evaluating multiple pose hypotheses against the input or predicting a
probability distribution, but these approaches suffer from significant
computational overhead. Here, we show that representing object rotation with a
neural population code overcomes these limitations, enabling a direct mapping
to rotation and end-to-end learning. As a result, population codes facilitate
fast and accurate pose estimation. On the T-LESS dataset, we achieve inference
in 3.2 milliseconds on an Apple M1 CPU and a Maximum Symmetry-Aware Surface
Distance accuracy of 84.7% using only gray-scale image input, compared to 69.7%
accuracy when directly mapping to pose.
|
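A one-dimensional analogue of the population code described above: an angle is represented by the activities of units whose preferred directions tile the circle, and it is decoded with a population-vector readout, giving a direct, unambiguous regression target. This is a minimal sketch of the general idea (tuning-curve shape and unit count are my assumptions), not the paper's network.

```python
import numpy as np

def encode_angle(theta, n_units=36, kappa=8.0):
    """Von Mises-like tuning curves: unit i fires most when theta equals its preferred angle."""
    preferred = np.linspace(0.0, 2.0 * np.pi, n_units, endpoint=False)
    return np.exp(kappa * (np.cos(theta - preferred) - 1.0))

def decode_angle(activity, n_units=36):
    """Population-vector readout: circular mean of preferred angles weighted by activity."""
    preferred = np.linspace(0.0, 2.0 * np.pi, n_units, endpoint=False)
    return np.angle(np.sum(activity * np.exp(1j * preferred)))

theta = 1.3
recovered = decode_angle(encode_angle(theta))   # close to 1.3
```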
2502.13763 | Eva Zangerle | Andreas Peintner and Marta Moscati and Emilia Parada-Cabaleiro and
Markus Schedl and Eva Zangerle | Unsupervised Graph Embeddings for Session-based Recommendation with Item
Features | Paper accepted at CARS: Workshop on Context-Aware Recommender Systems
at the 16th ACM Conference on Recommender Systems (RecSys) 2022 | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | In session-based recommender systems, predictions are based on the user's
preceding behavior in the session. State-of-the-art sequential recommendation
algorithms either use graph neural networks to model sessions in a graph or
leverage the similarity of sessions by exploiting item features. In this paper,
we combine these two approaches and propose a novel method, Graph Convolutional
Network Extension (GCNext), which incorporates item features directly into the
graph representation via graph convolutional networks. GCNext creates a
feature-rich item co-occurrence graph and learns the corresponding item
embeddings in an unsupervised manner. We show on three datasets that
integrating GCNext into sequential recommendation algorithms significantly
boosts the performance of nearest-neighbor methods as well as neural network
models. Our flexible extension is easy to incorporate in state-of-the-art
methods and increases the MRR@20 by up to 12.79%.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 14:23:18 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 18:52:16 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Peintner",
"Andreas",
""
],
[
"Moscati",
"Marta",
""
],
[
"Parada-Cabaleiro",
"Emilia",
""
],
[
"Schedl",
"Markus",
""
],
[
"Zangerle",
"Eva",
""
]
] | TITLE: Unsupervised Graph Embeddings for Session-based Recommendation with Item
Features
ABSTRACT: In session-based recommender systems, predictions are based on the user's
preceding behavior in the session. State-of-the-art sequential recommendation
algorithms either use graph neural networks to model sessions in a graph or
leverage the similarity of sessions by exploiting item features. In this paper,
we combine these two approaches and propose a novel method, Graph Convolutional
Network Extension (GCNext), which incorporates item features directly into the
graph representation via graph convolutional networks. GCNext creates a
feature-rich item co-occurrence graph and learns the corresponding item
embeddings in an unsupervised manner. We show on three datasets that
integrating GCNext into sequential recommendation algorithms significantly
boosts the performance of nearest-neighbor methods as well as neural network
models. Our flexible extension is easy to incorporate in state-of-the-art
methods and increases the MRR@20 by up to 12.79%.
|
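The item co-occurrence graph at the heart of GCNext can be illustrated with a minimal construction step: count how often two items appear in the same session and use the counts as edge weights. The function name and toy sessions below are hypothetical; the actual GCNext pipeline additionally attaches item features and learns embeddings with graph convolutional networks.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(sessions):
    """Weighted edges of an item co-occurrence graph built from session item lists."""
    counts = Counter()
    for items in sessions:
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1                 # undirected edge weight = co-occurrence count
    return counts

sessions = [["shoes", "socks", "laces"], ["socks", "laces", "insoles"]]
print(cooccurrence_edges(sessions))             # e.g. Counter({('laces', 'socks'): 2, ...})
```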
2502.15844 | Borui Yang | Borui Yang, Md Afif Al Mamun, Jie M. Zhang, Gias Uddin | Hallucination Detection in Large Language Models with Metamorphic
Relations | Accepted to the ACM Joint European Software Engineering Conference
and Symposium on the Foundations of Software Engineering (ESEC/FSE 2025) | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) are prone to hallucinations, e.g., factually
incorrect information, in their responses. These hallucinations present
challenges for LLM-based applications that demand high factual accuracy.
Existing hallucination detection methods primarily depend on external
resources, which can suffer from issues such as low availability, incomplete
coverage, privacy concerns, high latency, low reliability, and poor
scalability. There are also methods depending on output probabilities, which
are often inaccessible for closed-source LLMs like GPT models. This paper
presents MetaQA, a self-contained hallucination detection approach that
leverages metamorphic relations and prompt mutation. Unlike existing methods,
MetaQA operates without any external resources and is compatible with both
open-source and closed-source LLMs. MetaQA is based on the hypothesis that if
an LLM's response is a hallucination, the designed metamorphic relations will
be violated. We compare MetaQA with the state-of-the-art zero-resource
hallucination detection method, SelfCheckGPT, across multiple datasets, and on
two open-source and two closed-source LLMs. Our results reveal that MetaQA
outperforms SelfCheckGPT in terms of precision, recall, and F1 score. For the
four LLMs we study, MetaQA outperforms SelfCheckGPT with a superiority margin
ranging from 0.041 - 0.113 (for precision), 0.143 - 0.430 (for recall), and
0.154 - 0.368 (for F1-score). For instance, with Mistral-7B, MetaQA achieves an
average F1-score of 0.435, compared to SelfCheckGPT's F1-score of 0.205,
representing an improvement rate of 112.2%. MetaQA also demonstrates
superiority across all different categories of questions.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 19:44:33 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 18:28:18 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Yang",
"Borui",
""
],
[
"Mamun",
"Md Afif Al",
""
],
[
"Zhang",
"Jie M.",
""
],
[
"Uddin",
"Gias",
""
]
] | TITLE: Hallucination Detection in Large Language Models with Metamorphic
Relations
ABSTRACT: Large Language Models (LLMs) are prone to hallucinations, e.g., factually
incorrect information, in their responses. These hallucinations present
challenges for LLM-based applications that demand high factual accuracy.
Existing hallucination detection methods primarily depend on external
resources, which can suffer from issues such as low availability, incomplete
coverage, privacy concerns, high latency, low reliability, and poor
scalability. There are also methods depending on output probabilities, which
are often inaccessible for closed-source LLMs like GPT models. This paper
presents MetaQA, a self-contained hallucination detection approach that
leverages metamorphic relations and prompt mutation. Unlike existing methods,
MetaQA operates without any external resources and is compatible with both
open-source and closed-source LLMs. MetaQA is based on the hypothesis that if
an LLM's response is a hallucination, the designed metamorphic relations will
be violated. We compare MetaQA with the state-of-the-art zero-resource
hallucination detection method, SelfCheckGPT, across multiple datasets, and on
two open-source and two closed-source LLMs. Our results reveal that MetaQA
outperforms SelfCheckGPT in terms of precision, recall, and F1 score. For the
four LLMs we study, MetaQA outperforms SelfCheckGPT with a superiority margin
ranging from 0.041 - 0.113 (for precision), 0.143 - 0.430 (for recall), and
0.154 - 0.368 (for F1-score). For instance, with Mistral-7B, MetaQA achieves an
average F1-score of 0.435, compared to SelfCheckGPT's F1-score of 0.205,
representing an improvement rate of 112.2%. MetaQA also demonstrates
superiority across all different categories of questions.
|
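The core hypothesis of MetaQA, that a hallucinated answer tends to break meaning-preserving (metamorphic) mutations of the question, can be sketched as a consistency score. Everything here is assumed for illustration: ask_llm, agree, and the mutation functions are hypothetical callables, and the scoring is a simplification of the paper's method.

```python
def metamorphic_consistency(question, mutations, ask_llm, agree):
    """Fraction of mutated questions whose answers agree with the original answer.

    A low score suggests the original answer may be a hallucination."""
    base_answer = ask_llm(question)
    votes = [agree(base_answer, ask_llm(mutate(question))) for mutate in mutations]
    return sum(votes) / len(votes)

# Hypothetical usage (placeholders, not a real API):
# score = metamorphic_consistency(
#     "Who wrote The Old Man and the Sea?",
#     mutations=[lambda q: "Answer briefly: " + q,
#                lambda q: q + " Reply with only the author's name."],
#     ask_llm=my_llm_call,                                   # any chat-completion wrapper
#     agree=lambda a, b: a.strip().lower() == b.strip().lower(),
# )
```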
2502.15996 | Aditya Kumar | Aditya Kumar, Simon Rauch, Mario Cypko and Oliver Amft | Med-gte-hybrid: A contextual embedding transformer model for extracting
actionable information from clinical texts | 22 pages, 4 figures, 2 tables | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel contextual embedding model med-gte-hybrid that was
derived from the gte-large sentence transformer to extract information from
unstructured clinical narratives. Our model tuning strategy for med-gte-hybrid
combines contrastive learning and a denoising autoencoder. To evaluate the
performance of med-gte-hybrid, we investigate several clinical prediction tasks
in large patient cohorts extracted from the MIMIC-IV dataset, including Chronic
Kidney Disease (CKD) patient prognosis, estimated glomerular filtration rate
(eGFR) prediction, and patient mortality prediction. Furthermore, we
demonstrate that the med-gte-hybrid model improves patient stratification,
clustering, and text retrieval, thus outperforming current state-of-the-art
models on the Massive Text Embedding Benchmark (MTEB). While some of our
evaluations focus on CKD, our hybrid tuning of sentence transformers could be
transferred to other medical domains and has the potential to improve clinical
decision-making and personalised treatment pathways in various healthcare
applications.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 23:17:31 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 16:17:01 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Kumar",
"Aditya",
""
],
[
"Rauch",
"Simon",
""
],
[
"Cypko",
"Mario",
""
],
[
"Amft",
"Oliver",
""
]
] | TITLE: Med-gte-hybrid: A contextual embedding transformer model for extracting
actionable information from clinical texts
ABSTRACT: We introduce a novel contextual embedding model med-gte-hybrid that was
derived from the gte-large sentence transformer to extract information from
unstructured clinical narratives. Our model tuning strategy for med-gte-hybrid
combines contrastive learning and a denoising autoencoder. To evaluate the
performance of med-gte-hybrid, we investigate several clinical prediction tasks
in large patient cohorts extracted from the MIMIC-IV dataset, including Chronic
Kidney Disease (CKD) patient prognosis, estimated glomerular filtration rate
(eGFR) prediction, and patient mortality prediction. Furthermore, we
demonstrate that the med-gte-hybrid model improves patient stratification,
clustering, and text retrieval, thus outperforming current state-of-the-art
models on the Massive Text Embedding Benchmark (MTEB). While some of our
evaluations focus on CKD, our hybrid tuning of sentence transformers could be
transferred to other medical domains and has the potential to improve clinical
decision-making and personalised treatment pathways in various healthcare
applications.
|
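The contrastive half of the med-gte-hybrid tuning strategy can be illustrated with one common form of the objective, an in-batch InfoNCE loss over paired sentence embeddings. Whether med-gte-hybrid uses exactly this formulation is not stated in the abstract, so treat the sketch as a generic example rather than the authors' loss.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss for paired sentence embeddings of shape (B, D)."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / temperature               # similarity of every anchor to every positive
    labels = torch.arange(a.shape[0])            # the matching pair lies on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(8, 768), torch.randn(8, 768))   # toy batch of 8 pairs
```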
2502.18913 | Jiaming Zhou | Jiaming Zhou, Yujie Guo, Shiwan Zhao, Haoqin Sun, Hui Wang, Jiabei He,
Aobo Kong, Shiyao Wang, Xi Yang, Yequan Wang, Yonghua Lin, Yong Qin | CS-Dialogue: A 104-Hour Dataset of Spontaneous Mandarin-English
Code-Switching Dialogues for Speech Recognition | null | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code-switching (CS), the alternation between two or more languages within a
single conversation, presents significant challenges for automatic speech
recognition (ASR) systems. Existing Mandarin-English code-switching datasets
often suffer from limitations in size, spontaneity, and the lack of full-length
dialogue recordings with transcriptions, hindering the development of robust
ASR models for real-world conversational scenarios. This paper introduces
CS-Dialogue, a novel large-scale Mandarin-English code-switching speech dataset
comprising 104 hours of spontaneous conversations from 200 speakers. Unlike
previous datasets, CS-Dialogue provides full-length dialogue recordings with
complete transcriptions, capturing naturalistic code-switching patterns in
continuous speech. We describe the data collection and annotation processes,
present detailed statistics of the dataset, and establish benchmark ASR
performance using state-of-the-art models. Our experiments, using Transformer,
Conformer, and Branchformer, demonstrate the challenges of code-switching ASR,
and show that existing pre-trained models such as Whisper still have room for
improvement. The CS-Dialogue dataset will be made freely available for all
academic purposes.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 07:59:55 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 03:06:01 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhou",
"Jiaming",
""
],
[
"Guo",
"Yujie",
""
],
[
"Zhao",
"Shiwan",
""
],
[
"Sun",
"Haoqin",
""
],
[
"Wang",
"Hui",
""
],
[
"He",
"Jiabei",
""
],
[
"Kong",
"Aobo",
""
],
[
"Wang",
"Shiyao",
... | TITLE: CS-Dialogue: A 104-Hour Dataset of Spontaneous Mandarin-English
Code-Switching Dialogues for Speech Recognition
ABSTRACT: Code-switching (CS), the alternation between two or more languages within a
single conversation, presents significant challenges for automatic speech
recognition (ASR) systems. Existing Mandarin-English code-switching datasets
often suffer from limitations in size, spontaneity, and the lack of full-length
dialogue recordings with transcriptions, hindering the development of robust
ASR models for real-world conversational scenarios. This paper introduces
CS-Dialogue, a novel large-scale Mandarin-English code-switching speech dataset
comprising 104 hours of spontaneous conversations from 200 speakers. Unlike
previous datasets, CS-Dialogue provides full-length dialogue recordings with
complete transcriptions, capturing naturalistic code-switching patterns in
continuous speech. We describe the data collection and annotation processes,
present detailed statistics of the dataset, and establish benchmark ASR
performance using state-of-the-art models. Our experiments, using Transformer,
Conformer, and Branchformer, demonstrate the challenges of code-switching ASR,
and show that existing pre-trained models such as Whisper still have room for
improvement. The CS-Dialogue dataset will be made freely available for all
academic purposes.
|
2502.19800 | Dongbo Shi | Dongbo Shi, Shen Cao, Lubin Fan, Bojian Wu, Jinhui Guo, Renjie Chen,
Ligang Liu, Jieping Ye | TrackGS: Optimizing COLMAP-Free 3D Gaussian Splatting with Global Track
Constraints | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While 3D Gaussian Splatting (3DGS) has advanced the capability of novel view
synthesis, it still depends on accurate pre-computed camera parameters, which
are hard to obtain and prone to noise. Previous COLMAP-Free methods optimize
camera poses using local constraints, but they often struggle in complex
scenarios. To address this, we introduce TrackGS, which incorporates feature
tracks to globally constrain multi-view geometry. We select the Gaussians
associated with each track, which will be trained and rescaled to an
infinitesimally small size to guarantee spatial accuracy. We also propose
minimizing both reprojection and backprojection errors for better geometric
consistency. Moreover, by deriving the gradient of intrinsics, we unify camera
parameter estimation with 3DGS training into a joint optimization framework,
achieving SOTA performance on challenging datasets with severe camera
movements.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 06:16:04 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 08:03:52 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Shi",
"Dongbo",
""
],
[
"Cao",
"Shen",
""
],
[
"Fan",
"Lubin",
""
],
[
"Wu",
"Bojian",
""
],
[
"Guo",
"Jinhui",
""
],
[
"Chen",
"Renjie",
""
],
[
"Liu",
"Ligang",
""
],
[
"Ye",
"Jieping",
"... | TITLE: TrackGS: Optimizing COLMAP-Free 3D Gaussian Splatting with Global Track
Constraints
ABSTRACT: While 3D Gaussian Splatting (3DGS) has advanced the capability of novel view
synthesis, it still depends on accurate pre-computed camera parameters, which
are hard to obtain and prone to noise. Previous COLMAP-Free methods optimize
camera poses using local constraints, but they often struggle in complex
scenarios. To address this, we introduce TrackGS, which incorporates feature
tracks to globally constrain multi-view geometry. We select the Gaussians
associated with each track, which will be trained and rescaled to an
infinitesimally small size to guarantee spatial accuracy. We also propose
minimizing both reprojection and backprojection errors for better geometric
consistency. Moreover, by deriving the gradient of intrinsics, we unify camera
parameter estimation with 3DGS training into a joint optimization framework,
achieving SOTA performance on challenging datasets with severe camera
movements.
|
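The reprojection error that TrackGS minimizes alongside a backprojection term can be written down with the standard pinhole model. The sketch below is a generic formulation under my own assumptions (row-vector point layout, mean L2 error), not the authors' loss.

```python
import numpy as np

def mean_reprojection_error(points_3d, points_2d, K, R, t):
    """Project 3-D points with x ~ K (R X + t) and compare to observed 2-D detections."""
    cam = R @ points_3d.T + t.reshape(3, 1)          # world -> camera coordinates
    proj = (K @ cam).T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective division
    return np.linalg.norm(proj - points_2d, axis=1).mean()

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 3.0]])    # toy 3-D track points
x = np.array([[320.0, 240.0], [336.7, 223.3]])       # toy 2-D observations
print(mean_reprojection_error(X, x, K, R, t))
```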
2502.19844 | Xiangyan Qu | Xiangyan Qu, Gaopeng Gou, Jiamin Zhuang, Jing Yu, Kun Song, Qihao
Wang, Yili Li, Gang Xiong | ProAPO: Progressively Automatic Prompt Optimization for Visual
Classification | Accepted to the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR) 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language models (VLMs) have made significant progress in image
classification by training with large-scale paired image-text data. Their
performances largely depend on the prompt quality. While recent methods show
that visual descriptions generated by large language models (LLMs) enhance the
generalization of VLMs, class-specific prompts may be inaccurate or lack
discrimination due to the hallucination in LLMs. In this paper, we aim to find
visually discriminative prompts for fine-grained categories with minimal
supervision and no human-in-the-loop. An evolution-based algorithm is proposed
to progressively optimize language prompts from task-specific templates to
class-specific descriptions. Unlike optimizing templates, the search space
shows an explosion in class-specific candidate prompts. This increases prompt
generation costs, iteration counts, and the risk of overfitting. To this end, we
first introduce several simple yet effective edit-based and evolution-based
operations to generate diverse candidate prompts with a one-time query of LLMs.
Then, two sampling strategies are proposed to find a better initial search
point and reduce traversed categories, saving iteration costs. Moreover, we
apply a novel fitness score with entropy constraints to mitigate overfitting.
In a challenging one-shot image classification setting, our method outperforms
existing textual prompt-based methods and improves LLM-generated description
methods across 13 datasets. Meanwhile, we demonstrate that our optimal prompts
improve adapter-based methods and transfer effectively across different
backbones.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 07:39:23 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 01:18:01 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 08:56:58 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Qu",
"Xiangyan",
""
],
[
"Gou",
"Gaopeng",
""
],
[
"Zhuang",
"Jiamin",
""
],
[
"Yu",
"Jing",
""
],
[
"Song",
"Kun",
""
],
[
"Wang",
"Qihao",
""
],
[
"Li",
"Yili",
""
],
[
"Xiong",
"Gang",
"... | TITLE: ProAPO: Progressively Automatic Prompt Optimization for Visual
Classification
ABSTRACT: Vision-language models (VLMs) have made significant progress in image
classification by training with large-scale paired image-text data. Their
performances largely depend on the prompt quality. While recent methods show
that visual descriptions generated by large language models (LLMs) enhance the
generalization of VLMs, class-specific prompts may be inaccurate or lack
discrimination due to the hallucination in LLMs. In this paper, we aim to find
visually discriminative prompts for fine-grained categories with minimal
supervision and no human-in-the-loop. An evolution-based algorithm is proposed
to progressively optimize language prompts from task-specific templates to
class-specific descriptions. Unlike optimizing templates, the search space
shows an explosion in class-specific candidate prompts. This increases prompt
generation costs, iteration counts, and the risk of overfitting. To this end, we
first introduce several simple yet effective edit-based and evolution-based
operations to generate diverse candidate prompts with a one-time query of LLMs.
Then, two sampling strategies are proposed to find a better initial search
point and reduce traversed categories, saving iteration costs. Moreover, we
apply a novel fitness score with entropy constraints to mitigate overfitting.
In a challenging one-shot image classification setting, our method outperforms
existing textual prompt-based methods and improves LLM-generated description
methods across 13 datasets. Meanwhile, we demonstrate that our optimal prompts
improve adapter-based methods and transfer effectively across different
backbones.
|
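The evolution-based prompt search described above can be sketched as a toy select/crossover/mutate loop over prompt strings. All of the callables below (mutate, crossover, fitness) are hypothetical placeholders, and the loop omits ProAPO's sampling strategies and entropy-constrained fitness; it only illustrates the overall shape of such an optimizer.

```python
import random

def evolve_prompts(initial_prompts, mutate, crossover, fitness,
                   generations=5, population=20):
    """Toy evolutionary loop over prompts; needs at least two seed prompts."""
    pool = list(initial_prompts)
    for _ in range(generations):
        # keep the fittest half as parents
        scored = sorted(pool, key=fitness, reverse=True)[: population // 2]
        children = []
        while len(children) < population - len(scored):
            a, b = random.sample(scored, 2)
            children.append(mutate(crossover(a, b)))   # recombine then perturb
        pool = scored + children
    return max(pool, key=fitness)
```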
2502.19962 | Xin Liu Prof. | Quanxing Zha, Xin Liu, Shu-Juan Peng, Yiu-ming Cheung, Xing Xu, Nannan
Wang | ReCon: Enhancing True Correspondence Discrimination through Relation
Consistency for Robust Noisy Correspondence Learning | 10 pages, 4 figures, Accepted by CVPR2025 | null | null | null | cs.CV cs.IR | http://creativecommons.org/publicdomain/zero/1.0/ | Can we accurately identify the true correspondences from multimodal datasets
containing mismatched data pairs? Existing methods primarily emphasize the
similarity matching between the representations of objects across modalities,
potentially neglecting the crucial relation consistency within modalities that
is particularly important for distinguishing true and false
correspondences. Such an omission often runs the risk of misidentifying
negatives as positives, thus leading to unanticipated performance degradation.
To address this problem, we propose a general Relation Consistency learning
framework, namely ReCon, to accurately discriminate the true correspondences
among the multimodal data and thus effectively mitigate the adverse impact
caused by mismatches. Specifically, ReCon leverages a novel relation
consistency learning to ensure dual alignment: the
cross-modal relation consistency between different modalities and the
intra-modal relation consistency within modalities. Thanks to such dual
constraints on relations, ReCon significantly enhances its effectiveness for
true correspondence discrimination and therefore reliably filters out the
mismatched pairs to mitigate the risks of wrong supervisions. Extensive
experiments on three widely-used benchmark datasets, including Flickr30K,
MS-COCO, and Conceptual Captions, are conducted to demonstrate the
effectiveness and superiority of ReCon compared with other SOTAs. The code is
available at: https://github.com/qxzha/ReCon.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 10:38:03 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 10:13:56 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zha",
"Quanxing",
""
],
[
"Liu",
"Xin",
""
],
[
"Peng",
"Shu-Juan",
""
],
[
"Cheung",
"Yiu-ming",
""
],
[
"Xu",
"Xing",
""
],
[
"Wang",
"Nannan",
""
]
] | TITLE: ReCon: Enhancing True Correspondence Discrimination through Relation
Consistency for Robust Noisy Correspondence Learning
ABSTRACT: Can we accurately identify the true correspondences from multimodal datasets
containing mismatched data pairs? Existing methods primarily emphasize the
similarity matching between the representations of objects across modalities,
potentially neglecting the crucial relation consistency within modalities that
is particularly important for distinguishing true and false
correspondences. Such an omission often runs the risk of misidentifying
negatives as positives, thus leading to unanticipated performance degradation.
To address this problem, we propose a general Relation Consistency learning
framework, namely ReCon, to accurately discriminate the true correspondences
among the multimodal data and thus effectively mitigate the adverse impact
caused by mismatches. Specifically, ReCon leverages a novel relation
consistency learning to ensure dual alignment: the
cross-modal relation consistency between different modalities and the
intra-modal relation consistency within modalities. Thanks to such dual
constraints on relations, ReCon significantly enhances its effectiveness for
true correspondence discrimination and therefore reliably filters out the
mismatched pairs to mitigate the risks of wrong supervisions. Extensive
experiments on three widely-used benchmark datasets, including Flickr30K,
MS-COCO, and Conceptual Captions, are conducted to demonstrate the
effectiveness and superiority of ReCon compared with other SOTAs. The code is
available at: https://github.com/qxzha/ReCon.
|
2502.20256 | Yancheng Cai | Yancheng Cai, Fei Yin, Dounia Hammou, Rafal Mantiuk | Do computer vision foundation models learn the low-level characteristics
of the human visual system? | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Computer vision foundation models, such as DINO or OpenCLIP, are trained in a
self-supervised manner on large image datasets. Analogously, substantial
evidence suggests that the human visual system (HVS) is influenced by the
statistical distribution of colors and patterns in the natural world,
characteristics also present in the training data of foundation models. The
question we address in this paper is whether foundation models trained on
natural images mimic some of the low-level characteristics of the human visual
system, such as contrast detection, contrast masking, and contrast constancy.
Specifically, we designed a protocol comprising nine test types to evaluate the
image encoders of 45 foundation and generative models. Our results indicate
that some foundation models (e.g., DINO, DINOv2, and OpenCLIP) share some of
the characteristics of human vision, but other models show little resemblance.
Foundation models tend to show lower sensitivity to low contrast and rather
irregular responses to contrast across frequencies. The foundation models show
the best agreement with human data in terms of contrast masking. Our findings
suggest that human vision and computer vision may take both similar and
different paths when learning to interpret images of the real world. Overall,
while differences remain, foundation models trained on vision tasks start to
align with low-level human vision, with DINOv2 showing the closest resemblance.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 16:43:56 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 21:52:23 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Cai",
"Yancheng",
""
],
[
"Yin",
"Fei",
""
],
[
"Hammou",
"Dounia",
""
],
[
"Mantiuk",
"Rafal",
""
]
] | TITLE: Do computer vision foundation models learn the low-level characteristics
of the human visual system?
ABSTRACT: Computer vision foundation models, such as DINO or OpenCLIP, are trained in a
self-supervised manner on large image datasets. Analogously, substantial
evidence suggests that the human visual system (HVS) is influenced by the
statistical distribution of colors and patterns in the natural world,
characteristics also present in the training data of foundation models. The
question we address in this paper is whether foundation models trained on
natural images mimic some of the low-level characteristics of the human visual
system, such as contrast detection, contrast masking, and contrast constancy.
Specifically, we designed a protocol comprising nine test types to evaluate the
image encoders of 45 foundation and generative models. Our results indicate
that some foundation models (e.g., DINO, DINOv2, and OpenCLIP) share some of
the characteristics of human vision, but other models show little resemblance.
Foundation models tend to show lower sensitivity to low contrast and rather
irregular responses to contrast across frequencies. The foundation models show
the best agreement with human data in terms of contrast masking. Our findings
suggest that human vision and computer vision may take both similar and
different paths when learning to interpret images of the real world. Overall,
while differences remain, foundation models trained on vision tasks start to
align with low-level human vision, with DINOv2 showing the closest resemblance.
|
2503.02880 | Purba Mukherjee | Purba Mukherjee, Anjan A Sen | A New $\sim 5\sigma$ Tension at Characteristic Redshift from DESI-DR1
BAO and DES-SN5YR Observations | 4 pages, 1 table, 3 figures. Comments are welcome. New References
added | null | null | null | astro-ph.CO cs.LG gr-qc hep-th | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We perform a model-independent reconstruction of the angular diameter
distance ($D_{A}$) using the Multi-Task Gaussian Process (MTGP) framework with
DESI-DR1 BAO and DES-SN5YR datasets. We calibrate the comoving sound horizon at
the baryon drag epoch $r_d$ to the Planck best-fit value, ensuring consistency
with early-universe physics. With the reconstructed $D_A$ at two key redshifts,
$z\sim 1.63$ (where $D_{A}^{\prime} =0$) and at $z\sim 0.512$ (where
$D_{A}^{\prime} = D_{A}$), we derive the expansion rate of the Universe $H(z)$
at these redshifts. Our findings reveal that at $z\sim 1.63$, the $H(z)$ is
fully consistent with the Planck-2018 $\Lambda$CDM prediction, confirming no
new physics at that redshift. However, at $z \sim 0.512$, the derived $H(z)$
shows a more than $5\sigma$ discrepancy with the Planck-2018 $\Lambda$CDM
prediction, suggesting a possible breakdown of the $\Lambda$CDM model as
constrained by Planck-2018 at this lower redshift. This emerging $\sim 5\sigma$
tension at $z\sim 0.512$, distinct from the existing ``Hubble Tension'', may
signal the first strong evidence for new physics at low redshifts.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:58:15 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 08:13:04 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Mukherjee",
"Purba",
""
],
[
"Sen",
"Anjan A",
""
]
] | TITLE: A New $\sim 5\sigma$ Tension at Characteristic Redshift from DESI-DR1
BAO and DES-SN5YR Observations
ABSTRACT: We perform a model-independent reconstruction of the angular diameter
distance ($D_{A}$) using the Multi-Task Gaussian Process (MTGP) framework with
DESI-DR1 BAO and DES-SN5YR datasets. We calibrate the comoving sound horizon at
the baryon drag epoch $r_d$ to the Planck best-fit value, ensuring consistency
with early-universe physics. With the reconstructed $D_A$ at two key redshifts,
$z\sim 1.63$ (where $D_{A}^{\prime} =0$) and at $z\sim 0.512$ (where
$D_{A}^{\prime} = D_{A}$), we derive the expansion rate of the Universe $H(z)$
at these redshifts. Our findings reveal that at $z\sim 1.63$, the $H(z)$ is
fully consistent with the Planck-2018 $\Lambda$CDM prediction, confirming no
new physics at that redshift. However, at $z \sim 0.512$, the derived $H(z)$
shows a more than $5\sigma$ discrepancy with the Planck-2018 $\Lambda$CDM
prediction, suggesting a possible breakdown of the $\Lambda$CDM model as
constrained by Planck-2018 at this lower redshift. This emerging $\sim 5\sigma$
tension at $z\sim 0.512$, distinct from the existing ``Hubble Tension'', may
signal the first strong evidence for new physics at low redshifts.
|
2503.03241 | Yue Hou | Yue Hou, He Zhu, Ruomei Liu, Yingke Su, Jinxiang Xia, Junran Wu, Ke Xu | Structural Entropy Guided Unsupervised Graph Out-Of-Distribution
Detection | Accepted by AAAI 2025 (The 39th Annual AAAI Conference on Artificial
Intelligence) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the emergence of huge amounts of unlabeled data, unsupervised
out-of-distribution (OOD) detection is vital for ensuring the reliability of
graph neural networks (GNNs) by identifying OOD samples from in-distribution
(ID) ones during testing, where encountering novel or unknown data is
inevitable. Existing methods often suffer from compromised performance due to
redundant information in graph structures, which impairs their ability to
effectively differentiate between ID and OOD data. To address this challenge,
we propose SEGO, an unsupervised framework that integrates structural entropy
into OOD detection regarding graph classification. Specifically, within the
architecture of contrastive learning, SEGO introduces an anchor view in the
form of a coding tree by minimizing structural entropy. The obtained coding tree
effectively removes redundant information from graphs while preserving
essential structural information, enabling the capture of distinct graph
patterns between ID and OOD samples. Furthermore, we present a multi-grained
contrastive learning scheme at local, global, and tree levels using triplet
views, where coding trees with essential information serve as the anchor view.
Extensive experiments on real-world datasets validate the effectiveness of
SEGO, demonstrating superior performance over state-of-the-art baselines in OOD
detection. Specifically, our method achieves the best performance on 9 out of
10 dataset pairs, with an average improvement of 3.7\% on OOD detection
datasets, significantly surpassing the best competitor by 10.8\% on the
FreeSolv/ToxCast dataset pair.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 07:47:57 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 10:24:40 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Hou",
"Yue",
""
],
[
"Zhu",
"He",
""
],
[
"Liu",
"Ruomei",
""
],
[
"Su",
"Yingke",
""
],
[
"Xia",
"Jinxiang",
""
],
[
"Wu",
"Junran",
""
],
[
"Xu",
"Ke",
""
]
] | TITLE: Structural Entropy Guided Unsupervised Graph Out-Of-Distribution
Detection
ABSTRACT: With the emergence of huge amounts of unlabeled data, unsupervised
out-of-distribution (OOD) detection is vital for ensuring the reliability of
graph neural networks (GNNs) by identifying OOD samples from in-distribution
(ID) ones during testing, where encountering novel or unknown data is
inevitable. Existing methods often suffer from compromised performance due to
redundant information in graph structures, which impairs their ability to
effectively differentiate between ID and OOD data. To address this challenge,
we propose SEGO, an unsupervised framework that integrates structural entropy
into OOD detection regarding graph classification. Specifically, within the
architecture of contrastive learning, SEGO introduces an anchor view in the
form of coding tree by minimizing structural entropy. The obtained coding tree
effectively removes redundant information from graphs while preserving
essential structural information, enabling the capture of distinct graph
patterns between ID and OOD samples. Furthermore, we present a multi-grained
contrastive learning scheme at local, global, and tree levels using triplet
views, where coding trees with essential information serve as the anchor view.
Extensive experiments on real-world datasets validate the effectiveness of
SEGO, demonstrating superior performance over state-of-the-art baselines in OOD
detection. Specifically, our method achieves the best performance on 9 out of
10 dataset pairs, with an average improvement of 3.7\% on OOD detection
datasets, significantly surpassing the best competitor by 10.8\% on the
FreeSolv/ToxCast dataset pair.
|
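The SEGO record above hinges on minimizing structural entropy to build a coding-tree anchor view. As a rough illustration of the quantity involved, the Python sketch below computes the two-dimensional structural entropy of a graph partition under the standard node/module-volume and cut-size definition; it is not the authors' code, and the toy graph and partition labels are made up.

# Sketch: two-dimensional structural entropy of a graph partition, the kind of
# quantity that coding-tree construction seeks to minimize. Assumes the standard
# definition (degree volumes, module volumes, cut sizes); toy data only.
import math
from collections import defaultdict

def structural_entropy(edges, partition):
    """edges: list of undirected (u, v) pairs; partition: dict node -> module id."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    vol_total = sum(degree.values())          # equals 2 * |E|

    vol = defaultdict(int)                    # volume of each module
    cut = defaultdict(int)                    # edges leaving each module
    for node, mod in partition.items():
        vol[mod] += degree[node]
    for u, v in edges:
        if partition[u] != partition[v]:
            cut[partition[u]] += 1
            cut[partition[v]] += 1

    h = 0.0
    for node, mod in partition.items():       # leaf (node) term
        d = degree[node]
        if d:
            h -= d / vol_total * math.log2(d / vol[mod])
    for mod, v_mod in vol.items():            # internal (module) term
        h -= cut[mod] / vol_total * math.log2(v_mod / vol_total)
    return h

# Toy example: two triangles joined by a single edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
bad = {0: "A", 1: "B", 2: "A", 3: "B", 4: "A", 5: "B"}
print(structural_entropy(edges, good))   # lower: partition matches the structure
print(structural_entropy(edges, bad))    # higher: partition cuts across the structure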
2503.05040 | John Flournoy | John C. Flournoy, Carol S. Lee, Maggie Wu, Catherine M. Hicks | No Silver Bullets: Why Understanding Software Cycle Time is Messy, Not
Magic | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Understanding factors that influence software development velocity is crucial
for engineering teams and organizations, yet empirical evidence at scale
remains limited. A more robust understanding of the dynamics of cycle time may
help practitioners avoid pitfalls in relying on velocity measures while
evaluating software work. We analyze cycle time, a widely-used metric measuring
time from ticket creation to completion, using a dataset of over 55,000
observations across 216 organizations. Through Bayesian hierarchical modeling
that appropriately separates individual and organizational variation, we
examine how coding time, task scoping, and collaboration patterns affect cycle
time while characterizing its substantial variability across contexts. We find
precise but modest associations between cycle time and factors including coding
days per week, number of merged pull requests, and degree of collaboration.
However, these effects are set against considerable unexplained variation both
between and within individuals. Our findings suggest that while common
workplace factors do influence cycle time in expected directions, any single
observation provides limited signal about typical performance. This work
demonstrates methods for analyzing complex operational metrics at scale while
highlighting potential pitfalls in using such measurements to drive
decision-making. We conclude that improving software delivery velocity likely
requires systems-level thinking rather than individual-focused interventions.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 23:32:53 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:52:46 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Mar 2025 18:57:05 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Flournoy",
"John C.",
""
],
[
"Lee",
"Carol S.",
""
],
[
"Wu",
"Maggie",
""
],
[
"Hicks",
"Catherine M.",
""
]
] | TITLE: No Silver Bullets: Why Understanding Software Cycle Time is Messy, Not
Magic
ABSTRACT: Understanding factors that influence software development velocity is crucial
for engineering teams and organizations, yet empirical evidence at scale
remains limited. A more robust understanding of the dynamics of cycle time may
help practitioners avoid pitfalls in relying on velocity measures while
evaluating software work. We analyze cycle time, a widely-used metric measuring
time from ticket creation to completion, using a dataset of over 55,000
observations across 216 organizations. Through Bayesian hierarchical modeling
that appropriately separates individual and organizational variation, we
examine how coding time, task scoping, and collaboration patterns affect cycle
time while characterizing its substantial variability across contexts. We find
precise but modest associations between cycle time and factors including coding
days per week, number of merged pull requests, and degree of collaboration.
However, these effects are set against considerable unexplained variation both
between and within individuals. Our findings suggest that while common
workplace factors do influence cycle time in expected directions, any single
observation provides limited signal about typical performance. This work
demonstrates methods for analyzing complex operational metrics at scale while
highlighting potential pitfalls in using such measurements to drive
decision-making. We conclude that improving software delivery velocity likely
requires systems-level thinking rather than individual-focused interventions.
|
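The cycle-time record above describes Bayesian hierarchical modeling that separates individual and organizational variation. As a loose, non-Bayesian analogue, the sketch below fits a random-intercepts mixed model with statsmodels on synthetic placeholder data; the column names, effect sizes, and grouping structure are assumptions for illustration only. With these settings the residual variance dominates the fitted effects, loosely echoing the point that single observations carry limited signal.

# Sketch (assumptions): a frequentist random-intercepts stand-in for a Bayesian
# hierarchical cycle-time model, fit on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_org, per_org = 30, 50
org = np.repeat(np.arange(n_org), per_org)
org_effect = rng.normal(0.0, 0.5, n_org)[org]          # organization-level variation
coding_days = rng.uniform(1, 5, org.size)
merged_prs = rng.poisson(3, org.size)
log_cycle_time = (2.0 - 0.10 * coding_days - 0.05 * merged_prs
                  + org_effect + rng.normal(0.0, 1.0, org.size))  # large residual noise
df = pd.DataFrame(dict(log_cycle_time=log_cycle_time, coding_days=coding_days,
                       merged_prs=merged_prs, org=org))

# A random intercept per organization separates between-org from within-org variation.
model = smf.mixedlm("log_cycle_time ~ coding_days + merged_prs", df, groups=df["org"])
print(model.fit().summary())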
2503.05063 | Haosen Zhang | Haosen Zhang, Jiahao Huang, Yinzhe Wu, Congren Dai, Fanwen Wang,
Zhenxuan Zhang, and Guang Yang | Lightweight Hypercomplex MRI Reconstruction: A Generalized
Kronecker-Parameterized Approach | 11 pages, 3 figures. Submitted for publication | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic Resonance Imaging (MRI) is crucial for clinical diagnostics but is
hindered by prolonged scan times. Current deep learning models enhance MRI
reconstruction but are often memory-intensive and unsuitable for
resource-limited systems. This paper introduces a lightweight MRI
reconstruction model leveraging Kronecker-Parameterized Hypercomplex Neural
Networks to achieve high performance with reduced parameters. By integrating
Kronecker-based modules, including Kronecker MLP, Kronecker Window Attention,
and Kronecker Convolution, the proposed model efficiently extracts spatial
features while preserving representational power. We introduce Kronecker U-Net
and Kronecker SwinMR, which maintain high reconstruction quality with
approximately 50% fewer parameters compared to existing models. Experimental
evaluation on the FastMRI dataset demonstrates competitive PSNR, SSIM, and
LPIPS metrics, even at high acceleration factors (8x and 16x), with no
significant performance drop. Additionally, Kronecker variants exhibit superior
generalization and reduced overfitting on limited datasets, facilitating
efficient MRI reconstruction on hardware-constrained systems. This approach
sets a new benchmark for parameter-efficient medical imaging models.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 00:47:15 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 21:38:43 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhang",
"Haosen",
""
],
[
"Huang",
"Jiahao",
""
],
[
"Wu",
"Yinzhe",
""
],
[
"Dai",
"Congren",
""
],
[
"Wang",
"Fanwen",
""
],
[
"Zhang",
"Zhenxuan",
""
],
[
"Yang",
"Guang",
""
]
] | TITLE: Lightweight Hypercomplex MRI Reconstruction: A Generalized
Kronecker-Parameterized Approach
ABSTRACT: Magnetic Resonance Imaging (MRI) is crucial for clinical diagnostics but is
hindered by prolonged scan times. Current deep learning models enhance MRI
reconstruction but are often memory-intensive and unsuitable for
resource-limited systems. This paper introduces a lightweight MRI
reconstruction model leveraging Kronecker-Parameterized Hypercomplex Neural
Networks to achieve high performance with reduced parameters. By integrating
Kronecker-based modules, including Kronecker MLP, Kronecker Window Attention,
and Kronecker Convolution, the proposed model efficiently extracts spatial
features while preserving representational power. We introduce Kronecker U-Net
and Kronecker SwinMR, which maintain high reconstruction quality with
approximately 50% fewer parameters compared to existing models. Experimental
evaluation on the FastMRI dataset demonstrates competitive PSNR, SSIM, and
LPIPS metrics, even at high acceleration factors (8x and 16x), with no
significant performance drop. Additionally, Kronecker variants exhibit superior
generalization and reduced overfitting on limited datasets, facilitating
efficient MRI reconstruction on hardware-constrained systems. This approach
sets a new benchmark for parameter-efficient medical imaging models.
|
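The record above relies on Kronecker-parameterized layers to cut parameter counts. The PyTorch sketch below shows the basic idea for a single linear layer whose weight matrix is factored as a Kronecker product; it illustrates the parameterization only, not the paper's Kronecker MLP, Window Attention, or Convolution modules, and the layer sizes are arbitrary.

# Sketch: a Kronecker-parameterized linear layer. The full (out x in) weight is
# factored as kron(A, B), so parameters drop from out*in to out1*in1 + out2*in2.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KroneckerLinear(nn.Module):
    def __init__(self, in1, in2, out1, out2):
        super().__init__()
        self.A = nn.Parameter(torch.randn(out1, in1) / in1 ** 0.5)
        self.B = nn.Parameter(torch.randn(out2, in2) / in2 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out1 * out2))

    def forward(self, x):                      # x: (batch, in1 * in2)
        weight = torch.kron(self.A, self.B)    # (out1*out2, in1*in2), built on the fly
        return F.linear(x, weight, self.bias)

layer = KroneckerLinear(in1=16, in2=16, out1=16, out2=16)   # 256 -> 256 features
dense_params = 256 * 256
kron_params = 16 * 16 + 16 * 16
print(layer(torch.randn(4, 256)).shape, dense_params, kron_params)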
2503.05245 | Johanna Paula M\"uller | Johanna P. M\"uller, Robert Wright, Thomas G. Day, Lorenzo Venturini,
Samuel F. Budd, Hadrien Reynaud, Joseph V. Hajnal, Reza Razavi, Bernhard
Kainz | L-FUSION: Laplacian Fetal Ultrasound Segmentation & Uncertainty
Estimation | Under Review | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate analysis of prenatal ultrasound (US) is essential for early
detection of developmental anomalies. However, operator dependency and
technical limitations (e.g. intrinsic artefacts and effects, setting errors)
can complicate image interpretation and the assessment of diagnostic
uncertainty. We present L-FUSION (Laplacian Fetal US Segmentation with
Integrated FoundatiON models), a framework that integrates uncertainty
quantification through unsupervised, normative learning and large-scale
foundation models for robust segmentation of fetal structures in normal and
pathological scans. We propose to utilise the aleatoric logit distributions of
Stochastic Segmentation Networks and Laplace approximations with fast Hessian
estimations to estimate epistemic uncertainty only from the segmentation head.
This enables us to achieve reliable abnormality quantification for instant
diagnostic feedback. Combined with an integrated Dropout component, L-FUSION
enables reliable differentiation of lesions from normal fetal anatomy with
enhanced uncertainty maps and segmentation counterfactuals in US imaging. It
improves epistemic and aleatoric uncertainty interpretation and removes the
need for manual disease-labelling. Evaluations across multiple datasets show
that L-FUSION achieves superior segmentation accuracy and consistent
uncertainty quantification, supporting on-site decision-making and offering a
scalable solution for advancing fetal ultrasound analysis in clinical settings.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 08:57:38 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 10:11:17 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Müller",
"Johanna P.",
""
],
[
"Wright",
"Robert",
""
],
[
"Day",
"Thomas G.",
""
],
[
"Venturini",
"Lorenzo",
""
],
[
"Budd",
"Samuel F.",
""
],
[
"Reynaud",
"Hadrien",
""
],
[
"Hajnal",
"Joseph V.",
""
... | TITLE: L-FUSION: Laplacian Fetal Ultrasound Segmentation & Uncertainty
Estimation
ABSTRACT: Accurate analysis of prenatal ultrasound (US) is essential for early
detection of developmental anomalies. However, operator dependency and
technical limitations (e.g. intrinsic artefacts and effects, setting errors)
can complicate image interpretation and the assessment of diagnostic
uncertainty. We present L-FUSION (Laplacian Fetal US Segmentation with
Integrated FoundatiON models), a framework that integrates uncertainty
quantification through unsupervised, normative learning and large-scale
foundation models for robust segmentation of fetal structures in normal and
pathological scans. We propose to utilise the aleatoric logit distributions of
Stochastic Segmentation Networks and Laplace approximations with fast Hessian
estimations to estimate epistemic uncertainty only from the segmentation head.
This enables us to achieve reliable abnormality quantification for instant
diagnostic feedback. Combined with an integrated Dropout component, L-FUSION
enables reliable differentiation of lesions from normal fetal anatomy with
enhanced uncertainty maps and segmentation counterfactuals in US imaging. It
improves epistemic and aleatoric uncertainty interpretation and removes the
need for manual disease-labelling. Evaluations across multiple datasets show
that L-FUSION achieves superior segmentation accuracy and consistent
uncertainty quantification, supporting on-site decision-making and offering a
scalable solution for advancing fetal ultrasound analysis in clinical settings.
|
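The L-FUSION record above estimates epistemic uncertainty with a Laplace approximation restricted to the segmentation head. The sketch below is a minimal diagonal variant on a toy two-class head, using squared gradients as a crude Fisher/Hessian proxy and weight sampling to obtain an uncertainty map; the network, data, and prior precision are placeholders, and the paper's Stochastic Segmentation Networks and fast Hessian estimators are not reproduced.

# Sketch (assumptions): diagonal last-layer Laplace approximation on a toy
# segmentation head; epistemic uncertainty from sampled head weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
backbone = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
head = nn.Conv2d(8, 2, 1)                      # hypothetical 2-class segmentation head

images = torch.randn(16, 1, 32, 32)
labels = torch.randint(0, 2, (16, 32, 32))

# Diagonal Fisher of the head parameters only, accumulated over the toy data.
fisher = [torch.zeros_like(p) for p in head.parameters()]
for x, y in zip(images, labels):
    with torch.no_grad():
        feats = backbone(x[None])              # gradients never reach the backbone
    head.zero_grad()
    loss = F.cross_entropy(head(feats), y[None])
    loss.backward()
    for f, p in zip(fisher, head.parameters()):
        f += p.grad.detach() ** 2

prior_precision = 1.0
posterior_var = [1.0 / (prior_precision + f) for f in fisher]

# Epistemic uncertainty: sample head weights from the Laplace posterior and
# measure disagreement of the predicted class probabilities.
mean = [p.detach().clone() for p in head.parameters()]
probs = []
with torch.no_grad():
    feats = backbone(images[:1])
    for _ in range(20):
        for p, m, v in zip(head.parameters(), mean, posterior_var):
            p.copy_(m + v.sqrt() * torch.randn_like(m))
        probs.append(F.softmax(head(feats), dim=1))
epistemic = torch.stack(probs).var(dim=0).mean(dim=1)   # (1, 32, 32) uncertainty map
print(epistemic.shape)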
2503.06001 | Keyao Zhan | Keyao Zhan, Puheng Li, Lei Wu | Analyzing the Role of Permutation Invariance in Linear Mode Connectivity | Accepted at AISTATS 2025 | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | It was empirically observed in Entezari et al. (2021) that when accounting
for the permutation invariance of neural networks, there is likely no loss
barrier along the linear interpolation between two SGD solutions -- a
phenomenon known as linear mode connectivity (LMC) modulo permutation. This
phenomenon has sparked significant attention due to both its theoretical
interest and practical relevance in applications such as model merging. In this
paper, we provide a fine-grained analysis of this phenomenon for two-layer ReLU
networks under a teacher-student setup. We show that as the student network
width $m$ increases, the LMC loss barrier modulo permutation exhibits a double
descent behavior. Particularly, when $m$ is sufficiently large, the barrier
decreases to zero at a rate $O(m^{-1/2})$. Notably, this rate does not suffer
from the curse of dimensionality and demonstrates how substantial permutation
can reduce the LMC loss barrier. Moreover, we observe a sharp transition in the
sparsity of GD/SGD solutions when increasing the learning rate and investigate
how this sparsity preference affects the LMC loss barrier modulo permutation.
Experiments on both synthetic and MNIST datasets corroborate our theoretical
predictions and reveal a similar trend for more complex network architectures.
| [
{
"version": "v1",
"created": "Sat, 8 Mar 2025 01:12:27 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 16:22:51 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhan",
"Keyao",
""
],
[
"Li",
"Puheng",
""
],
[
"Wu",
"Lei",
""
]
] | TITLE: Analyzing the Role of Permutation Invariance in Linear Mode Connectivity
ABSTRACT: It was empirically observed in Entezari et al. (2021) that when accounting
for the permutation invariance of neural networks, there is likely no loss
barrier along the linear interpolation between two SGD solutions -- a
phenomenon known as linear mode connectivity (LMC) modulo permutation. This
phenomenon has sparked significant attention due to both its theoretical
interest and practical relevance in applications such as model merging. In this
paper, we provide a fine-grained analysis of this phenomenon for two-layer ReLU
networks under a teacher-student setup. We show that as the student network
width $m$ increases, the LMC loss barrier modulo permutation exhibits a double
descent behavior. Particularly, when $m$ is sufficiently large, the barrier
decreases to zero at a rate $O(m^{-1/2})$. Notably, this rate does not suffer
from the curse of dimensionality and demonstrates how substantial permutation
can reduce the LMC loss barrier. Moreover, we observe a sharp transition in the
sparsity of GD/SGD solutions when increasing the learning rate and investigate
how this sparsity preference affects the LMC loss barrier modulo permutation.
Experiments on both synthetic and MNIST datasets corroborate our theoretical
predictions and reveal a similar trend for more complex network architectures.
|
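The record above studies the linear mode connectivity (LMC) loss barrier modulo permutation for two-layer ReLU networks. The sketch below shows how such a barrier is typically measured: match hidden units of one network to the other with Hungarian assignment on weight similarity, apply the permutation, then scan the linear interpolation. The two "solutions" here are random networks, so it only demonstrates the measurement procedure, not the paper's double descent result.

# Sketch: loss barrier along a linear interpolation of two two-layer ReLU nets,
# with and without permutation alignment of hidden units. Toy data and random
# "solutions" only; not the authors' experimental code.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d, m, n = 10, 64, 2000                        # input dim, width, samples
X = rng.normal(size=(n, d))
y = np.maximum(X @ rng.normal(size=(d, 4)), 0).sum(axis=1)   # toy regression target

def loss(W, a):
    return np.mean((np.maximum(X @ W.T, 0) @ a - y) ** 2)

W1, a1 = rng.normal(size=(m, d)), rng.normal(size=m) / m
W2, a2 = rng.normal(size=(m, d)), rng.normal(size=m) / m

# Align hidden units of net 2 to net 1 by maximizing weight-vector similarity.
cost = -(W1 @ W2.T)                           # entry (i, j) ~ -similarity
rows, cols = linear_sum_assignment(cost)
W2p, a2p = W2[cols], a2[cols]

def barrier(Wa, aa, Wb, ab, steps=21):
    ts = np.linspace(0, 1, steps)
    losses = [loss((1 - t) * Wa + t * Wb, (1 - t) * aa + t * ab) for t in ts]
    chord = [(1 - t) * losses[0] + t * losses[-1] for t in ts]
    return max(l - c for l, c in zip(losses, chord))

print("barrier (naive):             ", barrier(W1, a1, W2, a2))
print("barrier (modulo permutation):", barrier(W1, a1, W2p, a2p))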
2503.06677 | Di Wu | Di Wu, Liu Liu, Zhou Linli, Anran Huang, Liangtu Song, Qiaojun Yu, Qi
Wu, Cewu Lu | REArtGS: Reconstructing and Generating Articulated Objects via 3D
Gaussian Splatting with Geometric and Motion Constraints | 11 pages, 6 figures | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Articulated objects are prevalent entities in human life, and their 3D
representations play crucial roles across various applications. However,
achieving both high-fidelity textured surface reconstruction and dynamic
generation for articulated objects remains challenging for existing methods. In
this paper, we present REArtGS, a novel framework that introduces additional
geometric and motion constraints to 3D Gaussian primitives, enabling
high-quality textured surface reconstruction and generation for articulated
objects. Specifically, given multi-view RGB images of two arbitrary states of
articulated objects, we first introduce an unbiased Signed Distance Field (SDF)
guidance to regularize Gaussian opacity fields, enhancing geometry constraints
and improving surface reconstruction quality. Then we establish deformable
fields for 3D Gaussians constrained by the kinematic structures of articulated
objects, achieving unsupervised generation of surface meshes in unseen states.
Extensive experiments on both synthetic and real datasets demonstrate our
approach achieves high-quality textured surface reconstruction for given
states, and enables high-fidelity surface generation for unseen states. Codes
will be released within the next four months and the project website is at
https://sites.google.com/view/reartgs/home.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 16:05:36 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 02:50:33 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Wu",
"Di",
""
],
[
"Liu",
"Liu",
""
],
[
"Linli",
"Zhou",
""
],
[
"Huang",
"Anran",
""
],
[
"Song",
"Liangtu",
""
],
[
"Yu",
"Qiaojun",
""
],
[
"Wu",
"Qi",
""
],
[
"Lu",
"Cewu",
""
]
] | TITLE: REArtGS: Reconstructing and Generating Articulated Objects via 3D
Gaussian Splatting with Geometric and Motion Constraints
ABSTRACT: Articulated objects are prevalent entities in human life, and their 3D
representations play crucial roles across various applications. However,
achieving both high-fidelity textured surface reconstruction and dynamic
generation for articulated objects remains challenging for existing methods. In
this paper, we present REArtGS, a novel framework that introduces additional
geometric and motion constraints to 3D Gaussian primitives, enabling
high-quality textured surface reconstruction and generation for articulated
objects. Specifically, given multi-view RGB images of two arbitrary states of
articulated objects, we first introduce an unbiased Signed Distance Field (SDF)
guidance to regularize Gaussian opacity fields, enhancing geometry constraints
and improving surface reconstruction quality. Then we establish deformable
fields for 3D Gaussians constrained by the kinematic structures of articulated
objects, achieving unsupervised generation of surface meshes in unseen states.
Extensive experiments on both synthetic and real datasets demonstrate our
approach achieves high-quality textured surface reconstruction for given
states, and enables high-fidelity surface generation for unseen states. Codes
will be released within the next four months and the project website is at
https://sites.google.com/view/reartgs/home.
|
2503.06770 | Kentaro Hoffman | Simon Nguyen, Kentaro Hoffman, Tyler McCormick | Unique Rashomon Sets for Robust Active Learning | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Collecting labeled data for machine learning models is often expensive and
time-consuming. Active learning addresses this challenge by selectively
labeling the most informative observations, but when initial labeled data is
limited, it becomes difficult to distinguish genuinely informative points from
those appearing uncertain primarily due to noise. Ensemble methods like random
forests are a powerful approach to quantifying this uncertainty but do so by
aggregating all models indiscriminately. This includes poor-performing models
and redundant models, a problem that worsens in the presence of noisy data. We
introduce UNique Rashomon Ensembled Active Learning (UNREAL), which selectively
ensembles only distinct models from the Rashomon set, which is the set of
nearly optimal models. Restricting ensemble membership to high-performing
models with different explanations helps distinguish genuine uncertainty from
noise-induced variation. We show that UNREAL achieves faster theoretical
convergence rates than traditional active learning approaches and demonstrates
empirical improvements of up to 20% in predictive accuracy across five
benchmark datasets, while simultaneously enhancing model interpretability.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 20:50:34 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 01:53:55 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Nguyen",
"Simon",
""
],
[
"Hoffman",
"Kentaro",
""
],
[
"McCormick",
"Tyler",
""
]
] | TITLE: Unique Rashomon Sets for Robust Active Learning
ABSTRACT: Collecting labeled data for machine learning models is often expensive and
time-consuming. Active learning addresses this challenge by selectively
labeling the most informative observations, but when initial labeled data is
limited, it becomes difficult to distinguish genuinely informative points from
those appearing uncertain primarily due to noise. Ensemble methods like random
forests are a powerful approach to quantifying this uncertainty but do so by
aggregating all models indiscriminately. This includes poor-performing models
and redundant models, a problem that worsens in the presence of noisy data. We
introduce UNique Rashomon Ensembled Active Learning (UNREAL), which selectively
ensembles only distinct models from the Rashomon set, which is the set of
nearly optimal models. Restricting ensemble membership to high-performing
models with different explanations helps distinguish genuine uncertainty from
noise-induced variation. We show that UNREAL achieves faster theoretical
convergence rates than traditional active learning approaches and demonstrates
empirical improvements of up to 20% in predictive accuracy across five
benchmark datasets, while simultaneously enhancing model interpretability.
|
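The UNREAL record above ensembles only near-optimal models (the Rashomon set) before scoring unlabeled points. The sketch below illustrates that general recipe with decision trees and a variance-based disagreement query; UNREAL's selection of models with distinct explanations and its convergence analysis are not reproduced, and the epsilon threshold and data are arbitrary.

# Sketch: keep only near-optimal ("Rashomon") models from an ensemble, then
# query the unlabeled point on which those models disagree most. Toy setup only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_lab, X_pool, y_lab, y_pool = train_test_split(X, y, train_size=40, random_state=0)

# A diverse ensemble on the small labeled set (bootstrap resamples, varying depth).
rng = np.random.default_rng(0)
models, errors = [], []
for depth in [2, 3, 4, 5]:
    for _ in range(10):
        idx = rng.integers(0, len(X_lab), len(X_lab))
        clf = DecisionTreeClassifier(max_depth=depth, random_state=int(rng.integers(10**6)))
        clf.fit(X_lab[idx], y_lab[idx])
        models.append(clf)
        errors.append(1.0 - accuracy_score(y_lab, clf.predict(X_lab)))

# Rashomon set: models whose labeled-set error is within epsilon of the best one.
errors = np.array(errors)
epsilon = 0.05
rashomon = [m for m, e in zip(models, errors) if e <= errors.min() + epsilon]
print(f"{len(rashomon)} of {len(models)} models are in the Rashomon set")

# Query the unlabeled point on which the Rashomon members disagree the most.
probs = np.stack([m.predict_proba(X_pool)[:, 1] for m in rashomon])
query_idx = int(probs.var(axis=0).argmax())
print("next pool index to label:", query_idx)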
2503.06955 | Zeyu Zhang | Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Rui Zhao, Biao Wu, Zirui
Song, Bohan Zhuang, Ian Reid, Richard Hartley | Motion Anything: Any to Motion Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Conditional motion generation has been extensively studied in computer
vision, yet two critical challenges remain. First, while masked autoregressive
methods have recently outperformed diffusion-based approaches, existing masking
models lack a mechanism to prioritize dynamic frames and body parts based on
given conditions. Second, existing methods for different conditioning
modalities often fail to integrate multiple modalities effectively, limiting
control and coherence in generated motion. To address these challenges, we
propose Motion Anything, a multimodal motion generation framework that
introduces an Attention-based Mask Modeling approach, enabling fine-grained
spatial and temporal control over key frames and actions. Our model adaptively
encodes multimodal conditions, including text and music, improving
controllability. Additionally, we introduce Text-Music-Dance (TMD), a new
motion dataset consisting of 2,153 pairs of text, music, and dance, making it
twice the size of AIST++, thereby filling a critical gap in the community.
Extensive experiments demonstrate that Motion Anything surpasses
state-of-the-art methods across multiple benchmarks, achieving a 15%
improvement in FID on HumanML3D and showing consistent performance gains on
AIST++ and TMD. See our project website
https://steve-zeyu-zhang.github.io/MotionAnything
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 06:04:31 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 01:45:04 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhang",
"Zeyu",
""
],
[
"Wang",
"Yiran",
""
],
[
"Mao",
"Wei",
""
],
[
"Li",
"Danning",
""
],
[
"Zhao",
"Rui",
""
],
[
"Wu",
"Biao",
""
],
[
"Song",
"Zirui",
""
],
[
"Zhuang",
"Bohan",
""
... | TITLE: Motion Anything: Any to Motion Generation
ABSTRACT: Conditional motion generation has been extensively studied in computer
vision, yet two critical challenges remain. First, while masked autoregressive
methods have recently outperformed diffusion-based approaches, existing masking
models lack a mechanism to prioritize dynamic frames and body parts based on
given conditions. Second, existing methods for different conditioning
modalities often fail to integrate multiple modalities effectively, limiting
control and coherence in generated motion. To address these challenges, we
propose Motion Anything, a multimodal motion generation framework that
introduces an Attention-based Mask Modeling approach, enabling fine-grained
spatial and temporal control over key frames and actions. Our model adaptively
encodes multimodal conditions, including text and music, improving
controllability. Additionally, we introduce Text-Music-Dance (TMD), a new
motion dataset consisting of 2,153 pairs of text, music, and dance, making it
twice the size of AIST++, thereby filling a critical gap in the community.
Extensive experiments demonstrate that Motion Anything surpasses
state-of-the-art methods across multiple benchmarks, achieving a 15%
improvement in FID on HumanML3D and showing consistent performance gains on
AIST++ and TMD. See our project website
https://steve-zeyu-zhang.github.io/MotionAnything
|
2503.07033 | HaoLong Ma | Haolong Ma, Hui Li, Chunyang Cheng, Zeyang Zhang, Xiaoning Song,
Xiao-Jun Wu | Learning a Unified Degradation-aware Representation Model for
Multi-modal Image Fusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | All-in-One Degradation-Aware Fusion Models (ADFMs), a class of multi-modal
image fusion models, address complex scenes by mitigating degradations from
source images and generating high-quality fused images. Mainstream ADFMs often
rely on highly synthetic multi-modal multi-quality images for supervision,
limiting their effectiveness in cross-modal and rare degradation scenarios. The
inherent relationship among these multi-modal, multi-quality images of the same
scene provides explicit supervision for training, but also gives rise to the
problems above. To address these limitations, we present LURE, a Learning-driven
Unified Representation model for infrared and visible Image Fusion, which is
degradation-aware. LURE decouples multi-modal multi-quality data at the data
level and recouples this relationship in a unified latent feature space (ULFS)
by proposing a novel unified loss. This decoupling circumvents data-level
limitations of prior models and allows leveraging real-world restoration
datasets for training high-quality degradation-aware models, sidestepping the
issues above. To enhance text-image interaction, we refine image-text interaction and
residual structures via Text-Guided Attention (TGA) and an inner residual
structure. These enhance the text's spatial perception of images and preserve more
visual details. Experiments show our method outperforms state-of-the-art (SOTA)
methods across general fusion, degradation-aware fusion, and downstream tasks.
The code will be publicly available.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:16:36 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 03:43:50 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Ma",
"Haolong",
""
],
[
"Li",
"Hui",
""
],
[
"Cheng",
"Chunyang",
""
],
[
"Zhang",
"Zeyang",
""
],
[
"Song",
"Xiaoning",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | TITLE: Learning a Unified Degradation-aware Representation Model for
Multi-modal Image Fusion
ABSTRACT: All-in-One Degradation-Aware Fusion Models (ADFMs), a class of multi-modal
image fusion models, address complex scenes by mitigating degradations from
source images and generating high-quality fused images. Mainstream ADFMs often
rely on highly synthetic multi-modal multi-quality images for supervision,
limiting their effectiveness in cross-modal and rare degradation scenarios. The
inherent relationship among these multi-modal, multi-quality images of the same
scene provides explicit supervision for training, but also gives rise to the
problems above. To address these limitations, we present LURE, a Learning-driven
Unified Representation model for infrared and visible Image Fusion, which is
degradation-aware. LURE decouples multi-modal multi-quality data at the data
level and recouples this relationship in a unified latent feature space (ULFS)
by proposing a novel unified loss. This decoupling circumvents data-level
limitations of prior models and allows leveraging real-world restoration
datasets for training high-quality degradation-aware models, sidestepping the
issues above. To enhance text-image interaction, we refine image-text interaction and
residual structures via Text-Guided Attention (TGA) and an inner residual
structure. These enhance the text's spatial perception of images and preserve more
visual details. Experiments show our method outperforms state-of-the-art (SOTA)
methods across general fusion, degradation-aware fusion, and downstream tasks.
The code will be publicly available.
|
2503.07085 | Ruidan Xing | Ruidan Xing, Runyi Huang, Qing Xu, Lei He | RS2V-L: Vehicle-Mounted LiDAR Data Generation from Roadside Sensor
Observations | Upon self-examination, we have found that the data in the
experimental section of our paper is uncertain. To ensure academic rigor, we
are applying for the withdrawal of the paper. We will resubmit it after
reconfirming and correcting the data. Thank you for your understanding | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | End-to-end autonomous driving solutions, which process multi-modal sensory
data to directly generate refined control commands, have become a dominant
paradigm in autonomous driving research. However, these approaches
predominantly depend on single-vehicle data collection for model training and
optimization, resulting in significant challenges such as high data acquisition
and annotation costs, the scarcity of critical driving scenarios, and
fragmented datasets that impede model generalization. To mitigate these
limitations, we introduce RS2V-L, a novel framework for reconstructing and
synthesizing vehicle-mounted LiDAR data from roadside sensor observations.
Specifically, our method transforms roadside LiDAR point clouds into the
vehicle-mounted LiDAR coordinate system by leveraging the target vehicle's
relative pose. Subsequently, high-fidelity vehicle-mounted LiDAR data is
synthesized through virtual LiDAR modeling, point cloud classification, and
resampling techniques. To the best of our knowledge, this is the first approach
to reconstruct vehicle-mounted LiDAR data from roadside sensor inputs.
Extensive experimental evaluations demonstrate that incorporating the generated
data into model training, complementing the KITTI dataset, enhances 3D object
detection accuracy by over 30% while improving the efficiency of
end-to-end autonomous driving data generation by more than an order of
magnitude. These findings strongly validate the effectiveness of the proposed
method and underscore its potential in reducing dependence on costly
vehicle-mounted data collection while improving the robustness of autonomous
driving models.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:08:05 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 14:32:28 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Xing",
"Ruidan",
""
],
[
"Huang",
"Runyi",
""
],
[
"Xu",
"Qing",
""
],
[
"He",
"Lei",
""
]
] | TITLE: RS2V-L: Vehicle-Mounted LiDAR Data Generation from Roadside Sensor
Observations
ABSTRACT: End-to-end autonomous driving solutions, which process multi-modal sensory
data to directly generate refined control commands, have become a dominant
paradigm in autonomous driving research. However, these approaches
predominantly depend on single-vehicle data collection for model training and
optimization, resulting in significant challenges such as high data acquisition
and annotation costs, the scarcity of critical driving scenarios, and
fragmented datasets that impede model generalization. To mitigate these
limitations, we introduce RS2V-L, a novel framework for reconstructing and
synthesizing vehicle-mounted LiDAR data from roadside sensor observations.
Specifically, our method transforms roadside LiDAR point clouds into the
vehicle-mounted LiDAR coordinate system by leveraging the target vehicle's
relative pose. Subsequently, high-fidelity vehicle-mounted LiDAR data is
synthesized through virtual LiDAR modeling, point cloud classification, and
resampling techniques. To the best of our knowledge, this is the first approach
to reconstruct vehicle-mounted LiDAR data from roadside sensor inputs.
Extensive experimental evaluations demonstrate that incorporating the generated
data into model training, complementing the KITTI dataset, enhances 3D object
detection accuracy by over 30% while improving the efficiency of
end-to-end autonomous driving data generation by more than an order of
magnitude. These findings strongly validate the effectiveness of the proposed
method and underscore its potential in reducing dependence on costly
vehicle-mounted data collection while improving the robustness of autonomous
driving models.
|
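The RS2V-L record above starts by transforming roadside LiDAR point clouds into the vehicle-mounted frame using the target vehicle's relative pose. The numpy sketch below shows only that coordinate-transform step under an assumed planar (yaw-only) pose; the virtual LiDAR modeling, point classification, and resampling stages are not shown.

# Sketch: re-express roadside LiDAR points in the vehicle-mounted frame, given
# the target vehicle's pose in the roadside frame. Yaw-only pose is an assumption.
import numpy as np

def pose_to_matrix(x, y, z, yaw):
    """4x4 homogeneous pose of the target vehicle expressed in the roadside frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = [x, y, z]
    return T

def roadside_to_vehicle(points_roadside, vehicle_pose_in_roadside):
    """Transform roadside points (N, 3) into the vehicle-mounted frame."""
    T_vehicle_from_roadside = np.linalg.inv(vehicle_pose_in_roadside)
    homo = np.hstack([points_roadside, np.ones((len(points_roadside), 1))])
    return (homo @ T_vehicle_from_roadside.T)[:, :3]

# Toy example: a point 5 m ahead of a vehicle located at (10, 2, 0) heading +x.
pose = pose_to_matrix(10.0, 2.0, 0.0, yaw=0.0)
pts = np.array([[15.0, 2.0, 0.5]])
print(roadside_to_vehicle(pts, pose))   # ~[[5.0, 0.0, 0.5]]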
2503.07168 | Jing Yang | Jing Yang and Sen Yang and Xiao Tan and Hanli Wang | HisTrackMap: Global Vectorized High-Definition Map Construction via
History Map Tracking | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | As an essential component of autonomous driving systems, high-definition (HD)
maps provide rich and precise environmental information for auto-driving
scenarios; however, existing methods, which primarily rely on query-based
detection frameworks to directly model map elements or implicitly propagate
queries over time, often struggle to maintain consistent temporal perception
outcomes. These inconsistencies pose significant challenges to the stability
and reliability of real-world autonomous driving and map data collection
systems. To address this limitation, we propose a novel end-to-end tracking
framework for global map construction by temporally tracking map elements'
historical trajectories. Firstly, instance-level historical rasterization map
representation is designed to explicitly store previous perception results,
which can control and maintain different global instances' history information
in a fine-grained way. Secondly, we introduce a Map-Trajectory Prior Fusion
module within this tracking framework, leveraging historical priors for tracked
instances to improve temporal smoothness and continuity. Thirdly, we propose a
global perspective metric to evaluate the quality of temporal geometry
construction in HD maps, filling the gap in current metrics for assessing
global geometric perception results. Substantial experiments on the nuScenes
and Argoverse2 datasets demonstrate that the proposed method outperforms
state-of-the-art (SOTA) methods in both single-frame and temporal metrics. The
project page is available at: https://yj772881654.github.io/HisTrackMap.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:44:43 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 08:21:55 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Yang",
"Jing",
""
],
[
"Yang",
"Sen",
""
],
[
"Tan",
"Xiao",
""
],
[
"Wang",
"Hanli",
""
]
] | TITLE: HisTrackMap: Global Vectorized High-Definition Map Construction via
History Map Tracking
ABSTRACT: As an essential component of autonomous driving systems, high-definition (HD)
maps provide rich and precise environmental information for auto-driving
scenarios; however, existing methods, which primarily rely on query-based
detection frameworks to directly model map elements or implicitly propagate
queries over time, often struggle to maintain consistent temporal perception
outcomes. These inconsistencies pose significant challenges to the stability
and reliability of real-world autonomous driving and map data collection
systems. To address this limitation, we propose a novel end-to-end tracking
framework for global map construction by temporally tracking map elements'
historical trajectories. Firstly, an instance-level historical rasterization map
representation is designed to explicitly store previous perception results,
which can control and maintain different global instances' history information
in a fine-grained way. Secondly, we introduce a Map-Trajectory Prior Fusion
module within this tracking framework, leveraging historical priors for tracked
instances to improve temporal smoothness and continuity. Thirdly, we propose a
global perspective metric to evaluate the quality of temporal geometry
construction in HD maps, filling the gap in current metrics for assessing
global geometric perception results. Substantial experiments on the nuScenes
and Argoverse2 datasets demonstrate that the proposed method outperforms
state-of-the-art (SOTA) methods in both single-frame and temporal metrics. The
project page is available at: https://yj772881654.github.io/HisTrackMap.
|
2503.07878 | Bhanu Tokas | Rahul Nair, Bhanu Tokas, Neel Shah and Hannah Kerner | Measuring directional bias amplification in image captions using
predictability | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | When we train models on biased ML datasets, they not only learn these biases
but can inflate them at test time - a phenomenon called bias amplification. To
measure bias amplification in ML datasets, many co-occurrence-based metrics
have been proposed. Co-occurrence-based metrics are effective in measuring bias
amplification in simple problems like image classification. However, these
metrics are ineffective for complex problems like image captioning as they
cannot capture the semantics of a caption. To measure bias amplification in
captions, prior work introduced a predictability-based metric called Leakage in
Captioning (LIC). While LIC captures the semantics and context of captions, it
has limitations. LIC cannot identify the direction in which bias is amplified,
poorly estimates dataset bias due to a weak vocabulary substitution strategy,
and is highly sensitive to attacker models (a hyperparameter in
predictability-based metrics). To overcome these issues, we propose Directional
Predictability Amplification in Captioning (DPAC). DPAC measures directional
bias amplification in captions, provides a better estimate of dataset bias
using an improved substitution strategy, and is less sensitive to attacker
models. Our experiments on the COCO captioning dataset show that DPAC is the
most reliable metric for measuring bias amplification in captions.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 21:50:58 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 02:47:54 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Nair",
"Rahul",
""
],
[
"Tokas",
"Bhanu",
""
],
[
"Shah",
"Neel",
""
],
[
"Kerner",
"Hannah",
""
]
] | TITLE: Measuring directional bias amplification in image captions using
predictability
ABSTRACT: When we train models on biased ML datasets, they not only learn these biases
but can inflate them at test time - a phenomenon called bias amplification. To
measure bias amplification in ML datasets, many co-occurrence-based metrics
have been proposed. Co-occurrence-based metrics are effective in measuring bias
amplification in simple problems like image classification. However, these
metrics are ineffective for complex problems like image captioning as they
cannot capture the semantics of a caption. To measure bias amplification in
captions, prior work introduced a predictability-based metric called Leakage in
Captioning (LIC). While LIC captures the semantics and context of captions, it
has limitations. LIC cannot identify the direction in which bias is amplified,
poorly estimates dataset bias due to a weak vocabulary substitution strategy,
and is highly sensitive to attacker models (a hyperparameter in
predictability-based metrics). To overcome these issues, we propose Directional
Predictability Amplification in Captioning (DPAC). DPAC measures directional
bias amplification in captions, provides a better estimate of dataset bias
using an improved substitution strategy, and is less sensitive to attacker
models. Our experiments on the COCO captioning dataset show that DPAC is the
most reliable metric for measuring bias amplification in captions.
|
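The record above builds on predictability-based leakage metrics (LIC and the proposed DPAC). The sketch below shows the underlying idea on made-up toy captions: mask gendered words, train an attacker classifier to recover gender from masked human captions and from masked model captions, and compare the two accuracies; DPAC's directional formulation and improved substitution strategy are not reproduced, and the caption lists are purely illustrative.

# Sketch: a LIC-style predictability (leakage) measurement on toy captions.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

GENDER_WORDS = r"\b(man|woman|men|women|he|she|his|her|boy|girl)\b"

def mask(caption):
    return re.sub(GENDER_WORDS, "<person>", caption.lower())

def leakage(captions, genders):
    """Attacker accuracy at predicting gender from gender-masked captions."""
    attacker = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    masked = [mask(c) for c in captions]
    return cross_val_score(attacker, masked, genders, cv=3).mean()

# Hypothetical toy data standing in for human and model-generated captions.
human = ["a man is cooking dinner", "a woman is fixing a car",
         "a woman reads a book", "a man rides a motorcycle",
         "a man plays a guitar", "a woman is painting"] * 4
model = ["a man cooking some food", "a woman fixing an old car",
         "a woman reading a long book", "a man riding a fast motorcycle",
         "a man playing an electric guitar", "a woman painting a wall"] * 4
genders = ["m", "f", "f", "m", "m", "f"] * 4

leak_data, leak_model = leakage(human, genders), leakage(model, genders)
print(f"dataset leakage {leak_data:.2f}, model leakage {leak_model:.2f}, "
      f"amplification {leak_model - leak_data:+.2f}")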
2503.08046 | Xuan Lu | Xuan Lu, Sifan Liu, Bochao Yin, Yongqi Li, Xinghao Chen, Hui Su,
Yaohui Jin, Wenjun Zeng, Xiaoyu Shen | MultiConIR: Towards multi-condition Information Retrieval | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce MultiConIR, the first benchmark designed to
evaluate retrieval models in multi-condition scenarios. Unlike existing
datasets that primarily focus on single-condition queries from search engines,
MultiConIR captures real-world complexity by incorporating five diverse
domains: books, movies, people, medical cases, and legal documents. We propose
three tasks to systematically assess retrieval and reranking models on
multi-condition robustness, monotonic relevance ranking, and query format
sensitivity. Our findings reveal that existing retrieval and reranking models
struggle with multi-condition retrieval, with rerankers suffering severe
performance degradation as query complexity increases. We further investigate
the performance gap between retrieval and reranking models, exploring potential
reasons for these discrepancies, and analyze the impact of different pooling
strategies on condition placement sensitivity. Finally, we highlight the
strengths of GritLM and Nv-Embed, which demonstrate enhanced adaptability to
multi-condition queries, offering insights for future retrieval models. The
code and datasets are available at https://github.com/EIT-NLP/MultiConIR.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:02:03 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 02:13:15 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Lu",
"Xuan",
""
],
[
"Liu",
"Sifan",
""
],
[
"Yin",
"Bochao",
""
],
[
"Li",
"Yongqi",
""
],
[
"Chen",
"Xinghao",
""
],
[
"Su",
"Hui",
""
],
[
"Jin",
"Yaohui",
""
],
[
"Zeng",
"Wenjun",
""
... | TITLE: MultiConIR: Towards multi-condition Information Retrieval
ABSTRACT: In this paper, we introduce MultiConIR, the first benchmark designed to
evaluate retrieval models in multi-condition scenarios. Unlike existing
datasets that primarily focus on single-condition queries from search engines,
MultiConIR captures real-world complexity by incorporating five diverse
domains: books, movies, people, medical cases, and legal documents. We propose
three tasks to systematically assess retrieval and reranking models on
multi-condition robustness, monotonic relevance ranking, and query format
sensitivity. Our findings reveal that existing retrieval and reranking models
struggle with multi-condition retrieval, with rerankers suffering severe
performance degradation as query complexity increases. We further investigate
the performance gap between retrieval and reranking models, exploring potential
reasons for these discrepancies, and analyze the impact of different pooling
strategies on condition placement sensitivity. Finally, we highlight the
strengths of GritLM and Nv-Embed, which demonstrate enhanced adaptability to
multi-condition queries, offering insights for future retrieval models. The
code and datasets are available at https://github.com/EIT-NLP/MultiConIR.
|
2503.08229 | Ao Li | Ao Li, Zongfang Liu, Xinhua Li, Jinghui Zhang, Pengwei Wang, Hu Wang | Modeling Variants of Prompts for Vision-Language Models | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large pre-trained vision-language models (VLMs) offer a promising approach to
leveraging human language for enhancing downstream tasks. However, VLMs such as
CLIP face significant limitation: its performance is highly sensitive to prompt
template design. Although prompt learning methods can address the sensitivity
issue by replacing natural language prompts with learnable ones, they are
incomprehensible to humans. Ensuring consistent performance across various
prompt templates enables models to adapt seamlessly to diverse phrasings,
enhancing their ability to handle downstream tasks without requiring extensive
prompt engineering. In this work, we introduce the RobustPrompt Benchmark, a
systematic benchmark to evaluate robustness to different prompt templates for
VLMs. It includes a dataset with hundreds of carefully designed prompt
templates, divided into six types, covering a wide variety of commonly used
templates. Besides the benchmark, we propose Modeling Variants of Prompts (MVP),
a simple yet effective method that mitigates sensitivity by modeling variants
of prompt structures. The innovation of MVP lies in decoupling prompts into
templates and class names, and using Variational Autoencoders (VAE) to model
the distribution of diverse prompt structures. Experiments across 11 datasets
demonstrate that MVP can greatly enhance model robustness to variations in
input prompts without a drop in performance. The code is available at
https://github.com/liaolea/MVP.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 09:46:25 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 12:30:01 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Li",
"Ao",
""
],
[
"Liu",
"Zongfang",
""
],
[
"Li",
"Xinhua",
""
],
[
"Zhang",
"Jinghui",
""
],
[
"Wang",
"Pengwei",
""
],
[
"Wang",
"Hu",
""
]
] | TITLE: Modeling Variants of Prompts for Vision-Language Models
ABSTRACT: Large pre-trained vision-language models (VLMs) offer a promising approach to
leveraging human language for enhancing downstream tasks. However, VLMs such as
CLIP face a significant limitation: their performance is highly sensitive to prompt
template design. Although prompt learning methods can address the sensitivity
issue by replacing natural language prompts with learnable ones, they are
incomprehensible to humans. Ensuring consistent performance across various
prompt templates enables models to adapt seamlessly to diverse phrasings,
enhancing their ability to handle downstream tasks without requiring extensive
prompt engineering. In this work, we introduce the RobustPrompt Benchmark, a
systematic benchmark to evaluate robustness to different prompt templates for
VLMs. It includes a dataset with hundreds of carefully designed prompt
templates, divided into six types, covering a wide variety of commonly used
templates. Besides the benchmark, we propose Modeling Variants of Prompts (MVP),
a simple yet effective method that mitigates sensitivity by modeling variants
of prompt structures. The innovation of MVP lies in decoupling prompts into
templates and class names, and using Variational Autoencoders (VAE) to model
the distribution of diverse prompt structures. Experiments across 11 datasets
demonstrate that MVP can greatly enhance model robustness to variations in
input prompts without a drop in performance. The code is available at
https://github.com/liaolea/MVP.
|
2503.08454 | Xiuying Chen | Zhangming Chan, Xiuying Chen, Yongliang Wang, Juntao Li, Zhiqiang
Zhang, Kun Gai, Dongyan Zhao, Rui Yan | Stick to Facts: Towards Fidelity-oriented Product Description Generation | Accepted by EMNLP 2019 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Different from other text generation tasks, in product description
generation, it is of vital importance to generate faithful descriptions that
stick to the product attribute information. However, little attention has been
paid to this problem. To bridge this gap, we propose a model named
Fidelity-oriented Product Description Generator (FPDG). FPDG takes the entity
label of each word into account, since the product attribute information is
always conveyed by entity words. Specifically, we first propose a Recurrent
Neural Network (RNN) decoder based on the Entity-label-guided Long Short-Term
Memory (ELSTM) cell, taking both the embedding and the entity label of each
word as input. Second, we establish a keyword memory that stores the entity
labels as keys and keywords as values, allowing FPDG to attend to keywords by
attending to their entity labels. Experiments conducted on a large-scale
real-world product description dataset show that our model achieves
state-of-the-art performance in terms of both traditional generation metrics
and human evaluations. Specifically, FPDG increases the fidelity of the
generated descriptions by 25%.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 14:04:24 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 06:41:38 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Chan",
"Zhangming",
""
],
[
"Chen",
"Xiuying",
""
],
[
"Wang",
"Yongliang",
""
],
[
"Li",
"Juntao",
""
],
[
"Zhang",
"Zhiqiang",
""
],
[
"Gai",
"Kun",
""
],
[
"Zhao",
"Dongyan",
""
],
[
"Yan",
... | TITLE: Stick to Facts: Towards Fidelity-oriented Product Description Generation
ABSTRACT: Different from other text generation tasks, in product description
generation, it is of vital importance to generate faithful descriptions that
stick to the product attribute information. However, little attention has been
paid to this problem. To bridge this gap, we propose a model named
Fidelity-oriented Product Description Generator (FPDG). FPDG takes the entity
label of each word into account, since the product attribute information is
always conveyed by entity words. Specifically, we first propose a Recurrent
Neural Network (RNN) decoder based on the Entity-label-guided Long Short-Term
Memory (ELSTM) cell, taking both the embedding and the entity label of each
word as input. Second, we establish a keyword memory that stores the entity
labels as keys and keywords as values, allowing FPDG to attend to keywords by
attending to their entity labels. Experiments conducted on a large-scale
real-world product description dataset show that our model achieves
state-of-the-art performance in terms of both traditional generation metrics
and human evaluations. Specifically, FPDG increases the fidelity of the
generated descriptions by 25%.
|
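The FPDG record above feeds the entity label of each word into its decoder. The PyTorch sketch below shows only the simplest version of that idea, concatenating word and entity-label embeddings as decoder input; FPDG's ELSTM cell modifies the recurrent gates themselves and is not reproduced here, and all sizes are placeholders.

# Sketch (assumption): word + entity-label embeddings fed jointly to an LSTM decoder.
import torch
import torch.nn as nn

vocab_size, n_labels, d_word, d_label, d_hidden = 1000, 8, 64, 16, 128
word_emb = nn.Embedding(vocab_size, d_word)
label_emb = nn.Embedding(n_labels, d_label)
decoder = nn.LSTM(d_word + d_label, d_hidden, batch_first=True)

words = torch.randint(0, vocab_size, (4, 12))     # (batch, seq) word ids
labels = torch.randint(0, n_labels, (4, 12))      # entity label of each word
inputs = torch.cat([word_emb(words), label_emb(labels)], dim=-1)
outputs, _ = decoder(inputs)
print(outputs.shape)                              # torch.Size([4, 12, 128])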
2503.08581 | Jiangping Wen | Jiangping Wen, Jinyu Wen, Meie Fang | MsaMIL-Net: An End-to-End Multi-Scale Aware Multiple Instance Learning
Network for Efficient Whole Slide Image Classification | submitted to ICCV 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bag-based Multiple Instance Learning (MIL) approaches have emerged as the
mainstream methodology for Whole Slide Image (WSI) classification. However,
most existing methods adopt a segmented training strategy, which first extracts
features using a pre-trained feature extractor and then aggregates these
features through MIL. This segmented training approach leads to insufficient
collaborative optimization between the feature extraction network and the MIL
network, preventing end-to-end joint optimization and thereby limiting the
overall performance of the model. Additionally, conventional methods typically
extract features from all patches of fixed size, ignoring the multi-scale
observation characteristics of pathologists. This not only results in
significant computational resource waste when tumor regions represent a minimal
proportion (as in the Camelyon16 dataset) but may also lead the model to
suboptimal solutions.
To address these limitations, this paper proposes an end-to-end multi-scale
WSI classification framework that integrates multi-scale feature extraction
with multiple instance learning. Specifically, our approach includes: (1) a
semantic feature filtering module to reduce interference from non-lesion areas;
(2) a multi-scale feature extraction module to capture pathological information
at different levels; and (3) a multi-scale fusion MIL module for global
modeling and feature integration. Through an end-to-end training strategy, we
simultaneously optimize both the feature extractor and MIL network, ensuring
maximum compatibility between them.
Experiments were conducted on three cross-center datasets (DigestPath2019,
BCNB, and UBC-OCEAN). Results demonstrate that our proposed method outperforms
existing state-of-the-art approaches in terms of both accuracy (ACC) and AUC
metrics.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 16:16:44 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 09:27:31 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Wen",
"Jiangping",
""
],
[
"Wen",
"Jinyu",
""
],
[
"Fang",
"Meie",
""
]
] | TITLE: MsaMIL-Net: An End-to-End Multi-Scale Aware Multiple Instance Learning
Network for Efficient Whole Slide Image Classification
ABSTRACT: Bag-based Multiple Instance Learning (MIL) approaches have emerged as the
mainstream methodology for Whole Slide Image (WSI) classification. However,
most existing methods adopt a segmented training strategy, which first extracts
features using a pre-trained feature extractor and then aggregates these
features through MIL. This segmented training approach leads to insufficient
collaborative optimization between the feature extraction network and the MIL
network, preventing end-to-end joint optimization and thereby limiting the
overall performance of the model. Additionally, conventional methods typically
extract features from all patches of fixed size, ignoring the multi-scale
observation characteristics of pathologists. This not only results in
significant computational resource waste when tumor regions represent a minimal
proportion (as in the Camelyon16 dataset) but may also lead the model to
suboptimal solutions.
To address these limitations, this paper proposes an end-to-end multi-scale
WSI classification framework that integrates multi-scale feature extraction
with multiple instance learning. Specifically, our approach includes: (1) a
semantic feature filtering module to reduce interference from non-lesion areas;
(2) a multi-scale feature extraction module to capture pathological information
at different levels; and (3) a multi-scale fusion MIL module for global
modeling and feature integration. Through an end-to-end training strategy, we
simultaneously optimize both the feature extractor and MIL network, ensuring
maximum compatibility between them.
Experiments were conducted on three cross-center datasets (DigestPath2019,
BCNB, and UBC-OCEAN). Results demonstrate that our proposed method outperforms
existing state-of-the-art approaches in terms of both accuracy (ACC) and AUC
metrics.
|
2503.08700 | Julien Posso | Julien Posso, Hugo Kieffer, Nicolas Menga, Omar Hlimi, S\'ebastien
Tarris, Hubert Guerard, Guy Bois, Matthieu Couderc, Eric Jenn | Real-Time Semantic Segmentation of Aerial Images Using an Embedded
U-Net: A Comparison of CPU, GPU, and FPGA Workflows | ERTS2024, Jun 2024, Toulouse, France | null | null | null | cs.CV cs.AI cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study introduces a lightweight U-Net model optimized for real-time
semantic segmentation of aerial images, targeting the efficient utilization of
Commercial Off-The-Shelf (COTS) embedded computing platforms. We maintain the
accuracy of the U-Net on a real-world dataset while significantly reducing the
model's parameters and Multiply-Accumulate (MAC) operations by a factor of 16.
Our comprehensive analysis covers three hardware platforms (CPU, GPU, and FPGA)
and five different toolchains (TVM, FINN, Vitis AI, TensorFlow GPU, and cuDNN),
assessing each on metrics such as latency, power consumption, memory footprint,
energy efficiency, and FPGA resource usage. The results highlight the
trade-offs between these platforms and toolchains, with a particular focus on
the practical deployment challenges in real-world applications. Our findings
demonstrate that while the FPGA with Vitis AI emerges as the superior choice
due to its performance, energy efficiency, and maturity, it requires
specialized hardware knowledge, emphasizing the need for a balanced approach in
selecting embedded computing solutions for semantic segmentation tasks
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 08:33:28 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Posso",
"Julien",
""
],
[
"Kieffer",
"Hugo",
""
],
[
"Menga",
"Nicolas",
""
],
[
"Hlimi",
"Omar",
""
],
[
"Tarris",
"Sébastien",
""
],
[
"Guerard",
"Hubert",
""
],
[
"Bois",
"Guy",
""
],
[
"Couderc... | TITLE: Real-Time Semantic Segmentation of Aerial Images Using an Embedded
U-Net: A Comparison of CPU, GPU, and FPGA Workflows
ABSTRACT: This study introduces a lightweight U-Net model optimized for real-time
semantic segmentation of aerial images, targeting the efficient utilization of
Commercial Off-The-Shelf (COTS) embedded computing platforms. We maintain the
accuracy of the U-Net on a real-world dataset while significantly reducing the
model's parameters and Multiply-Accumulate (MAC) operations by a factor of 16.
Our comprehensive analysis covers three hardware platforms (CPU, GPU, and FPGA)
and five different toolchains (TVM, FINN, Vitis AI, TensorFlow GPU, and cuDNN),
assessing each on metrics such as latency, power consumption, memory footprint,
energy efficiency, and FPGA resource usage. The results highlight the
trade-offs between these platforms and toolchains, with a particular focus on
the practical deployment challenges in real-world applications. Our findings
demonstrate that while the FPGA with Vitis AI emerges as the superior choice
due to its performance, energy efficiency, and maturity, it requires
specialized hardware knowledge, emphasizing the need for a balanced approach in
selecting embedded computing solutions for semantic segmentation tasks
|
2503.08705 | Yajie Wen | Yajie Wen and Defu Zhang | A Block-Based Heuristic Algorithm for the Three-Dimensional Nuclear
Waste Packing Problem | 10 pages, 7 figures | null | null | null | math.OC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we present a block-based heuristic search algorithm to address
the nuclear waste container packing problem in the context of real-world
nuclear power plants. Additionally, we provide a dataset comprising 1600
problem instances for future researchers to use. Experimental results on this
dataset demonstrate that the proposed algorithm effectively enhances the
disposal pool's space utilization while minimizing the radiation dose within
the pool. The code and data employed in this study are publicly available to
facilitate reproducibility and further investigation.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 14:20:48 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Wen",
"Yajie",
""
],
[
"Zhang",
"Defu",
""
]
] | TITLE: A Block-Based Heuristic Algorithm for the Three-Dimensional Nuclear
Waste Packing Problem
ABSTRACT: In this study, we present a block-based heuristic search algorithm to address
the nuclear waste container packing problem in the context of real-world
nuclear power plants. Additionally, we provide a dataset comprising 1600
problem instances for future researchers to use. Experimental results on this
dataset demonstrate that the proposed algorithm effectively enhances the
disposal pool's space utilization while minimizing the radiation dose within
the pool. The code and data employed in this study are publicly available to
facilitate reproducibility and further investigation.
|
2503.08711 | Yajie Wen | Yajie Wen and Defu Zhang | A Beam Search Based Parallel Algorithm for the Two-Dimensional Strip
Packing Problem | 9 pages, 4 figures | null | null | null | math.OC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces BSPA, a parallel algorithm that leverages beam search
to address the two-dimensional strip packing problem. The study begins with a
comprehensive review of existing approaches and methodologies, followed by a
detailed presentation of the BSPA algorithm. Experimental results demonstrate
the effectiveness of the proposed method. To facilitate further research, both
the code and datasets are publicly available.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 04:20:45 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Wen",
"Yajie",
""
],
[
"Zhang",
"Defu",
""
]
] | TITLE: A Beam Search Based Parallel Algorithm for the Two-Dimensional Strip
Packing Problem
ABSTRACT: This paper introduces BSPA, a parallel algorithm that leverages beam search
to address the two-dimensional strip packing problem. The study begins with a
comprehensive review of existing approaches and methodologies, followed by a
detailed presentation of the BSPA algorithm. Experimental results demonstrate
the effectiveness of the proposed method. To facilitate further research, both
the code and datasets are publicly available.
|
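The BSPA abstract above describes a beam-search approach to strip packing without giving its details. The following is a minimal, generic beam-search sketch for 2D strip packing using a simple shelf-placement rule; the placement heuristic, scoring by current packing height, and beam width are illustrative assumptions, not the paper's BSPA algorithm.

```python
"""Generic beam-search sketch for 2D strip packing (shelf heuristic).
Illustrative only: the shelf placement rule, scoring, and beam width are
assumptions, not the BSPA algorithm described above."""
import heapq

def beam_search_strip_packing(rects, strip_width, beam_width=5):
    """rects: list of (w, h); returns the best packing height found."""
    # A state is (used_height, shelves); each shelf is [remaining_width, shelf_height].
    beam = [(0, [])]
    for w, h in rects:
        candidates = []
        for used_height, shelves in beam:
            # Option 1: place the rectangle on any existing shelf it fits on.
            for i, (rem_w, shelf_h) in enumerate(shelves):
                if w <= rem_w and h <= shelf_h:
                    new_shelves = [list(s) for s in shelves]
                    new_shelves[i][0] -= w
                    candidates.append((used_height, new_shelves))
            # Option 2: open a new shelf at the top of the strip.
            if w <= strip_width:
                new_shelves = [list(s) for s in shelves] + [[strip_width - w, h]]
                candidates.append((used_height + h, new_shelves))
        # Keep the beam_width states with the lowest packing height.
        beam = heapq.nsmallest(beam_width, candidates, key=lambda s: s[0])
    return min(height for height, _ in beam)

if __name__ == "__main__":
    rects = [(4, 3), (2, 2), (5, 1), (3, 3), (2, 4)]
    print("packing height:", beam_search_strip_packing(rects, strip_width=6))
```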
2503.08712 | Ahmad Chaddad | Yan Hu, Ahmad Chaddad | SHAP-Integrated Convolutional Diagnostic Networks for Feature-Selective
Medical Analysis | 5 pages | ICASSP 2025 | null | null | eess.IV cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study introduces the SHAP-integrated convolutional diagnostic network
(SICDN), an interpretable feature selection method designed for limited
datasets, to address the challenge posed by data privacy regulations that
restrict access to medical datasets. The SICDN model was tested on
classification tasks using pneumonia and breast cancer datasets, demonstrating
over 97% accuracy and surpassing four popular CNN models. We also integrated a
historical weighted moving average technique to enhance feature selection. The
SICDN shows potential in medical image prediction, with the code available on
https://github.com/AIPMLab/SICDN.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:48:35 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Hu",
"Yan",
""
],
[
"Chaddad",
"Ahmad",
""
]
] | TITLE: SHAP-Integrated Convolutional Diagnostic Networks for Feature-Selective
Medical Analysis
ABSTRACT: This study introduces the SHAP-integrated convolutional diagnostic network
(SICDN), an interpretable feature selection method designed for limited
datasets, to address the challenge posed by data privacy regulations that
restrict access to medical datasets. The SICDN model was tested on
classification tasks using pneumonia and breast cancer datasets, demonstrating
over 97% accuracy and surpassing four popular CNN models. We also integrated a
historical weighted moving average technique to enhance feature selection. The
SICDN shows potential in medical image prediction, with the code available on
https://github.com/AIPMLab/SICDN.
|
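The SICDN abstract mentions a "historical weighted moving average technique to enhance feature selection" without specifying it. Below is a minimal sketch of one plausible reading: an exponentially decaying average of per-epoch importance scores used to rank features. The decay factor and the use of per-epoch SHAP-style scores are assumptions, not details from the paper.

```python
"""Minimal sketch: historically weighted moving average of feature importance
scores. Decay factor and per-epoch SHAP-style score history are assumptions."""
import numpy as np

def historical_weighted_average(score_history, decay=0.8):
    """score_history: (epochs, n_features) array of importance scores.
    Recent epochs receive higher weight; returns one score per feature."""
    epochs = score_history.shape[0]
    # Weight epoch t by decay**(epochs - 1 - t), so the latest epoch has weight 1.
    weights = decay ** np.arange(epochs - 1, -1, -1)
    weights /= weights.sum()
    return weights @ score_history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = rng.random((5, 8))             # 5 epochs, 8 candidate features
    scores = historical_weighted_average(history)
    top_k = np.argsort(scores)[::-1][:3]      # select the 3 highest-scoring features
    print("selected feature indices:", top_k)
```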
2503.08716 | Arthur Gervais | Isaac David and Arthur Gervais | AuthorMist: Evading AI Text Detectors with Reinforcement Learning | null | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the age of powerful AI-generated text, automatic detectors have emerged to
identify machine-written content. This poses a threat to author privacy and
freedom, as text authored with AI assistance may be unfairly flagged. We
propose AuthorMist, a novel reinforcement learning-based system to transform
AI-generated text into human-like writing. AuthorMist leverages a
3-billion-parameter language model as a backbone, fine-tuned with Group
Relative Policy Optimization (GRPO) to paraphrase text in a way that evades AI
detectors.
Our framework establishes a generic approach where external detector APIs
(GPTZero, WinstonAI, Originality.ai, etc.) serve as reward functions within the
reinforcement learning loop, enabling the model to systematically learn outputs
that these detectors are less likely to classify as AI-generated. This
API-as-reward methodology can be applied broadly to optimize text against any
detector with an accessible interface. Experiments on multiple datasets and
detectors demonstrate that AuthorMist effectively reduces the detectability of
AI-generated text while preserving the original meaning. Our evaluation shows
attack success rates ranging from 78.6% to 96.2% against individual detectors,
significantly outperforming baseline paraphrasing methods. AuthorMist maintains
high semantic similarity (above 0.94) with the original text while successfully
evading detection. These results highlight limitations in current AI text
detection technologies and raise questions about the sustainability of the
detection-evasion arms race.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:41:05 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"David",
"Isaac",
""
],
[
"Gervais",
"Arthur",
""
]
] | TITLE: AuthorMist: Evading AI Text Detectors with Reinforcement Learning
ABSTRACT: In the age of powerful AI-generated text, automatic detectors have emerged to
identify machine-written content. This poses a threat to author privacy and
freedom, as text authored with AI assistance may be unfairly flagged. We
propose AuthorMist, a novel reinforcement learning-based system to transform
AI-generated text into human-like writing. AuthorMist leverages a
3-billion-parameter language model as a backbone, fine-tuned with Group
Relative Policy Optimization (GRPO) to paraphrase text in a way that evades AI
detectors.
Our framework establishes a generic approach where external detector APIs
(GPTZero, WinstonAI, Originality.ai, etc.) serve as reward functions within the
reinforcement learning loop, enabling the model to systematically learn outputs
that these detectors are less likely to classify as AI-generated. This
API-as-reward methodology can be applied broadly to optimize text against any
detector with an accessible interface. Experiments on multiple datasets and
detectors demonstrate that AuthorMist effectively reduces the detectability of
AI-generated text while preserving the original meaning. Our evaluation shows
attack success rates ranging from 78.6% to 96.2% against individual detectors,
significantly outperforming baseline paraphrasing methods. AuthorMist maintains
high semantic similarity (above 0.94) with the original text while successfully
evading detection. These results highlight limitations in current AI text
detection technologies and raise questions about the sustainability of the
detection-evasion arms race.
|
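The AuthorMist abstract describes an "API-as-reward" setup in which an external detector's output drives reinforcement learning. The sketch below shows the shape of such a reward function; `query_detector` and `semantic_similarity` are hypothetical placeholders, and the reward weighting is an assumption. The real detector APIs named above are not reproduced here.

```python
"""Sketch of the API-as-reward idea: any detector returning P(text is
AI-generated) can be wrapped as a reward signal. All callables below are
hypothetical stand-ins, not real service APIs."""

def query_detector(text: str) -> float:
    """Placeholder detector: returns P(text is AI-generated) in [0, 1]."""
    # A real implementation would call an external detection API here.
    return 0.5

def semantic_similarity(a: str, b: str) -> float:
    """Placeholder similarity in [0, 1]; a real system might use embeddings."""
    tokens_a, tokens_b = set(a.split()), set(b.split())
    return len(tokens_a & tokens_b) / max(len(tokens_a | tokens_b), 1)

def reward(original: str, paraphrase: str, sim_weight: float = 0.5) -> float:
    """Reward = evading the detector while staying close in meaning."""
    evasion = 1.0 - query_detector(paraphrase)        # high when judged human-written
    similarity = semantic_similarity(original, paraphrase)
    return (1.0 - sim_weight) * evasion + sim_weight * similarity

if __name__ == "__main__":
    src = "The model was trained on a large corpus of scientific articles."
    cand = "We trained the model using a large collection of scientific papers."
    print("reward:", round(reward(src, cand), 3))
```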
2503.08727 | Lucas Caccia | Lucas Caccia, Alan Ansell, Edoardo Ponti, Ivan Vuli\'c, Alessandro
Sordoni | Training Plug-n-Play Knowledge Modules with Deep Context Distillation | Preprint | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamically integrating new or rapidly evolving information after (Large)
Language Model pre-training remains challenging, particularly in low-data
scenarios or when dealing with private and specialized documents. In-context
learning and retrieval-augmented generation (RAG) face limitations, including
their high inference costs and their inability to capture global document
information. In this paper, we propose a way of modularizing knowledge by
training document-level Knowledge Modules (KMs). KMs are lightweight components
implemented as parameter-efficient LoRA modules, which are trained to store
information about new documents and can be easily plugged into models on
demand. We show that next-token prediction performs poorly as the training
objective for KMs. We instead propose Deep Context Distillation: we learn KMs
parameters such as to simulate hidden states and logits of a teacher that takes
the document in context. Our method outperforms standard next-token prediction
and pre-instruction training techniques, across two datasets. Finally, we
highlight synergies between KMs and retrieval-augmented generation.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:07:57 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Caccia",
"Lucas",
""
],
[
"Ansell",
"Alan",
""
],
[
"Ponti",
"Edoardo",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Sordoni",
"Alessandro",
""
]
] | TITLE: Training Plug-n-Play Knowledge Modules with Deep Context Distillation
ABSTRACT: Dynamically integrating new or rapidly evolving information after (Large)
Language Model pre-training remains challenging, particularly in low-data
scenarios or when dealing with private and specialized documents. In-context
learning and retrieval-augmented generation (RAG) face limitations, including
their high inference costs and their inability to capture global document
information. In this paper, we propose a way of modularizing knowledge by
training document-level Knowledge Modules (KMs). KMs are lightweight components
implemented as parameter-efficient LoRA modules, which are trained to store
information about new documents and can be easily plugged into models on
demand. We show that next-token prediction performs poorly as the training
objective for KMs. We instead propose Deep Context Distillation: we learn KMs
parameters such as to simulate hidden states and logits of a teacher that takes
the document in context. Our method outperforms standard next-token prediction
and pre-instruction training techniques, across two datasets. Finally, we
highlight synergies between KMs and retrieval-augmented generation.
|
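The Deep Context Distillation abstract says the knowledge-module parameters are trained "to simulate hidden states and logits of a teacher that takes the document in context." A minimal PyTorch-style sketch of such a loss is given below; the KL-plus-MSE combination, which hidden states are matched, and the weighting are assumptions for illustration, not the paper's exact objective.

```python
"""Minimal sketch of a context-distillation loss: match the student's logits
and hidden states to those of a teacher that saw the document in context.
Loss terms and weights are illustrative assumptions."""
import torch
import torch.nn.functional as F

def context_distillation_loss(student_logits, teacher_logits,
                              student_hidden, teacher_hidden,
                              alpha=1.0, beta=1.0):
    # KL divergence between output distributions (teacher is the target).
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Match intermediate representations with an MSE term.
    mse = F.mse_loss(student_hidden, teacher_hidden)
    return alpha * kl + beta * mse

if __name__ == "__main__":
    torch.manual_seed(0)
    batch, seq, vocab, dim = 2, 4, 10, 8
    s_logits, t_logits = torch.randn(batch, seq, vocab), torch.randn(batch, seq, vocab)
    s_hidden, t_hidden = torch.randn(batch, seq, dim), torch.randn(batch, seq, dim)
    print(context_distillation_loss(s_logits, t_logits, s_hidden, t_hidden))
```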
2503.08729 | Ishaan Malhi | Ishaan Malhi, Praneet Dutta, Ellie Talius, Sally Ma, Brendan Driscoll,
Krista Holden, Garima Pruthi, Arunachalam Narayanaswamy | Preserving Product Fidelity in Large Scale Image Recontextualization
with Diffusion Models | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a framework for high-fidelity product image recontextualization
using text-to-image diffusion models and a novel data augmentation pipeline.
This pipeline leverages image-to-video diffusion, in/outpainting & negatives to
create synthetic training data, addressing limitations of real-world data
collection for this task. Our method improves the quality and diversity of
generated images by disentangling product representations and enhancing the
model's understanding of product characteristics. Evaluation on the ABO dataset
and a private product dataset, using automated metrics and human assessment,
demonstrates the effectiveness of our framework in generating realistic and
compelling product visualizations, with implications for applications such as
e-commerce and virtual product showcasing.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:24:39 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Malhi",
"Ishaan",
""
],
[
"Dutta",
"Praneet",
""
],
[
"Talius",
"Ellie",
""
],
[
"Ma",
"Sally",
""
],
[
"Driscoll",
"Brendan",
""
],
[
"Holden",
"Krista",
""
],
[
"Pruthi",
"Garima",
""
],
[
"Naray... | TITLE: Preserving Product Fidelity in Large Scale Image Recontextualization
with Diffusion Models
ABSTRACT: We present a framework for high-fidelity product image recontextualization
using text-to-image diffusion models and a novel data augmentation pipeline.
This pipeline leverages image-to-video diffusion, in/outpainting & negatives to
create synthetic training data, addressing limitations of real-world data
collection for this task. Our method improves the quality and diversity of
generated images by disentangling product representations and enhancing the
model's understanding of product characteristics. Evaluation on the ABO dataset
and a private product dataset, using automated metrics and human assessment,
demonstrates the effectiveness of our framework in generating realistic and
compelling product visualizations, with implications for applications such as
e-commerce and virtual product showcasing.
|
2503.08731 | Seyyed Mohammad Sadegh Moosavi Khorzooghi | Seyyed Mohammad Sadegh Moosavi Khorzooghi, Poojitha Thota, Mohit
Singhal, Abolfazl Asudeh, Gautam Das, Shirin Nilizadeh | FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face
Obfuscation Methods | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The lack of a common platform and benchmark datasets for evaluating face
obfuscation methods has been a challenge, with every method being tested using
arbitrary experiments, datasets, and metrics. While prior work has demonstrated
that face recognition systems exhibit bias against some demographic groups,
there exists a substantial gap in our understanding regarding the fairness of
face obfuscation methods. Providing fair face obfuscation methods can ensure
equitable protection across diverse demographic groups, especially since they
can be used to preserve the privacy of vulnerable populations. To address these
gaps, this paper introduces a comprehensive framework, named FairDeFace,
designed to assess the adversarial robustness and fairness of face obfuscation
methods. The framework introduces a set of modules encompassing data
benchmarks, face detection and recognition algorithms, adversarial models,
utility detection models, and fairness metrics. FairDeFace serves as a
versatile platform where any face obfuscation method can be integrated,
allowing for rigorous testing and comparison with other state-of-the-art
methods. In its current implementation, FairDeFace incorporates 6 attacks and
several privacy, utility, and fairness metrics. Using FairDeFace, and by
conducting more than 500 experiments, we evaluated and compared the adversarial
robustness of seven face obfuscation methods. This extensive analysis led to
many interesting findings both in terms of the degree of robustness of existing
methods and their biases against some gender or racial groups. FairDeFace also
uses visualization of focused areas for both obfuscation and verification
attacks to show not only which areas are mostly changed in the obfuscation
process for some demographics, but also why they failed through focus area
comparison of obfuscation and verification.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:49:43 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Khorzooghi",
"Seyyed Mohammad Sadegh Moosavi",
""
],
[
"Thota",
"Poojitha",
""
],
[
"Singhal",
"Mohit",
""
],
[
"Asudeh",
"Abolfazl",
""
],
[
"Das",
"Gautam",
""
],
[
"Nilizadeh",
"Shirin",
""
]
] | TITLE: FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face
Obfuscation Methods
ABSTRACT: The lack of a common platform and benchmark datasets for evaluating face
obfuscation methods has been a challenge, with every method being tested using
arbitrary experiments, datasets, and metrics. While prior work has demonstrated
that face recognition systems exhibit bias against some demographic groups,
there exists a substantial gap in our understanding regarding the fairness of
face obfuscation methods. Providing fair face obfuscation methods can ensure
equitable protection across diverse demographic groups, especially since they
can be used to preserve the privacy of vulnerable populations. To address these
gaps, this paper introduces a comprehensive framework, named FairDeFace,
designed to assess the adversarial robustness and fairness of face obfuscation
methods. The framework introduces a set of modules encompassing data
benchmarks, face detection and recognition algorithms, adversarial models,
utility detection models, and fairness metrics. FairDeFace serves as a
versatile platform where any face obfuscation method can be integrated,
allowing for rigorous testing and comparison with other state-of-the-art
methods. In its current implementation, FairDeFace incorporates 6 attacks and
several privacy, utility, and fairness metrics. Using FairDeFace, and by
conducting more than 500 experiments, we evaluated and compared the adversarial
robustness of seven face obfuscation methods. This extensive analysis led to
many interesting findings both in terms of the degree of robustness of existing
methods and their biases against some gender or racial groups. FairDeFace also
uses visualization of focused areas for both obfuscation and verification
attacks to show not only which areas are mostly changed in the obfuscation
process for some demographics, but also why they failed through focus area
comparison of obfuscation and verification.
|
2503.08732 | Jiaqing Zhang | Yuanfang Ren, Andrea E. Davidson, Jiaqing Zhang, Miguel Contreras,
Ayush K. Patel, Michelle Gumz, Tezcan Ozrazgat-Baslanti, Parisa Rashidi, Azra
Bihorac | Quantifying Circadian Desynchrony in ICU Patients and Its Association
with Delirium | null | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background: Circadian desynchrony, characterized by the misalignment between
an individual's internal biological rhythms and external environmental cues,
significantly affects various physiological processes and health outcomes.
Quantifying circadian desynchrony often requires prolonged and frequent
monitoring, and currently, an easy tool for this purpose is missing.
Additionally, its association with the incidence of delirium has not been
clearly explored. Methods: A prospective observational study was carried out in
intensive care units (ICU) of a tertiary hospital. Circadian transcriptomics of
blood monocytes from 86 individuals were collected on two consecutive days,
although a second sample could not be obtained from all participants. Using two
public datasets composed of healthy volunteers, we replicated a model for
determining internal circadian time. We developed an approach to quantify
circadian desynchrony by comparing internal circadian time and external blood
collection time. We applied the model and quantified circadian desynchrony
index among ICU patients, and investigated its association with the incidence
of delirium. Results: The replicated model for determining internal circadian
time achieved comparable high accuracy. The quantified circadian desynchrony
index was significantly higher among critically ill ICU patients compared to
healthy subjects, with values of 10.03 hours vs 2.50-2.95 hours (p < 0.001).
Most ICU patients had a circadian desynchrony index greater than 9 hours.
Additionally, the index was lower in patients whose blood samples were drawn
after 3pm, with values of 5.00 hours compared to 10.01-10.90 hours in other
groups (p < 0.001)...
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 03:56:10 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Ren",
"Yuanfang",
""
],
[
"Davidson",
"Andrea E.",
""
],
[
"Zhang",
"Jiaqing",
""
],
[
"Contreras",
"Miguel",
""
],
[
"Patel",
"Ayush K.",
""
],
[
"Gumz",
"Michelle",
""
],
[
"Ozrazgat-Baslanti",
"Tezcan",
... | TITLE: Quantifying Circadian Desynchrony in ICU Patients and Its Association
with Delirium
ABSTRACT: Background: Circadian desynchrony, characterized by the misalignment between
an individual's internal biological rhythms and external environmental cues,
significantly affects various physiological processes and health outcomes.
Quantifying circadian desynchrony often requires prolonged and frequent
monitoring, and currently, an easy tool for this purpose is missing.
Additionally, its association with the incidence of delirium has not been
clearly explored. Methods: A prospective observational study was carried out in
intensive care units (ICU) of a tertiary hospital. Circadian transcriptomics of
blood monocytes from 86 individuals were collected on two consecutive days,
although a second sample could not be obtained from all participants. Using two
public datasets composed of healthy volunteers, we replicated a model for
determining internal circadian time. We developed an approach to quantify
circadian desynchrony by comparing internal circadian time and external blood
collection time. We applied the model and quantified circadian desynchrony
index among ICU patients, and investigated its association with the incidence
of delirium. Results: The replicated model for determining internal circadian
time achieved comparable high accuracy. The quantified circadian desynchrony
index was significantly higher among critically ill ICU patients compared to
healthy subjects, with values of 10.03 hours vs 2.50-2.95 hours (p < 0.001).
Most ICU patients had a circadian desynchrony index greater than 9 hours.
Additionally, the index was lower in patients whose blood samples were drawn
after 3pm, with values of 5.00 hours compared to 10.01-10.90 hours in other
groups (p < 0.001)...
|
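The circadian study quantifies desynchrony "by comparing internal circadian time and external blood collection time." Below is a hedged sketch of one way to compute such an index as a circular (mod 24 h) distance between the two clock times; whether the study folds the difference to [0, 12] h in exactly this way is an assumption.

```python
"""Sketch of a circadian desynchrony index as the circular distance (hours,
24 h clock) between predicted internal circadian time and external blood
collection time. The exact definition used in the study may differ."""

def circadian_desynchrony_index(internal_time_h: float, external_time_h: float) -> float:
    """Both times are clock hours in [0, 24); returns a distance in [0, 12]."""
    diff = abs(internal_time_h - external_time_h) % 24.0
    return min(diff, 24.0 - diff)

if __name__ == "__main__":
    # Internal clock predicts 2:00, blood drawn at 15:00 -> 11 hours apart.
    print(circadian_desynchrony_index(2.0, 15.0))   # 11.0
    # Internal 23:00 vs external 1:00 -> only 2 hours apart on a circular clock.
    print(circadian_desynchrony_index(23.0, 1.0))   # 2.0
```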
2503.08739 | Shilong Sang | Shilong Sang, Ke-Jia Chen, Zheng Liu | HeGMN: Heterogeneous Graph Matching Network for Learning Graph
Similarity | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph similarity learning (GSL), also referred to as graph matching in many
scenarios, is a fundamental problem in computer vision, pattern recognition,
and graph learning. However, previous GSL methods assume that graphs are
homogeneous and struggle to maintain their performance on heterogeneous graphs.
To address this problem, this paper proposes a Heterogeneous Graph Matching
Network (HeGMN), which is an end-to-end graph similarity learning framework
composed of a two-tier matching mechanism. Firstly, a heterogeneous graph
isomorphism network is proposed as the encoder, which reinvents graph
isomorphism network for heterogeneous graphs by perceiving different semantic
relationships during aggregation. Secondly, graph-level and node-level
matching modules are designed, both employing type-aligned matching principles.
The former conducts graph-level matching by node type alignment, and the latter
computes the interactions between the cross-graph nodes with the same type, thus
reducing noise interference and computational overhead. Finally, the
graph-level and node-level matching features are combined and fed into fully
connected layers for predicting graph similarity scores. In experiments, we
propose a heterogeneous graph resampling method to construct heterogeneous
graph pairs and define the corresponding heterogeneous graph edit distance,
filling the gap in missing datasets. Extensive experiments demonstrate that
HeGMN consistently achieves advanced performance on graph similarity prediction
across all datasets.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:36:35 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Sang",
"Shilong",
""
],
[
"Chen",
"Ke-Jia",
""
],
[
"liu",
"Zheng",
""
]
] | TITLE: HeGMN: Heterogeneous Graph Matching Network for Learning Graph
Similarity
ABSTRACT: Graph similarity learning (GSL), also referred to as graph matching in many
scenarios, is a fundamental problem in computer vision, pattern recognition,
and graph learning. However, previous GSL methods assume that graphs are
homogeneous and struggle to maintain their performance on heterogeneous graphs.
To address this problem, this paper proposes a Heterogeneous Graph Matching
Network (HeGMN), which is an end-to-end graph similarity learning framework
composed of a two-tier matching mechanism. Firstly, a heterogeneous graph
isomorphism network is proposed as the encoder, which reinvents graph
isomorphism network for heterogeneous graphs by perceiving different semantic
relationships during aggregation. Secondly, graph-level and node-level
matching modules are designed, both employing type-aligned matching principles.
The former conducts graph-level matching by node type alignment, and the latter
computes the interactions between the cross-graph nodes with the same type, thus
reducing noise interference and computational overhead. Finally, the
graph-level and node-level matching features are combined and fed into fully
connected layers for predicting graph similarity scores. In experiments, we
propose a heterogeneous graph resampling method to construct heterogeneous
graph pairs and define the corresponding heterogeneous graph edit distance,
filling the gap in missing datasets. Extensive experiments demonstrate that
HeGMN consistently achieves advanced performance on graph similarity prediction
across all datasets.
|
2503.08745 | Chao Zhou | Chao Zhou, Wei Pu, and Miguel Rodrigues | Neural Network for Blind Unmixing: a novel MatrixConv Unmixing (MCU)
Approach | null | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Hyperspectral image (HSI) unmixing is a challenging research problem that
tries to identify the constituent components, known as endmembers, and their
corresponding proportions, known as abundances, in the scene by analysing
images captured by hyperspectral cameras. Recently, many deep learning based
unmixing approaches have been proposed with the surge of machine learning
techniques, especially convolutional neural networks (CNN). However, these
methods face two notable challenges: 1. They frequently yield results lacking
physical significance, such as signatures corresponding to unknown or
non-existent materials. 2. CNNs, as general-purpose network structures, are not
explicitly tailored for unmixing tasks. In response to these concerns, our work
draws inspiration from double deep image prior (DIP) techniques and algorithm
unrolling, presenting a novel network structure that effectively addresses both
issues. Specifically, we first propose a MatrixConv Unmixing (MCU) approach for
endmember and abundance estimation, respectively, which can be solved via
certain iterative solvers. We then unroll these solvers to build two
sub-networks, endmember estimation DIP (UEDIP) and abundance estimation DIP
(UADIP), to generate the estimation of endmember and abundance, respectively.
The overall network is constructed by assembling these two sub-networks. In
order to generate meaningful unmixing results, we also propose a composite loss
function. To further improve the unmixing quality, we also explicitly add a
regularizer for endmember and abundance estimation, respectively. The proposed
methods are tested for effectiveness on both synthetic and real datasets.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 09:41:57 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhou",
"Chao",
""
],
[
"Pu",
"Wei",
""
],
[
"Rodrigues",
"Miguel",
""
]
] | TITLE: Neural Network for Blind Unmixing: a novel MatrixConv Unmixing (MCU)
Approach
ABSTRACT: Hyperspectral image (HSI) unmixing is a challenging research problem that
tries to identify the constituent components, known as endmembers, and their
corresponding proportions, known as abundances, in the scene by analysing
images captured by hyperspectral cameras. Recently, many deep learning based
unmixing approaches have been proposed with the surge of machine learning
techniques, especially convolutional neural networks (CNN). However, these
methods face two notable challenges: 1. They frequently yield results lacking
physical significance, such as signatures corresponding to unknown or
non-existent materials. 2. CNNs, as general-purpose network structures, are not
explicitly tailored for unmixing tasks. In response to these concerns, our work
draws inspiration from double deep image prior (DIP) techniques and algorithm
unrolling, presenting a novel network structure that effectively addresses both
issues. Specifically, we first propose a MatrixConv Unmixing (MCU) approach for
endmember and abundance estimation, respectively, which can be solved via
certain iterative solvers. We then unroll these solvers to build two
sub-networks, endmember estimation DIP (UEDIP) and abundance estimation DIP
(UADIP), to generate the estimation of endmember and abundance, respectively.
The overall network is constructed by assembling these two sub-networks. In
order to generate meaningful unmixing results, we also propose a composite loss
function. To further improve the unmixing quality, we also explicitly add a
regularizer for endmember and abundance estimation, respectively. The proposed
methods are tested for effectiveness on both synthetic and real datasets.
|
2503.08750 | Yuhan Zhi | Yuhan Zhi, Xiaoyu Zhang, Longtian Wang, Shumin Jiang, Shiqing Ma,
Xiaohong Guan, Chao Shen | Exposing Product Bias in LLM Investment Recommendation | null | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs), as a new generation of recommendation engines,
possess powerful summarization and data analysis capabilities, surpassing
traditional recommendation systems in both scope and performance. One promising
application is investment recommendation. In this paper, we reveal a novel
product bias in LLM investment recommendation, where LLMs exhibit systematic
preferences for specific products. Such preferences can subtly influence user
investment decisions, potentially leading to inflated valuations of products
and financial bubbles, posing risks to both individual investors and market
stability. To comprehensively study the product bias, we develop an automated
pipeline to create a dataset of 567,000 samples across five asset classes
(stocks, mutual funds, cryptocurrencies, savings, and portfolios). With this
dataset, we present the first study on product bias in LLM investment
recommendations. Our findings reveal that LLMs exhibit clear product
preferences, such as certain stocks (e.g., `AAPL' from Apple and `MSFT' from
Microsoft). Notably, this bias persists even after applying debiasing
techniques. We urge AI researchers to take heed of the product bias in LLM
investment recommendations and its implications, ensuring fairness and security
in the digital space and market.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 13:10:00 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhi",
"Yuhan",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Wang",
"Longtian",
""
],
[
"Jiang",
"Shumin",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Guan",
"Xiaohong",
""
],
[
"Shen",
"Chao",
""
]
] | TITLE: Exposing Product Bias in LLM Investment Recommendation
ABSTRACT: Large language models (LLMs), as a new generation of recommendation engines,
possess powerful summarization and data analysis capabilities, surpassing
traditional recommendation systems in both scope and performance. One promising
application is investment recommendation. In this paper, we reveal a novel
product bias in LLM investment recommendation, where LLMs exhibit systematic
preferences for specific products. Such preferences can subtly influence user
investment decisions, potentially leading to inflated valuations of products
and financial bubbles, posing risks to both individual investors and market
stability. To comprehensively study the product bias, we develop an automated
pipeline to create a dataset of 567,000 samples across five asset classes
(stocks, mutual funds, cryptocurrencies, savings, and portfolios). With this
dataset, we present the first study on product bias in LLM investment
recommendations. Our findings reveal that LLMs exhibit clear product
preferences, such as certain stocks (e.g., `AAPL' from Apple and `MSFT' from
Microsoft). Notably, this bias persists even after applying debiasing
techniques. We urge AI researchers to take heed of the product bias in LLM
investment recommendations and its implications, ensuring fairness and security
in the digital space and market.
|
2503.08759 | Nouhaila Innan | Siddhant Dutta, Nouhaila Innan, Khadijeh Najafi, Sadok Ben Yahia,
Muhammad Shafique | QUIET-SR: Quantum Image Enhancement Transformer for Single Image
Super-Resolution | 10 figures, 3 pages | null | null | null | quant-ph cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Single-Image Super-Resolution (SISR) using deep
learning have significantly improved image restoration quality. However, the
high computational cost of processing high-resolution images due to the large
number of parameters in classical models, along with the scalability challenges
of quantum algorithms for image processing, remains a major obstacle. In this
paper, we propose the Quantum Image Enhancement Transformer for
Super-Resolution (QUIET-SR), a hybrid framework that extends the Swin
transformer architecture with a novel shifted quantum window attention
mechanism, built upon variational quantum neural networks. QUIET-SR effectively
captures complex residual mappings between low-resolution and high-resolution
images, leveraging quantum attention mechanisms to enhance feature extraction
and image restoration while requiring a minimal number of qubits, making it
suitable for the Noisy Intermediate-Scale Quantum (NISQ) era. We evaluate our
framework on MNIST (30.24 PSNR, 0.989 SSIM), FashionMNIST (29.76 PSNR, 0.976
SSIM) and the MedMNIST dataset collection, demonstrating that QUIET-SR achieves
PSNR and SSIM scores comparable to state-of-the-art methods while using fewer
parameters. These findings highlight the potential of scalable variational
quantum machine learning models for SISR, marking a step toward practical
quantum-enhanced image super-resolution.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 16:06:16 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Dutta",
"Siddhant",
""
],
[
"Innan",
"Nouhaila",
""
],
[
"Najafi",
"Khadijeh",
""
],
[
"Yahia",
"Sadok Ben",
""
],
[
"Shafique",
"Muhammad",
""
]
] | TITLE: QUIET-SR: Quantum Image Enhancement Transformer for Single Image
Super-Resolution
ABSTRACT: Recent advancements in Single-Image Super-Resolution (SISR) using deep
learning have significantly improved image restoration quality. However, the
high computational cost of processing high-resolution images due to the large
number of parameters in classical models, along with the scalability challenges
of quantum algorithms for image processing, remains a major obstacle. In this
paper, we propose the Quantum Image Enhancement Transformer for
Super-Resolution (QUIET-SR), a hybrid framework that extends the Swin
transformer architecture with a novel shifted quantum window attention
mechanism, built upon variational quantum neural networks. QUIET-SR effectively
captures complex residual mappings between low-resolution and high-resolution
images, leveraging quantum attention mechanisms to enhance feature extraction
and image restoration while requiring a minimal number of qubits, making it
suitable for the Noisy Intermediate-Scale Quantum (NISQ) era. We evaluate our
framework on MNIST (30.24 PSNR, 0.989 SSIM), FashionMNIST (29.76 PSNR, 0.976
SSIM) and the MedMNIST dataset collection, demonstrating that QUIET-SR achieves
PSNR and SSIM scores comparable to state-of-the-art methods while using fewer
parameters. These findings highlight the potential of scalable variational
quantum machine learning models for SISR, marking a step toward practical
quantum-enhanced image super-resolution.
|
2503.08760 | Keyue Jiang | Keyue Jiang, Bohan Tang, Xiaowen Dong, Laura Toni | Heterogeneous Graph Structure Learning through the Lens of
Data-generating Processes | null | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Inferring the graph structure from observed data is a key task in graph
machine learning to capture the intrinsic relationship between data entities.
While significant advancements have been made in learning the structure of
homogeneous graphs, many real-world graphs exhibit heterogeneous patterns where
nodes and edges have multiple types. This paper fills this gap by introducing
the first approach for heterogeneous graph structure learning (HGSL). To this
end, we first propose a novel statistical model for the data-generating process
(DGP) of heterogeneous graph data, namely hidden Markov networks for
heterogeneous graphs (H2MN). Then we formalize HGSL as a maximum a posteriori
estimation problem parameterized by such DGP and derive an alternating
optimization method to obtain a solution together with a theoretical
justification of the optimization conditions. Finally, we conduct extensive
experiments on both synthetic and real-world datasets to demonstrate that our
proposed method excels in learning structure on heterogeneous graphs in terms
of edge type identification and edge weight recovery.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 16:14:53 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Jiang",
"Keyue",
""
],
[
"Tang",
"Bohan",
""
],
[
"Dong",
"Xiaowen",
""
],
[
"Toni",
"Laura",
""
]
] | TITLE: Heterogeneous Graph Structure Learning through the Lens of
Data-generating Processes
ABSTRACT: Inferring the graph structure from observed data is a key task in graph
machine learning to capture the intrinsic relationship between data entities.
While significant advancements have been made in learning the structure of
homogeneous graphs, many real-world graphs exhibit heterogeneous patterns where
nodes and edges have multiple types. This paper fills this gap by introducing
the first approach for heterogeneous graph structure learning (HGSL). To this
end, we first propose a novel statistical model for the data-generating process
(DGP) of heterogeneous graph data, namely hidden Markov networks for
heterogeneous graphs (H2MN). Then we formalize HGSL as a maximum a posteriori
estimation problem parameterized by such DGP and derive an alternating
optimization method to obtain a solution together with a theoretical
justification of the optimization conditions. Finally, we conduct extensive
experiments on both synthetic and real-world datasets to demonstrate that our
proposed method excels in learning structure on heterogeneous graphs in terms
of edge type identification and edge weight recovery.
|
2503.08764 | John Yang | Nithin Parsan, David J. Yang, John J. Yang | Towards Interpretable Protein Structure Prediction with Sparse
Autoencoders | Published at the GEMBio ICLR 2025 Workshop | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Protein language models have revolutionized structure prediction, but their
nonlinear nature obscures how sequence representations inform structure
prediction. While sparse autoencoders (SAEs) offer a path to interpretability
here by learning linear representations in high-dimensional space, their
application has been limited to smaller protein language models unable to
perform structure prediction. In this work, we make two key advances: (1) we
scale SAEs to ESM2-3B, the base model for ESMFold, enabling mechanistic
interpretability of protein structure prediction for the first time, and (2) we
adapt Matryoshka SAEs for protein language models, which learn hierarchically
organized features by forcing nested groups of latents to reconstruct inputs
independently. We demonstrate that our Matryoshka SAEs achieve comparable or
better performance than standard architectures. Through comprehensive
evaluations, we show that SAEs trained on ESM2-3B significantly outperform
those trained on smaller models for both biological concept discovery and
contact map prediction. Finally, we present an initial case study demonstrating
how our approach enables targeted steering of ESMFold predictions, increasing
structure solvent accessibility while fixing the input sequence. To facilitate
further investigation by the broader community, we open-source our code,
dataset, pretrained models https://github.com/johnyang101/reticular-sae , and
visualizer https://sae.reticular.ai .
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 17:57:29 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Parsan",
"Nithin",
""
],
[
"Yang",
"David J.",
""
],
[
"Yang",
"John J.",
""
]
] | TITLE: Towards Interpretable Protein Structure Prediction with Sparse
Autoencoders
ABSTRACT: Protein language models have revolutionized structure prediction, but their
nonlinear nature obscures how sequence representations inform structure
prediction. While sparse autoencoders (SAEs) offer a path to interpretability
here by learning linear representations in high-dimensional space, their
application has been limited to smaller protein language models unable to
perform structure prediction. In this work, we make two key advances: (1) we
scale SAEs to ESM2-3B, the base model for ESMFold, enabling mechanistic
interpretability of protein structure prediction for the first time, and (2) we
adapt Matryoshka SAEs for protein language models, which learn hierarchically
organized features by forcing nested groups of latents to reconstruct inputs
independently. We demonstrate that our Matryoshka SAEs achieve comparable or
better performance than standard architectures. Through comprehensive
evaluations, we show that SAEs trained on ESM2-3B significantly outperform
those trained on smaller models for both biological concept discovery and
contact map prediction. Finally, we present an initial case study demonstrating
how our approach enables targeted steering of ESMFold predictions, increasing
structure solvent accessibility while fixing the input sequence. To facilitate
further investigation by the broader community, we open-source our code,
dataset, pretrained models https://github.com/johnyang101/reticular-sae , and
visualizer https://sae.reticular.ai .
|
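The sparse-autoencoder abstract describes Matryoshka SAEs as "forcing nested groups of latents to reconstruct inputs independently." The PyTorch sketch below illustrates that nested-prefix reconstruction objective; the group sizes, L1 sparsity penalty, and equal weighting of groups are assumptions for illustration, not the authors' released implementation.

```python
"""Minimal sketch of a Matryoshka-style sparse autoencoder objective: nested
prefixes of the latent code each reconstruct the input through a shared
decoder. Group boundaries, sparsity penalty, and weights are assumptions."""
import torch
import torch.nn as nn

class MatryoshkaSAE(nn.Module):
    def __init__(self, d_model=64, d_latent=256, groups=(32, 64, 128, 256)):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)
        self.groups = groups   # nested prefix sizes of the latent vector

    def forward(self, x):
        z = torch.relu(self.encoder(x))
        losses = []
        for g in self.groups:
            # Zero out all latents beyond the first g, then decode independently.
            z_prefix = torch.zeros_like(z)
            z_prefix[..., :g] = z[..., :g]
            recon = self.decoder(z_prefix)
            losses.append(((recon - x) ** 2).mean())
        recon_loss = torch.stack(losses).mean()
        sparsity = z.abs().mean()
        return recon_loss + 1e-3 * sparsity

if __name__ == "__main__":
    torch.manual_seed(0)
    sae = MatryoshkaSAE()
    x = torch.randn(8, 64)     # a batch of residual-stream activations
    print("loss:", sae(x).item())
```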
2503.08798 | Rodrigo Mira | Minsu Kim, Rodrigo Mira, Honglie Chen, Stavros Petridis, Maja Pantic | Contextual Speech Extraction: Leveraging Textual History as an Implicit
Cue for Target Speech Extraction | Accepted to ICASSP 2025 | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate a novel approach for Target Speech Extraction
(TSE), which relies solely on textual context to extract the target speech. We
refer to this task as Contextual Speech Extraction (CSE). Unlike traditional
TSE methods that rely on pre-recorded enrollment utterances, video of the
target speaker's face, spatial information, or other explicit cues to identify
the target stream, our proposed method requires only a few turns of previous
dialogue (or monologue) history. This approach is naturally feasible in mobile
messaging environments where voice recordings are typically preceded by textual
dialogue that can be leveraged implicitly. We present three CSE models and
analyze their performances on three datasets. Through our experiments, we
demonstrate that even when the model relies purely on dialogue history, it can
achieve over 90% accuracy in identifying the correct target stream with only
two previous dialogue turns. Furthermore, we show that by leveraging both
textual context and enrollment utterances as cues during training, we further
enhance our model's flexibility and effectiveness, allowing us to use either
cue during inference, or combine both for improved performance. Samples and
code available on https://miraodasilva.github.io/cse-project-page .
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:26:10 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Kim",
"Minsu",
""
],
[
"Mira",
"Rodrigo",
""
],
[
"Chen",
"Honglie",
""
],
[
"Petridis",
"Stavros",
""
],
[
"Pantic",
"Maja",
""
]
] | TITLE: Contextual Speech Extraction: Leveraging Textual History as an Implicit
Cue for Target Speech Extraction
ABSTRACT: In this paper, we investigate a novel approach for Target Speech Extraction
(TSE), which relies solely on textual context to extract the target speech. We
refer to this task as Contextual Speech Extraction (CSE). Unlike traditional
TSE methods that rely on pre-recorded enrollment utterances, video of the
target speaker's face, spatial information, or other explicit cues to identify
the target stream, our proposed method requires only a few turns of previous
dialogue (or monologue) history. This approach is naturally feasible in mobile
messaging environments where voice recordings are typically preceded by textual
dialogue that can be leveraged implicitly. We present three CSE models and
analyze their performances on three datasets. Through our experiments, we
demonstrate that even when the model relies purely on dialogue history, it can
achieve over 90% accuracy in identifying the correct target stream with only
two previous dialogue turns. Furthermore, we show that by leveraging both
textual context and enrollment utterances as cues during training, we further
enhance our model's flexibility and effectiveness, allowing us to use either
cue during inference, or combine both for improved performance. Samples and
code available on https://miraodasilva.github.io/cse-project-page .
|
2503.08801 | Zixuan Liang | Zixuan Liang | Enhanced Estimation Techniques for Certified Radii in Randomized
Smoothing | IEEE The 8th International Conference on Artificial Intelligence and
Big Data (ICAIBD 2025) | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | This paper presents novel methods for estimating certified radii in
randomized smoothing, a technique crucial for certifying the robustness of
neural networks against adversarial perturbations. Our proposed techniques
significantly improve the estimation of certified test-set accuracy by providing
tighter bounds on the certified radii. We introduce advanced algorithms for
both discrete and continuous domains, demonstrating their effectiveness on
CIFAR-10 and ImageNet datasets. The new methods show considerable improvements
over existing approaches, particularly in reducing discrepancies in certified
radii estimates. We also explore the impact of various hyperparameters,
including sample size, standard deviation, and temperature, on the performance
of these methods. Our findings highlight the potential for more efficient
certification processes and pave the way for future research on tighter
confidence sequences and improved theoretical frameworks. The study concludes
with a discussion of potential future directions, including enhanced estimation
techniques for discrete domains and further theoretical advancements to bridge
the gap between empirical and theoretical performance in randomized smoothing.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:30:47 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Liang",
"Zixuan",
""
]
] | TITLE: Enhanced Estimation Techniques for Certified Radii in Randomized
Smoothing
ABSTRACT: This paper presents novel methods for estimating certified radii in
randomized smoothing, a technique crucial for certifying the robustness of
neural networks against adversarial perturbations. Our proposed techniques
significantly improve the estimation of certified test-set accuracy by providing
tighter bounds on the certified radii. We introduce advanced algorithms for
both discrete and continuous domains, demonstrating their effectiveness on
CIFAR-10 and ImageNet datasets. The new methods show considerable improvements
over existing approaches, particularly in reducing discrepancies in certified
radii estimates. We also explore the impact of various hyperparameters,
including sample size, standard deviation, and temperature, on the performance
of these methods. Our findings highlight the potential for more efficient
certification processes and pave the way for future research on tighter
confidence sequences and improved theoretical frameworks. The study concludes
with a discussion of potential future directions, including enhanced estimation
techniques for discrete domains and further theoretical advancements to bridge
the gap between empirical and theoretical performance in randomized smoothing.
|
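The randomized-smoothing abstract above is about tightening certified-radius estimates. For context, the sketch below shows the standard Monte Carlo construction such work typically builds on (a Clopper-Pearson lower bound on the top-class probability mapped through the Gaussian quantile). This is the conventional baseline, not the paper's improved estimators, and the confidence level and abstain rule are illustrative choices.

```python
"""Baseline certified-radius estimate for randomized smoothing (standard
construction, not the paper's improved estimators)."""
from scipy.stats import beta, norm

def certified_radius(n_top: int, n_total: int, sigma: float, alpha: float = 0.001) -> float:
    """n_top: votes for the predicted class among n_total noisy samples.
    Returns a certified L2 radius, or 0.0 if certification fails."""
    if n_top == 0:
        return 0.0
    # One-sided Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = beta.ppf(alpha, n_top, n_total - n_top + 1)
    if p_lower <= 0.5:
        return 0.0   # cannot certify that the top class wins under noise
    return sigma * norm.ppf(p_lower)

if __name__ == "__main__":
    # 990 of 1000 noisy samples agree with the prediction, sigma = 0.5.
    print(round(certified_radius(990, 1000, sigma=0.5), 4))
```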
2503.08803 | Johan Rodriguez | Johan R. Portela and Nicol\'as Perez and Rub\'en Manrique | ESNLIR: A Spanish Multi-Genre Dataset with Causal Relationships | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Natural Language Inference (NLI), also known as Recognizing Textual
Entailment (RTE), serves as a crucial area within the domain of Natural
Language Processing (NLP). This area fundamentally empowers machines to discern
semantic relationships between assorted sections of text. Even though
considerable work has been executed for the English language, it has been
observed that efforts for the Spanish language are relatively sparse. Keeping
this in view, this paper focuses on generating a multi-genre Spanish dataset
for NLI, ESNLIR, particularly accounting for causal relationships. A
preliminary baseline has been conceptualized and subjected to an evaluation,
leveraging models drawn from the BERT family. The findings signify that the
enrichment of genres essentially contributes to the enrichment of the model's
capability to generalize.
The code, notebooks, and full datasets for these experiments are available at:
https://zenodo.org/records/15002575. If you are interested only in the dataset
you can find it here: https://zenodo.org/records/15002371.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:32:16 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Portela",
"Johan R.",
""
],
[
"Perez",
"Nicolás",
""
],
[
"Manrique",
"Rubén",
""
]
] | TITLE: ESNLIR: A Spanish Multi-Genre Dataset with Causal Relationships
ABSTRACT: Natural Language Inference (NLI), also known as Recognizing Textual
Entailment (RTE), serves as a crucial area within the domain of Natural
Language Processing (NLP). This area fundamentally empowers machines to discern
semantic relationships between assorted sections of text. Even though
considerable work has been executed for the English language, it has been
observed that efforts for the Spanish language are relatively sparse. Keeping
this in view, this paper focuses on generating a multi-genre Spanish dataset
for NLI, ESNLIR, particularly accounting for causal relationships. A
preliminary baseline has been conceptualized and subjected to an evaluation,
leveraging models drawn from the BERT family. The findings signify that the
enrichment of genres essentially contributes to the enrichment of the model's
capability to generalize.
The code, notebooks, and full datasets for these experiments are available at:
https://zenodo.org/records/15002575. If you are interested only in the dataset
you can find it here: https://zenodo.org/records/15002371.
|
2503.08805 | Mikey Shechter | Mikey Shechter and Yair Carmon | Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce Filter Like You Test (FLYT), a method for curating large-scale
vision-language datasets that learns the usefulness of each data point as a
pretraining example. FLYT trains a scoring model that learns to weigh each
example using gradient signals from downstream tasks training sets. Using the
same training methodology, we develop Mixing-FLYT (M-FLYT), which takes the
per-example scores generated by different scoring methods and learns to unify
them into a single score. Our training methodology naturally produces a
distribution over the training examples, which we leverage through Soft Cap
Sampling (SCS), a strategy for obtaining a filtered pretraining dataset from
per-example probabilities that samples examples while preventing
over-representation through a repetition penalty. Using all three methods, we
achieve 40.1% ImageNet zero-shot accuracy on the DataComp medium scale
filtering benchmark, a 1.9% absolute accuracy increase over all previous
results and a 5.5% increase over results that -- like us -- use only public
resources.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:34:12 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Shechter",
"Mikey",
""
],
[
"Carmon",
"Yair",
""
]
] | TITLE: Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining
ABSTRACT: We introduce Filter Like You Test (FLYT), a method for curating large-scale
vision-language datasets that learns the usefulness of each data point as a
pretraining example. FLYT trains a scoring model that learns to weigh each
example using gradient signals from downstream tasks training sets. Using the
same training methodology, we develop Mixing-FLYT (M-FLYT), which takes the
per-example scores generated by different scoring methods and learns to unify
them into a single score. Our training methodology naturally produces a
distribution over the training examples, which we leverage through Soft Cap
Sampling (SCS), a strategy for obtaining a filtered pretraining dataset from
per-example probabilities that samples examples while preventing
over-representation through a repetition penalty. Using all three methods, we
achieve 40.1% ImageNet zero-shot accuracy on the DataComp medium scale
filtering benchmark, a 1.9% absolute accuracy increase over all previous
results and a 5.5% increase over results that -- like us -- use only public
resources.
|
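The FLYT abstract introduces Soft Cap Sampling, which samples examples from per-example probabilities "while preventing over-representation through a repetition penalty." The sketch below illustrates that general idea: each draw damps the drawn example's weight. The penalty value, target size, and exact update rule are assumptions, not the paper's procedure.

```python
"""Sketch of soft-cap sampling: draw examples in proportion to per-example
probabilities, multiplying each drawn example's weight by a repetition
penalty so no single example dominates. Parameter values are assumptions."""
import numpy as np

def soft_cap_sample(probs, target_size, repetition_penalty=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.asarray(probs, dtype=float).copy()
    selected = []
    for _ in range(target_size):
        p = weights / weights.sum()
        idx = rng.choice(len(weights), p=p)
        selected.append(int(idx))
        weights[idx] *= repetition_penalty   # damp repeated picks
    return selected

if __name__ == "__main__":
    scores = [0.9, 0.05, 0.03, 0.02]          # one example dominates the raw scores
    picks = soft_cap_sample(scores, target_size=10)
    print("picked indices:", picks)           # index 0 still frequent, but capped
```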
2503.08810 | Daniel Abou-Ras | Daniel Abou-Ras and Matthias Maiberg | Recombination velocities at grain boundaries in solar-cell absorbers --
revisited | 19 pages, 6 + 3 figures | null | null | null | cond-mat.mtrl-sci physics.app-ph | http://creativecommons.org/licenses/by/4.0/ | The present work revisits the recombination velocities ($s_{\mathrm{GB}}$) of
minority-charge carriers determined at grain boundaries in polycrystalline
absorber materials for solar cells. The equations describing $s_{\mathrm{GB}}$
as well as the barriers for electrons and holes were derived. It is shown that
for given net-doping density and absolute temperature, the experimentally
determined recombination velocity of a specific grain boundary depends only on
the excess-charge density at this planar defect as well as on the prefactor
$s_{\mathrm{GB,0}}$ describing the nonradiative recombination. Value ranges for
these two quantities can be determined for any measured $s_{\mathrm{GB}}$
value. When analyzing $s_{\mathrm{GB}}$ datasets acquired on various
(Ag,Cu)(In,Ga)Se$_2$ and microcrystalline Si absorbers, it is apparent that
both the excess-charge density and the prefactor $s_{\mathrm{GB,0}}$ remain
within about the same orders of magnitude for all grain boundaries analyzed in
a specific absorber. The broad range of the recombination velocities over
several orders of magnitude indicates upward as well as downward band bending, and
the band-bending values are on the order of several $\pm$10 meV for all
materials analyzed.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:41:58 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Abou-Ras",
"Daniel",
""
],
[
"Maiberg",
"Matthias",
""
]
] | TITLE: Recombination velocities at grain boundaries in solar-cell absorbers --
revisited
ABSTRACT: The present work revisits the recombination velocities ($s_{\mathrm{GB}}$) of
minority-charge carriers determined at grain boundaries in polycrystalline
absorber materials for solar cells. The equations describing $s_{\mathrm{GB}}$
as well as the barriers for electrons and holes were derived. It is shown that
for given net-doping density and absolute temperature, the experimentally
determined recombination velocity of a specific grain boundary depends only on
the excess-charge density at this planar defect as well as on the prefactor
$s_{\mathrm{GB,0}}$ describing the nonradiative recombination. Value ranges for
these two quantities can be determined for any measured $s_{\mathrm{GB}}$
value. When analyzing $s_{\mathrm{GB}}$ datasets acquired on various
(Ag,Cu)(In,Ga)Se$_2$ and microcrystalline Si absorbers, it is apparent that
both the excess-charge density and the prefactor $s_{\mathrm{GB,0}}$ remain
within about the same orders of magnitude for all grain boundaries analyzed in
a specific absorber. The broad range of the recombination velocities over
several orders of magnitude indicates upward as well as downward band bending, and
the band-bending values are on the order of several $\pm$10 meV for all
materials analyzed.
|
2503.08819 | Afsana Ahsan Jeny | Md baharul Islam, Afsana Ahsan Jeny | Residual Learning and Filtering Networks for End-to-End Lossless Video
Compression | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Existing learning-based video compression methods still face challenges
related to inaccurate motion estimates and inadequate motion compensation
structures. These issues result in compression errors and a suboptimal
rate-distortion trade-off. To address these challenges, this work presents an
end-to-end video compression method that incorporates several key operations.
Specifically, we propose an autoencoder-type network with a residual skip
connection to efficiently compress motion information. Additionally, we design
motion vector and residual frame filtering networks to mitigate compression
errors in the video compression system. To improve the effectiveness of the
motion compensation network, we utilize powerful nonlinear transforms, such as
the Parametric Rectified Linear Unit (PReLU), to delve deeper into the motion
compensation architecture. Furthermore, a buffer is introduced to fine-tune the
previous reference frames, thereby enhancing the reconstructed frame quality.
These modules are combined with a carefully designed loss function that
assesses the trade-off and enhances the overall video quality of the decoded
output. Experimental results showcase the competitive performance of our method
on various datasets, including HEVC (sequences B, C, and D), UVG, VTL, and
MCL-JCV. The proposed approach tackles the challenges of accurate motion
estimation and motion compensation in video compression, and the results
highlight its competitive performance compared to existing methods.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 18:51:36 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Islam",
"Md baharul",
""
],
[
"Jeny",
"Afsana Ahsan",
""
]
] | TITLE: Residual Learning and Filtering Networks for End-to-End Lossless Video
Compression
ABSTRACT: Existing learning-based video compression methods still face challenges
related to inaccurate motion estimates and inadequate motion compensation
structures. These issues result in compression errors and a suboptimal
rate-distortion trade-off. To address these challenges, this work presents an
end-to-end video compression method that incorporates several key operations.
Specifically, we propose an autoencoder-type network with a residual skip
connection to efficiently compress motion information. Additionally, we design
motion vector and residual frame filtering networks to mitigate compression
errors in the video compression system. To improve the effectiveness of the
motion compensation network, we utilize powerful nonlinear transforms, such as
the Parametric Rectified Linear Unit (PReLU), to delve deeper into the motion
compensation architecture. Furthermore, a buffer is introduced to fine-tune the
previous reference frames, thereby enhancing the reconstructed frame quality.
These modules are combined with a carefully designed loss function that
assesses the trade-off and enhances the overall video quality of the decoded
output. Experimental results showcase the competitive performance of our method
on various datasets, including HEVC (sequences B, C, and D), UVG, VTL, and
MCL-JCV. The proposed approach tackles the challenges of accurate motion
estimation and motion compensation in video compression, and the results
highlight its competitive performance compared to existing methods.
|
2503.08829 | Ivan Sabolic | Ivan Saboli\'c, Matej Grci\'c, Sini\v{s}a \v{S}egvi\'c | Seal Your Backdoor with Variational Defense | null | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose VIBE, a model-agnostic framework that trains classifiers resilient
to backdoor attacks. The key concept behind our approach is to treat malicious
inputs and corrupted labels from the training dataset as observed random
variables, while the actual clean labels are latent. VIBE then recovers the
corresponding latent clean label posterior through variational inference. The
resulting training procedure follows the expectation-maximization (EM)
algorithm. The E-step infers the clean pseudolabels by solving an
entropy-regularized optimal transport problem, while the M-step updates the
classifier parameters via gradient descent. Being modular, VIBE can seamlessly
integrate with recent advancements in self-supervised representation learning,
which enhance its ability to resist backdoor attacks. We experimentally
validate the method's effectiveness against contemporary backdoor attacks on
standard datasets, a large-scale setup with 1$k$ classes, and a dataset
poisoned with multiple attacks. VIBE consistently outperforms previous defenses
across all tested scenarios.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 19:08:31 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Sabolić",
"Ivan",
""
],
[
"Grcić",
"Matej",
""
],
[
"Šegvić",
"Siniša",
""
]
] | TITLE: Seal Your Backdoor with Variational Defense
ABSTRACT: We propose VIBE, a model-agnostic framework that trains classifiers resilient
to backdoor attacks. The key concept behind our approach is to treat malicious
inputs and corrupted labels from the training dataset as observed random
variables, while the actual clean labels are latent. VIBE then recovers the
corresponding latent clean label posterior through variational inference. The
resulting training procedure follows the expectation-maximization (EM)
algorithm. The E-step infers the clean pseudolabels by solving an
entropy-regularized optimal transport problem, while the M-step updates the
classifier parameters via gradient descent. Being modular, VIBE can seamlessly
integrate with recent advancements in self-supervised representation learning,
which enhance its ability to resist backdoor attacks. We experimentally
validate the method's effectiveness against contemporary backdoor attacks on
standard datasets, a large-scale setup with 1$k$ classes, and a dataset
poisoned with multiple attacks. VIBE consistently outperforms previous defenses
across all tested scenarios.
|
2503.08834 | Ali Shamooni Pour Dezfouli | Ali Shamooni and Oliver T. Stein and Andreas Kronenburg | Super-resolution of turbulent velocity and scalar fields using different
scalar distributions | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, sub-grid models for turbulent mixing have been developed by
data-driven methods for large eddy simulation (LES). Super-resolution is a
data-driven deconvolution technique in which deep convolutional neural networks
are trained using direct numerical simulation (DNS) data to learn mappings
between the input data from a low resolution domain to the super-resolved high
resolution output domain. While the technique has been a great success in
a-priori tests, the assessment of its generalization capabilities is required
for further a-posteriori applications. In this study we assess the
generalization capability of a super-resolution generative adversarial network
(GAN) in reconstructing scalars with different distributions. Forced turbulence
mixing DNS data with a fixed Reynolds number but different bulk scalar
distributions are generated and used as training/testing datasets. The results
show that the velocity vector field can be reconstructed well, but the model
fails to super-resolve the scalars from out-of-sample distributions. Including
two extreme mixture fraction distributions, namely double Pareto and
semi-Gaussian, in the training dataset significantly improves the performance
of the model, not only for those distributions, but also for previously unseen
bimodal distributions.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 19:15:48 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Shamooni",
"Ali",
""
],
[
"Stein",
"Oliver T.",
""
],
[
"Kronenburg",
"Andreas",
""
]
] | TITLE: Super-resolution of turbulent velocity and scalar fields using different
scalar distributions
ABSTRACT: In recent years, sub-grid models for turbulent mixing have been developed by
data-driven methods for large eddy simulation (LES). Super-resolution is a
data-driven deconvolution technique in which deep convolutional neural networks
are trained using direct numerical simulation (DNS) data to learn mappings
between the input data from a low resolution domain to the super-resolved high
resolution output domain. While the technique has been a great success in
a-priori tests, the assessment of its generalization capabilities is required
for further a-posteriori applications. In this study we assess the
generalization capability of a super-resolution generative adversarial network
(GAN) in reconstructing scalars with different distributions. Forced turbulence
mixing DNS data with a fixed Reynolds number but different bulk scalar
distributions are generated and used as training/testing datasets. The results
show that the velocity vector field can be reconstructed well, but the model
fails to super-resolve the scalars from out-of-sample distributions. Including
two extreme mixture fraction distributions, namely double Pareto and
semi-Gaussian, in the training dataset significantly improves the performance
of the model, not only for those distributions, but also for previously unseen
bimodal distributions.
|
2503.08836 | Dylan Cashman | Dylan Cashman, Mark Keller, Hyeon Jeon, Bum Chul Kwon, Qianwen Wang | A Critical Analysis of the Usage of Dimensionality Reduction in Four
Domains | In submission to TVCG. Currently under minor revision | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dimensionality reduction is used as an important tool for unraveling the
complexities of high-dimensional datasets in many fields of science, such as
cell biology, chemical informatics, and physics. Visualizations of the
dimensionally reduced data enable scientists to delve into the intrinsic
structures of their datasets and align them with established hypotheses.
Visualization researchers have thus proposed many dimensionality reduction
methods and interactive systems designed to uncover latent structures. At the
same time, different scientific domains have formulated guidelines or common
workflows for using dimensionality reduction techniques and visualizations for
their respective fields. In this work, we present a critical analysis of the
usage of dimensionality reduction in scientific domains outside of computer
science. First, we conduct a bibliometric analysis of 21,249 academic
publications that use dimensionality reduction to observe differences in the
frequency of techniques across fields. Next, we conduct a survey of a 71-paper
sample from four fields: biology, chemistry, physics, and business. Through
this survey, we uncover common workflows, processes, and usage patterns,
including the mixed use of confirmatory data analysis to validate a dataset and
projection method and exploratory data analysis to then generate more
hypotheses. We also find that misinterpretations and inappropriate usage are
common, particularly in the visual interpretation of the resulting
dimensionally reduced view. Lastly, we compare our observations with recent
works in the visualization community in order to match work within our
community to potential areas of impact outside our community.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 19:18:25 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Cashman",
"Dylan",
""
],
[
"Keller",
"Mark",
""
],
[
"Jeon",
"Hyeon",
""
],
[
"Kwon",
"Bum Chul",
""
],
[
"Wang",
"Qianwen",
""
]
] | TITLE: A Critical Analysis of the Usage of Dimensionality Reduction in Four
Domains
ABSTRACT: Dimensionality reduction is used as an important tool for unraveling the
complexities of high-dimensional datasets in many fields of science, such as
cell biology, chemical informatics, and physics. Visualizations of the
dimensionally reduced data enable scientists to delve into the intrinsic
structures of their datasets and align them with established hypotheses.
Visualization researchers have thus proposed many dimensionality reduction
methods and interactive systems designed to uncover latent structures. At the
same time, different scientific domains have formulated guidelines or common
workflows for using dimensionality reduction techniques and visualizations for
their respective fields. In this work, we present a critical analysis of the
usage of dimensionality reduction in scientific domains outside of computer
science. First, we conduct a bibliometric analysis of 21,249 academic
publications that use dimensionality reduction to observe differences in the
frequency of techniques across fields. Next, we conduct a survey of a 71-paper
sample from four fields: biology, chemistry, physics, and business. Through
this survey, we uncover common workflows, processes, and usage patterns,
including the mixed use of confirmatory data analysis to validate a dataset and
projection method and exploratory data analysis to then generate more
hypotheses. We also find that misinterpretations and inappropriate usage are
common, particularly in the visual interpretation of the resulting
dimensionally reduced view. Lastly, we compare our observations with recent
works in the visualization community in order to match work within our
community to potential areas of impact outside our community.
|
2503.08842 | Kun Qian | Tianyu Sun, Kun Qian, Wenhong Wang | Contrastive Speaker-Aware Learning for Multi-party Dialogue Generation
with LLMs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-party dialogue generation presents significant challenges due to the
complex interplay of multiple speakers and interwoven conversational threads.
Traditional approaches often fall short in capturing these complexities,
particularly when relying on manually annotated dialogue relations. This paper
introduces Speaker-Attentive LLM (SA-LLM), a novel generative model that
leverages pre-trained Large Language Models (LLMs) and a speaker-aware
contrastive learning strategy to address these challenges. SA-LLM incorporates
a speaker-attributed input encoding and a contrastive learning objective to
implicitly learn contextual coherence and speaker roles without explicit
relation annotations. Extensive experiments on the Ubuntu IRC and Movie
Dialogues datasets demonstrate that SA-LLM significantly outperforms
state-of-the-art baselines in automatic and human evaluations, achieving
superior performance in fluency, coherence, informativeness, and response
diversity. Ablation studies and detailed error analyses further validate the
effectiveness of the proposed speaker-attentive training approach, highlighting
its robustness across different speaker roles and context lengths. The results
underscore the potential of SA-LLM as a powerful and annotation-free solution
for high-quality multi-party dialogue generation.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 19:28:12 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Sun",
"Tianyu",
""
],
[
"Qian",
"Kun",
""
],
[
"Wang",
"Wenhong",
""
]
] | TITLE: Contrastive Speaker-Aware Learning for Multi-party Dialogue Generation
with LLMs
ABSTRACT: Multi-party dialogue generation presents significant challenges due to the
complex interplay of multiple speakers and interwoven conversational threads.
Traditional approaches often fall short in capturing these complexities,
particularly when relying on manually annotated dialogue relations. This paper
introduces Speaker-Attentive LLM (SA-LLM), a novel generative model that
leverages pre-trained Large Language Models (LLMs) and a speaker-aware
contrastive learning strategy to address these challenges. SA-LLM incorporates
a speaker-attributed input encoding and a contrastive learning objective to
implicitly learn contextual coherence and speaker roles without explicit
relation annotations. Extensive experiments on the Ubuntu IRC and Movie
Dialogues datasets demonstrate that SA-LLM significantly outperforms
state-of-the-art baselines in automatic and human evaluations, achieving
superior performance in fluency, coherence, informativeness, and response
diversity. Ablation studies and detailed error analyses further validate the
effectiveness of the proposed speaker-attentive training approach, highlighting
its robustness across different speaker roles and context lengths. The results
underscore the potential of SA-LLM as a powerful and annotation-free solution
for high-quality multi-party dialogue generation.
|
2503.08857 | Mateo Alejandro Rojas | Rafael Carranza, Mateo Alejandro Rojas | Interpretable and Robust Dialogue State Tracking via Natural Language
Summarization with LLMs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel approach to Dialogue State Tracking (DST) that
leverages Large Language Models (LLMs) to generate natural language
descriptions of dialogue states, moving beyond traditional slot-value
representations. Conventional DST methods struggle with open-domain dialogues
and noisy inputs. Motivated by the generative capabilities of LLMs, our Natural
Language DST (NL-DST) framework trains an LLM to directly synthesize
human-readable state descriptions. We demonstrate through extensive experiments
on MultiWOZ 2.1 and Taskmaster-1 datasets that NL-DST significantly outperforms
rule-based and discriminative BERT-based DST baselines, as well as generative
slot-filling GPT-2 DST models, in both Joint Goal Accuracy and Slot Accuracy.
Ablation studies and human evaluations further validate the effectiveness of
natural language state generation, highlighting its robustness to noise and
enhanced interpretability. Our findings suggest that NL-DST offers a more
flexible, accurate, and human-understandable approach to dialogue state
tracking, paving the way for more robust and adaptable task-oriented dialogue
systems.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 19:52:02 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Carranza",
"Rafael",
""
],
[
"Rojas",
"Mateo Alejandro",
""
]
] | TITLE: Interpretable and Robust Dialogue State Tracking via Natural Language
Summarization with LLMs
ABSTRACT: This paper introduces a novel approach to Dialogue State Tracking (DST) that
leverages Large Language Models (LLMs) to generate natural language
descriptions of dialogue states, moving beyond traditional slot-value
representations. Conventional DST methods struggle with open-domain dialogues
and noisy inputs. Motivated by the generative capabilities of LLMs, our Natural
Language DST (NL-DST) framework trains an LLM to directly synthesize
human-readable state descriptions. We demonstrate through extensive experiments
on MultiWOZ 2.1 and Taskmaster-1 datasets that NL-DST significantly outperforms
rule-based and discriminative BERT-based DST baselines, as well as generative
slot-filling GPT-2 DST models, in both Joint Goal Accuracy and Slot Accuracy.
Ablation studies and human evaluations further validate the effectiveness of
natural language state generation, highlighting its robustness to noise and
enhanced interpretability. Our findings suggest that NL-DST offers a more
flexible, accurate, and human-understandable approach to dialogue state
tracking, paving the way for more robust and adaptable task-oriented dialogue
systems.
|
2503.08867 | Yuhong Guo | Abdullah Alchihabi, Hanping Zhang, Yuhong Guo | Zero-Shot Action Generalization with Limited Observations | AISTATS 2025 | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning (RL) has demonstrated remarkable success in solving
sequential decision-making problems. However, in real-world scenarios, RL
agents often struggle to generalize when faced with unseen actions that were
not encountered during training. Some previous works on zero-shot action
generalization rely on large datasets of action observations to capture the
behaviors of new actions, making them impractical for real-world applications.
In this paper, we introduce a novel zero-shot framework, Action Generalization
from Limited Observations (AGLO). Our framework has two main components: an
action representation learning module and a policy learning module. The action
representation learning module extracts discriminative embeddings of actions
from limited observations, while the policy learning module leverages the
learned action representations, along with augmented synthetic action
representations, to learn a policy capable of handling tasks with unseen
actions. The experimental results demonstrate that our framework significantly
outperforms state-of-the-art methods for zero-shot action generalization across
multiple benchmark tasks, showcasing its effectiveness in generalizing to new
actions with minimal action observations.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 20:14:25 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Alchihabi",
"Abdullah",
""
],
[
"Zhang",
"Hanping",
""
],
[
"Guo",
"Yuhong",
""
]
] | TITLE: Zero-Shot Action Generalization with Limited Observations
ABSTRACT: Reinforcement Learning (RL) has demonstrated remarkable success in solving
sequential decision-making problems. However, in real-world scenarios, RL
agents often struggle to generalize when faced with unseen actions that were
not encountered during training. Some previous works on zero-shot action
generalization rely on large datasets of action observations to capture the
behaviors of new actions, making them impractical for real-world applications.
In this paper, we introduce a novel zero-shot framework, Action Generalization
from Limited Observations (AGLO). Our framework has two main components: an
action representation learning module and a policy learning module. The action
representation learning module extracts discriminative embeddings of actions
from limited observations, while the policy learning module leverages the
learned action representations, along with augmented synthetic action
representations, to learn a policy capable of handling tasks with unseen
actions. The experimental results demonstrate that our framework significantly
outperforms state-of-the-art methods for zero-shot action generalization across
multiple benchmark tasks, showcasing its effectiveness in generalizing to new
actions with minimal action observations.
|
2503.08870 | Robin Schmitt | Rafael R. Oexner, Robin Schmitt, Hyunchan Ahn, Ravi A. Shah, Anna
Zoccarato, Konstantinos Theofilatos, Ajay M. Shah | Comprehensive Benchmarking of Machine Learning Methods for Risk
Prediction Modelling from Large-Scale Survival Data: A UK Biobank Study | null | null | null | null | cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | Predictive modelling is vital to guide preventive efforts. Whilst large-scale
prospective cohort studies and a diverse toolkit of available machine learning
(ML) algorithms have facilitated such survival task efforts, choosing the
best-performing algorithm remains challenging. Benchmarking studies to date
focus on relatively small-scale datasets and it is unclear how well such
findings translate to large datasets that combine omics and clinical features.
We sought to benchmark eight distinct survival task implementations, ranging
from linear to deep learning (DL) models, within the large-scale prospective
cohort study UK Biobank (UKB). We compared discrimination and computational
requirements across heterogeneous predictor matrices and endpoints. Finally, we
assessed how well different architectures scale with sample sizes ranging from
n = 5,000 to n = 250,000 individuals. Our results show that discriminative
performance across a multitude of metrics is dependent on endpoint frequency
and predictor matrix properties, with very robust performance of (penalised)
Cox Proportional Hazards (Cox-PH) models. Of note, there are certain scenarios
which favour more complex frameworks, specifically if working with larger
numbers of observations and relatively simple predictor matrices. The observed
computational requirements were vastly different, and we provide solutions in
cases where current implementations were impracticable. In conclusion, this
work delineates how optimal model choice is dependent on a variety of factors,
including sample size, endpoint frequency and predictor matrix properties, thus
constituting an informative resource for researchers working on similar
datasets. Furthermore, we showcase how linear models still provide a highly
effective and scalable platform for risk modelling at scale and suggest
that they be reported alongside non-linear ML models.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 20:27:20 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Oexner",
"Rafael R.",
""
],
[
"Schmitt",
"Robin",
""
],
[
"Ahn",
"Hyunchan",
""
],
[
"Shah",
"Ravi A.",
""
],
[
"Zoccarato",
"Anna",
""
],
[
"Theofilatos",
"Konstantinos",
""
],
[
"Shah",
"Ajay M.",
""
]... | TITLE: Comprehensive Benchmarking of Machine Learning Methods for Risk
Prediction Modelling from Large-Scale Survival Data: A UK Biobank Study
ABSTRACT: Predictive modelling is vital to guide preventive efforts. Whilst large-scale
prospective cohort studies and a diverse toolkit of available machine learning
(ML) algorithms have facilitated such survival task efforts, choosing the
best-performing algorithm remains challenging. Benchmarking studies to date
focus on relatively small-scale datasets and it is unclear how well such
findings translate to large datasets that combine omics and clinical features.
We sought to benchmark eight distinct survival task implementations, ranging
from linear to deep learning (DL) models, within the large-scale prospective
cohort study UK Biobank (UKB). We compared discrimination and computational
requirements across heterogeneous predictor matrices and endpoints. Finally, we
assessed how well different architectures scale with sample sizes ranging from
n = 5,000 to n = 250,000 individuals. Our results show that discriminative
performance across a multitude of metrics is dependent on endpoint frequency
and predictor matrix properties, with very robust performance of (penalised)
Cox Proportional Hazards (Cox-PH) models. Of note, there are certain scenarios
which favour more complex frameworks, specifically if working with larger
numbers of observations and relatively simple predictor matrices. The observed
computational requirements were vastly different, and we provide solutions in
cases where current implementations were impracticable. In conclusion, this
work delineates how optimal model choice is dependent on a variety of factors,
including sample size, endpoint frequency and predictor matrix properties, thus
constituting an informative resource for researchers working on similar
datasets. Furthermore, we showcase how linear models still provide a highly
effective and scalable platform for risk modelling at scale and suggest
that they be reported alongside non-linear ML models.
|
2503.08884 | Parsa Hosseni | Parsa Hosseini, Sumit Nawathe, Mazda Moayeri, Sriram Balasubramanian,
Soheil Feizi | Seeing What's Not There: Spurious Correlation in Multimodal LLMs | null | null | null | null | cs.CV cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Unimodal vision models are known to rely on spurious correlations, but it
remains unclear to what extent Multimodal Large Language Models (MLLMs) exhibit
similar biases despite language supervision. In this paper, we investigate
spurious bias in MLLMs and introduce SpurLens, a pipeline that leverages GPT-4
and open-set object detectors to automatically identify spurious visual cues
without human supervision. Our findings reveal that spurious correlations cause
two major failure modes in MLLMs: (1) over-reliance on spurious cues for object
recognition, where removing these cues reduces accuracy, and (2) object
hallucination, where spurious cues amplify the hallucination by over 10x. We
validate our findings in various MLLMs and datasets. Beyond diagnosing these
failures, we explore potential mitigation strategies, such as prompt ensembling
and reasoning-based prompting, and conduct ablation studies to examine the root
causes of spurious bias in MLLMs. By exposing the persistence of spurious
correlations, our study calls for more rigorous evaluation methods and
mitigation strategies to enhance the reliability of MLLMs.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 20:53:00 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Hosseini",
"Parsa",
""
],
[
"Nawathe",
"Sumit",
""
],
[
"Moayeri",
"Mazda",
""
],
[
"Balasubramanian",
"Sriram",
""
],
[
"Feizi",
"Soheil",
""
]
] | TITLE: Seeing What's Not There: Spurious Correlation in Multimodal LLMs
ABSTRACT: Unimodal vision models are known to rely on spurious correlations, but it
remains unclear to what extent Multimodal Large Language Models (MLLMs) exhibit
similar biases despite language supervision. In this paper, we investigate
spurious bias in MLLMs and introduce SpurLens, a pipeline that leverages GPT-4
and open-set object detectors to automatically identify spurious visual cues
without human supervision. Our findings reveal that spurious correlations cause
two major failure modes in MLLMs: (1) over-reliance on spurious cues for object
recognition, where removing these cues reduces accuracy, and (2) object
hallucination, where spurious cues amplify the hallucination by over 10x. We
validate our findings in various MLLMs and datasets. Beyond diagnosing these
failures, we explore potential mitigation strategies, such as prompt ensembling
and reasoning-based prompting, and conduct ablation studies to examine the root
causes of spurious bias in MLLMs. By exposing the persistence of spurious
correlations, our study calls for more rigorous evaluation methods and
mitigation strategies to enhance the reliability of MLLMs.
|
2503.08890 | Zhiwen You | Zhiwen You, Yue Guo | PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain
Language Summaries Generation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Hallucinated outputs from language models pose risks in the medical domain,
especially for lay audiences making health-related decisions. Existing
factuality evaluation methods, such as entailment- and question-answering-based
(QA), struggle with plain language summary (PLS) generation due to the
elaborative explanation phenomenon, which introduces external content (e.g., definitions,
background, examples) absent from the source document to enhance comprehension.
To address this, we introduce PlainQAFact, a framework trained on a
fine-grained, human-annotated dataset PlainFact, to evaluate the factuality of
both source-simplified and elaboratively explained sentences. PlainQAFact first
classifies factuality type and then assesses factuality using a
retrieval-augmented QA-based scoring method. Our approach is lightweight and
computationally efficient. Empirical results show that existing factuality
metrics fail to effectively evaluate factuality in PLS, especially for
elaborative explanations, whereas PlainQAFact achieves state-of-the-art
performance. We further analyze its effectiveness across external knowledge
sources, answer extraction strategies, overlap measures, and document
granularity levels, refining its overall factuality assessment.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 20:59:53 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"You",
"Zhiwen",
""
],
[
"Guo",
"Yue",
""
]
] | TITLE: PlainQAFact: Automatic Factuality Evaluation Metric for Biomedical Plain
Language Summaries Generation
ABSTRACT: Hallucinated outputs from language models pose risks in the medical domain,
especially for lay audiences making health-related decisions. Existing
factuality evaluation methods, such as entailment- and question-answering-based
(QA), struggle with plain language summary (PLS) generation due to the
elaborative explanation phenomenon, which introduces external content (e.g., definitions,
background, examples) absent from the source document to enhance comprehension.
To address this, we introduce PlainQAFact, a framework trained on a
fine-grained, human-annotated dataset PlainFact, to evaluate the factuality of
both source-simplified and elaboratively explained sentences. PlainQAFact first
classifies factuality type and then assesses factuality using a
retrieval-augmented QA-based scoring method. Our approach is lightweight and
computationally efficient. Empirical results show that existing factuality
metrics fail to effectively evaluate factuality in PLS, especially for
elaborative explanations, whereas PlainQAFact achieves state-of-the-art
performance. We further analyze its effectiveness across external knowledge
sources, answer extraction strategies, overlap measures, and document
granularity levels, refining its overall factuality assessment.
|
2503.08902 | Forough Fazeli-Asl | Forough Fazeliasl, Michael Minyi Zhang, Bei Jiang, Linglong Kong | A Deep Bayesian Nonparametric Framework for Robust Mutual Information
Estimation | null | null | null | null | stat.ML cs.LG stat.AP stat.CO | http://creativecommons.org/licenses/by/4.0/ | Mutual Information (MI) is a crucial measure for capturing dependencies
between variables, but exact computation is challenging in high dimensions with
intractable likelihoods, impacting accuracy and robustness. One idea is to use
an auxiliary neural network to train an MI estimator; however, methods based on
the empirical distribution function (EDF) can introduce sharp fluctuations in
the MI loss due to poor out-of-sample performance, destabilizing convergence.
We present a Bayesian nonparametric (BNP) solution for training an MI estimator
by constructing the MI loss with a finite representation of the Dirichlet
process posterior to incorporate regularization in the training process. With
this regularization, the MI loss integrates both prior knowledge and empirical
data to reduce the loss sensitivity to fluctuations and outliers in the sample
data, especially in small sample settings like mini-batches. This approach
addresses the challenge of balancing accuracy and low variance by effectively
reducing variance, leading to stabilized and robust MI loss gradients during
training and enhancing the convergence of the MI approximation while offering
stronger theoretical guarantees for convergence. We explore the application of
our estimator in maximizing MI between the data space and the latent space of a
variational autoencoder. Experimental results demonstrate significant
improvements in convergence over EDF-based methods, with applications across
synthetic and real datasets, notably in 3D CT image generation, yielding
enhanced structure discovery and reduced overfitting in data synthesis. While
this paper focuses on generative models in application, the proposed estimator
is not restricted to this setting and can be applied more broadly in various
BNP learning procedures.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 21:27:48 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Fazeliasl",
"Forough",
""
],
[
"Zhang",
"Michael Minyi",
""
],
[
"Jiang",
"Bei",
""
],
[
"Kong",
"Linglong",
""
]
] | TITLE: A Deep Bayesian Nonparametric Framework for Robust Mutual Information
Estimation
ABSTRACT: Mutual Information (MI) is a crucial measure for capturing dependencies
between variables, but exact computation is challenging in high dimensions with
intractable likelihoods, impacting accuracy and robustness. One idea is to use
an auxiliary neural network to train an MI estimator; however, methods based on
the empirical distribution function (EDF) can introduce sharp fluctuations in
the MI loss due to poor out-of-sample performance, destabilizing convergence.
We present a Bayesian nonparametric (BNP) solution for training an MI estimator
by constructing the MI loss with a finite representation of the Dirichlet
process posterior to incorporate regularization in the training process. With
this regularization, the MI loss integrates both prior knowledge and empirical
data to reduce the loss sensitivity to fluctuations and outliers in the sample
data, especially in small sample settings like mini-batches. This approach
addresses the challenge of balancing accuracy and low variance by effectively
reducing variance, leading to stabilized and robust MI loss gradients during
training and enhancing the convergence of the MI approximation while offering
stronger theoretical guarantees for convergence. We explore the application of
our estimator in maximizing MI between the data space and the latent space of a
variational autoencoder. Experimental results demonstrate significant
improvements in convergence over EDF-based methods, with applications across
synthetic and real datasets, notably in 3D CT image generation, yielding
enhanced structure discovery and reduced overfitting in data synthesis. While
this paper focuses on generative models in application, the proposed estimator
is not restricted to this setting and can be applied more broadly in various
BNP learning procedures.
|
2503.08906 | Wenhui Zhu | Xiwen Chen, Wenhui Zhu, Peijie Qiu, Hao Wang, Huayu Li, Haiyu Wu,
Aristeidis Sotiras, Yalin Wang and Abolfazl Razi | Prompt-OT: An Optimal Transport Regularization Paradigm for Knowledge
Preservation in Vision-Language Model Adaptation | null | null | null | null | cs.CV cs.AI cs.CL cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vision-language models (VLMs) such as CLIP demonstrate strong performance but
struggle when adapted to downstream tasks. Prompt learning has emerged as an
efficient and effective strategy to adapt VLMs while preserving their
pre-trained knowledge. However, existing methods still lead to overfitting and
degrade zero-shot generalization. To address this challenge, we propose an
optimal transport (OT)-guided prompt learning framework that mitigates
forgetting by preserving the structural consistency of feature distributions
between pre-trained and fine-tuned models. Unlike conventional point-wise
constraints, OT naturally captures cross-instance relationships and expands the
feasible parameter space for prompt tuning, allowing a better trade-off between
adaptation and generalization. Our approach enforces joint constraints on both
vision and text representations, ensuring a holistic feature alignment.
Extensive experiments on benchmark datasets demonstrate that our simple yet
effective method can outperform existing prompt learning strategies in
base-to-novel generalization, cross-dataset evaluation, and domain
generalization without additional augmentation or ensemble techniques. The code
is available at https://github.com/ChongQingNoSubway/Prompt-OT
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 21:38:34 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Chen",
"Xiwen",
""
],
[
"Zhu",
"Wenhui",
""
],
[
"Qiu",
"Peijie",
""
],
[
"Wang",
"Hao",
""
],
[
"Li",
"Huayu",
""
],
[
"Wu",
"Haiyu",
""
],
[
"Sotiras",
"Aristeidis",
""
],
[
"Wang",
"Yalin",
... | TITLE: Prompt-OT: An Optimal Transport Regularization Paradigm for Knowledge
Preservation in Vision-Language Model Adaptation
ABSTRACT: Vision-language models (VLMs) such as CLIP demonstrate strong performance but
struggle when adapted to downstream tasks. Prompt learning has emerged as an
efficient and effective strategy to adapt VLMs while preserving their
pre-trained knowledge. However, existing methods still lead to overfitting and
degrade zero-shot generalization. To address this challenge, we propose an
optimal transport (OT)-guided prompt learning framework that mitigates
forgetting by preserving the structural consistency of feature distributions
between pre-trained and fine-tuned models. Unlike conventional point-wise
constraints, OT naturally captures cross-instance relationships and expands the
feasible parameter space for prompt tuning, allowing a better trade-off between
adaptation and generalization. Our approach enforces joint constraints on both
vision and text representations, ensuring a holistic feature alignment.
Extensive experiments on benchmark datasets demonstrate that our simple yet
effective method can outperform existing prompt learning strategies in
base-to-novel generalization, cross-dataset evaluation, and domain
generalization without additional augmentation or ensemble techniques. The code
is available at https://github.com/ChongQingNoSubway/Prompt-OT
|
2503.08915 | Matthieu Terris | Matthieu Terris, Samuel Hurault, Maxime Song, Julian Tachella | Reconstruct Anything Model: a lightweight foundation model for
computational imaging | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Most existing learning-based methods for solving imaging inverse problems can
be roughly divided into two classes: iterative algorithms, such as
plug-and-play and diffusion methods, that leverage pretrained denoisers, and
unrolled architectures that are trained end-to-end for specific imaging
problems. Iterative methods in the first class are computationally costly and
often provide suboptimal reconstruction performance, whereas unrolled
architectures are generally specific to a single inverse problem and require
expensive training. In this work, we propose a novel non-iterative, lightweight
architecture that incorporates knowledge about the forward operator
(acquisition physics and noise parameters) without relying on unrolling. Our
model is trained to solve a wide range of inverse problems beyond denoising,
including deblurring, magnetic resonance imaging, computed tomography,
inpainting, and super-resolution. The proposed model can be easily adapted to
unseen inverse problems or datasets with a few fine-tuning steps (up to a few
images) in a self-supervised way, without ground-truth references. Throughout a
series of experiments, we demonstrate state-of-the-art performance from medical
imaging to low-photon imaging and microscopy.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 21:53:58 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Terris",
"Matthieu",
""
],
[
"Hurault",
"Samuel",
""
],
[
"Song",
"Maxime",
""
],
[
"Tachella",
"Julian",
""
]
] | TITLE: Reconstruct Anything Model: a lightweight foundation model for
computational imaging
ABSTRACT: Most existing learning-based methods for solving imaging inverse problems can
be roughly divided into two classes: iterative algorithms, such as
plug-and-play and diffusion methods, that leverage pretrained denoisers, and
unrolled architectures that are trained end-to-end for specific imaging
problems. Iterative methods in the first class are computationally costly and
often provide suboptimal reconstruction performance, whereas unrolled
architectures are generally specific to a single inverse problem and require
expensive training. In this work, we propose a novel non-iterative, lightweight
architecture that incorporates knowledge about the forward operator
(acquisition physics and noise parameters) without relying on unrolling. Our
model is trained to solve a wide range of inverse problems beyond denoising,
including deblurring, magnetic resonance imaging, computed tomography,
inpainting, and super-resolution. The proposed model can be easily adapted to
unseen inverse problems or datasets with a few fine-tuning steps (up to a few
images) in a self-supervised way, without ground-truth references. Throughout a
series of experiments, we demonstrate state-of-the-art performance from medical
imaging to low-photon imaging and microscopy.
|
2503.08929 | Hrishikesh Viswanath | Hrishikesh Viswanath, Md Ashiqur Rahman, Chi Lin, Damon Conover,
Aniket Bera | HessianForge: Scalable LiDAR reconstruction with Physics-Informed Neural
Representation and Smoothness Energy Constraints | null | null | null | null | cs.GR cs.AI cs.CV cs.LG cs.RO eess.IV | http://creativecommons.org/licenses/by/4.0/ | Accurate and efficient 3D mapping of large-scale outdoor environments from
LiDAR measurements is a fundamental challenge in robotics, particularly towards
ensuring smooth and artifact-free surface reconstructions. Although the
state-of-the-art methods focus on memory-efficient neural representations for
high-fidelity surface generation, they often fail to produce artifact-free
manifolds, with artifacts arising due to noisy and sparse inputs. To address
this issue, we frame surface mapping as a physics-informed energy optimization
problem, enforcing surface smoothness by optimizing an energy functional that
penalizes sharp surface ridges. Specifically, we propose a deep learning based
approach that learns the signed distance field (SDF) of the surface manifold
from raw LiDAR point clouds using a physics-informed loss function that
optimizes the $L_2$-Hessian energy of the surface. Our learning framework
includes a hierarchical octree based input feature encoding and a multi-scale
neural network to iteratively refine the signed distance field at different
scales of resolution. Lastly, we introduce a test-time refinement strategy to
correct topological inconsistencies and edge distortions that can arise in the
generated mesh. We propose a \texttt{CUDA}-accelerated least-squares
optimization that locally adjusts vertex positions to enforce
feature-preserving smoothing. We evaluate our approach on large-scale outdoor
datasets and demonstrate that our approach outperforms current state-of-the-art
methods in terms of improved accuracy and smoothness. Our code is available at
\href{https://github.com/HrishikeshVish/HessianForge/}{https://github.com/HrishikeshVish/HessianForge/}
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:18:51 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Viswanath",
"Hrishikesh",
""
],
[
"Rahman",
"Md Ashiqur",
""
],
[
"Lin",
"Chi",
""
],
[
"Conover",
"Damon",
""
],
[
"Bera",
"Aniket",
""
]
] | TITLE: HessianForge: Scalable LiDAR reconstruction with Physics-Informed Neural
Representation and Smoothness Energy Constraints
ABSTRACT: Accurate and efficient 3D mapping of large-scale outdoor environments from
LiDAR measurements is a fundamental challenge in robotics, particularly towards
ensuring smooth and artifact-free surface reconstructions. Although the
state-of-the-art methods focus on memory-efficient neural representations for
high-fidelity surface generation, they often fail to produce artifact-free
manifolds, with artifacts arising due to noisy and sparse inputs. To address
this issue, we frame surface mapping as a physics-informed energy optimization
problem, enforcing surface smoothness by optimizing an energy functional that
penalizes sharp surface ridges. Specifically, we propose a deep learning based
approach that learns the signed distance field (SDF) of the surface manifold
from raw LiDAR point clouds using a physics-informed loss function that
optimizes the $L_2$-Hessian energy of the surface. Our learning framework
includes a hierarchical octree based input feature encoding and a multi-scale
neural network to iteratively refine the signed distance field at different
scales of resolution. Lastly, we introduce a test-time refinement strategy to
correct topological inconsistencies and edge distortions that can arise in the
generated mesh. We propose a \texttt{CUDA}-accelerated least-squares
optimization that locally adjusts vertex positions to enforce
feature-preserving smoothing. We evaluate our approach on large-scale outdoor
datasets and demonstrate that our approach outperforms current state-of-the-art
methods in terms of improved accuracy and smoothness. Our code is available at
\href{https://github.com/HrishikeshVish/HessianForge/}{https://github.com/HrishikeshVish/HessianForge/}
|
2503.08930 | Tianxiang Lin | Tianxiang Lin, Mohamad Qadri, Kevin Zhang, Adithya Pediredla,
Christopher A. Metzler, Michael Kaess | Acoustic Neural 3D Reconstruction Under Pose Drift | 8 pages, 8 figures. This paper is under review | null | null | null | eess.SP cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of optimizing neural implicit surfaces for 3D
reconstruction using acoustic images collected with drifting sensor poses. The
accuracy of current state-of-the-art 3D acoustic modeling algorithms is highly
dependent on accurate pose estimation; small errors in sensor pose can lead to
severe reconstruction artifacts. In this paper, we propose an algorithm that
jointly optimizes the neural scene representation and sonar poses. Our
algorithm does so by parameterizing the 6DoF poses as learnable parameters and
backpropagating gradients through the neural renderer and implicit
representation. We validated our algorithm on both real and simulated datasets.
It produces high-fidelity 3D reconstructions even under significant pose drift.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:18:57 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Lin",
"Tianxiang",
""
],
[
"Qadri",
"Mohamad",
""
],
[
"Zhang",
"Kevin",
""
],
[
"Pediredla",
"Adithya",
""
],
[
"Metzler",
"Christopher A.",
""
],
[
"Kaess",
"Michael",
""
]
] | TITLE: Acoustic Neural 3D Reconstruction Under Pose Drift
ABSTRACT: We consider the problem of optimizing neural implicit surfaces for 3D
reconstruction using acoustic images collected with drifting sensor poses. The
accuracy of current state-of-the-art 3D acoustic modeling algorithms is highly
dependent on accurate pose estimation; small errors in sensor pose can lead to
severe reconstruction artifacts. In this paper, we propose an algorithm that
jointly optimizes the neural scene representation and sonar poses. Our
algorithm does so by parameterizing the 6DoF poses as learnable parameters and
backpropagating gradients through the neural renderer and implicit
representation. We validated our algorithm on both real and simulated datasets.
It produces high-fidelity 3D reconstructions even under significant pose drift.
|
2503.08937 | Mohammad Farzanullah | Mohammad Farzanullah, Han Zhang, Akram Bin Sediq, Ali Afana, Melike
Erol-Kantarci | Beam Selection in ISAC using Contextual Bandit with Multi-modal
Transformer and Transfer Learning | 6 pages, 4 figures, 2 tables, IEEE International Conference on
Communications 2025 | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sixth generation (6G) wireless technology is anticipated to introduce
Integrated Sensing and Communication (ISAC) as a transformative paradigm. ISAC
unifies wireless communication and RADAR or other forms of sensing to optimize
spectral and hardware resources. This paper presents a pioneering framework
that leverages ISAC sensing data to enhance beam selection processes in complex
indoor environments. By integrating multi-modal transformer models with a
multi-agent contextual bandit algorithm, our approach utilizes ISAC sensing
data to improve communication performance and achieves high spectral efficiency
(SE). Specifically, the multi-modal transformer can capture inter-modal
relationships, enhancing model generalization across diverse scenarios.
Experimental evaluations on the DeepSense 6G dataset demonstrate that our model
outperforms traditional deep reinforcement learning (DRL) methods, achieving
superior beam prediction accuracy and adaptability. In the single-user
scenario, we achieve an average SE regret improvement of 49.6% as compared to
DRL. Furthermore, we employ transfer reinforcement learning to reduce training
time and improve model performance in multi-user environments. In the
multi-user scenario, this approach improves the average SE regret, a measure of
how far the learned policy is from the optimal SE policy, by 19.7% compared to
training from scratch, even when the latter is
trained 100 times longer.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:35:19 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Farzanullah",
"Mohammad",
""
],
[
"Zhang",
"Han",
""
],
[
"Sediq",
"Akram Bin",
""
],
[
"Afana",
"Ali",
""
],
[
"Erol-Kantarci",
"Melike",
""
]
] | TITLE: Beam Selection in ISAC using Contextual Bandit with Multi-modal
Transformer and Transfer Learning
ABSTRACT: Sixth generation (6G) wireless technology is anticipated to introduce
Integrated Sensing and Communication (ISAC) as a transformative paradigm. ISAC
unifies wireless communication and RADAR or other forms of sensing to optimize
spectral and hardware resources. This paper presents a pioneering framework
that leverages ISAC sensing data to enhance beam selection processes in complex
indoor environments. By integrating multi-modal transformer models with a
multi-agent contextual bandit algorithm, our approach utilizes ISAC sensing
data to improve communication performance and achieves high spectral efficiency
(SE). Specifically, the multi-modal transformer can capture inter-modal
relationships, enhancing model generalization across diverse scenarios.
Experimental evaluations on the DeepSense 6G dataset demonstrate that our model
outperforms traditional deep reinforcement learning (DRL) methods, achieving
superior beam prediction accuracy and adaptability. In the single-user
scenario, we achieve an average SE regret improvement of 49.6% as compared to
DRL. Furthermore, we employ transfer reinforcement learning to reduce training
time and improve model performance in multi-user environments. In the
multi-user scenario, this approach improves the average SE regret, a measure of
how far the learned policy is from the optimal SE policy, by 19.7% compared to
training from scratch, even when the latter is
trained 100 times longer.
|
2503.08939 | Jorge Luiz Dos Santos Canuto | Jorge Luiz dos Santos Canuto, Linnyer Beatrys Ruiz Aylon, Rodrigo
Clemente Thom de Souza | KAN-Mixers: a new deep learning architecture for image classification | 8 pages, 6 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Due to their effective performance, Convolutional Neural Network (CNN) and
Vision Transformer (ViT) architectures have become the standard for solving
computer vision tasks. Such architectures require large data sets and rely on
convolution and self-attention operations. In 2021, MLP-Mixer emerged, an
architecture that relies only on Multilayer Perceptron (MLP) and achieves
extremely competitive results when compared to CNNs and ViTs. Despite its good
performance in computer vision tasks, the MLP-Mixer architecture may not be
suitable for refined feature extraction in images. Recently, the
Kolmogorov-Arnold Network (KAN) was proposed as a promising alternative to MLP
models. KANs promise to improve accuracy and interpretability when compared to
MLPs. Therefore, the present work aims to design a new mixer-based
architecture, called KAN-Mixers, using KANs as main layers and evaluate its
performance, in terms of several performance metrics, in the image
classification task. As main results obtained, the KAN-Mixers model was
superior to the MLP, MLP-Mixer and KAN models on the Fashion-MNIST and CIFAR-10
datasets, with average accuracies of 0.9030 and 0.6980, respectively.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:41:22 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Canuto",
"Jorge Luiz dos Santos",
""
],
[
"Aylon",
"Linnyer Beatrys Ruiz",
""
],
[
"de Souza",
"Rodrigo Clemente Thom",
""
]
] | TITLE: KAN-Mixers: a new deep learning architecture for image classification
ABSTRACT: Due to their effective performance, Convolutional Neural Network (CNN) and
Vision Transformer (ViT) architectures have become the standard for solving
computer vision tasks. Such architectures require large data sets and rely on
convolution and self-attention operations. In 2021, MLP-Mixer emerged, an
architecture that relies only on Multilayer Perceptron (MLP) and achieves
extremely competitive results when compared to CNNs and ViTs. Despite its good
performance in computer vision tasks, the MLP-Mixer architecture may not be
suitable for refined feature extraction in images. Recently, the
Kolmogorov-Arnold Network (KAN) was proposed as a promising alternative to MLP
models. KANs promise to improve accuracy and interpretability when compared to
MLPs. Therefore, the present work aims to design a new mixer-based
architecture, called KAN-Mixers, using KANs as main layers and evaluate its
performance, in terms of several performance metrics, in the image
classification task. In our main results, the KAN-Mixers model was
superior to the MLP, MLP-Mixer and KAN models on the Fashion-MNIST and CIFAR-10
datasets, with average accuracies of 0.9030 and 0.6980, respectively.
|
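As a rough illustration of the mixer-based design described in this abstract, the following PyTorch sketch implements one token-mixing/channel-mixing block in the MLP-Mixer style. In KAN-Mixers the two MLPs would be replaced by KAN layers (learnable spline activations), which are not implemented here; the layer sizes and names are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One token-mixing / channel-mixing block in the MLP-Mixer style.
    In KAN-Mixers the two MLPs below would be swapped for KAN layers."""
    def __init__(self, n_tokens, n_channels, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(n_channels)
        self.token_mlp = nn.Sequential(nn.Linear(n_tokens, hidden), nn.GELU(),
                                       nn.Linear(hidden, n_tokens))
        self.norm2 = nn.LayerNorm(n_channels)
        self.channel_mlp = nn.Sequential(nn.Linear(n_channels, hidden), nn.GELU(),
                                         nn.Linear(hidden, n_channels))

    def forward(self, x):                        # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)        # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information across channels
        return x
```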
2503.08940 | Bruno Magacho da Silva | B. Magacho | Coherent Structures and Lattice-Boltzmann Hydrodynamics in Turbulent
Pipe Flows | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Coherent structures (CS) are known to be part of the foundations of turbulent
flow dynamics. For a long time, their appearance was believed to be chaotic and
unorganized. However, it has been demonstrated through numerical simulations
and experiments that a high degree of organization of CS could be attributed to
the constitution of a turbulent state. Understanding these organizational
dynamics promises to bring valuable theoretical and applied predictions, such
as the average lifetime of turbulent structures and understanding the role of
CS in particulate transport.
The identification of CS was achieved by selecting the most energetic mode in
the flow direction within a specified reference shell. Furthermore, the
transition dynamics between the identified CS was investigated as a stochastic
process, revealing a non-Markovian effect through an algebraic decay of the
temporal self-correlation of the identified CS. Finally, the non-Markovian
behavior observed between the transitions of CS was reproduced by a low-level
Markovian model, which takes into account the degeneracy effects in the
definition of the identified CS.
In order to obtain an algorithm capable of simulating the quasi-static regime
in magnetohydrodynamic (MHD) flows, a multiple-relaxation-time (MRT) model and a
distance-dependent boundary condition were introduced for the lattice Boltzmann
method (LBM) associated with the induction equation for MHD flows. Finally, a
turbulent pipe flow simulation was performed by the LBM with a MRT model for
hydrodynamic distributions. The identification of CS revealed a non-trivial
memory effect with respect to the force that triggered the turbulent state. The
transition dynamics of CS revealed a Markovian behavior for finely resolved
time data, indicating that experimental behavior could be recovered for larger
time separations and, consequently, a larger dataset.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 22:43:32 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Magacho",
"B.",
""
]
] | TITLE: Coherent Structures and Lattice-Boltzmann Hydrodynamics in Turbulent
Pipe Flows
ABSTRACT: Coherent structures (CS) are known to be part of the foundations of turbulent
flow dynamics. For a long time, their appearance was believed to be chaotic and
unorganized. However, it has been demonstrated through numerical simulations
and experiments that a high degree of organization of CS could be attributed to
the constitution of a turbulent state. Understanding these organizational
dynamics promises to bring valuable theoretical and applied predictions, such
as the average lifetime of turbulent structures and understanding the role of
CS in particulate transport.
The identification of CS was achieved by selecting the most energetic mode in
the flow direction within a specified reference shell. Furthermore, the
transition dynamics between the identified CS was investigated as a stochastic
process, revealing a non-Markovian effect through an algebraic decay of the
temporal self-correlation of the identified CS. Finally, the non-Markovian
behavior observed between the transitions of CS was reproduced by a low-level
Markovian model, which takes into account the degeneracy effects in the
definition of the identified CS.
In order to obtain an algorithm capable of simulating the quasi-static regime
in magnetohydrodynamic (MHD) flows, a multiple-relaxation-time (MRT) model and a
distance-dependent boundary condition were introduced for the lattice Boltzmann
method (LBM) associated with the induction equation for MHD flows. Finally, a
turbulent pipe flow simulation was performed by the LBM with a MRT model for
hydrodynamic distributions. The identification of CS revealed a non-trivial
memory effect with respect to the force that triggered the turbulent state. The
transition dynamics of CS revealed a Markovian behavior for finely resolved
time data, indicating that experimental behavior could be recovered for larger
time separations and, consequently, a larger dataset.
|
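For readers unfamiliar with the lattice Boltzmann method mentioned above, here is a minimal D2Q9 single-relaxation-time (BGK) collision-and-streaming step in NumPy. It is a deliberately simplified sketch: the paper uses a multiple-relaxation-time (MRT) model, a distance-dependent boundary condition, and an induction equation for MHD, none of which appear below.

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    # rho: (nx, ny) density, u: (nx, ny, 2) velocity
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.sum(u ** 2, axis=-1)[..., None]
    return w * rho[..., None] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def bgk_step(f, tau):
    # f: (nx, ny, 9) populations; tau: relaxation time
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau          # BGK collision
    for q in range(9):                               # streaming, periodic boundaries
        f[..., q] = np.roll(f[..., q], shift=tuple(c[q]), axis=(0, 1))
    return f
```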
2503.08950 | Geng Chen | Rujia Yang, Geng Chen, Chuan Wen and Yang Gao | FP3: A 3D Foundation Policy for Robotic Manipulation | Project website: https://3d-foundation-policy.github.io | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following their success in natural language processing and computer vision,
foundation models that are pre-trained on large-scale multi-task datasets have
also shown great potential in robotics. However, most existing robot foundation
models rely solely on 2D image observations, ignoring 3D geometric information,
which is essential for robots to perceive and reason about the 3D world. In
this paper, we introduce FP3, a first large-scale 3D foundation policy model
for robotic manipulation. FP3 builds on a scalable diffusion transformer
architecture and is pre-trained on 60k trajectories with point cloud
observations. With the model design and diverse pre-training data, FP3 can be
efficiently fine-tuned for downstream tasks while exhibiting strong
generalization capabilities. Experiments on real robots demonstrate that with
only 80 demonstrations, FP3 is able to learn a new task with over 90% success
rates in novel environments with unseen objects, significantly surpassing
existing robot foundation models.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 23:01:08 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Yang",
"Rujia",
""
],
[
"Chen",
"Geng",
""
],
[
"Wen",
"Chuan",
""
],
[
"Gao",
"Yang",
""
]
] | TITLE: FP3: A 3D Foundation Policy for Robotic Manipulation
ABSTRACT: Following their success in natural language processing and computer vision,
foundation models that are pre-trained on large-scale multi-task datasets have
also shown great potential in robotics. However, most existing robot foundation
models rely solely on 2D image observations, ignoring 3D geometric information,
which is essential for robots to perceive and reason about the 3D world. In
this paper, we introduce FP3, a first large-scale 3D foundation policy model
for robotic manipulation. FP3 builds on a scalable diffusion transformer
architecture and is pre-trained on 60k trajectories with point cloud
observations. With the model design and diverse pre-training data, FP3 can be
efficiently fine-tuned for downstream tasks while exhibiting strong
generalization capabilities. Experiments on real robots demonstrate that with
only 80 demonstrations, FP3 is able to learn a new task with over 90% success
rates in novel environments with unseen objects, significantly surpassing
existing robot foundation models.
|
2503.08953 | Yifan Tang | Yifan Tang, Mostafa Rahmani Dehaghani, G. Gary Wang | Capturing Lifecycle System Degradation in Digital Twin Model Updating | 32 pages, 25 figures | null | null | null | cs.CE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Digital twin (DT) has emerged as a powerful tool to facilitate monitoring,
control, and other decision-making tasks in real-world engineering systems.
Online update methods have been proposed to update DT models. Considering the
degradation behavior in the system lifecycle, these methods fail to enable DT
models to predict the system responses affected by the system degradation over
time. To alleviate this problem, degradation models of measurable parameters
have been integrated into DT construction. However, identifying the degradation
parameters relies on prior knowledge of the system and expensive experiments.
To mitigate those limitations, this paper proposes a lifelong update method for
DT models to capture the effects of system degradation on system responses
without any prior knowledge and expensive offline experiments on the system.
The core idea in the work is to represent the system degradation during the
lifecycle as the dynamic changes of DT configurations (i.e., model parameters
with a fixed model structure) at all degradation stages. During the lifelong
update process, an Autoencoder is adopted to reconstruct the model parameters
of all hidden layers simultaneously, so that the latent features taking into
account the dependencies among hidden layers are obtained for each degradation
stage. The dynamic behavior of latent features among successive degradation
stages is then captured by a long short-term memory model, which enables
prediction of the latent feature at any unseen stage. Based on the predicted
latent features, the model configuration at a future degradation stage is
reconstructed to determine the new DT model, which predicts the system
responses affected by the degradation at the same stage. The test results on
two engineering datasets demonstrate that the proposed update method could
capture the effects of system degradation on system responses during the lifecycle.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 23:05:01 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Tang",
"Yifan",
""
],
[
"Dehaghani",
"Mostafa Rahmani",
""
],
[
"Wang",
"G. Gary",
""
]
] | TITLE: Capturing Lifecycle System Degradation in Digital Twin Model Updating
ABSTRACT: Digital twin (DT) has emerged as a powerful tool to facilitate monitoring,
control, and other decision-making tasks in real-world engineering systems.
Online update methods have been proposed to update DT models. Considering the
degradation behavior in the system lifecycle, these methods fail to enable DT
models to predict the system responses affected by the system degradation over
time. To alleviate this problem, degradation models of measurable parameters
have been integrated into DT construction. However, identifying the degradation
parameters relies on prior knowledge of the system and expensive experiments.
To mitigate those limitations, this paper proposes a lifelong update method for
DT models to capture the effects of system degradation on system responses
without any prior knowledge and expensive offline experiments on the system.
The core idea in the work is to represent the system degradation during the
lifecycle as the dynamic changes of DT configurations (i.e., model parameters
with a fixed model structure) at all degradation stages. During the lifelong
update process, an Autoencoder is adopted to reconstruct the model parameters
of all hidden layers simultaneously, so that the latent features taking into
account the dependencies among hidden layers are obtained for each degradation
stage. The dynamic behavior of latent features among successive degradation
stages is then captured by a long short-term memory model, which enables
prediction of the latent feature at any unseen stage. Based on the predicted
latent features, the model configuration at a future degradation stage is
reconstructed to determine the new DT model, which predicts the system
responses affected by the degradation at the same stage. The test results on
two engineering datasets demonstrate that the proposed update method could
capture the effects of system degradation on system responses during the lifecycle.
|
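A minimal sketch of the update pipeline described above, assuming PyTorch: an autoencoder compresses the DT model parameters of each degradation stage into a latent feature, and an LSTM predicts the latent feature of a future stage, which can then be decoded back into a parameter vector. Dimensions and layer sizes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ParamAutoencoder(nn.Module):
    # Compresses the flattened DT model parameters of one degradation stage
    # into a low-dimensional latent feature and reconstructs them back.
    def __init__(self, n_params, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_params))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentDynamics(nn.Module):
    # LSTM that predicts the latent feature of the next degradation stage
    # from the sequence of latent features observed so far.
    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)
    def forward(self, z_seq):               # z_seq: (batch, stages, latent_dim)
        out, _ = self.lstm(z_seq)
        return self.head(out[:, -1])        # latent feature at the next stage
```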
2503.08956 | Francesco Marchiori | Francesco Marchiori, Mauro Conti | Leaky Batteries: A Novel Set of Side-Channel Attacks on Electric
Vehicles | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Advancements in battery technology have accelerated the adoption of Electric
Vehicles (EVs) due to their environmental benefits. However, their growing
sophistication introduces security and privacy challenges. Often seen as mere
operational data, battery consumption patterns can unintentionally reveal
critical information exploitable for malicious purposes. These risks go beyond
privacy, impacting vehicle security and regulatory compliance. Despite these
concerns, current research has largely overlooked the broader implications of
battery consumption data exposure. As EVs integrate further into smart
transportation networks, addressing these gaps is crucial to ensure their
safety, reliability, and resilience. In this work, we introduce a novel class
of side-channel attacks that exploit EV battery data to extract sensitive user
information. Leveraging only battery consumption patterns, we demonstrate a
methodology to accurately identify the EV driver and their driving style,
determine the number of occupants, and infer the vehicle's start and end
locations when user habits are known. We utilize several machine learning
models and feature extraction techniques to analyze EV power consumption
patterns, validating our approach on simulated and real-world datasets
collected from actual drivers. Our attacks achieve an average success rate of
95.4% across all attack objectives. Our findings highlight the privacy risks
associated with EV battery data, emphasizing the need for stronger protections
to safeguard user privacy and vehicle security.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 23:18:26 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Marchiori",
"Francesco",
""
],
[
"Conti",
"Mauro",
""
]
] | TITLE: Leaky Batteries: A Novel Set of Side-Channel Attacks on Electric
Vehicles
ABSTRACT: Advancements in battery technology have accelerated the adoption of Electric
Vehicles (EVs) due to their environmental benefits. However, their growing
sophistication introduces security and privacy challenges. Often seen as mere
operational data, battery consumption patterns can unintentionally reveal
critical information exploitable for malicious purposes. These risks go beyond
privacy, impacting vehicle security and regulatory compliance. Despite these
concerns, current research has largely overlooked the broader implications of
battery consumption data exposure. As EVs integrate further into smart
transportation networks, addressing these gaps is crucial to ensure their
safety, reliability, and resilience. In this work, we introduce a novel class
of side-channel attacks that exploit EV battery data to extract sensitive user
information. Leveraging only battery consumption patterns, we demonstrate a
methodology to accurately identify the EV driver and their driving style,
determine the number of occupants, and infer the vehicle's start and end
locations when user habits are known. We utilize several machine learning
models and feature extraction techniques to analyze EV power consumption
patterns, validating our approach on simulated and real-world datasets
collected from actual drivers. Our attacks achieve an average success rate of
95.4% across all attack objectives. Our findings highlight the privacy risks
associated with EV battery data, emphasizing the need for stronger protections
to safeguard user privacy and vehicle security.
|
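As an illustration of the kind of side-channel inference described above (not the authors' pipeline), the sketch below extracts simple statistics from a battery-consumption trace and trains a random-forest classifier to identify the driver; the feature set and model choice are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(consumption):
    # consumption: 1D array of battery power-draw samples for one trip
    return np.array([consumption.mean(), consumption.std(),
                     consumption.min(), consumption.max(),
                     np.percentile(consumption, 25),
                     np.percentile(consumption, 75),
                     np.abs(np.diff(consumption)).mean()])

def driver_identification_score(trips, labels):
    # trips: list of per-trip consumption traces; labels: driver identities
    X = np.vstack([extract_features(t) for t in trips])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, labels, cv=5).mean()
```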
2503.08960 | Jo\~ao Marques | Joao D.S. Marques and Arlindo L. Oliveira | Are ECGs enough? Deep learning classification of cardiac anomalies using
only electrocardiograms | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electrocardiography (ECG) is an essential tool for diagnosing multiple
cardiac anomalies: it provides valuable clinical insights, while being
affordable, fast and available in many settings. However, in the current
literature, the role of ECG analysis is often unclear: many approaches either
rely on additional imaging modalities, such as Computed Tomography Pulmonary
Angiography (CTPA), which may not always be available, or do not effectively
generalize across different classification problems. Furthermore, the
availability of public ECG datasets is limited and, in practice, these datasets
tend to be small, making it essential to optimize learning strategies. In this
study, we investigate the performance of multiple neural network architectures
in order to assess the impact of various approaches. Moreover, we check whether
these practices enhance model generalization when transfer learning is used to
translate information learned in larger ECG datasets, such as PTB-XL and
CPSC18, to a smaller, more challenging dataset for pulmonary embolism (PE)
detection. By leveraging transfer learning, we analyze the extent to which we
can improve learning efficiency and predictive performance on limited data.
Code available at
https://github.com/joaodsmarques/Are-ECGs-enough-Deep-Learning-Classifiers .
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 23:37:18 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Marques",
"Joao D. S.",
""
],
[
"Oliveira",
"Arlindo L.",
""
]
] | TITLE: Are ECGs enough? Deep learning classification of cardiac anomalies using
only electrocardiograms
ABSTRACT: Electrocardiography (ECG) is an essential tool for diagnosing multiple
cardiac anomalies: it provides valuable clinical insights, while being
affordable, fast and available in many settings. However, in the current
literature, the role of ECG analysis is often unclear: many approaches either
rely on additional imaging modalities, such as Computed Tomography Pulmonary
Angiography (CTPA), which may not always be available, or do not effectively
generalize across different classification problems. Furthermore, the
availability of public ECG datasets is limited and, in practice, these datasets
tend to be small, making it essential to optimize learning strategies. In this
study, we investigate the performance of multiple neural network architectures
in order to assess the impact of various approaches. Moreover, we check whether
these practices enhance model generalization when transfer learning is used to
translate information learned in larger ECG datasets, such as PTB-XL and
CPSC18, to a smaller, more challenging dataset for pulmonary embolism (PE)
detection. By leveraging transfer learning, we analyze the extent to which we
can improve learning efficiency and predictive performance on limited data.
Code available at
https://github.com/joaodsmarques/Are-ECGs-enough-Deep-Learning-Classifiers .
|
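A minimal PyTorch sketch of the transfer-learning setup described above: a small 1D CNN is assumed to have been pretrained on a large ECG corpus (e.g., PTB-XL), its backbone is optionally frozen, and a new head is attached for the smaller pulmonary-embolism task. The architecture is illustrative, not the one in the linked repository.

```python
import torch
import torch.nn as nn

class ECG1DCNN(nn.Module):
    def __init__(self, n_leads=12, n_classes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(64, n_classes)
    def forward(self, x):                    # x: (batch, leads, samples)
        return self.head(self.backbone(x))

def transfer_to_pe_detection(pretrained: ECG1DCNN, freeze_backbone=True):
    # Reuse the backbone pretrained on a large ECG dataset and attach a new
    # binary head for the small pulmonary-embolism dataset.
    if freeze_backbone:
        for p in pretrained.backbone.parameters():
            p.requires_grad = False
    pretrained.head = nn.Linear(64, 2)
    return pretrained
```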
2503.08973 | Jos\'e Cano | Idris Zakariyya, Ferheen Ayaz, Mounia Kharbouche-Harrari, Jeremy
Singer, Sye Loong Keoh, Danilo Pau, Jos\'e Cano | Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to
Adversarial Attacks | arXiv admin note: substantial text overlap with arXiv:2304.12829 | null | null | null | cs.LG cs.CR cs.PF | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reducing the memory footprint of Machine Learning (ML) models, especially
Deep Neural Networks (DNNs), is imperative to facilitate their deployment on
resource-constrained edge devices. However, a notable drawback of DNN models
lies in their susceptibility to adversarial attacks, wherein minor input
perturbations can deceive them. A primary challenge revolves around the
development of accurate, resilient, and compact DNN models suitable for
deployment on resource-constrained edge devices. This paper presents the
outcomes of a compact DNN model that exhibits resilience against both black-box
and white-box adversarial attacks. This work has achieved this resilience
through training with the QKeras quantization-aware training framework. The
study explores the potential of QKeras and an adversarial robustness technique,
Jacobian Regularization (JR), to co-optimize the DNN architecture through
per-layer JR methodology. As a result, this paper has devised a DNN model
employing this co-optimization strategy based on Stochastic Ternary
Quantization (STQ). Its performance was compared against existing DNN models in
the face of various white-box and black-box attacks. The experimental findings
revealed that the proposed DNN model had a small footprint and, on average, it
exhibited better performance than Quanos and DS-CNN MLCommons/TinyML (MLC/T)
benchmarks when challenged with white-box and black-box attacks, respectively,
on the CIFAR-10 image and Google Speech Commands audio datasets.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 00:34:25 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zakariyya",
"Idris",
""
],
[
"Ayaz",
"Ferheen",
""
],
[
"Kharbouche-Harrari",
"Mounia",
""
],
[
"Singer",
"Jeremy",
""
],
[
"Keoh",
"Sye Loong",
""
],
[
"Pau",
"Danilo",
""
],
[
"Cano",
"José",
""
]
] | TITLE: Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to
Adversarial Attacks
ABSTRACT: Reducing the memory footprint of Machine Learning (ML) models, especially
Deep Neural Networks (DNNs), is imperative to facilitate their deployment on
resource-constrained edge devices. However, a notable drawback of DNN models
lies in their susceptibility to adversarial attacks, wherein minor input
perturbations can deceive them. A primary challenge revolves around the
development of accurate, resilient, and compact DNN models suitable for
deployment on resource-constrained edge devices. This paper presents the
outcomes of a compact DNN model that exhibits resilience against both black-box
and white-box adversarial attacks. This work has achieved this resilience
through training with the QKeras quantization-aware training framework. The
study explores the potential of QKeras and an adversarial robustness technique,
Jacobian Regularization (JR), to co-optimize the DNN architecture through
per-layer JR methodology. As a result, this paper has devised a DNN model
employing this co-optimization strategy based on Stochastic Ternary
Quantization (STQ). Its performance was compared against existing DNN models in
the face of various white-box and black-box attacks. The experimental findings
revealed that the proposed DNN model had a small footprint and, on average, it
exhibited better performance than Quanos and DS-CNN MLCommons/TinyML (MLC/T)
benchmarks when challenged with white-box and black-box attacks, respectively,
on the CIFAR-10 image and Google Speech Commands audio datasets.
|
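To clarify the Jacobian Regularization (JR) idea mentioned above, here is a hedged PyTorch sketch that penalizes the input-output Jacobian of a classifier using random projections. The paper applies JR per layer inside a QKeras quantization-aware training loop; this sketch shows only the generic regularizer, and the penalty weight `lam` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def jacobian_penalty(model, x, n_proj=1):
    # Random-projection estimate of ||d logits / d x||_F^2:
    # for v ~ N(0, I), E[ ||v^T J||^2 ] equals the squared Frobenius norm.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    penalty = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(logits)
        (grads,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
        penalty = penalty + grads.pow(2).sum() / n_proj
    return penalty

def training_loss(model, x, y, lam=0.01):
    # Cross-entropy plus the Jacobian regularization term
    return F.cross_entropy(model(x), y) + lam * jacobian_penalty(model, x)
```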
2503.08974 | Yong Li | Yong Li, Yi Ren, Xuesong Niu, Yi Ding, Xiu-Shen Wei, Cuntai Guan | Beyond Overfitting: Doubly Adaptive Dropout for Generalizable AU
Detection | Accepted by IEEE Transactions on Affective Computing 2025. A novel
method for cross-domain facial action unit detection | IEEE Transactions on Affective Computing 2025 | 10.1109/TAFFC.2025.3545915 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Facial Action Units (AUs) are essential for conveying psychological states
and emotional expressions. While automatic AU detection systems leveraging deep
learning have progressed, they often overfit to specific datasets and
individual features, limiting their cross-domain applicability. To overcome
these limitations, we propose a doubly adaptive dropout approach for
cross-domain AU detection, which enhances the robustness of convolutional
feature maps and spatial tokens against domain shifts. This approach includes a
Channel Drop Unit (CD-Unit) and a Token Drop Unit (TD-Unit), which work
together to reduce domain-specific noise at both the channel and token levels.
The CD-Unit preserves domain-agnostic local patterns in feature maps, while the
TD-Unit helps the model identify AU relationships generalizable across domains.
An auxiliary domain classifier, integrated at each layer, guides the selective
omission of domain-sensitive features. To prevent excessive feature dropout, a
progressive training strategy is used, allowing for selective exclusion of
sensitive features at any model layer. Our method consistently outperforms
existing techniques in cross-domain AU detection, as demonstrated by extensive
experimental evaluations. Visualizations of attention maps also highlight clear
and meaningful patterns related to both individual and combined AUs, further
validating the approach's effectiveness.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 00:34:43 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Li",
"Yong",
""
],
[
"Ren",
"Yi",
""
],
[
"Niu",
"Xuesong",
""
],
[
"Ding",
"Yi",
""
],
[
"Wei",
"Xiu-Shen",
""
],
[
"Guan",
"Cuntai",
""
]
] | TITLE: Beyond Overfitting: Doubly Adaptive Dropout for Generalizable AU
Detection
ABSTRACT: Facial Action Units (AUs) are essential for conveying psychological states
and emotional expressions. While automatic AU detection systems leveraging deep
learning have progressed, they often overfit to specific datasets and
individual features, limiting their cross-domain applicability. To overcome
these limitations, we propose a doubly adaptive dropout approach for
cross-domain AU detection, which enhances the robustness of convolutional
feature maps and spatial tokens against domain shifts. This approach includes a
Channel Drop Unit (CD-Unit) and a Token Drop Unit (TD-Unit), which work
together to reduce domain-specific noise at both the channel and token levels.
The CD-Unit preserves domain-agnostic local patterns in feature maps, while the
TD-Unit helps the model identify AU relationships generalizable across domains.
An auxiliary domain classifier, integrated at each layer, guides the selective
omission of domain-sensitive features. To prevent excessive feature dropout, a
progressive training strategy is used, allowing for selective exclusion of
sensitive features at any model layer. Our method consistently outperforms
existing techniques in cross-domain AU detection, as demonstrated by extensive
experimental evaluations. Visualizations of attention maps also highlight clear
and meaningful patterns related to both individual and combined AUs, further
validating the approach's effectiveness.
|
2503.08976 | Leo Yu Zhang Dr. | Zirui Gong, Yanjun Zhang, Leo Yu Zhang, Zhaoxi Zhang, Yong Xiang, and
Shirui Pan | Not All Edges are Equally Robust: Evaluating the Robustness of
Ranking-Based Federated Learning | 18 pages. To appear in the IEEE Symposium on Security and Privacy
2025 | null | null | null | cs.LG cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Ranking Learning (FRL) is a state-of-the-art FL framework that
stands out for its communication efficiency and resilience to poisoning
attacks. It diverges from the traditional FL framework in two ways: 1) it
leverages discrete rankings instead of gradient updates, significantly reducing
communication costs and limiting the potential space for malicious updates, and
2) it uses majority voting on the server side to establish the global ranking,
ensuring that individual updates have minimal influence since each client
contributes only a single vote. These features enhance the system's scalability
and position FRL as a promising paradigm for FL training.
However, our analysis reveals that FRL is not inherently robust, as certain
edges are particularly vulnerable to poisoning attacks. Through a theoretical
investigation, we prove the existence of these vulnerable edges and establish a
lower bound and an upper bound for identifying them in each layer. Based on
this finding, we introduce a novel local model poisoning attack against FRL,
namely the Vulnerable Edge Manipulation (VEM) attack. The VEM attack focuses on
identifying and perturbing the most vulnerable edges in each layer and
leveraging an optimization-based approach to maximize the attack's impact.
Through extensive experiments on benchmark datasets, we demonstrate that our
attack achieves an overall 53.23% attack impact and is 3.7x more impactful than
existing methods. Our findings highlight significant vulnerabilities in
ranking-based FL systems and underline the urgency for the development of new
robust FL frameworks.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 00:38:14 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Gong",
"Zirui",
""
],
[
"Zhang",
"Yanjun",
""
],
[
"Zhang",
"Leo Yu",
""
],
[
"Zhang",
"Zhaoxi",
""
],
[
"Xiang",
"Yong",
""
],
[
"Pan",
"Shirui",
""
]
] | TITLE: Not All Edges are Equally Robust: Evaluating the Robustness of
Ranking-Based Federated Learning
ABSTRACT: Federated Ranking Learning (FRL) is a state-of-the-art FL framework that
stands out for its communication efficiency and resilience to poisoning
attacks. It diverges from the traditional FL framework in two ways: 1) it
leverages discrete rankings instead of gradient updates, significantly reducing
communication costs and limiting the potential space for malicious updates, and
2) it uses majority voting on the server side to establish the global ranking,
ensuring that individual updates have minimal influence since each client
contributes only a single vote. These features enhance the system's scalability
and position FRL as a promising paradigm for FL training.
However, our analysis reveals that FRL is not inherently robust, as certain
edges are particularly vulnerable to poisoning attacks. Through a theoretical
investigation, we prove the existence of these vulnerable edges and establish a
lower bound and an upper bound for identifying them in each layer. Based on
this finding, we introduce a novel local model poisoning attack against FRL,
namely the Vulnerable Edge Manipulation (VEM) attack. The VEM attack focuses on
identifying and perturbing the most vulnerable edges in each layer and
leveraging an optimization-based approach to maximize the attack's impact.
Through extensive experiments on benchmark datasets, we demonstrate that our
attack achieves an overall 53.23% attack impact and is 3.7x more impactful than
existing methods. Our findings highlight significant vulnerabilities in
ranking-based FL systems and underline the urgency for the development of new
robust FL frameworks.
|
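A minimal sketch of the server-side aggregation that FRL-style systems build on, under the assumption that each client submits an importance ranking over edges and the server keeps the top-k edges. The real FRL vote and the VEM attack are more involved; this is only to make the "majority voting on rankings" idea concrete.

```python
import numpy as np

def aggregate_rankings(client_rankings, k):
    # client_rankings[i, j]: importance rank client i assigns to edge j
    # (higher = more important). The server's vote is approximated by summing
    # ranks and keeping the k edges with the largest totals.
    totals = np.asarray(client_rankings).sum(axis=0)
    winners = np.argsort(totals)[-k:]
    supermask = np.zeros(totals.shape[0], dtype=bool)
    supermask[winners] = True
    return supermask          # which edges remain active in the global network
```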
2503.08979 | Mourad Gridach | Mourad Gridach, Jay Nanavati, Khaldoun Zine El Abidine, Lenon Mendes
and Christina Mack | Agentic AI for Scientific Discovery: A Survey of Progress, Challenges,
and Future Directions | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of Agentic AI into scientific discovery marks a new frontier
in research automation. These AI systems, capable of reasoning, planning, and
autonomous decision-making, are transforming how scientists perform literature
review, generate hypotheses, conduct experiments, and analyze results. This
survey provides a comprehensive overview of Agentic AI for scientific
discovery, categorizing existing systems and tools, and highlighting recent
progress across fields such as chemistry, biology, and materials science. We
discuss key evaluation metrics, implementation frameworks, and commonly used
datasets to offer a detailed understanding of the current state of the field.
Finally, we address critical challenges, such as literature review automation,
system reliability, and ethical concerns, while outlining future research
directions that emphasize human-AI collaboration and enhanced system
calibration.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 01:00:05 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Gridach",
"Mourad",
""
],
[
"Nanavati",
"Jay",
""
],
[
"Abidine",
"Khaldoun Zine El",
""
],
[
"Mendes",
"Lenon",
""
],
[
"Mack",
"Christina",
""
]
] | TITLE: Agentic AI for Scientific Discovery: A Survey of Progress, Challenges,
and Future Directions
ABSTRACT: The integration of Agentic AI into scientific discovery marks a new frontier
in research automation. These AI systems, capable of reasoning, planning, and
autonomous decision-making, are transforming how scientists perform literature
review, generate hypotheses, conduct experiments, and analyze results. This
survey provides a comprehensive overview of Agentic AI for scientific
discovery, categorizing existing systems and tools, and highlighting recent
progress across fields such as chemistry, biology, and materials science. We
discuss key evaluation metrics, implementation frameworks, and commonly used
datasets to offer a detailed understanding of the current state of the field.
Finally, we address critical challenges, such as literature review automation,
system reliability, and ethical concerns, while outlining future research
directions that emphasize human-AI collaboration and enhanced system
calibration.
|
2503.09008 | Huidong Liang | Huidong Liang, Haitz S\'aez de Oc\'ariz Borde, Baskaran
Sripathmanathan, Michael Bronstein, Xiaowen Dong | Towards Quantifying Long-Range Interactions in Graph Machine Learning: a
Large Graph Dataset and a Measurement | work in progress | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Long-range dependencies are critical for effective graph representation
learning, yet most existing datasets focus on small graphs tailored to
inductive tasks, offering limited insight into long-range interactions. Current
evaluations primarily compare models employing global attention (e.g., graph
transformers) with those using local neighborhood aggregation (e.g.,
message-passing neural networks) without a direct measurement of long-range
dependency. In this work, we introduce City-Networks, a novel large-scale
transductive learning dataset derived from real-world city roads. This dataset
features graphs with over $10^5$ nodes and significantly larger diameters than
those in existing benchmarks, naturally embodying long-range information. We
annotate the graphs using an eccentricity-based approach, ensuring that the
classification task inherently requires information from distant nodes.
Furthermore, we propose a model-agnostic measurement based on the Jacobians of
neighbors from distant hops, offering a principled quantification of long-range
dependencies. Finally, we provide theoretical justifications for both our
dataset design and the proposed measurement - particularly by focusing on
over-smoothing and influence score dilution - which establishes a robust
foundation for further exploration of long-range interactions in graph neural
networks.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 02:51:17 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Liang",
"Huidong",
""
],
[
"Borde",
"Haitz Sáez de Ocáriz",
""
],
[
"Sripathmanathan",
"Baskaran",
""
],
[
"Bronstein",
"Michael",
""
],
[
"Dong",
"Xiaowen",
""
]
] | TITLE: Towards Quantifying Long-Range Interactions in Graph Machine Learning: a
Large Graph Dataset and a Measurement
ABSTRACT: Long-range dependencies are critical for effective graph representation
learning, yet most existing datasets focus on small graphs tailored to
inductive tasks, offering limited insight into long-range interactions. Current
evaluations primarily compare models employing global attention (e.g., graph
transformers) with those using local neighborhood aggregation (e.g.,
message-passing neural networks) without a direct measurement of long-range
dependency. In this work, we introduce City-Networks, a novel large-scale
transductive learning dataset derived from real-world city roads. This dataset
features graphs with over $10^5$ nodes and significantly larger diameters than
those in existing benchmarks, naturally embodying long-range information. We
annotate the graphs using an eccentricity-based approach, ensuring that the
classification task inherently requires information from distant nodes.
Furthermore, we propose a model-agnostic measurement based on the Jacobians of
neighbors from distant hops, offering a principled quantification of long-range
dependencies. Finally, we provide theoretical justifications for both our
dataset design and the proposed measurement - particularly by focusing on
over-smoothing and influence score dilution - which establishes a robust
foundation for further exploration of long-range interactions in graph neural
networks.
|
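A small sketch of the eccentricity-based annotation idea described above, assuming NetworkX on a connected graph: each node's eccentricity is binned into classes, so predicting a label requires information from distant nodes. The paper necessarily uses a more scalable procedure for graphs with over 10^5 nodes.

```python
import networkx as nx
import numpy as np

def eccentricity_labels(G, n_classes=4):
    # Eccentricity = largest shortest-path distance from a node to any other
    # node; exact computation requires a connected graph and is expensive on
    # very large graphs.
    ecc = nx.eccentricity(G)
    values = np.array([ecc[v] for v in G.nodes()])
    bins = np.quantile(values, np.linspace(0, 1, n_classes + 1)[1:-1])
    return {v: int(np.digitize(ecc[v], bins)) for v in G.nodes()}
```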
2503.09011 | AmirMohammad Azadi | Amirmohammad Azadi, Sina Zamani, Mohammadmostafa Rostamkhani, Sauleh
Eetemadi | Word2winners at SemEval-2025 Task 7: Multilingual and Crosslingual
Fact-Checked Claim Retrieval | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper describes our system for SemEval 2025 Task 7: Previously
Fact-Checked Claim Retrieval. The task requires retrieving relevant fact-checks
for a given input claim from the extensive, multilingual MultiClaim dataset,
which comprises social media posts and fact-checks in several languages. To
address this challenge, we first evaluated zero-shot performance using
state-of-the-art English and multilingual retrieval models and then fine-tuned
the most promising systems, leveraging machine translation to enhance
crosslingual retrieval. Our best model achieved an accuracy of 85% on
crosslingual data and 92% on monolingual data.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 02:59:41 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Azadi",
"Amirmohammad",
""
],
[
"Zamani",
"Sina",
""
],
[
"Rostamkhani",
"Mohammadmostafa",
""
],
[
"Eetemadi",
"Sauleh",
""
]
] | TITLE: Word2winners at SemEval-2025 Task 7: Multilingual and Crosslingual
Fact-Checked Claim Retrieval
ABSTRACT: This paper describes our system for SemEval 2025 Task 7: Previously
Fact-Checked Claim Retrieval. The task requires retrieving relevant fact-checks
for a given input claim from the extensive, multilingual MultiClaim dataset,
which comprises social media posts and fact-checks in several languages. To
address this challenge, we first evaluated zero-shot performance using
state-of-the-art English and multilingual retrieval models and then fine-tuned
the most promising systems, leveraging machine translation to enhance
crosslingual retrieval. Our best model achieved an accuracy of 85% on
crosslingual data and 92% on monolingual data.
|
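A hedged sketch of the zero-shot retrieval baseline this kind of system starts from, using the sentence-transformers library; the embedding model named below is a common multilingual choice and an assumption, and the authors' fine-tuning and machine-translation steps are not shown.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed multilingual sentence-embedding model, not necessarily the paper's.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def retrieve_fact_checks(claim, fact_checks, top_k=10):
    # Embed the claim and all fact-checks, then rank by cosine similarity.
    claim_emb = model.encode(claim, convert_to_tensor=True)
    corpus_emb = model.encode(fact_checks, convert_to_tensor=True)
    hits = util.semantic_search(claim_emb, corpus_emb, top_k=top_k)[0]
    return [(fact_checks[h["corpus_id"]], float(h["score"])) for h in hits]
```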
2503.09013 | Rongxin Liao | Rongxin Liao, Feng Li, Yanyan Wei, Zenglin Shi, Le Zhang, Huihui Bai
and Meng Wang | Prompt to Restore, Restore to Prompt: Cyclic Prompting for Universal
Adverse Weather Removal | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Universal adverse weather removal (UAWR) seeks to address various weather
degradations within a unified framework. Recent methods are inspired by prompt
learning using pre-trained vision-language models (e.g., CLIP), leveraging
degradation-aware prompts to facilitate weather-free image restoration,
yielding significant improvements. In this work, we propose CyclicPrompt, an
innovative cyclic prompt approach designed to enhance the effectiveness,
adaptability, and generalizability of UAWR. CyclicPrompt comprises two key
components: 1) a composite context prompt that integrates weather-related
information and context-aware representations into the network to guide
restoration. This prompt differs from previous methods by marrying learnable
input-conditional vectors with weather-specific knowledge, thereby improving
adaptability across various degradations. 2) The erase-and-paste mechanism,
after the initial guided restoration, substitutes weather-specific knowledge
with constrained restoration priors, inducing high-quality weather-free
concepts into the composite prompt to further fine-tune the restoration
process. Therefore, we can form a cyclic "Prompt-Restore-Prompt" pipeline that
adeptly harnesses weather-specific knowledge, textual contexts, and reliable
textures. Extensive experiments on synthetic and real-world datasets validate
the superior performance of CyclicPrompt. The code is available at:
https://github.com/RongxinL/CyclicPrompt.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:03:06 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Liao",
"Rongxin",
""
],
[
"Li",
"Feng",
""
],
[
"Wei",
"Yanyan",
""
],
[
"Shi",
"Zenglin",
""
],
[
"Zhang",
"Le",
""
],
[
"Bai",
"Huihui",
""
],
[
"Wang",
"Meng",
""
]
] | TITLE: Prompt to Restore, Restore to Prompt: Cyclic Prompting for Universal
Adverse Weather Removal
ABSTRACT: Universal adverse weather removal (UAWR) seeks to address various weather
degradations within a unified framework. Recent methods are inspired by prompt
learning using pre-trained vision-language models (e.g., CLIP), leveraging
degradation-aware prompts to facilitate weather-free image restoration,
yielding significant improvements. In this work, we propose CyclicPrompt, an
innovative cyclic prompt approach designed to enhance the effectiveness,
adaptability, and generalizability of UAWR. CyclicPrompt comprises two key
components: 1) a composite context prompt that integrates weather-related
information and context-aware representations into the network to guide
restoration. This prompt differs from previous methods by marrying learnable
input-conditional vectors with weather-specific knowledge, thereby improving
adaptability across various degradations. 2) The erase-and-paste mechanism,
after the initial guided restoration, substitutes weather-specific knowledge
with constrained restoration priors, inducing high-quality weather-free
concepts into the composite prompt to further fine-tune the restoration
process. Therefore, we can form a cyclic "Prompt-Restore-Prompt" pipeline that
adeptly harnesses weather-specific knowledge, textual contexts, and reliable
textures. Extensive experiments on synthetic and real-world datasets validate
the superior performance of CyclicPrompt. The code is available at:
https://github.com/RongxinL/CyclicPrompt.
|
2503.09025 | Logan Barnhart | Logan Barnhart, Reza Akbarian Bafghi, Stephen Becker, Maziar Raissi | Aligning to What? Limits to RLHF Based Alignment | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Reinforcement Learning from Human Feedback (RLHF) is increasingly used to
align large language models (LLMs) with human preferences. However, the
effectiveness of RLHF in addressing underlying biases remains unclear. This
study investigates the relationship between RLHF and both covert and overt
biases in LLMs, particularly focusing on biases against African Americans. We
applied various RLHF techniques (DPO, ORPO, and RLOO) to Llama 3 8B and
evaluated the covert and overt biases of the resulting models using
matched-guise probing and explicit bias testing. We performed additional tests
with DPO on different base models and datasets; among several implications, we
found that SFT before RLHF calcifies model biases. Additionally, we extend the
tools for measuring biases to multi-modal models. Through our experiments we
collect evidence that indicates that current alignment techniques are
inadequate for nebulous tasks such as mitigating covert biases, highlighting
the need for capable datasets, data curating techniques, or alignment tools.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:24:44 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Barnhart",
"Logan",
""
],
[
"Bafghi",
"Reza Akbarian",
""
],
[
"Becker",
"Stephen",
""
],
[
"Raissi",
"Maziar",
""
]
] | TITLE: Aligning to What? Limits to RLHF Based Alignment
ABSTRACT: Reinforcement Learning from Human Feedback (RLHF) is increasingly used to
align large language models (LLMs) with human preferences. However, the
effectiveness of RLHF in addressing underlying biases remains unclear. This
study investigates the relationship between RLHF and both covert and overt
biases in LLMs, particularly focusing on biases against African Americans. We
applied various RLHF techniques (DPO, ORPO, and RLOO) to Llama 3 8B and
evaluated the covert and overt biases of the resulting models using
matched-guise probing and explicit bias testing. We performed additional tests
with DPO on different base models and datasets; among several implications, we
found that SFT before RLHF calcifies model biases. Additionally, we extend the
tools for measuring biases to multi-modal models. Through our experiments we
collect evidence that indicates that current alignment techniques are
inadequate for nebulous tasks such as mitigating covert biases, highlighting
the need for capable datasets, data curating techniques, or alignment tools.
|
2503.09030 | Kazuhiro Matsuyama | Kazuhiro Matsuyama, Usman Anjum, Satoko Matsuyama, Tetsuo Shoda,
Justin Zhan | Adaptive Temperature Based on Logits Correlation in Knowledge
Distillation | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge distillation is a technique for reproducing the performance of a
deep learning model in another, smaller model. It uses the outputs of one model
to train another model of comparable accuracy. These two distinct models are
similar to the way information is delivered in human society, with one acting
as the "teacher" and the other as the "student". Softmax compares the logits
generated by the two models by converting them into probability distributions.
It delivers the logits of the teacher to the student in compressed form through
a parameter named temperature. Tuning this variable reinforces the distillation
performance. Although this is the only parameter governing the interaction of
the logits, it is not clear how the temperature promotes information transfer.
In this paper, we propose a novel approach to calculate the temperature. Our
method refers only to the maximum logit generated by the teacher model, which
reduces computational time compared to state-of-the-art methods. Our method
shows promising results for different student and teacher models on a standard
benchmark dataset. Algorithms that use a temperature can obtain this
improvement by plugging in our dynamic approach. Furthermore, the approximation
of the distillation process converges to a correlation between the logits of
the two models. This reinforces the previous argument that distillation conveys
the relevance of the logits. We report that this approximating algorithm yields
a higher temperature than the commonly used static values in testing.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:41:31 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Matsuyama",
"Kazuhiro",
""
],
[
"Anjum",
"Usman",
""
],
[
"Matsuyama",
"Satoko",
""
],
[
"Shoda",
"Tetsuo",
""
],
[
"Zhan",
"Justin",
""
]
] | TITLE: Adaptive Temperature Based on Logits Correlation in Knowledge
Distillation
ABSTRACT: Knowledge distillation is a technique for reproducing the performance of a
deep learning model in another, smaller model. It uses the outputs of one model
to train another model of comparable accuracy. These two distinct models are
similar to the way information is delivered in human society, with one acting
as the "teacher" and the other as the "student". Softmax compares the logits
generated by the two models by converting them into probability distributions.
It delivers the logits of the teacher to the student in compressed form through
a parameter named temperature. Tuning this variable reinforces the distillation
performance. Although this is the only parameter governing the interaction of
the logits, it is not clear how the temperature promotes information transfer.
In this paper, we propose a novel approach to calculate the temperature. Our
method refers only to the maximum logit generated by the teacher model, which
reduces computational time compared to state-of-the-art methods. Our method
shows promising results for different student and teacher models on a standard
benchmark dataset. Algorithms that use a temperature can obtain this
improvement by plugging in our dynamic approach. Furthermore, the approximation
of the distillation process converges to a correlation between the logits of
the two models. This reinforces the previous argument that distillation conveys
the relevance of the logits. We report that this approximating algorithm yields
a higher temperature than the commonly used static values in testing.
|
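A hedged PyTorch sketch of a distillation loss whose temperature is derived from the teacher's maximum logit, as the abstract describes at a high level. The exact mapping from max logit to temperature (and the `beta` scale) is an illustrative assumption, not the paper's formula.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, beta=1.0):
    # Temperature set from the teacher's maximum logit (assumed mapping).
    T = 1.0 + beta * teacher_logits.detach().max().abs()
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # Standard KD: temperature-scaled KL term plus hard-label cross-entropy.
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * ce + (1 - alpha) * kd
```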
2503.09032 | Younwoo Choi Mr. | Younwoo Choi, Muhammad Adil Asif, Ziwen Han, John Willes, Rahul G.
Krishnan | Teaching LLMs How to Learn with Contextual Fine-Tuning | ICLR 2025 | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Prompting Large Language Models (LLMs), or providing context on the expected
model of operation, is an effective way to steer the outputs of such models to
satisfy human desiderata after they have been trained. But in rapidly evolving
domains, there is often a need to fine-tune LLMs to improve either the kind of
knowledge in their memory or their abilities to perform open ended reasoning in
new domains. When humans learn new concepts, we often do so by linking the new
material that we are studying to concepts we have already learned before. To
that end, we ask, "can prompting help us teach LLMs how to learn". In this
work, we study a novel generalization of instruction tuning, called contextual
fine-tuning, to fine-tune LLMs. Our method leverages instructional prompts
designed to mimic human cognitive strategies in learning and problem-solving to
guide the learning process during training, aiming to improve the model's
interpretation and understanding of domain-specific knowledge. We empirically
demonstrate that this simple yet effective modification improves the ability of
LLMs to be fine-tuned rapidly on new datasets both within the medical and
financial domains.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:45:53 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Choi",
"Younwoo",
""
],
[
"Asif",
"Muhammad Adil",
""
],
[
"Han",
"Ziwen",
""
],
[
"Willes",
"John",
""
],
[
"Krishnan",
"Rahul G.",
""
]
] | TITLE: Teaching LLMs How to Learn with Contextual Fine-Tuning
ABSTRACT: Prompting Large Language Models (LLMs), or providing context on the expected
model of operation, is an effective way to steer the outputs of such models to
satisfy human desiderata after they have been trained. But in rapidly evolving
domains, there is often a need to fine-tune LLMs to improve either the kind of
knowledge in their memory or their abilities to perform open ended reasoning in
new domains. When humans learn new concepts, we often do so by linking the new
material that we are studying to concepts we have already learned before. To
that end, we ask, "can prompting help us teach LLMs how to learn". In this
work, we study a novel generalization of instruction tuning, called contextual
fine-tuning, to fine-tune LLMs. Our method leverages instructional prompts
designed to mimic human cognitive strategies in learning and problem-solving to
guide the learning process during training, aiming to improve the model's
interpretation and understanding of domain-specific knowledge. We empirically
demonstrate that this simple yet effective modification improves the ability of
LLMs to be fine-tuned rapidly on new datasets both within the medical and
financial domains.
|
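A minimal sketch of what contextual fine-tuning data preparation could look like: an instructional prompt is prepended to each training document before standard next-token fine-tuning. The prompt texts below are invented placeholders; the paper designs prompts that mimic human cognitive strategies.

```python
import random

# Illustrative placeholder prompts; the paper's prompts are designed to mimic
# human learning and problem-solving strategies.
CONTEXTUAL_PROMPTS = [
    "Relate the following material to concepts you already know, then study it:",
    "Identify the key ideas in the following text and explain how they connect:",
]

def build_contextual_examples(documents):
    # Prepend a contextual prompt to each document; the resulting strings are
    # then used for ordinary next-token fine-tuning.
    return [f"{random.choice(CONTEXTUAL_PROMPTS)}\n\n{doc}" for doc in documents]
```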
2503.09040 | Xinyu Zhang | Xinyu Zhang, Haonan Chang, Yuhan Liu, Abdeslam Boularias | Motion Blender Gaussian Splatting for Dynamic Reconstruction | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Gaussian splatting has emerged as a powerful tool for high-fidelity
reconstruction of dynamic scenes. However, existing methods primarily rely on
implicit motion representations, such as encoding motions into neural networks
or per-Gaussian parameters, which makes it difficult to further manipulate the
reconstructed motions. This lack of explicit controllability limits existing
methods to replaying recorded motions only, which hinders a wider application.
To address this, we propose Motion Blender Gaussian Splatting (MB-GS), a novel
framework that uses motion graph as an explicit and sparse motion
representation. The motion of graph links is propagated to individual Gaussians
via dual quaternion skinning, with learnable weight painting functions
determining the influence of each link. The motion graphs and 3D Gaussians are
jointly optimized from input videos via differentiable rendering. Experiments
show that MB-GS achieves state-of-the-art performance on the iPhone dataset
while being competitive on HyperNeRF. Additionally, we demonstrate the
application potential of our method in generating novel object motions and
robot demonstrations through motion editing. Video demonstrations can be found
at https://mlzxy.github.io/mbgs.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:55:16 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhang",
"Xinyu",
""
],
[
"Chang",
"Haonan",
""
],
[
"Liu",
"Yuhan",
""
],
[
"Boularias",
"Abdeslam",
""
]
] | TITLE: Motion Blender Gaussian Splatting for Dynamic Reconstruction
ABSTRACT: Gaussian splatting has emerged as a powerful tool for high-fidelity
reconstruction of dynamic scenes. However, existing methods primarily rely on
implicit motion representations, such as encoding motions into neural networks
or per-Gaussian parameters, which makes it difficult to further manipulate the
reconstructed motions. This lack of explicit controllability limits existing
methods to replaying recorded motions only, which hinders a wider application.
To address this, we propose Motion Blender Gaussian Splatting (MB-GS), a novel
framework that uses a motion graph as an explicit and sparse motion
representation. The motion of graph links is propagated to individual Gaussians
via dual quaternion skinning, with learnable weight painting functions
determining the influence of each link. The motion graphs and 3D Gaussians are
jointly optimized from input videos via differentiable rendering. Experiments
show that MB-GS achieves state-of-the-art performance on the iPhone dataset
while being competitive on HyperNeRF. Additionally, we demonstrate the
application potential of our method in generating novel object motions and
robot demonstrations through motion editing. Video demonstrations can be found
at https://mlzxy.github.io/mbgs.
|
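To make the skinning step concrete, the sketch below blends per-link rigid transforms into point positions using linear blend skinning as a simplification; the paper uses dual quaternion skinning with learnable weight painting functions, which is not reproduced here, and all shapes are assumptions.

```python
import numpy as np

def blend_points(points, link_transforms, weights):
    # points: (N, 3) Gaussian centers; link_transforms: (L, 4, 4) rigid
    # transforms of the motion-graph links; weights: (N, L) influence of each
    # link on each point. Linear blend skinning stands in for the paper's
    # dual quaternion skinning.
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)    # (N, 4)
    per_link = np.einsum('lij,nj->nli', link_transforms, homo)[..., :3]   # (N, L, 3)
    w = weights / weights.sum(axis=1, keepdims=True)
    return np.einsum('nl,nli->ni', w, per_link)                           # (N, 3)
```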
2503.09041 | Muhammad Shahbaz Khan | Hafsa Wazir, Jawad Ahmad, Muazzam A. Khan, Sana Ullah Jan, Fadia Ali
Khan, Muhammad Shahbaz Khan | A Hybrid Neural Network with Smart Skip Connections for High-Precision,
Low-Latency EMG-Based Hand Gesture Recognition | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Electromyography (EMG) is extensively used in key biomedical areas, such as
prosthetics, and assistive and interactive technologies. This paper presents a
new hybrid neural network named ConSGruNet for precise and efficient hand
gesture recognition. The proposed model comprises convolutional neural networks
with smart skip connections in conjunction with a Gated Recurrent Unit (GRU).
The proposed model is trained on the complete Ninapro DB1 dataset. The proposed
model boasts an accuracy of 99.7\% in classifying 53 classes in just 25
milliseconds. In addition to being fast, the proposed model is lightweight with
just 3,946 KB in size. Moreover, the proposed model has also been evaluated for
the reliability parameters, i.e., Cohen's kappa coefficient, Matthew's
correlation coefficient, and confidence intervals. The close-to-ideal results
of these parameters validate the model's performance on unseen data.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 04:01:32 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Wazir",
"Hafsa",
""
],
[
"Ahmad",
"Jawad",
""
],
[
"Khan",
"Muazzam A.",
""
],
[
"Jan",
"Sana Ullah",
""
],
[
"Khan",
"Fadia Ali",
""
],
[
"Khan",
"Muhammad Shahbaz",
""
]
] | TITLE: A Hybrid Neural Network with Smart Skip Connections for High-Precision,
Low-Latency EMG-Based Hand Gesture Recognition
ABSTRACT: Electromyography (EMG) is extensively used in key biomedical areas, such as
prosthetics, and assistive and interactive technologies. This paper presents a
new hybrid neural network named ConSGruNet for precise and efficient hand
gesture recognition. The proposed model comprises convolutional neural networks
with smart skip connections in conjunction with a Gated Recurrent Unit (GRU).
The proposed model is trained on the complete Ninapro DB1 dataset. The proposed
model boasts an accuracy of 99.7\% in classifying 53 classes in just 25
milliseconds. In addition to being fast, the proposed model is lightweight with
just 3,946 KB in size. Moreover, the proposed model has also been evaluated for
the reliability parameters, i.e., Cohen's kappa coefficient, Matthew's
correlation coefficient, and confidence intervals. The close to ideal results
of these parameters validate the models performance on unseen data.
|
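A rough PyTorch sketch of a CNN + GRU gesture classifier with a residual skip connection, in the spirit of the architecture described above; ConSGruNet's exact layers, "smart" skip connections, and sizes are not reproduced, and the 10-channel input and 53 classes are assumptions based on the Ninapro DB1 description.

```python
import torch
import torch.nn as nn

class CNNGRUGesture(nn.Module):
    def __init__(self, n_channels=10, n_classes=53, hidden=64):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv1d(n_channels, 32, 5, padding=2),
                                   nn.BatchNorm1d(32), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv1d(32, 32, 5, padding=2),
                                   nn.BatchNorm1d(32), nn.ReLU())
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        h1 = self.conv1(x)
        h2 = self.conv2(h1) + h1               # skip connection
        out, _ = self.gru(h2.transpose(1, 2))  # (batch, time, features)
        return self.fc(out[:, -1])             # classify from the last time step
```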
2503.09051 | Shengyao Lu | Shengyao Lu, Jiuding Yang, Baochun Li, Di Niu | TreeX: Generating Global Graphical GNN Explanations via Critical Subtree
Extraction | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The growing demand for transparency and interpretability in critical domains
has driven increased interest in comprehending the explainability of
Message-Passing (MP) Graph Neural Networks (GNNs). Although substantial
research efforts have been made to generate explanations for individual graph
instances, identifying global explaining concepts for a GNN still poses great
challenges, especially when concepts are desired in a graphical form on the
dataset level. While most prior works treat GNNs as black boxes, in this paper,
we propose to unbox GNNs by analyzing and extracting critical subtrees incurred
by the inner workings of message passing, which correspond to critical
subgraphs in the datasets. By aggregating subtrees in an embedding space with
an efficient algorithm, which does not require complex subgraph matching or
search, we can make intuitive graphical explanations for Message-Passing GNNs
on local, class and global levels. We empirically show that our proposed
approach not only generates clean subgraph concepts on a dataset level in
contrast to existing global explaining methods which generate non-graphical
rules (e.g., language or embeddings) as explanations, but it is also capable of
providing explanations for individual instances with comparable or even
superior performance compared to leading local-level GNN explainers.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 04:36:28 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Lu",
"Shengyao",
""
],
[
"Yang",
"Jiuding",
""
],
[
"Li",
"Baochun",
""
],
[
"Niu",
"Di",
""
]
] | TITLE: TreeX: Generating Global Graphical GNN Explanations via Critical Subtree
Extraction
ABSTRACT: The growing demand for transparency and interpretability in critical domains
has driven increased interest in comprehending the explainability of
Message-Passing (MP) Graph Neural Networks (GNNs). Although substantial
research efforts have been made to generate explanations for individual graph
instances, identifying global explaining concepts for a GNN still poses great
challenges, especially when concepts are desired in a graphical form on the
dataset level. While most prior works treat GNNs as black boxes, in this paper,
we propose to unbox GNNs by analyzing and extracting critical subtrees incurred
by the inner workings of message passing, which correspond to critical
subgraphs in the datasets. By aggregating subtrees in an embedding space with
an efficient algorithm, which does not require complex subgraph matching or
search, we can make intuitive graphical explanations for Message-Passing GNNs
on local, class and global levels. We empirically show that our proposed
approach not only generates clean subgraph concepts on a dataset level in
contrast to existing global explaining methods which generate non-graphical
rules (e.g., language or embeddings) as explanations, but it is also capable of
providing explanations for individual instances with comparable or even
superior performance compared to leading local-level GNN explainers.
|
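The TreeX entry above (2503.09051) hinges on aggregating message-passing subtree embeddings in an embedding space rather than doing subgraph matching. A rough illustration of that aggregation step follows, assuming the rooted-subtree embeddings are simply the per-node GNN embeddings and using k-means as the clustering rule; the paper's actual algorithm and hyperparameters are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Assumed inputs: one embedding per node, i.e. per rooted subtree induced by
    # L rounds of message passing, pooled across all graphs in the dataset.
    node_embeddings = rng.normal(size=(5000, 64))      # 5000 subtrees, 64-dim
    graph_ids = rng.integers(0, 200, size=5000)        # which graph each subtree came from

    # Cluster subtree embeddings: each centroid acts as a dataset-level "concept".
    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(node_embeddings)
    concepts = kmeans.labels_

    # Local view: describe a single graph by how often each concept occurs in it.
    def concept_histogram(graph_id):
        mask = graph_ids == graph_id
        return np.bincount(concepts[mask], minlength=10) / max(mask.sum(), 1)

    print(concept_histogram(0))   # concept mixture for graph 0
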
2503.09068 | Sehyun Lee | Youngju Joung, Sehyun Lee, Jaesik Choi | Probing Network Decisions: Capturing Uncertainties and Unveiling
Vulnerabilities Without Label Information | ICPRAI 2024 | Pattern Recognition and Artificial Intelligence. ICPRAI 2024.
Lecture Notes in Computer Science, vol 14892, 2025, p.308-p.321 | 10.1007/978-981-97-8702-9_21 | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by/4.0/ | To improve trust and transparency, it is crucial to be able to interpret the
decisions of Deep Neural Network classifiers (DNNs). Instance-level examinations, such
as attribution techniques, are commonly employed to interpret the model
decisions. However, when interpreting misclassified decisions, human
intervention may be required. Analyzing the attributions across each class
within one instance can be particularly labor-intensive and influenced by the
bias of the human interpreter. In this paper, we present a novel framework to
uncover the weakness of the classifier via counterfactual examples. A prober is
introduced to learn the correctness of the classifier's decision in terms of a
binary code: hit or miss. It enables the creation of the counterfactual example
concerning the prober's decision. We test the performance of our prober's
misclassification detection and verify its effectiveness on the image
classification benchmark datasets. Furthermore, by generating counterfactuals
that penetrate the prober, we demonstrate that our framework effectively
identifies vulnerabilities in the target classifier without relying on label
information on the MNIST dataset.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 05:05:58 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Joung",
"Youngju",
""
],
[
"Lee",
"Sehyun",
""
],
[
"Choi",
"Jaesik",
""
]
] | TITLE: Probing Network Decisions: Capturing Uncertainties and Unveiling
Vulnerabilities Without Label Information
ABSTRACT: To improve trust and transparency, it is crucial to be able to interpret the
decisions of Deep Neural Network classifiers (DNNs). Instance-level examinations, such
as attribution techniques, are commonly employed to interpret the model
decisions. However, when interpreting misclassified decisions, human
intervention may be required. Analyzing the attributions across each class
within one instance can be particularly labor-intensive and influenced by the
bias of the human interpreter. In this paper, we present a novel framework to
uncover the weakness of the classifier via counterfactual examples. A prober is
introduced to learn the correctness of the classifier's decision in terms of a
binary code: hit or miss. It enables the creation of the counterfactual example
concerning the prober's decision. We test the performance of our prober's
misclassification detection and verify its effectiveness on the image
classification benchmark datasets. Furthermore, by generating counterfactuals
that penetrate the prober, we demonstrate that our framework effectively
identifies vulnerabilities in the target classifier without relying on label
information on the MNIST dataset.
|
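For the prober entry above (2503.09068), the key mechanism is a small auxiliary model trained to predict whether the frozen classifier will be correct (hit) or not (miss) on each input. A toy sketch of that idea follows, using a logistic-regression prober on the classifier's penultimate features; the feature choice, prober architecture, and synthetic data are assumptions, not the paper's setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Assumed setup: a frozen classifier has already produced, for a held-out set,
    # its penultimate-layer features, its predicted labels, and the true labels.
    features = rng.normal(size=(2000, 32))
    true_labels = rng.integers(0, 10, size=2000)
    pred_labels = np.where(rng.random(2000) < 0.85, true_labels,
                           rng.integers(0, 10, size=2000))   # ~85% accurate classifier

    # The prober's target is the binary hit/miss code, not the class label itself.
    hit = (pred_labels == true_labels).astype(int)

    prober = LogisticRegression(max_iter=1000).fit(features, hit)

    # At probe time, likely misclassifications are flagged from features alone,
    # without access to ground-truth labels.
    miss_probability = 1.0 - prober.predict_proba(features[:5])[:, 1]
    print(miss_probability.round(3))
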
2503.09077 | Huadong Pang | Yu Peng, Guoqing Zhang, Huadong Pang | Impact of Short-Duration Aerobic Exercise Intensity on Executive
Function and Sleep | 14 pages | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | IoT-based devices and wearable sensors are now common in daily life, with
smartwatches, smartphones, and other digital tools tracking physical activity
and health data. This lifelogging process provides valuable insights into
people's lives. This paper analyzes a publicly available lifelog dataset of 14
individuals to explore how exercise affects mood and, in turn, executive
function. Results show that moderate physical activity significantly improves
mood, reduces stress, and enhances cognitive functions like decision-making and
focus. Improved mood not only boosts exercise performance but also strengthens
executive function, suggesting exercise benefits both emotional and cognitive
well-being. This opens the door for personalized exercise plans tailored to
emotional states to optimize brain function.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 05:20:16 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Peng",
"Yu",
""
],
[
"Zhang",
"Guoqing",
""
],
[
"Pang",
"Huadong",
""
]
] | TITLE: Impact of Short-Duration Aerobic Exercise Intensity on Executive
Function and Sleep
ABSTRACT: IoT-based devices and wearable sensors are now common in daily life, with
smartwatches, smartphones, and other digital tools tracking physical activity
and health data. This lifelogging process provides valuable insights into
people's lives. This paper analyzes a publicly available lifelog dataset of 14
individuals to explore how exercise affects mood and, in turn, executive
function. Results show that moderate physical activity significantly improves
mood, reduces stress, and enhances cognitive functions like decision-making and
focus. Improved mood not only boosts exercise performance but also strengthens
executive function, suggesting exercise benefits both emotional and cognitive
well-being. This opens the door for personalized exercise plans tailored to
emotional states to optimize brain function.
|
2503.09081 | Xiaowei Bi | Xiaowei Bi, Zheyuan Xu | Everything Can Be Described in Words: A Simple Unified Multi-Modal
Framework with Semantic and Temporal Alignment | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Long Video Question Answering (LVQA) is challenging due to the need for
temporal reasoning and large-scale multimodal data processing. Existing methods
struggle with retrieving cross-modal information from long videos, especially
when relevant details are sparsely distributed. We introduce UMaT (Unified
Multi-modal as Text), a retrieval-augmented generation (RAG) framework that
efficiently processes extremely long videos while maintaining cross-modal
coherence. UMaT converts visual and auditory data into a unified textual
representation, ensuring semantic and temporal alignment. Short video clips are
analyzed using a vision-language model, while automatic speech recognition
(ASR) transcribes dialogue. These text-based representations are structured
into temporally aligned segments, with adaptive filtering to remove redundancy
and retain salient details. The processed data is embedded into a vector
database, enabling precise retrieval of dispersed yet relevant content.
Experiments on a benchmark LVQA dataset show that UMaT outperforms existing
methods in multimodal integration, long-form video understanding, and sparse
information retrieval. Its scalability and interpretability allow it to process
videos over an hour long while maintaining semantic and temporal coherence.
These findings underscore the importance of structured retrieval and multimodal
synchronization for advancing LVQA and long-form AI systems.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 05:28:24 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Bi",
"Xiaowei",
""
],
[
"Xu",
"Zheyuan",
""
]
] | TITLE: Everything Can Be Described in Words: A Simple Unified Multi-Modal
Framework with Semantic and Temporal Alignment
ABSTRACT: Long Video Question Answering (LVQA) is challenging due to the need for
temporal reasoning and large-scale multimodal data processing. Existing methods
struggle with retrieving cross-modal information from long videos, especially
when relevant details are sparsely distributed. We introduce UMaT (Unified
Multi-modal as Text), a retrieval-augmented generation (RAG) framework that
efficiently processes extremely long videos while maintaining cross-modal
coherence. UMaT converts visual and auditory data into a unified textual
representation, ensuring semantic and temporal alignment. Short video clips are
analyzed using a vision-language model, while automatic speech recognition
(ASR) transcribes dialogue. These text-based representations are structured
into temporally aligned segments, with adaptive filtering to remove redundancy
and retain salient details. The processed data is embedded into a vector
database, enabling precise retrieval of dispersed yet relevant content.
Experiments on a benchmark LVQA dataset show that UMaT outperforms existing
methods in multimodal integration, long-form video understanding, and sparse
information retrieval. Its scalability and interpretability allow it to process
videos over an hour long while maintaining semantic and temporal coherence.
These findings underscore the importance of structured retrieval and multimodal
synchronization for advancing LVQA and long-form AI systems.
|
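The UMaT entry above (2503.09081) describes a standard RAG pattern: clip-level captions and ASR transcripts become time-stamped text segments that are embedded, indexed, and retrieved per question. A skeletal sketch of that retrieval stage is shown below, using TF-IDF vectors as a stand-in for the unspecified embedding model and vector database.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Assumed upstream output: temporally aligned text segments, one per short clip,
    # combining a vision-language caption and the ASR transcript for that window.
    segments = [
        "[00:01:00] caption: a chef dices onions; asr: 'first we prep the vegetables'",
        "[00:14:30] caption: pan on the stove; asr: 'keep the heat on medium'",
        "[00:52:10] caption: plating the dish; asr: 'garnish with parsley before serving'",
    ]

    vectorizer = TfidfVectorizer().fit(segments)
    index = vectorizer.transform(segments)          # stand-in for a vector database

    def retrieve(question, k=2):
        scores = cosine_similarity(vectorizer.transform([question]), index)[0]
        top = scores.argsort()[::-1][:k]
        return [segments[i] for i in top]           # passed to the answering LLM as context

    print(retrieve("When is the dish garnished?"))
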
2503.09094 | Kimiaki Shirahama | Zihao Chen, Hisashi Handa, Miho Ohsaki and Kimiaki Shirahama | Domain Adaptation for Japanese Sentence Embeddings with Contrastive
Learning based on Synthetic Sentence Generation | 39 pages, 7 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several backbone models pre-trained on general domain datasets can encode a
sentence into a widely useful embedding. Such sentence embeddings can be
further enhanced by domain adaptation that adapts a backbone model to a
specific domain. However, domain adaptation for low-resource languages like
Japanese is often difficult due to the scarcity of large-scale labeled
datasets. To overcome this, this paper introduces SDJC (Self-supervised Domain
adaptation for Japanese sentence embeddings with Contrastive learning) that
utilizes a data generator to generate sentences, which have the same syntactic
structure as a sentence in an unlabeled specific-domain corpus but convey
different semantic meanings. Generated sentences are then used to boost
contrastive learning that adapts a backbone model to accurately discriminate
sentences in the specific domain. In addition, the components of SDJC like a
backbone model and a method to adapt it need to be carefully selected, but no
benchmark dataset is available for Japanese. Thus, a comprehensive Japanese STS
(Semantic Textual Similarity) benchmark dataset is constructed by combining
datasets machine-translated from English with existing datasets. The
experimental results validate the effectiveness of SDJC on two domain-specific
downstream tasks as well as the usefulness of the constructed dataset.
Datasets, code, and backbone models adapted by SDJC are available in our GitHub
repository https://github.com/ccilab-doshisha/SDJC.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 06:15:33 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Chen",
"Zihao",
""
],
[
"Handa",
"Hisashi",
""
],
[
"Ohsaki",
"Miho",
""
],
[
"Shirahama",
"Kimiaki",
""
]
] | TITLE: Domain Adaptation for Japanese Sentence Embeddings with Contrastive
Learning based on Synthetic Sentence Generation
ABSTRACT: Several backbone models pre-trained on general domain datasets can encode a
sentence into a widely useful embedding. Such sentence embeddings can be
further enhanced by domain adaptation that adapts a backbone model to a
specific domain. However, domain adaptation for low-resource languages like
Japanese is often difficult due to the scarcity of large-scale labeled
datasets. To overcome this, this paper introduces SDJC (Self-supervised Domain
adaptation for Japanese sentence embeddings with Contrastive learning) that
utilizes a data generator to generate sentences, which have the same syntactic
structure as a sentence in an unlabeled specific-domain corpus but convey
different semantic meanings. Generated sentences are then used to boost
contrastive learning that adapts a backbone model to accurately discriminate
sentences in the specific domain. In addition, the components of SDJC like a
backbone model and a method to adapt it need to be carefully selected, but no
benchmark dataset is available for Japanese. Thus, a comprehensive Japanese STS
(Semantic Textual Similarity) benchmark dataset is constructed by combining
datasets machine-translated from English with existing datasets. The
experimental results validate the effectiveness of SDJC on two domain-specific
downstream tasks as well as the usefulness of the constructed dataset.
Datasets, code, and backbone models adapted by SDJC are available in our GitHub
repository https://github.com/ccilab-doshisha/SDJC.
|
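For the SDJC entry above (2503.09094), the training signal is contrastive: a domain sentence should sit close to its positive view and far from generated sentences that share its syntax but not its meaning. A hedged sketch of an InfoNCE-style loss over such triples follows; the encoder, generator, and temperature value are placeholders rather than the paper's actual components.

    import torch
    import torch.nn.functional as F

    def info_nce(anchor, positive, negatives, temperature=0.05):
        """anchor/positive: (B, d); negatives: (B, K, d). The hard negatives play the
        role of synthetically generated same-syntax, different-meaning sentences."""
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)

        pos_sim = (anchor * positive).sum(-1, keepdim=True)          # (B, 1)
        neg_sim = torch.einsum("bd,bkd->bk", anchor, negatives)      # (B, K)
        logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
        labels = torch.zeros(anchor.size(0), dtype=torch.long)       # positive is index 0
        return F.cross_entropy(logits, labels)

    B, K, d = 16, 4, 768
    loss = info_nce(torch.randn(B, d), torch.randn(B, d), torch.randn(B, K, d))
    print(float(loss))
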
2503.09097 | Sehwan Kim | Sehwan Kim, Rui Wang, Wenbin Lu | Self-Consistent Equation-guided Neural Networks for Censored
Time-to-Event Data | null | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In survival analysis, estimating the conditional survival function given
predictors is often of interest. There is a growing trend in the development of
deep learning methods for analyzing censored time-to-event data, especially
when dealing with high-dimensional predictors that are complexly interrelated.
Many existing deep learning approaches for estimating the conditional survival
functions extend the Cox regression models by replacing the linear function of
predictor effects with a shallow feed-forward neural network while maintaining
the proportional hazards assumption. Their implementation can be
computationally intensive due to the use of the full dataset at each iteration
because the use of batch data may distort the at-risk set of the partial
likelihood function. To overcome these limitations, we propose a novel deep
learning approach to non-parametric estimation of the conditional survival
functions using generative adversarial networks leveraging self-consistent
equations. The proposed method is model-free and does not require any
parametric assumptions on the structure of the conditional survival function.
We establish the convergence rate of our proposed estimator of the conditional
survival function. In addition, we evaluate the performance of the proposed
method through simulation studies and demonstrate its application on a
real-world dataset.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 06:24:35 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Kim",
"Sehwan",
""
],
[
"Wang",
"Rui",
""
],
[
"Lu",
"Wenbin",
""
]
] | TITLE: Self-Consistent Equation-guided Neural Networks for Censored
Time-to-Event Data
ABSTRACT: In survival analysis, estimating the conditional survival function given
predictors is often of interest. There is a growing trend in the development of
deep learning methods for analyzing censored time-to-event data, especially
when dealing with high-dimensional predictors that are complexly interrelated.
Many existing deep learning approaches for estimating the conditional survival
functions extend the Cox regression models by replacing the linear function of
predictor effects with a shallow feed-forward neural network while maintaining
the proportional hazards assumption. Their implementation can be
computationally intensive due to the use of the full dataset at each iteration
because the use of batch data may distort the at-risk set of the partial
likelihood function. To overcome these limitations, we propose a novel deep
learning approach to non-parametric estimation of the conditional survival
functions using generative adversarial networks leveraging self-consistent
equations. The proposed method is model-free and does not require any
parametric assumptions on the structure of the conditional survival function.
We establish the convergence rate of our proposed estimator of the conditional
survival function. In addition, we evaluate the performance of the proposed
method through simulation studies and demonstrate its application on a
real-world dataset.
|
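The entry above (2503.09097) motivates its estimator partly by the fact that the Cox partial likelihood couples every subject through the at-risk set, so mini-batching distorts it. The small numpy illustration below computes that full-data negative log partial likelihood with linear risk scores; it is background for the stated limitation, not the paper's method, and the simulated data are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 200, 5
    X = rng.normal(size=(n, p))
    beta = np.array([0.5, -0.3, 0.0, 0.2, 0.1])
    times = rng.exponential(scale=np.exp(-X @ beta))   # event/censoring times
    event = rng.random(n) < 0.7                        # True = event observed, False = censored

    def neg_log_partial_likelihood(beta, X, times, event):
        """Cox negative log partial likelihood over the FULL sample: each event's
        denominator sums the risk of everyone still at risk at that event time."""
        risk = X @ beta
        nll = 0.0
        for i in np.flatnonzero(event):
            at_risk = times >= times[i]                # the at-risk set R(t_i)
            nll -= risk[i] - np.log(np.exp(risk[at_risk]).sum())
        return nll

    print(neg_log_partial_likelihood(beta, X, times, event))
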
2503.09098 | Pei-Sze Tan | Pei-Sze Tan, Sailaja Rajanala, Arghya Pal, Rapha\"el C.-W. Phan,
Huey-Fang Ong | Causal-Ex: Causal Graph-based Micro and Macro Expression Spotting | 7 pages, 6 figures. The paper is under consideration at Pattern
Recognition Letters | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Detecting concealed emotions within apparently normal expressions is crucial
for identifying potential mental health issues and facilitating timely support
and intervention. The task of spotting macro and micro-expressions involves
predicting the emotional timeline within a video, accomplished by identifying
the onset, apex, and offset frames of the displayed emotions. Utilizing
foundational facial muscle movement cues, known as facial action units, boosts
the accuracy. However, an overlooked challenge from previous research lies in
the inadvertent integration of biases into the training model. These biases
arising from datasets can spuriously link certain action unit movements to
particular emotion classes. We tackle this issue by a novel replacement of action
unit adjacency information with action unit causal graphs. This approach
aims to identify and eliminate undesired spurious connections, retaining only
unbiased information for classification. Our model, named Causal-Ex
(Causal-based Expression spotting), employs a rapid causal inference algorithm
to construct a causal graph of facial action units. This enables us to select
causally relevant facial action units. Our work demonstrates improvement in
overall F1-scores compared to state-of-the-art approaches with 0.388 on
CAS(ME)^2 and 0.3701 on SAMM-Long Video datasets.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 06:26:06 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Tan",
"Pei-Sze",
""
],
[
"Rajanala",
"Sailaja",
""
],
[
"Pal",
"Arghya",
""
],
[
"Phan",
"Raphaël C. -W.",
""
],
[
"Ong",
"Huey-Fang",
""
]
] | TITLE: Causal-Ex: Causal Graph-based Micro and Macro Expression Spotting
ABSTRACT: Detecting concealed emotions within apparently normal expressions is crucial
for identifying potential mental health issues and facilitating timely support
and intervention. The task of spotting macro and micro-expressions involves
predicting the emotional timeline within a video, accomplished by identifying
the onset, apex, and offset frames of the displayed emotions. Utilizing
foundational facial muscle movement cues, known as facial action units, boosts
the accuracy. However, an overlooked challenge from previous research lies in
the inadvertent integration of biases into the training model. These biases
arising from datasets can spuriously link certain action unit movements to
particular emotion classes. We tackle this issue by a novel replacement of action
unit adjacency information with action unit causal graphs. This approach
aims to identify and eliminate undesired spurious connections, retaining only
unbiased information for classification. Our model, named Causal-Ex
(Causal-based Expression spotting), employs a rapid causal inference algorithm
to construct a causal graph of facial action units. This enables us to select
causally relevant facial action units. Our work demonstrates improvement in
overall F1-scores compared to state-of-the-art approaches with 0.388 on
CAS(ME)^2 and 0.3701 on SAMM-Long Video datasets.
|
2503.09103 | Usman Naseem | Syed Talal Ahmad, Haohui Lu, Sidong Liu, Annie Lau, Amin Beheshti,
Mark Dras, Usman Naseem | VaxGuard: A Multi-Generator, Multi-Type, and Multi-Role Dataset for
Detecting LLM-Generated Vaccine Misinformation | Preprint | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Large Language Models (LLMs) have significantly
improved text generation capabilities. However, they also present challenges,
particularly in generating vaccine-related misinformation, which poses risks to
public health. Despite research on human-authored misinformation, a notable gap
remains in understanding how LLMs contribute to vaccine misinformation and how
best to detect it. Existing benchmarks often overlook vaccine-specific
misinformation and the diverse roles of misinformation spreaders. This paper
introduces VaxGuard, a novel dataset designed to address these challenges.
VaxGuard includes vaccine-related misinformation generated by multiple LLMs and
provides a comprehensive framework for detecting misinformation across various
roles. Our findings show that GPT-3.5 and GPT-4o consistently outperform other
LLMs in detecting misinformation, especially when dealing with subtle or
emotionally charged narratives. On the other hand, PHI3 and Mistral show lower
performance, struggling with precision and recall in fear-driven contexts.
Additionally, detection performance tends to decline as input text length
increases, indicating the need for improved methods to handle larger content.
These results highlight the importance of role-specific detection strategies
and suggest that VaxGuard can serve as a key resource for improving the
detection of LLM-generated vaccine misinformation.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 06:43:25 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Ahmad",
"Syed Talal",
""
],
[
"Lu",
"Haohui",
""
],
[
"Liu",
"Sidong",
""
],
[
"Lau",
"Annie",
""
],
[
"Beheshti",
"Amin",
""
],
[
"Dras",
"Mark",
""
],
[
"Naseem",
"Usman",
""
]
] | TITLE: VaxGuard: A Multi-Generator, Multi-Type, and Multi-Role Dataset for
Detecting LLM-Generated Vaccine Misinformation
ABSTRACT: Recent advancements in Large Language Models (LLMs) have significantly
improved text generation capabilities. However, they also present challenges,
particularly in generating vaccine-related misinformation, which poses risks to
public health. Despite research on human-authored misinformation, a notable gap
remains in understanding how LLMs contribute to vaccine misinformation and how
best to detect it. Existing benchmarks often overlook vaccine-specific
misinformation and the diverse roles of misinformation spreaders. This paper
introduces VaxGuard, a novel dataset designed to address these challenges.
VaxGuard includes vaccine-related misinformation generated by multiple LLMs and
provides a comprehensive framework for detecting misinformation across various
roles. Our findings show that GPT-3.5 and GPT-4o consistently outperform other
LLMs in detecting misinformation, especially when dealing with subtle or
emotionally charged narratives. On the other hand, PHI3 and Mistral show lower
performance, struggling with precision and recall in fear-driven contexts.
Additionally, detection performance tends to decline as input text length
increases, indicating the need for improved methods to handle larger content.
These results highlight the importance of role-specific detection strategies
and suggest that VaxGuard can serve as a key resource for improving the
detection of LLM-generated vaccine misinformation.
|
2503.09106 | Chuyu Zhang | Chuyu Zhang and Xueyang Yu and Peiyan Gu and Xuming He | Freeze and Cluster: A Simple Baseline for Rehearsal-Free Continual
Category Discovery | Under review | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the problem of Rehearsal-Free Continual Category
Discovery (RF-CCD), which focuses on continuously identifying novel classes by
leveraging knowledge from labeled data. Existing methods typically train from
scratch, overlooking the potential of base models, and often resort to data
storage to prevent forgetting. Moreover, because RF-CCD encompasses both
continual learning and novel class discovery, previous approaches have
struggled to effectively integrate advanced techniques from these fields,
resulting in less convincing comparisons and failing to reveal the unique
challenges posed by RF-CCD. To address these challenges, we lead the way in
integrating advancements from both domains and conducting extensive experiments
and analyses. Our findings demonstrate that this integration can achieve
state-of-the-art results, leading to the conclusion that in the presence of
pre-trained models, the representation does not improve and may even degrade
with the introduction of unlabeled data. To mitigate representation
degradation, we propose a straightforward yet highly effective baseline method.
This method first utilizes prior knowledge of known categories to estimate the
number of novel classes. It then acquires representations using a model
specifically trained on the base classes, generates high-quality pseudo-labels
through k-means clustering, and trains only the classifier layer. We validate
our conclusions and methods by conducting extensive experiments across multiple
benchmarks, including the Stanford Cars, CUB, iNat, and Tiny-ImageNet datasets.
The results clearly illustrate our findings, demonstrate the effectiveness of
our baseline, and pave the way for future advancements in RF-CCD.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 06:46:32 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhang",
"Chuyu",
""
],
[
"Yu",
"Xueyang",
""
],
[
"Gu",
"Peiyan",
""
],
[
"He",
"Xuming",
""
]
] | TITLE: Freeze and Cluster: A Simple Baseline for Rehearsal-Free Continual
Category Discovery
ABSTRACT: This paper addresses the problem of Rehearsal-Free Continual Category
Discovery (RF-CCD), which focuses on continuously identifying novel classes by
leveraging knowledge from labeled data. Existing methods typically train from
scratch, overlooking the potential of base models, and often resort to data
storage to prevent forgetting. Moreover, because RF-CCD encompasses both
continual learning and novel class discovery, previous approaches have
struggled to effectively integrate advanced techniques from these fields,
resulting in less convincing comparisons and failing to reveal the unique
challenges posed by RF-CCD. To address these challenges, we lead the way in
integrating advancements from both domains and conducting extensive experiments
and analyses. Our findings demonstrate that this integration can achieve
state-of-the-art results, leading to the conclusion that in the presence of
pre-trained models, the representation does not improve and may even degrade
with the introduction of unlabeled data. To mitigate representation
degradation, we propose a straightforward yet highly effective baseline method.
This method first utilizes prior knowledge of known categories to estimate the
number of novel classes. It then acquires representations using a model
specifically trained on the base classes, generates high-quality pseudo-labels
through k-means clustering, and trains only the classifier layer. We validate
our conclusions and methods by conducting extensive experiments across multiple
benchmarks, including the Stanford Cars, CUB, iNat, and Tiny-ImageNet datasets.
The results clearly illustrate our findings, demonstrate the effectiveness of
our baseline, and pave the way for future advancements in RF-CCD.
|
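The baseline in the entry above (2503.09106) is described concretely enough to sketch: freeze a backbone trained on the base classes, cluster unlabeled features with k-means using an estimated number of novel classes, and fit only a classifier on the resulting pseudo-labels. A minimal version of that recipe is given below; the backbone features, class-number estimate, and classifier head are stand-ins, not the paper's exact choices.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)

    # Assumed: a frozen, base-class-trained backbone has already embedded the
    # unlabeled stream; the true novel-class structure is unknown to the method.
    unlabeled_feats = rng.normal(size=(1500, 128))

    # Step 1: estimate the number of novel classes (fixed guess here; the paper
    # derives it from prior knowledge of the known categories).
    estimated_k = 8

    # Step 2: k-means on frozen features -> pseudo-labels for the novel classes.
    kmeans = KMeans(n_clusters=estimated_k, n_init=10, random_state=0)
    pseudo_labels = kmeans.fit_predict(unlabeled_feats)

    # Step 3: train ONLY a classifier head on (frozen feature, pseudo-label) pairs;
    # the representation itself is never updated, avoiding degradation/forgetting.
    head = LogisticRegression(max_iter=1000).fit(unlabeled_feats, pseudo_labels)
    print(head.score(unlabeled_feats, pseudo_labels))   # fit quality on pseudo-labels
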
2503.09113 | Yonas Tefera | Yonas Tefera, Quinten Van Baelen, Maarten Meire, Stijn Luca and Peter
Karsmakers | Constraint-Guided Learning of Data-driven Health Indicator Models: An
Application on the Pronostia Bearing Dataset | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a constraint-guided deep learning framework for
developing physically consistent health indicators in bearing prognostics and
health management. Conventional data-driven methods often lack physical
plausibility, while physics-based models are limited by incomplete system
knowledge. To address this, we integrate domain knowledge into deep learning
using constraints to enforce monotonicity, bound output values between 1 and 0
(representing healthy to failed states), and ensure consistency between signal
energy trends and health indicator estimates. This eliminates the need for
complex loss term balancing. We implement constraint-guided gradient descent
within an autoencoder architecture, creating a constrained autoencoder.
However, the framework is adaptable to other architectures. Using
time-frequency representations of accelerometer signals from the Pronostia
dataset, our constrained model generates smoother, more reliable degradation
profiles compared to conventional methods, aligning with expected physical
behavior. Performance is assessed using three metrics: trendability,
robustness, and consistency. Compared to a conventional baseline, the
constrained model improves all three. Another baseline, incorporating
monotonicity via a soft-ranking loss function, outperforms in trendability but
falls short in robustness and consistency. An ablation study confirms that the
monotonicity constraint enhances trendability, the boundary constraint ensures
consistency, and the energy-health consistency constraint improves robustness.
These findings highlight the effectiveness of constraint-guided deep learning
in producing reliable, physically meaningful health indicators, offering a
promising direction for future prognostic applications.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 07:01:27 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Tefera",
"Yonas",
""
],
[
"Van Baelen",
"Quinten",
""
],
[
"Meire",
"Maarten",
""
],
[
"Luca",
"Stijn",
""
],
[
"Karsmakers",
"Peter",
""
]
] | TITLE: Constraint-Guided Learning of Data-driven Health Indicator Models: An
Application on the Pronostia Bearing Dataset
ABSTRACT: This paper presents a constraint-guided deep learning framework for
developing physically consistent health indicators in bearing prognostics and
health management. Conventional data-driven methods often lack physical
plausibility, while physics-based models are limited by incomplete system
knowledge. To address this, we integrate domain knowledge into deep learning
using constraints to enforce monotonicity, bound output values between 1 and 0
(representing healthy to failed states), and ensure consistency between signal
energy trends and health indicator estimates. This eliminates the need for
complex loss term balancing. We implement constraint-guided gradient descent
within an autoencoder architecture, creating a constrained autoencoder.
However, the framework is adaptable to other architectures. Using
time-frequency representations of accelerometer signals from the Pronostia
dataset, our constrained model generates smoother, more reliable degradation
profiles compared to conventional methods, aligning with expected physical
behavior. Performance is assessed using three metrics: trendability,
robustness, and consistency. Compared to a conventional baseline, the
constrained model improves all three. Another baseline, incorporating
monotonicity via a soft-ranking loss function, outperforms in trendability but
falls short in robustness and consistency. An ablation study confirms that the
monotonicity constraint enhances trendability, the boundary constraint ensures
consistency, and the energy-health consistency constraint improves robustness.
These findings highlight the effectiveness of constraint-guided deep learning
in producing reliable, physically meaningful health indicators, offering a
promising direction for future prognostic applications.
|
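For the constraint-guided entry above (2503.09113), the three stated requirements (monotonic degradation, outputs bounded in [0, 1], agreement between the signal-energy trend and the health indicator) can be illustrated as simple penalty terms on a health-indicator sequence. The actual work enforces them through constraint-guided gradient descent inside an autoencoder rather than through the ad-hoc penalties sketched here.

    import torch

    def constraint_penalties(hi, signal_energy):
        """hi: (T,) health indicator over time, ideally 1 (healthy) -> 0 (failed).
        signal_energy: (T,) energy trend of the same accelerometer signal."""
        # Monotonicity: penalize any increase of the health indicator over time.
        mono = torch.relu(hi[1:] - hi[:-1]).sum()
        # Boundedness: penalize values outside [0, 1].
        bound = torch.relu(hi - 1).sum() + torch.relu(-hi).sum()
        # Energy-health consistency: while signal energy rises (degradation),
        # the health indicator should not rise as well.
        energy_rising = (signal_energy[1:] > signal_energy[:-1]).float()
        consist = (energy_rising * torch.relu(hi[1:] - hi[:-1])).sum()
        return mono, bound, consist

    T = 100
    hi = torch.linspace(1.0, 0.0, T) + 0.02 * torch.randn(T)
    energy = torch.linspace(0.1, 2.0, T) + 0.05 * torch.randn(T)
    print([round(float(p), 4) for p in constraint_penalties(hi, energy)])
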
2503.09124 | Xiangui Kang | Jin Li, Ziqiang He, Anwei Luo, Jian-Fang Hu, Z. Jane Wang, Xiangui
Kang | AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial
Attacks | Accept by NeurIPS 2024. Please cite this paper using the following
format: J. Li, Z. He, A. Luo, J. Hu, Z. Wang, X. Kang*, "AdvAD: Exploring
Non-Parametric Diffusion for Imperceptible Adversarial Attacks", the 38th
Annual Conference on Neural Information Processing Systems (NeurIPS),
Vancouver, Canada, Dec 9-15, 2024. Code: https://github.com/XianguiKang/AdvAD | Advances in Neural Information Processing Systems, vol. 37, pp.
52323--52353, 2024 | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imperceptible adversarial attacks aim to fool DNNs by adding imperceptible
perturbation to the input data. Previous methods typically improve the
imperceptibility of attacks by integrating common attack paradigms with
specifically designed perception-based losses or the capabilities of generative
models. In this paper, we propose Adversarial Attacks in Diffusion (AdvAD), a
novel modeling framework distinct from existing attack paradigms. AdvAD
innovatively conceptualizes attacking as a non-parametric diffusion process by
theoretically exploring a basic modeling approach rather than using the denoising
or generation abilities of regular diffusion models requiring neural networks.
At each step, much subtler yet effective adversarial guidance is crafted using
only the attacked model without any additional network, which gradually leads
the end of the diffusion process from the original image to a desired imperceptible
adversarial example. Grounded in a solid theoretical foundation of the proposed
non-parametric diffusion process, AdvAD achieves high attack efficacy and
imperceptibility with intrinsically lower overall perturbation strength.
Additionally, an enhanced version AdvAD-X is proposed to evaluate the extreme
of our novel framework under an ideal scenario. Extensive experiments
demonstrate the effectiveness of the proposed AdvAD and AdvAD-X. Compared with
state-of-the-art imperceptible attacks, AdvAD achieves an average of 99.9$\%$
(+17.3$\%$) ASR with 1.34 (-0.97) $l_2$ distance, 49.74 (+4.76) PSNR and 0.9971
(+0.0043) SSIM against four prevalent DNNs with three different architectures
on the ImageNet-compatible dataset. Code is available at
https://github.com/XianguiKang/AdvAD.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 07:22:39 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Li",
"Jin",
""
],
[
"He",
"Ziqiang",
""
],
[
"Luo",
"Anwei",
""
],
[
"Hu",
"Jian-Fang",
""
],
[
"Wang",
"Z. Jane",
""
],
[
"Kang",
"Xiangui",
""
]
] | TITLE: AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial
Attacks
ABSTRACT: Imperceptible adversarial attacks aim to fool DNNs by adding imperceptible
perturbation to the input data. Previous methods typically improve the
imperceptibility of attacks by integrating common attack paradigms with
specifically designed perception-based losses or the capabilities of generative
models. In this paper, we propose Adversarial Attacks in Diffusion (AdvAD), a
novel modeling framework distinct from existing attack paradigms. AdvAD
innovatively conceptualizes attacking as a non-parametric diffusion process by
theoretically exploring a basic modeling approach rather than using the denoising
or generation abilities of regular diffusion models requiring neural networks.
At each step, much subtler yet effective adversarial guidance is crafted using
only the attacked model without any additional network, which gradually leads
the end of the diffusion process from the original image to a desired imperceptible
adversarial example. Grounded in a solid theoretical foundation of the proposed
non-parametric diffusion process, AdvAD achieves high attack efficacy and
imperceptibility with intrinsically lower overall perturbation strength.
Additionally, an enhanced version AdvAD-X is proposed to evaluate the extreme
of our novel framework under an ideal scenario. Extensive experiments
demonstrate the effectiveness of the proposed AdvAD and AdvAD-X. Compared with
state-of-the-art imperceptible attacks, AdvAD achieves an average of 99.9$\%$
(+17.3$\%$) ASR with 1.34 (-0.97) $l_2$ distance, 49.74 (+4.76) PSNR and 0.9971
(+0.0043) SSIM against four prevalent DNNs with three different architectures
on the ImageNet-compatible dataset. Code is available at
https://github.com/XianguiKang/AdvAD.
|
2503.09128 | Fengze Sun | Fengze Sun, Yanchuan Chang, Egemen Tanin, Shanika Karunasekera,
Jianzhong Qi | Urban Region Representation Learning: A Flexible Approach | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing availability of urban data offers new opportunities for
learning region representations, which can be used as input to machine learning
models for downstream tasks such as check-in or crime prediction. While
existing solutions have produced promising results, an issue is their fixed
formation of regions and fixed input region features, which may not suit the
needs of different downstream tasks. To address this limitation, we propose a
model named FlexiReg for urban region representation learning that is flexible
with both the formation of urban regions and the input region features.
FlexiReg is based on a spatial grid partitioning over the spatial area of
interest. It learns representations for the grid cells, leveraging publicly
accessible data, including POI, land use, satellite imagery, and street view
imagery. We propose adaptive aggregation to fuse the cell representations and
prompt learning techniques to tailor the representations towards different
tasks, addressing the needs of varying formations of urban regions and
downstream tasks. Extensive experiments on five real-world datasets demonstrate
that FlexiReg outperforms state-of-the-art models by up to 202% in terms of the
accuracy of four diverse downstream tasks using the produced urban region
representations.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 07:33:44 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Sun",
"Fengze",
""
],
[
"Chang",
"Yanchuan",
""
],
[
"Tanin",
"Egemen",
""
],
[
"Karunasekera",
"Shanika",
""
],
[
"Qi",
"Jianzhong",
""
]
] | TITLE: Urban Region Representation Learning: A Flexible Approach
ABSTRACT: The increasing availability of urban data offers new opportunities for
learning region representations, which can be used as input to machine learning
models for downstream tasks such as check-in or crime prediction. While
existing solutions have produced promising results, an issue is their fixed
formation of regions and fixed input region features, which may not suit the
needs of different downstream tasks. To address this limitation, we propose a
model named FlexiReg for urban region representation learning that is flexible
with both the formation of urban regions and the input region features.
FlexiReg is based on a spatial grid partitioning over the spatial area of
interest. It learns representations for the grid cells, leveraging publicly
accessible data, including POI, land use, satellite imagery, and street view
imagery. We propose adaptive aggregation to fuse the cell representations and
prompt learning techniques to tailor the representations towards different
tasks, addressing the needs of varying formations of urban regions and
downstream tasks. Extensive experiments on five real-world datasets demonstrate
that FlexiReg outperforms state-of-the-art models by up to 202% in terms of the
accuracy of four diverse downstream tasks using the produced urban region
representations.
|
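For the FlexiReg entry above (2503.09128), the flexibility comes from learning representations for fixed grid cells and then aggregating whichever cells fall inside a downstream region definition. A toy sketch of attention-weighted aggregation of cell embeddings into a region embedding follows; the weighting scheme, dimensions, and cell indices are assumptions, not the paper's adaptive-aggregation module.

    import torch
    import torch.nn as nn

    class CellAggregator(nn.Module):
        """Pools a variable number of grid-cell embeddings into one region embedding
        with learned (softmax) weights, so any region formation can be handled."""
        def __init__(self, dim=64):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, cell_embs):                # (num_cells_in_region, dim)
            weights = torch.softmax(self.score(cell_embs), dim=0)   # (num_cells, 1)
            return (weights * cell_embs).sum(dim=0)                 # (dim,)

    grid_cell_embeddings = torch.randn(400, 64)       # e.g. a 20x20 grid over the city
    region_cell_ids = torch.tensor([21, 22, 41, 42])  # cells covered by one region
    region_embedding = CellAggregator()(grid_cell_embeddings[region_cell_ids])
    print(region_embedding.shape)                     # torch.Size([64])
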
2503.09143 | Haoyu Zhang | Haoyu Zhang, Qiaohui Chu, Meng Liu, Yunxiao Wang, Bin Wen, Fan Yang,
Tingting Gao, Di Zhang, Yaowei Wang, Liqiang Nie | Exo2Ego: Exocentric Knowledge Guided MLLM for Egocentric Video
Understanding | Project: https://egovisiongroup.github.io/Exo2Ego.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI personal assistants, deployed through robots or wearables, require
embodied understanding to collaborate effectively with humans. Current
Multimodal Large Language Models (MLLMs) primarily focus on third-person
(exocentric) vision, overlooking the unique aspects of first-person
(egocentric) videos. Additionally, high acquisition costs limit data size,
impairing MLLM performance. To address these challenges, we propose learning
the mapping between exocentric and egocentric domains, leveraging the extensive
exocentric knowledge within existing MLLMs to enhance egocentric video
understanding. To this end, we introduce Ego-ExoClip, a pre-training dataset
comprising 1.1M synchronized ego-exo clip-text pairs derived from Ego-Exo4D.
Our approach features a progressive training pipeline with three stages:
Teacher Self-Preparation, Teacher-Student Guidance, and Student Self-Practice.
Additionally, we propose an instruction-tuning dataset EgoIT from multiple sources
to strengthen the model's instruction-following capabilities, along with the
EgoBench benchmark comprising eight different tasks for thorough evaluation.
Extensive experiments across diverse egocentric tasks reveal that existing
MLLMs perform inadequately in egocentric video understanding, while our model
significantly outperforms these leading models.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 08:10:33 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Zhang",
"Haoyu",
""
],
[
"Chu",
"Qiaohui",
""
],
[
"Liu",
"Meng",
""
],
[
"Wang",
"Yunxiao",
""
],
[
"Wen",
"Bin",
""
],
[
"Yang",
"Fan",
""
],
[
"Gao",
"Tingting",
""
],
[
"Zhang",
"Di",
"... | TITLE: Exo2Ego: Exocentric Knowledge Guided MLLM for Egocentric Video
Understanding
ABSTRACT: AI personal assistants, deployed through robots or wearables, require
embodied understanding to collaborate effectively with humans. Current
Multimodal Large Language Models (MLLMs) primarily focus on third-person
(exocentric) vision, overlooking the unique aspects of first-person
(egocentric) videos. Additionally, high acquisition costs limit data size,
impairing MLLM performance. To address these challenges, we propose learning
the mapping between exocentric and egocentric domains, leveraging the extensive
exocentric knowledge within existing MLLMs to enhance egocentric video
understanding. To this end, we introduce Ego-ExoClip, a pre-training dataset
comprising 1.1M synchronized ego-exo clip-text pairs derived from Ego-Exo4D.
Our approach features a progressive training pipeline with three stages:
Teacher Self-Preparation, Teacher-Student Guidance, and Student Self-Practice.
Additionally, we propose an instruction-tuning dataset EgoIT from multiple sources
to strengthen the model's instruction-following capabilities, along with the
EgoBench benchmark comprising eight different tasks for thorough evaluation.
Extensive experiments across diverse egocentric tasks reveal that existing
MLLMs perform inadequately in egocentric video understanding, while our model
significantly outperforms these leading models.
|
2503.09146 | Linli Yao | Linli Yao, Haoning Wu, Kun Ouyang, Yuanxing Zhang, Caiming Xiong, Bei
Chen, Xu Sun, Junnan Li | Generative Frame Sampler for Long Video Understanding | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advances in Video Large Language Models (VideoLLMs),
effectively understanding long-form videos remains a significant challenge.
Perceiving lengthy videos containing thousands of frames poses a substantial
computational burden. To mitigate this issue, this paper introduces Generative
Frame Sampler (GenS), a plug-and-play module integrated with VideoLLMs to
facilitate efficient lengthy video perception. Built upon a lightweight
VideoLLM, GenS leverages its inherent vision-language capabilities to identify
question-relevant frames. To facilitate effective retrieval, we construct
GenS-Video-150K, a large-scale video instruction dataset with dense frame
relevance annotations. Extensive experiments demonstrate that GenS consistently
boosts the performance of various VideoLLMs, including open-source models
(Qwen2-VL-7B, Aria-25B, VILA-40B, LLaVA-Video-7B/72B) and proprietary
assistants (GPT-4o, Gemini). When equipped with GenS, open-source VideoLLMs
achieve impressive state-of-the-art results on long-form video benchmarks:
LLaVA-Video-72B reaches 66.8 (+4.3) on LongVideoBench and 77.0 (+2.7) on MLVU,
while Aria obtains 39.2 on HourVideo surpassing the Gemini-1.5-pro by 1.9
points. We will release all datasets and models at
https://generative-sampler.github.io.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 08:16:39 GMT"
}
] | 2025-03-13T00:00:00 | [
[
"Yao",
"Linli",
""
],
[
"Wu",
"Haoning",
""
],
[
"Ouyang",
"Kun",
""
],
[
"Zhang",
"Yuanxing",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Chen",
"Bei",
""
],
[
"Sun",
"Xu",
""
],
[
"Li",
"Junnan",
... | TITLE: Generative Frame Sampler for Long Video Understanding
ABSTRACT: Despite recent advances in Video Large Language Models (VideoLLMs),
effectively understanding long-form videos remains a significant challenge.
Perceiving lengthy videos containing thousands of frames poses a substantial
computational burden. To mitigate this issue, this paper introduces Generative
Frame Sampler (GenS), a plug-and-play module integrated with VideoLLMs to
facilitate efficient lengthy video perception. Built upon a lightweight
VideoLLM, GenS leverages its inherent vision-language capabilities to identify
question-relevant frames. To facilitate effective retrieval, we construct
GenS-Video-150K, a large-scale video instruction dataset with dense frame
relevance annotations. Extensive experiments demonstrate that GenS consistently
boosts the performance of various VideoLLMs, including open-source models
(Qwen2-VL-7B, Aria-25B, VILA-40B, LLaVA-Video-7B/72B) and proprietary
assistants (GPT-4o, Gemini). When equipped with GenS, open-source VideoLLMs
achieve impressive state-of-the-art results on long-form video benchmarks:
LLaVA-Video-72B reaches 66.8 (+4.3) on LongVideoBench and 77.0 (+2.7) on MLVU,
while Aria obtains 39.2 on HourVideo surpassing the Gemini-1.5-pro by 1.9
points. We will release all datasets and models at
https://generative-sampler.github.io.
|
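The GenS entry above (2503.09146) amounts to scoring candidate frames for relevance to the question and forwarding only the best ones to the main VideoLLM. A schematic of that select-then-answer flow is sketched below with a placeholder scorer; the real GenS is itself a lightweight VideoLLM trained on dense relevance annotations, and the frame budget here is an assumption.

    import numpy as np

    rng = np.random.default_rng(4)

    def score_frames(frames, question):
        """Placeholder relevance scorer; GenS would produce these scores with a
        lightweight VideoLLM conditioned on the question."""
        return rng.random(len(frames))

    def sample_frames(frames, question, budget=32):
        scores = score_frames(frames, question)
        keep = np.argsort(scores)[::-1][:budget]     # highest-relevance frames
        return sorted(keep.tolist())                 # keep temporal order for the LLM

    all_frames = list(range(18000))                  # e.g. a 10-minute video at 30 fps
    selected = sample_frames(all_frames, "Who opens the door at the end?")
    print(len(selected), selected[:5])
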