Columns: DOI (string, 41 chars), Title (string, 23-152 chars), Authors (string, 9-455 chars), Abstract (string, 308-1.92k chars), Section (string, 2 classes), Date (2025-03-16 to 2025-03-24)
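The flat dump below can be read back into structured records; a minimal sketch, assuming each record is exactly the six fields above in order, with dates formatted DD/MM/YYYY (the `PaperRecord` name and the grouping-by-six convention are assumptions for illustration):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PaperRecord:
    doi: str        # e.g. "https://doi.org/10.48550/arXiv.2503.18394"
    title: str
    authors: str
    abstract: str
    section: str    # one of the 2 classes seen in the dump, e.g. "CL" or "CV"
    date: datetime

def parse_records(lines):
    """Group a flat field dump into records of six consecutive fields."""
    records = []
    for i in range(0, len(lines) // 6 * 6, 6):
        doi, title, authors, abstract, section, date = lines[i:i + 6]
        records.append(PaperRecord(doi, title, authors, abstract, section,
                                   datetime.strptime(date, "%d/%m/%Y")))
    return records

sample = [
    "https://doi.org/10.48550/arXiv.2503.18394",
    "Solving Situation Puzzles with Large Language Model and External Reformulation",
    "Kun Li, Xinwei Chen",
    "In recent years, large language models (LLMs) ...",
    "CL",
    "24/03/2025",
]
recs = parse_records(sample)
print(recs[0].section, recs[0].date.date())  # CL 2025-03-24
```

Filtering by `section` or sorting by `date` then reduces to ordinary list operations over `recs`.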
https://doi.org/10.48550/arXiv.2503.18394
Solving Situation Puzzles with Large Language Model and External Reformulation
Kun Li, Xinwei Chen, Tianyou Song, Chengrui Zhou, Zhuoran Liu, Zhenyan Zhang, Jiangjian Guo, Qing Shan
In recent years, large language models (LLMs) have shown an impressive ability to perform arithmetic and symbolic reasoning tasks. However, we found that LLMs (e.g., ChatGPT) cannot perform well on reasoning that requires multiple rounds of dialogue, especially when solving situation puzzles. Specifically, LLMs intend ...
CL
24/03/2025
https://doi.org/10.48550/arXiv.2503.18320
Bridging Writing Manner Gap in Visual Instruction Tuning by Creating LLM-aligned Instructions
Dong Jing, Nanyi Fei, Zhiwu Lu
In the realm of Large Multi-modal Models (LMMs), the instruction quality during the visual instruction tuning stage significantly influences the performance of modality alignment. In this paper, we assess the instruction quality from a unique perspective termed \textbf{Writing Manner}, which encompasses the selection o...
CL
24/03/2025
https://doi.org/10.48550/arXiv.2503.18225
Decoupling Angles and Strength in Low-rank Adaptation
Massimo Bini, Leander Girrbach, Zeynep Akata
Parameter-Efficient FineTuning (PEFT) methods have recently gained significant popularity thanks to the widespread availability of large-scale pretrained models. These methods allow for quick adaptation to downstream tasks with minimal computational cost. However, popular finetuning methods such as LoRA exhibit limited...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.18102
AgentRxiv: Towards Collaborative Autonomous Research
Samuel Schmidgall, Michael Moor
Progress in scientific discovery is rarely the result of a single "Eureka" moment, but is rather the product of hundreds of scientists incrementally working together toward a common goal. While existing agent workflows are capable of producing research autonomously, they do so in isolation, without the ability to conti...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.18065
Unseen from Seen: Rewriting Observation-Instruction Using Foundation Models for Augmenting Vision-Language Navigation
Ziming Wei, Bingqian Lin, Yunshuang Nie, Jiaqi Chen, Shikui Ma, Hang Xu, Xiaodan Liang
Data scarcity is a long-standing challenge in the Vision-Language Navigation (VLN) field, which extremely hinders the generalization of agents to unseen environments. Previous works primarily rely on additional simulator data or web-collected images/videos to improve the generalization. However, the simulator environme...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.18050
(G)I-DLE: Generative Inference via Distribution-preserving Logit Exclusion with KL Divergence Minimization for Constrained Decoding
Hanwool Lee
We propose (G)I-DLE, a new approach to constrained decoding that leverages KL divergence minimization to preserve the intrinsic conditional probability distribution of autoregressive language models while excluding undesirable tokens. Unlike conventional methods that naively set banned tokens' logits to $-\infty$, whic...
CL
23/03/2025
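The (G)I-DLE abstract above contrasts its method with conventional constrained decoding that sets banned tokens' logits to $-\infty$. As a minimal sketch of that conventional baseline only (the paper's KL-minimizing alternative is not reproduced here; the toy logits are illustrative):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ban_tokens_naive(logits, banned):
    """Conventional constrained decoding: set banned tokens' logits to -inf."""
    return [float("-inf") if i in banned else x for i, x in enumerate(logits)]

logits = [2.0, 1.0, 0.5, -1.0]   # hypothetical vocabulary of 4 tokens
banned = {1}
probs = softmax(ban_tokens_naive(logits, banned))
print(probs[1])  # 0.0 — the banned token gets zero probability
```

After masking, the remaining probability mass is renormalized over the allowed tokens; the abstract argues this naive exclusion can distort the model's conditional distribution, which is what the proposed approach aims to avoid.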
https://doi.org/10.48550/arXiv.2503.18034
Expanding the Boundaries of Vision Prior Knowledge in Multi-modal Large Language Models
Qiao Liang, Yanjiang Liu, Ben He, Yaojie Lu, Hongyu Lin, Jia Zheng, Xianpei Han, Le Sun, Yingfei Sun
Does the prior knowledge of the vision encoder constrain the capability boundary of Multi-modal Large Language Models (MLLMs)? While most existing research treats MLLMs as unified systems optimized through end-to-end training, the impact of the vision encoder's prior knowledge is seldom investigated. In this work, we intro...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.17979
Trade-offs in Large Reasoning Models: An Empirical Analysis of Deliberative and Adaptive Reasoning over Foundational Capabilities
Weixiang Zhao, Xingyu Sui, Jiahe Guo, Yulin Hu, Yang Deng, Yanyan Zhao, Bing Qin, Wanxiang Che, Tat-Seng Chua, Ting Liu
Recent advancements in Large Reasoning Models (LRMs), such as OpenAI's o1/o3 and DeepSeek-R1, have demonstrated remarkable performance in specialized reasoning tasks through human-like deliberative thinking and long chain-of-thought reasoning. However, our systematic evaluation across various model families (DeepSeek, ...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.17955
Human-AI Interaction and User Satisfaction: Empirical Evidence from Online Reviews of AI Products
Stefan Pasch, Sun-Young Ha
Human-AI Interaction (HAI) guidelines and design principles have become increasingly important in both industry and academia to guide the development of AI systems that align with user needs and expectations. However, large-scale empirical evidence on how HAI principles shape user satisfaction in practice remains limit...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.17928
Debiasing Multimodal Large Language Models via Noise-Aware Preference Optimization
Zefeng Zhang, Hengzhu Tang, Jiawei Sheng, Zhenyu Zhang, Yiming Ren, Zhenyang Li, Dawei Yin, Duohe Ma, Tingwen Liu
Multimodal Large Language Models excel in various tasks, yet often struggle with modality bias, where the model relies heavily on a single modality and overlooks critical information in other modalities, leading to incorrect focus and irrelevant responses. In this paper, we propose using the paradi...
CL
23/03/2025
https://doi.org/10.48550/arXiv.2503.17793
Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM
Codefuse, Ling Team: Wenting Cai, Yuchen Cao, Chaoyu Chen, Chen Chen, Siba Chen, Qing Cui, Peng Di, Junpeng Fang, Zi Gong, Ting Guo, Zhengyu He, Yang Huang, Cong Li, Jianguo Li, Zheng Li, Shijie Lian, BingChang Liu, Songshan Luo, Shuo Mao, Min Shen, Jian Wu, Jiaolong Yang, Wenjie Yang, Tong Ye, Hang Yu, Wei Zhang, Zhen...
Recent advancements in code large language models (LLMs) have demonstrated remarkable capabilities in code generation and understanding. It is still challenging to build a code LLM with comprehensive performance yet ultimate efficiency. Many attempts have been released in the open source community to break the trade-of...
CL
22/03/2025
https://doi.org/10.48550/arXiv.2503.17783
Energy-Aware LLMs: A step towards sustainable AI for downstream applications
Nguyen Phuc Tran, Brigitte Jaumard, Oscar Delgado
Advanced Large Language Models (LLMs) have revolutionized various fields, including communication networks, sparking an innovation wave that has led to new applications and services, and significantly enhanced solution schemes. Despite all these impressive developments, most LLMs typically require huge computational re...
CL
22/03/2025
https://doi.org/10.48550/arXiv.2503.17736
V2P-Bench: Evaluating Video-Language Understanding with Visual Prompts for Better Human-Model Interaction
Yiming Zhao, Yu Zeng, Yukun Qi, YaoYang Liu, Lin Chen, Zehui Chen, Xikun Bao, Jie Zhao, Feng Zhao
Large Vision-Language Models (LVLMs) have made significant progress in the field of video understanding recently. However, current benchmarks uniformly lean on text prompts for evaluation, which often necessitate complex referential language and fail to provide precise spatial and temporal references. This limitation d...
CL
22/03/2025
https://doi.org/10.48550/arXiv.2503.17632
FairFlow: Mitigating Dataset Biases through Undecided Learning
Jiali Cheng, Hadi Amiri
Language models are prone to dataset biases, known as shortcuts and spurious correlations in data, which often result in performance drop on new data. We present a new debiasing framework called ``FairFlow'' that mitigates dataset biases by learning to be undecided in its predictions for data samples or representations...
CL
22/03/2025
https://doi.org/10.48550/arXiv.2503.17553
Autonomous Radiotherapy Treatment Planning Using DOLA: A Privacy-Preserving, LLM-Based Optimization Agent
Humza Nusrat (1 and 2), Bing Luo (1), Ryan Hall (1), Joshua Kim (1), Hassan Bagher-Ebadian (1 and 2), Anthony Doemer (1), Benjamin Movsas (1 and 2), Kundan Thind (1 and 2) ((1) Department of Radiation Oncology, Henry Ford Health, Detroit, USA (2) College of Human Medicine, Michigan State University, East Lansing, USA)
Radiotherapy treatment planning is a complex and time-intensive process, often impacted by inter-planner variability and subjective decision-making. To address these challenges, we introduce Dose Optimization Language Agent (DOLA), an autonomous large language model (LLM)-based agent designed for optimizing radiotherap...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17502
Large Language Models (LLMs) for Source Code Analysis: applications, models and datasets
Hamed Jelodar, Mohammad Meymani, Roozbeh Razavi-Far
Large language models (LLMs) and transformer-based architectures are increasingly utilized for source code analysis. As software systems grow in complexity, integrating LLMs into code analysis workflows becomes essential for enhancing efficiency, accuracy, and automation. This paper explores the role of LLMs for differ...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17500
Variance Control via Weight Rescaling in LLM Pre-training
Louis Owen, Abhay Kumar, Nilabhra Roy Chowdhury, Fabian Güra
The outcome of Large Language Model (LLM) pre-training strongly depends on weight initialization and variance control strategies. Although the importance of initial variance control has been well documented in neural networks in general, the literature on initialization and management of its growth during LLM pre-train...
CL
21/03/2025
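The variance-control entry above concerns keeping activation variance stable during LLM pre-training. As a sketch of the general principle only (fan-in-scaled Gaussian initialization, not the paper's specific weight-rescaling scheme; dimensions and seed are arbitrary):

```python
import math
import random

def init_layer(fan_in, fan_out, rng):
    """Scale weight std by 1/sqrt(fan_in) so pre-activation variance stays ~1."""
    std = 1.0 / math.sqrt(fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_in)] for _ in range(fan_out)]

rng = random.Random(0)
fan_in, fan_out = 512, 512
W = init_layer(fan_in, fan_out, rng)
x = [rng.gauss(0.0, 1.0) for _ in range(fan_in)]          # unit-variance input
y = [sum(w * xi for w, xi in zip(row, x)) for row in W]   # pre-activations
var = sum(v * v for v in y) / len(y)
print(round(var, 1))  # typically close to 1.0
```

Without the 1/sqrt(fan_in) factor, the pre-activation variance would grow with layer width, which is the kind of uncontrolled growth such strategies are designed to prevent.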
https://doi.org/10.48550/arXiv.2503.17438
From Text to Talent: A Pipeline for Extracting Insights from Candidate Profiles
Paolo Frazzetto, Muhammad Uzair Ul Haq, Flavia Fabris, Alessandro Sperduti
The recruitment process is undergoing a significant transformation with the increasing use of machine learning and natural language processing techniques. While previous studies have focused on automating candidate selection, the role of multiple vacancies in this process remains understudied. This paper addresses this...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17421
Understanding Social Support Needs in Questions: A Hybrid Approach Integrating Semi-Supervised Learning and LLM-based Data Augmentation
Junwei Kuang, Liang Yang, Shaoze Cui, Weiguo Fan
Patients are increasingly turning to online health Q&A communities for social support to improve their well-being. However, when this support received does not align with their specific needs, it may prove ineffective or even detrimental. This necessitates a model capable of identifying the social support needs in ques...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17382
State Fourier Diffusion Language Model (SFDLM): A Scalable, Novel Iterative Approach to Language Modeling
Andrew Kiruluta, Andreas Lemos
In recent years, diffusion based methods have emerged as a powerful paradigm for generative modeling. Although discrete diffusion for natural language processing has been explored to a lesser extent, it shows promise for tasks requiring iterative denoising of token based data. In standard approaches to text generation,...
CL
16/03/2025
https://doi.org/10.48550/arXiv.2503.16586
Big Help or Big Brother? Auditing Tracking, Profiling, and Personalization in Generative AI Assistants
Yash Vekaria (1), Aurelio Loris Canino (2), Jonathan Levitsky (1), Alex Ciechonski (3), Patricia Callejo (4), Anna Maria Mandalari (3), Zubair Shafiq (1) ((1) UC Davis, (2) Mediterranea University of Reggio Calabria, (3) University College London, (4) Universidad Carlos III de Madrid)
Generative AI (GenAI) browser assistants integrate powerful capabilities of GenAI in web browsers to provide rich experiences such as question answering, content summarization, and agentic navigation. These assistants, available today as browser extensions, can not only track detailed browsing activity such as search a...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.17363
Dancing with Critiques: Enhancing LLM Reasoning with Stepwise Natural Language Self-Critique
Yansi Li, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Qiuzhi Liu, Rui Wang, Zhuosheng Zhang, Zhaopeng Tu, Haitao Mi, Dong Yu
Enhancing the reasoning capabilities of large language models (LLMs), particularly for complex tasks requiring multi-step logical deductions, remains a significant challenge. Traditional inference time scaling methods utilize scalar reward signals from process reward models to evaluate candidate reasoning steps, but th...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17336
Efficient Intent-Based Filtering for Multi-Party Conversations Using Knowledge Distillation from LLMs
Reem Gody, Mohamed Abdelghaffar, Mohammed Jabreel, Ahmed Tawfik
Large language models (LLMs) have showcased remarkable capabilities in conversational AI, enabling open-domain responses in chat-bots, as well as advanced processing of conversations like summarization, intent classification, and insights generation. However, these models are resource-intensive, demanding substantial m...
CL
21/03/2025
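The intent-filtering entry above relies on knowledge distillation from LLMs. As a minimal sketch of the standard distillation objective (temperature-softened KL divergence between teacher and student distributions; the toy logits and temperature are illustrative, not from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled, numerically stable softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher    = [3.0, 1.0, 0.2]  # hypothetical LLM intent scores
aligned    = [2.8, 1.1, 0.3]  # student close to the teacher
mismatched = [0.0, 0.0, 3.0]  # student far from the teacher
print(distillation_kl(teacher, aligned) < distillation_kl(teacher, mismatched))  # True
```

Minimizing this loss pushes the small student model's output distribution toward the large teacher's, which is what lets a compact filter approximate LLM-quality intent classification.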
https://doi.org/10.48550/arXiv.2503.17287
FastCuRL: Curriculum Reinforcement Learning with Progressive Context Extension for Efficient Training R1-like Reasoning Models
Mingyang Song, Mao Zheng, Zheng Li, Wenjie Yang, Xuan Luo, Yue Pan, Feng Zhang
In this paper, we propose \textbf{\textsc{FastCuRL}}, a simple yet efficient \textbf{Cu}rriculum \textbf{R}einforcement \textbf{L}earning approach with context window extending strategy to accelerate the reinforcement learning training efficiency for R1-like reasoning models while enhancing their performance in tacklin...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17279
CASE -- Condition-Aware Sentence Embeddings for Conditional Semantic Textual Similarity Measurement
Gaifan Zhang, Yi Zhou, Danushka Bollegala
The meaning conveyed by a sentence often depends on the context in which it appears. Despite the progress of sentence embedding methods, it remains unclear how to best modify a sentence embedding conditioned on its context. To address this problem, we propose Condition-Aware Sentence Embeddings (CASE), an efficient and...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17247
KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications
Michael J Bommarito, Daniel Martin Katz, Jillian Bommarito
We present the KL3M tokenizers, a family of specialized tokenizers for legal, financial, and governmental text. Despite established work on tokenization, specialized tokenizers for professional domains remain understudied. Our paper offers two main contributions to this area. First, we introduce domain-specific BPE toke...
CL
21/03/2025
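The KL3M entry above introduces domain-specific BPE tokenizers. As a toy sketch of the byte-pair-encoding merge loop that underlies such tokenizers (the mini "legal" corpus and merge count are hypothetical, not the paper's training setup):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in words.items():
        syms = word.split()
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    """Replace every occurrence of the pair with its concatenation."""
    a, b = pair
    merged = {}
    for word, freq in words.items():
        syms = word.split()
        out, i = [], 0
        while i < len(syms):
            if i < len(syms) - 1 and (syms[i], syms[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(syms[i])
                i += 1
        merged[" ".join(out)] = freq
    return merged

# toy domain corpus: words split into characters, with frequencies
words = {"l i a b i l i t y": 5, "l i a b l e": 3, "l i e n": 2}
for _ in range(3):
    words = merge_pair(words, most_frequent_pair(words))
print(list(words))  # the frequent domain subword "liab" has been merged
```

Training on in-domain text is what makes merges like "liab" emerge early, which is the intuition behind building separate tokenizers for legal and financial corpora.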
https://doi.org/10.48550/arXiv.2503.17239
SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language Models via Selective Layer-Wise Model Merging
Aladin Djuhera, Swanand Ravindra Kadhe, Farhan Ahmed, Syed Zawad, Holger Boche
Fine-tuning large language models (LLMs) on downstream tasks can inadvertently erode their safety alignment, even for benign fine-tuning datasets. We address this challenge by proposing SafeMERGE, a post-fine-tuning framework that preserves safety while maintaining task utility. It achieves this by selectively merging ...
CL
21/03/2025
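The SafeMERGE entry above describes selectively merging fine-tuned and safety-aligned model layers. As a minimal sketch of per-layer interpolation (the layer names, flat weight vectors, merge coefficient, and the set of layers flagged for merging are all hypothetical; the paper's actual selection criterion is not reproduced here):

```python
def merge_models(finetuned, safe, flagged_layers, alpha=0.5):
    """Selective layer-wise merge: only flagged layers are interpolated
    back toward the safety-aligned weights; others keep fine-tuned values."""
    merged = {}
    for name, w_ft in finetuned.items():
        if name in flagged_layers:
            w_safe = safe[name]
            merged[name] = [alpha * a + (1 - alpha) * b
                            for a, b in zip(w_ft, w_safe)]
        else:
            merged[name] = list(w_ft)
    return merged

finetuned = {"layer0": [1.0, 2.0], "layer1": [4.0, 0.0]}
safe      = {"layer0": [0.0, 0.0], "layer1": [2.0, 2.0]}
out = merge_models(finetuned, safe, flagged_layers={"layer1"}, alpha=0.5)
print(out["layer0"], out["layer1"])  # [1.0, 2.0] [3.0, 1.0]
```

Merging only the flagged layers is what lets such a scheme restore safety behavior while leaving task-specific layers, and hence task utility, largely intact.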
https://doi.org/10.48550/arXiv.2503.17222
Automating Adjudication of Cardiovascular Events Using Large Language Models
Sonish Sivarajkumar, Kimia Ameri, Chuqin Li, Yanshan Wang, Min Jiang
Cardiovascular events, such as heart attacks and strokes, remain a leading cause of mortality globally, necessitating meticulous monitoring and adjudication in clinical trials. This process, traditionally performed manually by clinical experts, is time-consuming, resource-intensive, and prone to inter-reviewer variabil...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17211
A Language Anchor-Guided Method for Robust Noisy Domain Generalization
Zilin Dai, Lehong Wang, Fangzhou Lin, Yidong Wang, Zhigang Li, Kazunori D Yamada, Ziming Zhang, Wang Lu
Real-world machine learning applications often struggle with two major challenges: distribution shift and label noise. Models tend to overfit by focusing on redundant and uninformative features in the training data, which makes it hard for them to generalize to the target domain. Noisy data worsens this problem by caus...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17136
CoKe: Customizable Fine-Grained Story Evaluation via Chain-of-Keyword Rationalization
Brihi Joshi, Sriram Venkatapathy, Mohit Bansal, Nanyun Peng, Haw-Shiuan Chang
Evaluating creative text such as human-written stories using language models has always been a challenging task -- owing to the subjectivity of multi-annotator ratings. To mimic the thinking process of humans, chain of thought (CoT) generates free-text explanations that help guide a model's predictions and Self-Consist...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17126
Modifying Large Language Model Post-Training for Diverse Creative Writing
John Joon Young Chung, Vishakh Padmakumar, Melissa Roemmele, Yuqian Sun, Max Kreminski
As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality but neglects to facilitate output diversity. Hence, in creative writing g...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17073
A Study into Investigating Temporal Robustness of LLMs
Jonas Wallat, Abdelrahman Abdallah, Adam Jatowt, Avishek Anand
Large Language Models (LLMs) encapsulate a surprising amount of factual world knowledge. However, their performance on temporal questions and historical knowledge is limited because they often cannot understand temporal scope and orientation or neglect the temporal aspect altogether. In this study, we aim to measure pr...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17039
Summarization Metrics for Spanish and Basque: Do Automatic Scores and LLM-Judges Correlate with Humans?
Jeremy Barnes, Naiara Perez, Alba Bonet-Jover, Begoña Altuna
Studies on evaluation metrics and LLM-as-a-Judge models for automatic text summarization have largely been focused on English, limiting our understanding of their effectiveness in other languages. Through our new dataset BASSE (BAsque and Spanish Summarization Evaluation), we address this situation by collecting human ...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.17003
A Survey on Personalized Alignment -- The Missing Piece for Large Language Models in Real-World Applications
Jian Guan, Junfei Wu, Jia-Nan Li, Chuanqi Cheng, Wei Wu
Large Language Models (LLMs) have demonstrated remarkable capabilities, yet their transition to real-world applications reveals a critical limitation: the inability to adapt to individual preferences while maintaining alignment with universal human values. Current alignment techniques adopt a one-size-fits-all approach...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16965
When Words Outperform Vision: VLMs Can Self-Improve Via Text-Only Training For Human-Centered Decision Making
Zhe Hu, Jing Li, Yu Yin
Embodied decision-making is fundamental for AI agents operating in real-world environments. While Visual Language Models (VLMs) have advanced this capability, they still struggle with complex decisions, particularly in human-centered situations that require deep reasoning about human needs and values. In this study, we...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16883
Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings
Deniss Ruder, Andero Uusberg, Kairit Sirts
Appraisal theories suggest that emotions arise from subjective evaluations of events, referred to as appraisals. The taxonomy of appraisals is quite diverse, and they are usually given ratings on a Likert scale to be annotated in an experiencer-annotator or reader-annotator paradigm. This paper studies GPT-4 as a reade...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16868
Joint Extraction Matters: Prompt-Based Visual Question Answering for Multi-Field Document Information Extraction
Mengsay Loem, Taiju Hosaka
Visual question answering (VQA) has emerged as a flexible approach for extracting specific pieces of information from document images. However, existing work typically queries each field in isolation, overlooking potential dependencies across multiple items. This paper investigates the merits of extracting multiple fie...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16858
MTBench: A Multimodal Time Series Benchmark for Temporal Reasoning and Question Answering
Jialin Chen, Aosong Feng, Ziyu Zhao, Juan Garza, Gaukhar Nurbek, Cheng Qin, Ali Maatouk, Leandros Tassiulas, Yifeng Gao, Rex Ying
Understanding the relationship between textual news and time-series evolution is a critical yet under-explored challenge in applied data science. While multimodal learning has gained traction, existing multimodal time-series datasets fall short in evaluating cross-modal reasoning and complex question answering, which a...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16856
MMCR: Benchmarking Cross-Source Reasoning in Scientific Papers
Yang Tian, Zheng Lu, Mingqi Gao, Zheng Liu, Bo Zhao
Fully comprehending scientific papers by machines reflects a high level of Artificial General Intelligence, requiring the ability to reason across fragmented and heterogeneous sources of information, presenting a complex and practically significant challenge. While Vision-Language Models (VLMs) have made remarkable str...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16853
Imagine to Hear: Auditory Knowledge Generation can be an Effective Assistant for Language Models
Suho Yoo, Hyunjong Ok, Jaeho Lee
Language models pretrained on text-only corpora often struggle with tasks that require auditory commonsense knowledge. Previous work addresses this problem by augmenting the language model to retrieve knowledge from external audio databases. This approach has several limitations, such as the potential lack of relevant ...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16826
When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large Language Models in Cultural Mixture Contexts
Jun Seong Kim, Kyaw Ye Thu, Javad Ismayilzada, Junyeong Park, Eunsu Kim, Huzama Ahmad, Na Min An, James Thorne, Alice Oh
In a highly globalized world, it is important for multi-modal large language models (MLLMs) to recognize and respond correctly to mixed-cultural inputs. For example, a model should correctly identify kimchi (Korean food) in an image both when an Asian woman is eating it and when an African man is eating it. However,...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16789
Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation
Rupak Sarkar, Bahareh Sarrafzadeh, Nirupama Chandrasekaran, Nagu Rangan, Philip Resnik, Longqi Yang, Sujay Kumar Jauhar
Human-LLM conversations are becoming increasingly pervasive in people's professional and personal lives, yet many users still struggle to elicit helpful responses from LLM chatbots. One of the reasons for this issue is users' lack of understanding in crafting effective prompts that accurately convey their informat...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16779
Chain-of-Tools: Utilizing Massive Unseen Tools in the CoT Reasoning of Frozen Language Models
Mengsong Wu, Tong Zhu, Han Han, Xiang Zhang, Wenbiao Shao, Wenliang Chen
Tool learning can further broaden the usage scenarios of large language models (LLMs). However, most existing methods either require finetuning, so that the model can only use tools seen in the training data, or add tool demonstrations into the prompt with lower efficiency. In this paper, we present a new Tool Learning...
CL
21/03/2025
https://doi.org/10.48550/arXiv.2503.16745
SPACER: A Parallel Dataset of Speech Production And Comprehension of Error Repairs
Shiva Upadhye, Jiaxuan Li, Richard Futrell
Speech errors are a natural part of communication, yet they rarely lead to complete communicative failure because both speakers and comprehenders can detect and correct errors. Although prior research has examined error monitoring and correction in production and comprehension separately, integrated investigation of bo...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.16728
Natural Language Generation
Emiel van Miltenburg, Chenghua Lin
This article provides a brief overview of the field of Natural Language Generation. The term Natural Language Generation (NLG), in its broadest definition, refers to the study of systems that verbalize some form of information through natural language. That information could be stored in a large database or knowledge g...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.16674
Through the LLM Looking Glass: A Socratic Self-Assessment of Donkeys, Elephants, and Markets
Molly Kennedy, Ayyoob Imani, Timo Spinde, Hinrich Schütze
While detecting and avoiding bias in LLM-generated text is becoming increasingly important, media bias often remains subtle and subjective, making it particularly difficult to identify and mitigate. In this study, we assess media bias in LLM-generated content and LLMs' ability to detect subtle ideological bias. We cond...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.16655
Accelerating Antibiotic Discovery with Large Language Models and Knowledge Graphs
Maxime Delmas, Magdalena Wysocka, Danilo Gusicuma, André Freitas
The discovery of novel antibiotics is critical to address the growing antimicrobial resistance (AMR). However, pharmaceutical industries face high costs (over $1 billion), long timelines, and a high failure rate, worsened by the rediscovery of known compounds. We propose an LLM-based pipeline that acts as an alarm syst...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.16622
Leveraging Large Language Models for Explainable Activity Recognition in Smart Homes: A Critical Evaluation
Michele Fiori, Gabriele Civitarese, Priyankar Choudhary, Claudio Bettini
Explainable Artificial Intelligence (XAI) aims to uncover the inner reasoning of machine learning models. In IoT systems, XAI improves the transparency of models processing sensor data from multiple heterogeneous devices, ensuring end-users understand and trust their outputs. Among the many applications, XAI has also b...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.16614
Classification of User Reports for Detection of Faulty Computer Components using NLP Models: A Case Study
Maria de Lourdes M. Silva, André L. C. Mendonça, Eduardo R. D. Neto, Iago C. Chaves, Felipe T. Brito, Victor A. E. Farias, Javam C. Machado
Computer manufacturers typically offer platforms for users to report faults. However, there remains a significant gap in these platforms' ability to effectively utilize textual reports, which impedes users from describing their issues in their own words. In this context, Natural Language Processing (NLP) offers a promi...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.16585
Distributed LLMs and Multimodal Large Language Models: A Survey on Advances, Challenges, and Future Directions
Hadi Amini, Md Jueal Mia, Yasaman Saadati, Ahmed Imteaj, Seyedsina Nabavirazavi, Urmish Thakker, Md Zarif Hossain, Awal Ahmed Fime, S.S. Iyengar
Language models (LMs) are machine learning models designed to predict linguistic patterns by estimating the probability of word sequences based on large-scale datasets, such as text. LMs have a wide range of applications in natural language processing (NLP) tasks, including autocomplete and machine translation. Althoug...
CL
20/03/2025
https://doi.org/10.48550/arXiv.2503.18950
Target-Aware Video Diffusion Models
Taeksoo Kim, Hanbyul Joo
We present a target-aware video diffusion model that generates videos from an input image in which an actor interacts with a specified target while performing a desired action. The target is defined by a segmentation mask and the desired action is described via a text prompt. Unlike existing controllable image-to-video...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18948
Equivariant Image Modeling
Ruixiao Dong, Mengde Xu, Zigang Geng, Li Li, Han Hu, Shuyang Gu
Current generative models, such as autoregressive and diffusion approaches, decompose high-dimensional data distribution learning into a series of simpler subtasks. However, inherent conflicts arise during the joint optimization of these subtasks, and existing solutions fail to resolve such conflicts without sacrificin...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18947
Tuning-Free Amodal Segmentation via the Occlusion-Free Bias of Inpainting Models
Jae Joong Lee, Bedrich Benes, Raymond A. Yeh
Amodal segmentation aims to predict segmentation masks for both the visible and occluded regions of an object. Most existing works formulate this as a supervised learning problem, requiring manually annotated amodal masks or synthetic training data. Consequently, their performance depends on the quality of the datasets...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18945
Aether: Geometric-Aware Unified World Modeling
Aether Team, Haoyi Zhu, Yifan Wang, Jianjun Zhou, Wenzheng Chang, Yang Zhou, Zizun Li, Junyi Chen, Chunhua Shen, Jiangmiao Pang, Tong He
The integration of geometric reconstruction and generative modeling remains a critical challenge in developing AI systems capable of human-like spatial reasoning. This paper proposes Aether, a unified framework that enables geometry-aware reasoning in world models by jointly optimizing three core capabilities: (1) 4D d...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18944
DINO in the Room: Leveraging 2D Foundation Models for 3D Segmentation
Karim Abou Zeid, Kadir Yilmaz, Daan de Geus, Alexander Hermans, David Adrian, Timm Linder, Bastian Leibe
Vision foundation models (VFMs) trained on large-scale image datasets provide high-quality features that have significantly advanced 2D visual recognition. However, their potential in 3D vision remains largely untapped, despite the common availability of 2D images alongside 3D point cloud datasets. While significant re...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18943
SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding
Mingze Xu, Mingfei Gao, Shiyu Li, Jiasen Lu, Zhe Gan, Zhengfeng Lai, Meng Cao, Kai Kang, Yinfei Yang, Afshin Dehghan
We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of video large language models (LLMs) offering a token-efficient solution for long-form video understanding. This model family employs the two-stream SlowFast mechanism, enabling efficient modeling of long-range temporal context to meet the demand ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18942
Video-T1: Test-Time Scaling for Video Generation
Fangfu Liu, Hanyang Wang, Yimo Cai, Kaiyan Zhang, Xiaohang Zhan, Yueqi Duan
With the scale capability of increasing training data, model size, and computational cost, video generation has achieved impressive results in digital creation, enabling users to express creativity across various domains. Recently, researchers in Large Language Models (LLMs) have expanded the scaling to test-time, whic...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18940
Training-free Diffusion Acceleration with Bottleneck Sampling
Ye Tian, Xin Xia, Yuxi Ren, Shanchuan Lin, Xing Wang, Xuefeng Xiao, Yunhai Tong, Ling Yang, Bin Cui
Diffusion models have demonstrated remarkable capabilities in visual content generation but remain challenging to deploy due to their high computational cost during inference. This computational burden primarily arises from the quadratic complexity of self-attention with respect to image or video resolution. While exis...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18933
SyncVP: Joint Diffusion for Synchronous Multi-Modal Video Prediction
Enrico Pallotta, Sina Mokhtarzadeh Azar, Shuai Li, Olga Zatsarynna, Juergen Gall
Predicting future video frames is essential for decision-making systems, yet RGB frames alone often lack the information needed to fully capture the underlying complexities of the real world. To address this limitation, we propose a multi-modal framework for Synchronous Video Prediction (SyncVP) that incorporates compl...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18931
CoMP: Continual Multimodal Pre-training for Vision Foundation Models
Yitong Chen, Lingchen Meng, Wujian Peng, Zuxuan Wu, Yu-Gang Jiang
Pre-trained Vision Foundation Models (VFMs) provide strong visual representations for a wide range of applications. In this paper, we continually pre-train prevailing VFMs in a multimodal manner such that they can effortlessly process visual inputs of varying sizes and produce visual representations that are more align...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18923
Video SimpleQA: Towards Factuality Evaluation in Large Video Language Models
Meng Cao, Pengfei Hu, Yingyao Wang, Jihao Gu, Haoran Tang, Haoze Zhao, Jiahua Dong, Wangbo Yu, Ge Zhang, Ian Reid, Xiaodan Liang
Recent advancements in Large Video Language Models (LVLMs) have highlighted their potential for multi-modal understanding, yet evaluating their factual grounding in video contexts remains a critical unsolved challenge. To address this gap, we introduce Video SimpleQA, the first comprehensive benchmark tailored for fact...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18903
Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection
Moussa Kassem Sbeyti, Nadja Klein, Azarm Nowzad, Fikret Sivrikaya, Sahin Albayrak
Semi-supervised object detection (SSOD) based on pseudo-labeling significantly reduces dependence on large labeled datasets by effectively leveraging both labeled and unlabeled data. However, real-world applications of SSOD often face critical challenges, including class imbalance, label noise, and labeling errors. We ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18897
Online 3D Scene Reconstruction Using Neural Object Priors
Thomas Chabal, Shizhe Chen, Jean Ponce, Cordelia Schmid
This paper addresses the problem of reconstructing a scene online at the level of objects given an RGB-D video sequence. While current object-aware neural implicit representations hold promise, they are limited in online reconstruction efficiency and shape completion. Our main contributions to alleviate the above limit...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18886
CFG-Zero*: Improved Classifier-Free Guidance for Flow Matching Models
Weichen Fan, Amber Yijia Zheng, Raymond A. Yeh, Ziwei Liu
Classifier-Free Guidance (CFG) is a widely adopted technique in diffusion/flow models to improve image fidelity and controllability. In this work, we first analytically study the effect of CFG on flow matching models trained on Gaussian mixtures where the ground-truth flow can be derived. We observe that in the early s...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18883
Efficient and Accurate Scene Text Recognition with Cascaded-Transformers
Savas Ozkan, Andrea Maracani, Hyowon Kim, Sijun Cho, Eunchung Noh, Jeongwon Min, Jung Min Cho, Mete Ozay
In recent years, vision transformers with text decoder have demonstrated remarkable performance on Scene Text Recognition (STR) due to their ability to capture long-range dependencies and contextual relationships with high learning capacity. However, the computational and memory demands of these models are significant,...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18880
Seeing Speech and Sound: Distinguishing and Locating Audios in Visual Scenes
Hyeonggon Ryu, Seongyu Kim, Joon Son Chung, Arda Senocak
We present a unified model capable of simultaneously grounding both spoken language and non-speech sounds within a visual scene, addressing key limitations in current audio-visual grounding models. Existing approaches are typically limited to handling either speech or non-speech sounds independently, or at best, togeth...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18873
Efficient Self-Supervised Adaptation for Medical Image Analysis
Moein Sorkhei, Emir Konuk, Jingyu Guo, Christos Matsoukas, Kevin Smith
Self-supervised adaptation (SSA) improves foundation model transfer to medical domains but is computationally prohibitive. Although parameter efficient fine-tuning methods such as LoRA have been explored for supervised adaptation, their effectiveness for SSA remains unknown. In this work, we introduce efficient self-su...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18872
Curriculum Coarse-to-Fine Selection for High-IPC Dataset Distillation
Yanda Chen, Gongwei Chen, Miao Zhang, Weili Guan, Liqiang Nie
Dataset distillation (DD) excels in synthesizing a small number of images per class (IPC) but struggles to maintain its effectiveness in high-IPC settings. Recent works on dataset distillation demonstrate that combining distilled and real data can mitigate the effectiveness decay. However, our analysis of the combinati...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18862
Exploring the Integration of Key-Value Attention Into Pure and Hybrid Transformers for Semantic Segmentation
DeShin Hwa, Tobias Holmes, Klaus Drechsler
While CNNs were long considered state of the art for image processing, the introduction of Transformer architectures has challenged this position. While achieving excellent results in image classification and segmentation, Transformers remain inherently reliant on large training datasets and remain computationally expe...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18860
HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation
Zunnan Xu, Zhentao Yu, Zixiang Zhou, Jun Zhou, Xiaoyu Jin, Fa-Ting Hong, Xiaozhong Ji, Junwei Zhu, Chengfei Cai, Shiyu Tang, Qin Lin, Xiu Li, Qinglin Lu
We introduce HunyuanPortrait, a diffusion-based condition control method that employs implicit representations for highly controllable and lifelike portrait animation. Given a single portrait image as an appearance reference and video clips as driving templates, HunyuanPortrait can animate the character in the referenc...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18854
MC-LLaVA: Multi-Concept Personalized Vision-Language Model
Ruichuan An, Sihan Yang, Ming Lu, Renrui Zhang, Kai Zeng, Yulin Luo, Jiajun Cao, Hao Liang, Ying Chen, Qi She, Shanghang Zhang, Wentao Zhang
Current vision-language models (VLMs) show exceptional abilities across diverse tasks, such as visual question answering. To enhance user experience, recent studies investigate VLM personalization to understand user-provided concepts. However, they mainly focus on single-concept personalization, neglecting the existenc...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18853
3DSwapping: Texture Swapping For 3D Object From Single Reference Image
Xiao Cao, Beibei Lin, Bo Wang, Zhiyong Huang, Robby T. Tan
3D texture swapping allows for the customization of 3D object textures, enabling efficient and versatile visual transformations in 3D editing. While no dedicated method exists, adapted 2D editing and text-driven 3D editing approaches can serve this purpose. However, 2D editing requires frame-by-frame manipulation, caus...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18830
DAGait: Generalized Skeleton-Guided Data Alignment for Gait Recognition
Zhengxian Wu, Chuanrui Zhang, Hangrui Xu, Peng Jiao, Haoqian Wang
Gait recognition is emerging as a promising and innovative area within the field of computer vision, widely applied to remote person identification. Although existing gait recognition methods have achieved substantial success in controlled laboratory datasets, their performance often declines significantly when transit...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18817
Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal Representations
Jeonghyeon Kim, Sangheum Hwang
Prior research on out-of-distribution detection (OoDD) has primarily focused on single-modality models. Recently, with the advent of large-scale pretrained vision-language models such as CLIP, OoDD methods utilizing such multi-modal representations through zero-shot and prompt learning strategies have emerged. However,...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18812
SKDU at De-Factify 4.0: Vision Transformer with Data Augmentation for AI-Generated Image Detection
Shrikant Malviya, Neelanjan Bhowmik, Stamos Katsigiannis
The aim of this work is to explore the potential of pre-trained vision-language models, e.g. Vision Transformers (ViT), enhanced with advanced data augmentation strategies for the detection of AI-generated images. Our approach leverages a fine-tuned ViT model trained on the Defactify-4.0 dataset, which includes images ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18808
CRCL: Causal Representation Consistency Learning for Anomaly Detection in Surveillance Videos
Yang Liu, Hongjin Wang, Zepu Wang, Xiaoguang Zhu, Jing Liu, Peng Sun, Rui Tang, Jianwei Du, Victor C.M. Leung, Liang Song
Video Anomaly Detection (VAD) remains a fundamental yet formidable task in the video understanding community, with promising applications in areas such as information forensics and public safety protection. Due to the rarity and diversity of anomalies, existing methods only use easily collected regular events to model ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18803
Change3D: Revisiting Change Detection and Captioning from A Video Modeling Perspective
Duowang Zhu, Xiaohu Huang, Haiyan Huang, Hao Zhou, Zhenfeng Shao
In this paper, we present Change3D, a framework that reconceptualizes the change detection and captioning tasks through video modeling. Recent methods have achieved remarkable success by regarding each pair of bi-temporal images as separate frames. They employ a shared-weight image encoder to extract spatial features a...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18794
NexusGS: Sparse View Synthesis with Epipolar Depth Priors in 3D Gaussian Splatting
Yulong Zheng, Zicheng Jiang, Shengfeng He, Yandu Sun, Junyu Dong, Huaidong Zhang, Yong Du
Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) have noticeably advanced photo-realistic novel view synthesis using images from densely spaced camera viewpoints. However, these methods struggle in few-shot scenarios due to limited supervision. In this paper, we present NexusGS, a 3DGS-based approach that ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18785
LGI-DETR: Local-Global Interaction for UAV Object Detection
Zifa Chen
UAVs have been widely used in various fields. However, most of the existing object detectors used in drones are not end-to-end and require the design of various complex components and careful fine-tuning. Most of the existing end-to-end object detectors are designed for natural scenes. It is not ideal to apply them direc...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18784
Leveraging Perturbation Robustness to Enhance Out-of-Distribution Detection
Wenxi Chen, Raymond A. Yeh, Shaoshuai Mou, Yan Gu
Out-of-distribution (OOD) detection is the task of identifying inputs that deviate from the training data distribution. This capability is essential for safely deploying deep computer vision models in open-world environments. In this work, we propose a post-hoc method, Perturbation-Rectified OOD detection (PRO), based ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18783
Frequency Dynamic Convolution for Dense Image Prediction
Linwei Chen, Lin Gu, Liang Li, Chenggang Yan, Ying Fu
While Dynamic Convolution (DY-Conv) has shown promising performance by enabling adaptive weight selection through multiple parallel weights combined with an attention mechanism, the frequency response of these weights tends to exhibit high similarity, resulting in high parameter costs but limited adaptability. In this ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18767
Good Keypoints for the Two-View Geometry Estimation Problem
Konstantin Pakulev, Alexander Vakhitov, Gonzalo Ferrer
Local features are essential to many modern downstream applications. Therefore, it is of interest to determine the properties of local features that contribute to the downstream performance for a better design of feature detectors and descriptors. In our work, we propose a new theoretical model for scoring feature poin...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18755
EgoSurgery-HTS: A Dataset for Egocentric Hand-Tool Segmentation in Open Surgery Videos
Nathan Darjana, Ryo Fujii, Hideo Saito, Hiroki Kajita
Egocentric open-surgery videos capture rich, fine-grained details essential for accurately modeling surgical procedures and human behavior in the operating room. A detailed, pixel-level understanding of hands and surgical tools is crucial for interpreting a surgeon's actions and intentions. We introduce EgoSurgery-HTS,...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18753
Self-Supervised Learning based on Transformed Image Reconstruction for Equivariance-Coherent Feature Representation
Qin Wang, Benjamin Bruns, Hanno Scharr, Kai Krajsek
The equivariant behaviour of features is essential in many computer vision tasks, yet popular self-supervised learning (SSL) methods tend to constrain equivariance by design. We propose a self-supervised learning approach where the system learns transformations independently by reconstructing images that have undergone...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18746
Linguistics-aware Masked Image Modeling for Self-supervised Scene Text Recognition
Yifei Zhang, Chang Liu, Jin Wei, Xiaomeng Yang, Yu Zhou, Can Ma, Xiangyang Ji
Text images are unique in their dual nature, encompassing both visual and linguistic information. The visual component encompasses structural and appearance-based features, while the linguistic dimension incorporates contextual and semantic elements. In scenarios with degraded visual quality, linguistic patterns serve ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18742
SFDLA: Source-Free Document Layout Analysis
Sebastian Tewes, Yufan Chen, Omar Moured, Jiaming Zhang, Rainer Stiefelhagen
Document Layout Analysis (DLA) is a fundamental task in document understanding. However, existing DLA and adaptation methods often require access to large-scale source data and target labels. These requirements severely limit their real-world applicability, particularly in privacy-sensitive and resource-constrained d...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18725
FG$^2$: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching
Zimin Xia, Alexandre Alahi
We propose a novel fine-grained cross-view localization method that estimates the 3 Degrees of Freedom pose of a ground-level image in an aerial image of the surroundings by matching fine-grained features between the two images. The pose is estimated by aligning a point plane generated from the ground image with a poin...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18719
Boosting Resolution Generalization of Diffusion Transformers with Randomized Positional Encodings
Cong Liu, Liang Hou, Mingwu Zheng, Xin Tao, Pengfei Wan, Di Zhang, Kun Gai
Resolution generalization in image generation tasks enables the production of higher-resolution images with lower training resolution overhead. However, a significant challenge in resolution generalization, particularly in the widely used Diffusion Transformers, lies in the mismatch between the positional encodings enc...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18718
GS-Marker: Generalizable and Robust Watermarking for 3D Gaussian Splatting
Lijiang Li, Jinglu Wang, Xiang Ming, Yan Lu
In the Generative AI era, safeguarding 3D models has become increasingly urgent. While invisible watermarking is well-established for 2D images with encoder-decoder frameworks, generalizable and robust solutions for 3D remain elusive. The main difficulty arises from the renderer between the 3D encoder and 2D decoder, w...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18712
LLaVAction: evaluating and training multi-modal large language models for action recognition
Shaokai Ye, Haozhe Qi, Alexander Mathis, Mackenzie W. Mathis
Understanding human behavior requires measuring behavioral actions. Due to its complexity, behavior is best mapped onto a rich, semantic structure such as language. The recent development of multi-modal large language models (MLLMs) is a promising candidate for a wide range of action understanding tasks. In this work, ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18711
Accenture-NVS1: A Novel View Synthesis Dataset
Thomas Sugg, Kyle O'Brien, Lekh Poudel, Alex Dumouchelle, Michelle Jou, Marc Bosch, Deva Ramanan, Srinivasa Narasimhan, Shubham Tulsiani
This paper introduces ACC-NVS1, a specialized dataset designed for research on Novel View Synthesis specifically for airborne and ground imagery. Data for ACC-NVS1 was collected in Austin, TX and Pittsburgh, PA in 2023 and 2024. The collection encompasses six diverse real-world scenes captured from both airborne and gr...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18709
Revisiting Automatic Data Curation for Vision Foundation Models in Digital Pathology
Boqi Chen, Cédric Vincent-Cuaz, Lydia A. Schoenpflug, Manuel Madeira, Lisa Fournier, Vaishnavi Subramanian, Sonali Andani, Samuel Ruiperez-Campillo, Julia E. Vogt, Raphaëlle Luisier, Dorina Thanou, Viktor H. Koelzer, Pascal Frossard, Gabriele Campanella, Gunnar Rätsch
Vision foundation models (FMs) are accelerating the development of digital pathology algorithms and transforming biomedical research. These models learn, in a self-supervised manner, to represent histological features in highly heterogeneous tiles extracted from whole-slide images (WSIs) of real-world patient samples. ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18705
Benchmarking Burst Super-Resolution for Polarization Images: Noise Dataset and Analysis
Inseung Hwang, Kiseok Choi, Hyunho Ha, Min H. Kim
Snapshot polarization imaging calculates polarization states from linearly polarized subimages. To achieve this, a polarization camera employs a double Bayer-patterned sensor to capture both color and polarization. However, it demonstrates low light efficiency and low spatial resolution, resulting in increased noise and comprom...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18703
Channel Consistency Prior and Self-Reconstruction Strategy Based Unsupervised Image Deraining
Guanglu Dong, Tianheng Zheng, Yuanzhouhan Cao, Linbo Qing, Chao Ren
Recently, deep image deraining models based on paired datasets have made a series of remarkable progress. However, they cannot be well applied in real-world applications due to the difficulty of obtaining real paired datasets and the poor generalization performance. In this paper, we propose a novel Channel Consistency...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18695
OCRT: Boosting Foundation Models in the Open World with Object-Concept-Relation Triad
Luyao Tang, Yuxuan Yuan, Chaoqi Chen, Zeyu Zhang, Yue Huang, Kun Zhang
Although foundation models (FMs) claim to be powerful, their generalization ability significantly decreases when faced with distribution shifts, weak supervision, or malicious attacks in the open world. On the other hand, most domain generalization or adversarial fine-tuning methods are task-related or model-specific, ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18682
Hardware-Rasterized Ray-Based Gaussian Splatting
Samuel Rota Bulò, Nemanja Bartolovic, Lorenzo Porzi, Peter Kontschieder
We present a novel, hardware rasterized rendering approach for ray-based 3D Gaussian Splatting (RayGS), obtaining both fast and high-quality results for novel view synthesis. Our work contains a mathematically rigorous and geometrically intuitive derivation about how to efficiently estimate all relevant quantities for ...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18678
NullSwap: Proactive Identity Cloaking Against Deepfake Face Swapping
Tianyi Wang, Harry Cheng, Xiao Zhang, Yinglong Wang
Suffering from performance bottlenecks in passively detecting high-quality Deepfake images due to the advancement of generative models, proactive perturbations offer a promising approach to disabling Deepfake manipulations by inserting signals into benign images. However, existing proactive perturbation approaches rema...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18674
Human Motion Unlearning
Edoardo De Matteis, Matteo Migliarini, Alessio Sampieri, Indro Spinelli, Fabio Galasso
We introduce the task of human motion unlearning to prevent the synthesis of toxic animations while preserving the general text-to-motion generative performance. Unlearning toxic motions is challenging as those can be generated from explicit text prompts and from implicit toxic combinations of safe motions (e.g., ``kic...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18673
Any6D: Model-free 6D Pose Estimation of Novel Objects
Taeyeop Lee, Bowen Wen, Minjun Kang, Gyuree Kang, In So Kweon, Kuk-Jin Yoon
We introduce Any6D, a model-free framework for 6D object pose estimation that requires only a single RGB-D anchor image to estimate both the 6D pose and size of unknown objects in novel scenes. Unlike existing methods that rely on textured 3D models or multiple viewpoints, Any6D leverages a joint object alignment proce...
CV
24/03/2025
https://doi.org/10.48550/arXiv.2503.18672
Feature Calibration enhanced Parameter Synthesis for CLIP-based Class-incremental Learning
Juncen Guo, Xiaoguang Zhu, Lianlong Sun, Liangyu Teng, Di Li, Yang Liu, Liang Song
Class-incremental Learning (CIL) enables models to continuously learn new class knowledge while memorizing previous classes, facilitating their adaptation and evolution in dynamic environments. Traditional CIL methods are mainly based on visual features, which limits their ability to handle complex scenarios. In contra...
CV
24/03/2025