Column        Type            Range / lengths
Unnamed: 0    int64           0 to 2.72k
title         stringlengths   14 to 153
Arxiv link    stringlengths   1 to 31
authors       stringlengths   5 to 1.5k
arxiv_id      float64         2k to 2.41k
abstract      stringlengths   435 to 2.86k
Model         stringclasses   1 value
GitHub        stringclasses   1 value
Space         stringclasses   1 value
Dataset       stringclasses   1 value
id            int64           0 to 2.72k
100
Unsupervised Occupancy Learning from Sparse Point Cloud
http://arxiv.org/abs/2404.02759
Amine Ouasfi, Adnane Boukhayma
2404.02759
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities, encompassing a wide range from 3D shapes to images and audio. Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encod...
[]
[]
[]
[]
100
101
Extreme Point Supervised Instance Segmentation
Hyeonjun Lee, Sehyun Hwang, Suha Kwak
null
This paper introduces a novel approach to learning instance segmentation using extreme points, i.e., the topmost, leftmost, bottommost, and rightmost points of each object. These points are readily available in the modern bounding box annotation process while offering strong clues for precise segmentation, and thus allows to...
[]
[]
[]
[]
101
102
3DInAction: Understanding Human Actions in 3D Point Clouds
http://arxiv.org/abs/2303.06346
Yizhak Ben-Shabat, Oren Shrout, Stephen Gould
2303.06346
We propose a novel method for 3D point cloud action recognition. Understanding human actions in RGB videos has been widely studied in recent years; however, its 3D point cloud counterpart remains under-explored, despite the clear value that 3D information may bring. This is mostly due to the inherent limitation of the poi...
[]
[]
[]
[]
102
103
Cache Me if You Can: Accelerating Diffusion Models through Block Caching
http://arxiv.org/abs/2312.03209
Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Xiaoliang Dai, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, Christian Rupprecht, Daniel Cremers, Peter Vajda, Jialiang Wang
2312.03209
Diffusion models have recently revolutionized the field of image synthesis due to their ability to generate photorealistic images. However, one of the major drawbacks of diffusion models is that the image generation process is costly. A large image-to-image network has to be applied many times to iteratively refine an i...
[]
[]
[]
[]
103
104
MedM2G: Unifying Medical Multi-Modal Generation via Cross-Guided Diffusion with Visual Invariant
http://arxiv.org/abs/2403.04290
Chenlu Zhan, Yu Lin, Gaoang Wang, Hongwei Wang, Jian Wu
2403.04290
Medical generative models, acknowledged for their high-quality sample generation ability, have accelerated the fast growth of medical applications. However, recent works concentrate on separate medical generation models for distinct medical tasks and are restricted to inadequate medical multi-modal knowledge, constraining ...
[]
[]
[]
[]
104
105
SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection
http://arxiv.org/abs/2402.17323
Junsu Kim, Hoseong Cho, Jihyeon Kim, Yihalem Yimolal Tiruneh, Seungryul Baek
2402.17323
In the field of class incremental learning (CIL), generative replay has become increasingly prominent as a method to mitigate catastrophic forgetting, alongside the continuous improvements in generative models. However, its application in class incremental object detection (CIOD) has been significantly limited primari...
[]
[]
[]
[]
105
106
Neural Parametric Gaussians for Monocular Non-Rigid Object Reconstruction
http://arxiv.org/abs/2312.01196
Devikalyan Das, Christopher Wewer, Raza Yunus, Eddy Ilg, Jan Eric Lenssen
2312.01196
Reconstructing dynamic objects from monocular videos is a severely underconstrained and challenging problem, and recent work has approached it in various directions. However, owing to the ill-posed nature of this problem, there has been no solution that can provide consistent, high-quality novel views from camera positions...
[]
[]
[]
[]
106
107
Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving
http://arxiv.org/abs/2403.17301
Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen
2403.17301
Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks. Previous physical attacks against MDE models rely on 2D adversarial patches, so they only affect a small, localized region in the MDE map but fail under various viewpoints. To a...
[]
[]
[]
[]
107
108
Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks
http://arxiv.org/abs/2403.10097
Shin'ya Yamaguchi, Sekitoshi Kanai, Kazuki Adachi, Daiki Chijiwa
2403.10097
While fine-tuning is a de facto standard method for training deep neural networks, it still suffers from overfitting when using small target datasets. Previous methods improve fine-tuning performance by maintaining knowledge of the source datasets or introducing regularization terms such as contrastive loss. However, the...
[]
[]
[]
[]
108
109
PH-Net: Semi-Supervised Breast Lesion Segmentation via Patch-wise Hardness
Siyao Jiang, Huisi Wu, Junyang Chen, Qin Zhang, Jing Qin
null
We present a novel semi-supervised framework for breast ultrasound (BUS) image segmentation, which is a very challenging task owing to (1) large scale and shape variations of breast lesions, and (2) extremely ambiguous boundaries caused by massive speckle noise and artifacts in BUS images. While existing models achieved ...
[]
[]
[]
[]
109
110
Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration
http://arxiv.org/abs/2312.02918
Yuang Ai, Huaibo Huang, Xiaoqiang Zhou, Jiexiang Wang, Ran He
2312.02918
Despite substantial progress, all-in-one image restoration (IR) grapples with persistent challenges in handling intricate real-world degradations. This paper introduces MPerceiver: a novel multimodal prompt learning approach that harnesses Stable Diffusion (SD) priors to enhance adaptiveness, generalizability, and fidelit...
[]
[]
[]
[]
110
111
ExACT: Language-guided Conceptual Reasoning and Uncertainty Estimation for Event-based Action Recognition and More
http://arxiv.org/abs/2403.12534
Jiazhou Zhou, Xu Zheng, Yuanhuiyi Lyu, Lin Wang
2403.12534
Event cameras have recently been shown beneficial for practical vision tasks such as action recognition, thanks to their high temporal resolution, power efficiency, and reduced privacy concerns. However, current research is hindered by 1) the difficulty in processing events because of their prolonged duration and dynamic a...
[]
[]
[]
[]
111
112
Color Shift Estimation-and-Correction for Image Enhancement
http://arxiv.org/abs/2405.17725
Yiyu Li, Ke Xu, Gerhard Petrus Hancke, Rynson W.H. Lau
2405.17725
Images captured under sub-optimal illumination conditions may contain both over- and under-exposures. We observe that over- and under-exposed regions display opposite color tone distribution shifts, which may not be easily normalized in joint modeling, as they usually do not have "normal-exposed" regions/pixels as referen...
[]
[]
[]
[]
112
113
Improving Visual Recognition with Hyperbolical Visual Hierarchy Mapping
http://arxiv.org/abs/2404.00974
Hyeongjun Kwon, Jinhyun Jang, Jin Kim, Kwonyoung Kim, Kwanghoon Sohn
2404.00974
Visual scenes are naturally organized in a hierarchy, where a coarse semantic is recursively comprised of several fine details. Exploring such a visual hierarchy is crucial to recognize the complex relations of visual elements, leading to a comprehensive scene understanding. In this paper, we propose a Visual Hierarchy Ma...
[]
[]
[]
[]
113
114
ParameterNet: Parameters Are All You Need for Large-scale Visual Pretraining of Mobile Networks
Kai Han, Yunhe Wang, Jianyuan Guo, Enhua Wu
null
Large-scale visual pretraining has significantly improved the performance of large vision models. However, we observe the low-FLOPs pitfall: existing low-FLOPs models cannot benefit from large-scale pretraining. In this paper, we introduce a novel design principle, termed ParameterNet, aimed at augmenting the nu...
[]
[]
[]
[]
114
115
Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
http://arxiv.org/abs/2312.02145
Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler
2312.02145
Monocular depth estimation is a fundamental computer vision task. Recovering 3D depth from a single image is geometrically ill-posed and requires scene understanding, so it is not surprising that the rise of deep learning has led to a breakthrough. The impressive progress of monocular depth estimators has mirrored the g...
[]
[]
[]
[]
115
116
Identifying Important Group of Pixels using Interactions
http://arxiv.org/abs/2401.03785
Kosuke Sumiyasu, Kazuhiko Kawamoto, Hiroshi Kera
2401.03785
To better understand the behavior of image classifiers, it is useful to visualize the contribution of individual pixels to the model prediction. In this study, we propose a method, MoXI (Model eXplanation by Interactions), that efficiently and accurately identifies a group of pixels with high prediction confidence. The prop...
[]
[]
[]
[]
116
117
Towards Scalable 3D Anomaly Detection and Localization: A Benchmark via 3D Anomaly Synthesis and A Self-Supervised Learning Network
http://arxiv.org/abs/2311.14897
Wenqiao Li, Xiaohao Xu, Yao Gu, Bozhong Zheng, Shenghua Gao, Yingna Wu
2311.14897
Recently, 3D anomaly detection, a crucial problem involving fine-grained geometry discrimination, is getting more attention. However, the lack of abundant real 3D anomaly data limits the scalability of current models. To enable scalable anomaly data collection, we propose a 3D anomaly synthesis pipeline to adapt existing la...
[]
[]
[]
[]
117
118
Cam4DOcc: Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications
http://arxiv.org/abs/2311.17663
Junyi Ma, Xieyuanli Chen, Jiawei Huang, Jingyi Xu, Zhen Luo, Jintao Xu, Weihao Gu, Rui Ai, Hesheng Wang
2311.17663
Understanding how the surrounding environment changes is crucial for performing downstream tasks safely and reliably in autonomous driving applications. Recent occupancy estimation techniques using only camera images as input can provide dense occupancy representations of large-scale scenes based on the current observa...
[]
[]
[]
[]
118
119
DIOD: Self-Distillation Meets Object Discovery
Sandra Kara, Hejer Ammar, Julien Denize, Florian Chabot, Quoc-Cuong Pham
null
Instance segmentation demands substantial labeling resources. This has prompted increased interest in exploring the object discovery task as an unsupervised alternative. In particular, promising results were achieved in localizing instances using motion supervision only. However, the motion signal introduces complexities d...
[]
[]
[]
[]
119
120
GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh
http://arxiv.org/abs/2404.07991
Jing Wen, Xiaoming Zhao, Zhongzheng Ren, Alexander G. Schwing, Shenlong Wang
2404.07991
We introduce GoMAvatar, a novel approach for real-time, memory-efficient, high-quality animatable human modeling. GoMAvatar takes as input a single monocular video to create a digital avatar capable of re-articulation in new poses and real-time rendering from novel viewpoints, while seamlessly integrating with rasterizatio...
[]
[]
[]
[]
120
121
Neural Redshift: Random Networks are not Random Functions
http://arxiv.org/abs/2403.02241
Damien Teney, Armand Mihai Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad
2403.02241
Our understanding of the generalization capabilities of neural networks (NNs) is still incomplete. Prevailing explanations are based on implicit biases of gradient descent (GD), but they cannot account for the capabilities of models from gradient-free methods, nor the simplicity bias recently observed in untrained networks. Th...
[]
[]
[]
[]
121
122
HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting
http://arxiv.org/abs/2311.17061
Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, Ziwei Liu
2311.17061
Realistic 3D human generation from text prompts is a desirable yet challenging task. Existing methods optimize 3D representations like mesh or neural fields via score distillation sampling (SDS), which suffers from inadequate fine details or excessive training time. In this paper, we propose an efficient yet effective fr...
[]
[]
[]
[]
122
123
DIEM: Decomposition-Integration Enhancing Multimodal Insights
Xinyi Jiang, Guoming Wang, Junhao Guo, Juncheng Li, Wenqiao Zhang, Rongxing Lu, Siliang Tang
null
In image question answering, due to the abundant and sometimes redundant information, precisely matching and integrating the information from both text and images is a challenge. In this paper, we propose the Decomposition-Integration Enhancing Multimodal Insight (DIEM), which initially decomposes the given question and im...
[]
[]
[]
[]
123
124
CosmicMan: A Text-to-Image Foundation Model for Humans
http://arxiv.org/abs/2404.01294
Shikai Li, Jianglin Fu, Kaiyuan Liu, Wentao Wang, Kwan-Yee Lin, Wayne Wu
2404.01294
We present CosmicMan, a text-to-image foundation model specialized for generating high-fidelity human images. Unlike current general-purpose foundation models that are stuck in the dilemma of inferior quality and text-image misalignment for humans, CosmicMan enables generating photo-realistic human images with meticulous...
[]
[]
[]
[]
124
125
LLMs are Good Sign Language Translators
http://arxiv.org/abs/2404.00925
Jia Gong, Lin Geng Foo, Yixuan He, Hossein Rahmani, Jun Liu
2404.00925
Sign Language Translation (SLT) is a challenging task that aims to translate sign videos into spoken language. Inspired by the strong translation capabilities of large language models (LLMs) that are trained on extensive multilingual text corpora, we aim to harness off-the-shelf LLMs to handle SLT. In this paper, we regu...
[]
[]
[]
[]
125
126
Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment
http://arxiv.org/abs/2403.10066
Ziyu Shan, Yujie Zhang, Qi Yang, Haichen Yang, Yiling Xu, Jenq-Neng Hwang, Xiaozhong Xu, Shan Liu
2403.10066
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference, which has achieved tremendous improvements due to the utilization of deep neural networks. However, learning-based NR-PCQA methods suffer from the scarcity of...
[]
[]
[]
[]
126
127
JDEC: JPEG Decoding via Enhanced Continuous Cosine Coefficients
http://arxiv.org/abs/2404.05558
Woo Kyoung Han, Sunghoon Im, Jaedeok Kim, Kyong Hwan Jin
2404.05558
We propose a practical approach to JPEG image decoding, utilizing a local implicit neural representation with continuous cosine formulation. The JPEG algorithm significantly quantizes discrete cosine transform (DCT) spectra to achieve a high compression rate, inevitably resulting in quality degradation while encoding an ...
[]
[]
[]
[]
127
128
Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer
http://arxiv.org/abs/2311.12905
Wenqiao Zhang, Zheqi Lv, Hao Zhou, Jia-Wei Liu, Juncheng Li, Mengze Li, Yunfei Li, Dongping Zhang, Yueting Zhuang, Siliang Tang
2311.12905
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate. This setting neglects the more practical scenario where training data are collected from multiple sources. This motivates us to extend ADA from a single sourc...
[]
[]
[]
[]
128
129
Learning Continual Compatible Representation for Re-indexing Free Lifelong Person Re-identification
Zhenyu Cui, Jiahuan Zhou, Xun Wang, Manyu Zhu, Yuxin Peng
null
Lifelong Person Re-identification (L-ReID) aims to learn from sequentially collected data to match a person across different scenes. Once an L-ReID model is updated using new data, all historical images in the gallery are required to be re-calculated to obtain new features for testing, known as "re-indexing". However, it ...
[]
[]
[]
[]
129
130
Revisiting Spatial-Frequency Information Integration from a Hierarchical Perspective for Panchromatic and Multi-Spectral Image Fusion
Jiangtong Tan, Jie Huang, Naishan Zheng, Man Zhou, Keyu Yan, Danfeng Hong, Feng Zhao
null
Pan-sharpening is a super-resolution problem that essentially relies on spectra fusion of panchromatic (PAN) images and low-resolution multi-spectral (LRMS) images. Previous methods have validated the effectiveness of information fusion in the Fourier space of the whole image. However, they haven't fully explored th...
[]
[]
[]
[]
130
131
BSNet: Box-Supervised Simulation-assisted Mean Teacher for 3D Instance Segmentation
http://arxiv.org/abs/2403.15019
Jiahao Lu, Jiacheng Deng, Tianzhu Zhang
2403.15019
3D instance segmentation (3DIS) is a crucial task, but point-level annotations are tedious in fully supervised settings. Thus, using bounding boxes (bboxes) as annotations has shown great potential. The current mainstream approach is a two-step process involving the generation of pseudo-labels from box annotations and th...
[]
[]
[]
[]
131
132
Adaptive Slot Attention: Object Discovery with Dynamic Slot Number
Ke Fan, Zechen Bai, Tianjun Xiao, Tong He, Max Horn, Yanwei Fu, Francesco Locatello, Zheng Zhang
null
Object-centric learning (OCL) extracts the representation of objects with slots, offering an exceptional blend of flexibility and interpretability for abstracting low-level perceptual features. A widely adopted method within OCL is slot attention, which utilizes attention mechanisms to iteratively refine slot representat...
[]
[]
[]
[]
132
133
CORES: Convolutional Response-based Score for Out-of-distribution Detection
Keke Tang, Chao Hou, Weilong Peng, Runnan Chen, Peican Zhu, Wenping Wang, Zhihong Tian
null
Deep neural networks (DNNs) often display overconfidence when encountering out-of-distribution (OOD) samples, posing significant challenges in real-world applications. Capitalizing on the observation that responses on convolutional kernels are generally more pronounced for in-distribution (ID) samples than for OOD ones ...
[]
[]
[]
[]
133
134
Task-Driven Wavelets using Constrained Empirical Risk Minimization
Eric Marcus, Ray Sheombarsing, Jan-Jakob Sonke, Jonas Teuwen
null
Deep Neural Networks (DNNs) are widely used for their ability to effectively approximate large classes of functions. This flexibility, however, makes the strict enforcement of constraints on DNNs a difficult problem. In contexts where it is critical to limit the function space to which certain network components belong s...
[]
[]
[]
[]
134
135
HOI-M^3: Capture Multiple Humans and Objects Interaction within Contextual Environment
Juze Zhang, Jingyan Zhang, Zining Song, Zhanhe Shi, Chengfeng Zhao, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang
null
Humans naturally interact with both others and the surrounding multiple objects, engaging in various social activities. However, recent advances in modeling human-object interactions mostly focus on perceiving isolated individuals and objects due to fundamental data scarcity. In this paper, we introduce HOI-M^3, a novel la...
[]
[]
[]
[]
135
136
Interactive3D: Create What You Want by Interactive 3D Generation
http://arxiv.org/abs/2404.16510
Shaocong Dong, Lihe Ding, Zhanpeng Huang, Zibin Wang, Tianfan Xue, Dan Xu
2404.16510
3D object generation has undergone significant advancements, yielding high-quality results. However, current methods fall short in achieving precise user control, often yielding results that do not align with user expectations, thus limiting their applicability. User-envisioning 3D object generation faces significant challenges in realizi...
[]
[]
[]
[]
136
137
DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets
http://arxiv.org/abs/2404.02900
Harsh Rangwani, Pradipto Mondal, Mayank Mishra, Ashish Ramayee Asokan, R. Venkatesh Babu
2404.02900
Vision Transformer (ViT) has emerged as a prominent architecture for various computer vision tasks. In ViT, we divide the input image into patch tokens and process them through a stack of self-attention blocks. However, unlike Convolutional Neural Networks (CNNs), ViT's simple architecture has no informative inductive bias ...
[]
[]
[]
[]
137
138
Accurate Spatial Gene Expression Prediction by Integrating Multi-Resolution Features
http://arxiv.org/abs/2403.07592
Youngmin Chung, Ji Hun Ha, Kyeong Chan Im, Joo Sang Lee
2403.07592
Recent advancements in Spatial Transcriptomics (ST) technology have facilitated detailed gene expression analysis within tissue contexts. However, the high costs and methodological limitations of ST necessitate a more robust predictive model. In response, this paper introduces TRIPLEX, a novel deep learning framework desi...
[]
[]
[]
[]
138
139
FCS: Feature Calibration and Separation for Non-Exemplar Class Incremental Learning
Qiwei Li, Yuxin Peng, Jiahuan Zhou
null
Non-Exemplar Class Incremental Learning (NECIL) involves learning a classification model on a sequence of data without access to exemplars from previously encountered old classes. Such a stringent constraint always leads to catastrophic forgetting of the learned knowledge. Currently, existing methods either employ knowl...
[]
[]
[]
[]
139
140
Task2Box: Box Embeddings for Modeling Asymmetric Task Relationships
http://arxiv.org/abs/2403.17173
Rangel Daroya, Aaron Sun, Subhransu Maji
2403.17173
Modeling and visualizing relationships between tasks or datasets is an important step towards solving various meta-tasks such as dataset discovery, multi-tasking, and transfer learning. However, many relationships, such as containment and transferability, are naturally asymmetric, and current approaches for representation an...
[]
[]
[]
[]
140
141
Behind the Veil: Enhanced Indoor 3D Scene Reconstruction with Occluded Surfaces Completion
http://arxiv.org/abs/2404.03070
Su Sun, Cheng Zhao, Yuliang Guo, Ruoyu Wang, Xinyu Huang, Yingjie Victor Chen, Liu Ren
2404.03070
In this paper, we present a novel indoor 3D reconstruction method with occluded surface completion given a sequence of depth readings. Prior state-of-the-art (SOTA) methods only focus on the reconstruction of the visible areas in a scene, neglecting the invisible areas due to the occlusions, e.g., the contact surface betwe...
[]
[]
[]
[]
141
142
VideoGrounding-DINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding
Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan
null
Video grounding aims to localize a spatio-temporal section in a video corresponding to an input text query. This paper addresses a critical limitation in current video grounding methodologies by introducing an Open-Vocabulary Spatio-Temporal Video Grounding task. Unlike prevalent closed-set approaches that struggle wit...
[]
[]
[]
[]
142
143
OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos
http://arxiv.org/abs/2404.00676
Dongyoung Choi, Hyeonjoong Jang, Min H. Kim
2404.00676
Omnidirectional cameras are extensively used in various applications to provide a wide field of vision. However, they face a challenge in synthesizing novel views due to the inevitable presence of dynamic objects, including the photographer, in their wide field of view. In this paper, we introduce a new approach called Omn...
[]
[]
[]
[]
143
144
LoS: Local Structure-Guided Stereo Matching
Kunhong Li, Longguang Wang, Ye Zhang, Kaiwen Xue, Shunbo Zhou, Yulan Guo
null
Estimating disparities in challenging areas is difficult and limits the performance of stereo matching models. In this paper, we exploit local structure information (LSI) to enhance stereo matching. Specifically, our LSI comprises a series of key elements, including the slant plane (parameterised by disparity gradients) d...
[]
[]
[]
[]
144
145
Semantic Human Mesh Reconstruction with Textures
http://arxiv.org/abs/2403.02561
Xiaoyu Zhan, Jianxin Yang, Yuanqi Li, Jie Guo, Yanwen Guo, Wenping Wang
2403.02561
The field of 3D detailed human mesh reconstruction has made significant progress in recent years. However, current methods still face challenges when used in industrial applications due to unstable results, low-quality meshes, and a lack of UV unwrapping and skinning weights. In this paper, we present SHERT, a novel pipelin...
[]
[]
[]
[]
145
146
Think Twice Before Selection: Federated Evidential Active Learning for Medical Image Analysis with Domain Shifts
http://arxiv.org/abs/2312.02567
Jiayi Chen, Benteng Ma, Hengfei Cui, Yong Xia
2312.02567
Federated learning facilitates the collaborative learning of a global model across multiple distributed medical institutions without centralizing data. Nevertheless, the expensive cost of annotation on local clients remains an obstacle to effectively utilizing local data. To mitigate this issue, federated active learning...
[]
[]
[]
[]
146
147
Probing the 3D Awareness of Visual Foundation Models
http://arxiv.org/abs/2404.08636
Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanzhen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, Varun Jampani
2404.08636
Recent advances in large-scale pretraining have yielded visual foundation models with strong capabilities. Not only can recent models generalize to arbitrary images for their training task, their intermediate representations are useful for other visual tasks such as detection and segmentation. Given that such models can...
[]
[]
[]
[]
147
148
PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models
http://arxiv.org/abs/2312.13964
Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen
2312.13964
Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, animating these personalized images with realistic motions poses significant challenges in preserving distinct styles, high-fidelity d...
[]
[]
[]
[]
148
149
When Visual Grounding Meets Gigapixel-level Large-scale Scenes: Benchmark and Approach
Tao Ma, Bing Bai, Haozhe Lin, Heyuan Wang, Yu Wang, Lin Luo, Lu Fang
null
Visual grounding refers to the process of associating natural language expressions with corresponding regions within an image. Existing benchmarks for visual grounding primarily operate within small-scale scenes with a few objects. Nevertheless, recent advances in imaging technology have enabled the acquisition of gigap...
[]
[]
[]
[]
149
150
NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs
http://arxiv.org/abs/2402.08622
Michael Fischer, Zhengqin Li, Thu Nguyen-Phuoc, Aljaz Bozic, Zhao Dong, Carl Marshall, Tobias Ritschel
2402.08622
A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene. We here ask whether we can transfer the appearance from a source NeRF onto a target 3D geometry in a semantically meaningful way, such that the resulting new NeRF retains the target geometry but has an app...
[]
[]
[]
[]
150
151
Mind Artist: Creating Artistic Snapshots with Human Thought
Jiaxuan Chen, Yu Qi, Yueming Wang, Gang Pan
null
We introduce Mind Artist (MindArt), a novel and efficient neural decoding architecture to snap artistic photographs from our mind in a controllable manner. Recently, progress has been made in image reconstruction with non-invasive brain recordings, but it's still difficult to generate realistic images with high semantic f...
[]
[]
[]
[]
151
152
ViTamin: Designing Scalable Vision Models in the Vision-Language Era
http://arxiv.org/abs/2404.02132
Jieneng Chen, Qihang Yu, Xiaohui Shen, Alan Yuille, Liang-Chieh Chen
2404.02132
Recent breakthroughs in vision-language models (VLMs) start a new page in the vision community. The VLMs provide stronger and more generalizable feature embeddings compared to those from ImageNet-pretrained models, thanks to training on large-scale Internet image-text pairs. However, despite the amazing achieveme...
[]
[]
[]
[]
152
153
Accept the Modality Gap: An Exploration in the Hyperbolic Space
Sameera Ramasinghe, Violetta Shevchenko, Gil Avraham, Ajanthan Thalaiyasingam
null
Recent advancements in machine learning have spotlighted the potential of hyperbolic spaces, as they effectively learn hierarchical feature representations. While there has been progress in leveraging hyperbolic spaces in single-modality contexts, their use in multimodal settings remains underexplored. Some recent...
[]
[]
[]
[]
153
154
Unraveling Instance Associations: A Closer Look for Audio-Visual Segmentation
http://arxiv.org/abs/2304.02970
Yuanhong Chen, Yuyuan Liu, Hu Wang, Fengbei Liu, Chong Wang, Helen Frazer, Gustavo Carneiro
2304.02970
Audio-visual segmentation (AVS) is a challenging task that involves accurately segmenting sounding objects based on audio-visual cues. The effectiveness of audio-visual learning critically depends on achieving accurate cross-modal alignment between sound and visual objects. Successful audio-visual learning requires two...
[]
[]
[]
[]
154
155
Few-Shot Object Detection with Foundation Models
Guangxing Han, Ser-Nam Lim
null
Few-shot object detection (FSOD) aims to detect objects with only a few training examples. Visual feature extraction and query-support similarity learning are the two critical components. Existing works are usually developed based on ImageNet pre-trained vision backbones and design sophisticated metric-learning network...
[]
[]
[]
[]
155
156
FedMef: Towards Memory-efficient Federated Dynamic Pruning
http://arxiv.org/abs/2403.14737
Hong Huang, Weiming Zhuang, Chen Chen, Lingjuan Lyu
2403.14737
Federated learning (FL) promotes decentralized training while prioritizing data confidentiality. However, its application on resource-constrained devices is challenging due to the high demand for computation and memory resources to train deep learning models. Neural network pruning techniques, such as dynamic pruning, cou...
[]
[]
[]
[]
156
157
Seeing the Unseen: Visual Common Sense for Semantic Placement
http://arxiv.org/abs/2401.07770
Ram Ramrakhya, Aniruddha Kembhavi, Dhruv Batra, Zsolt Kira, Kuo-Hao Zeng, Luca Weihs
2401.07770
Computer vision tasks typically involve describing what is visible in an image (e.g. classification, detection, segmentation, and captioning). We study a visual common sense task that requires understanding 'what is not visible'. Specifically, given an image (e.g. of a living room) and a name of an object ("cushion"), a vis...
[]
[]
[]
[]
157
158
Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On
http://arxiv.org/abs/2404.01089
Xu Yang, Changxing Ding, Zhibin Hong, Junhao Huang, Jin Tao, Xiangmin Xu
2404.01089
Image-based virtual try-on is an increasingly important task for online shopping. It aims to synthesize images of a specific person wearing a specified garment. Diffusion model-based approaches have recently become popular, as they are excellent at image synthesis tasks. However, these approaches usually employ additiona...
[]
[]
[]
[]
158
159
PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization
http://arxiv.org/abs/2404.09011
Zining Chen, Weiqiu Wang, Zhicheng Zhao, Fei Su, Aidong Men, Hongying Meng
2404.09011
Domain Generalization (DG) aims to resolve distribution shifts between source and target domains, and current DG methods default to the setting where data from source and target domains share identical categories. Nevertheless, unseen classes from target domains exist in practical scenarios. To address this iss...
[]
[]
[]
[]
159
160
SODA: Bottleneck Diffusion Models for Representation Learning
http://arxiv.org/abs/2311.17901
Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew K. Lampinen, Andrew Jaegle, James L. McClelland, Loic Matthey, Felix Hill, Alexander Lerchner
2,311.17901
We introduce SODA a self-supervised diffusion model designed for representation learning. The model incorporates an image encoder which distills a source view into a compact representation that in turn guides the generation of related novel views. We show that by imposing a tight bottleneck between the encoder and a de...
[]
[]
[]
[]
160
161
Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach
http://arxiv.org/abs/2404.00834
Guoqiang Liang, Kanghao Chen, Hangyu Li, Yunfan Lu, Lin Wang
2,404.00834
Event cameras have recently received much attention for low-light image enhancement (LIE) thanks to their distinct advantages such as high dynamic range. However current research is prohibitively restricted by the lack of large-scale real-world and spatial-temporally aligned event-image datasets. To this end we propose a...
[]
[]
[]
[]
161
162
Zero-Reference Low-Light Enhancement via Physical Quadruple Priors
http://arxiv.org/abs/2403.12933
Wenjing Wang, Huan Yang, Jianlong Fu, Jiaying Liu
2,403.12933
Understanding illumination and reducing the need for supervision pose a significant challenge in low-light enhancement. Current approaches are highly sensitive to data usage during training and illumination-specific hyper-parameters limiting their ability to handle unseen scenarios. In this paper we propose a new zero-...
[]
[]
[]
[]
162
163
LLaMA-Excitor: General Instruction Tuning via Indirect Feature Interaction
Bo Zou, Chao Yang, Yu Qiao, Chengbin Quan, Youjian Zhao
null
Existing methods to fine-tune LLMs like Adapter Prefix-tuning and LoRA which introduce extra modules or additional input sequences to inject new skills or knowledge may compromise the innate abilities of LLMs. In this paper we propose LLaMA-Excitor a lightweight method that stimulates the LLMs' potential to better foll...
[]
[]
[]
[]
163
164
NeRFCodec: Neural Feature Compression Meets Neural Radiance Fields for Memory-Efficient Scene Representation
http://arxiv.org/abs/2404.02185
Sicheng Li, Hao Li, Yiyi Liao, Lu Yu
2,404.02185
The emergence of Neural Radiance Fields (NeRF) has greatly impacted 3D scene modeling and novel-view synthesis. As a kind of visual media for 3D scene representation compression with high rate-distortion performance is an eternal target. Motivated by advances in neural compression and neural field representation we pro...
[]
[]
[]
[]
164
165
From a Bird's Eye View to See: Joint Camera and Subject Registration without the Camera Calibration
Zekun Qian, Ruize Han, Wei Feng, Song Wang
null
We tackle a new problem of multi-view camera and subject registration in the bird's eye view (BEV) without pre-given camera calibration which promotes the multi-view subject registration problem to a new calibration-free stage. This greatly alleviates the limitation in many practical applications. However this is a ver...
[]
[]
[]
[]
165
166
Steerers: A Framework for Rotation Equivariant Keypoint Descriptors
Georg Bökman, Johan Edstedt, Michael Felsberg, Fredrik Kahl
null
Image keypoint descriptions that are discriminative and matchable over large changes in viewpoint are vital for 3D reconstruction. However descriptions output by learned descriptors are typically not robust to camera rotation. While they can be made more robust by e.g. data augmentation this degrades performance on up...
[]
[]
[]
[]
166
167
Efficient Dataset Distillation via Minimax Diffusion
http://arxiv.org/abs/2311.15529
Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen
2,311.15529
Dataset distillation reduces the storage and computational consumption of training a network by generating a small surrogate dataset that encapsulates rich information of the original large-scale one. However previous distillation methods heavily rely on the sample-wise iterative optimization scheme. As the images-per-...
[]
[]
[]
[]
167
168
Posterior Distillation Sampling
http://arxiv.org/abs/2311.13831
Juil Koo, Chanho Park, Minhyuk Sung
2,311.13831
We introduce Posterior Distillation Sampling (PDS) a novel optimization method for parametric image editing based on diffusion models. Existing optimization-based methods which leverage the powerful 2D prior of diffusion models to handle various parametric images have mainly focused on generation. Unlike generation edi...
[]
[]
[]
[]
168
169
HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields
http://arxiv.org/abs/2402.17062
Haozhe Qi, Chen Zhao, Mathieu Salzmann, Alexander Mathis
2,402.17062
Human hands are highly articulated and versatile at handling objects. Jointly estimating the 3D poses of a hand and the object it manipulates from a monocular camera is challenging due to frequent occlusions. Thus existing methods often rely on intermediate 3D shape representations to increase performance. These repres...
[]
[]
[]
[]
169
170
Enhancing Video Super-Resolution via Implicit Resampling-based Alignment
http://arxiv.org/abs/2305.00163
Kai Xu, Ziwei Yu, Xin Wang, Michael Bi Mi, Angela Yao
2,305.00163
In video super-resolution it is common to use a frame-wise alignment to support the propagation of information over time. The role of alignment is well-studied for low-level enhancement in video but existing works overlook a critical step -- resampling. We show through extensive experiments that for alignment to be eff...
[]
[]
[]
[]
170
171
DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis
http://arxiv.org/abs/2312.13016
Yuming Gu, Hongyi Xu, You Xie, Guoxian Song, Yichun Shi, Di Chang, Jing Yang, Linjie Luo
2,312.13016
We present DiffPortrait3D a conditional diffusion model that is capable of synthesizing 3D-consistent photo-realistic novel views from as few as a single in-the-wild portrait. Specifically given a single RGB input we aim to synthesize plausible but consistent facial details rendered from novel camera views with retaine...
[]
[]
[]
[]
171
172
Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery
http://arxiv.org/abs/2403.05419
Mubashir Noman, Muzammal Naseer, Hisham Cholakkal, Rao Muhammad Anwer, Salman Khan, Fahad Shahbaz Khan
2,403.05419
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks by pre-training on large amount of unlabelled data. Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large a...
[]
[]
[]
[]
172
173
LLM4SGG: Large Language Models for Weakly Supervised Scene Graph Generation
http://arxiv.org/abs/2310.10404
Kibum Kim, Kanghoon Yoon, Jaehyeong Jeon, Yeonjun In, Jinyoung Moon, Donghyun Kim, Chanyoung Park
2,310.10404
Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations. In this regard studies on WSSGG have utilized image captions to obtain unlocalized triplets while primarily focusing on grounding the unlocalized t...
[]
[]
[]
[]
173
174
Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model
http://arxiv.org/abs/2311.17112
Zelin Peng, Zhengqin Xu, Zhilin Zeng, Lingxi Xie, Qi Tian, Wei Shen
2,311.17112
Parameter-efficient fine-tuning (PEFT) is an effective methodology to unleash the potential of large foundation models in novel scenarios with limited training data. In the computer vision community PEFT has shown effectiveness in image classification but little research has studied its ability for image segmentation. ...
[]
[]
[]
[]
174
175
Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling
http://arxiv.org/abs/2405.14847
Liwen Wu, Sai Bi, Zexiang Xu, Fujun Luan, Kai Zhang, Iliyan Georgiev, Kalyan Sunkavalli, Ravi Ramamoorthi
2,405.14847
Novel-view synthesis of specular objects like shiny metals or glossy paints remains a significant challenge. Not only the glossy appearance but also global illumination effects including reflections of other objects in the environment are critical components to faithfully reproduce a scene. In this paper we present Neu...
[]
[]
[]
[]
175
176
Masked and Shuffled Blind Spot Denoising for Real-World Images
http://arxiv.org/abs/2404.09389
Hamadi Chihaoui, Paolo Favaro
2,404.09389
We introduce a novel approach to single image denoising based on the Blind Spot Denoising principle which we call MAsked and SHuffled Blind Spot Denoising (MASH). We focus on the case of correlated noise which often plagues real images. MASH is the result of a careful analysis to determine the relationships between the...
[]
[]
[]
[]
176
177
Label Propagation for Zero-shot Classification with Vision-Language Models
Vladan Stojnić, Yannis Kalantidis, Giorgos Tolias
null
Vision-Language Models (VLMs) have demonstrated impressive performance on zero-shot classification i.e. classification when provided merely with a list of class names. In this paper we tackle the case of zero-shot classification in the presence of unlabeled data. We leverage the graph structure of the unlabeled data an...
[]
[]
[]
[]
177
178
DiffusionAvatars: Deferred Diffusion for High-fidelity 3D Head Avatars
Tobias Kirschstein, Simon Giebenhain, Matthias Nießner
null
DiffusionAvatars synthesizes a high-fidelity 3D head avatar of a person offering intuitive control over both pose and expression. We propose a diffusion-based neural renderer that leverages generic 2D priors to produce compelling images of faces. For coarse guidance of the expression and head pose we render a neural pa...
[]
[]
[]
[]
178
179
Data-Free Quantization via Pseudo-label Filtering
Chunxiao Fan, Ziqi Wang, Dan Guo, Meng Wang
null
Quantization for model compression can efficiently reduce the network complexity and storage requirement but the original training data is necessary to remedy the performance loss caused by quantization. The Data-Free Quantization (DFQ) methods have been proposed to handle the absence of original training data with syn...
[]
[]
[]
[]
179
180
Revisiting Global Translation Estimation with Feature Tracks
Peilin Tao, Hainan Cui, Mengqi Rong, Shuhan Shen
null
Global translation estimation is a highly challenging step in the global structure from motion (SfM) algorithm. Many existing methods depend solely on relative translations leading to inaccuracies in low parallax scenes and degradation under collinear camera motion. While recent approaches aim to address these issues b...
[]
[]
[]
[]
180
181
Open-Set Domain Adaptation for Semantic Segmentation
http://arxiv.org/abs/2405.19899
Seun-An Choe, Ah-Hyung Shin, Keon-Hee Park, Jinwoo Choi, Gyeong-Moon Park
2,405.19899
Unsupervised domain adaptation (UDA) for semantic segmentation aims to transfer the pixel-wise knowledge from the labeled source domain to the unlabeled target domain. However current UDA methods typically assume a shared label space between source and target limiting their applicability in real-world scenarios where n...
[]
[]
[]
[]
181
182
Generative Powers of Ten
http://arxiv.org/abs/2312.02149
Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steven M. Seitz, Ira Kemelmacher-Shlizerman, Ben Mildenhall, Pratul Srinivasan, Dor Verbin, Aleksander Holynski
2,312.02149
We present a method that uses a text-to-image model to generate consistent content across multiple image scales enabling extreme semantic zooms into a scene e.g. ranging from a wide-angle landscape view of a forest to a macro shot of an insect sitting on one of the tree branches. We achieve this through a joint multi-s...
[]
[]
[]
[]
182
183
H-ViT: A Hierarchical Vision Transformer for Deformable Image Registration
Morteza Ghahremani, Mohammad Khateri, Bailiang Jian, Benedikt Wiestler, Ehsan Adeli, Christian Wachinger
null
This paper introduces a novel top-down representation approach for deformable image registration which estimates the deformation field by capturing various short- and long-range flow features at different scale levels. As a Hierarchical Vision Transformer (H-ViT) we propose a dual self-attention and cross-attention mec...
[]
[]
[]
[]
183
184
Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training
http://arxiv.org/abs/2311.01734
Yipeng Gao, Zeyu Wang, Wei-Shi Zheng, Cihang Xie, Yuyin Zhou
2,311.01734
Contrastive learning has emerged as a promising paradigm for 3D open-world understanding i.e. aligning point cloud representation to image and text embedding space individually. In this paper we introduce MixCon3D a simple yet effective method aiming to sculpt holistic 3D representation in contrastive language-image-3D...
[]
[]
[]
[]
184
185
Probing Synergistic High-Order Interaction in Infrared and Visible Image Fusion
Naishan Zheng, Man Zhou, Jie Huang, Junming Hou, Haoying Li, Yuan Xu, Feng Zhao
null
Infrared and visible image fusion aims to generate a fused image by integrating and distinguishing complementary information from multiple sources. While the cross-attention mechanism with global spatial interactions appears promising it only captures second-order spatial interactions neglecting higher-order interaction...
[]
[]
[]
[]
185
186
VideoLLM-online: Online Video Large Language Model for Streaming Video
Joya Chen, Zhaoyang Lv, Shiwei Wu, Kevin Qinghong Lin, Chenan Song, Difei Gao, Jia-Wei Liu, Ziteng Gao, Dongxing Mao, Mike Zheng Shou
null
Large Language Models (LLMs) have been enhanced with vision capabilities enabling them to comprehend images videos and interleaved vision-language content. However the learning methods of these large multimodal models (LMMs) typically treat videos as predetermined clips rendering them less effective and efficient at ha...
[]
[]
[]
[]
186
187
Text-conditional Attribute Alignment across Latent Spaces for 3D Controllable Face Image Synthesis
Feifan Xu, Rui Li, Si Wu, Yong Xu, Hau San Wong
null
With the advent of generative models and vision language pretraining significant improvement has been made in text-driven face manipulation. The text embedding can be used as target supervision for expression control. However it is non-trivial to associate with its 3D attributes i.e. pose and illumination. To address the...
[]
[]
[]
[]
187
188
ESCAPE: Encoding Super-keypoints for Category-Agnostic Pose Estimation
Khoi Duc Nguyen, Chen Li, Gim Hee Lee
null
In this paper we tackle the task of category-agnostic pose estimation (CAPE) which aims to predict poses for objects of any category with few annotated samples. Previous works either rely on local matching between features of support and query samples or require support keypoint identifier. The former is prone to overf...
[]
[]
[]
[]
188
189
Correcting Diffusion Generation through Resampling
http://arxiv.org/abs/2312.06038
Yujian Liu, Yang Zhang, Tommi Jaakkola, Shiyu Chang
2,312.06038
Despite diffusion models' superior capabilities in modeling complex distributions there are still non-trivial distributional discrepancies between generated and ground-truth images which has resulted in several notable problems in image generation including missing object errors in text-to-image generation and low imag...
[]
[]
[]
[]
189
190
Towards Better Vision-Inspired Vision-Language Models
Yun-Hao Cao, Kaixiang Ji, Ziyuan Huang, Chuanyang Zheng, Jiajia Liu, Jian Wang, Jingdong Chen, Ming Yang
null
Vision-language (VL) models have achieved unprecedented success recently in which the connection module is the key to bridge the modality gap. Nevertheless the abundant visual clues are not sufficiently exploited in most existing methods. On the vision side most existing approaches only use the last feature of the visi...
[]
[]
[]
[]
190
191
VSRD: Instance-Aware Volumetric Silhouette Rendering for Weakly Supervised 3D Object Detection
http://arxiv.org/abs/2404.00149
Zihua Liu, Hiroki Sakuma, Masatoshi Okutomi
2,404.00149
Monocular 3D object detection poses a significant challenge in 3D scene understanding due to its inherently ill-posed nature in monocular depth estimation. Existing methods heavily rely on supervised learning using abundant 3D labels typically obtained through expensive and labor-intensive annotation on LiDAR point clo...
[]
[]
[]
[]
191
192
RILA: Reflective and Imaginative Language Agent for Zero-Shot Semantic Audio-Visual Navigation
Zeyuan Yang, Jiageng Liu, Peihao Chen, Anoop Cherian, Tim K. Marks, Jonathan Le Roux, Chuang Gan
null
We leverage Large Language Models (LLM) for zero-shot Semantic Audio Visual Navigation (SAVN). Existing methods utilize extensive training demonstrations for reinforcement learning yet achieve relatively low success rates and lack generalizability. The intermittent nature of auditory signals further poses additional obs...
[]
[]
[]
[]
192
193
Endow SAM with Keen Eyes: Temporal-spatial Prompt Learning for Video Camouflaged Object Detection
Wenjun Hui, Zhenfeng Zhu, Shuai Zheng, Yao Zhao
null
The Segment Anything Model (SAM) a prompt-driven foundational model has demonstrated remarkable performance in natural image segmentation. However its application in video camouflaged object detection (VCOD) encounters challenges chiefly stemming from the overlooked temporal-spatial associations and the unreliability o...
[]
[]
[]
[]
193
194
TULIP: Multi-camera 3D Precision Assessment of Parkinson's Disease
Kyungdo Kim, Sihan Lyu, Sneha Mantri, Timothy W. Dunn
null
Parkinson's disease (PD) is a devastating movement disorder accelerating in global prevalence but a lack of precision symptom measurement has made the development of effective therapies challenging. The Unified Parkinson's Disease Rating Scale (UPDRS) is the gold-standard for assessing motor symptom severity yet its ma...
[]
[]
[]
[]
194
195
HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces
Haithem Turki, Vasu Agrawal, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Deva Ramanan, Michael Zollhöfer, Christian Richardt
null
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render. One reason is that they make use of volume rendering thus requiring many samples (and model queries) per ray at render time. Although this representation is flexible and easy to optimize most real-world objects can be ...
[]
[]
[]
[]
195
196
AirPlanes: Accurate Plane Estimation via 3D-Consistent Embeddings
Jamie Watson, Filippo Aleotti, Mohamed Sayed, Zawar Qureshi, Oisin Mac Aodha, Gabriel Brostow, Michael Firman, Sara Vicente
null
Extracting planes from a 3D scene is useful for downstream tasks in robotics and augmented reality. In this paper we tackle the problem of estimating the planar surfaces in a scene from posed images. Our first finding is that a surprisingly competitive baseline results from combining popular clustering algorithms with ...
[]
[]
[]
[]
196
197
Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection
http://arxiv.org/abs/2312.16649
Huan Liu, Zichang Tan, Chuangchuang Tan, Yunchao Wei, Jingdong Wang, Yao Zhao
2,312.16649
In this paper we study the problem of generalizable synthetic image detection aiming to detect forgery images from diverse generative methods e.g. GANs and diffusion models. Cutting-edge solutions start to explore the benefits of pre-trained models and mainly follow the fixed paradigm of solely training an attached cla...
[]
[]
[]
[]
197
198
PostureHMR: Posture Transformation for 3D Human Mesh Recovery
Yu-Pei Song, Xiao Wu, Zhaoquan Yuan, Jian-Jun Qiao, Qiang Peng
null
Human Mesh Recovery (HMR) aims to estimate the 3D human body from 2D images which is a challenging task due to inherent ambiguities in translating 2D observations to 3D space. A novel approach called PostureHMR is proposed to leverage a multi-step diffusion-style process which converts this task into a posture transfor...
[]
[]
[]
[]
198
199
Blur2Blur: Blur Conversion for Unsupervised Image Deblurring on Unknown Domains
http://arxiv.org/abs/2403.16205
Bang-Dang Pham, Phong Tran, Anh Tran, Cuong Pham, Rang Nguyen, Minh Hoai
2,403.16205
This paper presents an innovative framework designed to train an image deblurring algorithm tailored to a specific camera device. This algorithm works by transforming a blurry input image which is challenging to deblur into another blurry image that is more amenable to deblurring. The transformation process from one bl...
[]
[]
[]
[]
199