title: string
authors: string
abstract: string
pdf: string
supp: string
arXiv: string
bibtex: string
url: string
detail_url: string
tags: string
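The schema above can be read as one record per paper, with the fields in the listed order and the literal string "null" marking missing values. A minimal sketch of a typed record and parser in Python (the names `PaperRecord` and `parse_record` are illustrative, not part of the dataset):

```python
from dataclasses import dataclass
from typing import Optional

# One record of the dataset, mirroring the schema above. `supp`, `arXiv`,
# and `bibtex` are Optional because many rows leave them null.
@dataclass
class PaperRecord:
    title: str
    authors: str
    abstract: str
    pdf: str
    supp: Optional[str]
    arXiv: Optional[str]
    bibtex: Optional[str]
    url: str
    detail_url: str
    tags: str

def parse_record(values: list) -> PaperRecord:
    """Build a PaperRecord from one raw row, mapping the string "null"
    (or None) to Python None."""
    clean = [None if v in (None, "null") else v for v in values]
    return PaperRecord(*clean)

# First record from the listing (abstract shortened here).
row = [
    "Knowledge Mining With Scene Text for Fine-Grained Recognition",
    "Hao Wang, Junchao Liao, Tianheng Cheng, Zewen Gao, Hao Liu, Bo Ren, Xiang Bai, Wenyu Liu",
    "Recently, the semantics of scene text has been proven to be essential...",
    "https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Knowledge_Mining_With_Scene_Text_for_Fine-Grained_Recognition_CVPR_2022_paper.pdf",
    "null",
    "http://arxiv.org/abs/2203.14215",
    None,
    "https://openaccess.thecvf.com",
    "https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Knowledge_Mining_With_Scene_Text_for_Fine-Grained_Recognition_CVPR_2022_paper.html",
    "CVPR 2022",
]
paper = parse_record(row)
```

This keeps missing values distinguishable from empty strings, so downstream filters (e.g. "records with a supplemental PDF") can simply test for `None`.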
Knowledge Mining With Scene Text for Fine-Grained Recognition
Hao Wang, Junchao Liao, Tianheng Cheng, Zewen Gao, Hao Liu, Bo Ren, Xiang Bai, Wenyu Liu
Recently, the semantics of scene text has been proven to be essential in fine-grained image classification. However, the existing methods mainly exploit the literal meaning of scene text for fine-grained recognition, which might be irrelevant when it is not significantly related to objects/scenes. We propose an end-to-...
https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Knowledge_Mining_With_Scene_Text_for_Fine-Grained_Recognition_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2203.14215
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Knowledge_Mining_With_Scene_Text_for_Fine-Grained_Recognition_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Knowledge_Mining_With_Scene_Text_for_Fine-Grained_Recognition_CVPR_2022_paper.html
CVPR 2022
null
Self-Supervised Learning of Object Parts for Semantic Segmentation
Adrian Ziegler, Yuki M. Asano
Progress in self-supervised learning has brought strong general image representation learning methods. Yet so far, it has mostly focused on image-level learning. In turn, tasks such as unsupervised image segmentation have not benefited from this trend as they require spatially-diverse representations. However, learning...
https://openaccess.thecvf.com/content/CVPR2022/papers/Ziegler_Self-Supervised_Learning_of_Object_Parts_for_Semantic_Segmentation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Ziegler_Self-Supervised_Learning_of_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.13101
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Ziegler_Self-Supervised_Learning_of_Object_Parts_for_Semantic_Segmentation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Ziegler_Self-Supervised_Learning_of_Object_Parts_for_Semantic_Segmentation_CVPR_2022_paper.html
CVPR 2022
null
Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects
Manuel Stoiber, Martin Sundermeyer, Rudolph Triebel
Tracking objects in 3D space and predicting their 6DoF pose is an essential task in computer vision. State-of-the-art approaches often rely on object texture to tackle this problem. However, while they achieve impressive results, many objects do not contain sufficient texture, violating the main underlying assumption. ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Stoiber_Iterative_Corresponding_Geometry_Fusing_Region_and_Depth_for_Highly_Efficient_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Stoiber_Iterative_Corresponding_Geometry_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.05334
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Stoiber_Iterative_Corresponding_Geometry_Fusing_Region_and_Depth_for_Highly_Efficient_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Stoiber_Iterative_Corresponding_Geometry_Fusing_Region_and_Depth_for_Highly_Efficient_CVPR_2022_paper.html
CVPR 2022
null
Single-Photon Structured Light
Varun Sundar, Sizhuo Ma, Aswin C. Sankaranarayanan, Mohit Gupta
We present a novel structured light technique that uses Single Photon Avalanche Diode (SPAD) arrays to enable 3D scanning at high frame rates and low light levels. This technique, called "Single-Photon Structured Light", works by sensing binary images that indicate the presence or absence of photon arrivals during eac...
https://openaccess.thecvf.com/content/CVPR2022/papers/Sundar_Single-Photon_Structured_Light_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Sundar_Single-Photon_Structured_Light_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.05300
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Sundar_Single-Photon_Structured_Light_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Sundar_Single-Photon_Structured_Light_CVPR_2022_paper.html
CVPR 2022
null
Deblurring via Stochastic Refinement
Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G. Dimakis, Peyman Milanfar
Image deblurring is an ill-posed problem with multiple plausible solutions for a given input image. However, most existing methods produce a deterministic estimate of the clean image and are trained to minimize pixel-level distortion. These metrics are known to be poorly correlated with human perception, and often lead...
https://openaccess.thecvf.com/content/CVPR2022/papers/Whang_Deblurring_via_Stochastic_Refinement_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Whang_Deblurring_via_Stochastic_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2112.02475
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Whang_Deblurring_via_Stochastic_Refinement_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Whang_Deblurring_via_Stochastic_Refinement_CVPR_2022_paper.html
CVPR 2022
null
3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds
Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, Dong Xu
Observing that the 3D captioning task and the 3D grounding task contain both shared and complementary information in nature, in this work, we propose a unified framework to jointly solve these two distinct but closely related tasks in a synergistic fashion, which consists of both shared task-agnostic modules and lightw...
https://openaccess.thecvf.com/content/CVPR2022/papers/Cai_3DJCG_A_Unified_Framework_for_Joint_Dense_Captioning_and_Visual_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Cai_3DJCG_A_Unified_Framework_for_Joint_Dense_Captioning_and_Visual_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Cai_3DJCG_A_Unified_Framework_for_Joint_Dense_Captioning_and_Visual_CVPR_2022_paper.html
CVPR 2022
null
TransGeo: Transformer Is All You Need for Cross-View Image Geo-Localization
Sijie Zhu, Mubarak Shah, Chen Chen
The dominant CNN-based methods for cross-view image geo-localization rely on polar transform and fail to model global correlation. We propose a pure transformer-based approach (TransGeo) to address these limitations from a different perspective. TransGeo takes full advantage of the strengths of transformer related to g...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhu_TransGeo_Transformer_Is_All_You_Need_for_Cross-View_Image_Geo-Localization_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Zhu_TransGeo_Transformer_Is_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.00097
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhu_TransGeo_Transformer_Is_All_You_Need_for_Cross-View_Image_Geo-Localization_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhu_TransGeo_Transformer_Is_All_You_Need_for_Cross-View_Image_Geo-Localization_CVPR_2022_paper.html
CVPR 2022
null
R(Det)2: Randomized Decision Routing for Object Detection
Yali Li, Shengjin Wang
In the paradigm of object detection, the decision head is an important part, which affects detection performance significantly. Yet how to design a high-performance decision head remains to be an open issue. In this paper, we propose a novel approach to combine decision trees and deep neural networks in an end-to-end l...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_RDet2_Randomized_Decision_Routing_for_Object_Detection_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Li_RDet2_Randomized_Decision_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_RDet2_Randomized_Decision_Routing_for_Object_Detection_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_RDet2_Randomized_Decision_Routing_for_Object_Detection_CVPR_2022_paper.html
CVPR 2022
null
Abandoning the Bayer-Filter To See in the Dark
Xingbo Dong, Wanyan Xu, Zhihui Miao, Lan Ma, Chao Zhang, Jiewen Yang, Zhe Jin, Andrew Beng Jin Teoh, Jiajun Shen
Low-light image enhancement, a pervasive but challenging problem, plays a central role in enhancing the visibility of an image captured in a poor illumination environment. Due to the fact that not all photons can pass the Bayer-Filter on the sensor of the color camera, in this work, we first present a De-Bayer-Filter s...
https://openaccess.thecvf.com/content/CVPR2022/papers/Dong_Abandoning_the_Bayer-Filter_To_See_in_the_Dark_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Dong_Abandoning_the_Bayer-Filter_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.04042
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Abandoning_the_Bayer-Filter_To_See_in_the_Dark_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Abandoning_the_Bayer-Filter_To_See_in_the_Dark_CVPR_2022_paper.html
CVPR 2022
null
SASIC: Stereo Image Compression With Latent Shifts and Stereo Attention
Matthias Wödlinger, Jan Kotera, Jan Xu, Robert Sablatnig
We propose a learned method for stereo image compression that leverages the similarity of the left and right images in a stereo pair due to overlapping fields of view. The left image is compressed by a learned compression method based on an autoencoder with a hyperprior entropy model. The right image uses this informat...
https://openaccess.thecvf.com/content/CVPR2022/papers/Wodlinger_SASIC_Stereo_Image_Compression_With_Latent_Shifts_and_Stereo_Attention_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Wodlinger_SASIC_Stereo_Image_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Wodlinger_SASIC_Stereo_Image_Compression_With_Latent_Shifts_and_Stereo_Attention_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Wodlinger_SASIC_Stereo_Image_Compression_With_Latent_Shifts_and_Stereo_Attention_CVPR_2022_paper.html
CVPR 2022
null
Exploiting Temporal Relations on Radar Perception for Autonomous Driving
Peizhao Li, Pu Wang, Karl Berntorp, Hongfu Liu
We consider the object recognition problem in autonomous driving using automotive radar sensors. Compared to Lidar sensors, radar is cost-effective and robust in all-weather conditions for perception in autonomous driving. However, radar signals suffer from low angular resolution and precision in recognizing surroundi...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Exploiting_Temporal_Relations_on_Radar_Perception_for_Autonomous_Driving_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Li_Exploiting_Temporal_Relations_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.01184
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Exploiting_Temporal_Relations_on_Radar_Perception_for_Autonomous_Driving_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Exploiting_Temporal_Relations_on_Radar_Perception_for_Autonomous_Driving_CVPR_2022_paper.html
CVPR 2022
null
Multi-Instance Point Cloud Registration by Efficient Correspondence Clustering
Weixuan Tang, Danping Zou
We address the problem of estimating the poses of multiple instances of the source point cloud within a target point cloud. Existing solutions require sampling a lot of hypotheses to detect possible instances and reject the outliers, whose robustness and efficiency degrade notably when the number of instances and outli...
https://openaccess.thecvf.com/content/CVPR2022/papers/Tang_Multi-Instance_Point_Cloud_Registration_by_Efficient_Correspondence_Clustering_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Tang_Multi-Instance_Point_Cloud_CVPR_2022_supplemental.zip
http://arxiv.org/abs/2111.14582
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Tang_Multi-Instance_Point_Cloud_Registration_by_Efficient_Correspondence_Clustering_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Tang_Multi-Instance_Point_Cloud_Registration_by_Efficient_Correspondence_Clustering_CVPR_2022_paper.html
CVPR 2022
null
Contrastive Boundary Learning for Point Cloud Segmentation
Liyao Tang, Yibing Zhan, Zhe Chen, Baosheng Yu, Dacheng Tao
Point cloud segmentation is fundamental in understanding 3D environments. However, current 3D point cloud segmentation methods usually perform poorly on scene boundaries, which degenerates the overall segmentation performance. In this paper, we focus on the segmentation of scene boundaries. Accordingly, we first explor...
https://openaccess.thecvf.com/content/CVPR2022/papers/Tang_Contrastive_Boundary_Learning_for_Point_Cloud_Segmentation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Tang_Contrastive_Boundary_Learning_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.05272
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Tang_Contrastive_Boundary_Learning_for_Point_Cloud_Segmentation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Tang_Contrastive_Boundary_Learning_for_Point_Cloud_Segmentation_CVPR_2022_paper.html
CVPR 2022
null
Details or Artifacts: A Locally Discriminative Learning Approach to Realistic Image Super-Resolution
Jie Liang, Hui Zeng, Lei Zhang
Single image super-resolution (SISR) with generative adversarial networks (GAN) has recently attracted increasing attention due to its potentials to generate rich details. However, the training of GAN is unstable, and it often introduces many perceptually unpleasant artifacts along with the generated details. In this p...
https://openaccess.thecvf.com/content/CVPR2022/papers/Liang_Details_or_Artifacts_A_Locally_Discriminative_Learning_Approach_to_Realistic_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Liang_Details_or_Artifacts_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.09195
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Liang_Details_or_Artifacts_A_Locally_Discriminative_Learning_Approach_to_Realistic_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Liang_Details_or_Artifacts_A_Locally_Discriminative_Learning_Approach_to_Realistic_CVPR_2022_paper.html
CVPR 2022
null
CVNet: Contour Vibration Network for Building Extraction
Ziqiang Xu, Chunyan Xu, Zhen Cui, Xiangwei Zheng, Jian Yang
With the recent progress of deep learning, the classic active contour model offers a promising solution to polygon-based object extraction. Inspired by the physical vibration theory, we propose a contour vibration network (CVNet) for automatic building boundary delineation. Different from the previous contour mo...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xu_CVNet_Contour_Vibration_Network_for_Building_Extraction_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xu_CVNet_Contour_Vibration_Network_for_Building_Extraction_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xu_CVNet_Contour_Vibration_Network_for_Building_Extraction_CVPR_2022_paper.html
CVPR 2022
null
Hyperbolic Image Segmentation
Mina Ghadimi Atigh, Julian Schoep, Erman Acar, Nanne van Noord, Pascal Mettes
For image segmentation, the current standard is to perform pixel-level optimization and inference in Euclidean output embedding spaces through linear hyperplanes. In this work, we show that hyperbolic manifolds provide a valuable alternative for image segmentation and propose a tractable formulation of hierarchical pix...
https://openaccess.thecvf.com/content/CVPR2022/papers/Atigh_Hyperbolic_Image_Segmentation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Atigh_Hyperbolic_Image_Segmentation_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Atigh_Hyperbolic_Image_Segmentation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Atigh_Hyperbolic_Image_Segmentation_CVPR_2022_paper.html
CVPR 2022
null
Forward Compatible Training for Large-Scale Embedding Retrieval Systems
Vivek Ramanujan, Pavan Kumar Anasosalu Vasu, Ali Farhadi, Oncel Tuzel, Hadi Pouransari
In visual retrieval systems, updating the embedding model requires recomputing features for every piece of data. This expensive process is referred to as backfilling. Recently, the idea of backward compatible training (BCT) was proposed. To avoid the cost of backfilling, BCT modifies training of the new model to make i...
https://openaccess.thecvf.com/content/CVPR2022/papers/Ramanujan_Forward_Compatible_Training_for_Large-Scale_Embedding_Retrieval_Systems_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Ramanujan_Forward_Compatible_Training_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2112.02805
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Ramanujan_Forward_Compatible_Training_for_Large-Scale_Embedding_Retrieval_Systems_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Ramanujan_Forward_Compatible_Training_for_Large-Scale_Embedding_Retrieval_Systems_CVPR_2022_paper.html
CVPR 2022
null
Everything at Once - Multi-Modal Fusion Transformer for Video Retrieval
Nina Shvetsova, Brian Chen, Andrew Rouditchenko, Samuel Thomas, Brian Kingsbury, Rogerio S. Feris, David Harwath, James Glass, Hilde Kuehne
Multi-modal learning from video data has seen increased attention recently as it allows training of semantically meaningful embeddings without human annotation, enabling tasks like zero-shot retrieval and action localization. In this work, we present a multi-modal, modality agnostic fusion transformer that learns to ex...
https://openaccess.thecvf.com/content/CVPR2022/papers/Shvetsova_Everything_at_Once_-_Multi-Modal_Fusion_Transformer_for_Video_Retrieval_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Shvetsova_Everything_at_Once_CVPR_2022_supplemental.pdf
https://arxiv.org/abs/2112.04446
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Shvetsova_Everything_at_Once_-_Multi-Modal_Fusion_Transformer_for_Video_Retrieval_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Shvetsova_Everything_at_Once_-_Multi-Modal_Fusion_Transformer_for_Video_Retrieval_CVPR_2022_paper.html
CVPR 2022
https://openaccess.thecvf.com
Swin Transformer V2: Scaling Up Capacity and Resolution
Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo
We present techniques for scaling Swin Transformer up to 3 billion parameters and making it capable of training with images of up to 1,536x1,536 resolution. By scaling up capacity and resolution, Swin Transformer sets new records on four representative vision benchmarks: 84.0% top-1 accuracy on ImageNet-V2 image c...
https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Swin_Transformer_V2_Scaling_Up_Capacity_and_Resolution_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Liu_Swin_Transformer_V2_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2111.09883
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Swin_Transformer_V2_Scaling_Up_Capacity_and_Resolution_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Swin_Transformer_V2_Scaling_Up_Capacity_and_Resolution_CVPR_2022_paper.html
CVPR 2022
null
Neural Template: Topology-Aware Reconstruction and Disentangled Generation of 3D Meshes
Ka-Hei Hui, Ruihui Li, Jingyu Hu, Chi-Wing Fu
This paper introduces a novel framework called DT-Net for 3D mesh reconstruction and generation via Disentangled Topology. Beyond previous works, we learn a topology-aware neural template specific to each input then deform the template to reconstruct a detailed mesh while preserving the learned topology. One key insigh...
https://openaccess.thecvf.com/content/CVPR2022/papers/Hui_Neural_Template_Topology-Aware_Reconstruction_and_Disentangled_Generation_of_3D_Meshes_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Hui_Neural_Template_Topology-Aware_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Hui_Neural_Template_Topology-Aware_Reconstruction_and_Disentangled_Generation_of_3D_Meshes_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Hui_Neural_Template_Topology-Aware_Reconstruction_and_Disentangled_Generation_of_3D_Meshes_CVPR_2022_paper.html
CVPR 2022
null
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints
Zhendong Zhao, Xiaojun Chen, Yuexin Xuan, Ye Dong, Dakui Wang, Kaitai Liang
Backdoor attack is a type of serious security threat to deep learning models. An adversary can provide users with a model trained on poisoned data to manipulate prediction behavior in the test stage using a backdoor. The backdoored models behave normally on clean images, yet can be activated and output incorrect prediction ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_DEFEAT_Deep_Hidden_Feature_Backdoor_Attacks_by_Imperceptible_Perturbation_and_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhao_DEFEAT_Deep_Hidden_Feature_Backdoor_Attacks_by_Imperceptible_Perturbation_and_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhao_DEFEAT_Deep_Hidden_Feature_Backdoor_Attacks_by_Imperceptible_Perturbation_and_CVPR_2022_paper.html
CVPR 2022
null
Projective Manifold Gradient Layer for Deep Rotation Regression
Jiayi Chen, Yingda Yin, Tolga Birdal, Baoquan Chen, Leonidas J. Guibas, He Wang
Regressing rotations on SO(3) manifold using deep neural networks is an important yet unsolved problem. The gap between the Euclidean network output space and the non-Euclidean SO(3) manifold imposes a severe challenge for neural network learning in both forward and backward passes. While several works have proposed di...
https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Projective_Manifold_Gradient_Layer_for_Deep_Rotation_Regression_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Chen_Projective_Manifold_Gradient_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2110.11657
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Projective_Manifold_Gradient_Layer_for_Deep_Rotation_Regression_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Projective_Manifold_Gradient_Layer_for_Deep_Rotation_Regression_CVPR_2022_paper.html
CVPR 2022
null
CLIMS: Cross Language Image Matching for Weakly Supervised Semantic Segmentation
Jinheng Xie, Xianxu Hou, Kai Ye, Linlin Shen
It has been widely known that CAM (Class Activation Map) usually only activates discriminative object regions and falsely includes lots of object-related backgrounds. As only a fixed set of image-level object labels are available to the WSSS (weakly supervised semantic segmentation) model, it could be very difficult to...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xie_CLIMS_Cross_Language_Image_Matching_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Xie_CLIMS_Cross_Language_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xie_CLIMS_Cross_Language_Image_Matching_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xie_CLIMS_Cross_Language_Image_Matching_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2022_paper.html
CVPR 2022
null
Learning To Refactor Action and Co-Occurrence Features for Temporal Action Localization
Kun Xia, Le Wang, Sanping Zhou, Nanning Zheng, Wei Tang
The main challenge of Temporal Action Localization is to retrieve subtle human actions from various co-occurring ingredients, e.g., context and background, in an untrimmed video. While prior approaches have achieved substantial progress through devising advanced action detectors, they still suffer from these co-occurri...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xia_Learning_To_Refactor_Action_and_Co-Occurrence_Features_for_Temporal_Action_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Xia_Learning_To_Refactor_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xia_Learning_To_Refactor_Action_and_Co-Occurrence_Features_for_Temporal_Action_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xia_Learning_To_Refactor_Action_and_Co-Occurrence_Features_for_Temporal_Action_CVPR_2022_paper.html
CVPR 2022
null
It's Time for Artistic Correspondence in Music and Video
Dídac Surís, Carl Vondrick, Bryan Russell, Justin Salamon
We present an approach for recommending a music track for a given video, and vice versa, based on both their temporal alignment and their correspondence at an artistic level. We propose a self-supervised approach that learns this correspondence directly from data, without any need of human annotations. In order to capt...
https://openaccess.thecvf.com/content/CVPR2022/papers/Suris_Its_Time_for_Artistic_Correspondence_in_Music_and_Video_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Suris_Its_Time_for_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Suris_Its_Time_for_Artistic_Correspondence_in_Music_and_Video_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Suris_Its_Time_for_Artistic_Correspondence_in_Music_and_Video_CVPR_2022_paper.html
CVPR 2022
null
Mixed Differential Privacy in Computer Vision
Aditya Golatkar, Alessandro Achille, Yu-Xiang Wang, Aaron Roth, Michael Kearns, Stefano Soatto
We introduce AdaMix, an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data. While pre-training language models on large public datasets has enabled strong differential privacy (DP) guarantees with minor loss of accuracy, a similar practice yie...
https://openaccess.thecvf.com/content/CVPR2022/papers/Golatkar_Mixed_Differential_Privacy_in_Computer_Vision_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Golatkar_Mixed_Differential_Privacy_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.11481
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Golatkar_Mixed_Differential_Privacy_in_Computer_Vision_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Golatkar_Mixed_Differential_Privacy_in_Computer_Vision_CVPR_2022_paper.html
CVPR 2022
null
AdaFace: Quality Adaptive Margin for Face Recognition
Minchul Kim, Anil K. Jain, Xiaoming Liu
Recognition in low quality face datasets is challenging because facial attributes are obscured and degraded. Advances in margin-based loss functions have resulted in enhanced discriminability of faces in the embedding space. Further, previous studies have studied the effect of adaptive losses to assign more importance ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Kim_AdaFace_Quality_Adaptive_Margin_for_Face_Recognition_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Kim_AdaFace_Quality_Adaptive_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.00964
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Kim_AdaFace_Quality_Adaptive_Margin_for_Face_Recognition_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Kim_AdaFace_Quality_Adaptive_Margin_for_Face_Recognition_CVPR_2022_paper.html
CVPR 2022
null
Learning Soft Estimator of Keypoint Scale and Orientation With Probabilistic Covariant Loss
Pei Yan, Yihua Tan, Shengzhou Xiong, Yuan Tai, Yansheng Li
Estimating keypoint scale and orientation is crucial to extracting invariant features under significant geometric changes. Recently, the estimators based on self-supervised learning have been designed to adapt to complex imaging conditions. Such learning-based estimators generally predict a single scalar for the keypoi...
https://openaccess.thecvf.com/content/CVPR2022/papers/Yan_Learning_Soft_Estimator_of_Keypoint_Scale_and_Orientation_With_Probabilistic_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Yan_Learning_Soft_Estimator_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Soft_Estimator_of_Keypoint_Scale_and_Orientation_With_Probabilistic_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Soft_Estimator_of_Keypoint_Scale_and_Orientation_With_Probabilistic_CVPR_2022_paper.html
CVPR 2022
null
DN-DETR: Accelerate DETR Training by Introducing Query DeNoising
Feng Li, Hao Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang
We present in this paper a novel denoising training method to speed up DETR (DEtection TRansformer) training and offer a deepened understanding of the slow convergence issue of DETR-like methods. We show that the slow convergence results from the instability of bipartite graph matching which causes inconsistent optimiza...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_DN-DETR_Accelerate_DETR_Training_by_Introducing_Query_DeNoising_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Li_DN-DETR_Accelerate_DETR_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_DN-DETR_Accelerate_DETR_Training_by_Introducing_Query_DeNoising_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_DN-DETR_Accelerate_DETR_Training_by_Introducing_Query_DeNoising_CVPR_2022_paper.html
CVPR 2022
null
HCSC: Hierarchical Contrastive Selective Coding
Yuanfan Guo, Minghao Xu, Jiawen Li, Bingbing Ni, Xuanyu Zhu, Zhenbang Sun, Yi Xu
Hierarchical semantic structures naturally exist in an image dataset, in which several semantically relevant image clusters can be further integrated into a larger cluster with coarser-grained semantics. Capturing such structures with image representations can greatly benefit the semantic understanding on various downs...
https://openaccess.thecvf.com/content/CVPR2022/papers/Guo_HCSC_Hierarchical_Contrastive_Selective_Coding_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Guo_HCSC_Hierarchical_Contrastive_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2202.00455
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Guo_HCSC_Hierarchical_Contrastive_Selective_Coding_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Guo_HCSC_Hierarchical_Contrastive_Selective_Coding_CVPR_2022_paper.html
CVPR 2022
null
TransRank: Self-Supervised Video Representation Learning via Ranking-Based Transformation Recognition
Haodong Duan, Nanxuan Zhao, Kai Chen, Dahua Lin
Recognizing transformation types applied to a video clip (RecogTrans) is a long-established paradigm for self-supervised video representation learning, which in recent works has achieved markedly inferior performance compared to instance discrimination approaches (InstDisc). However, based on a thorough comparison of represent...
https://openaccess.thecvf.com/content/CVPR2022/papers/Duan_TransRank_Self-Supervised_Video_Representation_Learning_via_Ranking-Based_Transformation_Recognition_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Duan_TransRank_Self-Supervised_Video_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2205.02028
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Duan_TransRank_Self-Supervised_Video_Representation_Learning_via_Ranking-Based_Transformation_Recognition_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Duan_TransRank_Self-Supervised_Video_Representation_Learning_via_Ranking-Based_Transformation_Recognition_CVPR_2022_paper.html
CVPR 2022
null
KeyTr: Keypoint Transporter for 3D Reconstruction of Deformable Objects in Videos
David Novotny, Ignacio Rocco, Samarth Sinha, Alexandre Carlier, Gael Kerchenbaum, Roman Shapovalov, Nikita Smetanin, Natalia Neverova, Benjamin Graham, Andrea Vedaldi
We consider the problem of reconstructing the depth of dynamic objects from videos. Recent progress in dynamic video depth prediction has focused on improving the output of monocular depth estimators by means of multi-view constraints while imposing little to no restrictions on the deformation of the dynamic parts of t...
https://openaccess.thecvf.com/content/CVPR2022/papers/Novotny_KeyTr_Keypoint_Transporter_for_3D_Reconstruction_of_Deformable_Objects_in_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Novotny_KeyTr_Keypoint_Transporter_for_3D_Reconstruction_of_Deformable_Objects_in_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Novotny_KeyTr_Keypoint_Transporter_for_3D_Reconstruction_of_Deformable_Objects_in_CVPR_2022_paper.html
CVPR 2022
null
Invariant Grounding for Video Question Answering
Yicong Li, Xiang Wang, Junbin Xiao, Wei Ji, Tat-Seng Chua
Video Question Answering (VideoQA) is the task of answering questions about a video. At its core is understanding the alignments between visual scenes in video and linguistic semantics in question to yield the answer. In leading VideoQA models, the typical learning objective, empirical risk minimization (ERM), latches ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Invariant_Grounding_for_Video_Question_Answering_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Li_Invariant_Grounding_for_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Invariant_Grounding_for_Video_Question_Answering_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Invariant_Grounding_for_Video_Question_Answering_CVPR_2022_paper.html
CVPR 2022
null
Prompt Distribution Learning
Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, Xinmei Tian
We present prompt distribution learning for effectively adapting a pre-trained vision-language model to address downstream recognition tasks. Our method not only learns low-bias prompts from a few samples but also captures the distribution of diverse prompts to handle the varying visual representations. In this way, we...
https://openaccess.thecvf.com/content/CVPR2022/papers/Lu_Prompt_Distribution_Learning_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Lu_Prompt_Distribution_Learning_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2205.03340
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Lu_Prompt_Distribution_Learning_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Lu_Prompt_Distribution_Learning_CVPR_2022_paper.html
CVPR 2022
null
RAGO: Recurrent Graph Optimizer for Multiple Rotation Averaging
Heng Li, Zhaopeng Cui, Shuaicheng Liu, Ping Tan
This paper proposes a deep recurrent Rotation Averaging Graph Optimizer (RAGO) for Multiple Rotation Averaging (MRA). Conventional optimization-based methods usually fail to produce accurate results due to corrupted and noisy relative measurements. Recent learning-based approaches regard MRA as a regression problem, wh...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_RAGO_Recurrent_Graph_Optimizer_for_Multiple_Rotation_Averaging_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Li_RAGO_Recurrent_Graph_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_RAGO_Recurrent_Graph_Optimizer_for_Multiple_Rotation_Averaging_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_RAGO_Recurrent_Graph_Optimizer_for_Multiple_Rotation_Averaging_CVPR_2022_paper.html
CVPR 2022
null
Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search
Minbin Huang, Zhijian Huang, Changlin Li, Xin Chen, Hang Xu, Zhenguo Li, Xiaodan Liang
Neural Architecture Search (NAS) aims to find efficient models for multiple tasks. Beyond seeking solutions for a single task, there are surging interests in transferring network design knowledge across multiple tasks. In this line of research, effectively modeling task correlations is vital yet highly neglected. There...
https://openaccess.thecvf.com/content/CVPR2022/papers/Huang_Arch-Graph_Acyclic_Architecture_Relation_Predictor_for_Task-Transferable_Neural_Architecture_Search_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Huang_Arch-Graph_Acyclic_Architecture_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Huang_Arch-Graph_Acyclic_Architecture_Relation_Predictor_for_Task-Transferable_Neural_Architecture_Search_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Huang_Arch-Graph_Acyclic_Architecture_Relation_Predictor_for_Task-Transferable_Neural_Architecture_Search_CVPR_2022_paper.html
CVPR 2022
null
On Aliased Resizing and Surprising Subtleties in GAN Evaluation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
Metrics for evaluating generative models aim to measure the discrepancy between real and generated images. The often-used Frechet Inception Distance (FID) metric, for example, extracts "high-level" features using a deep network from the two sets. However, we find that the differences in "low-level" preprocessing, specif...
https://openaccess.thecvf.com/content/CVPR2022/papers/Parmar_On_Aliased_Resizing_and_Surprising_Subtleties_in_GAN_Evaluation_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2104.11222
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Parmar_On_Aliased_Resizing_and_Surprising_Subtleties_in_GAN_Evaluation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Parmar_On_Aliased_Resizing_and_Surprising_Subtleties_in_GAN_Evaluation_CVPR_2022_paper.html
CVPR 2022
null
Lepard: Learning Partial Point Cloud Matching in Rigid and Deformable Scenes
Yang Li, Tatsuya Harada
We present Lepard, a Learning based approach for partial point cloud matching in rigid and deformable scenes. The key characteristics are the following techniques that exploit 3D positional knowledge for point cloud matching: 1) An architecture that disentangles point cloud representation into feature space and 3D posi...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Lepard_Learning_Partial_Point_Cloud_Matching_in_Rigid_and_Deformable_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Li_Lepard_Learning_Partial_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2111.12591
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Lepard_Learning_Partial_Point_Cloud_Matching_in_Rigid_and_Deformable_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Lepard_Learning_Partial_Point_Cloud_Matching_in_Rigid_and_Deformable_CVPR_2022_paper.html
CVPR 2022
null
Virtual Elastic Objects
Hsiao-yu Chen, Edith Tretschk, Tuur Stuyck, Petr Kadlecek, Ladislav Kavan, Etienne Vouga, Christoph Lassner
We present Virtual Elastic Objects (VEOs): virtual objects that not only look like their real-world counterparts but also behave like them, even when subject to novel interactions. Achieving this presents multiple challenges: not only do objects have to be captured including the physical forces acting on them, then fai...
https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Virtual_Elastic_Objects_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Chen_Virtual_Elastic_Objects_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2201.04623
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Virtual_Elastic_Objects_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Virtual_Elastic_Objects_CVPR_2022_paper.html
CVPR 2022
null
DiSparse: Disentangled Sparsification for Multitask Model Compression
Xinglong Sun, Ali Hassani, Zhangyang Wang, Gao Huang, Humphrey Shi
Despite the popularity of Model Compression and Multitask Learning, how to effectively compress a multitask model has been less thoroughly analyzed due to the challenging entanglement of tasks in the parameter space. In this paper, we propose DiSparse, a simple, effective, and first-of-its-kind multitask pruning and sp...
https://openaccess.thecvf.com/content/CVPR2022/papers/Sun_DiSparse_Disentangled_Sparsification_for_Multitask_Model_Compression_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Sun_DiSparse_Disentangled_Sparsification_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_DiSparse_Disentangled_Sparsification_for_Multitask_Model_Compression_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_DiSparse_Disentangled_Sparsification_for_Multitask_Model_Compression_CVPR_2022_paper.html
CVPR 2022
null
Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference
Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, Timothy M. Hospedales
Few-shot learning (FSL) is an important and topical problem in computer vision that has motivated extensive research into numerous methods spanning from sophisticated meta-learning methods to simple transfer learning baselines. We seek to push the limits of a simple-but-effective pipeline for real-world few-shot image ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Hu_Pushing_the_Limits_of_Simple_Pipelines_for_Few-Shot_Learning_External_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Hu_Pushing_the_Limits_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Hu_Pushing_the_Limits_of_Simple_Pipelines_for_Few-Shot_Learning_External_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Hu_Pushing_the_Limits_of_Simple_Pipelines_for_Few-Shot_Learning_External_CVPR_2022_paper.html
CVPR 2022
null
Opening Up Open World Tracking
Yang Liu, Idil Esen Zulfikar, Jonathon Luiten, Achal Dave, Deva Ramanan, Bastian Leibe, Aljoša Ošep, Laura Leal-Taixé
Tracking and detecting any object, including ones never-seen-before during model training, is a crucial but elusive capability of autonomous systems. An autonomous agent that is blind to never-seen-before objects poses a safety hazard when operating in the real world - and yet this is how almost all current systems wor...
https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Opening_Up_Open_World_Tracking_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Liu_Opening_Up_Open_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2104.11221
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Opening_Up_Open_World_Tracking_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Opening_Up_Open_World_Tracking_CVPR_2022_paper.html
CVPR 2022
null
Towards Efficient and Scalable Sharpness-Aware Minimization
Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, Yang You
Recently, Sharpness-Aware Minimization (SAM), which connects the geometry of the loss landscape and generalization, has demonstrated a significant performance boost on training large-scale models such as vision transformers. However, the update rule of SAM requires two sequential (non-parallelizable) gradient computati...
https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Towards_Efficient_and_Scalable_Sharpness-Aware_Minimization_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Liu_Towards_Efficient_and_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.02714
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Towards_Efficient_and_Scalable_Sharpness-Aware_Minimization_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Towards_Efficient_and_Scalable_Sharpness-Aware_Minimization_CVPR_2022_paper.html
CVPR 2022
null
VISTA: Boosting 3D Object Detection via Dual Cross-VIew SpaTial Attention
Shengheng Deng, Zhihao Liang, Lin Sun, Kui Jia
Detecting objects from LiDAR point clouds is of tremendous significance in autonomous driving. In spite of good progress, accurate and reliable 3D detection is yet to be achieved due to the sparsity and irregularity of LiDAR point clouds. Among existing strategies, multi-view methods have shown great promise by leverag...
https://openaccess.thecvf.com/content/CVPR2022/papers/Deng_VISTA_Boosting_3D_Object_Detection_via_Dual_Cross-VIew_SpaTial_Attention_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Deng_VISTA_Boosting_3D_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.09704
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Deng_VISTA_Boosting_3D_Object_Detection_via_Dual_Cross-VIew_SpaTial_Attention_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Deng_VISTA_Boosting_3D_Object_Detection_via_Dual_Cross-VIew_SpaTial_Attention_CVPR_2022_paper.html
CVPR 2022
null
Rethinking Deep Face Restoration
Yang Zhao, Yu-Chuan Su, Chun-Te Chu, Yandong Li, Marius Renn, Yukun Zhu, Changyou Chen, Xuhui Jia
A model that can authentically restore a low-quality face image to a high-quality one can benefit many applications. While existing approaches for face restoration make significant progress in generating high-quality faces, they often fail to preserve facial features and cannot authentically reconstruct the faces. Beca...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhao_Rethinking_Deep_Face_Restoration_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Zhao_Rethinking_Deep_Face_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhao_Rethinking_Deep_Face_Restoration_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhao_Rethinking_Deep_Face_Restoration_CVPR_2022_paper.html
CVPR 2022
null
OSSO: Obtaining Skeletal Shape From Outside
Marilyn Keller, Silvia Zuffi, Michael J. Black, Sergi Pujades
We address the problem of inferring the anatomic skeleton of a person, in an arbitrary pose, from the 3D surface of the body; i.e. we predict the inside (bones) from the outside (skin). This has many applications in medicine and biomechanics. Existing state-of-the-art biomechanical skeletons are detailed but do not eas...
https://openaccess.thecvf.com/content/CVPR2022/papers/Keller_OSSO_Obtaining_Skeletal_Shape_From_Outside_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Keller_OSSO_Obtaining_Skeletal_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.10129
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Keller_OSSO_Obtaining_Skeletal_Shape_From_Outside_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Keller_OSSO_Obtaining_Skeletal_Shape_From_Outside_CVPR_2022_paper.html
CVPR 2022
null
Temporal Alignment Networks for Long-Term Video
Tengda Han, Weidi Xie, Andrew Zisserman
The objective of this paper is a temporal alignment network that ingests long term video sequences, and associated text sentences, in order to: (1) determine if a sentence is alignable with the video; and (2) if it is alignable, then determine its alignment. The challenge is to train such networks from large-scale data...
https://openaccess.thecvf.com/content/CVPR2022/papers/Han_Temporal_Alignment_Networks_for_Long-Term_Video_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Han_Temporal_Alignment_Networks_CVPR_2022_supplemental.zip
http://arxiv.org/abs/2204.02968
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Han_Temporal_Alignment_Networks_for_Long-Term_Video_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Han_Temporal_Alignment_Networks_for_Long-Term_Video_CVPR_2022_paper.html
CVPR 2022
null
Few-Shot Head Swapping in the Wild
Changyong Shu, Hemao Wu, Hang Zhou, Jiaming Liu, Zhibin Hong, Changxing Ding, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang
The head swapping task aims at flawlessly placing a source head onto a target body, which is of great importance to various entertainment scenarios. While face swapping has drawn much attention in the community, the task of head swapping has rarely been explored, particularly under the few-shot setting. It is inherentl...
https://openaccess.thecvf.com/content/CVPR2022/papers/Shu_Few-Shot_Head_Swapping_in_the_Wild_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Shu_Few-Shot_Head_Swapping_CVPR_2022_supplemental.zip
http://arxiv.org/abs/2204.13100
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Shu_Few-Shot_Head_Swapping_in_the_Wild_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Shu_Few-Shot_Head_Swapping_in_the_Wild_CVPR_2022_paper.html
CVPR 2022
null
A Study on the Distribution of Social Biases in Self-Supervised Learning Visual Models
Kirill Sirotkin, Pablo Carballeira, Marcos Escudero-Viñolo
Deep neural networks are efficient at learning the data distribution if it is sufficiently sampled. However, they can be strongly biased by non-relevant factors implicitly incorporated in the training data. These include operational biases, such as ineffective or uneven data sampling, but also ethical concerns, as the ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Sirotkin_A_Study_on_the_Distribution_of_Social_Biases_in_Self-Supervised_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Sirotkin_A_Study_on_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Sirotkin_A_Study_on_the_Distribution_of_Social_Biases_in_Self-Supervised_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Sirotkin_A_Study_on_the_Distribution_of_Social_Biases_in_Self-Supervised_CVPR_2022_paper.html
CVPR 2022
null
LAR-SR: A Local Autoregressive Model for Image Super-Resolution
Baisong Guo, Xiaoyun Zhang, Haoning Wu, Yu Wang, Ya Zhang, Yan-Feng Wang
Previous super-resolution (SR) approaches often formulate SR as a regression problem with pixel-wise restoration, which leads to a blurry and unreal SR output. Recent works combine adversarial loss with pixel-wise loss to train a GAN-based model or introduce normalizing flows into SR problems to generate more realistic ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Guo_LAR-SR_A_Local_Autoregressive_Model_for_Image_Super-Resolution_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Guo_LAR-SR_A_Local_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Guo_LAR-SR_A_Local_Autoregressive_Model_for_Image_Super-Resolution_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Guo_LAR-SR_A_Local_Autoregressive_Model_for_Image_Super-Resolution_CVPR_2022_paper.html
CVPR 2022
null
Bayesian Invariant Risk Minimization
Yong Lin, Hanze Dong, Hao Wang, Tong Zhang
Generalization under distributional shift is an open challenge for machine learning. Invariant Risk Minimization (IRM) is a promising framework to tackle this issue by extracting invariant features. However, despite the potential and popularity of IRM, recent works have reported negative results of it on deep models. W...
https://openaccess.thecvf.com/content/CVPR2022/papers/Lin_Bayesian_Invariant_Risk_Minimization_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Lin_Bayesian_Invariant_Risk_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Lin_Bayesian_Invariant_Risk_Minimization_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Lin_Bayesian_Invariant_Risk_Minimization_CVPR_2022_paper.html
CVPR 2022
null
Democracy Does Matter: Comprehensive Feature Mining for Co-Salient Object Detection
Siyue Yu, Jimin Xiao, Bingfeng Zhang, Eng Gee Lim
Co-salient object detection, with the target of detecting co-existed salient objects among a group of images, is gaining popularity. Recent works use the attention mechanism or extra information to aggregate common co-salient features, leading to incomplete or even incorrect responses for target objects. In this paper, we...
https://openaccess.thecvf.com/content/CVPR2022/papers/Yu_Democracy_Does_Matter_Comprehensive_Feature_Mining_for_Co-Salient_Object_Detection_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Yu_Democracy_Does_Matter_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.05787
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Yu_Democracy_Does_Matter_Comprehensive_Feature_Mining_for_Co-Salient_Object_Detection_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Yu_Democracy_Does_Matter_Comprehensive_Feature_Mining_for_Co-Salient_Object_Detection_CVPR_2022_paper.html
CVPR 2022
null
Alleviating Semantics Distortion in Unsupervised Low-Level Image-to-Image Translation via Structure Consistency Constraint
Jiaxian Guo, Jiachen Li, Huan Fu, Mingming Gong, Kun Zhang, Dacheng Tao
Unsupervised image-to-image (I2I) translation aims to learn a domain mapping function that can preserve the semantics of the input images without paired data. However, because the underlying semantics distributions in the source and target domains are often mismatched, current distribution matching-based methods may di...
https://openaccess.thecvf.com/content/CVPR2022/papers/Guo_Alleviating_Semantics_Distortion_in_Unsupervised_Low-Level_Image-to-Image_Translation_via_Structure_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Guo_Alleviating_Semantics_Distortion_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Guo_Alleviating_Semantics_Distortion_in_Unsupervised_Low-Level_Image-to-Image_Translation_via_Structure_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Guo_Alleviating_Semantics_Distortion_in_Unsupervised_Low-Level_Image-to-Image_Translation_via_Structure_CVPR_2022_paper.html
CVPR 2022
null
Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches
Ayan Kumar Bhunia, Viswanatha Reddy Gajjala, Subhadeep Koley, Rohit Kundu, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
The human visual system is remarkable in learning new visual concepts from just a few examples. This is precisely the goal behind few-shot class incremental learning (FSCIL), where the emphasis is additionally placed on ensuring the model does not suffer from "forgetting". In this paper, we push the boundary further fo...
https://openaccess.thecvf.com/content/CVPR2022/papers/Bhunia_Doodle_It_Yourself_Class_Incremental_Learning_by_Drawing_a_Few_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Bhunia_Doodle_It_Yourself_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.14843
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Bhunia_Doodle_It_Yourself_Class_Incremental_Learning_by_Drawing_a_Few_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Bhunia_Doodle_It_Yourself_Class_Incremental_Learning_by_Drawing_a_Few_CVPR_2022_paper.html
CVPR 2022
null
ICON: Implicit Clothed Humans Obtained From Normals
Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black
Current methods for learning realistic and animatable 3D clothed avatars need either posed 3D scans or 2D images with carefully controlled user poses. In contrast, our goal is to learn the avatar from only 2D images of people in unconstrained poses. Given a set of images, our method estimates a detailed 3D surface from...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xiu_ICON_Implicit_Clothed_Humans_Obtained_From_Normals_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Xiu_ICON_Implicit_Clothed_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2112.09127
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xiu_ICON_Implicit_Clothed_Humans_Obtained_From_Normals_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xiu_ICON_Implicit_Clothed_Humans_Obtained_From_Normals_CVPR_2022_paper.html
CVPR 2022
null
Comparing Correspondences: Video Prediction With Correspondence-Wise Losses
Daniel Geng, Max Hamilton, Andrew Owens
Image prediction methods often struggle on tasks that require changing the positions of objects, such as video prediction, producing blurry images that average over the many positions that objects might occupy. In this paper, we propose a simple change to existing image similarity metrics that makes them more robust to...
https://openaccess.thecvf.com/content/CVPR2022/papers/Geng_Comparing_Correspondences_Video_Prediction_With_Correspondence-Wise_Losses_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2104.09498
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Geng_Comparing_Correspondences_Video_Prediction_With_Correspondence-Wise_Losses_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Geng_Comparing_Correspondences_Video_Prediction_With_Correspondence-Wise_Losses_CVPR_2022_paper.html
CVPR 2022
null
Uni-Perceiver: Pre-Training Unified Architecture for Generic Perception for Zero-Shot and Few-Shot Tasks
Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Hongsheng Li, Xiaohua Wang, Jifeng Dai
Biological intelligence systems of animals perceive the world by integrating information in different modalities and processing simultaneously for various tasks. In contrast, current machine learning research follows a task-specific paradigm, leading to inefficient collaboration between tasks and high marginal costs of...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhu_Uni-Perceiver_Pre-Training_Unified_Architecture_for_Generic_Perception_for_Zero-Shot_and_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Zhu_Uni-Perceiver_Pre-Training_Unified_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhu_Uni-Perceiver_Pre-Training_Unified_Architecture_for_Generic_Perception_for_Zero-Shot_and_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhu_Uni-Perceiver_Pre-Training_Unified_Architecture_for_Generic_Perception_for_Zero-Shot_and_CVPR_2022_paper.html
CVPR 2022
null
The Auto Arborist Dataset: A Large-Scale Benchmark for Multiview Urban Forest Monitoring Under Domain Shift
Sara Beery, Guanhang Wu, Trevor Edwards, Filip Pavetic, Bo Majewski, Shreyasee Mukherjee, Stanley Chan, John Morgan, Vivek Rathod, Jonathan Huang
Generalization to novel domains is a fundamental challenge for computer vision. Near-perfect accuracy on benchmarks is common, but these models do not work as expected when deployed outside of the training distribution. To build computer vision systems that truly solve real-world problems at global scale, we need bench...
https://openaccess.thecvf.com/content/CVPR2022/papers/Beery_The_Auto_Arborist_Dataset_A_Large-Scale_Benchmark_for_Multiview_Urban_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Beery_The_Auto_Arborist_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Beery_The_Auto_Arborist_Dataset_A_Large-Scale_Benchmark_for_Multiview_Urban_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Beery_The_Auto_Arborist_Dataset_A_Large-Scale_Benchmark_for_Multiview_Urban_CVPR_2022_paper.html
CVPR 2022
null
On the Instability of Relative Pose Estimation and RANSAC's Role
Hongyi Fan, Joe Kileel, Benjamin Kimia
Relative pose estimation using the 5-point or 7-point Random Sample Consensus (RANSAC) algorithms can fail even when no outliers are present and there are enough inliers to support a hypothesis. These cases arise due to numerical instability of the 5- and 7-point minimal problems. This paper characterizes these instabi...
https://openaccess.thecvf.com/content/CVPR2022/papers/Fan_On_the_Instability_of_Relative_Pose_Estimation_and_RANSACs_Role_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Fan_On_the_Instability_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Fan_On_the_Instability_of_Relative_Pose_Estimation_and_RANSACs_Role_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Fan_On_the_Instability_of_Relative_Pose_Estimation_and_RANSACs_Role_CVPR_2022_paper.html
CVPR 2022
null
Shape From Polarization for Complex Scenes in the Wild
Chenyang Lei, Chenyang Qi, Jiaxin Xie, Na Fan, Vladlen Koltun, Qifeng Chen
We present a new data-driven approach with physics-based priors to scene-level normal estimation from a single polarization image. Existing shape from polarization (SfP) works mainly focus on estimating the normal of a single object rather than complex scenes in the wild. A key barrier to high-quality scene-level SfP i...
https://openaccess.thecvf.com/content/CVPR2022/papers/Lei_Shape_From_Polarization_for_Complex_Scenes_in_the_Wild_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2112.11377
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Lei_Shape_From_Polarization_for_Complex_Scenes_in_the_Wild_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Lei_Shape_From_Polarization_for_Complex_Scenes_in_the_Wild_CVPR_2022_paper.html
CVPR 2022
null
Real-Time, Accurate, and Consistent Video Semantic Segmentation via Unsupervised Adaptation and Cross-Unit Deployment on Mobile Device
Hyojin Park, Alan Yessenbayev, Tushar Singhal, Navin Kumar Adhikari, Yizhe Zhang, Shubhankar Mangesh Borse, Hong Cai, Nilesh Prasad Pandey, Fei Yin, Frank Mayer, Balaji Calidas, Fatih Porikli
This demonstration showcases our innovations on efficient, accurate, and temporally consistent video semantic segmentation on mobile device. We employ our test-time unsupervised scheme, AuxAdapt, to enable the segmentation model to adapt to a given video in an online manner. More specifically, we leverage a small auxil...
https://openaccess.thecvf.com/content/CVPR2022/papers/Park_Real-Time_Accurate_and_Consistent_Video_Semantic_Segmentation_via_Unsupervised_Adaptation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Park_Real-Time_Accurate_and_CVPR_2022_supplemental.zip
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Park_Real-Time_Accurate_and_Consistent_Video_Semantic_Segmentation_via_Unsupervised_Adaptation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Park_Real-Time_Accurate_and_Consistent_Video_Semantic_Segmentation_via_Unsupervised_Adaptation_CVPR_2022_paper.html
CVPR 2022
null
SNUG: Self-Supervised Neural Dynamic Garments
Igor Santesteban, Miguel A. Otaduy, Dan Casas
We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies. State-of-the-art data-driven approaches to model 3D garment deformations are trained using supervised strategies that require large datasets, usually obtained by expensive physics-based simulation methods o...
https://openaccess.thecvf.com/content/CVPR2022/papers/Santesteban_SNUG_Self-Supervised_Neural_Dynamic_Garments_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Santesteban_SNUG_Self-Supervised_Neural_CVPR_2022_supplemental.zip
http://arxiv.org/abs/2204.02219
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Santesteban_SNUG_Self-Supervised_Neural_Dynamic_Garments_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Santesteban_SNUG_Self-Supervised_Neural_Dynamic_Garments_CVPR_2022_paper.html
CVPR 2022
null
Towards Fewer Annotations: Active Learning via Region Impurity and Prediction Uncertainty for Domain Adaptive Semantic Segmentation
Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, Xinjing Cheng
Self-training has greatly facilitated domain adaptive semantic segmentation, which iteratively generates pseudo labels on unlabeled target data and retrains the network. However, realistic segmentation datasets are highly imbalanced, so pseudo labels are typically biased toward the majority classes and inherently noisy, leadin...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xie_Towards_Fewer_Annotations_Active_Learning_via_Region_Impurity_and_Prediction_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Xie_Towards_Fewer_Annotations_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2111.12940
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xie_Towards_Fewer_Annotations_Active_Learning_via_Region_Impurity_and_Prediction_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xie_Towards_Fewer_Annotations_Active_Learning_via_Region_Impurity_and_Prediction_CVPR_2022_paper.html
CVPR 2022
null
Glass Segmentation Using Intensity and Spectral Polarization Cues
Haiyang Mei, Bo Dong, Wen Dong, Jiaxi Yang, Seung-Hwan Baek, Felix Heide, Pieter Peers, Xiaopeng Wei, Xin Yang
Transparent and semi-transparent materials pose significant challenges for existing scene understanding and segmentation algorithms due to their lack of RGB texture which impedes the extraction of meaningful features. In this work, we exploit that the light-matter interactions on glass materials provide unique intensit...
https://openaccess.thecvf.com/content/CVPR2022/papers/Mei_Glass_Segmentation_Using_Intensity_and_Spectral_Polarization_Cues_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Mei_Glass_Segmentation_Using_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Mei_Glass_Segmentation_Using_Intensity_and_Spectral_Polarization_Cues_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Mei_Glass_Segmentation_Using_Intensity_and_Spectral_Polarization_Cues_CVPR_2022_paper.html
CVPR 2022
null
CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding
Mohamed Afham, Isuru Dissanayake, Dinithi Dissanayake, Amaya Dharmasiri, Kanchana Thilakarathna, Ranga Rodrigo
Manual annotation of large-scale point cloud dataset for varying tasks such as 3D object classification, segmentation and detection is often laborious owing to the irregular structure of point clouds. Self-supervised learning, which operates without any human labeling, is a promising approach to address this issue. We ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Afham_CrossPoint_Self-Supervised_Cross-Modal_Contrastive_Learning_for_3D_Point_Cloud_Understanding_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2203.00680
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Afham_CrossPoint_Self-Supervised_Cross-Modal_Contrastive_Learning_for_3D_Point_Cloud_Understanding_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Afham_CrossPoint_Self-Supervised_Cross-Modal_Contrastive_Learning_for_3D_Point_Cloud_Understanding_CVPR_2022_paper.html
CVPR 2022
null
Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment
Jiayu Xiao, Liang Li, Chaofei Wang, Zheng-Jun Zha, Qingming Huang
Training a generative adversarial network (GAN) with limited data has been a challenging task. A feasible solution is to start with a GAN well-trained on a large scale source domain and adapt it to the target domain with a few samples, termed as few shot generative model adaption. However, existing methods are prone to...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xiao_Few_Shot_Generative_Model_Adaption_via_Relaxed_Spatial_Structural_Alignment_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2203.04121
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xiao_Few_Shot_Generative_Model_Adaption_via_Relaxed_Spatial_Structural_Alignment_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xiao_Few_Shot_Generative_Model_Adaption_via_Relaxed_Spatial_Structural_Alignment_CVPR_2022_paper.html
CVPR 2022
null
Target-Relevant Knowledge Preservation for Multi-Source Domain Adaptive Object Detection
Jiaxi Wu, Jiaxin Chen, Mengzhe He, Yiru Wang, Bo Li, Bingqi Ma, Weihao Gan, Wei Wu, Yali Wang, Di Huang
Domain adaptive object detection (DAOD) is a promising way to alleviate the performance drop of detectors in new scenes. Despite great effort made in single-source domain adaptation, a more generalized task with multiple source domains remains underexplored, due to knowledge degradation during their combination. T...
https://openaccess.thecvf.com/content/CVPR2022/papers/Wu_Target-Relevant_Knowledge_Preservation_for_Multi-Source_Domain_Adaptive_Object_Detection_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Wu_Target-Relevant_Knowledge_Preservation_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.07964
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Target-Relevant_Knowledge_Preservation_for_Multi-Source_Domain_Adaptive_Object_Detection_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Target-Relevant_Knowledge_Preservation_for_Multi-Source_Domain_Adaptive_Object_Detection_CVPR_2022_paper.html
CVPR 2022
null
Pyramid Grafting Network for One-Stage High Resolution Saliency Detection
Chenxi Xie, Changqun Xia, Mingcan Ma, Zhirui Zhao, Xiaowu Chen, Jia Li
Recent salient object detection (SOD) methods based on deep neural network have achieved remarkable performance. However, most of existing SOD models designed for low-resolution input perform poorly on high-resolution images due to the contradiction between the sampling depth and the receptive field size. Aiming at res...
https://openaccess.thecvf.com/content/CVPR2022/papers/Xie_Pyramid_Grafting_Network_for_One-Stage_High_Resolution_Saliency_Detection_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Xie_Pyramid_Grafting_Network_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.05041
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Xie_Pyramid_Grafting_Network_for_One-Stage_High_Resolution_Saliency_Detection_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Xie_Pyramid_Grafting_Network_for_One-Stage_High_Resolution_Saliency_Detection_CVPR_2022_paper.html
CVPR 2022
null
A Style-Aware Discriminator for Controllable Image Translation
Kunhee Kim, Sanghun Park, Eunyeong Jeon, Taehun Kim, Daijin Kim
Current image-to-image translations do not control the output domain beyond the classes used during training, nor do they interpolate between different domains well, leading to implausible results. This limitation largely arises because labels do not consider the semantic distance. To mitigate such problems, we propose...
https://openaccess.thecvf.com/content/CVPR2022/papers/Kim_A_Style-Aware_Discriminator_for_Controllable_Image_Translation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Kim_A_Style-Aware_Discriminator_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.15375
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Kim_A_Style-Aware_Discriminator_for_Controllable_Image_Translation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Kim_A_Style-Aware_Discriminator_for_Controllable_Image_Translation_CVPR_2022_paper.html
CVPR 2022
null
Non-Iterative Recovery From Nonlinear Observations Using Generative Models
Jiulong Liu, Zhaoqiang Liu
In this paper, we aim to estimate the direction of an underlying signal from its nonlinear observations following the semi-parametric single index model (SIM). Unlike for conventional compressed sensing where the signal is assumed to be sparse, we assume that the signal lies in the range of an L-Lipschitz continuous ge...
https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Non-Iterative_Recovery_From_Nonlinear_Observations_Using_Generative_Models_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Liu_Non-Iterative_Recovery_From_CVPR_2022_supplemental.zip
http://arxiv.org/abs/2205.15749
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Non-Iterative_Recovery_From_Nonlinear_Observations_Using_Generative_Models_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Non-Iterative_Recovery_From_Nonlinear_Observations_Using_Generative_Models_CVPR_2022_paper.html
CVPR 2022
null
Incremental Cross-View Mutual Distillation for Self-Supervised Medical CT Synthesis
Chaowei Fang, Liang Wang, Dingwen Zhang, Jun Xu, Yixuan Yuan, Junwei Han
Due to the constraints of the imaging device and high cost in operation time, computer tomography (CT) scans are usually acquired with low within-slice resolution. Improving the inter-slice resolution is beneficial to the disease diagnosis for both human experts and computer-aided systems. To this end, this paper build...
https://openaccess.thecvf.com/content/CVPR2022/papers/Fang_Incremental_Cross-View_Mutual_Distillation_for_Self-Supervised_Medical_CT_Synthesis_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2112.10325
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Fang_Incremental_Cross-View_Mutual_Distillation_for_Self-Supervised_Medical_CT_Synthesis_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Fang_Incremental_Cross-View_Mutual_Distillation_for_Self-Supervised_Medical_CT_Synthesis_CVPR_2022_paper.html
CVPR 2022
null
Enhancing Adversarial Training With Second-Order Statistics of Weights
Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang
Adversarial training has been shown to be one of the most effective approaches to improve the robustness of deep neural networks. It is formalized as a min-max optimization over model weights and adversarial perturbations, where the weights can be optimized through gradient descent methods like SGD. In this paper, we s...
https://openaccess.thecvf.com/content/CVPR2022/papers/Jin_Enhancing_Adversarial_Training_With_Second-Order_Statistics_of_Weights_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Jin_Enhancing_Adversarial_Training_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.06020
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Jin_Enhancing_Adversarial_Training_With_Second-Order_Statistics_of_Weights_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Jin_Enhancing_Adversarial_Training_With_Second-Order_Statistics_of_Weights_CVPR_2022_paper.html
CVPR 2022
null
Partially Does It: Towards Scene-Level FG-SBIR With Partial Input
Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Viswanatha Reddy Gajjala, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
We scrutinise an important observation plaguing scene-level sketch research -- that a significant portion of scene sketches are "partial". A quick pilot study reveals: (i) a scene sketch does not necessarily contain all objects in the corresponding photo, due to the subjective holistic interpretation of scenes, (ii) th...
https://openaccess.thecvf.com/content/CVPR2022/papers/Chowdhury_Partially_Does_It_Towards_Scene-Level_FG-SBIR_With_Partial_Input_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Chowdhury_Partially_Does_It_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.14804
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Chowdhury_Partially_Does_It_Towards_Scene-Level_FG-SBIR_With_Partial_Input_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Chowdhury_Partially_Does_It_Towards_Scene-Level_FG-SBIR_With_Partial_Input_CVPR_2022_paper.html
CVPR 2022
null
Dual Temperature Helps Contrastive Learning Without Many Negative Samples: Towards Understanding and Simplifying MoCo
Chaoning Zhang, Kang Zhang, Trung X. Pham, Axi Niu, Zhinan Qiao, Chang D. Yoo, In So Kweon
Contrastive learning (CL) is widely known to require many negative samples, 65536 in MoCo for instance, for which the performance of a dictionary-free framework is often inferior because the negative sample size (NSS) is limited by its mini-batch size (MBS). To decouple the NSS from the MBS, a dynamic dictionary has be...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_Dual_Temperature_Helps_Contrastive_Learning_Without_Many_Negative_Samples_Towards_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Zhang_Dual_Temperature_Helps_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.17248
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Dual_Temperature_Helps_Contrastive_Learning_Without_Many_Negative_Samples_Towards_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Dual_Temperature_Helps_Contrastive_Learning_Without_Many_Negative_Samples_Towards_CVPR_2022_paper.html
CVPR 2022
null
Moving Window Regression: A Novel Approach to Ordinal Regression
Nyeong-Ho Shin, Seon-Ho Lee, Chang-Su Kim
A novel ordinal regression algorithm, called moving window regression (MWR), is proposed in this paper. First, we propose the notion of relative rank (rho-rank), which is a new order representation scheme for input and reference instances. Second, we develop global and local relative regressors (rho-regressors) to pred...
https://openaccess.thecvf.com/content/CVPR2022/papers/Shin_Moving_Window_Regression_A_Novel_Approach_to_Ordinal_Regression_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Shin_Moving_Window_Regression_CVPR_2022_supplemental.zip
http://arxiv.org/abs/2203.13122
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Shin_Moving_Window_Regression_A_Novel_Approach_to_Ordinal_Regression_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Shin_Moving_Window_Regression_A_Novel_Approach_to_Ordinal_Regression_CVPR_2022_paper.html
CVPR 2022
null
UniCoRN: A Unified Conditional Image Repainting Network
Jimeng Sun, Shuchen Weng, Zheng Chang, Si Li, Boxin Shi
Conditional image repainting (CIR) is an advanced image editing task, which requires the model to generate visual content in user-specified regions conditioned on multiple cross-modality constraints, and composite the visual content with the provided background seamlessly. Existing methods based on two-phase architectu...
https://openaccess.thecvf.com/content/CVPR2022/papers/Sun_UniCoRN_A_Unified_Conditional_Image_Repainting_Network_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_UniCoRN_A_Unified_Conditional_Image_Repainting_Network_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_UniCoRN_A_Unified_Conditional_Image_Repainting_Network_CVPR_2022_paper.html
CVPR 2022
null
Forecasting Characteristic 3D Poses of Human Actions
Christian Diller, Thomas Funkhouser, Angela Dai
We propose the task of forecasting characteristic 3d poses: from a short sequence observation of a person, predict a future 3d pose of that person in a likely action-defining, characteristic pose - for instance, from observing a person picking up an apple, predict the pose of the person eating the apple. Prior work on ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Diller_Forecasting_Characteristic_3D_Poses_of_Human_Actions_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Diller_Forecasting_Characteristic_3D_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2011.15079
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Diller_Forecasting_Characteristic_3D_Poses_of_Human_Actions_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Diller_Forecasting_Characteristic_3D_Poses_of_Human_Actions_CVPR_2022_paper.html
CVPR 2022
null
ACPL: Anti-Curriculum Pseudo-Labelling for Semi-Supervised Medical Image Classification
Fengbei Liu, Yu Tian, Yuanhong Chen, Yuyuan Liu, Vasileios Belagiannis, Gustavo Carneiro
Effective semi-supervised learning (SSL) in medical image analysis (MIA) must address two challenges: 1) work effectively on both multi-class (e.g., lesion classification) and multi-label (e.g., multiple-disease diagnosis) problems, and 2) handle imbalanced learning (because of the high variance in disease prevalence)....
https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_ACPL_Anti-Curriculum_Pseudo-Labelling_for_Semi-Supervised_Medical_Image_Classification_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Liu_ACPL_Anti-Curriculum_Pseudo-Labelling_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2111.12918
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_ACPL_Anti-Curriculum_Pseudo-Labelling_for_Semi-Supervised_Medical_Image_Classification_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Liu_ACPL_Anti-Curriculum_Pseudo-Labelling_for_Semi-Supervised_Medical_Image_Classification_CVPR_2022_paper.html
CVPR 2022
null
Learning to Deblur Using Light Field Generated and Real Defocus Images
Lingyan Ruan, Bin Chen, Jizhou Li, Miuling Lam
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur. While deep learning approaches show great promise in solving image restoration problems, defocus deblurring demands accurate training data that consists of all-in-focus and defocus image pairs, which is difficult to collect. Na...
https://openaccess.thecvf.com/content/CVPR2022/papers/Ruan_Learning_to_Deblur_Using_Light_Field_Generated_and_Real_Defocus_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Ruan_Learning_to_Deblur_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.00367
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Ruan_Learning_to_Deblur_Using_Light_Field_Generated_and_Real_Defocus_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Ruan_Learning_to_Deblur_Using_Light_Field_Generated_and_Real_Defocus_CVPR_2022_paper.html
CVPR 2022
null
Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection
Nicolae-Cătălin Ristea, Neelu Madan, Radu Tudor Ionescu, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B. Moeslund, Mubarak Shah
Anomaly detection is commonly pursued as a one-class classification problem, where models can only learn from normal training samples, while being evaluated on both normal and abnormal test samples. Among the successful approaches for anomaly detection, a distinguished category of methods relies on predicting masked in...
https://openaccess.thecvf.com/content/CVPR2022/papers/Ristea_Self-Supervised_Predictive_Convolutional_Attentive_Block_for_Anomaly_Detection_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Ristea_Self-Supervised_Predictive_Convolutional_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2111.09099
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Ristea_Self-Supervised_Predictive_Convolutional_Attentive_Block_for_Anomaly_Detection_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Ristea_Self-Supervised_Predictive_Convolutional_Attentive_Block_for_Anomaly_Detection_CVPR_2022_paper.html
CVPR 2022
null
Safe Self-Refinement for Transformer-Based Domain Adaptation
Tao Sun, Cheng Lu, Tianshuo Zhang, Haibin Ling
Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain. It is a challenging problem especially when a large domain gap lies between the source and target domains. In this paper we propose a novel solution named SSRT (Safe Self-Refinement for ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Sun_Safe_Self-Refinement_for_Transformer-Based_Domain_Adaptation_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2204.07683
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_Safe_Self-Refinement_for_Transformer-Based_Domain_Adaptation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_Safe_Self-Refinement_for_Transformer-Based_Domain_Adaptation_CVPR_2022_paper.html
CVPR 2022
null
Density-Preserving Deep Point Cloud Compression
Yun He, Xinlin Ren, Danhang Tang, Yinda Zhang, Xiangyang Xue, Yanwei Fu
Local density of point clouds is crucial for representing local details, but has been overlooked by existing point cloud compression methods. To address this, we propose a novel deep point cloud compression method that preserves local density information. Our method works in an auto-encoder fashion: the encoder downsam...
https://openaccess.thecvf.com/content/CVPR2022/papers/He_Density-Preserving_Deep_Point_Cloud_Compression_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/He_Density-Preserving_Deep_Point_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.12684
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/He_Density-Preserving_Deep_Point_Cloud_Compression_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/He_Density-Preserving_Deep_Point_Cloud_Compression_CVPR_2022_paper.html
CVPR 2022
null
StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions
Lukas Höllein, Justin Johnson, Matthias Nießner
We apply style transfer on mesh reconstructions of indoor scenes. This enables VR applications like experiencing 3D environments painted in the style of a favorite artist. Style transfer typically operates on 2D images, making stylization of a mesh challenging. When optimized over a variety of poses, stylization patter...
https://openaccess.thecvf.com/content/CVPR2022/papers/Hollein_StyleMesh_Style_Transfer_for_Indoor_3D_Scene_Reconstructions_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Hollein_StyleMesh_Style_Transfer_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Hollein_StyleMesh_Style_Transfer_for_Indoor_3D_Scene_Reconstructions_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Hollein_StyleMesh_Style_Transfer_for_Indoor_3D_Scene_Reconstructions_CVPR_2022_paper.html
CVPR 2022
null
Which Model To Transfer? Finding the Needle in the Growing Haystack
Cedric Renggli, André Susano Pinto, Luka Rimanic, Joan Puigcerver, Carlos Riquelme, Ce Zhang, Mario Lučić
Transfer learning has been recently popularized as a data-efficient alternative to training models from scratch, in particular for computer vision tasks where it provides a remarkably solid baseline. The emergence of rich model repositories, such as TensorFlow Hub, enables the practitioners and researchers to unleash t...
https://openaccess.thecvf.com/content/CVPR2022/papers/Renggli_Which_Model_To_Transfer_Finding_the_Needle_in_the_Growing_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Renggli_Which_Model_To_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2010.06402
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Renggli_Which_Model_To_Transfer_Finding_the_Needle_in_the_Growing_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Renggli_Which_Model_To_Transfer_Finding_the_Needle_in_the_Growing_CVPR_2022_paper.html
CVPR 2022
null
Fast and Unsupervised Action Boundary Detection for Action Segmentation
Zexing Du, Xue Wang, Guoqing Zhou, Qing Wang
To deal with the great number of untrimmed videos produced every day, we propose an efficient unsupervised action segmentation method by detecting boundaries, named action boundary detection (ABD). In particular, the proposed method has the following advantages: no training stage and low-latency inference. To detect ac...
https://openaccess.thecvf.com/content/CVPR2022/papers/Du_Fast_and_Unsupervised_Action_Boundary_Detection_for_Action_Segmentation_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Du_Fast_and_Unsupervised_Action_Boundary_Detection_for_Action_Segmentation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Du_Fast_and_Unsupervised_Action_Boundary_Detection_for_Action_Segmentation_CVPR_2022_paper.html
CVPR 2022
null
Class-Incremental Learning With Strong Pre-Trained Models
Tz-Ying Wu, Gurumurthy Swaminathan, Zhizhong Li, Avinash Ravichandran, Nuno Vasconcelos, Rahul Bhotika, Stefano Soatto
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes). Instead, we explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes. We hypothesize that a strong base model can p...
https://openaccess.thecvf.com/content/CVPR2022/papers/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Wu_Class-Incremental_Learning_With_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.03634
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.html
CVPR 2022
null
Robust Optimization As Data Augmentation for Large-Scale Graphs
Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, Tom Goldstein
Data augmentation helps neural networks generalize better by enlarging the training set, but it remains an open question how to effectively augment graph data to enhance the performance of GNNs (Graph Neural Networks). While most existing graph regularizers focus on manipulating graph topological structures by adding/r...
https://openaccess.thecvf.com/content/CVPR2022/papers/Kong_Robust_Optimization_As_Data_Augmentation_for_Large-Scale_Graphs_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Kong_Robust_Optimization_As_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2010.09891
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Kong_Robust_Optimization_As_Data_Augmentation_for_Large-Scale_Graphs_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Kong_Robust_Optimization_As_Data_Augmentation_for_Large-Scale_Graphs_CVPR_2022_paper.html
CVPR 2022
null
Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks With Implicit Gradients
Kaidong Li, Ziming Zhang, Cuncong Zhong, Guanghui Wang
Deep neural networks for 3D point cloud classification, such as PointNet, have been demonstrated to be vulnerable to adversarial attacks. Current adversarial defenders often learn to denoise the (attacked) point clouds by reconstruction, and then feed them to the classifiers as input. In contrast to the literature, we ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Robust_Structured_Declarative_Classifiers_for_3D_Point_Clouds_Defending_Adversarial_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2203.15245
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Robust_Structured_Declarative_Classifiers_for_3D_Point_Clouds_Defending_Adversarial_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Li_Robust_Structured_Declarative_Classifiers_for_3D_Point_Clouds_Defending_Adversarial_CVPR_2022_paper.html
CVPR 2022
null
PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes
Yu-Ying Yeh, Zhengqin Li, Yannick Hold-Geoffroy, Rui Zhu, Zexiang Xu, Miloš Hašan, Kalyan Sunkavalli, Manmohan Chandraker
Most indoor 3D scene reconstruction methods focus on recovering 3D geometry and scene layout. In this work, we go beyond this to propose PhotoScene, a framework that takes input image(s) of a scene along with approximately aligned CAD geometry (either reconstructed automatically or manually specified) and builds a phot...
https://openaccess.thecvf.com/content/CVPR2022/papers/Yeh_PhotoScene_Photorealistic_Material_and_Lighting_Transfer_for_Indoor_Scenes_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Yeh_PhotoScene_Photorealistic_Material_CVPR_2022_supplemental.zip
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Yeh_PhotoScene_Photorealistic_Material_and_Lighting_Transfer_for_Indoor_Scenes_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Yeh_PhotoScene_Photorealistic_Material_and_Lighting_Transfer_for_Indoor_Scenes_CVPR_2022_paper.html
CVPR 2022
null
Improving the Transferability of Targeted Adversarial Examples Through Object-Based Diverse Input
Junyoung Byun, Seungju Cho, Myung-Joon Kwon, Hee-Seon Kim, Changick Kim
The transferability of adversarial examples enables the deception of black-box models, and transfer-based targeted attacks have attracted a lot of interest due to their practical applicability. To maximize the transfer success rate, adversarial examples should avoid overfitting to the source model, and image augmentatio...
https://openaccess.thecvf.com/content/CVPR2022/papers/Byun_Improving_the_Transferability_of_Targeted_Adversarial_Examples_Through_Object-Based_Diverse_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Byun_Improving_the_Transferability_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2203.09123
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Byun_Improving_the_Transferability_of_Targeted_Adversarial_Examples_Through_Object-Based_Diverse_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Byun_Improving_the_Transferability_of_Targeted_Adversarial_Examples_Through_Object-Based_Diverse_CVPR_2022_paper.html
CVPR 2022
null
IRON: Inverse Rendering by Optimizing Neural SDFs and Materials From Photometric Images
Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely
We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines. We propose a neural inverse rendering pipeline called IRON that operates on photome...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_IRON_Inverse_Rendering_by_Optimizing_Neural_SDFs_and_Materials_From_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Zhang_IRON_Inverse_Rendering_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.02232
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_IRON_Inverse_Rendering_by_Optimizing_Neural_SDFs_and_Materials_From_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_IRON_Inverse_Rendering_by_Optimizing_Neural_SDFs_and_Materials_From_CVPR_2022_paper.html
CVPR 2022
null
ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer
Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, Jiajun Wu
Objects play a crucial role in our everyday activities. Though multisensory object-centric learning has shown great potential lately, the modeling of objects in prior work is rather unrealistic. ObjectFolder 1.0 is a recent dataset that introduces 100 virtualized objects with visual, auditory, and tactile sensory data....
https://openaccess.thecvf.com/content/CVPR2022/papers/Gao_ObjectFolder_2.0_A_Multisensory_Object_Dataset_for_Sim2Real_Transfer_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Gao_ObjectFolder_2.0_A_Multisensory_Object_Dataset_for_Sim2Real_Transfer_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Gao_ObjectFolder_2.0_A_Multisensory_Object_Dataset_for_Sim2Real_Transfer_CVPR_2022_paper.html
CVPR 2022
null
Versatile Multi-Modal Pre-Training for Human-Centric Perception
Fangzhou Hong, Liang Pan, Zhongang Cai, Ziwei Liu
Human-centric perception plays a vital role in vision and graphics. But its data annotations are prohibitively expensive. Therefore, it is desirable to have a versatile pre-trained model that serves as a foundation for data-efficient transfer to downstream tasks. To this end, we propose the Human-Centric Multi-Modal Contr...
https://openaccess.thecvf.com/content/CVPR2022/papers/Hong_Versatile_Multi-Modal_Pre-Training_for_Human-Centric_Perception_CVPR_2022_paper.pdf
null
http://arxiv.org/abs/2203.13815
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Hong_Versatile_Multi-Modal_Pre-Training_for_Human-Centric_Perception_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Hong_Versatile_Multi-Modal_Pre-Training_for_Human-Centric_Perception_CVPR_2022_paper.html
CVPR 2022
null
360MonoDepth: High-Resolution 360° Monocular Depth Estimation
Manuel Rey-Area, Mingze Yuan, Christian Richardt
360° cameras can capture complete environments in a single shot, which makes 360° imagery alluring in many computer vision tasks. However, monocular depth estimation remains a challenge for 360° data, particularly for high resolutions like 2K (2048x1024) and beyond that are important for novel-view synthesis and ...
https://openaccess.thecvf.com/content/CVPR2022/papers/Rey-Area_360MonoDepth_High-Resolution_360deg_Monocular_Depth_Estimation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Rey-Area_360MonoDepth_High-Resolution_360deg_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Rey-Area_360MonoDepth_High-Resolution_360deg_Monocular_Depth_Estimation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Rey-Area_360MonoDepth_High-Resolution_360deg_Monocular_Depth_Estimation_CVPR_2022_paper.html
CVPR 2022
null
Splicing ViT Features for Semantic Appearance Transfer
Narek Tumanyan, Omer Bar-Tal, Shai Bagon, Tali Dekel
We present a method for semantically transferring the visual appearance of one natural image to another. Specifically, our goal is to generate an image in which objects in a source structure image are "painted" with the visual appearance of their semantically related objects in a target appearance image. Our method wor...
https://openaccess.thecvf.com/content/CVPR2022/papers/Tumanyan_Splicing_ViT_Features_for_Semantic_Appearance_Transfer_CVPR_2022_paper.pdf
null
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Tumanyan_Splicing_ViT_Features_for_Semantic_Appearance_Transfer_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Tumanyan_Splicing_ViT_Features_for_Semantic_Appearance_Transfer_CVPR_2022_paper.html
CVPR 2022
null
Contrastive Regression for Domain Adaptation on Gaze Estimation
Yaoming Wang, Yangzhou Jiang, Jin Li, Bingbing Ni, Wenrui Dai, Chenglin Li, Hongkai Xiong, Teng Li
Appearance-based Gaze Estimation leverages deep neural networks to regress the gaze direction from monocular images and achieve impressive performance. However, its success depends on expensive and cumbersome annotation capture. When lacking precise annotation, the large domain gap hinders the performance of trained mo...
https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Contrastive_Regression_for_Domain_Adaptation_on_Gaze_Estimation_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Wang_Contrastive_Regression_for_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Contrastive_Regression_for_Domain_Adaptation_on_Gaze_Estimation_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Contrastive_Regression_for_Domain_Adaptation_on_Gaze_Estimation_CVPR_2022_paper.html
CVPR 2022
null
MUSE-VAE: Multi-Scale VAE for Environment-Aware Long Term Trajectory Prediction
Mihee Lee, Samuel S. Sohn, Seonghyeon Moon, Sejong Yoon, Mubbasir Kapadia, Vladimir Pavlovic
Accurate long-term trajectory prediction in complex scenes, where multiple agents (e.g., pedestrians or vehicles) interact with each other and the environment while attempting to accomplish diverse and often unknown goals, is a challenging stochastic forecasting problem. In this work, we propose MUSE-VAE, a new probabi...
https://openaccess.thecvf.com/content/CVPR2022/papers/Lee_MUSE-VAE_Multi-Scale_VAE_for_Environment-Aware_Long_Term_Trajectory_Prediction_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Lee_MUSE-VAE_Multi-Scale_VAE_CVPR_2022_supplemental.pdf
null
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Lee_MUSE-VAE_Multi-Scale_VAE_for_Environment-Aware_Long_Term_Trajectory_Prediction_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Lee_MUSE-VAE_Multi-Scale_VAE_for_Environment-Aware_Long_Term_Trajectory_Prediction_CVPR_2022_paper.html
CVPR 2022
null
Multi-View Consistent Generative Adversarial Networks for 3D-Aware Image Synthesis
Xuanmeng Zhang, Zhedong Zheng, Daiheng Gao, Bang Zhang, Pan Pan, Yi Yang
3D-aware image synthesis aims to generate images of objects from multiple views by learning a 3D representation. However, one key challenge remains: existing approaches lack geometry constraints, hence usually fail to generate multi-view consistent images. To address this challenge, we propose Multi-View Consistent Gen...
https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_Multi-View_Consistent_Generative_Adversarial_Networks_for_3D-Aware_Image_Synthesis_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Zhang_Multi-View_Consistent_Generative_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2204.06307
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Multi-View_Consistent_Generative_Adversarial_Networks_for_3D-Aware_Image_Synthesis_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Multi-View_Consistent_Generative_Adversarial_Networks_for_3D-Aware_Image_Synthesis_CVPR_2022_paper.html
CVPR 2022
null
Putting People in Their Place: Monocular Regression of 3D People in Depth
Yu Sun, Wu Liu, Qian Bao, Yili Fu, Tao Mei, Michael J. Black
Given an image with multiple people, our goal is to directly regress the pose and shape of all the people as well as their relative depth. Inferring the depth of a person in an image, however, is fundamentally ambiguous without knowing their height. This is particularly problematic when the scene contains people of ver...
https://openaccess.thecvf.com/content/CVPR2022/papers/Sun_Putting_People_in_Their_Place_Monocular_Regression_of_3D_People_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Sun_Putting_People_in_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2112.08274
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_Putting_People_in_Their_Place_Monocular_Regression_of_3D_People_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Sun_Putting_People_in_Their_Place_Monocular_Regression_of_3D_People_CVPR_2022_paper.html
CVPR 2022
null
POCO: Point Convolution for Surface Reconstruction
Alexandre Boulch, Renaud Marlet
Implicit neural networks have been successfully used for surface reconstruction from point clouds. However, many of them face scalability issues as they encode the isosurface function of a whole object or scene into a single latent vector. To overcome this limitation, a few approaches infer latent vectors on a coarse r...
https://openaccess.thecvf.com/content/CVPR2022/papers/Boulch_POCO_Point_Convolution_for_Surface_Reconstruction_CVPR_2022_paper.pdf
https://openaccess.thecvf.com/content/CVPR2022/supplemental/Boulch_POCO_Point_Convolution_CVPR_2022_supplemental.pdf
http://arxiv.org/abs/2201.01831
https://openaccess.thecvf.com
https://openaccess.thecvf.com/content/CVPR2022/html/Boulch_POCO_Point_Convolution_for_Surface_Reconstruction_CVPR_2022_paper.html
https://openaccess.thecvf.com/content/CVPR2022/html/Boulch_POCO_Point_Convolution_for_Surface_Reconstruction_CVPR_2022_paper.html
CVPR 2022
null