Title: VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition

URL Source: https://arxiv.org/html/2403.14594

Mariia Gladkova 1,2⋆ Yan Xia 1,2† Rui Wang 3 Daniel Cremers 1,2

1 TU Munich 2 Munich Center for Machine Learning 3 Microsoft

{yunjin.li, mariia.gladkova, yan.xia, cremers}@tum.de wangr@microsoft.com
11
+
12
+ ###### Abstract
13
+
14
+ Cross-modal place recognition methods are flexible GPS-alternatives under varying environment conditions and sensor setups. However, this task is non-trivial since extracting consistent and robust global descriptors from different modalities is challenging. To tackle this issue, we propose Voxel-Cross-Pixel (VXP), a novel camera-to-LiDAR place recognition framework that enforces local similarities in a self-supervised manner and effectively brings global context from images and LiDAR scans into a shared feature space. Specifically, VXP is trained in three stages: first, we deploy a visual transformer to compactly represent input images. Secondly, we establish local correspondences between image-based and point cloud-based feature spaces using our novel geometric alignment module. We then aggregate local similarities into an expressive shared latent space. Extensive experiments on the three benchmarks (Oxford RobotCar, ViViD++ and KITTI) demonstrate that our method surpasses the state-of-the-art cross-modal retrieval by a large margin. Our evaluations show that the proposed method is accurate, efficient and light-weight. Our project page is available at: [https://yunjinli.github.io/projects-vxp/](https://yunjinli.github.io/projects-vxp/).
15
+
16
+ 2 2 footnotetext: Corresponding author. * Equal contribution.
1 Introduction
--------------
Since the emergence of autonomous systems, global place recognition has become essential for mobile robotics. Despite the widespread availability of the Global Navigation Satellite System (GNSS), signal outages remain inevitable, particularly in parking spaces or urban areas where buildings or tunnels can block satellite signals [[42](https://arxiv.org/html/2403.14594v2#bib.bib42)]. These disruptions are critical challenges for achieving autonomous driving on a city-wide scale and must be managed using onboard devices like cameras [[4](https://arxiv.org/html/2403.14594v2#bib.bib4)], LiDARs [[46](https://arxiv.org/html/2403.14594v2#bib.bib46)], or radars [[40](https://arxiv.org/html/2403.14594v2#bib.bib40)]. The Autonomous Vehicle (AV) sensor suite provides various strategies for data recording and thus enables alternative ways of global localization in GNSS-denied areas. Although numerous solutions have been proposed within the computer vision and robotics communities, most still rely on the same type of data during both map acquisition and operation. This dependence on a single data source may limit the applicability of these solutions in cases of sensor malfunctions or variations in sensor setups. Consequently, there is a need for more flexible localization methods that can take advantage of different sensor modalities under varying environmental conditions. This presents significant potential for cross-modal place recognition techniques. While multi-modal approaches require data to be available from all sensors, cross-modal methods are intended to be more flexible and to switch seamlessly between the map and query sources. For instance, a camera-to-LiDAR method would support querying a database of encoded LiDAR scans with RGB images (2D-3D localization). In practical terms, it would save the on-board computational load of processing large point clouds and guarantee global localization from image data even in cases of LiDAR malfunction.
Although cross-modal place recognition offers significant potential, it also presents challenges due to substantial differences between observations from different sensors. Specifically, in camera-to-LiDAR localization, images and point clouds exhibit a clear gap in both raw data (2D images vs. 3D scans) and extracted features. The lack of explicit correlation between these two data modalities complicates the development of cross-modal global localization solutions, and only a few approaches have tackled the task so far. Cattaneo et al. [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] first introduce 2D and 3D feature extraction networks to create a shared embedding space between images and point clouds. LC² [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)] proposes to transform images and point clouds into the same 2.5D space to reduce the domain gap. LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] advocates the use of a multi-class N-pair batched loss in a contrastive learning regime to boost cross-modal retrieval. While these methods focus on designing powerful networks to encode data into robust global descriptors, they ignore the geometric relation between local structures captured by both modalities. Local consistency not only provides additional constraints that effectively bridge the domain gap during cross-modal training, but also enhances the representative power of the shared latent space.
![Image 1: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/teaser.jpg)

![Image 2: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/oxfordrecallcurve.png)

Figure 1: (Left) Voxel-Cross-Pixel (VXP) can effectively map data from different modalities (2D images and 3D LiDAR scans) into a shared latent space, which exhibits local similarities and captures global context. (Right) Recall for up to K = 25 retrieved places on the Oxford RobotCar benchmark. VXP consistently demonstrates superior cross-modal large-scale global retrieval performance.
In light of this, we introduce Voxel-Cross-Pixel (VXP), a novel method for camera-LiDAR place recognition. Our pipeline is three-fold: first, we leverage the power of visual transformers to obtain an expressive feature map and a compact global embedding for an input image. Second, we choose a sparse voxelized representation of the corresponding LiDAR scan and hierarchically aggregate features using sparse 3D convolutions. By means of projective geometry, we establish local feature correspondences between image- and voxel-based feature maps and enforce their similarity during training. Lastly, we enforce similarities between global descriptors of cross-modal matches. This comprehensive training paradigm enables the network to effectively capture both fine-grained local details and broader global context, facilitating successful cross-modal learning. We evaluate our model on three real-world datasets, achieving state-of-the-art cross-modal retrieval.
To summarize, the main contributions of the paper are:

* We propose a novel framework for cross-modal place recognition, Voxel-Cross-Pixel (VXP), which effectively encodes images and LiDAR scans in a shared latent space.
* We demonstrate the effectiveness of local similarity constraints in learning robust global descriptors for the cross-modal place recognition task.
* We establish state-of-the-art performance in cross-modal retrieval on the Oxford RobotCar and ViViD++ datasets and the KITTI benchmark, while maintaining high uni-modal global localization accuracy.
2 Related Work
--------------

In this section, we first review uni-modal place recognition techniques, then introduce fusion-based approaches, and finally present the existing cross-modal methods.
Visual and point cloud-based retrieval. Uni-modal place recognition methods operate within one sensor type and aim to find the closest match to a query in a database. The most widely researched modalities are visual and LiDAR-based, while other types such as radar have recently received attention from the community [[5](https://arxiv.org/html/2403.14594v2#bib.bib5)]. Traditional image-based approaches, such as bag-of-words [[14](https://arxiv.org/html/2403.14594v2#bib.bib14)], represent different places with a visual vocabulary of quantized local descriptors [[30](https://arxiv.org/html/2403.14594v2#bib.bib30)] and are widely used in the SLAM community for re-localization and loop closure [[7](https://arxiv.org/html/2403.14594v2#bib.bib7), [16](https://arxiv.org/html/2403.14594v2#bib.bib16)]. In recent years, Convolutional Neural Network (CNN)-based methods have gained popularity for their expressiveness and enhanced robustness. Arandjelović et al. introduced NetVLAD [[4](https://arxiv.org/html/2403.14594v2#bib.bib4)], a CNN-based approach that encodes RGB images into dense feature maps and learns to effectively aggregate these features into a global descriptor. CosPlace [[6](https://arxiv.org/html/2403.14594v2#bib.bib6)] casts retrieval as a classification task. Recent works [[1](https://arxiv.org/html/2403.14594v2#bib.bib1), [2](https://arxiv.org/html/2403.14594v2#bib.bib2)] process CNN-extracted features with a Conv-AP layer or a Feature-Mixer. AnyLoc [[19](https://arxiv.org/html/2403.14594v2#bib.bib19)] utilizes features from an off-the-shelf self-supervised model (DINOv2 [[28](https://arxiv.org/html/2403.14594v2#bib.bib28)]) to achieve state-of-the-art performance on many VPR benchmarks.
As for LiDAR-based place recognition, Uy et al. proposed PointNetVLAD [[38](https://arxiv.org/html/2403.14594v2#bib.bib38)], employing PointNet [[32](https://arxiv.org/html/2403.14594v2#bib.bib32)] to extract features from a point cloud map and aggregating them into a global descriptor with a subsequent NetVLAD layer. LPD-Net, introduced by Liu et al. [[26](https://arxiv.org/html/2403.14594v2#bib.bib26)], proposes an adaptive local feature extraction module together with a graph-based aggregation module to effectively combine the local features. SOE-Net [[43](https://arxiv.org/html/2403.14594v2#bib.bib43)] first introduces orientation encoding into PointNet and a self-attention unit to generate a robust 3D global descriptor. Furthermore, various methods [[50](https://arxiv.org/html/2403.14594v2#bib.bib50), [12](https://arxiv.org/html/2403.14594v2#bib.bib12)] explore the integration of transformer networks to learn long-range contextual relationships. In contrast, MinkLoc3D [[20](https://arxiv.org/html/2403.14594v2#bib.bib20)] employs a voxel-based strategy to generate a compact global descriptor. However, voxelization methods inevitably suffer from information loss due to quantization. The recent CASSPR [[44](https://arxiv.org/html/2403.14594v2#bib.bib44)] thus introduces a hierarchical cross-attention transformer, combining the advantages of voxel-based and point-based strategies. Text2Loc [[47](https://arxiv.org/html/2403.14594v2#bib.bib47)] achieves 3D localization based on textual descriptions. In this paper, we bring the best practices of the 2D image and 3D point cloud communities together into a coherent framework that achieves state-of-the-art performance in cross-modal retrieval.
Fused-Modal Place Recognition. LiDAR-based methods are more robust to variations in illumination and appearance than vision-based approaches. However, the obtained scans are limited in capturing fine details of the observed scenes, while image data offers rich and dense scene capture. To this end, researchers have started exploring the fusion of image and LiDAR data for the place recognition task. Pan et al. proposed CORAL [[29](https://arxiv.org/html/2403.14594v2#bib.bib29)], in which point cloud data is converted into an elevation image for subsequent fusion. MinkLoc++ [[21](https://arxiv.org/html/2403.14594v2#bib.bib21)], on the other hand, employs a late-fusion technique, processing point cloud and image data separately and fusing them at the final stage. While our approach relies on having both image and LiDAR data available during training, the chosen architecture with two independent branches lets us handle a single data stream during inference, which enables cross-modal retrieval.
Cross-Modal Place Recognition. Cattaneo et al. [[9](https://arxiv.org/html/2403.14594v2#bib.bib9)] were the first to introduce this task, proposing a data-driven method in which two networks are trained to encode images and point cloud maps separately using a teacher-student approach. The image network (teacher) is first trained with a triplet loss [[36](https://arxiv.org/html/2403.14594v2#bib.bib36)], and the point cloud network (student) is then trained to align point embeddings within the shared latent space. In our work, we build on this paradigm with a stronger image backbone and enhance global descriptors by incorporating local feature constraints. The LC² approach, proposed by Lee et al. [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)], presents an alternative method for cross-modal retrieval, bridging the domain gap by pre-processing sensor data into the same representation. Specifically, both types of data are converted into a 2.5D space: RGB images are turned into disparity maps using a depth network [[41](https://arxiv.org/html/2403.14594v2#bib.bib41)] and LiDAR point clouds are transformed into range images. A self-supervised pre-training scheme [[24](https://arxiv.org/html/2403.14594v2#bib.bib24)] is employed on the encoders, enabling network convergence. A similar method, LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)], also processes LiDAR scans into range images and optimizes the encoders with contrastive learning. In comparison, our method directly handles raw input data and does not require computationally demanding pre-processing steps such as the generation of range images or depth maps, which is more favorable for on-board devices.
A few studies tackle cross-modal registration, such as 2D-3D re-localization [[25](https://arxiv.org/html/2403.14594v2#bib.bib25), [35](https://arxiv.org/html/2403.14594v2#bib.bib35), [39](https://arxiv.org/html/2403.14594v2#bib.bib39), [13](https://arxiv.org/html/2403.14594v2#bib.bib13)]. These methods primarily concentrate on accurately aligning a given camera view with a corresponding point cloud map and estimating the relative 6-DoF transformation between them. In our work, we propose a solution for finding the cross-modal pairs, which are often unavailable in real-world scenarios, and advocate the usefulness of local constraints in achieving this goal.
3 Problem Statement
-------------------

We begin by defining the task of cross-modal place recognition. In particular, we are interested in camera-to-LiDAR retrieval; however, the definition naturally extends to other modalities such as radar.

Given a reference map $M_{\text{ref}}$, where each element (a 2D image $I$ or a 3D point cloud $P$) is tagged with a GPS coordinate, we aim to retrieve the geographically closest match to a query $Q$ from a different sensor modality, such as a LiDAR scanner or a camera respectively. With this, cross-modal place recognition can be defined formally for 3D-2D as
$I^{*} = \operatorname{argmin} \{ d(g(Q), f(I)) \}$

or for 2D-3D as

$P^{*} = \operatorname{argmin} \{ d(f(Q), g(P)) \},$

where $d(\cdot)$ is a distance metric (e.g. the L1 norm), $f$ is an image network, $g$ is a point cloud network, and $I, P \in M_{\text{ref}}$. This step can be done efficiently using a KD-tree (e.g. from the FAISS library [[18](https://arxiv.org/html/2403.14594v2#bib.bib18)]).
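As a concrete illustration of this retrieval step, the sketch below performs nearest-neighbour search over pre-computed global descriptors with plain numpy. It is a brute-force stand-in for the KD-tree/FAISS index mentioned above; `retrieve_top_k` and the toy descriptors are illustrative, not part of the paper's code.

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=1):
    """Brute-force nearest-neighbour search over global descriptors.

    query_desc: (D,) query embedding, e.g. f(Q) from the image branch.
    db_descs:   (N, D) database embeddings, e.g. g(P) for all LiDAR scans.
    Returns indices of the k closest database entries under L2 distance.
    """
    dists = np.linalg.norm(db_descs - query_desc[None, :], axis=1)
    return np.argsort(dists)[:k]

# Toy example: 4 database descriptors and one query.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
q = np.array([1.0, 0.05])
top2 = retrieve_top_k(q, db, k=2)
```

In a large-scale deployment the same lookup would be backed by an approximate or tree-based index, as the text suggests.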
4 Method
--------

![Image 3: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/pipeline.jpg)

Figure 2: The VXP pipeline comprises three steps: (1) image network training ([Sec.4.1](https://arxiv.org/html/2403.14594v2#S4.SS1 "4.1 Image Network ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")), (2) cross-modal local feature training ([Sec.4.2](https://arxiv.org/html/2403.14594v2#S4.SS2 "4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")), and (3) cross-modal global descriptor training ([Sec.4.3](https://arxiv.org/html/2403.14594v2#S4.SS3 "4.3 Cross-Modal Global Descriptor Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")). From step (2) on, the image features are frozen (\scalerel*![Image 4: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/freeze.jpeg)), while the point cloud features are trained (\scalerel*![Image 5: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/fire.png)). The two networks operate independently during inference, so queries and database samples can be processed separately. The objective is to map data from different modalities into a shared latent space and minimize the distance (e.g. L2 norm) between global descriptors of different modalities taken from the same place.
In this section, we introduce our cross-modal place recognition approach in detail. We design two separate networks that map an image and a point cloud into a shared latent space.

In practice, dealing with raw point cloud data, which typically consists of thousands of points, poses a significant computational challenge. To tackle this problem, we downsample each input scan before feeding it to a network. To this end, we leverage point cloud grouping techniques, which have also been shown to effectively capture local structures [[33](https://arxiv.org/html/2403.14594v2#bib.bib33)]. Specifically, we deploy a voxelization method [[49](https://arxiv.org/html/2403.14594v2#bib.bib49)] to transform the raw point cloud $\textbf{P} \in \mathbb{R}^{N \times 3}$ into a voxel grid $\textbf{V} = \{\textbf{v}_{i} \in \mathbb{R}^{M \times 3}, \textbf{c}_{i} \in \mathbb{R}^{3}\}_{i=1,2,...,T}$, where $T$ is the number of non-empty voxels and $M$ is the maximal number of points within a voxel. If the number of points in a voxel is lower than $M$, we zero-pad. From this point on, the framework employs a voxel-based representation of LiDAR scans.
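The grouping step above can be sketched as follows: a minimal numpy voxelization with zero-padding. `voxelize` is a hypothetical helper under assumed conventions (points beyond $M$ per voxel are simply truncated), not the paper's VoxelNet-style implementation.

```python
import numpy as np

def voxelize(points, voxel_size, max_pts):
    """Group a raw point cloud P (N, 3) into non-empty voxels.

    Returns (T, max_pts, 3) zero-padded point groups v_i and (T, 3)
    integer voxel coordinates c_i. Voxels holding fewer than max_pts
    points are zero-padded; surplus points are truncated.
    """
    origin = points.min(axis=0)
    coords = np.floor((points - origin) / voxel_size).astype(int)
    keys, inverse = np.unique(coords, axis=0, return_inverse=True)
    groups = np.zeros((len(keys), max_pts, 3), dtype=points.dtype)
    counts = np.zeros(len(keys), dtype=int)
    for p, v in zip(points, inverse):
        if counts[v] < max_pts:          # drop points beyond capacity M
            groups[v, counts[v]] = p
            counts[v] += 1
    return groups, keys
```

Only the $T$ non-empty voxels are materialized, which is what makes the subsequent sparse 3D convolutions cheap.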
Our Voxel-Cross-Pixel (VXP) pipeline comprises three steps, as shown in [Fig.2](https://arxiv.org/html/2403.14594v2#S4.F2 "In 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). First, we train an image network to learn distinctive global descriptors from positive and negative image pairs ([Sec.4.1](https://arxiv.org/html/2403.14594v2#S4.SS1 "4.1 Image Network ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")). The learned feature space guides optimization in the second stage, where we enforce local correspondences by deploying the Voxel-Pixel Projection module in the point cloud branch ([Sec.4.2](https://arxiv.org/html/2403.14594v2#S4.SS2 "4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")). Lastly, we optimize the similarity between global descriptors to ensure consistency ([Sec.4.3](https://arxiv.org/html/2403.14594v2#S4.SS3 "4.3 Cross-Modal Global Descriptor Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")).
### 4.1 Image Network

The image network architecture comprises two components: (1) the DINO ViTs-8 encoder and (2) a global pooling layer (GeM + FCN), as illustrated in [Fig.2](https://arxiv.org/html/2403.14594v2#S4.F2 "In 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). First, an RGB image $I \in \mathbb{R}^{H \times W \times 3}$ is processed by the DINO ViTs-8 encoder $f^{enc}: \mathbb{R}^{H \times W \times 3} \rightarrow \mathbb{R}^{H^{*} \times W^{*} \times D}$, where $H^{*} = H/8$ and $W^{*} = W/8$. This operation yields a 2D feature map of local image descriptors. These features are then passed through the global pooling layer $f^{pool}: \mathbb{R}^{H^{*} \times W^{*} \times D} \rightarrow \mathbb{R}^{D}$, producing a global image descriptor.
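The GeM stage of the pooling layer follows the standard generalized-mean formula; the sketch below assumes that conventional definition (it is not the paper's exact GeM + FCN layer).

```python
import numpy as np

def gem_pool(feat_map, p=3.0, eps=1e-6):
    """Generalized-mean (GeM) pooling over a (H*, W*, D) feature map.

    Reduces the spatial grid to one D-dim vector per channel:
        g_d = ( mean_{h,w} f[h,w,d]^p )^(1/p).
    p = 1 recovers average pooling; large p approaches max pooling.
    """
    x = np.clip(feat_map, eps, None)  # GeM assumes positive activations
    return np.power(np.mean(np.power(x, p), axis=(0, 1)), 1.0 / p)
```

An FCN on top of this pooled vector (as in the paper's pooling layer) would then fix the output dimensionality $D$ of the global descriptor.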
We train the image network in a contrastive learning regime using the triplet loss in [Eq.1](https://arxiv.org/html/2403.14594v2#S4.E1 "In 4.1 Image Network ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), with an anchor image $I_{i}^{a}$, a positive image $I_{i}^{p}$ captured close to the anchor's location, and a negative image $I_{i}^{n}$ positioned far away from the anchor.

$\mathcal{L}_{\text{img}} = \sum_{I_{i}^{a,p,n} \in \mathcal{B}} \left[ d(f(I_{i}^{a}), f(I_{i}^{p})) - d(f(I_{i}^{a}), f(I_{i}^{n})) + m \right]_{+}$ (1)

Here $d(\cdot)$ is the distance function, $f(\cdot)$ is the image branch model, $m$ is the margin, and $[\cdot]_{+}$ denotes $\max\{0, \cdot\}$. For more efficient training, within each mini-batch $\mathcal{B}$ we select the hardest positive sample (maximal distance to the anchor) and the hardest negative sample (minimal distance to the anchor).
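Eq. 1 with batch-hard mining can be sketched as a numpy forward pass. This is an illustration only: `batch_hard_triplet_loss` is a hypothetical helper operating on pre-computed embeddings, whereas actual training uses a differentiable implementation.

```python
import numpy as np

def batch_hard_triplet_loss(anchors, positives, negatives, margin=0.5):
    """Triplet margin loss (Eq. 1) with batch-hard mining.

    anchors:   (B, D)    anchor embeddings f(I_a)
    positives: (B, P, D) candidate positive embeddings per anchor
    negatives: (B, N, D) candidate negative embeddings per anchor
    """
    losses = []
    for a, pos, neg in zip(anchors, positives, negatives):
        d_pos = np.linalg.norm(pos - a, axis=1).max()  # hardest positive
        d_neg = np.linalg.norm(neg - a, axis=1).min()  # hardest negative
        losses.append(max(0.0, d_pos - d_neg + margin))  # hinge [.]_+
    return float(np.sum(losses))
```

The mining picks, per anchor, the farthest positive and the closest negative in the mini-batch, so gradients concentrate on the triplets that currently violate the margin most.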
### 4.2 Cross-modal Local Feature Training

In this section we describe the second stage of our pipeline, where we pre-train the point cloud branch using local feature correspondences. The overview is shown in [Fig.2](https://arxiv.org/html/2403.14594v2#S4.F2 "In 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition").

![Image 6: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/vxp_proj.jpg)

Figure 3: Illustration of our proposed local feature optimization between projected voxel- and image-based feature maps. $\phi$ represents "empty", as the 3D feature maps are sparse. Note that the voxel local descriptor is the $\textbf{v}_{i}^{out}$ introduced in [Eq.2](https://arxiv.org/html/2403.14594v2#S4.E2 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). After the projection, multiple $\textbf{v}_{i}^{out}$ could be projected as per [Eq.4](https://arxiv.org/html/2403.14594v2#S4.E4 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition").
Voxel Feature Encoding. The initial voxel feature $\textbf{v} \in \mathbb{R}^{M \times 3}$ aggregates information from the $M$ raw point coordinates contained within the voxel boundaries. We use VoxelNet [[49](https://arxiv.org/html/2403.14594v2#bib.bib49)] to extract a more detailed descriptor for each voxel, $\textbf{v} \in \mathbb{R}^{M \times 3} \rightarrow \mathbb{R}^{D^{*}}$. Finally, we perform a series of sparse 3D convolutions [[48](https://arxiv.org/html/2403.14594v2#bib.bib48)] to generate a sparse 3D feature map of grid size $28 \times 28 \times 28$, namely $\textbf{V}_{out}$, as formulated in [Eq.2](https://arxiv.org/html/2403.14594v2#S4.E2 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"):

$\textbf{V}_{\text{out}} = \{\textbf{v}_{i}^{\text{out}} \in \mathbb{R}^{D}, \textbf{c}_{i}^{\text{out}} \in \mathbb{R}^{3}\}_{i=1,2,...,T^{*}}$ (2)

Here $\textbf{v}_{i}^{out} \in \mathbb{R}^{D}$ is the local descriptor of a single voxel in the output voxel grid, a D-dimensional vector matching the channel size of the 2D features from $f^{enc}$, while $\textbf{c}_{i}^{out}$ denotes the coordinate of this voxel within the voxel grid, defined with respect to the voxel grid coordinate frame $\{\mathcal{V}\}$. Note that $T^{*}$ is the number of non-empty voxel local descriptors. Sparse convolutions allow us to aggregate spatial information from neighboring voxels in a hierarchical fashion, capturing long-distance relations.
105
+
106
+ Voxel-Pixel Projection. To bridge the domain gap between point clouds and images, we introduce a simple yet effective Voxel-Pixel Projection module, which projects voxels onto the image plane using the pinhole camera model. Note, however, that the voxel coordinates are defined within the voxel grid coordinate system $\{\mathcal{V}\}$. As per [Eq.3](https://arxiv.org/html/2403.14594v2#S4.E3 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), we first transform the voxels into the point cloud (LiDAR) coordinate frame and then apply the projection matrix $\mathbf{M}$ to map points onto the image plane. This way, we obtain the voxel-based feature map and establish local descriptor constraints with the image-based features. The projection matrix is assumed to be provided and comprises the intrinsic camera parameters and the extrinsic LiDAR-camera calibration transformation.
+
+ $$\lambda\begin{bmatrix}u_{i}\\ v_{i}\\ 1\end{bmatrix}=\mathbf{M}\cdot\left(\begin{bmatrix}v_{x}&0&0\\ 0&v_{y}&0\\ 0&0&v_{z}\end{bmatrix}\textbf{c}_{i}^{out}+\begin{bmatrix}v_{x}/2+x_{min}\\ v_{y}/2+y_{min}\\ v_{z}/2+z_{min}\end{bmatrix}\right)\tag{3}$$
+
+ Note that $(v_{x},v_{y},v_{z})$ are the voxel dimensions of the output voxel grid and the lower bound of the point cloud range is represented as $(x_{\text{min}},y_{\text{min}},z_{\text{min}})$.
+
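To make Eq. 3 concrete, here is a minimal NumPy sketch of the voxel-to-pixel projection. The voxel sizes and point cloud lower bound follow the values stated in Sec. 5.1, but the projection matrix `M` below is a hypothetical toy stand-in for the dataset-specific intrinsic/extrinsic calibration, not the one used in the paper:

```python
import numpy as np

# Voxel sizes and point cloud lower bound as in Sec. 5.1; M is a toy
# projection matrix (real calibration comes from the dataset).
voxel_size = np.array([0.4, 0.4, 0.2])     # (v_x, v_y, v_z)
pc_min = np.array([0.0, -22.0, -4.0])      # (x_min, y_min, z_min)
M = np.array([[112.0, -250.0, 0.0, 0.0],   # u = 112 - 250 * y / x
              [112.0, 0.0, -250.0, 0.0],   # v = 112 - 250 * z / x
              [1.0, 0.0, 0.0, 0.0]])       # depth lambda = x (forward axis)

def voxel_to_pixel(c_out, M, voxel_size, pc_min):
    """Project integer voxel-grid coordinates to pixel coordinates (Eq. 3)."""
    # {V} -> {P}: scale by voxel size, shift to the voxel center,
    # and offset by the lower bound of the point cloud range.
    p = voxel_size * c_out + voxel_size / 2.0 + pc_min
    # Pinhole projection with the combined projection matrix M.
    uvw = M @ np.append(p, 1.0)
    return uvw[:2] / uvw[2]                # divide out the depth lambda

uv = voxel_to_pixel(np.array([55, 55, 20]), M, voxel_size, pc_min)
```

With the toy calibration, a voxel near the center of the grid lands near the center of a 224-pixel image plane, matching the input resolution used for the image branch.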
+ Local Feature Optimization. During the local descriptor optimization phase shown in [Fig.3](https://arxiv.org/html/2403.14594v2#S4.F3 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), we utilize the projected voxel coordinates $(u_{i},v_{i})$ as indices to retrieve the corresponding local descriptors from the image feature map. Once retrieved, we apply the local descriptor loss as
+
+ $$\mathcal{L}_{\text{local}}=\sum_{(u_{i},v_{i})\in\mathcal{M}_{V}}\|d_{i}\cdot\mathcal{M}_{V}(u_{i},v_{i})-\mathcal{M}_{I}(u_{i},v_{i})\|_{\text{1; smooth}}.\tag{4}$$
+
+ The projected voxel feature map is denoted as $\mathcal{M}_{V}$ and the image feature map as $\mathcal{M}_{I}$. We also handle collisions, where multiple voxels project to the same pixel, by weighting descriptors with their voxels’ inverse depths $d_{i}$. This gives preference to voxels closer to the camera while still propagating gradients to all voxels, a strategy that yielded more stable training than z-buffering.
+
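A minimal sketch of Eq. 4 with the described inverse-depth weighting; the array shapes and the plain-Python loop are illustrative, whereas an actual implementation would operate on batched tensors:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    """Element-wise smooth L1 (Huber-style) penalty."""
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * ax ** 2 / beta, ax - 0.5 * beta)

def local_loss(voxel_feats, depths, pixels, image_map):
    """Sketch of Eq. 4: smooth-L1 between inverse-depth-weighted projected
    voxel descriptors and the image features at the same pixel locations.
    voxel_feats: (N, D), depths: (N,) voxel depths, pixels: (N, 2) integer
    (u, v) indices, image_map: (H, W, D)."""
    total = 0.0
    for feat, depth, (u, v) in zip(voxel_feats, depths, pixels):
        d_i = 1.0 / depth                  # closer voxels get larger weight
        total += smooth_l1(d_i * feat - image_map[v, u]).sum()
    return total
```

Because every projected voxel contributes a term, gradients flow to all colliding voxels rather than only to the front-most one, as described above.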
+ ### 4.3 Cross-Modal Global Descriptor Training
+
+ In the last stage, we fine-tune the Voxel-Pixel Encoder and train pooling layers together with the subsequent FCN to bring global embeddings closer to their image-based matches with
+
+ $$\mathcal{L}_{\text{global}}=\sum_{i}\|f(I_{i})-g(P_{i})\|_{\text{1; smooth}},\tag{5}$$
+
+ where $I_{i}$ and $P_{i}$ are an image and a point cloud corresponding to the same location, and $f(\cdot)$ and $g(\cdot)$ refer to the corresponding networks. This allows us to ensure global consistency of the aggregated descriptors in addition to the local similarities enforced in the previous stage.
+
+ 5 Experiments and Results
+ -------------------------
+
+ ### 5.1 Implementation Details
+
+ Image Network Training: We resize the images to $224\times 224$. During training of the image network, positive pairs are chosen from images that are within 10 meters, while negative pairs are defined from samples that are more than 25 meters away, as in [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)]. We set the margin of the triplet loss function to 0.3. To handle zero-triplets, i.e. anchor-positive-negative tuples with zero triplet loss, we gradually increase the batch size whenever the proportion of zero-triplets exceeds 30% of the original batch size. The training with a branch expansion rate is adopted from [[20](https://arxiv.org/html/2403.14594v2#bib.bib20)] and configured to 1.4, while the maximum batch size is set to 256. We use a pre-trained model (dino-vits8) and fine-tune all its parameters together with our GeM + FCN block using [Eq.1](https://arxiv.org/html/2403.14594v2#S4.E1 "In 4.1 Image Network ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). A custom batch sampler guarantees at least one positive pair within each batch $\mathcal{B}$. For each sample in $\mathcal{B}$, its hard positive / negative sample is the farthest / closest sample in $\mathcal{B}$ based on the L2 distance between the global descriptors.
+
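The in-batch hard-example mining described above can be sketched as follows; the descriptor matrix and place labels are illustrative inputs, and the real sampler operates on the learned global descriptors:

```python
import numpy as np

def hardest_in_batch(desc, labels):
    """For each anchor, pick the farthest same-place sample (hard positive)
    and the closest different-place sample (hard negative) by L2 distance.
    desc: (B, D) global descriptors, labels: (B,) place ids; every place
    is assumed to occur at least twice in the batch."""
    dist = np.linalg.norm(desc[:, None] - desc[None, :], axis=-1)  # (B, B)
    eye = np.eye(len(desc), dtype=bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye  # same place, not self
    neg_mask = labels[:, None] != labels[None, :]           # different place
    hard_pos = np.where(pos_mask, dist, -np.inf).argmax(axis=1)
    hard_neg = np.where(neg_mask, dist, np.inf).argmin(axis=1)
    return hard_pos, hard_neg
```

Masking invalid entries with $\pm\infty$ keeps the argmax/argmin restricted to valid positives and negatives without building per-anchor index lists.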
+ Point Cloud Network Training. We take the fine-tuned image network and freeze all its parameters during the training of the point cloud network. We adopt the following voxelization parameters: the point cloud boundary ranges are $x:[0,44]$, $y:[-22,22]$, $z:[-4,18]$, and the voxel dimensions are set to $[v_{x},v_{y},v_{z}]=[0.4,0.4,0.2]$. This yields a final voxel grid of size (110, 110, 110). Both cost functions $\mathcal{L}_{\text{local}}$ and $\mathcal{L}_{\text{global}}$ use the smooth L1 loss $\|\cdot\|_{\text{1; smooth}}$ to ensure robustness to outliers. The Adam optimizer and a LambdaLR learning rate scheduler are utilized in our training pipeline.
+
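As a quick sanity check, the stated ranges and voxel sizes indeed yield the (110, 110, 110) grid:

```python
# Each axis range divided by its voxel size gives the grid resolution.
bounds = {"x": (0.0, 44.0), "y": (-22.0, 22.0), "z": (-4.0, 18.0)}
voxel = {"x": 0.4, "y": 0.4, "z": 0.2}
grid = tuple(round((hi - lo) / voxel[axis]) for axis, (lo, hi) in bounds.items())
print(grid)  # (110, 110, 110)
```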
+ ### 5.2 Datasets
+
+ Oxford RobotCar Dataset. We utilize the Oxford RobotCar benchmark [[27](https://arxiv.org/html/2403.14594v2#bib.bib27)] for evaluation, in which the same trajectory was traveled over a year at different times of day and in different seasonal conditions. We generate data samples following the same protocol as Cattaneo et al. [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)], where an image is recorded every five meters and the corresponding point cloud map is constructed by concatenating the subsequent 2D LiDAR scans. The four test regions are excluded from the training dataset as per [[38](https://arxiv.org/html/2403.14594v2#bib.bib38)].
+
+ ViViD++ Dataset. Additionally, we assess the performance of our model on the ViViD++ dataset [[22](https://arxiv.org/html/2403.14594v2#bib.bib22)], which consists of driving and handheld sequences and offers 3D LiDAR, visual and GPS data. In the scope of our work, we are mainly interested in the urban data, which contains sensor measurements recorded during the day, in the evening and at night. We follow the training procedure proposed by Lee et al. [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)], where only the day1 sequences are used for training, while evaluation is performed on the day2 and night sequences.
+
+ KITTI Odometry Dataset. We further test the generalization capability of our VXP on the KITTI Odometry benchmark [[15](https://arxiv.org/html/2403.14594v2#bib.bib15)], which contains sequences with LiDAR scans, images, and ground-truth poses.
+
+ ### 5.3 Results
+
+ Across various datasets we evaluate different combinations of modalities for query and database: 2D-3D (image query and point cloud database), 3D-2D (point cloud query and image database) and their uni-modal variations, i.e. 2D-2D (image-only) and 3D-3D (point cloud-only).
+
+ Oxford RobotCar. We adhere to the evaluation metric employed by Cattaneo [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)], in which each pair of distinct runs from 23 sequences serves as query and database. The query contains samples only from the four excluded regions as per [[38](https://arxiv.org/html/2403.14594v2#bib.bib38)], while the database consists of samples from the entire trajectory. Finally, the recall is averaged over all pairs. In [Tab.1](https://arxiv.org/html/2403.14594v2#S5.T1 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), we compare our model with existing cross-modal retrieval approaches, namely the method by Cattaneo et al. [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)], $LC^{2}$ [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)] and LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)]. As the code from Cattaneo et al. [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] is not publicly released, we have implemented the approach with the authors’ help to the best of our abilities. We report performance on different modality configurations, namely database and query combinations of 2D images and 3D point clouds.
+
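For reference, the average recall@K metric used throughout this section can be computed along these lines for a single query/database pair; the descriptor arrays and ground-truth match sets below are placeholders:

```python
import numpy as np

def recall_at_k(query_desc, db_desc, gt_sets, k):
    """Fraction of queries whose k nearest database neighbors (L2 distance
    between global descriptors) contain at least one true match.
    gt_sets: one set of matching database indices per query."""
    dist = np.linalg.norm(query_desc[:, None] - db_desc[None, :], axis=-1)
    topk = np.argsort(dist, axis=1)[:, :k]          # k nearest db indices
    hits = [bool(set(row) & gt) for row, gt in zip(topk, gt_sets)]
    return float(np.mean(hits))
```

The per-pair values are then averaged over all run pairs, as described above.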
+ Our method outperforms other baselines on 2D-3D place recognition by a significant margin due to the proposed local constraints. We also demonstrate the best performance in uni-modal retrieval. Fig.[1](https://arxiv.org/html/2403.14594v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") shows the average recall up to $K=25$ nearest neighbors for cross-modal place recognition on the Oxford dataset. Our method is the most accurate across the whole K-range, outperforming all baselines from Cattaneo et al. [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] and LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)], which validates the consistency of our method.
+
+ Table 1: Retrieval performance compared with existing cross-modal methods on Oxford dataset. Our model consistently outperforms other baselines on both cross- and uni-modal settings.
+
+ | Method | 2D-2D R@1 | 2D-2D R@1% | 3D-3D R@1 | 3D-3D R@1% |
+ | --- | --- | --- | --- | --- |
+ | AnyLoc [[19](https://arxiv.org/html/2403.14594v2#bib.bib19)] | 93.5 | 98.9 | – | – |
+ | MixVPR [[2](https://arxiv.org/html/2403.14594v2#bib.bib2)] | 92.8 | 97.7 | – | – |
+ | MinkLoc3D-S [[51](https://arxiv.org/html/2403.14594v2#bib.bib51)] | – | – | 95.8 | 99.0 |
+ | CASSPR [[44](https://arxiv.org/html/2403.14594v2#bib.bib44)] | – | – | 94.7 | 98.4 |
+ | VXP (Ours) | 92.0 | 98.8 | 94.7 | 98.8 |
+
+ Table 2: Retrieval performance compared with existing uni-modal methods on Oxford dataset. Provided values correspond to Recall@1 and 1%. Our model has comparable performance with the uni-modal state-of-the-art approaches.
+
+ We also compare the performance of our method with the state-of-the-art uni-modal approaches for visual place recognition methods AnyLoc[[19](https://arxiv.org/html/2403.14594v2#bib.bib19)] and MixVPR[[2](https://arxiv.org/html/2403.14594v2#bib.bib2)], and LiDAR-based retrieval, such as MinkLoc3D-S[[51](https://arxiv.org/html/2403.14594v2#bib.bib51)] and CASSPR[[44](https://arxiv.org/html/2403.14594v2#bib.bib44)]. [Tab.2](https://arxiv.org/html/2403.14594v2#S5.T2 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") shows our method performs on-par with the uni-modal baselines, while additionally offering cross-modal capabilities that are practical for multi-sensor on-board suites.
+
+ ViViD++. We further evaluate our model on the ViViD++ dataset and compare the results of different approaches on the day1-day2 sequences in [Tab.3](https://arxiv.org/html/2403.14594v2#S5.T3 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Note that day1-day2 denotes queries from the day1 sequences and a database built from the day2 sequences. Overall, we outperform the other baselines [[10](https://arxiv.org/html/2403.14594v2#bib.bib10), [23](https://arxiv.org/html/2403.14594v2#bib.bib23), [31](https://arxiv.org/html/2403.14594v2#bib.bib31)] on cross-modal place recognition and perform on par with [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] on the uni-modal retrieval task.
+
+ We also evaluate our method on night-day retrieval, where the database map is recorded during the day and queries are obtained at night. We report the average performance computed on the city night-city day2 and campus night-campus day2 sequences from the dataset. Despite significant appearance differences between night queries and map samples recorded during the day, our VXP tackles this challenge by incorporating information from the LiDAR scans, which are not affected by insufficient lighting. As shown in [Tab.4](https://arxiv.org/html/2403.14594v2#S5.T4 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), image retrieval (2D-2D) struggles in this challenging scenario, while cross-modal recognition offers more accurate place recognition across all baselines. Moreover, our approach outperforms $LC^{2}$ [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)], [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] and [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] on the 3D-2D place recognition task and shows highly accurate results based on the top retrieval candidate. Specifically, on Recall@1 we achieve a boost in performance by a large margin ($\sim 25\%$ improvement), which demonstrates the effectiveness of our pipeline for this challenging scenario.
+
+ | Method | 2D-3D R@1 | 2D-3D R@1% | 3D-2D R@1 | 3D-2D R@1% | 2D-2D R@1 | 2D-2D R@1% | 3D-3D R@1 | 3D-3D R@1% |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | $LC^{2}$ [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)] | 60.9 | 96.0 | 51.8 | 94.6 | 69.2 | 96.9 | 58.1 | 96.1 |
+ | Cattaneo’s [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] | 87.6 | 99.6 | 78.6 | 98.6 | 93.4 | 99.8 | 91.0 | 99.9 |
+ | LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] | 73.7 | 98.4 | 54.9 | 93.0 | 61.1 | 94.0 | 78.8 | 97.4 |
+ | VXP (Ours) | 96.8 | 99.6 | 94.7 | 99.8 | 96.7 | 99.9 | 97.0 | 99.7 |
+
+ Table 3: Retrieval performance (average recall) for the top 1 and 1% retrieved places on the ViViD++ dataset (day1-day2 sequences). Our model outperforms the other baselines in both uni- and cross-modal experiments.
+
+ | Method | 2D-2D R@1 | 2D-2D R@1% | 3D-2D R@1 | 3D-2D R@1% |
+ | --- | --- | --- | --- | --- |
+ | $LC^{2}$ [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)] | 0.8 | 5.5 | 49.4 | 93.4 |
+ | Cattaneo’s [[10](https://arxiv.org/html/2403.14594v2#bib.bib10)] | 2.2 | 10.1 | 56.9 | 94.9 |
+ | LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] | 2.7 | 12.0 | 45.5 | 90.0 |
+ | VXP (Ours) | 10.2 | 21.7 | 82.0 | 97.5 |
+
+ Table 4: Retrieval performance (average recall) for the top 1 and 1% retrieved places on the ViViD++ dataset (night-day2 sequences). Deploying LiDAR scans as queries significantly boosts performance for all baselines. Due to the proposed architectural design, our VXP performs best in both settings.
+
+ KITTI Odometry Benchmark. The results are shown in [Tab.5](https://arxiv.org/html/2403.14594v2#S5.T5 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Unlike the evaluation procedure followed by [[44](https://arxiv.org/html/2403.14594v2#bib.bib44), [11](https://arxiv.org/html/2403.14594v2#bib.bib11)] for LiDAR-based place recognition, we propose our own evaluation protocol for this dataset. Specifically, we train the model on sequences 03, 04, 05, 06, 07, 08, 09, 10. For testing, we select 4 regions from sequences 00 and 02 and include the remaining parts of the trajectory in the training data. Notably, none of the sequences traverses the same place, so we test our model on completely unseen regions to demonstrate the generalization capability of our method. Further training details are provided in the supplementary.
+
+ As shown in [Tab.5](https://arxiv.org/html/2403.14594v2#S5.T5 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), our method demonstrates competitive performance in all configurations. Since the full code for $LC^{2}$ was not publicly available at submission time, we could not provide a comparison on this benchmark. While LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] achieves the best performance on the Recall@1% 2D-3D setting, it is more sensitive to the sampling range of the database samples and queries. We provide details of this experiment in the supplementary.
+
+ Table 5: Retrieval performance (average recall) for top 1 and 1% retrieved places on KITTI Odometry dataset (00, 02 sequences). Our model shows competitive performance among all baselines.
+
+ Table 6: Ablation study of the local feature optimization ([Eq.4](https://arxiv.org/html/2403.14594v2#S4.E4 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")) for cross-modal retrieval on the Oxford RobotCar benchmark. Introducing local constraints significantly improves retrieval accuracy over global-only baseline ([Eq.5](https://arxiv.org/html/2403.14594v2#S4.E5 "In 4.3 Cross-Modal Global Descriptor Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")), which validates our architectural design.
+
+ 6 Ablation Studies
+ ------------------
+
+ Local Descriptor Loss Analysis. We evaluate the impact of the Local Descriptor Optimization ([Sec.4.2](https://arxiv.org/html/2403.14594v2#S4.SS2 "4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")) on the cross-modal place recognition. As shown in [Tab.6](https://arxiv.org/html/2403.14594v2#S5.T6 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), the proposed combination of local and global optimizations allows the model to effectively bridge the domain gap between image and point cloud and achieve higher cross-modal retrieval performance.
+
+ Fine-tuning Image Backbone. Foundation models such as DINO [[28](https://arxiv.org/html/2403.14594v2#bib.bib28)] have demonstrated the capability to address a wide range of tasks [[3](https://arxiv.org/html/2403.14594v2#bib.bib3)]. However, we noticed that their off-the-shelf performance on the visual (2D-2D) place recognition task is quite poor, and fine-tuning is necessary to reach competitive accuracy. Specifically, we scored only 59.5% on 2D-2D Recall@1 with the pre-trained DINO ViTs-8 model, while with additional fine-tuning we achieved a 2D-2D accuracy of 92.0% ([Tab.2](https://arxiv.org/html/2403.14594v2#S5.T2 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")). We can also observe the effect of fine-tuning on the attention maps. An example from the Oxford benchmark is shown in [Fig.4](https://arxiv.org/html/2403.14594v2#S6.F4 "In 6 Ablation Studies ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Specifically, buildings, road markings and traffic lights receive higher attention scores after fine-tuning, while the car hood is ignored.
+
+ ![Image 7: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/att_compare.png)
+
+ Figure 4: DINO fine-tuning effects on attention maps. From left to right: an input image, an attention map generated by pretrained DINO’s ViTs-8 without fine-tuning and a map produced after fine-tuning. Due to the latter, important scene structures such as buildings and traffic poles receive higher attention.
+
+ Voxel-Pixel Projection Module Analysis. We compare our VXP model against a simple baseline, Ortho-VXP, which converts $\textbf{V}_{\text{out}}$ to a dense form and orthographically projects the features to obtain an analogue in the image plane. As shown in [Tab.7](https://arxiv.org/html/2403.14594v2#S6.T7 "In 6 Ablation Studies ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), we achieve a boost in cross-modal localization due to the perspective nature of the VXP module, which associates voxels with corresponding pixels while accounting for their depth, providing stronger place recognition cues than the orthographic projection, which maintains the original distances and sizes.
+
+ Table 7: Ablation study of projection module on the Oxford RobotCar dataset. Perspective projection with VXP benefits localization when compared with its orthographic analog, Ortho-VXP.
+
+ Qualitative Evaluation for VXP. As we have shown in [Sec.5.3](https://arxiv.org/html/2403.14594v2#S5.SS3 "5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), VXP achieves state-of-the-art cross-modal retrieval performance and maintains high uni-modal global localization accuracy. At the same time, we are capable of mitigating the domain gap between different modalities and learning expressive shared latent space. We demonstrate the correlation between the attention map of an RGB image and the feature map of the projected voxels in [Fig.5](https://arxiv.org/html/2403.14594v2#S6.F5 "In 6 Ablation Studies ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Notably, the projected voxels exhibit a similar pattern with the image-based attention map. Since our focus is on place recognition, structures such as buildings carry greater significance, resulting in higher attention scores in those regions for feature maps from both modalities. With this, global descriptors are learned based on consistent information across modalities and we are capable of effectively bridging the domain gap.
+
+ ![Image 8: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/rgb_att_projv.jpg)
+
+ Figure 5: From left to right: an input image, its attention map and projected feature map generated from the respective point cloud.
+
+ Training and Inference Efficiency. We evaluate model inference time on a single RTX 3080 and pre-processing time on an Intel i7-12700. Depth image generation for the $LC^{2}$ [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)] baseline is done on the GPU. Our VXP takes 7 ms to obtain a global descriptor for an image and 18 ms for a point cloud, while $LC^{2}$ [[23](https://arxiv.org/html/2403.14594v2#bib.bib23)] encodes an input image in 17 ms and a LiDAR scan in 53 ms due to the expensive pre-processing steps of depth image generation and point cloud-to-range image conversion. In terms of model parameters and memory footprint, the 2D and 3D networks of VXP have 21.7M (87.2MB) and 5.9M (23.6MB) parameters respectively. Thus, our model is fast and lightweight enough to run as part of a real-time system. Notably, the reference map can be encoded offline.
+
+ 7 Limitations and Future Work
+ -----------------------------
+
+ Our VXP pipeline comprises three steps as described in [Sec.4](https://arxiv.org/html/2403.14594v2#S4 "4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Although this multi-stage design shows the best performance in our ablation studies ([Sec.6](https://arxiv.org/html/2403.14594v2#S6 "6 Ablation Studies ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")), end-to-end training would require less engineering effort and open the possibility of generalization when training on larger or multi-source datasets, which is desirable for autonomous driving applications. In addition, our model is specific to each dataset: while it achieves good performance on unseen views from the in-domain training dataset, it does not work on different, out-of-domain sequences. Since VXP needs a dataset-specific calibration matrix to establish local descriptor consistency, this remains a limitation towards multi-dataset generalization. Learning calibration from diverse input images and point clouds is a straightforward extension of the VXP pipeline, which we leave for future work.
+
+ 8 Conclusion
+ ------------
+
+ We have presented a new framework, Voxel-Cross-Pixel (VXP), for camera-LiDAR place recognition. VXP makes use of a novel 3D-to-2D projection module specifically designed to establish local feature correspondences and help bridge the domain gap between LiDAR scans and images. To this end, we proposed a cross-modal pipeline that captures both fine-grained local details and broader global context. Notably, our approach works directly on raw data without any pre-processing steps. Experimental evaluations demonstrate that VXP sets a new state of the art in cross-modal image-LiDAR retrieval and offers competitive performance against uni-modal baselines. It shows real-time capability and a low memory footprint, which makes it an excellent candidate for deployment on embedded systems.
+
+
+ Supplementary Material
+
+ In the supplementary, we provide details on the coordinate system convention in [Sec.9](https://arxiv.org/html/2403.14594v2#S9 "9 Coordinate Frames ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), the evaluation procedure on the KITTI Odometry benchmark ([Sec.10](https://arxiv.org/html/2403.14594v2#S10 "10 Training / Testing Setup (KITTI) ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition")), and further qualitative results in [Sec.12](https://arxiv.org/html/2403.14594v2#S12 "12 Qualitative Results in Challenging Illumination Conditions ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Moreover, we visualize cross-modal local correspondences in the latent space in [Sec.13](https://arxiv.org/html/2403.14594v2#S13 "13 Correspondences in Local Feature Space ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") and report a few failure cases in [Sec.14](https://arxiv.org/html/2403.14594v2#S14 "14 Failure Cases ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition").
+
+ ![Image 9: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/coordinate_frames.jpg)
+
+ Figure 6: Illustrations of the voxel grid coordinate frame $\{\mathcal{V}\}$ (left), the intermediate point cloud (LiDAR) frame $\{\mathcal{P}\}$ (middle), and the target camera coordinate frame $\{\mathcal{C}\}$ (right). Note that once the points are transformed into $\{\mathcal{C}\}$, we can apply the pinhole camera model to project them into the pixel coordinate frame.
+
+ 9 Coordinate Frames
+ -------------------
+
+ In [Sec.4.2](https://arxiv.org/html/2403.14594v2#S4.SS2 "4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") (main), we introduce Voxel-Pixel Projection module along with the associated coordinate transformations. Accordingly, in [Fig.6](https://arxiv.org/html/2403.14594v2#S8.F6 "In 8 Conclusion ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), we provide a comprehensive illustration of the operational coordinate frames {𝒱}𝒱\mathcal{\{V\}}{ caligraphic_V }, {𝒫}𝒫\mathcal{\{P\}}{ caligraphic_P }, and {𝒞}𝒞\mathcal{\{C\}}{ caligraphic_C } and demonstrate the transformations between them as per [Eq.3](https://arxiv.org/html/2403.14594v2#S4.E3 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") (main). Please note that camera parameters and relative transformation between sensors (camera and LiDAR) are known and provided as part of the datasets (Oxford, ViViD++, and KITTI). When performing pinhole camera projection as per [Eq.3](https://arxiv.org/html/2403.14594v2#S4.E3 "In 4.2 Cross-modal Local Feature Training ‣ 4 Method ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") in the main paper, normalized intrinsics are used for adapting to different sizes of a feature map.
10 Training / Testing Setup (KITTI)
-----------------------------------

![Image 10: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/kitti_00.png)

(a) Test regions in KITTI sequence 00.

![Image 11: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/kitti_02.png)

(b) Test regions in KITTI sequence 02.

Figure 7: We select 4 non-overlapping regions from KITTI (sequences 00, 02) and exclude their samples during training. The ability to perform accurate place retrieval in these regions is important for tackling the localization drift of SLAM systems.

In [Tab.5](https://arxiv.org/html/2403.14594v2#S5.T5 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") (main) we present the evaluation on the KITTI Odometry Benchmark [[15](https://arxiv.org/html/2403.14594v2#bib.bib15)]. Our evaluation protocol draws inspiration from the methodology employed for the Oxford RobotCar dataset [[27](https://arxiv.org/html/2403.14594v2#bib.bib27)]. As shown in [Fig.7](https://arxiv.org/html/2403.14594v2#S10.F7 "In 10 Training / Testing Setup (KITTI) ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), we define non-overlapping regions in KITTI sequences 00 and 02 and exclude samples from these regions during training. This ensures that the model is tested on unseen scenes. Throughout training, samples from the test regions in sequence 02 are used for validation, and we select the best model based on the validation set. During testing, both queries and the database are sampled every 20 meters with a start offset of 5 meters. This ensures that query and database samples are not repeated, and that the positive samples of each constructed query-database pair lie in the range of [4.3 m, 17.0 m] from their corresponding queries, as per [Tab.8](https://arxiv.org/html/2403.14594v2#S10.T8 "In 10 Training / Testing Setup (KITTI) ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Notably, to mimic a real-world scenario of detecting loop candidates, we consider a location a “revisit” by only keeping the positive samples ($D_{t_i}$) of a query ($Q_{t_0}$) such that $t_i < t_0$ and $t_0 - t_i > 10$. In other words, positive samples older than 10 seconds than the query timestamp are all “revisit” places. The 10-second threshold is determined empirically based on sequences 00 and 02.
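The sampling and revisit criteria above can be sketched as follows. This is a simplified illustration under the stated parameters (20 m interval, 5 m offset, 10 s gap); function names are hypothetical, and positions stand in for per-frame GPS/INS coordinates.

```python
import numpy as np

def sample_every(positions, interval_m=20.0, start_offset_m=5.0):
    """Pick frame indices roughly every `interval_m` meters of travelled distance."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    dists = np.concatenate([[0.0], np.cumsum(steps)])  # cumulative distance per frame
    picked, next_d = [], start_offset_m
    for i, d in enumerate(dists):
        if d >= next_d:
            picked.append(i)
            next_d += interval_m
    return picked

def is_revisit(t_query, positive_timestamps, min_gap_s=10.0):
    """A query counts as a revisit only if some positive is more than
    `min_gap_s` seconds older than the query timestamp."""
    return any(t_query - t > min_gap_s for t in positive_timestamps)
```

On a trajectory moving 1 m per frame, `sample_every` picks frames at roughly 5 m, 25 m, 45 m, and so on, matching the 20 m spacing with a 5 m start offset.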
Table 8: Distance range [Min., Max.] of positive samples and their average number per query w.r.t. different sampling intervals.

### 10.1 Sensitivity to Different Sampling (KITTI)

In [Sec.5.3](https://arxiv.org/html/2403.14594v2#S5.SS3 "5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") (main), we discussed the sensitivity of LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] retrieval performance to variations in the sampling interval of queries and database samples. To further investigate this observation, we perform additional studies in [Fig.8](https://arxiv.org/html/2403.14594v2#S10.F8 "In 10.1 Sensitivity to Different Sampling (KITTI) ‣ 10 Training / Testing Setup (KITTI) ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). As can be seen, our approach remains consistent across different sampling intervals. LIP-Loc, however, exhibits significant performance fluctuations when the sampling interval is changed. Notably, reducing the sampling interval results in more samples being classified as positives for each query, with these positive samples being spatially much closer to the query itself, as shown in [Tab.8](https://arxiv.org/html/2403.14594v2#S10.T8 "In 10 Training / Testing Setup (KITTI) ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). This sensitivity of LIP-Loc could be attributed to its training methodology, which uses an N-pair batched contrastive loss. In their approach, a pair of an image ($I_{t_i}$) and a LiDAR scan ($P_{t_j}$) is considered a positive match only when $i = j$, while all other pairs (with $i \neq j$) are counted as negatives. Therefore, even a LiDAR scan located 5 meters away from the image would still be labeled as negative, which prevents the network from generalizing to different sensor frequencies and setups. This limitation can be observed in the LIP-Loc performance on the Oxford RobotCar and ViViD++ benchmarks ([Tab.1](https://arxiv.org/html/2403.14594v2#S5.T1 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") and [Tab.3](https://arxiv.org/html/2403.14594v2#S5.T3 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") of the main paper, respectively), where camera and LiDAR timestamps are not synchronized, negatively affecting the method’s retrieval accuracy.
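The difference between index-identity labels and metric labels can be made concrete with a small sketch; the helper names are hypothetical, and positions are 2D coordinates in meters.

```python
import numpy as np

def npair_labels(n):
    """Index-identity labels (LIP-Loc style): image i is positive only with scan i."""
    return np.eye(n, dtype=bool)

def distance_labels(img_pos, scan_pos, pos_radius_m=10.0):
    """Metric labels: a scan is positive for an image if captured within pos_radius_m."""
    d = np.linalg.norm(img_pos[:, None, :] - scan_pos[None, :, :], axis=-1)
    return d < pos_radius_m

# Two images and two scans; scan 1 lies only 5 m from image 0.
img = np.array([[0.0, 0.0], [50.0, 0.0]])
scan = np.array([[0.0, 0.0], [5.0, 0.0]])
print(npair_labels(2))             # only the diagonal is positive
print(distance_labels(img, scan))  # scan 1 is also positive for image 0
```

Here the index-identity scheme labels scan 1 as a negative for image 0 although it is only 5 m away, while it labels scan 1 as a positive for image 1 although it is 45 m away — the mislabeling described above.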
In contrast, VXP employs a training strategy that learns a shared embedding space by mimicking the output of an image network trained with a triplet loss. In our approach, samples within 10 meters are considered positive, while those beyond 25 meters are labeled as negative, as discussed in [Sec.5.1](https://arxiv.org/html/2403.14594v2#S5.SS1 "5.1 Implementation Details ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") (main). This training strategy enables VXP to robustly retrieve similar locations for a query, even under the more challenging conditions of a 20-meter query-database sampling interval.
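The 10 m / 25 m thresholds translate into a simple mining rule. The following is a sketch under those stated thresholds, not the actual training code; the function name is illustrative.

```python
import numpy as np

def triplet_masks(query_xy, db_xy, pos_r=10.0, neg_r=25.0):
    """Split database samples into positives (< pos_r) and negatives (> neg_r)
    by metric distance to the query; samples in between are ignored."""
    d = np.linalg.norm(db_xy - query_xy, axis=1)
    return d < pos_r, d > neg_r

positives, negatives = triplet_masks(
    np.array([0.0, 0.0]),
    np.array([[5.0, 0.0], [15.0, 0.0], [30.0, 0.0]]),
)
# 5 m away -> positive; 15 m -> neither (ignored band); 30 m -> negative
```

The ignored band between 10 m and 25 m avoids penalizing the network on borderline samples whose label would be ambiguous.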
![Image 12: Refer to caption](https://arxiv.org/html/2403.14594v2/x1.png)

Figure 8: The impact of database-query sampling on VXP and LIP-Loc [[31](https://arxiv.org/html/2403.14594v2#bib.bib31)] retrieval accuracy. Our VXP shows consistent performance for all sampling ranges, while LIP-Loc results deteriorate rapidly.

11 Ablation Study of Image Network
----------------------------------

We conducted extensive testing with various image encoders and pooling layers, and the results are summarized in [Tab.9](https://arxiv.org/html/2403.14594v2#S11.T9 "In 11 Ablation Study of Image Network ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). Overall, DINO [[8](https://arxiv.org/html/2403.14594v2#bib.bib8)] stands out as the most favorable choice for the image encoder. Concerning the pooling layers, GeM [[34](https://arxiv.org/html/2403.14594v2#bib.bib34)] performs slightly better than NetVLAD [[4](https://arxiv.org/html/2403.14594v2#bib.bib4)]. Based on these experiments, the combination of DINO + GeM + FCN yields the best results.

Table 9: Comparison of different combinations of image encoder and pooling layer. V16 represents VGG16 [[37](https://arxiv.org/html/2403.14594v2#bib.bib37)], R18 is ResNet18 [[17](https://arxiv.org/html/2403.14594v2#bib.bib17)], Dino is DINO’s ViTs-8 [[8](https://arxiv.org/html/2403.14594v2#bib.bib8)], N is NetVLAD [[4](https://arxiv.org/html/2403.14594v2#bib.bib4)], G refers to GeM [[34](https://arxiv.org/html/2403.14594v2#bib.bib34)], L means a fully-connected layer. Our architectural design yields the best 2D-2D performance.
![Image 13: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/oxford2d3d.png)

(a) 2D-3D retrieval

![Image 14: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/oxford3d2d.png)

(b) 3D-2D retrieval

Figure 9: Qualitative results for 2D-3D and 3D-2D cross-modal retrieval of our VXP on the Oxford RobotCar benchmark. We show the query and the top 3 closest places retrieved from a given map by our method. While the database samples traverse the same route, they were captured at different times under varying environmental conditions. The top 1 candidate is spatially close to the query, demonstrating our approach’s effectiveness and accuracy.

12 Qualitative Results in Challenging Illumination Conditions
-------------------------------------------------------------

Given that cameras are sensitive to changes in illumination, robust visual place recognition at night poses significant challenges. In this experiment, we qualitatively compare image-only (2D-2D) retrieval against cross-modal (2D-3D and 3D-2D) place recognition for the task of day-night and day-evening place recognition with reduced visibility.

In [Fig.14(c)](https://arxiv.org/html/2403.14594v2#S14.F14.sf3 "In Figure 14 ‣ 14 Failure Cases ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") we can observe that the top 3 places for 2D-2D retrieval are quite far from the query. This suggests that image-based place recognition struggles to retrieve the correct candidate when the images are taken under different illumination conditions. However, VXP successfully retrieves the closest candidates using 2D-3D cross-modal retrieval, as illustrated in [Fig.14(d)](https://arxiv.org/html/2403.14594v2#S14.F14.sf4 "In Figure 14 ‣ 14 Failure Cases ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). This capability could explain why VXP exhibits slightly better top-1 2D-3D recall than its 2D-2D counterpart in [Tab.3](https://arxiv.org/html/2403.14594v2#S5.T3 "In 5.3 Results ‣ 5 Experiments and Results ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") (main). Notably, compared to [[10](https://arxiv.org/html/2403.14594v2#bib.bib10), [23](https://arxiv.org/html/2403.14594v2#bib.bib23)], VXP is the only model demonstrating this practical advantage of cross-modal retrieval over its uni-modal counterpart.

We provide additional qualitative results for the cross-modal retrieval task on the Oxford RobotCar and KITTI datasets in [Fig.9](https://arxiv.org/html/2403.14594v2#S11.F9 "In 11 Ablation Study of Image Network ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") and [Fig.11](https://arxiv.org/html/2403.14594v2#S13.F11 "In 13 Correspondences in Local Feature Space ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), respectively. These observations underscore the versatility and effectiveness of cross-modal retrieval approaches in challenging real-world scenarios.
(Figure panels, left to right: LiDAR point cloud, projected voxel feature map, image feature map, corresponding image.)

Figure 10: Example of local feature correspondences (red) between a projected voxel feature map and an image feature map on the ViViD++ dataset. Cross-matches are established by minimizing the cosine distance between learned descriptors. Being important landmarks for the place recognition task, buildings and trees receive fairly accurate local matches.
13 Correspondences in Local Feature Space
-----------------------------------------

In [Fig.10](https://arxiv.org/html/2403.14594v2#S12.F10 "In 12 Qualitative Results in Challenging Illumination Conditions ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition") we visualize cross-modal matches on the ViViD++ dataset [[22](https://arxiv.org/html/2403.14594v2#bib.bib22)], established by picking the closest pairs of learned local descriptors. It is worth noting that LiDAR scans and images do not capture exactly the same information about the scene, since LiDAR and camera are not synchronized. In addition, while we use data recorded by traversing the same route, the distance between samples corresponding to the same location can be significant across traversals. For instance, in the ViViD++ dataset, the distance between an image and the corresponding point cloud averages about 7 meters according to the calibration and GPS/INS poses.
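Picking the closest descriptor pairs by cosine similarity can be sketched as follows; the function name is hypothetical, and the inputs stand in for flattened local feature maps.

```python
import numpy as np

def cross_modal_matches(voxel_feats, image_feats):
    """For each projected voxel descriptor (N, D), return the index of the most
    similar image descriptor (M, D) under cosine similarity, plus the score."""
    v = voxel_feats / np.linalg.norm(voxel_feats, axis=1, keepdims=True)
    im = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    sim = v @ im.T  # (N, M) matrix of cosine similarities
    return sim.argmax(axis=1), sim.max(axis=1)
```

Because both descriptor sets are L2-normalized first, maximizing the dot product is equivalent to minimizing the cosine distance between descriptors.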
As can be seen in [Fig.10](https://arxiv.org/html/2403.14594v2#S12.F10 "In 12 Qualitative Results in Challenging Illumination Conditions ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), feature correspondences stemming from trees and building structures are accurately established. However, we encounter challenges in regions where voxel features are projected onto the ground, due to the ambiguous nature of the respective image features. Capturing reliable correspondences within ground regions therefore appears to be difficult. Since the ground is constantly present in driving sequences, this misalignment has only minimal impact on the distinctiveness of the estimated global descriptors. Instead, our method leverages correctly aligned correspondences from buildings and other static, distinctive objects in the scene to achieve state-of-the-art cross-modal performance.
![Image 15: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/retrieval_vis_kitti_2d3d.png)

(a) KITTI (00) 2D-3D

![Image 16: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/retrieval_vis_kitti_3d2d.png)

(b) KITTI (00) 3D-2D

Figure 11: Qualitative results for 2D-3D and 3D-2D cross-modal retrieval of our VXP on the KITTI Odometry benchmark. We show the query and the top 3 closest places retrieved from a given map by our method. As described in [Sec.10](https://arxiv.org/html/2403.14594v2#S10 "10 Training / Testing Setup (KITTI) ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"), test queries are taken from regions unseen during training. All top 3 candidates are spatially close to the query, demonstrating our approach’s effectiveness and accuracy.
14 Failure Cases
----------------

Although VXP achieves state-of-the-art performance on the cross-modal place recognition task and scores well in many challenging conditions, it fails in some cases. Primarily, we have noticed that repetitive structures such as highway roads confuse our method and lead to incorrect retrievals. Notably, uni-modal methods also fail in such cases, as shown in [Fig.12](https://arxiv.org/html/2403.14594v2#S14.F12 "In 14 Failure Cases ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). We believe that integrating sequential information into a mobile robot localization system would facilitate the task; this remains future work.
![Image 17: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/day1-day2-2d3dfail.jpg)

(a) ViViD++ City day1-day2 2D-3D failed.

![Image 18: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/12b.png)

(b) ViViD++ City day1-day2 3D-2D failed.

Figure 12: Failure cases of 2D-3D and 3D-2D cross-modal retrieval with our VXP due to challenging and repetitive scenes from the ViViD++ benchmark. For each retrieval, the query and its top 3 retrievals are shown. Although the correct candidate is among the top 3 places, it is not the closest in the latent embedding space (i.e., not ranked first), and thus the performance is negatively affected.
Secondly, a sparse representation of places where the environment contains a lot of empty space and only a few far-away structures is not effective and lacks distinctive information. We show some failure examples in [Fig.13](https://arxiv.org/html/2403.14594v2#S14.F13 "In 14 Failure Cases ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). One possible reason for the degraded performance lies in the projective nature of the supervisory signal for our VXP: without establishing a sufficient number of correspondences between voxels and pixels, it is infeasible to learn a meaningful shared latent space and successfully perform cross-modal retrieval. Inspired by CASSPR [[45](https://arxiv.org/html/2403.14594v2#bib.bib45)], it might be beneficial to incorporate a point branch and enhance the point cloud encoding to tackle such challenging cases.
Lastly, illumination conditions are crucial in cross-modal retrieval where images are used as queries. A poorly illuminated scene yields a low-quality image feature map, and leveraging such images for cross-modal retrieval becomes considerably challenging, as shown in [Fig.15](https://arxiv.org/html/2403.14594v2#S14.F15 "In 14 Failure Cases ‣ VXP: Voxel-Cross-Pixel Large-Scale Camera-LiDAR Place Recognition"). We believe that localizing with LiDAR scans (3D-2D retrieval), which are not affected by light conditions, would be more robust in such extreme cases.
![Image 19: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/13a.png)

(a) ViViD++ City day1-day2 2D-3D failed.

![Image 20: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/13b.png)

(b) ViViD++ City day1-day2 3D-2D failed.

Figure 13: Failure cases of 2D-3D and 3D-2D cross-modal retrieval with our VXP due to sparse point clouds and the lack of meaningful structures in the ViViD++ benchmark. For each retrieval, the query and its top 3 retrievals are shown. Although the correct candidate is among the top 3 places, it is not the closest in the latent embedding space (i.e., not ranked first), and thus the performance is impaired.
![Image 21: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/retrieval_vis_nightday2_2d2d.png)

(a) ViViD++ campus night-day2 2D-2D failed.

![Image 22: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/retrieval_vis_nightday2_3d2d.png)

(b) ViViD++ campus night-day2 3D-2D succeeded.

![Image 23: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/day1-evening_2d2d.jpg)

(c) ViViD++ city day1-evening 2D-2D failed.

![Image 24: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/day1-evening_2d3d.jpg)

(d) ViViD++ city day1-evening 2D-3D succeeded.

Figure 14: Qualitative results of uni-modal (left) and cross-modal (right) retrieval under challenging illumination conditions such as evening and night sequences from the ViViD++ dataset. While 2D-2D place recognition is impaired by poor illumination, the integration of LiDAR scans, which remain unaffected, mitigates the issue and allows accurate cross-modal retrieval.
![Image 25: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/retrieval_vis_oxford_2d3d_night_overcase_faile.jpg)

(a) Oxford RobotCar night-overcast 2D-3D failed.

![Image 26: Refer to caption](https://arxiv.org/html/2403.14594v2/extracted/6281878/pics/14b.png)

(b) ViViD++ City night-day2 2D-3D failed.

Figure 15: Failure cases of 2D-3D cross-modal retrieval with our VXP due to poor illumination conditions in the night sequences from the Oxford RobotCar and ViViD++ datasets.
References
----------

* Ali-bey et al. [2022] Amar Ali-bey, Brahim Chaib-draa, and Philippe Giguère. Gsv-cities: Toward appropriate supervised visual place recognition. _Neurocomputing_, 513:194–203, 2022.
* Ali-Bey et al. [2023] Amar Ali-Bey, Brahim Chaib-Draa, and Philippe Giguere. Mixvpr: Feature mixing for visual place recognition. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 2998–3007, 2023.
* Amir et al. [2021] Shir Amir, Yossi Gandelsman, Shai Bagon, and Tali Dekel. Deep vit features as dense visual descriptors. _arXiv preprint arXiv:2112.05814_, 2(3):4, 2021.
* Arandjelovic et al. [2016] Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. Netvlad: Cnn architecture for weakly supervised place recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 5297–5307, 2016.
* Barnes et al. [2019] Dan Barnes, Matthew Gadd, Paul Murcutt, Paul Newman, and Ingmar Posner. The oxford radar robotcar dataset: A radar extension to the oxford robotcar dataset. _arXiv preprint arXiv:1909.01300_, 2019.
* Berton et al. [2022] Gabriele Berton, Carlo Masone, and Barbara Caputo. Rethinking visual geo-localization for large-scale applications. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4878–4888, 2022.
* Campos et al. [2021] Carlos Campos, Richard Elvira, Juan J Gómez Rodríguez, José MM Montiel, and Juan D Tardós. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. _IEEE Transactions on Robotics_, 37(6):1874–1890, 2021.
* Caron et al. [2021] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 9650–9660, 2021.
* Cattaneo et al. [2019] Daniele Cattaneo, Matteo Vaghi, Augusto Luis Ballardini, Simone Fontana, Domenico G Sorrenti, and Wolfram Burgard. Cmrnet: Camera to lidar-map registration. In _2019 IEEE Intelligent Transportation Systems Conference (ITSC)_, pages 1283–1289. IEEE, 2019.
* Cattaneo et al. [2020] Daniele Cattaneo, Matteo Vaghi, Simone Fontana, Augusto Luis Ballardini, and Domenico G Sorrenti. Global visual localization in lidar-maps through shared 2d-3d embedding space. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_, pages 4365–4371. IEEE, 2020.
* Cattaneo et al. [2022] Daniele Cattaneo, Matteo Vaghi, and Abhinav Valada. Lcdnet: Deep loop closure detection and point cloud registration for lidar slam. _IEEE Transactions on Robotics_, 38(4):2074–2093, 2022.
* Fan et al. [2022] Zhaoxin Fan, Zhenbo Song, Hongyan Liu, Zhiwu Lu, Jun He, and Xiaoyong Du. Svt-net: Super light-weight sparse voxel transformer for large scale place recognition. AAAI, 2022.
* Feng et al. [2019] Mengdan Feng, Sixing Hu, Marcelo H Ang, and Gim Hee Lee. 2d3d-matchnet: Learning to match keypoints across 2d image and 3d point cloud. In _2019 International Conference on Robotics and Automation (ICRA)_, pages 4790–4796. IEEE, 2019.
* Gálvez-López and Tardos [2012] Dorian Gálvez-López and Juan D Tardos. Bags of binary words for fast place recognition in image sequences. _IEEE Transactions on Robotics_, 28(5):1188–1197, 2012.
* Geiger et al. [2012] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_, pages 3354–3361. IEEE, 2012.
* Gladkova et al. [2021] Mariia Gladkova, Rui Wang, Niclas Zeller, and Daniel Cremers. Tight integration of feature-based relocalization in monocular direct visual odometry. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 9608–9614. IEEE, 2021.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 770–778, 2016.
* Johnson et al. [2019] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. _IEEE Transactions on Big Data_, 7(3):535–547, 2019.
* Keetha et al. [2023] Nikhil Keetha, Avneesh Mishra, Jay Karhade, Krishna Murthy Jatavallabhula, Sebastian Scherer, Madhava Krishna, and Sourav Garg. Anyloc: Towards universal visual place recognition. _IEEE Robotics and Automation Letters_, 2023.
* Komorowski [2021] Jacek Komorowski. Minkloc3d: Point cloud based large-scale place recognition. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 1790–1799, 2021.
* Komorowski et al. [2021] Jacek Komorowski, Monika Wysoczańska, and Tomasz Trzcinski. Minkloc++: Lidar and monocular image fusion for place recognition. In _2021 International Joint Conference on Neural Networks (IJCNN)_, pages 1–8. IEEE, 2021.
* Lee et al. [2022] Alex Junho Lee, Younggun Cho, Young-sik Shin, Ayoung Kim, and Hyun Myung. Vivid++: Vision for visibility dataset. _IEEE Robotics and Automation Letters_, 7(3):6282–6289, 2022.
* Lee et al. [2023] Alex Junho Lee, Seungwon Song, Hyungtae Lim, Woojoo Lee, and Hyun Myung. Lc2: Lidar-camera loop constraints for cross-modal place recognition. _IEEE Robotics and Automation Letters_, 8(6):3589–3596, 2023.
* Leyva-Vallina et al. [2023] María Leyva-Vallina, Nicola Strisciuglio, and Nicolai Petkov. Data-efficient large scale place recognition with graded similarity supervision. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 23487–23496, 2023.
* Li and Lee [2021] Jiaxin Li and Gim Hee Lee. Deepi2p: Image-to-point cloud registration via deep classification. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15960–15969, 2021.
* Liu et al. [2019] Zhe Liu, Shunbo Zhou, Chuanzhe Suo, Peng Yin, Wen Chen, Hesheng Wang, Haoang Li, and Yun-Hui Liu. Lpd-net: 3d point cloud learning for large-scale place recognition and environment analysis. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 2831–2840, 2019.
* Maddern et al. [2017] Will Maddern, Geoffrey Pascoe, Chris Linegar, and Paul Newman. 1 year, 1000 km: The oxford robotcar dataset. _The International Journal of Robotics Research_, 36(1):3–15, 2017.
* Oquab et al. [2023] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_, 2023.
* Pan et al. [2021] Yiyuan Pan, Xuecheng Xu, Weijie Li, Yunxiang Cui, Yue Wang, and Rong Xiong. Coral: Colored structural representation for bi-modal place recognition. In _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 2084–2091. IEEE, 2021.
* Philbin et al. [2007] James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Object retrieval with large vocabularies and fast spatial matching. In _2007 IEEE Conference on Computer Vision and Pattern Recognition_, pages 1–8. IEEE, 2007.
* Puligilla et al. [2024] Sai Shubodh Puligilla, Mohammad Omama, Husain Zaidi, Udit Singh Parihar, and Madhava Krishna. Lip-loc: Lidar image pretraining for cross-modal localization. In _2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW)_, pages 939–948. IEEE, 2024.
* Qi et al. [2017a] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 652–660, 2017a.
* Qi et al. [2017b] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. _Advances in Neural Information Processing Systems_, 30, 2017b.
* Radenović et al. [2018] Filip Radenović, Giorgos Tolias, and Ondřej Chum. Fine-tuning cnn image retrieval with no human annotation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 41(7):1655–1668, 2018.
* Ren et al. [2022] Siyu Ren, Yiming Zeng, Junhui Hou, and Xiaodong Chen. Corri2p: Deep image-to-point cloud registration via dense correspondence. _IEEE Transactions on Circuits and Systems for Video Technology_, 33(3):1198–1208, 2022.
* Schroff et al. [2015] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 815–823, 2015.
* Simonyan [2014] Karen Simonyan. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.
* Uy and Lee [2018] Mikaela Angelina Uy and Gim Hee Lee. Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 4470–4479, 2018.
* Wang et al. [2023] Guangming Wang, Yu Zheng, Yanfeng Guo, Zhe Liu, Yixiang Zhu, Wolfram Burgard, and Hesheng Wang. End-to-end 2d-3d registration between image and lidar point cloud for vehicle localization. _arXiv preprint arXiv:2306.11346_, 2023.
* Wang et al. [2021] Yizhou Wang, Zhongyu Jiang, Xiangyu Gao, Jenq-Neng Hwang, Guanbin Xing, and Hui Liu. Rodnet: Radar object detection using cross-modal supervision. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 504–513, 2021.
* Watson et al. [2021] Jamie Watson, Oisin Mac Aodha, Victor Prisacariu, Gabriel Brostow, and Michael Firman. The temporal opportunist: Self-supervised multi-frame monocular depth. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1164–1174, 2021.
* Wen et al. [2019] Chenglu Wen, Yudi Dai, Yan Xia, Yuhan Lian, Jinbin Tan, Cheng Wang, and Jonathan Li. Toward efficient 3-d colored mapping in gps-/gnss-denied environments. _IEEE Geoscience and Remote Sensing Letters_, 17(1):147–151, 2019.
* Xia et al. [2021] Yan Xia, Yusheng Xu, Shuang Li, Rui Wang, Juan Du, Daniel Cremers, and Uwe Stilla. Soe-net: A self-attention and orientation encoding network for point cloud based place recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11348–11357, 2021.
* Xia et al. [2023a] Yan Xia, Mariia Gladkova, Rui Wang, Qianyun Li, Uwe Stilla, João F Henriques, and Daniel Cremers. Casspr: Cross attention single scan place recognition. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 8461–8472, 2023a.
* Xia et al. [2023b] Yan Xia, Mariia Gladkova, Rui Wang, Qianyun Li, Uwe Stilla, Joao F Henriques, and Daniel Cremers. Casspr: Cross attention single scan place recognition. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 8461–8472, 2023b.
* Xia et al. [2023c] Yan Xia, Qiangqiang Wu, Wei Li, Antoni B Chan, and Uwe Stilla. A lightweight and detector-free 3d single object tracker on point clouds. _IEEE Transactions on Intelligent Transportation Systems_, 24(5):5543–5554, 2023c.
* Xia et al. [2024] Yan Xia, Letian Shi, Zifeng Ding, João F Henriques, and Daniel Cremers. Text2loc: 3d point cloud localization from natural language. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2024.
* Yan et al. [2018] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. _Sensors_, 18(10):3337, 2018.
* Zhou and Tuzel [2018] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 4490–4499, 2018.
* Zhou et al. [2021] Zhicheng Zhou, Cheng Zhao, Daniel Adolfsson, Songzhi Su, Yang Gao, Tom Duckett, and Li Sun. Ndt-transformer: Large-scale 3d point cloud localisation using the normal distribution transform representation. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 5654–5660. IEEE, 2021.
* Żywanowski et al. [2021] Kamil Żywanowski, Adam Banaszczyk, Michał R Nowicki, and Jacek Komorowski. Minkloc3d-si: 3d lidar place recognition with sparse convolutions, spherical coordinates, and intensity. _IEEE Robotics and Automation Letters_, 7(2):1079–1086, 2021.