Title: VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models

URL Source: https://arxiv.org/html/2603.09826

Published Time: Wed, 11 Mar 2026 01:08:22 GMT

Shuhao Kang 1 Youqi Liao 2 Peijie Wang 3 Wenlong Liao 4 Qilin Zhang 5,6

Benjamin Busam 5,6 Xieyuanli Chen 7† Yun Liu 1,8,9†

1 VCIP, CS, Nankai University 2 Wuhan University 3 CASIA 4 COWAROBOT

5 TUM 6 MCML 7 NUDT 8 AAIS, Nankai University 9 NKIARI, Shenzhen Futian

###### Abstract

Text-to-point-cloud (T2P) localization aims to infer precise spatial positions within 3D point cloud maps from natural language descriptions, reflecting how humans perceive and communicate spatial layouts through language. However, existing methods largely rely on shallow text-point cloud correspondence without effective spatial reasoning, limiting their accuracy in complex environments. To address this limitation, we propose VLM-Loc, a framework that leverages the spatial reasoning capability of large vision-language models (VLMs) for T2P localization. Specifically, we transform point clouds into bird’s-eye-view (BEV) images and scene graphs that jointly encode geometric and semantic context, providing structured inputs for the VLM to learn cross-modal representations bridging linguistic and spatial semantics. On top of these representations, we introduce a partial node assignment mechanism that explicitly associates textual cues with scene graph nodes, enabling interpretable spatial reasoning for accurate localization. To facilitate systematic evaluation across diverse scenes, we present CityLoc, a benchmark built from multi-source point clouds for fine-grained T2P localization. Experiments on CityLoc demonstrate that VLM-Loc achieves superior accuracy and robustness compared to state-of-the-art methods. Our code, model, and dataset are available at [repository](https://github.com/MCG-NKU/nku-3d-vision).

† Corresponding authors.

1 Introduction
--------------

Estimating spatial position from natural language is a fundamental task in embodied intelligence[[53](https://arxiv.org/html/2603.09826#bib.bib51 "Vision-and-language navigation via causal learning"), [22](https://arxiv.org/html/2603.09826#bib.bib47 "π0.5: A vision-language-action model with open-world generalization"), [65](https://arxiv.org/html/2603.09826#bib.bib50 "Same: learning generic language-guided visual navigation with state-adaptive mixture of experts"), [64](https://arxiv.org/html/2603.09826#bib.bib72 "A survey of embodied learning for object-centric robotic manipulation")] and autonomous vehicles[[12](https://arxiv.org/html/2603.09826#bib.bib46 "OverlapNet: loop closing for lidar-based slam"), [28](https://arxiv.org/html/2603.09826#bib.bib57 "Multi-scale interaction for real-time lidar data segmentation on an embedded platform"), [29](https://arxiv.org/html/2603.09826#bib.bib49 "Diffloc: diffusion model for outdoor lidar localization"), [21](https://arxiv.org/html/2603.09826#bib.bib73 "A survey on end-to-end perception and prediction for autonomous driving")]. In real-world scenarios such as autonomous robotaxi services, vehicles typically rely on the Global Navigation Satellite System (GNSS) for approximate passenger localization. However, GNSS localization often suffers from degraded accuracy in urban environments due to multipath effects and atmospheric delays[[49](https://arxiv.org/html/2603.09826#bib.bib19 "Springer handbook of global navigation satellite systems")], making it difficult to identify the precise pickup spot. In such scenarios, passengers can naturally describe their surroundings using language, providing additional spatial cues for localization without relying on any visual sensors. Since point cloud maps offer detailed geometric representations of urban scenes, they are well-suited for aligning these textual cues with the physical environment. This motivates the task of text-to-point-cloud (T2P) localization, which bridges language understanding and 3D spatial perception, paving the way for future human-robot interactive localization systems, as shown in [Fig.1](https://arxiv.org/html/2603.09826#S1.F1 "In 1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models")(a).



Figure 1: (a) illustrates the human-like logic behind text-to-point-cloud localization, where spatial descriptions are used to infer the target position. (b) and (c) show the architectures of a typical method, Text2Loc[[57](https://arxiv.org/html/2603.09826#bib.bib3 "Text2loc: 3d point cloud localization from natural language")], and our proposed VLM-Loc, respectively.

Existing T2P localization approaches focus on localizing in city-scale point cloud maps using textual descriptions. Text2Pos[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization")] introduced the KITTI360Pose benchmark and employed a coarse-to-fine strategy, retrieving candidate submaps before refining the position estimation. Following this paradigm, subsequent works[[52](https://arxiv.org/html/2603.09826#bib.bib2 "Text to point cloud localization with relation-enhanced transformer"), [57](https://arxiv.org/html/2603.09826#bib.bib3 "Text2loc: 3d point cloud localization from natural language"), [33](https://arxiv.org/html/2603.09826#bib.bib17 "Text to point cloud localization with multi-level negative contrastive learning"), [58](https://arxiv.org/html/2603.09826#bib.bib4 "CMMLoc: advancing text-to-pointcloud localization with cauchy-mixture-model based framework")] introduced more effective model designs to improve T2P localization. However, these approaches still face two limitations: (1) during the fine localization stage, their submaps are typically restricted to small and relatively simple regions (_e.g._, $30\,\text{m}\times 30\,\text{m}$), where the limited spatial extent inherently simplifies matching. This assumption oversimplifies real-world conditions and fails to capture the complexity of large-scale urban scenes; and (2) despite architectural improvements, these methods adopt an end-to-end position prediction paradigm without explicit reasoning, which makes structured spatial modeling challenging and limits localization accuracy, especially in complex environments.

To better understand and address these challenges, we consider what capabilities fine-grained T2P localization requires. Humans can naturally perceive and describe spatial layouts spanning tens of meters[[60](https://arxiv.org/html/2603.09826#bib.bib54 "A statistical explanation of visual space"), [37](https://arxiv.org/html/2603.09826#bib.bib52 "Visual perception of egocentric distance in real and virtual environments"), [45](https://arxiv.org/html/2603.09826#bib.bib53 "The perception of egocentric distances in virtual environments-a review")], indicating that robust localization should operate over larger and more complex regions rather than being limited to small, simplified submaps. Furthermore, overcoming the absence of explicit reasoning requires models that can interpret spatial relations expressed in language and connect them to the environment. Vision-language models (VLMs) offer a promising foundation for this goal. Equipped with strong multimodal reasoning capabilities[[9](https://arxiv.org/html/2603.09826#bib.bib58 "Holistic evaluation of multimodal llms on spatial intelligence")], they can parse complex spatial descriptions and effectively align linguistic cues with visual inputs, making them suitable for fine-grained T2P localization in complex environments.

Motivated by these insights, we propose VLM-Loc, a novel VLM-based framework for fine-grained localization in complex local point cloud maps. As illustrated in [Fig.1](https://arxiv.org/html/2603.09826#S1.F1 "In 1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models") (b) and (c), unlike prior approaches[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization"), [57](https://arxiv.org/html/2603.09826#bib.bib3 "Text2loc: 3d point cloud localization from natural language")] that directly learn correspondences between textual descriptions and 3D objects, VLM-Loc leverages the inherent spatial reasoning capability of large VLMs to align linguistic cues with structured map representations, achieving more interpretable and accurate localization. The point cloud map is transformed into a bird’s-eye-view (BEV) image to provide dense and structured spatial representations, while a scene graph is simultaneously constructed to capture higher-level semantic relations among objects. Building upon these representations, we introduce the Partial Node Assignment (PNA) mechanism, which explicitly supervises the VLM to associate textual cues with their corresponding spatial nodes, thereby guiding interpretable spatial understanding and improving position estimation. To comprehensively evaluate T2P localization, we establish CityLoc, a new benchmark specifically designed for fine-grained T2P localization in complex and diverse environments. Experimental results on CityLoc show that the proposed VLM-Loc achieves state-of-the-art (SOTA) performance, outperforming the previous best method, CMMLoc[[58](https://arxiv.org/html/2603.09826#bib.bib4 "CMMLoc: advancing text-to-pointcloud localization with cauchy-mixture-model based framework")], by 14.20% at Recall@5m on the CityLoc-K test set, demonstrating the effectiveness of our framework for accurate localization.

Overall, our contributions are summarized as follows:

* We introduce VLM-Loc, a VLM-based framework for accurate T2P localization, together with CityLoc, a benchmark designed to systematically evaluate performance in complex 3D scenes.
* We transform 3D point clouds into BEV images augmented with scene graphs, bridging the modality gap between 3D point clouds and 2D VLMs and enabling effective multi-modal alignment for accurate T2P localization.
* We design a partial node assignment mechanism to explicitly align textual hints with graph nodes, enhancing spatial understanding and reasoning.

2 Related Work
--------------

### 2.1 Localization in Point Cloud

Localization within point cloud maps has been extensively studied in recent years. Existing approaches mainly adopt point clouds or images as query modalities. Pioneering point-cloud-to-point-cloud (P2P) localization methods, such as PointNetVLAD[[50](https://arxiv.org/html/2603.09826#bib.bib29 "Pointnetvlad: deep point cloud based retrieval for large-scale place recognition")], combine PointNet[[43](https://arxiv.org/html/2603.09826#bib.bib30 "Pointnet: deep learning on point sets for 3d classification and segmentation")] with NetVLAD[[4](https://arxiv.org/html/2603.09826#bib.bib55 "NetVLAD: cnn architecture for weakly supervised place recognition")] for global descriptor learning. Subsequent transformer-based approaches further improve performance by modeling long-range geometric dependencies[[17](https://arxiv.org/html/2603.09826#bib.bib31 "Svt-net: super light-weight sparse voxel transformer for large scale place recognition"), [62](https://arxiv.org/html/2603.09826#bib.bib32 "Rank-pointretrieval: reranking point cloud retrieval via a visually consistent registration evaluation"), [41](https://arxiv.org/html/2603.09826#bib.bib33 "OverlapTransformer: an efficient and yaw-angle-invariant transformer network for lidar-based place recognition"), [56](https://arxiv.org/html/2603.09826#bib.bib35 "Casspr: cross attention single scan place recognition")]. More recently, methods like BEVPlace++[[40](https://arxiv.org/html/2603.09826#bib.bib44 "Bevplace++: fast, robust, and lightweight lidar global localization for unmanned ground vehicles")] and RING#[[39](https://arxiv.org/html/2603.09826#bib.bib45 "Ring#: pr-by-pe global localization with roto-translation equivariant gram learning")] enhance robustness through rotation-equivariant architectures in BEV representations. Image-to-point-cloud (I2P) localization methods instead align image features with 3D representations through shared embeddings[[10](https://arxiv.org/html/2603.09826#bib.bib36 "Global visual localization in lidar-maps through shared 2d-3d embedding space"), [31](https://arxiv.org/html/2603.09826#bib.bib37 "Vxp: voxel-cross-pixel large-scale image-lidar place recognition")] or implicit reconstruction[[24](https://arxiv.org/html/2603.09826#bib.bib39 "SOLVR: submap oriented lidar-visual re-localisation")]. In contrast, our approach employs natural language as the query modality, enabling intuitive and interpretable localization without relying on additional sensor observations, facilitating flexible interaction in real-world applications.

### 2.2 3D Vision and Language

Recent years have seen increasing interest in grounding natural language in 3D point cloud scenes. Early works[[18](https://arxiv.org/html/2603.09826#bib.bib12 "Free-form description guided 3d visual graph network for object grounding in point cloud"), [23](https://arxiv.org/html/2603.09826#bib.bib14 "Bottom up top down detection transformers for language grounding in images and point clouds"), [5](https://arxiv.org/html/2603.09826#bib.bib75 "Evaluating sam2 for video semantic segmentation")] explored text-guided 3D detection and segmentation, aiming to associate linguistic expressions with corresponding geometric structures. Following this line, a series of studies[[2](https://arxiv.org/html/2603.09826#bib.bib15 "ReferIt3D: neural listeners for fine-grained 3d object identification in real-world scenes"), [11](https://arxiv.org/html/2603.09826#bib.bib16 "ScanRefer: 3d object localization in rgb-d scans using natural language"), [46](https://arxiv.org/html/2603.09826#bib.bib13 "Languagerefer: spatial-language model for 3d visual grounding"), [3](https://arxiv.org/html/2603.09826#bib.bib56 "Generalized few-shot 3d point cloud segmentation with vision-language model")] advanced 3D language grounding through contrastive learning and transformer-based cross-modal reasoning. These methods mainly focus on object-level grounding, _e.g._, identifying the referred objects or regions within a local 3D scene, thus forming the foundation for higher-level spatial understanding tasks.

Extending beyond language-driven 3D perception tasks, T2P localization requires holistic scene understanding of spatial layouts and relationships to infer specific target positions from textual descriptions. This task, first introduced by Text2Pos[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization")], aims to predict a 2-degree-of-freedom (DoF) location corresponding to the described scene context. Text2Pos formulates the problem as cell-level retrieval followed by position regression, using the KITTI360Pose dataset for evaluation. Subsequent works improved this paradigm by enhancing cross-modal representations: RET[[52](https://arxiv.org/html/2603.09826#bib.bib2 "Text to point cloud localization with relation-enhanced transformer")] and Text2Loc[[57](https://arxiv.org/html/2603.09826#bib.bib3 "Text2loc: 3d point cloud localization from natural language")] leveraged transformer-based[[51](https://arxiv.org/html/2603.09826#bib.bib70 "Attention is all you need"), [36](https://arxiv.org/html/2603.09826#bib.bib69 "Vision transformers with hierarchical attention"), [48](https://arxiv.org/html/2603.09826#bib.bib71 "Rethinking global context in crowd counting")] encoders for text and point cloud alignment, MNCL[[33](https://arxiv.org/html/2603.09826#bib.bib17 "Text to point cloud localization with multi-level negative contrastive learning")] employed multi-level contrastive learning to improve boundary perception, and CMMLoc[[58](https://arxiv.org/html/2603.09826#bib.bib4 "CMMLoc: advancing text-to-pointcloud localization with cauchy-mixture-model based framework")] modeled 3D objects using Cauchy Mixture Model priors with integrated cardinal direction cues for fine localization. Nevertheless, existing approaches operate on small and relatively simple submaps whose limited spatial extent simplifies localization, and they rely on direct feature matching without explicit spatial reasoning, limiting their performance in complex environments.

### 2.3 Vision-Language Models

VLMs are designed to capture holistic visual and spatial relationships across objects, scene layouts, and abstract semantic concepts. Early works such as CLIP[[44](https://arxiv.org/html/2603.09826#bib.bib23 "Learning transferable visual models from natural language supervision")] and ALIGN[[27](https://arxiv.org/html/2603.09826#bib.bib24 "Align before fuse: vision and language representation learning with momentum distillation")] pioneered contrastive learning between image-text pairs, enabling robust zero-shot recognition and retrieval. Subsequently, VLMs evolved beyond static contrastive pre-training toward richer multimodal reasoning and grounded visual understanding. Models like BLIP-2[[26](https://arxiv.org/html/2603.09826#bib.bib25 "Blip-2: bootstrapping language-image pre-training with frozen image encoders and large language models")] and InstructBLIP[[14](https://arxiv.org/html/2603.09826#bib.bib26 "Instructblip: towards general-purpose vision-language models with instruction tuning")] bridge frozen visual encoders with large language models via lightweight query transformers, achieving efficient multimodal alignment and instruction following. More recent systems, including the LLaVA series[[35](https://arxiv.org/html/2603.09826#bib.bib63 "Visual instruction tuning"), [34](https://arxiv.org/html/2603.09826#bib.bib27 "Improved baselines with visual instruction tuning")] and the Qwen-VL family[[6](https://arxiv.org/html/2603.09826#bib.bib62 "Qwen technical report"), [54](https://arxiv.org/html/2603.09826#bib.bib28 "Qwen2-vl: enhancing vision-language model’s perception of the world at any resolution"), [8](https://arxiv.org/html/2603.09826#bib.bib61 "Qwen2.5-vl technical report"), [7](https://arxiv.org/html/2603.09826#bib.bib60 "Qwen3-vl technical report")], further enhance spatial reasoning and dense region grounding through large-scale instruction tuning and unified multimodal architectures. Despite these advances, most existing efforts focus on perceiving and grounding real, physically present objects rather than estimating a virtual location described through language. Motivated by this gap, our work investigates how to utilize large VLMs for text-guided localization in 3D environments, leveraging their inherent spatial reasoning abilities for accurate localization.



Figure 2: Overview of VLM-Loc. In the data generation stage, the point cloud map is converted into a BEV image and a scene graph, where each node encodes semantic and spatial information. During training, the BEV image is used as the visual input, and the text input includes the scene graph, system prompt, and text query. These inputs are fed into a VLM for fine-tuning, enabling it to perform partial node assignment and position estimation in an autoregressive manner.

3 The CityLoc Benchmark
-----------------------

T2P localization is formulated on local maps for practicality. However, existing benchmarks do not reflect the scale and complexity of real-world environments. The most widely used T2P localization benchmark, KITTI360Pose, was designed for large-scale point cloud localization, yet its submaps cover only small areas and contain a limited number of simple objects. This substantially simplifies the fine localization stage by restricting the search space and reducing scene clutter, yet diverges from the scenarios in which humans typically perceive and describe much larger and more complex surroundings[[37](https://arxiv.org/html/2603.09826#bib.bib52 "Visual perception of egocentric distance in real and virtual environments"), [60](https://arxiv.org/html/2603.09826#bib.bib54 "A statistical explanation of visual space")].

To systematically evaluate existing frameworks and our proposed method under a more realistic setting, we introduce the CityLoc benchmark, constructed from the LiDAR point cloud of KITTI-360[[32](https://arxiv.org/html/2603.09826#bib.bib9 "Kitti-360: a novel dataset and benchmarks for urban scene understanding in 2d and 3d")] and the photogrammetric point cloud of CityRefer[[42](https://arxiv.org/html/2603.09826#bib.bib21 "CityRefer: geography-aware 3d visual grounding dataset on city-scale point cloud data")], referred to as CityLoc-K and CityLoc-C. CityLoc-K, derived from vehicle-mounted LiDAR scans, is used to validate the model design and assess localization performance. In contrast, CityLoc-C is built from unmanned aerial vehicle (UAV)-based photogrammetric point clouds[[20](https://arxiv.org/html/2603.09826#bib.bib64 "Sensaturban: learning semantics from urban-scale photogrammetric point clouds")] and is used to evaluate cross-domain generalization to unseen urban scenes with different sensing modalities and semantic distributions. Such a dual-source design enables a more comprehensive evaluation of model robustness across diverse point cloud scenes.

The CityLoc benchmark targets fine-grained T2P localization in complex environments with broader visible ranges and more diverse spatial structures, which present significant challenges for accurate pose estimation. We detail the construction process of CityLoc-K in this section, while CityLoc-C follows the same pipeline but adopts a tailored sampling strategy to fit its aerial viewpoint and semantic distribution. Additional details of CityLoc are provided in the supplementary material.

Map construction. KITTI-360[[32](https://arxiv.org/html/2603.09826#bib.bib9 "Kitti-360: a novel dataset and benchmarks for urban scene understanding in 2d and 3d")] provides city-scale point cloud scenes annotated with semantic labels, instance IDs, and color information. Along the vehicle trajectory, we perform distance-based sampling to generate point cloud submaps for localization experiments. For each sampled position, a local map $\mathcal{M}$ of size $S\times S$ m is obtained by cropping all points within the corresponding spatial window centered at the sampling location. Following KITTI-360[[32](https://arxiv.org/html/2603.09826#bib.bib9 "Kitti-360: a novel dataset and benchmarks for urban scene understanding in 2d and 3d")], all instances are divided into “stuff” and “object” categories. For “stuff” categories, points within each map are grouped into discrete instances using DBSCAN[[16](https://arxiv.org/html/2603.09826#bib.bib22 "A density-based algorithm for discovering clusters in large spatial databases with noise")]. For “object” categories, an instance is kept only if at least one-third of its points lie inside the map, ensuring sufficient completeness and recognizability. After processing, each local map $\mathcal{M}$ contains a set of discrete object instances, each associated with semantic labels, instance identifiers, per-point coordinates, and color attributes.

Text query generation. Around each sampled vehicle pose, we randomly sample multiple nearby positions to serve as query locations. Each query location defines a pose cell, which represents the local region visible from that position. For each pose cell, all objects within the $S\times S$ m area centered at the query location are collected, following the same procedure used in map construction. A subset of $N_{t}$ objects is then randomly selected to form the textual description. For each selected object, its semantic label, color, and orientation relative to the pose cell are inserted into a predefined template to generate the natural language query.

4 Methodology
-------------

Problem formulation. Given a textual description $\mathcal{T}$ of a target position $\xi$, the goal of T2P localization is to estimate the 2D coordinates $\xi=(x,y)\in\mathbb{R}^{2}$ within the point cloud map $\mathcal{M}$ on the ground plane. The text query $\mathcal{T}=\{h_{i}\}_{i=1}^{N_{t}}$ consists of $N_{t}$ linguistic hints describing the semantics, colors, and directions of surrounding objects relative to $\xi$. Following[[47](https://arxiv.org/html/2603.09826#bib.bib10 "Orienternet: visual localization in 2d public maps with neural matching")], we assume a locally planar ground surface, which is a reasonable approximation for urban environments at this scale.

[Fig.2](https://arxiv.org/html/2603.09826#S2.F2 "In 2.3 Vision-Language Models ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models") presents an overview of VLM-Loc. Given a point cloud map, we first convert it into two complementary representations: a BEV image and a scene graph that capture the dense geometric layout and object-level semantics of the environment ([Sec.4.1](https://arxiv.org/html/2603.09826#S4.SS1 "4.1 BEV Rendering and Scene Graph Generation ‣ 4 Methodology ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models")). During localization, these representations serve as map inputs, while the text query $\mathcal{T}$ is prepended with a system prompt $s$ to guide the spatial reasoning process of the VLM. To improve spatial understanding, we introduce the PNA mechanism ([Sec.4.2](https://arxiv.org/html/2603.09826#S4.SS2 "4.2 Partial Node Assignment ‣ 4 Methodology ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models")), which identifies valid textual hints and grounds them to corresponding nodes in the scene graph. Based on the grounded nodes, the model predicts the target position through an autoregressive decoding procedure ([Sec.4.3](https://arxiv.org/html/2603.09826#S4.SS3 "4.3 Position Estimation ‣ 4 Methodology ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models")).

### 4.1 BEV Rendering and Scene Graph Generation

BEV image rendering. Since VLMs are mainly pretrained on RGB images, we generate a BEV image by projecting the points of $\mathcal{M}$ onto the ground plane. This allows the models to leverage their spatial reasoning capability within a 2D layout. The map $\mathcal{M}$ contains an object set $\mathcal{O}=\{o_{i}\}_{i=1}^{N_{o}}$, where $N_{o}$ denotes the number of objects. Each object $o_{i}$ is represented by $N_{i}$ 3D points with RGB colors, $\mathcal{P}_{i}=\{(\mathbf{p}_{ij},\mathbf{c}_{ij})\}_{j=1}^{N_{i}}$, where $\mathbf{p}_{ij}\in\mathbb{R}^{3}$ denotes the per-point coordinates and $\mathbf{c}_{ij}\in\mathbb{R}^{3}$ the corresponding color, and $l_{i}$ is its object-level semantic label. To obtain a compact appearance representation for each object, we compute the representative color of $o_{i}$ by averaging the RGB values of all points:

$$\bar{\mathbf{c}}_{i}=\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\mathbf{c}_{ij}.\tag{1}$$

Next, the entire point cloud map is projected onto the ground plane and rasterized into a BEV image $I\in\mathbb{R}^{H\times W\times 3}$, covering a spatial range of $S\times S$ meters. Each pixel in the BEV image is assigned the color $\bar{\mathbf{c}}_{i}$ of the object whose projected footprint occupies that grid cell. When a pixel contains both “stuff” and “object” categories, the “object” category is rendered with higher priority to ensure that foreground regions are preserved without being overwritten.

Scene graph generation. The BEV image $I$ provides a top-down visual representation of the point cloud map $\mathcal{M}$. Although it captures the overall scene layout, it lacks explicit semantics, making it difficult to model spatial relationships among objects. To enable more structured scene understanding for VLMs, we construct a scene graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$. Here, $\mathcal{V}$ denotes the set of object nodes and $\mathcal{E}$ represents the pairwise spatial relations among them. Each object $o_{i}$ is represented as a node $n_{i}\in\mathcal{V}$, defined as $n_{i}=(i,l_{i},\mathbf{u}_{i})$, where $i$ is the node index, $l_{i}$ is the semantic label, and $\mathbf{u}_{i}=(u_{i},v_{i})$ is the centroid pixel coordinates of the object on the BEV image. Since the BEV-projected coordinates $\mathbf{u}_{i}$ already encode relative spatial relations, we omit explicit edges $\mathcal{E}$ in practice for simplicity. This representation provides both semantic and geometric structure that aligns well with VLMs for cross-modal reasoning.

Remark. Transforming the point cloud map into a BEV image provides a dense, rasterized representation of the environment that aligns naturally with off-the-shelf VLMs. In parallel, the scene graph serves as a structured representation that connects semantic labels to pixel locations and captures the relative spatial relationships among objects, thereby bridging the BEV image and textual instructions. Together, the BEV image and scene graph allow the VLMs to exploit both fine-grained geometric cues and high-level semantic relationships, facilitating precise spatial understanding and localization.
### 4.2 Partial Node Assignment

Figure 3: Illustration of the node assignment process. PNA determines whether a textual object is groundable by comparing the distance between points A and B with the threshold $\tau$.
Although BEV images and scene graphs provide structured and complementary representations of the environment, T2P localization still faces two inherent challenges. First, directly regressing coordinates from these inputs does not fully leverage the structured reasoning capability of VLMs, often resulting in ambiguous spatial interpretations. Second, since the map $\mathcal{M}$ covers only a limited region, some objects mentioned in the textual query may fall outside the mapped area. As a result, only part of the mentioned objects can be grounded, which leads to a partial matching problem that further complicates accurate pose estimation.
To address this issue, we introduce the PNA mechanism, which provides supervision by explicitly aligning visible textual objects with nodes in the scene graph. For each hint $h_i$ in the textual query $\mathcal{T}$, we determine whether every mentioned object can be matched to a node $n\in\mathcal{V}$ in the scene graph. As illustrated in [Fig.3](https://arxiv.org/html/2603.09826#S4.F3 "In 4.2 Partial Node Assignment ‣ 4 Methodology ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), consider a vegetation object whose projected center A is computed from the points of this object that fall inside the map region, and let B denote the center computed from the points of the pose cell that lie inside the region visible from the query location $\xi$. We compute the distance between A and B. If this distance is smaller than a threshold $\tau$, the textual object is considered valid, labeled as True, and linked to the corresponding node in $\mathcal{G}$. Otherwise, it is considered invalid, labeled as False, and the assignment for this object is set to null.
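A minimal sketch of the PNA decision rule, assuming the two point sets (the object's points inside the map region and the visible pose-cell points) have already been extracted as 2D ground-plane coordinates:

```python
import numpy as np

def assign_node(map_points, cell_points, tau):
    """Decide whether a mentioned object is groundable (PNA decision rule).

    A = centroid of the object's points that fall inside the map region;
    B = centroid of the pose-cell points visible from the query location.
    The object is labeled True (groundable) iff ||A - B|| < tau.
    Inputs are hypothetical (N, 2) arrays of ground-plane coordinates.
    """
    if len(map_points) == 0 or len(cell_points) == 0:
        # Object not observed in the map or in the pose cell: not groundable.
        return False
    a = map_points.mean(axis=0)
    b = cell_points.mean(axis=0)
    return bool(np.linalg.norm(a - b) < tau)
```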
During training, ground-truth (GT) assignments are used to supervise the model in determining whether each textual object lies inside the map ℳ\mathcal{M} and in establishing textual-object correspondences. During inference, the model predicts these partial correspondences autonomously, which enables robust reasoning even when the description includes only a subset of the objects in the scene.
Remark. PNA enables the VLM to perform interpretable localization by determining whether the objects mentioned in the textual query are visible in the map and by establishing correspondences between the textual query and map objects. This mechanism helps the model focus on relevant surrounding objects and understand the spatial distribution described in the text relative to the map, thereby improving localization accuracy.
### 4.3 Position Estimation
After obtaining the node–text correspondences from the PNA module, VLM-Loc estimates the 2-DoF position $\xi$ in pixel coordinates. We incorporate position prediction into the autoregressive decoding, allowing the model to output the corresponding 2D location on the BEV image within the same sequence of generation steps. This unified decoding strategy allows the model to reason consistently from correspondences to spatial coordinates. The predicted pixel location is then converted to the world coordinate system.
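Assuming the BEV image spans $S\times S$ meters centered at a known map-patch origin (the paper does not spell out the exact transform, so the convention below is an assumption), the pixel-to-world conversion could look like:

```python
def pixel_to_world(u, v, origin_xy, H=224, W=224, S=50.0):
    """Convert a predicted BEV pixel (u, v) back to world coordinates.

    Inverts a simple rasterization convention: the image spans S x S meters
    centered at `origin_xy` (a hypothetical map-patch center).
    """
    x = (u / (W - 1) - 0.5) * S + origin_xy[0]
    y = (v / (H - 1) - 0.5) * S + origin_xy[1]
    return x, y
```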
### 4.4 Loss Function
Following the autoregressive prediction paradigm of VLMs, we train VLM-Loc by maximizing the likelihood of generating the correct text–node alignments and position predictions, both of which are expressed entirely in text form. For each output token $y_t$ in the generated sequence, the training objective is the standard cross-entropy loss:
$$\mathcal{L}=-\sum_{t=1}^{T}\log P\left(y_{t}\mid y_{<t},\,s,\,\mathcal{T},\,I,\,\mathcal{G}\right), \tag{2}$$
where $T$ denotes the number of tokens in the predicted sequence. After generating the token sequence $\{y_1,y_2,\dots,y_T\}$, the model outputs a JSON-formatted string, which we parse into structured predictions, including matched text–node pairs and the 2D pixel position.
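The decoded string can be parsed with standard JSON tooling; the field names `matches` and `position` below are illustrative, since the paper does not publish its exact output schema:

```python
import json

def parse_prediction(text):
    """Parse the JSON-formatted string decoded by the VLM into structured
    predictions. Hypothetical schema: 'matches' maps each textual hint to a
    node id (or null when PNA labels it as not groundable), and 'position'
    holds the 2D pixel estimate.
    """
    out = json.loads(text)
    matches = dict(out['matches'])  # hint name -> node id or None
    u, v = out['position']
    return matches, (u, v)
```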
5 Experiments
-------------
### 5.1 Experimental Setup
Baselines. In our experiments, we adopt representative T2P localization methods, including Text2Pos[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization")], Text2Loc[[57](https://arxiv.org/html/2603.09826#bib.bib3 "Text2loc: 3d point cloud localization from natural language")], MNCL[[33](https://arxiv.org/html/2603.09826#bib.bib17 "Text to point cloud localization with multi-level negative contrastive learning")], and CMMLoc[[58](https://arxiv.org/html/2603.09826#bib.bib4 "CMMLoc: advancing text-to-pointcloud localization with cauchy-mixture-model based framework")], as baselines. For a fair comparison, all baselines are retrained on CityLoc-K using only their localization modules, with official implementations and identical training configurations.
Evaluation metrics. Following prior localization works[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization"), [47](https://arxiv.org/html/2603.09826#bib.bib10 "Orienternet: visual localization in 2d public maps with neural matching")], we evaluate localization performance using Recall@$K$ m, which measures the percentage of samples whose predicted positions are within $K$ meters of the GT location. Results are reported for $K\in\{5,10,15\}$.
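The metric can be sketched as follows (using a strict `<` comparison, which is an assumption about the boundary case):

```python
import numpy as np

def recall_at_k(pred_xy, gt_xy, ks=(5, 10, 15)):
    """Recall@K m: fraction of samples whose predicted position lies within
    K meters of the ground-truth location, for each K in `ks`."""
    errors = np.linalg.norm(np.asarray(pred_xy) - np.asarray(gt_xy), axis=1)
    return {k: float((errors < k).mean()) for k in ks}
```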
Implementation details. We employ Qwen3-VL-8B-Instruct[[59](https://arxiv.org/html/2603.09826#bib.bib59 "Qwen3 technical report")] as the base model for the VLM-Loc framework. Training is performed using the Swift framework[[63](https://arxiv.org/html/2603.09826#bib.bib7 "Swift: a scalable lightweight infrastructure for fine-tuning")] with LoRA-based parameter-efficient tuning[[19](https://arxiv.org/html/2603.09826#bib.bib8 "Lora: low-rank adaptation of large language models.")]. We insert LoRA adapters with rank $r=8$ and scaling factor $\alpha=16$ into all linear layers. The parameters of the vision encoder, vision adapters, and language backbone remain frozen; only the LoRA parameters are updated during training, preserving the pretrained cross-modal alignment while adapting the model to the domain gap between rendered BEV and natural images. The model is trained for 2 epochs with a global batch size of 4 on 8 NVIDIA RTX 4090 GPUs. We use the AdamW optimizer[[38](https://arxiv.org/html/2603.09826#bib.bib11 "Decoupled weight decay regularization")] with a learning rate of $1\times 10^{-4}$ and a warm-up ratio of 0.05. All experiments are conducted in bfloat16 precision.
For each BEV image $I$, the resolution is set to $H=W=224$, corresponding to a spatial coverage of $S=50$ m. Each textual query $\mathcal{T}$ contains $N_t=6$ hints. A dynamic threshold $\tau$ is used for different semantic categories: 5 m for “object” classes and 15 m for “stuff” classes.
| # | BEV | SG | PNA | CityLoc-K Val R@5/10/15m | CityLoc-K Test R@5/10/15m |
|---|:---:|:---:|:---:|---|---|
| (a) | ✓ | ✗ | ✗ | 13.04/32.79/52.31 | 13.21/33.86/51.40 |
| (b) | ✗ | ✓ | ✗ | 26.75/52.94/71.67 | 24.62/51.25/69.46 |
| (c) | ✗ | ✓ | ✓ | 33.69/59.48/75.51 | 32.34/61.34/74.94 |
| (d) | ✓ | ✓ | ✗ | 29.06/59.37/77.01 | 29.79/57.57/73.78 |
| (e) | ✓ | ✓ | ✓ | **36.23/63.66/77.77** | **35.91/63.81/76.79** |

Table 1: Ablation study on each component. Input: BEV = BEV image, SG = scene graph. Output: PNA = partial node assignment. Best results are in bold, and second-best results are underlined.
Table 2: Ablation study on partial and full node assignment.
Table 3: Ablation study on text query components.
Table 4: Ablation study on the effect of different VLM backbones.
Table 5: Localization results of VLM-Loc and baseline methods on CityLoc-K. Green numbers indicate improvements over baselines.
### 5.2 Ablation Study
This section provides ablation studies to evaluate: (1) the effectiveness of each component in VLM-Loc; (2) the impact of the PNA mechanism; (3) the effect of different text query construction strategies; and (4) the effect of various VLM backbones. Ablation studies are conducted on CityLoc-K.
Effect of components. We conduct ablation studies to assess the contribution of each component in VLM-Loc, including the BEV image, the scene graph (SG), and the proposed PNA mechanism, as summarized in [Tab.1](https://arxiv.org/html/2603.09826#S5.T1 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). Variant (a), which uses only the BEV image, performs poorly, showing that dense appearance cues alone cannot capture object relationships and support accurate localization. Variant (b), which relies solely on the SG, significantly outperforms (a), indicating that relational structure is more effective for position estimation. Introducing PNA on top of SG, as in (c), further boosts performance, improving Recall@5m by 6.94% on the validation set and 7.72% on the test set. This confirms that explicit node-level grounding strengthens the reliability of coordinate prediction. Variant (d), which combines the BEV image with the scene graph, improves performance by 2.31% and 5.17% at Recall@5m over the SG-only baseline, indicating that integrating dense visual cues with relational structure provides the model with more complete spatial information. Finally, the full model (e), integrating BEV, SG, and PNA, achieves the best overall performance, demonstrating the synergy between multimodal inputs and explicit grounding for T2P localization.
Comparison between partial and full node assignment. In our PNA strategy, an object mentioned in a hint $h$ is considered groundable when the distance between the centroid of its visible region in the map $\mathcal{M}$ and the centroid of its visible region in the pose cell is smaller than a threshold $\tau$. Otherwise, the object is treated as not groundable. To evaluate its effectiveness, we compare PNA with a full node assignment variant, in which any object mentioned in the textual query is always assigned to a node in the scene graph as long as at least one node with the same semantic label exists. In particular, the object is forced to match the nearest node with the same label, even when their centroid distance exceeds $\tau$, rather than being left unassigned. As shown in [Tab.2](https://arxiv.org/html/2603.09826#S5.T2 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), the proposed partial node assignment consistently outperforms the full assignment on both the validation and test sets. Specifically, it improves Recall@5m by 18.00% and 18.10%, respectively, demonstrating that explicitly accounting for partial visibility allows the model to focus on truly observable objects, thereby enhancing node grounding and ultimately benefiting position estimation.
Effect of text query components. As shown in [Tab.3](https://arxiv.org/html/2603.09826#S5.T3 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), we analyze the contribution of semantic, color, and directional cues in the text queries. Removing both color and directional information (Variant (a)) leads to a severe performance drop, reducing Recall@5m from 36.23% to 18.74% on the validation set and from 35.91% to 16.93% on the test set. When keeping color but removing directional cues (Variant (b)), the performance remains significantly lower than the full setting, with Recall@5m of 18.28% on validation and 18.01% on test. These results show that directional cues play the dominant role in spatial reasoning, as removing them nearly halves the performance even when color information is preserved, while color provides complementary appearance grounding that further enhances localization when combined with directional cues.
Effect of VLM backbones. We evaluate the impact of different VLM architectures and model sizes on localization performance, as summarized in [Tab.4](https://arxiv.org/html/2603.09826#S5.T4 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). By leveraging either InternVL3.5[[55](https://arxiv.org/html/2603.09826#bib.bib18 "Internvl3. 5: advancing open-source multimodal models in versatility, reasoning, and efficiency")] or the Qwen3-VL-Instruct models[[7](https://arxiv.org/html/2603.09826#bib.bib60 "Qwen3-vl technical report")], VLM-Loc achieves strong performance, showing that diverse VLM architectures are compatible with and effective for VLM-Loc. Within the Qwen3-VL family, the 2B, 4B, and 8B variants achieve comparable results, while the 32B model shows a significant gain, reaching 39.84% Recall@5m on the validation set. This scaling trend indicates that VLM-Loc consistently benefits from the enhanced multimodal reasoning capacity of larger models.
### 5.3 Results

Figure 4: Relationship between localization error and the number of correctly assigned nodes on the CityLoc-K test set. More correct node assignments correspond to lower localization errors.

Figure 5: Qualitative results of VLM-Loc and baseline methods on CityLoc-K. Each example visualizes the predicted and GT positions on colorized BEV maps rendered with semantic labels. The red and black circles denote the GT and predicted positions, respectively. The localization error is shown below each image, and green/red borders indicate a localization error below/above 5 m.
Node assignment results. [Fig.4](https://arxiv.org/html/2603.09826#S5.F4 "In 5.3 Results ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models") presents the distribution of samples according to the number of correctly assigned nodes on the CityLoc-K test set. Most samples achieve three or more correct assignments, indicating that VLM-Loc is generally effective at grounding textual cues to the corresponding scene graph nodes. On the right, the interquartile range (IQR)[[15](https://arxiv.org/html/2603.09826#bib.bib38 "A modern introduction to probability and statistics: understanding why and how")] illustrates the distribution of localization errors. A clear trend emerges: as the number of correctly assigned nodes increases, both the median error and the spread of the error distribution decrease substantially, with notably stable performance once four or more nodes are correctly grounded. This strong correlation highlights the importance of accurate node assignment, as reliable grounding of textual descriptions directly leads to precise localization.
Localization accuracy. Localization results of the baseline methods and our proposed VLM-Loc are presented in [Tab.5](https://arxiv.org/html/2603.09826#S5.T5 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). On CityLoc-K, VLM-Loc achieves the best localization performance and significantly outperforms the strongest baseline CMMLoc, with improvements of 15.46% and 14.20% in Recall@5m on the validation and test sets, respectively. Qualitative examples in [Fig.5](https://arxiv.org/html/2603.09826#S5.F5 "In 5.3 Results ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models") further show that VLM-Loc consistently surpasses all baselines across diverse scenes. Results show that VLM-Loc attains strong spatial understanding through BEV and scene graph representations, and performs structured reasoning via the PNA mechanism, yielding more accurate and interpretable localization than existing baselines. More qualitative results are available in the supplementary material.
Table 6: Generalization results on the CityLoc-C benchmark.
Generalization ability. To evaluate cross-domain generalization, we directly transfer the models trained on CityLoc-K to the CityLoc-C split and assess their localization accuracy without extra fine-tuning. The point cloud data of CityLoc-C originates from the SensatUrban[[20](https://arxiv.org/html/2603.09826#bib.bib64 "Sensaturban: learning semantics from urban-scale photogrammetric point clouds")] dataset, which contains photogrammetric point clouds captured by drones and thus exhibits characteristics markedly different from the vehicle-mounted LiDAR data used in CityLoc-K. As shown in [Tab.6](https://arxiv.org/html/2603.09826#S5.T6 "In 5.3 Results ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), VLM-Loc achieves substantially higher recall across all thresholds (21.37%, 49.12%, and 68.26% at 5/10/15 m), significantly outperforming prior methods. These results demonstrate that VLM-Loc generalizes effectively to unseen environments and heterogeneous point cloud sources.
6 Conclusion
------------
In this paper, we present VLM-Loc, a VLM-based framework for localization in point cloud maps using text queries. By leveraging the visual and textual understanding and the strong spatial reasoning capability of VLMs, our approach enables accurate localization from language cues in complex environments. VLM-Loc converts point cloud maps into BEV images and scene graphs, and introduces a partial node assignment mechanism that explicitly aligns textual cues with spatial nodes. We also establish the CityLoc benchmark to evaluate fine-grained T2P localization. Experiments on CityLoc demonstrate that VLM-Loc achieves substantial improvements in localization accuracy and robustness over existing baselines.
Future work. We believe that this work paves the way for two promising research directions: (1) Strengthening multi-step reasoning and scene understanding so that models can handle longer and more compositional textual descriptions in complex outdoor environments. (2) Building upon the CityLoc benchmark to move from passive localization toward an active agent[[30](https://arxiv.org/html/2603.09826#bib.bib74 "Theory of mind inspired large reasoning language model improved multi-agent reinforcement learning algorithm for robust and adaptive partner modelling")] that unifies localization with planning and navigation in unseen environments, improving the ability to interact effectively with its surroundings.
Acknowledgements
----------------
This work is supported in part by the Fundamental Research Funds for the Central Universities (Nankai University, No. 070-63253235) and in part by NSFC (No. 62576176).
References
----------
* [1] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. (2023) GPT-4 technical report. arXiv preprint arXiv:2303.08774.
* [2] (2020) ReferIt3D: neural listeners for fine-grained 3D object identification in real-world scenes. In European Conference on Computer Vision (ECCV), pp. 422–440.
* [3] Z. An, G. Sun, Y. Liu, R. Li, J. Han, E. Konukoglu, and S. Belongie (2025) Generalized few-shot 3D point cloud segmentation with vision-language model. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 16997–17007.
* [4] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic (2016) NetVLAD: CNN architecture for weakly supervised place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5297–5307.
* [5] S. H. S. Ariff, Y. Liu, G. Sun, J. Yang, H. Ding, X. Geng, and X. Jiang (2026) Evaluating SAM2 for video semantic segmentation. Machine Intelligence Research.
* [6] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng, Y. Fan, W. Ge, Y. Han, F. Huang, et al. (2023) Qwen technical report. arXiv preprint arXiv:2309.16609.
* [7] S. Bai, Y. Cai, R. Chen, K. Chen, X. Chen, Z. Cheng, L. Deng, W. Ding, C. Gao, C. Ge, et al. (2025) Qwen3-VL technical report. arXiv preprint arXiv:2511.21631.
* [8] S. Bai, K. Chen, X. Liu, J. Wang, W. Ge, S. Song, K. Dang, P. Wang, S. Wang, J. Tang, et al. (2025) Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923.
* [9] Z. Cai, Y. Wang, Q. Sun, R. Wang, C. Gu, W. Yin, Z. Lin, Z. Yang, C. Wei, O. Qian, H. E. Pang, X. Shi, K. Deng, X. Han, Z. Chen, J. Li, X. Fan, H. Deng, L. Lu, B. Li, Z. Liu, Q. Wang, D. Lin, and L. Yang (2025) Holistic evaluation of multimodal LLMs on spatial intelligence. arXiv preprint arXiv:2508.13142.
* [10] D. Cattaneo, M. Vaghi, S. Fontana, A. L. Ballardini, and D. G. Sorrenti (2020) Global visual localization in LiDAR-maps through shared 2D-3D embedding space. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 4365–4371.
* [11] D. Z. Chen, A. X. Chang, and M. Nießner (2020) ScanRefer: 3D object localization in RGB-D scans using natural language. In European Conference on Computer Vision (ECCV), pp. 202–221.
* [12] X. Chen, T. Läbe, A. Milioto, T. Röhling, O. Vysotska, A. Haag, J. Behley, and C. Stachniss (2021) OverlapNet: loop closing for LiDAR-based SLAM. arXiv preprint arXiv:2105.11344.
* [13] M. Chu, Z. Zheng, W. Ji, T. Wang, and T. Chua (2024) Towards natural language-guided drones: GeoText-1652 benchmark with spatial relation matching. In European Conference on Computer Vision, pp. 213–231.
* [14] W. Dai, J. Li, D. Li, A. Tiong, J. Zhao, W. Wang, B. Li, P. N. Fung, and S. Hoi (2023) InstructBLIP: towards general-purpose vision-language models with instruction tuning. Advances in Neural Information Processing Systems 36, pp. 49250–49267.
* [15] F. M. Dekking (2005) A Modern Introduction to Probability and Statistics: Understanding Why and How. Springer Science & Business Media.
* [16] M. Ester, H. Kriegel, J. Sander, X. Xu, et al. (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, Vol. 96, pp. 226–231.
* [17] Z. Fan, Z. Song, H. Liu, Z. Lu, J. He, and X. Du (2022) SVT-Net: super light-weight sparse voxel transformer for large scale place recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, pp. 551–560.
* [18] M. Feng, Z. Li, Q. Li, L. Zhang, X. Zhang, G. Zhu, H. Zhang, Y. Wang, and A. Mian (2021) Free-form description guided 3D visual graph network for object grounding in point cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3722–3731.
* [19] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, W. Chen, et al. (2022) LoRA: low-rank adaptation of large language models. In ICLR.
* [20] Q. Hu, B. Yang, S. Khalid, W. Xiao, N. Trigoni, and A. Markham (2022) SensatUrban: learning semantics from urban-scale photogrammetric point clouds. International Journal of Computer Vision 130 (2), pp. 316–343.
* [21] Y. Hu, L. Hu, Q. Kong, and B. Fan (2025) A survey on end-to-end perception and prediction for autonomous driving. Machine Intelligence Research 22 (6), pp. 999–1030.
* [22] Physical Intelligence, K. Black, N. Brown, J. Darpinian, K. Dhabalia, D. Driess, A. Esmail, M. Equi, C. Finn, N. Fusai, et al. (2025) $\pi_{0.5}$: a vision-language-action model with open-world generalization. arXiv preprint arXiv:2504.16054.
* [23] A. Jain, N. Gkanatsios, I. Mediratta, and K. Fragkiadaki (2022) Bottom up top down detection transformers for language grounding in images and point clouds. In European Conference on Computer Vision, pp. 417–433.
* [24] J. Knights, S. B. Laina, P. Moghadam, and S. Leutenegger (2025) SOLVR: submap oriented LiDAR-visual re-localisation. In 2025 IEEE International Conference on Robotics and Automation (ICRA), pp. 6387–6393.
* [25] M. Kolmet, Q. Zhou, A. Ošep, and L. Leal-Taixé (2022) Text2Pos: text-to-point-cloud cross-modal localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6687–6696.
* [26] J. Li, D. Li, S. Savarese, and S. Hoi (2023) BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, pp. 19730–19742.
* [27] J. Li, R. Selvaraju, A. Gotmare, S. Joty, C. Xiong, and S. C. H. Hoi (2021) Align before fuse: vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems 34, pp. 9694–9705.
* [28] S. Li, X. Chen, Y. Liu, D. Dai, C. Stachniss, and J. Gall (2022) Multi-scale interaction for real-time LiDAR data segmentation on an embedded platform. IEEE Robotics and Automation Letters 7 (2), pp. 738–745.
* [29] W. Li, Y. Yang, S. Yu, G. Hu, C. Wen, M. Cheng, and C. Wang (2024) DiffLoc: diffusion model for outdoor LiDAR localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15045–15054.
* [30] X. Li, T. Zhang, C. Liu, S. Xu, and B. Xu (2025) Theory of mind inspired large reasoning language model improved multi-agent reinforcement learning algorithm for robust and adaptive partner modelling. Machine Intelligence Research, pp. 1–14.
|
| 232 |
+
* [31]Y. Li, M. Gladkova, Y. Xia, R. Wang, and D. Cremers (2024)Vxp: voxel-cross-pixel large-scale image-lidar place recognition. arXiv preprint arXiv:2403.14594. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 233 |
+
* [32]Y. Liao, J. Xie, and A. Geiger (2022)Kitti-360: a novel dataset and benchmarks for urban scene understanding in 2d and 3d. IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (3), pp.3292–3310. Cited by: [Figure 6](https://arxiv.org/html/2603.09826#S1.F6 "In A Overview ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Figure 6](https://arxiv.org/html/2603.09826#S1.F6.13.2 "In A Overview ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§B.3](https://arxiv.org/html/2603.09826#S2.SS3a.p2.1 "B.3 Benchmark Statistics ‣ B The CityLoc Benchmark Details ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§3](https://arxiv.org/html/2603.09826#S3.p2.1 "3 The CityLoc Benchmark ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§3](https://arxiv.org/html/2603.09826#S3.p4.3 "3 The CityLoc Benchmark ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 234 |
+
* [33]D. Liu, S. Huang, W. Li, S. Shen, and C. Wang (2025)Text to point cloud localization with multi-level negative contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39, pp.5397–5405. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p2.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p1.1 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 5](https://arxiv.org/html/2603.09826#S5.T5.2.5.3.1 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 6](https://arxiv.org/html/2603.09826#S5.T6.2.5.5.1 "In 5.3 Results ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 235 |
+
* [34]H. Liu, C. Li, Y. Li, and Y. J. Lee (2024)Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.26296–26306. Cited by: [§2.3](https://arxiv.org/html/2603.09826#S2.SS3.p1.1 "2.3 Vision-Language Models ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 236 |
+
* [35]H. Liu, C. Li, Q. Wu, and Y. J. Lee (2023)Visual instruction tuning. NeurIPS. Cited by: [§2.3](https://arxiv.org/html/2603.09826#S2.SS3.p1.1 "2.3 Vision-Language Models ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 237 |
+
* [36]Y. Liu, Y. Wu, G. Sun, L. Zhang, A. Chhatkuli, and L. Van Gool (2024)Vision transformers with hierarchical attention. Machine intelligence research 21 (4), pp.670–683. Cited by: [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 238 |
+
* [37]J. M. Loomis and J. M. Knapp (2003)Visual perception of egocentric distance in real and virtual environments. In Virtual and adaptive environments, pp.21–46. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p3.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§3](https://arxiv.org/html/2603.09826#S3.p1.1 "3 The CityLoc Benchmark ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 239 |
+
* [38]I. Loshchilov and F. Hutter (2017)Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. Cited by: [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p3.3 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 240 |
+
* [39]S. Lu, X. Xu, D. Zhang, Y. Wu, H. Lu, X. Chen, R. Xiong, and Y. Wang (2025)Ring#: pr-by-pe global localization with roto-translation equivariant gram learning. IEEE Transactions on Robotics. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 241 |
+
* [40]L. Luo, S. Cao, X. Li, J. Xu, R. Ai, Z. Yu, and X. Chen (2025)Bevplace++: fast, robust, and lightweight lidar global localization for unmanned ground vehicles. IEEE Transactions on Robotics. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 242 |
+
* [41]J. Ma, J. Zhang, J. Xu, R. Ai, W. Gu, and X. Chen (2022)OverlapTransformer: an efficient and yaw-angle-invariant transformer network for lidar-based place recognition. IEEE Robotics and Automation Letters 7 (3), pp.6958–6965. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 243 |
+
* [42]T. Miyanishi, F. Kitamori, S. Kurita, J. Lee, M. Kawanabe, and N. Inoue (2023)CityRefer: geography-aware 3d visual grounding dataset on city-scale point cloud data. arXiv preprint arXiv:2310.18773. Cited by: [Figure 6](https://arxiv.org/html/2603.09826#S1.F6 "In A Overview ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Figure 6](https://arxiv.org/html/2603.09826#S1.F6.13.2 "In A Overview ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§B.2](https://arxiv.org/html/2603.09826#S2.SS2a.p1.1 "B.2 CityLoc-C Construction ‣ B The CityLoc Benchmark Details ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§3](https://arxiv.org/html/2603.09826#S3.p2.1 "3 The CityLoc Benchmark ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 244 |
+
* [43]C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017)Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.652–660. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 245 |
+
* [44]A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. (2021)Learning transferable visual models from natural language supervision. In International conference on machine learning, pp.8748–8763. Cited by: [§2.3](https://arxiv.org/html/2603.09826#S2.SS3.p1.1 "2.3 Vision-Language Models ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 246 |
+
* [45]R. S. Renner, B. M. Velichkovsky, and J. R. Helmert (2013)The perception of egocentric distances in virtual environments-a review. ACM Computing Surveys (CSUR)46 (2), pp.1–40. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p3.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 247 |
+
* [46]J. Roh, K. Desingh, A. Farhadi, and D. Fox (2022)Languagerefer: spatial-language model for 3d visual grounding. In Conference on Robot Learning, pp.1046–1056. Cited by: [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p1.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 248 |
+
* [47]P. Sarlin, D. DeTone, T. Yang, A. Avetisyan, J. Straub, T. Malisiewicz, S. R. Bulo, R. Newcombe, P. Kontschieder, and V. Balntas (2023)Orienternet: visual localization in 2d public maps with neural matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.21632–21642. Cited by: [§4](https://arxiv.org/html/2603.09826#S4.p1.7 "4 Methodology ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p2.3 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 249 |
+
* [48]G. Sun, Y. Liu, T. Probst, D. P. Paudel, N. Popovic, and L. Van Gool (2024)Rethinking global context in crowd counting. Machine Intelligence Research 21 (4), pp.640–651. Cited by: [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 250 |
+
* [49]P. J. Teunissen, O. Montenbruck, et al. (2017)Springer handbook of global navigation satellite systems. Vol. 10, Springer. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p1.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 251 |
+
* [50]M. A. Uy and G. H. Lee (2018)Pointnetvlad: deep point cloud based retrieval for large-scale place recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.4470–4479. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 252 |
+
* [51]A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017)Attention is all you need. Advances in neural information processing systems 30. Cited by: [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 253 |
+
* [52]G. Wang, H. Fan, and M. Kankanhalli (2023)Text to point cloud localization with relation-enhanced transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, pp.2501–2509. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p2.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 254 |
+
* [53]L. Wang, Z. He, R. Dang, M. Shen, C. Liu, and Q. Chen (2024)Vision-and-language navigation via causal learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.13139–13150. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p1.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 255 |
+
* [54]P. Wang, S. Bai, S. Tan, S. Wang, Z. Fan, J. Bai, K. Chen, X. Liu, J. Wang, W. Ge, et al. (2024)Qwen2-vl: enhancing vision-language model’s perception of the world at any resolution. arXiv preprint arXiv:2409.12191. Cited by: [§2.3](https://arxiv.org/html/2603.09826#S2.SS3.p1.1 "2.3 Vision-Language Models ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 256 |
+
* [55]W. Wang, Z. Gao, L. Gu, H. Pu, L. Cui, X. Wei, Z. Liu, L. Jing, S. Ye, J. Shao, et al. (2025)Internvl3. 5: advancing open-source multimodal models in versatility, reasoning, and efficiency. arXiv preprint arXiv:2508.18265. Cited by: [§5.2](https://arxiv.org/html/2603.09826#S5.SS2.p5.1 "5.2 Ablation Study ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 4](https://arxiv.org/html/2603.09826#S5.T4.2.6.6.1 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 257 |
+
* [56]Y. Xia, M. Gladkova, R. Wang, Q. Li, U. Stilla, J. F. Henriques, and D. Cremers (2023)Casspr: cross attention single scan place recognition. In Proceedings of the IEEE/CVF international conference on computer vision, pp.8461–8472. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 258 |
+
* [57]Y. Xia, L. Shi, Z. Ding, J. F. Henriques, and D. Cremers (2024)Text2loc: 3d point cloud localization from natural language. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.14958–14967. Cited by: [Figure 1](https://arxiv.org/html/2603.09826#S1.F1 "In 1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Figure 1](https://arxiv.org/html/2603.09826#S1.F1.3.2 "In 1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§1](https://arxiv.org/html/2603.09826#S1.p2.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§1](https://arxiv.org/html/2603.09826#S1.p4.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p1.1 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 5](https://arxiv.org/html/2603.09826#S5.T5.2.4.2.1 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 6](https://arxiv.org/html/2603.09826#S5.T6.2.4.4.1 "In 5.3 Results ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 259 |
+
* [58]Y. Xu, H. Qu, J. Liu, W. Zhang, and X. Yang (2025)CMMLoc: advancing text-to-pointcloud localization with cauchy-mixture-model based framework. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp.6637–6647. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p2.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§1](https://arxiv.org/html/2603.09826#S1.p4.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§2.2](https://arxiv.org/html/2603.09826#S2.SS2.p2.1 "2.2 3D Vision and Language ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§D.1](https://arxiv.org/html/2603.09826#S4.SS1a.p1.1 "D.1 Results on KITTI360Pose ‣ D Additional Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p1.1 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 5](https://arxiv.org/html/2603.09826#S5.T5.2.6.4.1 "In 5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [Table 6](https://arxiv.org/html/2603.09826#S5.T6.2.6.6.1 "In 5.3 Results ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 260 |
+
* [59]A. Yang, A. Li, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Gao, C. Huang, C. Lv, et al. (2025)Qwen3 technical report. arXiv preprint arXiv:2505.09388. Cited by: [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p3.3 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 261 |
+
* [60]Z. Yang and D. Purves (2003)A statistical explanation of visual space. Nature neuroscience 6 (6), pp.632–640. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p3.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"), [§3](https://arxiv.org/html/2603.09826#S3.p1.1 "3 The CityLoc Benchmark ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 262 |
+
* [61]J. Ye, H. Lin, L. Ou, D. Chen, Z. Wang, Q. Zhu, C. He, and W. Li (2025)Where am i? cross-view geo-localization with natural language descriptions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.5890–5900. Cited by: [§C.1](https://arxiv.org/html/2603.09826#S3.SS1.p5.1 "C.1 Textual Query Generation ‣ C Implementation Details ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 263 |
+
* [62]W. Zhang, H. Zhou, Z. Dong, Q. Yan, and C. Xiao (2022)Rank-pointretrieval: reranking point cloud retrieval via a visually consistent registration evaluation. IEEE Transactions on Visualization and Computer Graphics 29 (9), pp.3840–3854. Cited by: [§2.1](https://arxiv.org/html/2603.09826#S2.SS1.p1.1 "2.1 Localization in Point Cloud ‣ 2 Related Work ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 264 |
+
* [63]Y. Zhao, J. Huang, J. Hu, X. Wang, Y. Mao, D. Zhang, Z. Jiang, Z. Wu, B. Ai, A. Wang, et al. (2025)Swift: a scalable lightweight infrastructure for fine-tuning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39, pp.29733–29735. Cited by: [§5.1](https://arxiv.org/html/2603.09826#S5.SS1.p3.3 "5.1 Experimental Setup ‣ 5 Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 265 |
+
* [64]Y. Zheng, L. Yao, Y. Su, Y. Zhang, Y. Wang, S. Zhao, Y. Zhang, and L. Chau (2025)A survey of embodied learning for object-centric robotic manipulation. Machine Intelligence Research 22 (4), pp.588–626. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p1.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 266 |
+
* [65]G. Zhou, Y. Hong, Z. Wang, C. Zhao, M. Bansal, and Q. Wu (2025)Same: learning generic language-guided visual navigation with state-adaptive mixture of experts. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp.7794–7807. Cited by: [§1](https://arxiv.org/html/2603.09826#S1.p1.1 "1 Introduction ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models").
|
| 267 |
+
|
| 268 |
+
\thetitle

Supplementary Material

A Overview
----------

In this supplementary material, we provide additional benchmark construction details and statistics ([Sec. B](https://arxiv.org/html/2603.09826#S2a)), implementation details of the textual queries and system prompt ([Sec. C](https://arxiv.org/html/2603.09826#S3a)), additional experiments (Sec. D), and further visualization results ([Sec. E](https://arxiv.org/html/2603.09826#S5a)).


|
| 278 |
+
|
| 279 |
+
Figure 6: Example point clouds from the CityLoc benchmark. (a) A roadside LiDAR scene from KITTI-360[[32](https://arxiv.org/html/2603.09826#bib.bib9 "Kitti-360: a novel dataset and benchmarks for urban scene understanding in 2d and 3d")]. (b) A photogrammetric urban block from CityRefer[[42](https://arxiv.org/html/2603.09826#bib.bib21 "CityRefer: geography-aware 3d visual grounding dataset on city-scale point cloud data")].
|
| 280 |
+
|
| 281 |
+
B The CityLoc Benchmark Details
-------------------------------

Our CityLoc benchmark consists of two subsets: CityLoc-K for training, validation, and testing, and CityLoc-C for cross-domain testing, as shown in [Fig. 6](https://arxiv.org/html/2603.09826#S1.F6). The two splits differ significantly in semantic composition, sensing modality, point cloud characteristics, and geographic region, offering a diverse and challenging setting for evaluating the generalization ability of T2P localization models.

In this section, we provide additional details that complement the description in [Sec. 3](https://arxiv.org/html/2603.09826#S3). We first describe the construction procedures of CityLoc-K and CityLoc-C in [Sec. B.1](https://arxiv.org/html/2603.09826#S2.SS1a) and [Sec. B.2](https://arxiv.org/html/2603.09826#S2.SS2a), respectively. In [Sec. B.3](https://arxiv.org/html/2603.09826#S2.SS3a), we then present the statistics of the proposed CityLoc benchmark.

### B.1 CityLoc-K Construction

Data source. CityLoc-K is constructed from the KITTI-360 dataset (available under the Creative Commons [Attribution-NonCommercial-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-nc-sa/3.0/)), which contains large-scale LiDAR point clouds collected by vehicle-mounted sensors along urban roads in Karlsruhe. We use 5 training sequences (00, 02, 04, 06, 07), 1 validation sequence (10), and 3 testing sequences (03, 05, 09).

Map center sampling. When constructing the maps, we sample map centers from the vehicle trajectory. Specifically, we perform distance-based subsampling along the trajectory to ensure that any two selected centers are at least 10 m apart. This keeps the sampling evenly distributed across the entire trajectory, preventing centers from clustering in specific regions and avoiding biased localization outcomes.

Query location sampling. For each sampled map center, we further generate four query positions by randomly perturbing its horizontal coordinates within ±15 m along both the East and North directions, thereby increasing the number of query samples and their positional diversity.

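The two sampling steps above can be sketched as follows. This is an illustrative sketch, not the paper's released code: the greedy spacing check (which only enforces the 10 m gap between consecutively kept centers along the trajectory) and all function names are my assumptions.

```python
import numpy as np

def subsample_centers(trajectory_xy, min_dist=10.0):
    """Greedy distance-based subsampling: keep a trajectory pose only if it
    lies at least `min_dist` metres from the previously kept center."""
    kept = [trajectory_xy[0]]
    for p in trajectory_xy[1:]:
        if np.linalg.norm(p - kept[-1]) >= min_dist:
            kept.append(p)
    return np.stack(kept)

def sample_queries(center_xy, n=4, max_offset=15.0, seed=0):
    """Generate query positions by perturbing a map center by up to
    +/- 15 m along the East (x) and North (y) axes."""
    rng = np.random.default_rng(seed)
    return center_xy + rng.uniform(-max_offset, max_offset, size=(n, 2))
```
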
### B.2 CityLoc-C Construction

Data source. CityLoc-C is derived from SensatUrban [[20](https://arxiv.org/html/2603.09826#bib.bib64)] and CityRefer [[42](https://arxiv.org/html/2603.09826#bib.bib21)]; both datasets are released under the [MIT License](https://mit-license.org/). For convenience, we denote blocks in Birmingham and Cambridge as B# and C#, respectively. The dataset contains high-resolution photogrammetric point clouds with nearly three billion points collected across three UK cities, covering an area of 7.6 km². We use the validation and test splits from Birmingham and Cambridge, selecting 4 blocks in Birmingham (B0, B5, B6, B12) and 7 blocks in Cambridge (C2, C3, C8, C10, C14, C21, C26) for cross-city generalization analysis.

Map center sampling. CityRefer does not provide the sensor trajectory of the data acquisition process. Therefore, for map center sampling, we perform grid sampling on each point cloud block and retain maps that contain more than six objects. For query location sampling, we follow the same strategy used for CityLoc-K, as described in [Sec. B.1](https://arxiv.org/html/2603.09826#S2.SS1a).

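Grid sampling over a block can be sketched as below. The 30 m grid spacing is a hypothetical value (the paper does not state the resolution), and the object-count filter is applied afterwards per candidate map.

```python
import numpy as np

def grid_sample_centers(points_xy, cell=30.0):
    """Sample candidate map centers on a regular grid spanning the block's
    horizontal bounding box. `cell` is the grid spacing in metres
    (illustrative value; not specified in the paper)."""
    mins, maxs = points_xy.min(axis=0), points_xy.max(axis=0)
    xs = np.arange(mins[0], maxs[0], cell)
    ys = np.arange(mins[1], maxs[1], cell)
    return np.array([(x, y) for x in xs for y in ys])
```

Each candidate center would then be kept only if the map cropped around it contains more than six objects.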
### B.3 Benchmark Statistics

Dataset-level statistics. CityLoc-K contains 2,767, 300, and 1,027 maps, together with 16,113, 1,772, and 6,109 queries for training, validation, and testing, respectively. These maps span areas of 1.66, 0.19, and 0.60 km². CityLoc-C includes 875 maps and 4,487 queries, covering 0.90 km² for cross-domain testing. The reported map area is computed as the union of the minimum bounding boxes of all maps.


|
| 307 |
+
|
| 308 |
+
Figure 7: Semantic instance distribution in CityLoc-K.
|
| 309 |
+
|
| 310 |
+

|
| 311 |
+
|
| 312 |
+
Figure 8: Semantic instance distribution in CityLoc-C.
|
| 313 |
+
|
| 314 |
+
Distribution of semantic instances. For all semantic instances contained in the point cloud maps, we compute their distribution, as shown in [Fig. 7](https://arxiv.org/html/2603.09826#S2.F7) and [Fig. 8](https://arxiv.org/html/2603.09826#S2.F8). Here, “object” and “stuff” follow the definitions in KITTI-360 [[32](https://arxiv.org/html/2603.09826#bib.bib9)], referring to countable and uncountable instances, respectively, and are rendered in orange and blue.

C Implementation Details
------------------------

### C.1 Textual Query Generation

Components. Each textual query 𝒯 contains N_t hints. Each hint h describes an object by specifying its color, semantic category, and its directional relation with respect to the query location, following the template: “The pose is <direction> of <color> <semantic>.”

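Filling the template is then a simple string operation; the helper names below are illustrative, not from the paper's code.

```python
def make_hint(direction, color, semantic):
    """Instantiate the hint template for one described object."""
    return f"The pose is {direction} of {color} {semantic}."

def make_query(hints):
    """A textual query concatenates its N_t hints."""
    return " ".join(hints)
```
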
Direction computation. For each object o referenced in the textual description, we first project all of its 3D points onto the horizontal plane and identify the closest point 𝐩_close ∈ ℝ² to the query position ξ. If the distance between 𝐩_close and ξ is below δ = 2.5 m, the direction is assigned as “on-top”. Otherwise, we determine the relative orientation by comparing the horizontal offsets dx and dy between ξ and the centroid of o, as summarized in [Eq. 3](https://arxiv.org/html/2603.09826#S3.E3).

$$
\text{Direction}(\xi,o)=
\begin{cases}
\text{``on-top''}, & \|\xi-\mathbf{p}_{\text{close}}\|_{2}<\delta,\\
\text{``east''}, & |dx|\geq|dy| \text{ and } dx\geq 0,\\
\text{``west''}, & |dx|\geq|dy| \text{ and } dx<0,\\
\text{``north''}, & |dx|<|dy| \text{ and } dy\geq 0,\\
\text{``south''}, & |dx|<|dy| \text{ and } dy<0.
\end{cases}
\tag{3}
$$

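Eq. (3) can be implemented in a few lines. The sign convention below (dx, dy = ξ − centroid, with +x = East and +y = North, so a positive dx reads "the pose is east of the object") is my assumption; the paper does not state it explicitly.

```python
import numpy as np

def direction(query_xy, obj_points_xy, delta=2.5):
    """Assign the directional relation of Eq. (3) between a query position
    and an object's points projected onto the horizontal plane."""
    query_xy = np.asarray(query_xy, dtype=float)
    pts = np.asarray(obj_points_xy, dtype=float)
    # Closest projected object point to the query position.
    closest = pts[np.argmin(np.linalg.norm(pts - query_xy, axis=1))]
    if np.linalg.norm(query_xy - closest) < delta:
        return "on-top"
    # Offsets from the object centroid to the query (assumed convention).
    dx, dy = query_xy - pts.mean(axis=0)
    if abs(dx) >= abs(dy):
        return "east" if dx >= 0 else "west"
    return "north" if dy >= 0 else "south"
```
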
Color mapping. Each object in the map is already associated with an RGB color 𝐜̄ (as described in Eq. ([1](https://arxiv.org/html/2603.09826#S4.E1)) of the main paper). To obtain its textual color attribute, we map the object’s RGB value to the nearest entry of a predefined discrete color palette COLOR_NAMES = {dark-green, gray, gray-green, bright-gray, black, green, beige}, as in KITTI360Pose [[25](https://arxiv.org/html/2603.09826#bib.bib1)], using the corresponding template RGB centers 𝒞 = {𝐜₁, …, 𝐜_K} for nearest-neighbor assignment. The assigned textual label is obtained by

$$
\text{color}(o)=\texttt{COLOR\_NAMES}\Big[\arg\min_{k}\|\bar{\mathbf{c}}-\mathbf{c}_{k}\|\Big],
\tag{4}
$$

where k ∈ {1, …, K}.

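The nearest-neighbor assignment of Eq. (4) is a one-liner over the palette. The RGB centers below are illustrative placeholders, not the exact values used by KITTI360Pose.

```python
import numpy as np

COLOR_NAMES = ["dark-green", "gray", "gray-green", "bright-gray",
               "black", "green", "beige"]
# Hypothetical template RGB centers (one per palette entry).
COLOR_CENTERS = np.array([
    [ 30,  80,  40], [128, 128, 128], [110, 130, 110], [200, 200, 200],
    [  0,   0,   0], [  0, 160,   0], [220, 200, 170]], dtype=float)

def color_name(mean_rgb):
    """Eq. (4): return the palette name whose RGB center is closest
    to the object's mean color."""
    dists = np.linalg.norm(COLOR_CENTERS - np.asarray(mean_rgb, float), axis=1)
    return COLOR_NAMES[int(np.argmin(dists))]
```
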
Constructing large-scale, free-form textual descriptions from point cloud data is highly challenging and costly. Due to the complexity of real-world point cloud scenes, manually extracting scene information to compose detailed descriptions is prohibitively expensive. Some recent methods [[13](https://arxiv.org/html/2603.09826#bib.bib68), [61](https://arxiv.org/html/2603.09826#bib.bib66)] instead generate text by first extracting keywords and then prompting a large language model (LLM) such as GPT-4 [[1](https://arxiv.org/html/2603.09826#bib.bib67)]; however, this process is generally not free and still requires manual verification, making it unsuitable for scalable dataset construction.

For these reasons, we follow prior work [[25](https://arxiv.org/html/2603.09826#bib.bib1)] and adopt a rule-based template to encode the essential object-level cues for localization, including color, semantic category, and relative direction. This approach allows us to construct informative and scalable textual descriptions at minimal cost. It is fully free to generate, requires no LLMs, and avoids any manual post-verification, while still preserving the key information necessary for fine-grained localization.

Table 7: System prompt for VLM-Loc.
### C.2 System Prompt
Our system prompt consists of six components designed to guide the VLM in performing the T2P localization task, as detailed in Tab. [7](https://arxiv.org/html/2603.09826#S3.T7 "Table 7 ‣ C.1 Textual Query Generation ‣ C Implementation Details ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). The components are as follows:
* Role: specifies the localization task for the VLM and clearly defines the expected input structure, including the types of information provided, their format, and the order in which they are supplied.
* Coordinate system: provides the correspondence between the pixel coordinate system and the geospatial coordinate system, which is essential for enabling the model to interpret directional cues in the text and map them to the correct orientation on the map.
* Goal: guides the model in estimating the current pose step by step. The procedure is: 1) comprehend the input text description; 2) extract the textual cues by sequentially processing each mentioned object; 3) assess the visibility of described elements and form text-node matching pairs for valid entries; 4) estimate the target location based on the aggregated geographic information.
* Rules: specifies the constraints for the model's output, _e.g_., that distances should be measured in pixel coordinates. These rules ensure that the pose estimation adheres to a parsable format and avoids invalid results.
* Output format: defines the required output structure, mandating that the model presents its results for node assignment and pose estimation in a predefined format.
* Example: provides a concrete example that illustrates the predefined output format for the model to follow. Specifically, the output should first present the visibility and assignment status for each textual hint, followed by the estimated 2D pixel coordinates.
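Assuming a simple line-based reply of this shape (a hypothetical stand-in; the actual layout is fixed by the system prompt in Tab. 7), turning the VLM response into hint assignments and a 2D pixel estimate could look like:

```python
import re

# Sketch of parsing a structured VLM reply: per-hint visibility and
# node assignment lines, followed by the estimated pixel position.
# The line format below is an illustrative assumption.
def parse_response(text):
    hints = re.findall(r"hint (\d+): (visible|not visible)(?:, node (\d+))?",
                       text)
    assignments = {int(h): (v == "visible", int(n) if n else None)
                   for h, v, n in hints}
    m = re.search(r"position:\s*\((\d+),\s*(\d+)\)", text)
    pose = (int(m.group(1)), int(m.group(2))) if m else None
    return assignments, pose

reply = ("hint 1: visible, node 12\n"
         "hint 2: not visible\n"
         "position: (412, 87)")
print(parse_response(reply))
```

Constraining the reply to such a parsable layout is exactly why the Rules and Output format components exist: free-form answers could not be consumed programmatically.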
D Additional Experiments
------------------------
### D.1 Results on KITTI360Pose
To enable a fair comparison on KITTI360Pose[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization")], we follow its retrieval-then-localization protocol and adopt the same Top-1 retrieval results from CMMLoc[[58](https://arxiv.org/html/2603.09826#bib.bib4 "CMMLoc: advancing text-to-pointcloud localization with cauchy-mixture-model based framework")] for localization. We retrain VLM-Loc for localization on KITTI360Pose and evaluate it on the test set. Other methods perform localization using their pretrained models (no pretrained model is available for Text2Pos[[25](https://arxiv.org/html/2603.09826#bib.bib1 "Text2pos: text-to-point-cloud cross-modal localization")]). Results are reported in [Sec. D.1](https://arxiv.org/html/2603.09826#S4.SS1a "D.1 Results on KITTI360Pose ‣ D Additional Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). Under the same retrieval protocol, VLM-Loc achieves competitive localization performance compared to CMMLoc.
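Localization performance in this setting is typically summarized as recall at distance thresholds (a prediction counts as correct if its error falls below the threshold, e.g. 5 m); a minimal sketch with illustrative thresholds and toy errors:

```python
# Sketch of localization recall at distance thresholds. The threshold
# values and the toy error list are illustrative, not the paper's numbers.
def recall_at(errors, thresholds=(5.0, 10.0, 15.0)):
    """Fraction of samples whose localization error is below each threshold."""
    n = len(errors)
    return {t: sum(e < t for e in errors) / n for t in thresholds}

errors = [2.1, 7.8, 4.0, 16.5]  # per-sample localization errors in meters
print(recall_at(errors))
```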
Table 8: Localization on KITTI360Pose test set (11404 samples).
### D.2 Inference Analysis
Inference analysis is conducted on two RTX 4090 GPUs with a batch size of 1, and results are reported in [Tab. 9](https://arxiv.org/html/2603.09826#S4.T9 "In D.2 Inference Analysis ‣ D.1 Results on KITTI360Pose ‣ D Additional Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). The latency is acceptable for the T2P localization setting. Moreover, the inference cost can be significantly reduced via quantization, smaller backbones, and optimized deployment frameworks, making VLM-Loc more practical for real-world systems.
Table 9: Inference analysis of VLM-Loc on the CityLoc-K val set.
E Qualitative Results
---------------------
Additional qualitative results of our approach and baselines on the CityLoc-K and CityLoc-C splits are shown in Fig. [9](https://arxiv.org/html/2603.09826#S5.F9 "Figure 9 ‣ E Qualitative Results ‣ D.2 Inference Analysis ‣ D.1 Results on KITTI360Pose ‣ D Additional Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models") and Fig. [10](https://arxiv.org/html/2603.09826#S5.F10 "Figure 10 ‣ E Qualitative Results ‣ D.2 Inference Analysis ‣ D.1 Results on KITTI360Pose ‣ D Additional Experiments ‣ VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models"). The results demonstrate the superior performance of the proposed method over baselines across diverse scenes, validating its robustness and accuracy.

|
| 382 |
+
|
| 383 |
+
Figure 9: Qualitative results of VLM-Loc and baseline methods on the CityLoc-K. Each example visualizes the predicted and GT positions on colorized BEV maps rendered with semantic labels. The red circles ⚫ and black circles ⚫ denote the GT and predicted positions, respectively. The localization error is shown below each image, and green/red borders indicate localization error below/above 5 m.

|
| 386 |
+
|
| 387 |
+
Figure 10: Qualitative results of VLM-Loc and baseline methods on the CityLoc-C. Each example visualizes the predicted and GT positions on colorized BEV maps rendered with semantic labels. The red circles ⚫ and black circles ⚫ denote the GT and predicted positions, respectively. The localization error is shown below each image, and green/red borders indicate localization error below/above 5 m.