diff --git "a/2206/2206.08920.md" "b/2206/2206.08920.md" new file mode 100644--- /dev/null +++ "b/2206/2206.08920.md" @@ -0,0 +1,504 @@ +Title: VectorMapNet: End-to-end Vectorized HD Map Learning + +URL Source: https://arxiv.org/html/2206.08920 + +Markdown Content: +###### Abstract + +Autonomous driving systems require High-Definition (HD) semantic maps to navigate around urban roads. Existing solutions approach the semantic mapping problem by offline manual annotation, which suffers from serious scalability issues. Recent learning-based methods produce dense rasterized segmentation predictions to construct maps. However, these predictions do not include instance information of individual map elements and require heuristic post-processing to obtain vectorized maps. To tackle these challenges, we introduce an end-to-end vectorized HD map learning pipeline, termed VectorMapNet. VectorMapNet takes onboard sensor observations and predicts a sparse set of polylines in the bird’s-eye view. This pipeline can explicitly model the spatial relation between map elements and generate vectorized maps that are friendly to downstream autonomous driving tasks. Extensive experiments show that VectorMapNet achieve strong map learning performance on both nuScenes and Argoverse2 dataset, surpassing previous state-of-the-art methods by 14.2 mAP and 14.6mAP. Qualitatively, VectorMapNet is capable of generating comprehensive maps and capturing fine-grained details of road geometry. To the best of our knowledge, VectorMapNet is the first work designed towards end-to-end vectorized map learning from onboard observations. + +Map Learning, Autonomous Driving, Vectorization + +1 Introduction +-------------- + +Autonomous driving systems require an understanding of map elements on the road, including lanes, pedestrian crossing, and traffic signs, to navigate around the world. Such map elements are typically provided by pre-annotated High-Definition (HD) semantic maps in existing pipelines(Rong et al., [2020](https://arxiv.org/html/2206.08920#bib.bib47)). However, these methods face scalability issues due to their heavy reliance on human labor for annotating HD maps. Additionally, they necessitate precise localization of the ego-vehicle to derive local maps from the global one, a process that could introduce meter-level errors. + +In contrast, our focus lies in developing a learning-based approach for online HD semantic map learning. The aim is to use onboard sensors, including LiDARs and cameras, to estimate map elements on-the-fly. This methodology avoids the need for localization, allowing for prompt updates. Furthermore, learning-based methods can generate uncertainty or confidence indicators that downstream modules, such as motion forecasting and planning, can utilize to offset imperfect perception. These methods can leverage increasing data and model size, promptly reflect current conditions, and generalize from annotated maps to under-annotated or even non-annotated areas (please refer to Figure[6](https://arxiv.org/html/2206.08920#S4.F6 "Figure 6 ‣ 4.2 Qualitative Analysis ‣ 4 Experiments ‣ VectorMapNet: End-to-end Vectorized HD Map Learning")). 

Most HD semantic map learning methods (Li et al., [2021](https://arxiv.org/html/2206.08920#bib.bib25); Philion & Fidler, [2020](https://arxiv.org/html/2206.08920#bib.bib42); Roddick & Cipolla, [2020](https://arxiv.org/html/2206.08920#bib.bib46); Zhou & Krähenbühl, [2022](https://arxiv.org/html/2206.08920#bib.bib61)) consider the task as a semantic segmentation problem in bird's-eye view (BEV), which rasterizes map elements into pixels and assigns each pixel a class label. This formulation makes it straightforward to leverage fully convolutional networks. However, rasterized maps are not an ideal map representation for autonomous driving, for three reasons. First, rasterized maps lack the instance information necessary to distinguish map elements with the same class label but different semantics, e.g. a left boundary and a right boundary. Second, it is hard to enforce spatial consistency within the predicted rasterized maps, e.g. nearby pixels might have contradictory semantics or geometries. Third, 2D rasterized maps are incompatible with most autonomous driving systems, which consume instance-level 2D/3D vectorized maps for motion forecasting and planning.

To alleviate these issues and produce vectorized outputs, HDMapNet (Li et al., [2021](https://arxiv.org/html/2206.08920#bib.bib25)) generates semantic, instance, and directional maps and vectorizes these three maps with a hand-designed post-processing algorithm. However, HDMapNet still relies on rasterized map predictions, and its heuristic post-processing step restricts the model's scalability and performance.

![Figure 1](https://arxiv.org/html/x1.png)

Figure 1: An overview of VectorMapNet. Sensor data is encoded into BEV features in the same coordinate frame as the map elements. VectorMapNet detects the locations of map elements from BEV features by leveraging element queries. The vectorized HD map is built upon a sparse set of polylines that are generated from the detection results. Since our polylines are directional, we can infer the drivable area of a map.

In this paper, we propose VectorMapNet, an end-to-end vectorized HD map learning framework that does not involve dense semantic pixels or sophisticated post-processing steps. Instead, it represents map elements as a set of polylines closely related to downstream tasks, e.g. motion forecasting (Gao et al., [2020](https://arxiv.org/html/2206.08920#bib.bib16)). Therefore, the mapping problem boils down to predicting a sparse set of polylines from sensor observations. Specifically, we pose it as a detection problem and leverage recent set detection and sequence generation methods. First, VectorMapNet aggregates features generated from different modalities (e.g. camera images and LiDAR) into a common BEV feature space. Then, it detects map element locations based on learnable element queries and BEV features. Finally, we decode each element query into a polyline. An overview of VectorMapNet is shown in Figure [1](https://arxiv.org/html/2206.08920#S1.F1).

Our experiments show that VectorMapNet achieves state-of-the-art performance on the public nuScenes (Caesar et al., [2020](https://arxiv.org/html/2206.08920#bib.bib4)) and Argoverse2 (Wilson et al., [2021](https://arxiv.org/html/2206.08920#bib.bib54)) datasets, outperforming HDMapNet and another baseline by at least 14.2 mAP.
Qualitatively, VectorMapNet builds a more comprehensive map than previous works and can capture fine details, e.g. jagged boundaries. Furthermore, we feed our predicted vectorized HD map into a downstream motion forecasting module, demonstrating the predicted map's compatibility and effectiveness. To summarize, the contributions of this paper are as follows:

* We present VectorMapNet, an end-to-end mapping approach that eliminates the need for map rasterization and post-processing by predicting vectorized outputs directly from sensor observations.

* We utilize polylines, a flexible primitive with variable length and encoded order, to accommodate the heterogeneous nature of map elements. This formulation casts the construction of a polyline map as a detection problem, thereby introducing a new strategy to the mapping paradigm.

* We adapt detection transformer (DETR) models to locate deformable elements within a 3D space. Prevalent center-point-based feature extraction methods fall short when dealing with map elements of varying sizes and shapes; our method overcomes these limitations and delivers state-of-the-art performance in online semantic HD map learning.

![Figure 2](https://arxiv.org/html/x2.png)

Figure 2: The network architecture of VectorMapNet. The top row is the pipeline of VectorMapNet generating polylines from raw sensor inputs. The bottom row illustrates the detailed structures and inference procedures of the three primary components of VectorMapNet: the BEV feature extractor, the map element detector, and the polyline generator. Numbers in polyline embeddings indicate predicted vertex indexes.

2 Related Works
---------------

Semantic map learning. Annotating semantic maps has attracted plenty of interest thanks to autonomous driving. Recently, semantic map learning has been formulated as a semantic segmentation problem (Mattyus et al., [2015](https://arxiv.org/html/2206.08920#bib.bib34)) and solved using aerial images (Máttyus et al., [2016](https://arxiv.org/html/2206.08920#bib.bib35)), LiDAR points (Yang et al., [2018](https://arxiv.org/html/2206.08920#bib.bib57)), and HD panoramas (Wang et al., [2016](https://arxiv.org/html/2206.08920#bib.bib52)). Crowdsourcing tags (Wang et al., [2015](https://arxiv.org/html/2206.08920#bib.bib51)) are used to improve the performance of fine-grained segmentation. Instead of using offline data, recent works focus on understanding BEV semantics from onboard camera images (Lu et al., [2019](https://arxiv.org/html/2206.08920#bib.bib32); Yang et al., [2021](https://arxiv.org/html/2206.08920#bib.bib58)) and videos (Can et al., [2020](https://arxiv.org/html/2206.08920#bib.bib5)). Using only onboard sensors as model input is particularly challenging because the inputs and the target map lie in different coordinate systems. Recently, several cross-view learning approaches (Philion & Fidler, [2020](https://arxiv.org/html/2206.08920#bib.bib42); Pan et al., [2020](https://arxiv.org/html/2206.08920#bib.bib40); Li et al., [2021](https://arxiv.org/html/2206.08920#bib.bib25); Zhou & Krähenbühl, [2022](https://arxiv.org/html/2206.08920#bib.bib61); Wang et al., [2022](https://arxiv.org/html/2206.08920#bib.bib53); Chen et al., [2022](https://arxiv.org/html/2206.08920#bib.bib12)) leverage the geometric structure of scenes to mitigate the mismatch between sensor inputs and BEV representations.
Some methods (Casas et al., [2021](https://arxiv.org/html/2206.08920#bib.bib9); Sadat et al., [2020](https://arxiv.org/html/2206.08920#bib.bib48)) use pixel-level semantic maps to solve downstream tasks, but the entire downstream pipeline needs to be redesigned to accommodate these rasterized map inputs. Beyond pixel-level semantic maps, our work extracts a consistent vectorized map around the ego-vehicle from surrounding cameras or LiDARs, which suits existing downstream tasks like motion forecasting (Gao et al., [2020](https://arxiv.org/html/2206.08920#bib.bib16); Zhao et al., [2020](https://arxiv.org/html/2206.08920#bib.bib60); Liu et al., [2021](https://arxiv.org/html/2206.08920#bib.bib30)) without further post-processing.

Lane detection. Lane detection aims to precisely separate lane segments from road scenes. Most lane detection algorithms (Pan et al., [2018](https://arxiv.org/html/2206.08920#bib.bib41); Neven et al., [2018](https://arxiv.org/html/2206.08920#bib.bib39)) use a pixel-level segmentation technique combined with sophisticated post-processing. Another line of work leverages predefined proposals to achieve high accuracy and fast inference speed. These methods typically involve handcrafted elements such as vanishing points (Lee et al., [2017](https://arxiv.org/html/2206.08920#bib.bib22)), polynomial curves (Van Gansbeke et al., [2019](https://arxiv.org/html/2206.08920#bib.bib49)), line segments (Li et al., [2019](https://arxiv.org/html/2206.08920#bib.bib26)), and Bézier curves (Feng et al., [2022](https://arxiv.org/html/2206.08920#bib.bib14)) to model proposals. In addition to using perspective-view cameras as inputs, (Homayounfar et al., [2018](https://arxiv.org/html/2206.08920#bib.bib19)) and (Liang et al., [2019](https://arxiv.org/html/2206.08920#bib.bib27)) extract lane segments from overhead highway cameras and LiDAR imagery with a recurrent neural network. Instead of discovering the road's topology via boundary detection, STSU (Can et al., [2021](https://arxiv.org/html/2206.08920#bib.bib6)) and LaneGraphNet (Zürn et al., [2021](https://arxiv.org/html/2206.08920#bib.bib65)) construct lane graphs from centerline segments that are encoded by Bézier curves and line segments, respectively. To model complex geometries in the urban environment, we leverage polylines to represent all the map elements within the perceptual scope.

Geometric data modeling. Another line of work closely related to VectorMapNet is geometric data generation. These methods typically treat geometric elements as a sequence, such as primitive parts of furniture (Li et al., [2017](https://arxiv.org/html/2206.08920#bib.bib24); Mo et al., [2019](https://arxiv.org/html/2206.08920#bib.bib37)), states of sketch strokes (Ha & Eck, [2017](https://arxiv.org/html/2206.08920#bib.bib17)), vertices of $n$-gon meshes (Nash et al., [2020](https://arxiv.org/html/2206.08920#bib.bib38)), and parameters of SVG primitives (Carlier et al., [2020](https://arxiv.org/html/2206.08920#bib.bib8)). These methods generate such sequences with autoregressive models (e.g. Transformers). Since directly modeling the sequence is challenging for long-range centerline maps, HDMapGen (Mi et al., [2021](https://arxiv.org/html/2206.08920#bib.bib36)) views the map as a two-level hierarchy and produces a global and a local graph separately with a hierarchical graph RNN.
Instead of treating geometric elements as a sequence generation problem, LETR (Xu et al., [2021](https://arxiv.org/html/2206.08920#bib.bib55)) models line segments as a detection problem and tackles it with a query-based detector. Unlike the above approaches that focus on single-level geometric modeling, such as the scene level (e.g. line segments in an image) or the object level (e.g. furniture), VectorMapNet is designed to address both scene-level and object-level geometric modeling. Specifically, VectorMapNet constructs a map by modeling the global relationship between map elements in the scene and the local geometric details inside each element.

Learning vector representations from images. VectorMapNet bears some similarities with predicting vector graphics from raster images. Several recent works (Carlier et al., [2020](https://arxiv.org/html/2206.08920#bib.bib8); Reddy et al., [2021](https://arxiv.org/html/2206.08920#bib.bib45)) use different vector representations to generate vector images. (Ganin et al., [2021](https://arxiv.org/html/2206.08920#bib.bib15)) converts images to CAD, CanvasVAE (Yamaguchi, [2021](https://arxiv.org/html/2206.08920#bib.bib56)) learns vectorized canvas layouts from images, and (Liu et al., [2022](https://arxiv.org/html/2206.08920#bib.bib29)) generates vectorized stroke primitives from a raster line drawing. The instance segmentation community has also been concerned with the similar task of detecting object contours in vector form from an image. These methods (Acuna et al., [2018](https://arxiv.org/html/2206.08920#bib.bib1); Liang et al., [2020](https://arxiv.org/html/2206.08920#bib.bib28); Castrejon et al., [2017](https://arxiv.org/html/2206.08920#bib.bib10); Zorzi et al., [2022](https://arxiv.org/html/2206.08920#bib.bib64); Zhang & Wang, [2019](https://arxiv.org/html/2206.08920#bib.bib59)) initialize a contour for every object instance and then refine the vertex positions of the contour. However, the above methods are highly domain-dependent, and it is non-trivial to adapt them to our task, which requires detecting and generating map elements with different semantics and geometries in the 3D world.

3 VectorMapNet
--------------

Problem Formulation and Challenges. Similar to HDMapNet (Li et al., [2021](https://arxiv.org/html/2206.08920#bib.bib25)), our task is to vectorize map elements using data from the onboard sensors of an autonomous vehicle, such as RGB cameras and/or LiDARs. These map elements include but are not limited to: road boundaries (boundaries separating roads from sidewalks, typically irregularly-shaped curves of arbitrary length), lane dividers (boundaries dividing lanes on the road, usually straight lines), and pedestrian crossings (regions with white markings indicating legal pedestrian crossing points, typically represented as polygons). While the task is clearly defined, it comes with unique challenges. (1) The diverse geometric structures of map elements make it difficult to establish a unified geometric representation. (2) The inputs and outputs of the mapping problem are not perfectly aligned. They exist in different view spaces (e.g. camera data is in perspective view while map elements are in BEV), and not all map elements are fully visible from the input sensors. In some extreme cases, map elements may be completely occluded by vehicles.
(3) The task requires more than simple vectorization; it also necessitates scene understanding because of the complex geometrical and topological relationships between map elements. For instance, map elements may overlap, or two traffic cones connected with a wire might indicate a road boundary.

### 3.1 Method Overview

The challenges above underline the need for a primitive that effectively represents a variety of geometric structures and a model that is capable of capturing geometrical and topological relationships from various sensor inputs.

Polyline representation. The heterogeneous geometry of map elements calls for a unified vectorized representation. We opt to use $N$ polylines $\bm{\mathcal{V}}^{\mathrm{poly}}=\{\bm{V}_{1}^{\mathrm{poly}},\dots,\bm{V}_{N}^{\mathrm{poly}}\}$ as primitives to represent these map elements in a map $\mathcal{M}$. Each polyline $\bm{V}^{\mathrm{poly}}_{i}=\{\bm{v}_{i,n}\in\mathbb{R}^{2}\mid n=1,\dots,N_{v}\}$ is a collection of $N_{v}$ ordered vertices $\bm{v}_{i,n}$. In practice, we pre-process public autonomous driving semantic maps to obtain a unified polyline representation of map elements: polygons are represented as closed polylines, and curves are converted into polylines by applying the Ramer–Douglas–Peucker algorithm (Ramer, [1972](https://arxiv.org/html/2206.08920#bib.bib44)).

Using polylines to represent map elements has three main advantages: (1) HD maps are typically composed of a mixture of different geometries, such as points, lines, curves, and polygons. Polylines are a flexible primitive that can represent these geometric elements effectively. (2) The order of polyline vertices is a natural way to encode the direction of map elements, which is vital to driving. (3) The polyline representation is widely used by downstream autonomous driving modules, such as motion forecasting (Gao et al., [2020](https://arxiv.org/html/2206.08920#bib.bib16)).
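
A minimal sketch of the pre-processing described above, assuming map elements arrive as vertex arrays; the helper names and the tolerance value are illustrative, not the authors' code:

```python
import numpy as np

def rdp(points: np.ndarray, eps: float) -> np.ndarray:
    """Ramer-Douglas-Peucker simplification of an (N, 2) vertex array."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord) + 1e-12
    # Perpendicular distance of every vertex to the chord start-end.
    dists = np.abs(chord[0] * (points[:, 1] - start[1])
                   - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        left = rdp(points[: idx + 1], eps)
        right = rdp(points[idx:], eps)
        return np.concatenate([left[:-1], right])
    return np.stack([start, end])

def map_element_to_polyline(vertices, is_polygon: bool, eps: float = 0.1):
    """Convert a raw map element into the unified polyline primitive."""
    pts = np.asarray(vertices, dtype=np.float64)
    if is_polygon and not np.allclose(pts[0], pts[-1]):
        pts = np.vstack([pts, pts[:1]])  # polygons become closed polylines
    return rdp(pts, eps)
```
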

VectorMapNet. We introduce VectorMapNet, an end-to-end model designed to represent a map $\mathcal{M}$ with a sparse set of polylines $\bm{\mathcal{V}}^{\mathrm{poly}}$, thus formulating the task as a sparse set detection problem. In our approach, we convert sensor data into a canonical bird's-eye view (BEV) representation $\bm{\mathcal{F}}_{\mathrm{BEV}}$ and model polylines on top of it. Given the complexity and diversity of map elements' structures, locations, and relationships, we divide the task into three components: (1) a BEV feature extractor (§[3.2](https://arxiv.org/html/2206.08920#S3.SS2)) that lifts inputs from various sensor modalities into a canonical feature space; (2) a map element detector (§[3.3](https://arxiv.org/html/2206.08920#S3.SS3)) that locates and classifies all map elements by predicting element keypoints $\bm{\mathcal{A}}=\{\bm{A}_{i}\in\mathbb{R}^{k\times 2}\mid i=1,\dots,N\}$ and their class labels $\bm{\mathcal{L}}=\{l_{i}\in\mathbb{Z}\mid i=1,\dots,N\}$ (the element keypoint representation $\bm{\mathcal{A}}$ is defined in §[3.3](https://arxiv.org/html/2206.08920#S3.SS3)); and (3) a polyline generator (§[3.4](https://arxiv.org/html/2206.08920#S3.SS4)) that produces a sequence of ordered polyline vertices describing the local geometry of each detected map element $(\bm{A}_{i},l_{i})$. An overview of the three components is shown in Figure [2](https://arxiv.org/html/2206.08920#S1.F2).

![Figure 3](https://arxiv.org/html/x3.png)

Figure 3: Three different keypoint representations: Bounding Box ($k=2$), SME ($k=3$), and Extreme Points ($k=4$), where $k$ is the number of keypoints of each representation, as defined in §[3](https://arxiv.org/html/2206.08920#S3). The solid arrow indicates the direction of the example polyline, and the dashed arrows indicate the vertex order of the keypoint representations.
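
As an illustration of these keypoint representations (not the authors' code), the sketch below derives the bounding-box ($k=2$) and extreme-point ($k=4$) keypoints of Figure 3 from a polyline; the vertex ordering shown is one possible convention:

```python
import numpy as np

def bbox_keypoints(polyline: np.ndarray) -> np.ndarray:
    """k=2 keypoints: min/max corners of the axis-aligned bounding box."""
    return np.stack([polyline.min(axis=0), polyline.max(axis=0)])

def extreme_keypoints(polyline: np.ndarray) -> np.ndarray:
    """k=4 keypoints: left-most, top-most, right-most, bottom-most vertices."""
    return np.stack([
        polyline[polyline[:, 0].argmin()],
        polyline[polyline[:, 1].argmax()],
        polyline[polyline[:, 0].argmax()],
        polyline[polyline[:, 1].argmin()],
    ])

# Example: an L-shaped boundary polyline.
poly = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0]])
print(bbox_keypoints(poly))     # [[0. 0.] [4. 2.]]
print(extreme_keypoints(poly))  # the four extreme vertices
```
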

### 3.2 BEV Feature Extractor

The objective of the BEV feature extractor is to lift inputs from various modalities into a canonical feature space and to aggregate and align these features into a canonical representation termed BEV features $\bm{\mathcal{F}}_{\mathrm{BEV}}\in\mathbb{R}^{W\times H\times(C_{1}+C_{2})}$, where $W$ and $H$ are the width and height of the BEV feature map, and $C_{1}$ and $C_{2}$ are the output channels of the BEV features extracted from the two common modalities: surrounding camera images $\mathcal{I}$ and LiDAR points $\mathcal{P}$.

Camera branch. We use a ResNet to extract features from images, followed by a feature transformation module from image space to BEV space. VectorMapNet does not rely on a particular feature transformation approach; we opt for a simple but popular variant of IPM, which produces BEV features $\bm{\mathcal{F}}_{\mathrm{BEV}}^{\mathcal{I}}\in\mathbb{R}^{W\times H\times C_{1}}$. The detailed structure of the image extractor can be found in Appendix [C.3](https://arxiv.org/html/2206.08920#A3.SS3).

LiDAR branch. For LiDAR data $\mathcal{P}$, we use a variant of PointPillars (Lang et al., [2019](https://arxiv.org/html/2206.08920#bib.bib21)) with dynamic voxelization (Zhou et al., [2020](https://arxiv.org/html/2206.08920#bib.bib62)), which divides the 3D space into multiple pillars and uses pillar-wise point clouds to learn pillar-wise feature maps. We denote this feature map in BEV as $\bm{\mathcal{F}}_{\mathrm{BEV}}^{\mathcal{P}}\in\mathbb{R}^{W\times H\times C_{2}}$.
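
A hedged sketch of the pillar-style BEV rasterization in the LiDAR branch; the perception range, grid size, and mean pooling are illustrative choices, and `point_feats` stands for the output of the per-point encoder (e.g. a single PointNet layer):

```python
import torch

def pillarize(points, point_feats, x_range=(-30.0, 30.0), y_range=(-15.0, 15.0),
              W=200, H=100):
    """Scatter per-point features into a (C, H, W) BEV grid (dynamic voxelization).

    points:      (P, 3) LiDAR points in the ego frame
    point_feats: (P, C) features from a per-point encoder
    """
    C = point_feats.shape[1]
    # Map each point to a pillar (BEV cell) index.
    ix = ((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * W).long().clamp(0, W - 1)
    iy = ((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * H).long().clamp(0, H - 1)
    flat = iy * W + ix                                                      # (P,)
    # Mean-pool all points that fall into the same pillar.
    bev_sum = point_feats.new_zeros(H * W, C).index_add_(0, flat, point_feats)
    counts = point_feats.new_zeros(H * W).index_add_(
        0, flat, torch.ones_like(flat, dtype=point_feats.dtype))
    bev = bev_sum / counts.clamp(min=1).unsqueeze(-1)
    return bev.view(H, W, C).permute(2, 0, 1)      # F_BEV^P, shape (C, H, W)
```
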

For sensor fusion, we obtain the BEV features $\bm{\mathcal{F}}_{\mathrm{BEV}}\in\mathbb{R}^{W\times H\times(C_{1}+C_{2})}$ by concatenating $\bm{\mathcal{F}}_{\mathrm{BEV}}^{\mathcal{I}}$ and $\bm{\mathcal{F}}_{\mathrm{BEV}}^{\mathcal{P}}$, and then processing the concatenated result with a two-layer convolutional network. An overview of the BEV feature extractor is shown at the bottom-left of Figure [2](https://arxiv.org/html/2206.08920#S1.F2).
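
A minimal PyTorch sketch of this fusion step; the paper specifies only a two-layer convolutional network, so the channel widths and kernel sizes here are assumptions:

```python
import torch
import torch.nn as nn

class BEVFusion(nn.Module):
    """Concatenate camera and LiDAR BEV features, then fuse with two conv layers."""
    def __init__(self, c_img=64, c_lidar=64, c_out=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_img + c_lidar, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
        )

    def forward(self, f_bev_img, f_bev_lidar):
        # f_bev_img: (B, C1, H, W), f_bev_lidar: (B, C2, H, W)
        return self.fuse(torch.cat([f_bev_img, f_bev_lidar], dim=1))
```
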

### 3.3 Map Element Detector

After extracting the bird's-eye view (BEV) features, VectorMapNet has to identify and abstractly represent map elements using these features. We employ a hierarchical representation for this purpose, specifically element queries and keypoint queries, which enables us to model the non-local shape of map elements effectively. We leverage a variant of the transformer set-prediction detector (Carion et al., [2020](https://arxiv.org/html/2206.08920#bib.bib7)) to achieve this goal, as it is a robust detector that eliminates the need for extra post-processing. Specifically, the detector represents map elements' locations and categories by predicting their element keypoints $\bm{\mathcal{A}}$ and class labels $\bm{\mathcal{L}}$ from the BEV features $\bm{\mathcal{F}}_{\mathrm{BEV}}$.

Element queries. The detector takes learnable element queries $\{\bm{q}_{i}^{\mathrm{elem}}\in\mathbb{R}^{k\times d}\mid i=1,\dots,N_{\mathrm{max}}\}$ as its inputs, where $d$ is the hidden embedding size and $N_{\mathrm{max}}$ is a preset constant that is much greater than the number of map elements $N$ in the scene. The $i$-th element query $\bm{q}_{i}^{\mathrm{elem}}$ is composed of $k$ element keypoint embeddings $\bm{q}^{\mathrm{kp}}_{i,j}$: $\bm{q}^{\mathrm{elem}}_{i}=\{\bm{q}^{\mathrm{kp}}_{i,j}\in\mathbb{R}^{d}\mid j=1,\dots,k\}$. Element queries are similar to the object queries used in the Detection Transformer (DETR) (Carion et al., [2020](https://arxiv.org/html/2206.08920#bib.bib7)), where a query represents an object; in our case, an element query represents a map element.

Keypoint representations. In object detection, bounding boxes are used to abstract object shapes. Here we use $k$ element keypoint locations $\bm{A}_{i}=\{\bm{a}_{i,j}\in\mathbb{R}^{2}\mid j=1,\dots,k\}$ (please refer to Figure [3](https://arxiv.org/html/2206.08920#S3.F3)) to represent the outline of a map element. However, defining keypoints for map elements is not straightforward due to their diversity, so we conduct an ablation study on different choices in §[4.3](https://arxiv.org/html/2206.08920#S4.SS3). Note that element keypoints are different from polyline vertices: element keypoints are intermediate representations of VectorMapNet that are passed to the polyline generator (§[3.4](https://arxiv.org/html/2206.08920#S3.SS4)) for conditional prediction, and the number of keypoints for each type of polyline is fixed and determined by its definition. Polylines are our output representation.

Architecture. The overall architecture of the map element detector consists of a transformer decoder (Vaswani et al., [2017](https://arxiv.org/html/2206.08920#bib.bib50)) and a prediction head, as shown at the bottom-middle of Figure [2](https://arxiv.org/html/2206.08920#S1.F2). The decoder transforms the element queries using multi-head self-/cross-attention mechanisms. In particular, we use the deformable attention module (Zhu et al., [2020](https://arxiv.org/html/2206.08920#bib.bib63)) as the decoder's cross-attention module, where each element query has a 2D location grounding.
It improves interpretability and accelerates training convergence (Li et al., [2022](https://arxiv.org/html/2206.08920#bib.bib23)).

The prediction head has two MLPs, which decode element queries into element keypoints $\bm{a}_{i,j}=\mathrm{MLP_{kp}}(\bm{q}^{\mathrm{kp}}_{i,j})$ and their class labels $l_{i}=\mathrm{MLP_{cls}}([\bm{q}^{\mathrm{kp}}_{i,1},\dots,\bm{q}^{\mathrm{kp}}_{i,k}])$, respectively, where $[\cdot]$ is a concatenation operator. Each keypoint embedding $\bm{q}^{\mathrm{kp}}_{i,j}$ in the map element detector consists of two learnable parts. The first part is a keypoint position embedding $\{\bm{e}^{\mathrm{kp}}_{j}\in\mathbb{R}^{d}\mid j=1,\dots,k\}$, indicating which keypoint position within an element the embedding belongs to. The second part is an element embedding $\{\bm{e}^{\mathrm{p}}_{i}\in\mathbb{R}^{d}\mid i=1,\dots,N_{\mathrm{max}}\}$, encoding which map element the keypoint belongs to. The keypoint embedding $\bm{q}^{\mathrm{kp}}_{i,j}$ is the sum of these two embeddings, $\bm{e}^{\mathrm{p}}_{i}+\bm{e}^{\mathrm{kp}}_{j}$.
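
A schematic PyTorch sketch of the query construction and prediction head described above; the transformer decoder is omitted, and all sizes and MLP depths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ElementDetectorHead(nn.Module):
    """Element/keypoint query embeddings plus the two-MLP prediction head."""
    def __init__(self, n_max=100, k=2, d=256, n_classes=3):
        super().__init__()
        self.e_elem = nn.Embedding(n_max, d)   # e^p_i: which map element
        self.e_kp = nn.Embedding(k, d)         # e^kp_j: which keypoint slot
        self.mlp_kp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 2))
        self.mlp_cls = nn.Sequential(nn.Linear(k * d, d), nn.ReLU(),
                                     nn.Linear(d, n_classes + 1))  # incl. "no object"

    def queries(self):
        # q^kp_{i,j} = e^p_i + e^kp_j, shape (N_max, k, d)
        return self.e_elem.weight[:, None, :] + self.e_kp.weight[None, :, :]

    def forward(self, decoded):                     # decoded: (B, N_max, k, d)
        keypoints = self.mlp_kp(decoded)            # (B, N_max, k, 2) -> a_{i,j}
        logits = self.mlp_cls(decoded.flatten(2))   # (B, N_max, n_classes + 1)
        return keypoints, logits
```
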

![Figure 4](https://arxiv.org/html/x4.png)

Figure 4: Qualitative results generated by VectorMapNet and baselines. We use camera images as inputs for these comparisons. The areas enclosed by red and blue ellipses show that VectorMapNet can preserve sharp corners, and that polyline representations prevent VectorMapNet from generating ambiguous self-looping results. The drivable area is inferred from disjoint boundaries.

### 3.4 Polyline Generator

Given the approximate position, shape, and category of map elements identified by the map element detector, the polyline generator focuses on the detailed geometry of the HD map, which entails generating variable-length polyline vertices and their order. Accurate modeling of vertex relationships is crucial: for instance, a white line between two vertices often signifies a line connection in the vectorized map. The polyline generator operates as a discrete distribution $p(\bm{V}_{i}^{\mathrm{poly}}\mid\bm{A}_{i},l_{i},\bm{\mathcal{F}}_{\mathrm{BEV}})$ over the vertices of each polyline, conditioned on the initial layout (i.e., element keypoints $\bm{A}_{i}$ and class label $l_{i}$) and the BEV features. To estimate this distribution, we decompose the joint distribution over each polyline $\bm{V}_{i}^{\mathrm{poly}}$ into a product of conditional vertex coordinate distributions. In particular, we transform each polyline $\bm{V}_{i}^{\mathrm{poly}}=\{\bm{v}_{i,n}\in\mathbb{R}^{2}\mid n=1,\dots,N_{v}\}$ into a flattened sequence $\{v_{i,n}^{f}\in\mathbb{R}\mid n=1,\dots,2N_{v}\}$ by concatenating the coordinate values of the polyline vertices and appending an End of Sequence token ($EOS$) to each sequence, and the target distribution becomes

$$p(\bm{V}_{i}^{\mathrm{poly}}\mid\bm{A}_{i},l_{i},\bm{\mathcal{F}}_{\mathrm{BEV}};\bm{\theta})=\prod_{n=1}^{2N_{v}}p(v_{i,n}^{f}\mid v_{i,<n}^{f},\bm{A}_{i},l_{i},\bm{\mathcal{F}}_{\mathrm{BEV}};\bm{\theta}),$$

where $\bm{\theta}$ denotes the model parameters.
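
A minimal sketch of how such an autoregressive factorization can be decoded at inference time; the `generator` call signature, the BOS/EOS tokens, and the coordinate tokenization are assumptions for illustration, not the released implementation:

```python
import torch

@torch.no_grad()
def decode_polyline(generator, element_query, bev_feats, bos_id, eos_id, max_len=80):
    """Greedily decode one flattened polyline sequence v^f_{i,1:2Nv} token by token.

    `generator(seq, element_query, bev_feats)` is assumed to return logits of
    shape (1, len(seq), vocab) over discretized coordinate bins plus EOS.
    """
    seq = torch.tensor([[bos_id]], dtype=torch.long)
    out = []
    for _ in range(max_len):
        logits = generator(seq, element_query, bev_feats)
        nxt = int(logits[0, -1].argmax())   # p(v^f_n | v^f_{<n}, A_i, l_i, F_BEV)
        if nxt == eos_id:
            break
        out.append(nxt)
        seq = torch.cat([seq, torch.tensor([[nxt]])], dim=1)
    # Un-flatten: even positions are x-bins, odd positions are y-bins.
    xy = torch.tensor(out[: len(out) // 2 * 2], dtype=torch.float).view(-1, 2)
    return xy * 0.3   # bin index -> meters, assuming the 0.3 m grid of Appendix C.1
```
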

Input: polylines $(u_{1},\dots,u_{p})$ and $(v_{1},\dots,v_{q})$ with pairwise distance $d(\cdot,\cdot)$; $ca$ is a memo array.

    function c(i, j):
        if ca(i, j) > -1 then
            return ca(i, j)
        else if i = 1 and j = 1 then
            ca(i, j) := d(u_1, v_1)
        else if i > 1 and j = 1 then
            ca(i, j) := max{ c(i-1, 1), d(u_i, v_1) }
        else if i = 1 and j > 1 then
            ca(i, j) := max{ c(1, j-1), d(u_1, v_j) }
        else if i > 1 and j > 1 then
            ca(i, j) := max{ min( c(i-1, j), c(i-1, j-1), c(i, j-1) ), d(u_i, v_j) }
        else
            ca(i, j) := infinity
        end if
        return ca(i, j)
    end

    begin
        for i = 1 to p do
            for j = 1 to q do
                ca(i, j) := -1.0
            end for
        end for
        return c(p, q)
    end

Algorithm 1: The algorithm of the discrete Fréchet distance

![Figure 7](https://arxiv.org/html/x7.png)

Figure 7: When the ego car's cameras are occluded by nearby vehicles, VectorMapNet (Camera) cannot perceive the surrounding map. With the depth cue from LiDAR, VectorMapNet (Fusion) generates a more plausible result than its camera counterpart.

![Figure 8](https://arxiv.org/html/x8.png)

Figure 8: The blind area of onboard cameras may cause our model to miss map elements close to the ego vehicle. In contrast, the LiDAR data in the right-most column clearly senses some obstacles near the ego vehicle. With these cues, our fusion model detects the lane boundary missed by our camera-only model.

![Figure 9](https://arxiv.org/html/x9.png)

Figure 9: Qualitative results of VectorMapNet in bad weather conditions. VectorMapNet (Camera) falsely detects the puddles near the intersection as a lane boundary. The fusion result shows that this false detection can be resolved by incorporating depth information.

Appendix B More Qualitative Results of VectorMapNet
---------------------------------------------------

### B.1 Visualization results of VectorMapNet (Fusion)

We visualize three cases of VectorMapNet (Fusion) and VectorMapNet (Camera) to demonstrate that LiDAR information can complement visual information and produce more robust map predictions. In the first case, the camera view is constrained by nearby vehicles, so it cannot provide helpful surrounding information.
The LiDAR sensor sees past the nearby vehicle and provides cues that let VectorMapNet generate a better result than its camera-only counterpart (see Figure [7](https://arxiv.org/html/2206.08920#A1.F7)). In the second case (see Figure [8](https://arxiv.org/html/2206.08920#A1.F8)), the camera-only model cannot detect the nearby parking gate because it is located in the blind zone of the cameras. In contrast, LiDAR provides depth information and helps VectorMapNet (Fusion) detect the missing lane boundary. LiDAR points can also prevent the model from falsely detecting map elements in bad weather conditions. As shown in Figure [9](https://arxiv.org/html/2206.08920#A1.F9), there are puddles near the intersection. Under light reflection, these puddles visually resemble a lane boundary, but the LiDAR data shows that there is no raised structure there. Unlike the camera-only model, the fusion model uses this depth information to avoid generating a nonexistent lane boundary.

Appendix C Implementation details
---------------------------------

### C.1 Overall Architectures

The BEV feature extractor outputs a feature map of size $(200, 100, 128)$. It uses ResNet50 (He et al., [2016](https://arxiv.org/html/2206.08920#bib.bib18)) as the shared CNN backbone. We use a single-layer PointNet (Qi et al., [2017](https://arxiv.org/html/2206.08920#bib.bib43)) with 64-dimensional outputs as the LiDAR backbone to aggregate LiDAR points into pillars. We set the number of element queries $N_{\mathrm{max}}$ in the map element detector to 100. The transformer decoders used in the map element detector and the polyline generator both have 6 decoder layers, and their hidden embedding size is 256. For the output space of the polyline generator, we divide the map space (see §[3.4](https://arxiv.org/html/2206.08920#S3.SS4)) evenly into $200\times 100$ rectangular grids, and each grid cell has a size of $0.3\,m\times 0.3\,m$.

### C.2 Training settings

We train all our models on 8 GTX3090 GPUs for 110 epochs with a total batch size of 32. We use the AdamW (Loshchilov & Hutter, [2018](https://arxiv.org/html/2206.08920#bib.bib31)) optimizer with a gradient clipping norm of 5.0. For the learning rate schedule, we use a step schedule that multiplies the learning rate by 0.1 at epoch 100, with a linear warm-up period over the first 5000 steps. The dropout rate for all modules is 0.2, following the transformer's settings (Vaswani et al., [2017](https://arxiv.org/html/2206.08920#bib.bib50)). Data augmentation is only applied while training the polyline generator; specifically, two i.i.d. Gaussian noises are added to each input vertex's $x$ and $y$ coordinates with a probability of 0.3.
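
As an illustration of these training settings, a hedged PyTorch sketch; the base learning rate, noise scale, and exact warm-up handling are assumptions, as the paper does not state them:

```python
import torch

def build_optimizer_and_scheduler(model, steps_per_epoch, lr=1e-4, warmup_steps=5000):
    """AdamW + linear warm-up + step decay (x0.1 at epoch 100); lr is an assumed value."""
    optim = torch.optim.AdamW(model.parameters(), lr=lr)

    def lr_lambda(step):
        if step < warmup_steps:                           # linear warm-up
            return step / warmup_steps
        return 0.1 if step >= 100 * steps_per_epoch else 1.0

    sched = torch.optim.lr_scheduler.LambdaLR(optim, lr_lambda)
    # In the training loop, gradients are clipped:
    #   torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
    return optim, sched

def jitter_vertices(vertices, sigma=0.1, p=0.3):
    """Polyline-generator augmentation: add i.i.d. Gaussian noise to each
    vertex coordinate with probability p (sigma is an assumed value)."""
    mask = (torch.rand_like(vertices) < p).float()
    return vertices + mask * torch.randn_like(vertices) * sigma
```
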

### C.3 Model Details

Camera Branch of Map Feature Extractor. For image data $\mathcal{I}$, we use a shared CNN backbone to obtain each camera's image features in camera space, then use the Inverse Perspective Mapping (IPM) (Mallot et al., [1991](https://arxiv.org/html/2206.08920#bib.bib33)) technique to transform these features into BEV space. Since depth information is missing in camera images, we follow the common approach that assumes the ground is mostly planar and transforms the images into BEV via a homography. Without knowing the exact height of the ground plane, this homography is not an exact transformation. To alleviate this issue, we transform the image features onto four BEV planes at different heights (we use $-1\,m, 0\,m, 1\,m, 2\,m$ in practice). The camera BEV features $\bm{\mathcal{F}}_{\mathrm{BEV}}^{\mathcal{I}}\in\mathbb{R}^{W\times H\times C_{1}}$ are the concatenation of these feature maps.
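
A hedged sketch of this multi-height IPM step, assuming known camera intrinsics and extrinsics scaled to the feature resolution; the helper names and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def ipm_features(img_feats, K, T_cam_from_ego, bev_xy, heights=(-1.0, 0.0, 1.0, 2.0)):
    """Warp image features onto several ground planes and concatenate them.

    img_feats:      (C, h, w) features of one camera
    K:              (3, 3) camera intrinsics at the feature resolution
    T_cam_from_ego: (4, 4) extrinsics mapping ego-frame points into the camera frame
    bev_xy:         (H, W, 2) ego-frame x/y coordinates of every BEV cell
    """
    C, h, w = img_feats.shape
    planes = []
    for z in heights:
        pts = torch.cat([bev_xy, torch.full_like(bev_xy[..., :1], z),
                         torch.ones_like(bev_xy[..., :1])], dim=-1)        # (H, W, 4)
        cam = (T_cam_from_ego @ pts.reshape(-1, 4).T)[:3]                  # (3, H*W)
        uvw = K @ cam
        uv = uvw[:2] / uvw[2:].clamp(min=1e-5)                             # pixel coords
        # Normalize to [-1, 1] for grid_sample; cells behind the camera are invalidated.
        grid = torch.stack([uv[0] / (w - 1) * 2 - 1, uv[1] / (h - 1) * 2 - 1], dim=-1)
        grid = torch.where(uvw[2:].T > 0, grid, torch.full_like(grid, -2.0))
        grid = grid.reshape(1, *bev_xy.shape[:2], 2)
        planes.append(F.grid_sample(img_feats[None], grid, align_corners=True)[0])
    return torch.cat(planes, dim=0)   # (4*C, H, W): one camera's contribution to F_BEV^I
```
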

### C.4 Loss

Loss settings. The loss function of the map element detector is a linear combination of three parts: a negative log-likelihood for element keypoint classification, a smooth L1 loss, and an IoU loss for keypoint regression. The coefficients of these loss components are 2, 0.1, and 1, respectively. The matching cost of the map element detector uses the same combination. The loss function of the polyline generator is a negative log-likelihood. We train VectorMapNet by simply summing up these losses.

Map element detector loss. To compute the loss, we first establish a correspondence between the ground truth $(\bm{\mathcal{A}}, \bm{\mathcal{L}})$ and the prediction $(\hat{\bm{\mathcal{A}}}, \hat{\bm{\mathcal{L}}})$. Assuming the number of ground-truth map elements $N$ is smaller than the number of predictions $N_{max}$, we pad the set of ground truths $(\bm{\mathcal{A}}, \bm{\mathcal{L}})$ with $\emptyset$ (no object) up to $N_{max}$. The correspondence $\sigma$ is the permutation of $N_{max}$ elements $\sigma\in\mathcal{P}$ with the lowest cost:

$$\sigma^{\ast}=\underset{\sigma\in\mathcal{P}}{\arg\min}\sum_{j=1}^{N_{max}}-\mathbb{1}_{(l_{j}\neq\emptyset)}\,\hat{p}_{\sigma(j)}(l_{j})+\mathbb{1}_{(l_{j}\neq\emptyset)}\,\bm{\mathcal{L}}_{keypoint}(a_{j},\hat{a}_{\sigma(j)}),$$

where $\hat{p}_{\sigma(j)}(l_{j})$ is the probability of class label $l_{j}$ for the prediction with index $\sigma(j)$, and the keypoint loss $\bm{\mathcal{L}}_{keypoint}$ is the sum of a smooth L1 loss and an IoU loss.
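
A compact sketch of this bipartite matching step using SciPy's Hungarian solver; the class and smooth-L1 terms follow the coefficients above, while the IoU term is omitted for brevity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_elements(pred_probs, pred_kps, gt_labels, gt_kps, w_cls=2.0, w_l1=0.1):
    """Hungarian matching between N_max predictions and N ground-truth elements.

    pred_probs: (N_max, n_classes) softmax class probabilities
    pred_kps:   (N_max, k, 2) predicted element keypoints
    gt_labels:  (N,) class indices; gt_kps: (N, k, 2)
    Returns (pred_idx, gt_idx) arrays giving the optimal assignment.
    """
    cls_cost = -pred_probs[:, gt_labels]                               # (N_max, N)
    l1_cost = np.abs(pred_kps[:, None] - gt_kps[None]).sum((-1, -2))   # (N_max, N)
    cost = w_cls * cls_cost + w_l1 * l1_cost
    return linear_sum_assignment(cost)
```
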

With these notations, we define the detector loss as

$$\bm{\mathcal{L}}_{det}=\sum_{j=1}^{N_{max}}-\log\hat{p}_{\sigma^{\ast}(j)}(l_{j})+\mathbb{1}_{(l_{j}\neq\emptyset)}\,\bm{\mathcal{L}}_{keypoint}(a_{j},\hat{a}_{\sigma^{\ast}(j)}),$$

where $\sigma^{\ast}$ is the optimal assignment computed by the Hungarian algorithm (Kuhn, [1955](https://arxiv.org/html/2206.08920#bib.bib20)).

### C.5 Baseline models

HDMapNet. For all experiments in our paper, we employ the official HDMapNet model from the provided codebase and directly take its vectorized results. As the Argoverse dataset was not included in the original HDMapNet paper, we adapted the nuScenes data processing steps from their codebase to create an Argoverse2 dataloader for our experiments.

STSU. STSU uses a transformer module to detect moving objects and centerline segments, and an association head to piece the segments together into a road graph. To adapt STSU to our task, we use a two-layer MLP to predict lane segments and keep only its object branch and polyline branch.

Appendix D More Ablation Studies
--------------------------------

### D.1 Curve sampling strategies

Table 5: Ablation study of curve sampling strategies.

| Vertex Sampling Method | Fréchet AP$_{ped}$ | Fréchet AP$_{divider}$ | Fréchet AP$_{boundary}$ | Fréchet mAP | Chamfer AP$_{ped}$ | Chamfer AP$_{divider}$ | Chamfer AP$_{boundary}$ | Chamfer mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| curvature-based | 47.0 | 47.4 | 56.9 | 50.4 | 27.6 | 34.4 | 35.4 | 32.5 |
| fixed interval | 26.0 | 23.6 | 37.1 | 28.9 | 14.6 | 17.6 | 18.7 | 17.0 |

We use two approaches to sample polyline vertices.
The first is based on the original nuScenes setting (Caesar et al., [2020](https://arxiv.org/html/2206.08920#bib.bib4)), which samples vertices at positions where the curvature change exceeds a certain threshold. The second is to sample vertices at fixed intervals ($1\,m$). We compare our method under these two sampling strategies, and the results are shown in Table [5](https://arxiv.org/html/2206.08920#A4.T5). Curvature-based sampling outperforms its fixed-interval counterpart by a large margin, leading by 21.5 Fréchet mAP and 15.5 Chamfer mAP. We hypothesize that fixed-interval sampling introduces a large set of redundant vertices that contribute negligibly to the geometry and thus under-weighs the essential vertices (e.g. the vertices at the corner of a polyline) during learning.

### D.2 Vertex modeling methods

Table 6: Ablation study of vertex modeling methods.

| Modeling Method | Fréchet AP$_{ped}$ | Fréchet AP$_{divider}$ | Fréchet AP$_{boundary}$ | Fréchet mAP | Chamfer AP$_{ped}$ | Chamfer AP$_{divider}$ | Chamfer AP$_{boundary}$ | Chamfer mAP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| discrete | 47.0 | 47.4 | 56.9 | 50.4 | 27.6 | 34.4 | 35.4 | 32.5 |
| continuous | 38.0 | 41.6 | 46.1 | 41.9 | 26.5 | 28.1 | 30.1 | 26.5 |

We investigate both discrete and continuous ways to model polyline vertices. The discrete version of the polyline generator is described in §[3.4](https://arxiv.org/html/2206.08920#S3.SS4). With the same model structure, we follow SketchRNN (Ha & Eck, [2017](https://arxiv.org/html/2206.08920#bib.bib17)) and use a mixture of Gaussian distributions to model the vertices of polylines as continuous variables. The comparison is shown in Table [6](https://arxiv.org/html/2206.08920#A4.T6). We find that using discrete embeddings for vertex coordinates results in a considerable performance gain, with Chamfer mAP increasing from 18.2 to 32.5 and Fréchet mAP increasing from 26.8 to 50.4. These improvements suggest that the non-local characteristic of the categorical distribution helps our model capture complex vertex coordinate distributions.
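
A small sketch of the discrete vertex modeling discussed above: coordinates are quantized into the $200\times 100$ BEV grid of $0.3\,m$ cells (Appendix C.1) so the generator can treat them as categorical tokens; the token layout and the coordinate origin are assumptions for illustration:

```python
import torch

X_BINS, Y_BINS, CELL = 200, 100, 0.3   # grid from Appendix C.1

def quantize_vertices(vertices):
    """Map (N_v, 2) metric vertices (assumed shifted to non-negative range)
    to a flattened sequence of integer bin tokens for the generator."""
    x = (vertices[:, 0] / CELL).long().clamp(0, X_BINS - 1)
    y = (vertices[:, 1] / CELL).long().clamp(0, Y_BINS - 1)
    # Flatten to [x1, y1, x2, y2, ...]; y tokens are offset so x and y share one
    # categorical vocabulary of size X_BINS + Y_BINS (plus an EOS token).
    return torch.stack([x, X_BINS + y], dim=1).reshape(-1)

def dequantize_tokens(tokens):
    """Inverse mapping from the token sequence back to metric cell centers."""
    t = tokens.view(-1, 2).float()
    x = (t[:, 0] + 0.5) * CELL
    y = (t[:, 1] - X_BINS + 0.5) * CELL
    return torch.stack([x, y], dim=1)
```
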

### D.3 Extrinsic Robustness

To probe the robustness of VectorMapNet, we follow Lift-Splat-Shoot (Philion & Fidler, [2020](https://arxiv.org/html/2206.08920#bib.bib42)), which tests the model under the kinds of noise that occur in self-driving, such as biased camera extrinsics. Table [7](https://arxiv.org/html/2206.08920#A4.T7) shows that training the model with noisy extrinsics can lead to better test-time performance, and that our model maintains good performance even under large amounts of extrinsic noise. These results demonstrate our model's robustness against extrinsic noise.

Table 7: VectorMapNet performance (mAP) under different extrinsic noise.

| Train-time extrinsic noise | Test noise 0 | Test noise 0.1 | Test noise 0.3 | Test noise 0.6 |
| --- | --- | --- | --- | --- |
| 0 | 42.2 | 42.6 | 42.4 | 42.5 |
| 0.1 | 43.8 | 43.6 | 43.5 | 43.6 |
| 0.3 | 43.0 | 43.0 | 43.1 | 43.1 |
| 0.6 | 42.6 | 42.72 | 42.8 | 42.9 |

Appendix E Additional Discussions
---------------------------------

The potential negative societal impact. While there are legitimate concerns regarding privacy in autonomous driving systems that use generated maps, our method is designed with such concerns in mind. The proposed VectorMapNet model relies solely on onboard sensor observations and does not track any global locations or individual movements. Consequently, it poses no risk of leaking personal information, such as patterns in individuals' movements.

Confidence indicator. Learning-based models can provide confidence indicators to describe prediction uncertainty. Our model generates two types of confidence scores: (1) the DETR-like map element detector produces a confidence score for each detected map element, an instance-level score; (2) the auto-regressive polyline generator produces a score for each point on a polyline, a point-level score. Both can indicate the confidence or likelihood of the model's predictions. However, how to best use this uncertainty in downstream tasks remains an open question, which we leave for future research.