Add Batch 35d23857-e71c-4c51-950e-0036d2fdc2d8
- .gitattributes +6 -0
- 2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_content_list.json +0 -0
- 2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_model.json +0 -0
- 2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_origin.pdf +3 -0
- 2501.18xxx/2501.18672/full.md +480 -0
- 2501.18xxx/2501.18672/images.zip +3 -0
- 2501.18xxx/2501.18672/layout.json +0 -0
- 2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_content_list.json +0 -0
- 2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_model.json +0 -0
- 2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_origin.pdf +3 -0
- 2502.11xxx/2502.11028/full.md +538 -0
- 2502.11xxx/2502.11028/images.zip +3 -0
- 2502.11xxx/2502.11028/layout.json +0 -0
- 2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_content_list.json +1884 -0
- 2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_model.json +0 -0
- 2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_origin.pdf +3 -0
- 2503.05xxx/2503.05689/full.md +392 -0
- 2503.05xxx/2503.05689/images.zip +3 -0
- 2503.05xxx/2503.05689/layout.json +0 -0
- 2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_content_list.json +0 -0
- 2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_model.json +0 -0
- 2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_origin.pdf +3 -0
- 2506.07xxx/2506.07927/full.md +0 -0
- 2506.07xxx/2506.07927/images.zip +3 -0
- 2506.07xxx/2506.07927/layout.json +0 -0
- 2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_content_list.json +0 -0
- 2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_model.json +0 -0
- 2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_origin.pdf +3 -0
- 2512.02xxx/2512.02792/full.md +0 -0
- 2512.02xxx/2512.02792/images.zip +3 -0
- 2512.02xxx/2512.02792/layout.json +0 -0
- 2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_content_list.json +0 -0
- 2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_model.json +0 -0
- 2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_origin.pdf +3 -0
- 2512.06xxx/2512.06357/full.md +450 -0
- 2512.06xxx/2512.06357/images.zip +3 -0
- 2512.06xxx/2512.06357/layout.json +0 -0
.gitattributes
CHANGED
@@ -5617,3 +5617,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2501.01xxx/2501.01994/165ca9aa-864f-4ebd-af02-de61c076213f_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2501.05xxx/2501.05464/252a9e47-b3b6-487a-ba83-698123d55a24_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2502.15xxx/2502.15694/62fba17e-72b7-43cf-a6e0-040163f1b6e2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_origin.pdf filter=lfs diff=lfs merge=lfs -text
2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.

2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_model.json
ADDED
The diff for this file is too large to render. See raw diff.

2501.18xxx/2501.18672/873e1004-e54e-4182-b091-dd60a746469c_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:907b2cdbad8d1fb0f7794ab66c652fba8af2feb43c9e6524ea0d5c58ab889aea
+size 24653430
2501.18xxx/2501.18672/full.md
ADDED
@@ -0,0 +1,480 @@
# Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting

Yansong Qu$^{1}$, Dian Chen$^{1}$, Xinyang Li$^{1}$, Xiaofan Li$^{2\dagger}$, Shengchuan Zhang$^{1}$, Liujuan Cao$^{1\ddagger}$, Rongrong Ji$^{1}$

$^{1}$Xiamen University, $^{2}$Baidu Inc.

$\dagger$ project lead, $\ddagger$ corresponding author
{quyans,chendian}@stu.xmu.edu.cn, imlixinyang@gmail.com, shalfunnn@gmail.com, {zsc_2016,caoliujuan,rrji}@stu.edu.cn



Figure 1: DYG achieves flexible, high-quality drag-based 3D editing results. Given a reconstructed 3D Gaussian field, users specify the desired editing area with 3D masks (brighter areas) and perform scene edits through pairs of control points, including handle points (blue) and target points (red).
# ABSTRACT

Recent advancements in 3D scene editing have been propelled by the rapid development of generative models. Existing methods typically utilize generative models to perform text-guided editing on 3D representations, such as 3D Gaussian Splatting (3DGS). However, these methods are often limited to texture modifications and fail when addressing geometric changes, such as editing a character's head to turn around. Moreover, such methods lack accurate control over the spatial position of editing results, as language struggles to precisely describe the extent of edits. To overcome these limitations, we introduce DYG, an effective 3D drag-based editing method for 3D Gaussian Splatting. It enables users to conveniently specify the desired editing region and the desired dragging direction through the input of 3D masks and pairs of control points, thereby enabling precise control over the extent of editing. DYG integrates the strengths of the implicit triplane representation to establish the geometric scaffold of the editing results, effectively overcoming suboptimal editing outcomes caused by the sparsity of 3DGS in the desired editing regions. Additionally, we incorporate a drag-based Latent Diffusion Model into our method through the proposed Drag-SDS loss function, enabling flexible, multi-view consistent, and fine-grained editing. Extensive experiments demonstrate that DYG conducts effective drag-based editing guided by control point prompts, surpassing other baselines in terms of editing effect and quality, both qualitatively and quantitatively. Visit our project page at https://quyans.github.io/Drag-Your-Gaussian/.

# KEYWORDS

3D Gaussian Splatting, Drag-based Editing, Score Distillation
# 1 INTRODUCTION

The representation and manipulation of 3D scenes have become increasingly significant in a variety of fields, such as virtual reality (VR) and augmented reality (AR). Traditional 3D representations, such as meshes, voxels, and point clouds, have facilitated many advancements but typically encounter challenges in scalability, efficiency, and expressiveness. Developments in neural scene representations, such as Neural Radiance Fields (NeRF) [19], have demonstrated impressive capabilities in synthesizing photorealistic novel views. However, NeRF-related methods [4, 21, 28, 35] rely on extensive sampling processes and are computationally intensive, making them less suitable for interactive editing tasks.

Recently, 3D Gaussian Splatting (3DGS) [15] has attracted substantial attention for its ability to represent volumetric data using sparse Gaussian primitives. By replacing dense neural networks with lightweight and interpretable Gaussian primitives, 3DGS enables real-time rendering and fast updates, making it a promising 3D representation for 3D editing. Recent 3DGS-based scene editing methods [5, 6, 34, 37] leverage pre-trained 2D latent diffusion models (LDM) with text prompts to guide the optimization of 3DGS. However, they primarily focus on texture modifications or stylistic changes, falling short in enabling precise geometric editing, as illustrated in Fig. 2.

To overcome these geometric editing limitations, we draw inspiration from recent advancements in 2D drag-based image editing. Methods such as DragGAN [24] and its successors [31, 32] provide precise control and intuitive image editing capabilities through the use of paired control points, including handle points and target points. However, applying 2D drag-based generative models to guide the optimization of 3DGS for drag-based 3D editing introduces a new challenge: the target regions often exhibit sparse distributions of 3D Gaussians, making it challenging to effectively edit the 3D Gaussian field. Consequently, the model tends to align the texture of nearby 3D Gaussians around the target area, rather than accurately generating the desired geometric structures. This issue significantly impacts the precision and realism of the editing results.

A straightforward approach involves adopting a rigid transformation [35, 38], where 3DGS primitives around the handle points are copied to the target region during initialization. However, it significantly limits the diversity of editing tasks and often results in poor geometry in the target area, leading to noticeable artifacts.

In this work, we present Drag Your 3D Gaussian (DYG), a novel drag-based 3DGS editing approach for real-world scenes. To enhance usability and accessibility, we extend the 2D drag-based image editing paradigm to 3D, introducing 3D masks along with pairs of 3D control points as inputs for editing 3D scenes. We integrate the independence and discreteness of 3DGS with the continuous nature of the implicit triplane representation to address the challenge of sparse 3D Gaussian distributions. To encode the positions of 3D Gaussians, we introduce the Multi-resolution Triplane Positional (MTP) Encoder and employ a Region-Specific Positional (RSP) Decoder to predict positional offsets, constructing the geometric scaffold for dragging. Additionally, we propose a Soft Local Edit (SLE) strategy to focus editing on the desired region while preserving the integrity of other areas. Leveraging an off-the-shelf 2D drag-based LDM as supervision through the proposed Drag-SDS loss function, we enable perceptually plausible scene dragging with multi-view consistency. As shown in Fig. 1 and Fig. 2, DYG facilitates flexible and fine-grained 3D scene editing.

Figure 2: Differences between our drag-based editing approach and the text-guided editing method GS-Editor [6]. The latter often fails to achieve geometric editing goals and struggles to describe the degree of editing through text, whereas our method allows for flexible control over the extent of edits. Panels: Original View, GS-Editor, Ours drag ①, Ours drag ②.

Our contributions can be summarized as follows:

- We propose an effective drag-based scene editing method for 3D Gaussian Splatting, capable of delivering flexible and high-quality results for geometric editing tasks, including deformation, transformation, and morphing.
- We introduce the MTP encoder to address the challenge of uneven spatial distribution of Gaussian primitives, facilitating smooth geometric editing. Additionally, the RSP decoder and SLE strategy ensure harmonious local editing. Finally, Drag-SDS leverages an existing 2D drag-based LDM to achieve multi-view consistent dragging results.
- Extensive experiments quantitatively and qualitatively demonstrate that our method achieves state-of-the-art (SOTA) 3D scene editing results, validating the versatility and generalization capabilities of DYG.
# 2 RELATED WORK

# 2.1 Drag-based Image Editing

In light of advancements in generative models, image editing has seen significant development. However, text-guided image editing methods [2, 10, 14, 29] often lack precision and flexibility when it comes to editing spatial attributes. To address this, DragGAN [24] enables impressive interactive drag-based image editing by utilizing control points and optimizing the latent codes of generative adversarial networks (GANs). However, the applicability of this framework is constrained by the intrinsic limitations in the capacity of GANs. To enhance generalization, subsequent works [18, 20, 32] extend this paradigm to large-scale Latent Diffusion Models (LDM). However, these methods depend on computationally intensive operations, such as latent optimization, resulting in inefficiencies in editing tasks.

Lightning-Drag [31] encodes user prompts into corresponding point embeddings, which are then injected into the self-attention modules of the Stable Diffusion inpainting backbone [11, 33] to guide the generation process. This approach eliminates the need for the time-consuming operations required by previous methods [22, 32], enabling interactive drag-based image editing. In this work, we adopt Lightning-Drag [31] as the guiding model for editing 3DGS, owing to its rapid and high-quality drag-based editing capabilities.



Figure 3: The overall framework of DYG. Left: Given a 3D Gaussian scene, users provide 3D masks and several pairs of control points as input. Top-right: The Smooth Geometric Editing module predicts positional offsets for 3D Gaussians, resolving the issue of sparse distributions within the target region while ensuring seamless local editing. We adopt a two-stage training strategy: the first stage constructs the geometric scaffold of the edited Gaussians, and the second stage refines the texture details. Bottom-right: In the Score Distillation Guidance Module, to ensure stable optimization, 3D control points are projected onto 2D control points for a specified viewpoint. The RGB image and 2D mask, rendered from the mirrored initial 3D Gaussians, are encoded into point embeddings (P-Emb) and appearance embeddings (A-Emb), which act as conditions for the drag-based LDM. This process leverages our proposed Drag-SDS loss function to enable flexible and view-consistent 3D drag-based editing.

# 2.2 3D Editing for Radiance Fields

Neural Radiance Fields (NeRF) [19] introduced radiance fields and has excelled in novel view synthesis, producing realistic rendering results. However, NeRF's reliance on a neural network for a fully implicit representation of scenes leads to tedious training and rendering times. More recently, 3DGS [9] has garnered attention from researchers due to its real-time rendering speed and photo-realistic rendering quality.

Robust 3D representations have driven advancements in 3D editing. Early methods [8, 23, 40] learn object-compositional NeRFs, enabling object-level editing, such as duplicating or moving objects. However, these approaches are limited to coarse-grained manipulations. Works like NeuMesh [39] and related methods [35, 38, 42] propose using explicit geometry, such as cages or point clouds, to facilitate geometric editing. Nevertheless, these methods heavily rely on precise geometric reconstructions. SC-GS [13] adopts a sampling-based approach to learn anchor points for editing 3D scenes. However, these methods strongly depend on accurate geometric representations, offer limited editing diversity, and often suffer from unreasonable results, such as local tearing artifacts.

With the success of generative models, methods [9, 34, 44, 45] like GS-Editor [6] and GS-Ctrl [37] leverage text-guided latent diffusion models for 3D scene editing. While effective for texture or style modifications, these approaches often fail to handle geometric changes. Moreover, they struggle to accurately specify the spatial extent of editing through text. In contrast, our method enables flexible, fine-grained geometric editing while maintaining the plausibility of the edited scenes.
# 3 PRELIMINARY

# 3.1 3D Gaussian Splatting

3D Gaussian Splatting (3DGS) [15] utilizes a set of anisotropic 3D Gaussians to model three-dimensional information and provides fast rendering by efficiently rasterizing 3D Gaussians into images, given camera poses. Specifically, each Gaussian is composed of its position $p \in \mathbb{R}^3$, a scale $s \in \mathbb{R}^3$, a rotation quaternion $r \in \mathbb{R}^4$, an opacity $o \in \mathbb{R}$, and the spherical harmonics (SH) coefficients $c \in \mathbb{R}^d$ for volume rendering. These 3D Gaussians are projected onto the image plane as 2D Gaussians and rendered in real time using the tiled rasterizer.
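To make the attribute list concrete, the following is a minimal sketch of a per-Gaussian container in PyTorch; the field names, shapes, and the dummy-field helper are illustrative assumptions, not the 3DGS reference implementation.

```python
import torch
import torch.nn.functional as F
from dataclasses import dataclass

@dataclass
class GaussianField:
    """Minimal container mirroring the per-Gaussian attributes listed above."""
    p: torch.Tensor   # (N, 3) positions
    s: torch.Tensor   # (N, 3) scales
    r: torch.Tensor   # (N, 4) rotation quaternions
    o: torch.Tensor   # (N,)   opacities
    c: torch.Tensor   # (N, d) spherical-harmonics coefficients

def random_field(n=1000, sh_dim=48):
    """Tiny helper that builds a dummy field for experimentation."""
    return GaussianField(
        p=torch.randn(n, 3),
        s=torch.rand(n, 3) * 0.05,
        r=F.normalize(torch.randn(n, 4), dim=-1),
        o=torch.rand(n),
        c=torch.zeros(n, sh_dim),
    )
```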
# 3.2 Score Distillation Sampling (SDS)

DreamFusion [26] introduces the SDS loss function, utilizing a pre-trained 2D latent diffusion model (LDM) to optimize 3D representations for 3D generation. Specifically, given a differentiable 3D representation, such as NeRF [19] or 3DGS [15], parameterized by $\mathcal{G}$ and a rendering function $\mathcal{R}$, the rendered image corresponding to a camera pose $c$ can be expressed as $x = \mathcal{R}(\mathcal{G}, c)$. SDS leverages the prior knowledge of an LDM to guide the optimization of the 3D representation $\mathcal{G}$ in a low-resolution latent space. This latent space is articulated as $z = \mathcal{E}(x)$, $x = \mathcal{D}(z)$, where $\mathcal{E}$ and $\mathcal{D}$ represent the encoder and decoder of the LDM, respectively. The SDS loss function is formulated as follows:

$$
\nabla_{\mathcal{G}} \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t, \epsilon, c} \left[ w(t) (\hat{\epsilon} - \epsilon) \frac{\partial \mathcal{E}(\mathcal{R}(\mathcal{G}, c))}{\partial \mathcal{G}} \right] \tag{1}
$$

where $\epsilon$ denotes ground truth noise, $\hat{\epsilon}$ is the noise predicted by the LDM with $z_{t}$ as input for timestep $t$, and $w(t)$ represents a weighting function that varies according to the timestep $t$. The SDS loss can be reformulated [33, 43] as follows:

$$
\mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t, \epsilon, c} \left[ w(t) \frac{\sqrt{\bar{\alpha}_{t}}}{\sqrt{1 - \bar{\alpha}_{t}}} \| z - \hat{z} \|_{2}^{2} \right], \tag{2}
$$

where

$$
\hat{z} = \frac{z_{t} - \sqrt{1 - \bar{\alpha}_{t}} \, \hat{\epsilon}}{\sqrt{\bar{\alpha}_{t}}}, \tag{3}
$$

and $\bar{\alpha}_{t}$ is also a weighting function that dynamically varies with each timestep $t$.
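As a concrete reference for Eqs. (1)-(3), below is a minimal PyTorch sketch of the reformulated SDS objective; `unet_eps_fn`, `alphas_cumprod`, and `cond` are placeholders for whatever frozen LDM and noise schedule are used, not the paper's implementation.

```python
import torch

def sds_loss(latents, unet_eps_fn, alphas_cumprod, t, cond=None, w=1.0):
    """Reformulated SDS (Eq. 2): `latents` is z = E(R(G, c)) with gradients attached,
    `unet_eps_fn(z_t, t, cond)` is a frozen noise predictor, `alphas_cumprod` the
    cumulative-alpha schedule, `t` a batch of integer timesteps."""
    noise = torch.randn_like(latents)                                   # epsilon
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a_bar.sqrt() * latents + (1.0 - a_bar).sqrt() * noise         # forward diffusion
    with torch.no_grad():
        eps_hat = unet_eps_fn(z_t, t, cond)                             # LDM prediction
        z_hat = (z_t - (1.0 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()   # Eq. (3)
    # Eq. (2): weighted MSE; its gradient w.r.t. the 3D representation matches Eq. (1).
    return (w * a_bar.sqrt() / (1.0 - a_bar).sqrt() * (latents - z_hat) ** 2).mean()
```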
# 4 METHODS

# 4.1 Problem Definition and Method Overview

Given a reconstructed 3D Gaussian field $\mathcal{G}$, we extend the attributes of 3D Gaussians by adding a mask attribute $m$, which represents the user-defined 3D masks for the desired editing area. The Gaussian field $\mathcal{G}$ can be defined as $\mathcal{G} = \{p_i, r_i, s_i, o_i, c_i, m_i\}_{i=1}^N$. In addition to the 3D mask, users are also required to input $K$ pairs of control points $Q = \{(q_i^o, q_i^t)\}_{i=1}^K$, where $q_i^o, q_i^t \in \mathbb{R}^3$ serve as guidance for the editing process. Our objective is to drag the desired editing region around the handle points $q_i^o$ to the target points $q_i^t$.

Rendering and Projection for 2D Guidance. To ensure stable control of the 2D drag-based LDM, a mirrored copy of $\mathcal{G}$ is preserved, referred to as the Initial 3D Gaussians $\mathcal{G}'$, as shown in Fig. 3. During the training phase, for a given camera pose $c$, an RGB image $I_{c}$ and a 2D mask are rendered from $\mathcal{G}'$ using a similar volumetric rendering approach [15, 27]. This 2D mask is then subjected to a dilation operation to produce the final 2D mask $M_{c}$. Additionally, the 3D control points $Q$ are projected into 2D points $Q_{c}^{2d} = \{(\Pi(\mathbf{q}_{i}^{o}), \Pi(\mathbf{q}_{i}^{t}))\}_{i=1}^{K}$ through the projection transformation $\Pi$. Once $I_{c}$, $M_{c}$, and the view-specific 2D control points $Q_{c}^{2d}$ are obtained, these inputs are utilized as the condition $y$ of the 2D drag-based LDM to guide 3DGS optimization using our proposed Drag-SDS loss.
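The projection $\Pi$ and the mask dilation can be sketched as follows; a standard pinhole model with intrinsics `K` and a world-to-camera matrix `w2c` is assumed here for illustration, which may not match the exact convention the authors use.

```python
import torch
import torch.nn.functional as F

def project_control_points(q_world, K, w2c):
    """Project 3D handle/target points into pixel coordinates for one camera.
    q_world: (N, 3) points, K: (3, 3) intrinsics, w2c: (4, 4) world-to-camera matrix."""
    n = q_world.shape[0]
    q_h = torch.cat([q_world, torch.ones(n, 1, dtype=q_world.dtype)], dim=1)  # homogeneous
    q_cam = (w2c @ q_h.T).T[:, :3]                    # world -> camera frame
    uvw = (K @ q_cam.T).T                             # camera -> image plane
    return uvw[:, :2] / uvw[:, 2:3]                   # perspective divide -> pixels

def dilate_mask(mask_2d, iters=8):
    """Max-pool dilation of a rendered binary mask (H, W), a stand-in for the step
    that turns the raw rendered mask into M_c."""
    m = mask_2d[None, None].float()
    for _ in range(iters):
        m = F.max_pool2d(m, kernel_size=3, stride=1, padding=1)
    return m[0, 0] > 0.5
```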
Local Edit. Our guiding principle is to perform localized edits within the desired editing region while ensuring that the rest of the scene remains unaffected, thereby maintaining overall harmony and realism.

To facilitate this process, we have developed an interactive GUI tool based on 3D Gaussian Splatting. This tool enables users to identify desired editing regions from different viewpoints and generates the 3D mask by calculating the intersection of 3D Gaussians within each view frustum. This real-time interactive approach allows users to efficiently complete the 3D mask selection process with minimal effort, potentially requiring only a single operation.

Method Overview. Figure 3 illustrates the overview of our method. In Sec. 4.2, we explain how to integrate the strengths of the implicit triplane representation and explicit 3DGS to overcome suboptimal editing outcomes caused by the sparsity of 3DGS in target regions, thereby enabling high-quality, localized drag-based editing. In Sec. 4.3, we describe how an existing 2D drag-based LDM is incorporated into our method through the proposed Drag-SDS loss function, enabling flexible, view-consistent, and fine-grained editing.
# 4.2 Smooth Geometric Editing

3D drag-based editing encompasses three main scenarios: (1) Deformation: fine-grained edits, such as adjusting facial features to face a different direction. (2) Transformation: local rigid transformations, exemplified by moving a man's leg to take a step forward. (3) Morphing: structural adjustments, such as raising a collar or making a person's shoulders narrower.

For scenario (1), the challenge lies in fine-grained local editing: modifying the desired region while preserving other areas as much as possible. For scenarios (2) and (3), the key challenge lies in the sparsity of 3D Gaussians around the target points, making it difficult to generate new Gaussians within the target region through optimization or the densify and prune operations [15]. This often results in editing failures.

Based on these observations, and to provide a unified solution, we present the Multi-resolution Triplane Positional Encoder, Region-Specific Positional Decoder, Two-stage Dragging, and Soft Local Edit strategies to achieve smooth geometric editing.
Multi-resolution Triplane Positional (MTP) Encoder. The triplane representation [3, 7, 46] is distinguished by its compactness and efficient expressiveness. Its implicit nature facilitates the learning of 3D structures through volume rendering [15, 28], providing an effective solution to the uneven spatial distribution of Gaussian primitives.

Another consideration is that an intuitive approach to 3D drag-based editing should move the original region's Gaussian primitives to the target region, rather than deleting the primitives in the original region and generating new ones in the target region. To achieve this, we introduce the Multi-resolution Triplane Positional (MTP) Encoder to encode the positions of the 3D Gaussians and predict the position shifts $\Delta P$ with the Region-Specific Positional Decoder.

Specifically, the MTP encoder decomposes the 3D space into three orthogonal, learnable multi-resolution feature planes: $\mathcal{H}_{xy}$, $\mathcal{H}_{xz}$, $\mathcal{H}_{yz}$. The position $p = (x, y, z)$ of each 3D Gaussian is normalized and projected onto these triplanes at varying resolutions:

$$
f_{c}^{s} = \psi^{s}\left(\mathcal{H}_{c}, \pi_{c}(x, y, z)\right), \tag{4}
$$

$$
f = \Theta\left(\underset{s}{\operatorname{concat}} \prod_{c \in C} f_{c}^{s}\right), \tag{5}
$$

where $\pi_c$ projects the point onto the $c$-th plane, and $\psi^s$ performs bilinear interpolation on the plane at resolution $s$. $\prod$ denotes the Hadamard product, $C$ represents the set of planes $\{xy, xz, yz\}$, and $\Theta$ is a lightweight MLP for fusing the mixed-scale positional features.
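A minimal sketch of such a multi-resolution triplane lookup (Eqs. 4-5) in PyTorch follows; the plane resolutions, channel count, and MLP width are illustrative guesses rather than the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneEncoder(nn.Module):
    """Sketch of a multi-resolution triplane positional encoder in the spirit of Eqs. (4)-(5)."""
    def __init__(self, resolutions=(64, 128, 256), channels=16, out_dim=64):
        super().__init__()
        # one learnable feature plane per (resolution, axis-aligned plane)
        self.planes = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(3, channels, r, r)) for r in resolutions
        ])
        self.mlp = nn.Sequential(
            nn.Linear(channels * len(resolutions), out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, p):              # p: (N, 3) positions normalized to [-1, 1]
        coords = [p[:, [0, 1]], p[:, [0, 2]], p[:, [1, 2]]]   # pi_c: xy, xz, yz projections
        feats = []
        for planes in self.planes:     # one iteration per resolution s
            per_plane = []
            for c in range(3):         # psi^s: bilinear lookup on plane c
                grid = coords[c].view(1, -1, 1, 2)
                f = F.grid_sample(planes[c:c + 1], grid, mode='bilinear',
                                  align_corners=True)          # (1, C, N, 1)
                per_plane.append(f[0, :, :, 0].T)              # (N, C)
            feats.append(per_plane[0] * per_plane[1] * per_plane[2])  # Hadamard product
        return self.mlp(torch.cat(feats, dim=-1))              # Theta(concat_s ...)
```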
Region-Specific Positional (RSP) Decoder. We introduced the local-editing guiding principle in Sec. 4.1; however, the implicit triplane representation, combined with the latent-based optimization of SDS, inevitably results in changes to regions outside the 3D mask. To address this issue, the RSP decoder predicts the position shifts $\Delta P$ of the masked 3DGS while introducing a new network to correct unintended movements in regions outside the 3D masks. Additionally, we propose a regularization loss to further constrain the optimization process. Specifically, the 3DGS $\mathcal{G}$ is divided into two subsets: $\mathcal{G}_m = \{(p_i, r_i, s_i, o_i, c_i, m_i) \in \mathcal{G} \mid m_i = 1\}$ and $\mathcal{G}_{um} = \{(p_i, r_i, s_i, o_i, c_i, m_i) \in \mathcal{G} \mid m_i = 0\}$. Two MLPs, $\mathcal{N}_1$ and $\mathcal{N}_2$, are employed as decoders for the feature $f$ obtained from the MTP encoder. The position shift of each Gaussian primitive $g_i$ can be formulated as follows:

$$
\Delta p = \begin{cases} \mathcal{N}_{1}(f) & \text{for } g_{i} \in \mathcal{G}_{m} \\ \operatorname{sg}\left(\mathcal{N}_{1}(f)\right) + \mathcal{N}_{2}(\operatorname{sg}(f)) & \text{for } g_{i} \in \mathcal{G}_{um}, \end{cases} \tag{6}
$$

where $\mathrm{sg}(\cdot)$ is the stop-gradient operator. Based on this, we design a region regularization loss to encourage the unmasked Gaussians to remain unchanged:

$$
\mathcal{L}_{\mathrm{RR}} = \sum_{g_{i} \in \mathcal{G}_{um}} \Delta p_{i}. \tag{7}
$$
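The region-specific decoding of Eq. (6) and the regularizer of Eq. (7) can be sketched as below; the hidden sizes are assumptions, and the regularizer is written here as a simple L1 penalty on the unmasked offsets.

```python
import torch
import torch.nn as nn

class RSPDecoder(nn.Module):
    """Sketch of the region-specific offset decoder of Eq. (6). `feat` comes from the
    triplane encoder; `mask` marks Gaussians inside the user's 3D mask."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.n1 = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.n2 = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, feat, mask):            # feat: (N, F), mask: (N,) bool
        d1 = self.n1(feat)                    # N1(f): offsets trained on the masked region
        # outside the mask: stop-gradient on N1 and on the shared feature, correct with N2
        d_um = d1.detach() + self.n2(feat.detach())
        delta_p = torch.where(mask[:, None], d1, d_um)
        # region regularization (Eq. 7): push unmasked offsets toward zero (L1 here)
        loss_rr = delta_p[~mask].abs().sum()
        return delta_p, loss_rr
```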
Two-stage Dragging. The entire dragging process consists of two stages. The first stage focuses on optimizing the geometric structure, thereby establishing the foundational geometric scaffold of the edited scene. During this stage, the model optimizes both the MTP encoder and the RSP decoder while freezing all 3DGS parameters and halting the densify and prune operations [15]. As shown in Fig. 10, in the complex deformation task of turning a character's face, after Stage 1 the Gaussians corresponding to the face are dragged to the target region, while the positions of unmasked Gaussians remain unchanged. The second stage focuses on refining the texture details of the scene. During this stage, the other attributes of the 3D Gaussians (color, opacity, rotation, and scale) are primarily optimized, and the densify and prune operations are reactivated. After constructing the basic geometric scaffold in the first stage, the regions outside the desired editing area tend to remain unchanged during the second stage. The newly added 3D Gaussians primarily originate from the splitting or duplication of elements in $\mathcal{G}_m$. Consequently, all newly added 3D Gaussians are assigned to $\mathcal{G}_m$ for subsequent optimization.

Soft Local Edit (SLE). Strictly freezing the parameters of 3DGS outside the 3D mask [6, 34] to facilitate local editing may result in disjoint effects, as illustrated in Fig. 7(e). To address this limitation, we adopt a soft 3D mask strategy. Specifically, for each 3D Gaussian within $\mathcal{G}_m$, we identify its K-nearest neighbors (KNN) and select those belonging to $\mathcal{G}_{um}$ to form the set $\mathcal{G}_{\mathrm{knn}}$. These neighboring Gaussians are subsequently optimized with a reduced learning rate to ensure smoother transitions.
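A brute-force sketch of the SLE neighbor selection is shown below, together with how a reduced learning rate could be applied; the value of k and the learning rates are illustrative, not the paper's settings.

```python
import torch

def soft_edit_group(positions, mask, k=8):
    """For every masked Gaussian, find its k nearest neighbors and collect the unmasked
    ones into G_knn (Soft Local Edit). Brute-force distances, illustrative only.
    positions: (N, 3) float, mask: (N,) bool."""
    masked_pos = positions[mask]                                  # (M, 3)
    d = torch.cdist(masked_pos, positions)                        # (M, N) pairwise distances
    knn_idx = d.topk(k + 1, largest=False).indices[:, 1:]         # drop the self-match
    knn = torch.zeros_like(mask)
    knn[knn_idx.reshape(-1)] = True
    return knn & ~mask                                            # unmasked neighbors only

# usage sketch: give the soft-boundary Gaussians their own (smaller) learning rate
# opt = torch.optim.Adam([
#     {"params": params_in_mask,  "lr": 1e-3},
#     {"params": params_in_g_knn, "lr": 1e-4},
# ])
```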
# 4.3 Score Distillation Guidance

The SDS loss [26], which leverages an LDM as the guidance model to produce multi-view consistent 3D results, has been widely adopted in 3D generation methods [17, 36, 45]. However, it suffers from issues of over-saturation and over-smoothing. Inspired by [16, 41], we extend SDS and propose an improved score distillation loss function. For the predicted noise $\hat{\epsilon}$ in Eq. (1), we extend it into a composite term defined as follows:

$$
\hat{\epsilon} = \epsilon_{\mathrm{tgt}} - \epsilon_{\mathrm{src}} + \epsilon, \tag{8}
$$

where $\epsilon_{\mathrm{tgt}}$ represents the noise predicted by the LDM, and $\epsilon_{\mathrm{src}}$ denotes a learnable source prediction for adaptive estimation of the current distribution.



Figure 4: Detailed illustration of the Score Distillation Guidance Module and the Drag-SDS loss, presented in Fig. 3. We employ two different UNets to predict $\epsilon_{\mathrm{tgt}}$ and $\epsilon_{\mathrm{src}}$ described in Eq. (8), respectively. The components within the orange box represent the inputs to the Inpainting UNet, while the components within the green box signify the inputs to the Original SD UNet.
Our guidance model, Lightning-Drag [31], employs the Stable Diffusion Inpainting U-Net as the backbone to predict $\epsilon_{\mathrm{tgt}}$. It takes as input the concatenation of the noise latents $z_{t}$, a binary mask $m_{2\mathrm{d}}$, and the latents of the masked initial image $m_{2\mathrm{d}} \odot x_0$. The model also incorporates the point embedding of the 2D control points and the appearance embedding as the condition $y$. For clarity, we omit the classifier-free guidance [26, 41]. As shown in Fig. 4, the output of our guidance model is represented as $\hat{\epsilon}_{\mathrm{inpaint}} = \epsilon_{\theta}(z_t, t, y, m_{2\mathrm{d}}, \mathcal{E}(m_{2\mathrm{d}} \odot x_0))$. Our Drag-SDS loss is composed of three components: an image-space loss $\mathcal{L}_{\mathrm{img}}$, a latent-space loss $\mathcal{L}_{\mathrm{lat}}$, and $\mathcal{L}_{\mathrm{lora}}$.

Lightning-Drag provides reliable predictions for $\epsilon_{\mathrm{tgt}}$; however, it is not suitable for estimating $\epsilon_{\mathrm{src}}$. This limitation stems from the fact that the inpainting backbone tends to focus primarily on the information within the masked region while preserving the content outside the mask, thereby failing to fully capture the current distribution. Therefore, rather than straightforwardly using the same UNet to predict both $\epsilon_{\mathrm{tgt}}$ and $\epsilon_{\mathrm{src}}$ as in [16, 41], we utilize the original Stable Diffusion UNet with a LoRA model [12] $\phi$ as the predictor of $\epsilon_{\mathrm{src}}$, denoted as $\hat{\epsilon}_{\phi}(x_t, t, \hat{y}_\emptyset)$, where $\hat{y}_\emptyset$ is a learnable embedding initialized to zero. The LoRA model is trained using a simple diffusion loss, defined as:

$$
\mathcal{L}_{\mathrm{lora}} = \mathbb{E}_{t, c, \epsilon} \left[ \| \epsilon_{\phi}(x_{t}, t, \hat{y}_{\emptyset}) - \epsilon \|_{2}^{2} \right] \tag{9}
$$

The latent-space score objective, referred to as $\mathcal{L}_{\mathrm{lat}}$, is formulated similarly to Eq. (2). The image-space score distillation loss function is defined as follows:

$$
\mathcal{L}_{\mathrm{img}} = \mathbb{E}_{t, c, \epsilon} \left[ w(t) \frac{\sqrt{\bar{\alpha}_{t}}}{\sqrt{1 - \bar{\alpha}_{t}}} \| x - \hat{x} \|_{2}^{2} \right], \tag{10}
$$

where $\hat{x} = \mathcal{D}(\hat{z})$ and $\mathcal{D}$ is the image decoder of the LDM.



Figure 5: Qualitative comparison between DYG and different baselines. The first column shows two rendered views of the original 3D scene, where the 3D editing points are projected onto the 2D plane for visualization. SC-GS [13] may show unnatural results, as well as blurring or tearing of the background, while GS-Editor [6] and GS-Ctrl [37] frequently fail to perform successful edits. Additionally, GS-Ctrl tends to exhibit over-saturation issues, and 2D-Lifting suffers from scene blurriness. By contrast, DYG is able to sufficiently interpret both the user's dragging intent and the 3D scene context, thereby achieving effective editing and generating detailed results across various scenarios, including deformation, transformation, and morphing.

The final Drag-SDS loss function can be defined as:

$$
\mathcal{L}_{\mathrm{Drag\text{-}SDS}} = \lambda_{\mathrm{lat}} \mathcal{L}_{\mathrm{lat}} + \lambda_{\mathrm{img}} \mathcal{L}_{\mathrm{img}} + \lambda_{\mathrm{lora}} \mathcal{L}_{\mathrm{lora}}, \tag{11}
$$

where $\lambda_{\mathrm{lat}}$, $\lambda_{\mathrm{img}}$, and $\lambda_{\mathrm{lora}}$ represent the weights for the latent-space, image-space, and LoRA objectives, respectively.
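Putting Eqs. (8)-(11) together, a sketch of the combined objective might look as follows; `eps_tgt` would come from the frozen inpainting UNet conditioned on the drag prompts, `eps_src` from the LoRA-adapted SD UNet, `decode` stands in for the LDM image decoder, and the lambda weights are illustrative rather than the paper's values.

```python
import torch

def drag_sds_loss(z, z_t, x, eps, eps_tgt, eps_src, a_bar, decode,
                  lam_lat=1.0, lam_img=0.1, lam_lora=1.0, w=1.0):
    """Sketch of assembling the Drag-SDS terms. z / x are the rendered latents and image
    (gradients attached), z_t the noised latents, a_bar the cumulative alpha for timestep t."""
    eps_hat = eps_tgt - eps_src + eps                                   # Eq. (8), composite noise
    z_hat = (z_t - (1.0 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()       # Eq. (3) with eps_hat
    coef = w * a_bar.sqrt() / (1.0 - a_bar).sqrt()
    loss_lat = (coef * (z - z_hat.detach()) ** 2).mean()                # latent-space term (Eq. 2 form)
    with torch.no_grad():
        x_hat = decode(z_hat)                                           # D(z_hat)
    loss_img = (coef * (x - x_hat) ** 2).mean()                         # image-space term, Eq. (10)
    loss_lora = ((eps_src - eps) ** 2).mean()                           # LoRA diffusion loss, Eq. (9)
    return lam_lat * loss_lat + lam_img * loss_img + lam_lora * loss_lora   # Eq. (11)
```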

|
| 203 |
+
|
| 204 |
+

|
| 205 |
+
|
| 206 |
+

|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
|
| 210 |
+

|
| 211 |
+
|
| 212 |
+

|
| 213 |
+
|
| 214 |
+

|
| 215 |
+
|
| 216 |
+

|
| 217 |
+
|
| 218 |
+

|
| 219 |
+
|
| 220 |
+

|
| 221 |
+
|
| 222 |
+

|
| 223 |
+
OS View 1
|
| 224 |
+
|
| 225 |
+

|
| 226 |
+
ES View 1
|
| 227 |
+
|
| 228 |
+

|
| 229 |
+
OS View 2
|
| 230 |
+
ES View 2
|
| 231 |
+
Figure 6: More qualitative results. The top three rows showcase real scenes, while the bottom two rows are generated scenes. For each edit, we show two views of both the original (OS) and edited scenes (ES).
|
| 232 |
+
|
| 233 |
+
# 5 IMPLEMENTATION DETAILS

Our model, built on a 3D Gaussian scene reconstructed using vanilla 3D Gaussian Splatting [15], completes one dragging operation on an A100-40G GPU in approximately 10 minutes, whereas GS-Editor [6] requires over 20 minutes.

Additional implementation details are provided in the Appendix.
# 6 EXPERIMENTS

# 6.1 Evaluation Setup

Baseline Methods. It should be noted that this is the first work to perform 3DGS drag-based editing in real scenes, and therefore there are no directly comparable baselines. We compare our method with other 3DGS-based text-driven editing approaches, including GS-Editor [6] and GS-Ctrl [37], as well as the anchor-point-based dragging method SC-GS [13]. Additionally, we construct a naive baseline, 2D-Lifting, which uses the same inputs as our method but performs drag-based editing on 2D images with our guidance model, Lightning-Drag [31], followed by 3D reconstruction [15].

Datasets. To comprehensively evaluate our method, we select six representative scenes from two datasets: Mip-NeRF360 [1] and Instruct-NeRF2NeRF [9]. We perform over 20 types of editing tasks on these scenes, which include human faces, indoor objects, and complex outdoor scenes.

# 6.2 Qualitative Evaluation

Since no other existing methods currently support 3D drag-based editing for real scenes, we make our best effort to guide SC-GS, GS-Editor, and GS-Ctrl to achieve comparable editing results, ensuring a fair comparison.

Fig. 5 presents the results of our method compared to other approaches. The editing results illustrate that our approach enables flexible edits while ensuring high visual quality and multi-view consistency. For instance, in the first row, we demonstrate a deformation scenario: dragging the face to one side, which is a highly challenging task. The operation not only involves rotating the entire head but also requires the facial features, such as the eyes, to change harmoniously and synchronously to maintain consistency and realism. It can be observed that SC-GS may exhibit unnatural distortions, as well as blurring or tearing of the background caused by dragging. GS-Editor only shows minor darkening in color without effective geometric changes. GS-Ctrl achieves slightly more noticeable changes, but similar to GS-Editor, it fails to perform meaningful geometric editing. 2D-Lifting manages to turn the head, but severe inconsistencies across views result in significant scene blurriness. By contrast, our method successfully turns the head to one side while maintaining better details.

Fig. 6 showcases more results on additional scenes, demonstrating our method's ability to handle complex scenarios. These results illustrate that our method effectively interprets the control-point prompts and produces plausible, high-quality 3D drag-based edits.
# 6.3 Quantitative Evaluation

As there is currently no widely accepted benchmark dataset for 3D scene editing, we conducted user studies to evaluate the editing results and quality. To ensure objectivity, we also adopted the GPT Evaluation Score as an additional metric. Furthermore, we utilized aesthetic evaluation metrics [30] to assess the quality of the edited scenes.

User Study. We collected survey responses from 75 users, with each questionnaire containing comparisons of 10 edited scenes. Users were asked to select their preferred editing results based on two criteria, Edit Effect and Scene Quality, resulting in a total of 1,500 votes. Fig. 9 visualizes the results of the user study, showing that $86.1\%$ and $62\%$ of users favored our editing results, significantly outperforming the other compared methods.

GPT Score. We utilized GPT-4o to evaluate the editing results of different methods, asking it to rate the results based on three criteria: Scene Quality (SQ), which assesses the visual quality of the edited scene; Editing Effect (EE), which examines whether the editing result meets the intended requirements; and Retention of Initial Features (RIF), which evaluates whether non-edited regions remain unchanged. Scores were assigned on a scale from 0 to 5 for each criterion. Table 1 presents the average scores across all editing scenarios, with the GPT-Overall (GPTO) score calculated as $0.3 \times \mathrm{SQ} + 0.4 \times \mathrm{EE} + 0.3 \times \mathrm{RIF}$. The first row shows the scores for the initial, unedited scenes, which receive a rating of 5 in all categories.

Figure 7: Ablation study on different modules of DYG. From left to right, new modules are progressively added on top of the previous setup: (a) User Edit, (b) w/ Drag-SDS, (c) w/ MTP, (d) w/ Two-Stage Training, (e) w/ RSP, (f) w/ SLE.

Figure 8: Ablation study of different score distillation loss functions (Original Scene, SDS, Drag-SDS + Inpainting UNet, Drag-SDS + SD UNet (Ours)). To visualize the 3D mask, we render it as a 2D mask, with the bright region indicating the masked area and the darker region the unmasked area. SDS [26] often causes blurriness in the desired editing area, while Drag-SDS with the inpainting UNet over-focuses on the mask, creating disharmonious color layers. By contrast, our method delivers harmonious editing results.

Figure 9: User study of different methods for 3D scene editing (Edit Effect and Scene Quality).
Analyzing in conjunction with Fig. 5 and Table 1, GS-Editor achieves a relatively high RIF score because of minimal geometric changes but has a low EE score, as the edits are less effective. GS-Ctrl often fails in geometric editing, resulting in a low EE score.

Table 1: Evaluation metrics. We report the Scene Quality (SQ), Editing Effect (EE), Retention of Initial Features (RIF), GPT-Overall (GPTO), and Aesthetic (AES) scores for different methods across various scenes. Gray text represents the evaluation metrics of the initial scene.

| Method | SQ↑ | EE↑ | RIF↑ | GPTO↑ | AES↑ |
| --- | --- | --- | --- | --- | --- |
| Init | 5 | - | 5 | - | 5.53 |
| SC-GS [13] | 2.69 | 2.318 | 2.28 | 2.4182 | 4.14 |
| GS-Editor [6] | 4.418 | 2.354 | 4.624 | 3.6542 | 5.28 |
| GS-Ctrl [37] | 4.162 | 2.16 | 4.392 | 3.4302 | 5.38 |
| 2D-Lifting | 3.116 | 3.234 | 3.212 | 3.192 | 4.85 |
| Ours | 4.434 | 4.42 | 4.626 | 4.486 | 5.36 |

2D-Lifting, which reconstructs scenes after 2D dragging, generally succeeds in geometric edits but suffers from multi-view inconsistencies, leading to blurry results and consequently low SQ scores. By contrast, our method achieves the best performance across all four metrics.

Aesthetic Score. We evaluate the aesthetic quality of 3D editing results using the open-source LAION Aesthetics Predictor, which rates image quality on a 0-10 scale. The rendered images of edited 3D scenes are scored, and the average is reported. As shown in the last column of Table 1, our method decreases by only 0.17 compared to the initial score, achieving better performance than SC-GS [13], GS-Editor [6], and 2D-Lifting. Notably, GS-Ctrl tends to oversaturate the overall scene colors, leading to higher aesthetic scores. For example, in the first column of Fig. 5, the face becomes smoother and more cartoon-like, resulting in a higher aesthetic score, but it fails to achieve the intended head-turning editing operation.
# 6.4 Ablation Study

Smooth Geometric Editing module. To evaluate the effectiveness of the design of our Smooth Geometric Editing module, we conduct an ablation study, shown in Fig. 7. From left to right, we progressively add new modules on top of the previous setup. Fig. 7(b): Only the Drag-SDS loss function is used to guide optimization, with all 3DGS parameters trainable. The results show a tendency to fit the target distribution through texture adjustments rather than position shifts, leaving artifacts where original Gaussians fail to move completely to the target region, such as the lower left corner of the face. Fig. 7(c): The MTP encoder is introduced, with the other Gaussian parameters trainable. While this alleviates the Gaussian artifact issue, the fine-grained dragging performance remains unsatisfactory. Fig. 7(d): A two-stage optimization strategy is introduced. In the first stage, only the MTP encoder and RSP decoder are optimized, aligning the target distribution via Gaussian displacements to build a scaffold for the edited scene. In the second stage, Gaussian parameters are learned to refine the scene representation. However, this approach unintentionally modifies the background, leading to issues like over-saturation and darkened colors. Fig. 7(e): Local Edit is applied, but it introduces cracks between the 3D mask region and surrounding areas, such as the region around the shoulders. Fig. 7(f): Finally, the Soft Local Edit strategy is introduced, achieving the highest visual quality with harmonious and consistent results.

Figure 10: Visualization of the positional changes of sampled 3D Gaussians after two-stage dragging (Init Scene, After Stage 1, After Stage 2). The positions of masked 3D Gaussians are represented as colored points, while others are shown in gray.

Figure 11: Multi-round Dragging: (a) multi-round dragging on different objects; (b) multi-round dragging on the same object. Each dragging operation is performed based on the results of the previous edit shown on the left.

Figure 12: Visualization of dragging on the generated scenes. For each edit, we show two views of both the original (OS) and edited scenes (ES).

Effectiveness of Drag-SDS. We conduct an ablation study on different distillation loss functions. All training strategies and modules are identical to the full model, with the only difference being the choice of the distillation loss function. As shown in Fig. 8, SDS [26] results in a blurred target region, failing to achieve the desired action, such as opening the mouth. In contrast, our proposed Drag-SDS enables the target edit; however, when it relies on the inpainting backbone, it focuses more on estimating the image distribution within the mask, leading to inconsistent colors between masked and unmasked regions. For example, the area around the mouth appears lighter, causing noticeable layering artifacts. In comparison, our approach, which utilizes the original SD [11, 33] UNet, pays greater attention to global information and effectively resolves this issue.
# 6.5 Multi-round Dragging

Due to the complexity and diversity of editing tasks, there arises a demand for multi-round dragging. As shown in Fig. 11, we explore the potential of extending DYG to multi-round dragging scenarios. In Fig. 11(a), we perform sequential edits on different targets (e.g., right leg, right arm, left leg), building upon the results of the previous round. Similarly, Fig. 11(b) shows multi-round editing applied to the same target (e.g., gradually raising the man's right arm). The results illustrate that DYG can be easily adapted to multi-round dragging scenarios while maintaining notable stability.

# 6.6 Dragging for 3D Generative Scenes

In addition to real-world scenes, we also explore the application of our method to editing 3D generative scenes. As shown in Fig. 12, we leverage Director3D [16] to generate two scenes with the text prompts "A faux-fur leopard print hat" and "A brown teddy bear in a toy shop," respectively. Subsequently, we apply drag-based editing to these scenes. To our delight, DYG generalizes strongly to 3D generative results, achieving high-quality and precise drag-based editing even in these synthetic scenarios. More examples of scene edits in generated scenarios can be found in the last two rows of Fig. 6.

# 7 LIMITATION AND CONCLUSION

**Limitation.** Our method distills prior knowledge from a 2D drag-based LDM to optimize the 3D Gaussian primitives. Although DYG satisfies a diverse range of 3D drag-based editing requirements, our 3D editing capabilities are inherently limited by the performance of 2D generative models. Therefore, advancements in 2D generative models can further drive the development of our method.

**Conclusion.** We present DYG, an effective drag-based scene editing method that enables users to conveniently perform flexible, fine-grained, high-quality edits on 3D Gaussian scenes using 3D masks and control points. Extensive experiments demonstrate the effectiveness and generalization of our method. Our future work includes improving interaction speed to achieve near-real-time 3D drag-based editing. Additionally, DYG can be extended to 4D dynamic processes, enabling dynamic editing results for 3D scenes.
# REFERENCES
|
| 369 |
+
|
| 370 |
+
[1] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. 2022. Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. IEEE, 5460-5469. https://doi.org/10.1109/CVPR52688.2022.00539
|
| 371 |
+
[2] Tim Brooks, Aleksander Holynski, and Alexei A Efros. 2023. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 18392-18402.
|
| 372 |
+
[3] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. 2022. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 16123-16133.
|
| 373 |
+
[4] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. 2022. Tensorf: Tensorial radiance fields. In European conference on computer vision. Springer, 333-350.
|
| 374 |
+
[5] Minghao Chen, Iro Laina, and Andrea Vedaldi. 2025. Dge: Direct gaussian 3d editing by consistent multi-view editing. In European Conference on Computer Vision. Springer, 74-92.
|
| 375 |
+
[6] Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. 2024. Gaussianeditor: Swift and controllable 3d editing with gaussian splattering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 21476-21485.
|
| 376 |
+
[7] Sara Fridovich-Keil, Giacomo Meanti, Frederik Rahbæk Warburg, Benjamin Recht, and Angjoo Kanazawa. 2023. K-planes: Explicit radiance fields in space, time, and appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12479-12488.
|
| 377 |
+
[8] Michelle Guo, Alireza Fathi, Jiajun Wu, and Thomas Funkhouser. 2020. Object-centric neural scene rendering. arXiv preprint arXiv:2012.08503 (2020).
|
| 378 |
+
[9] Ayaan Haque, Matthew Tancik, Alexei A Efros, Aleksander Holynski, and Angjoo Kanazawa. 2023. Instruct-nerf2merf: Editing 3d scenes with instructions. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 19740-19750.
|
| 379 |
+
[10] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. 2022. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626 (2022).
|
| 380 |
+
[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems 33 (2020), 6840-6851.
|
| 381 |
+
[12] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).
|
| 382 |
+
[13] Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. 2024. Sc-gs: Sparse-controlled gaussian splatting for editable dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4220-4230.
|
| 383 |
+
[14] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. 2023. Magic: Text-based real image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 6007-6017.
|
| 384 |
+
[15] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 2023. 3d gaussian splattering for real-time radiance field rendering. ACM Transactions on Graphics 42, 4 (2023), 1-14.
|
| 385 |
+
[16] Xinyang Li, Zhangyu Lai, Linning Xu, Yansong Qu, Liujuan Cao, Shengchuan Zhang, Bo Dai, and Rongrong Ji. 2024. Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text. arXiv preprint arXiv:2406.17601 (2024).
|
| 386 |
+
[17] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. 2023. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision. 9298-9309.
|
| 387 |
+
[18] Grace Luo, Trevor Darrell, Oliver Wang, Dan B Goldman, and Aleksander Holynski. 2024. Readout guidance: Learning control from diffusion features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8217-8227.
|
| 388 |
+
[19] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. 2021. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. Commun. ACM 65, 1 (2021), 99-106.
|
| 389 |
+
[20] Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, and Jian Zhang. 2023. Dragondiffusion: Enabling drag-style manipulation on diffusion models. arXiv preprint arXiv:2307.02421 (2023).
|
| 392 |
+
[21] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. 2022. Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG) 41, 4 (2022), 1-15.
|
| 393 |
+
[22] Shen Nie, Hanzhong Allan Guo, Cheng Lu, Yuhao Zhou, Chenyu Zheng, and Chongxuan Li. 2023. The blessing of randomness: Sde beats ode in general diffusion-based image editing. arXiv preprint arXiv:2311.01410 (2023).
|
| 394 |
+
[23] Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, and Felix Heide. 2021. Neural scene graphs for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2856-2865.
|
| 395 |
+
[24] Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, and Christian Theobalt. 2023. Drag your gan: Interactive point-based manipulation on the generative image manifold. In ACM SIGGRAPH 2023 Conference Proceedings. 1-11.
|
| 396 |
+
[25] Yong-Hyun Park, Mingi Kwon, Jaewoong Choi, Junghyo Jo, and Youngjung Uh. 2023. Understanding the Latent Space of Diffusion Models through the Lens of Riemannian Geometry. In Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.), Vol. 36. Curran Associates, Inc., 24129-24142. https://proceedings.neurips.cc/paper_files/paper/2023/file/4bfcebedf7a2967c410b64670f27f904-Paper-Conference.pdf
|
| 397 |
+
[26] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. 2022. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988 (2022).
|
| 398 |
+
[27] Yansong Qu, Shaohui Dai, Xinyang Li, Jianghang Lin, Liujuan Cao, Shengchuan Zhang, and Rongrong Ji. 2024. Goi: Find 3d gaussians of interest with an estimable open-vocabulary semantic-space hyperplane. In Proceedings of the 32nd ACM International Conference on Multimedia. 5328-5337.
|
| 399 |
+
[28] Yansong Qu, Yuze Wang, and Yue Qi. 2023. Sg-nerf: Semantic-guided point-based neural radiance fields. In 2023 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 570-575.
|
| 400 |
+
[29] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 1, 2 (2022), 3.
|
| 401 |
+
[30] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 2022. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35 (2022), 25278-25294.
|
| 402 |
+
[31] Yujun Shi, Jun Hao Liew, Hanshu Yan, Vincent YF Tan, and Jiashi Feng. 2024. InstaDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos. arXiv preprint arXiv:2405.13722 (2024).
|
| 403 |
+
[32] Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent YF Tan, and Song Bai. 2024. Dragdiffusion: Harnessing diffusion models for interactive point-based image editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8839-8849.
|
| 404 |
+
[33] Jiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502 (2020).
|
| 405 |
+
[34] Junjie Wang, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, and Qi Tian. 2024. Gaussian editor: Editing 3d gaussians delicately with text instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20902-20911.
|
| 406 |
+
[35] Yuze Wang, Junyi Wang, Yansong Qu, and Yue Qi. 2023. Rip-nerf: learning rotation-invariant point-based neural radiance field for fine-grained editing and compositing. In Proceedings of the 2023 ACM International Conference on Multimedia Retrieval. 125-134.
|
| 407 |
+
[36] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. 2024. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. Advances in Neural Information Processing Systems 36 (2024).
|
| 408 |
+
[37] Jing Wu, Jia-Wang Bian, Xinghui Li, Guangrun Wang, Ian Reid, Philip Torr, and Victor Adrian Prisacariu. 2025. Gaussctrl: Multi-view consistent text-driven 3d gaussian splatting editing. In European Conference on Computer Vision. Springer, 55-71.
|
| 409 |
+
[38] Tianhan Xu and Tatsuya Harada. 2022. Deforming radiance fields with cages. In European Conference on Computer Vision. Springer, 159-175.
|
| 410 |
+
[39] Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, and Guofeng Zhang. 2022. Neumesh: Learning disentangled neural mesh-based implicit field for geometry and texture editing. In European Conference on Computer Vision. Springer, 597-614.
|
| 411 |
+
[40] Bangbang Yang, Yinda Zhang, Yinghao Xu, Yijin Li, Han Zhou, Hujun Bao, Guofeng Zhang, and Zhaopeng Cui. 2021. Learning object-compositional neural radiance field for editable scene rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 13779-13788.
|
| 412 |
+
[41] Xiaofeng Yang, Yiwen Chen, Cheng Chen, Chi Zhang, Yi Xu, Xulei Yang, Fayao Liu, and Guosheng Lin. 2023. Learn to optimize denoising scores for 3d generation: A unified and improved diffusion prior on nerf and 3d gaussian splatting. arXiv preprint arXiv:2312.04820 (2023).
|
| 413 |
+
[42] Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, and Lin Gao. 2022. Nerf-editing: geometry editing of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 18353-18364.
|
| 416 |
+
[43] Junzhe Zhu, Peiye Zhuang, and Sanmi Koyejo. 2023. Hifa: High-fidelity text-to-3d generation with advanced diffusion guidance. arXiv preprint arXiv:2305.18766 (2023).
|
| 417 |
+
[44] Jingyu Zhuang, Di Kang, Yan-Pei Cao, Guanbin Li, Liang Lin, and Ying Shan. 2024. Tip-editor: An accurate 3d editor following both text-prompts and image-prompts. ACM Transactions on Graphics (TOG) 43, 4 (2024), 1-12.
|
| 418 |
+
[45] Jingyu Zhuang, Chen Wang, Liang Lin, Lingjie Liu, and Guanbin Li. 2023. Dreameditor: Text-driven 3d scene editing with neural fields. In SIGGRAPH Asia 2023 Conference Papers. 1-10.
|
| 419 |
+
[46] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. 2024. Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 10324-10335.
|
| 420 |
+
|
| 421 |
+
# A IMPLEMENTATION DETAILS
|
| 422 |
+
|
| 423 |
+
# A.1 Training Setup
|
| 424 |
+
|
| 425 |
+
For 3DGS reconstruction, we optimize the Gaussians over 7,000 iterations and set the spherical harmonics to degree 0. In our experiments, the batch size is set to 4, and the learning rates for the Gaussians' color, opacity, scale, and rotation are set to $2.5 \times 10^{-3}$, $2.5 \times 10^{-3}$, $2.5 \times 10^{-4}$, and $2.5 \times 10^{-3}$, respectively. The shifts in Gaussian position are obtained entirely through the MTP encoder and the RSP decoder; the learning rate for the MTP encoder is set to $1 \times 10^{-3}$, while that for the RSP decoder is $5 \times 10^{-4}$. In the first stage, we freeze the Gaussian attributes and train only MTP and RSP. In the second stage, we lower the learning rate of MTP to $1 \times 10^{-4}$ to stabilize the scene's geometric structure and begin training the other Gaussian attributes. For Drag-SDS, we set $\lambda_{\mathrm{lat}} = 1$, $\lambda_{\mathrm{img}} = 0.1$, and $\lambda_{\mathrm{lora}} = 1$; the learning rate of the learnable embedding $\hat{y}_{\emptyset}$ is set to $1 \times 10^{-3}$, and the LoRA rank is set to 16 with a learning rate of $5 \times 10^{-4}$.
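For convenience, the hyperparameters above can be collected into a single configuration. The sketch below is illustrative only; the key names are ours and do not correspond to a released codebase.

```python
# Minimal summary of the training hyperparameters listed above (key names are hypothetical).
TRAINING_CONFIG = {
    "gaussian_iterations": 7000,      # 3DGS reconstruction iterations
    "sh_degree": 0,                   # spherical harmonics degree
    "batch_size": 4,
    "learning_rates": {
        "color": 2.5e-3,
        "opacity": 2.5e-3,
        "scale": 2.5e-4,
        "rotation": 2.5e-3,
        "mtp_encoder_stage1": 1e-3,   # lowered to 1e-4 in the second stage
        "mtp_encoder_stage2": 1e-4,
        "rsp_decoder": 5e-4,
        "null_embedding": 1e-3,       # learnable embedding used for the source prediction
        "lora": 5e-4,
    },
    "lora_rank": 16,
    "drag_sds_weights": {"lat": 1.0, "img": 0.1, "lora": 1.0},
}
```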
|
| 426 |
+
|
| 427 |
+
# A.2 Two-Stage Dragging
|
| 428 |
+
|
| 429 |
+
During the entire training process, we sample the diffusion timestep $t$ using a cosine annealing schedule $t = f(s) = \frac{1}{2} (T_{\mathrm{max}} - T_{\mathrm{min}})(1 + \cos (\pi s)) + T_{\mathrm{min}}$, where $s$ is the current training epoch ratio and $[T_{\mathrm{max}}, T_{\mathrm{min}}] = [0.98, 0.02]$ is the annealing range for the diffusion timestep. Inspired by previous works that utilize latent diffusion models for image editing [25, 32], we hypothesize that the diffusion model optimizes the overall geometric structure of the image at higher diffusion timesteps ($> 0.7T$) and refines the texture at lower timesteps ($< 0.7T$). Therefore, we derive the two-stage training epochs from the diffusion timestep: with the chosen timestep threshold $T_{\mathrm{threshold}} = 0.7$, the first-stage geometric reconstruction lasts until a training epoch ratio of $s = f^{-1}(T_{\mathrm{threshold}}) = 0.36$.
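The schedule and the stage boundary can be checked numerically; the snippet below only re-evaluates the formula given above (the variable names are ours).

```python
import math

T_MAX, T_MIN = 0.98, 0.02  # annealing range for the diffusion timestep ratio

def timestep(s: float) -> float:
    """Cosine-annealed diffusion timestep ratio at training epoch ratio s in [0, 1]."""
    return 0.5 * (T_MAX - T_MIN) * (1.0 + math.cos(math.pi * s)) + T_MIN

def inverse_timestep(t: float) -> float:
    """Training epoch ratio s at which the schedule reaches timestep ratio t."""
    return math.acos(2.0 * (t - T_MIN) / (T_MAX - T_MIN) - 1.0) / math.pi

print(round(inverse_timestep(0.7), 2))  # 0.36: end of the first (geometric) stage
```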
|
| 430 |
+
|
| 431 |
+
# A.3 Classifier-Free Guidance
|
| 432 |
+
|
| 433 |
+
Here, we provide a detailed description of the classifier-free guidance (CFG) scale we used. Unlike the conventional LDM, which uses a fixed CFG, we follow the approach in [31] and use CFG annealing to avoid the over-saturation issue. Specifically, we employ a CFG inverse square annealing function:
|
| 434 |
+
|
| 435 |
+
$$
|
| 436 |
+
\omega(s) = \left(\omega_{\max} - 1\right) \times (1 - s)^{2} + 1, \quad \text{with } \omega_{\max} = 4 \tag{12}
|
| 437 |
+
$$
|
| 438 |
+
|
| 439 |
+
Our target noise prediction uses CFG, while the source noise prediction does not:
|
| 440 |
+
|
| 441 |
+
$$
|
| 442 |
+
\epsilon_{\mathrm{tgt}} = \omega(s) \left( \epsilon_{\theta}(z_{t}, t, y) - \epsilon_{\theta}(z_{t}, t, \emptyset) \right) + \epsilon_{\theta}(z_{t}, t, \emptyset) \tag{13}
|
| 443 |
+
$$
|
| 444 |
+
|
| 445 |
+
$$
|
| 446 |
+
\epsilon_{\mathrm{src}} = \hat{\epsilon}_{\phi}\left(x_{t}, t, \hat{y}_{\emptyset}\right) \tag{14}
|
| 447 |
+
$$
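A minimal sketch of how the annealed guidance weight and the two noise predictions combine is given below; the noise-prediction tensors are placeholders, not the actual model interfaces.

```python
import torch

def cfg_weight(s: float, omega_max: float = 4.0) -> float:
    """Inverse-square CFG annealing (Eq. 12): decays from omega_max at s=0 to 1 at s=1."""
    return (omega_max - 1.0) * (1.0 - s) ** 2 + 1.0

def target_noise(eps_cond: torch.Tensor, eps_uncond: torch.Tensor, s: float) -> torch.Tensor:
    """Classifier-free-guided target prediction (Eq. 13).

    eps_cond   ~ eps_theta(z_t, t, y), the conditional prediction
    eps_uncond ~ eps_theta(z_t, t, null), the unconditional prediction
    """
    w = cfg_weight(s)
    return w * (eps_cond - eps_uncond) + eps_uncond

# The source prediction (Eq. 14) comes from the LoRA-adapted model conditioned on the
# learnable embedding and is used as-is, without classifier-free guidance.
```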
|
| 448 |
+
|
| 449 |
+
# A.4 Total Loss
|
| 450 |
+
|
| 451 |
+
We optimize the total loss for the 3D Gaussians as follows, where $\lambda_{\text{drag-sds}} = 1$ and $\lambda_{\text{rr}} = 2500$:
|
| 452 |
+
|
| 453 |
+
$$
|
| 454 |
+
\mathcal{L} = \lambda_{\text{drag-sds}} \mathcal{L}_{\text{Drag-SDS}} + \lambda_{\text{rr}} \mathcal{L}_{RR} \tag{15}
|
| 455 |
+
$$
|
| 456 |
+
|
| 457 |
+

|
| 458 |
+
Figure 13: The MLP structure employed in the RSP Decoder.
|
| 459 |
+
|
| 460 |
+

|
| 461 |
+
Original Scene
|
| 462 |
+
Figure 14: Visualization of our failure case. We render the Gaussians in the corresponding viewpoint to obtain the rendered image and 2D mask, and project the 3D handle points and target points into 2D as the input for Lightning-Drag; the figure illustrates the output results. Lightning-Drag fails in this case, and since our method uses Lightning-Drag as guidance, it also struggles to achieve the desired results.
|
| 463 |
+
|
| 464 |
+

|
| 465 |
+
Lightning-Drag
|
| 466 |
+
|
| 467 |
+

|
| 468 |
+
Ours
|
| 469 |
+
|
| 470 |
+
# B DETAILS OF THE RSP DECODER
|
| 471 |
+
|
| 472 |
+
Fig. 13 presents the structure of the MLP employed in our RSP. The MLP accepts multi-scale features $f$ as input and outputs positional shifts $\Delta P$. In our RSP Decoder, there are two such MLPs, $\mathcal{N}_1$ and $\mathcal{N}_2$, which predict the positional shifts of Gaussians in the desired and undesired editing areas, respectively. In our experiments, we found that directly using $\mathcal{N}_2$ to predict positional shifts in the undesired editing area leads to severe geometric tearing issues. To address this, we designed Eq. (16) to prevent such problems.
|
| 473 |
+
|
| 474 |
+
$$
|
| 475 |
+
\Delta p = \begin{cases} \mathcal{N}_{1}(f), & \text{for } g_{i} \in \mathcal{G}_{m}, \\ \operatorname{sg}\left(\mathcal{N}_{1}(f)\right) + \mathcal{N}_{2}(\operatorname{sg}(f)), & \text{for } g_{i} \in \mathcal{G}_{um}, \end{cases} \tag{16}
|
| 476 |
+
$$
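In practice, the stop-gradient $\operatorname{sg}(\cdot)$ in Eq. (16) can be realized with `detach()`. Below is a minimal PyTorch-style sketch, where the two MLPs and the editing-region mask are placeholders rather than the actual modules.

```python
import torch

def positional_shift(f: torch.Tensor, in_edit_region: torch.Tensor,
                     N1: torch.nn.Module, N2: torch.nn.Module) -> torch.Tensor:
    """Per-Gaussian positional shift following Eq. (16).

    f              : multi-scale features, one row per Gaussian
    in_edit_region : boolean mask, True for Gaussians in the desired editing area (G_m)
    N1, N2         : the two RSP MLPs (placeholders for the actual modules)
    """
    shift_edit = N1(f)                                  # gradients flow into N1
    # sg(.) is implemented with detach(): N2 only refines the frozen N1 prediction
    shift_keep = shift_edit.detach() + N2(f.detach())
    return torch.where(in_edit_region.unsqueeze(-1), shift_edit, shift_keep)
```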
|
| 477 |
+
|
| 478 |
+
# C FAILURE CASE
|
| 479 |
+
|
| 480 |
+
Since we use the pretrained drag-based LDM Lightning-Drag [31], our method also inherits its failure cases. Fig. 14 shows a user input where the user expects the subject's eye to be closed; both Lightning-Drag and our method fail in this case.
|
2501.18xxx/2501.18672/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9188f396b6e5d1d7c8fc2bfe457795ec18492bfc1851ace710ee683186b5ee57
|
| 3 |
+
size 1164687
|
2501.18xxx/2501.18672/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2502.11xxx/2502.11028/a77672cf-71a5-4963-8421-316afe80531d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:15c1a3f159fff98f82bfb74d4ef5e5cd04f9608f385b38fd974a614ac441a418
|
| 3 |
+
size 2903449
|
2502.11xxx/2502.11028/full.md
ADDED
|
@@ -0,0 +1,538 @@
|
|
|
| 1 |
+
# Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models
|
| 2 |
+
|
| 3 |
+
Prateek Chhikara
|
| 4 |
+
|
| 5 |
+
University of Southern California, Los Angeles, USA
|
| 6 |
+
|
| 7 |
+
pchikar@usc.edu
|
| 8 |
+
|
| 9 |
+
Reviewed on OpenReview: https://openreview.net/forum?id=lyaHnHDdZl
|
| 10 |
+
|
| 11 |
+
# Abstract
|
| 12 |
+
|
| 13 |
+
Large Language Models (LLMs) show remarkable proficiency in natural language tasks, yet their frequent overconfidence—misalignment between predicted confidence and true correctness—poses significant risks in critical decision-making applications. We present a comprehensive analysis on calibration in LLMs across nine LLMs and three factual Question-Answering (QA) datasets, systematically comparing standard free-generation settings against structured distractor-augmented prompts. Our evaluation reveals that explicitly incorporating distractors can substantially mitigate miscalibration, achieving relative accuracy improvements up to $460\%$ and ECE reductions up to $90\%$ . Despite general trends, we uncover nuanced findings: large RLHF-tuned models display inherent calibration strengths but can paradoxically suffer increased miscalibration on easier queries, whereas smaller models benefit disproportionately from distractor prompts but remain significantly miscalibrated. Through detailed analyses across question types, we identify persistent calibration failures, particularly in person-based queries. We conclude with concrete recommendations—targeted fine-tuning, structured prompting, and strategic model choice—to ensure reliable, trustworthy LLM deployments. Code is publicly available at: https://github.com/prateekchhikara/llms-calibration
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: An instance from SimpleQA dataset where an LLM assigns high confidence to an incorrect answer.
|
| 17 |
+
|
| 18 |
+
# 1 Introduction
|
| 19 |
+
|
| 20 |
+
Large Language Models (LLMs) have significantly advanced natural language understanding, achieving state-of-the-art results across tasks including conversational AI (Skjuve et al., 2024; Zhang, 2024), scientific discovery (Kumar, 2024), and multimodal systems (Zhang et al., 2023; Chhikara et al., 2024; Zhang et al.). As LLMs increasingly guide critical decisions in sensitive domains—such as healthcare, finance, and law—the reliability of their confidence estimates becomes paramount. Misalignment between model confidence and actual correctness, known as miscalibration, poses severe risks, potentially eroding user trust and causing costly or hazardous errors (Dhuliawala et al., 2023; Geng et al., 2024). For example, as illustrated in Figure 1, when asked "Who received the IEEE Frank Rosenblatt Award in 2010?", a leading LLM confidently but incorrectly answers "Geoffrey Hinton" with a confidence of $93\%$ , despite the correct answer being "Michio Sugeno". Such pronounced overconfidence can lead users to mistakenly trust erroneous outputs—particularly problematic in high-stakes applications such as medical diagnoses or financial decisions. Well-calibrated models, on the other hand, report confidence scores accurately reflecting their true reliability, thus enabling systems to flag uncertain predictions for human oversight and significantly mitigating real-world risks.
|
| 21 |
+
|
| 22 |
+
Modern Question-Answering (QA) pipelines adopt "one-right+several-wrong" format — whether through retrieval-augmented candidate spans, knowledge-graph sibling entities, self-consistency checks, or ensemble methods—to improve answer selection. However, these structured, distractor-rich settings introduce novel challenges for confidence estimation—challenges that classical post-hoc calibration methods (temperature scaling, Platt scaling, isotonic regression (Guo et al., 2017)) were not originally designed to address. Although such methods have proven effective on small- to medium-scale neural networks, their applicability to today's large-scale LLMs under real-world, distractor-heavy conditions remains unclear. Prior work has examined isolated factors—model scale, architecture (dense vs. mixture of experts (MoE)), and fine-tuning regime (supervised fine-tuning (SFT) vs. reinforcement learning from human feedback (RLHF)) (Leng et al., 2024; Li et al., 2024)—but has not extensively examined how explicit distractors, ubiquitous in deployed QA systems, affect calibration accuracy and confidence ranking for modern LLMs.
|
| 23 |
+
|
| 24 |
+
Mitigating overconfidence through distractors Research in cognitive psychology demonstrates that human overconfidence can be reduced by explicitly considering alternative answers before making decisions (Lord et al., 1984; Mussweiler et al., 2000). Inspired by this "consider-the-opposite" strategy, we investigate whether presenting LLMs with plausible distractors similarly mitigates their systematic overconfidence and enhances calibration. To be specific, we conduct the first large-scale empirical study of LLM calibration, comparing performance under standard (free-generation) and distractor-augmented settings. Our contributions are fourfold: (1) we introduce a unified calibration benchmark evaluating nine state-of-the-art LLMs—spanning model sizes (8B-70B to greater than 1T), architectures (dense vs. MoE), and fine-tuning methods (SFT vs. RLHF)—across three factual QA datasets (SimpleQA, FaVIQ, TriviaQA); (2) we propose a structured distractor-augmented evaluation paradigm, where models select answers from one correct and multiple plausible incorrect options, enabling simultaneous assessment of accuracy improvements and Expected Calibration Error (ECE); (3) we perform fine-grained analyses across question types (e.g., person, date) to identify conditions of severe miscalibration; and (4) we systematically disentangle how scale, tuning regime, and architecture independently influence model calibration and responsiveness to distractors.
|
| 25 |
+
|
| 26 |
+
# 2 Related Work
|
| 27 |
+
|
| 28 |
+
Intrinsic calibration Methods directly elicit uncertainty from LLMs. Prompt-based approaches that verbalize confidence (Tian et al., 2023; Mielke et al., 2022; Lin et al.) or aggregate multiple outputs (Xiong et al.) have demonstrated effectiveness, particularly for black-box models.
|
| 29 |
+
|
| 30 |
+
Fine-tuning strategies Approaches such as calibration-aware RLHF (Leng et al., 2024) and Mixup-style data augmentation (Park & Caragea, 2022) aim to optimize calibration during training. While these methods reduce ECE, they remain sensitive to distribution shifts (Liu et al.).
|
| 31 |
+
|
| 32 |
+
Calibration during pre-training and alignment Chen et al. (2023) show that calibration emerges early in self-supervised pre-training, and Zhu et al. (2023) demonstrate that instruction tuning and RLHF can preserve or even enhance these gains. Jiang et al. (2021) find that post-hoc methods like temperature scaling often fail to align confidence with accuracy in factual QA. However, the effects of explicit distractor-based prompting on LLM calibration remain unexplored.
|
| 33 |
+
|
| 34 |
+
Post-hoc calibration Techniques adjust predictions after training, with classical approaches like temperature scaling (Guo et al., 2017) still underexplored for contemporary LLMs. Recent studies indicate persistent overconfidence even at larger scales, highlighting a potential degradation in calibration performance with increased model size (Zhou et al., 2024a;b).
|
| 35 |
+
|
| 36 |
+
Despite extensive research, existing literature lacks a systematic exploration of structured distractor effects on calibration. Motivated by psychological findings that considering alternative answers can reduce human overconfidence, our work introduces a structured distractor-augmented evaluation framework. Unlike prior methods, we empirically investigate how explicit distractor scenarios—common in practical applications such as retrieval-augmented generation and multiple-choice contexts—impact LLM calibration. Additionally, we conduct detailed, fine-grained analyses across different question types (e.g., person, date, place), uncovering context-dependent calibration challenges previously overlooked. Our findings thus address critical gaps in calibration research, offering practical insights to enhance the reliability and trustworthiness of LLM deployments.
|
| 37 |
+
|
| 38 |
+
# 3 Experimental Setup
|
| 39 |
+
|
| 40 |
+
# 3.1 Evaluation Datasets:
|
| 41 |
+
|
| 42 |
+
SimpleQA We use the SimpleQA dataset (Wei et al., 2024), which provides a reliable benchmark for evaluating LLM factual accuracy and calibration. Comprising short, fact-seeking queries with clearly defined correct answers, SimpleQA enables precise measurement of model confidence and alignment with factual correctness. Its high-quality annotations, verified by multiple independent AI trainers, ensure accuracy and unambiguity, making it well-suited for calibration assessment. The dataset contains 4326 question-answer pairs.
|
| 43 |
+
|
| 44 |
+
FaVIQ (Park et al., 2022) We select data points from test subset of the R-set. The dataset initially contains 5877 data points, out of which we focus exclusively on the 2922 data points labeled as "supports," indicating that the provided answer is correct. FaVIQ is particularly appropriate for our experiments due to its construction methodology derived from real-world information-seeking questions. This design inherently reduces strong lexical biases found in other crowdsourced datasets, promoting nuanced semantic understanding.
|
| 45 |
+
|
| 46 |
+
TriviaQA We use the TriviaQA dataset (Joshi et al., 2017), for evaluating open-domain question answering and factual knowledge retrieval. For our experiments, we select first 1000 question-answer pairs from the validation split of the rc.web.no_content subset, ensuring a diverse yet controlled evaluation set. By restricting our selection to the no-content class, we focus on settings where models must rely purely on prior knowledge without the assistance of retrieved supporting context, isolating intrinsic model calibration behavior.
|
| 47 |
+
|
| 48 |
+
# 3.2 Evaluation Methods
|
| 49 |
+
|
| 50 |
+
Let our evaluation set be
|
| 51 |
+
|
| 52 |
+
$$
|
| 53 |
+
\mathcal{S} = \left\{ (q_{i}, a_{i}) \right\}_{i = 1}^{n},
|
| 54 |
+
$$
|
| 55 |
+
|
| 56 |
+
where $q_{i}$ is the $i$ -th question and $a_{i}$ its ground-truth answer. We compare two prompting regimes:
|
| 57 |
+
|
| 58 |
+
Free-generation baseline $(\mathcal{N})$ We prepend each $q_{i}$ with a fixed prompt template $\pi_N$ and let the model generate an answer together with its confidence:
|
| 59 |
+
|
| 60 |
+
$$
|
| 61 |
+
\mathbf{y}_{i}^{(N)}, \mathbf{c}_{i}^{(N)} = \operatorname{LLM}\left(\pi_{N} \| q_{i}\right),
|
| 62 |
+
$$
|
| 63 |
+
|
| 64 |
+
where the model's final answer is $\mathbf{y}_i^{(N)}$ and the associated confidence is $\mathbf{c}_i^{(N)}$ .
|
| 65 |
+
|
| 66 |
+
Distractor-augmented setting $(\mathcal{D})$ For each $(q_i, a_i)$ we sample three distractors $\{d_{i,1}, d_{i,2}, d_{i,3}\}$ and form the choice list
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
\mathcal{C}_{i} = \operatorname{shuffle}\left(\{a_{i}\} \cup \{d_{i, j}\}_{j = 1}^{3}\right).
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
We then feed the model:
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
\mathbf{y}_{i}^{(D)}, \mathbf{c}_{i}^{(D)} = \operatorname{LLM}\left(\pi_{D} \| q_{i} \| \mathcal{C}_{i}\right),
|
| 76 |
+
$$
|
| 77 |
+
|
| 78 |
+
and extract the model's final answer $\mathbf{y}_i^{(D)}$ and record the associated confidence $\mathbf{c}_i^{(D)}$ . Prompt templates $\pi_N, \pi_D$ are provided in Appendix A. Our distractors were generated using GPT-4o-mini with a carefully designed prompt to ensure they were factually incorrect yet contextually plausible. Specifically, for each question-answer pair in the datasets, we used GPT-4o-mini to generate three distractors that (i) matched the expected answer type (e.g., dates for date-based questions), (ii) remained distinct from the correct answer, and (iii) maintained comparable specificity and context. To further validate plausibility, we manually inspected over 500 randomly sampled examples from the SimpleQA dataset, confirming that the generated distractors were consistently plausible and contextually relevant.
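The construction of $\mathcal{C}_i$ and the distractor-augmented prompt amounts to shuffling the gold answer with the three generated distractors. The sketch below is illustrative only; the prompt wording is ours, not the exact template $\pi_D$ from Appendix A.

```python
import random

def build_distractor_prompt(question: str, answer: str, distractors: list[str], seed: int = 0) -> str:
    """Form the distractor-augmented prompt: gold answer shuffled with three distractors."""
    choices = [answer] + list(distractors)     # {a_i} union {d_ij}
    random.Random(seed).shuffle(choices)       # shuffle(.) in the equation above
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    # Illustrative wording; the paper's actual template pi_D is given in Appendix A.
    return (f"Question: {question}\n{options}\n"
            "Select the correct option and report your confidence (0-100).")
```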
|
| 79 |
+
|
| 80 |
+
Why Elicited Confidence? We measure confidence via elicited self-reports (0-100) because they capture task-level belief ("How sure are you that your answer is correct?") rather than token-level fluency artifacts. Prior work shows that verbalized probabilities can better reflect correctness than raw token likelihoods, which are sensitive to phrasing and tokenization (Lin et al.). In RLHF-tuned systems, elicited confidence has also been observed to track calibration more reliably than log-probabilities, which can degrade post-alignment (Tian et al., 2023). This aligns with "linguistic calibration" evidence that making models state their confidence reduces overconfidence and improves user-facing transparency (Mielke et al., 2022). Elicitation is also the only uniformly available signal across the nine black-box and open-weight models we evaluate, enabling apples-to-apples comparison. We do not claim it is the sole or optimal measure: logit-based margins/entropy and self-consistency (majority-vote variance) are complementary signals. Our findings should therefore be interpreted in the context of elicited confidence; we add this clarification to promote comparability and to reflect the literature's guidance on when verbalized uncertainty is informative.
|
| 81 |
+
|
| 82 |
+
# 3.3 Selected LLMs
|
| 83 |
+
|
| 84 |
+
We select nine representative models, spanning three major families—OpenAI's GPT-4 series, Meta's LLaMA lineage, and two leading open-weight models (Gemma-2 and Qwen-qwq). We access these models through the OpenAI<sup>1</sup> and GroqCloud<sup>2</sup> API services. More details about the selected models and their taxonomy are in Table 1.
|
| 85 |
+
|
| 86 |
+
GPT-4 Family The GPT-4 family consists of three variants: GPT-4o, GPT-4-turbo, and GPT-4o-mini (a smaller, assumed to be 8B-parameter model). All three support an extended 128K-token context window and are instruction-tuned via SFT followed by RLHF to optimize conversational quality. They differ primarily in total parameter count—leading to different trade-offs in inference latency and compute cost.
|
| 87 |
+
|
| 88 |
+
LLaMA Lineage Meta's LLaMA-3 (Dubey et al., 2024) series spans three dense checkpoints: an 8B base with 8K window, an 8B "Instant" assistant-tuned model with 128K window, and a 70B base (8K window). Each uses Grouped-Query Attention (GQA) (Ainslie et al.) for efficient long-context processing; all of these variants undergo SFT and RLHF. In contrast, LLaMA-4-Scout-17b<sup>3</sup> adopts a 16-expert MoE transformer which supports up to 10M tokens, and is fine-tuned with both SFT and RLHF for reasoning.
|
| 89 |
+
|
| 90 |
+
Table 1: Taxonomy of selected LLMs showing differences in training and fine-tuning approaches, where FT (fine-tuning) and IT (instruct-tuning).
|
| 91 |
+
|
| 92 |
+
<table><tr><td>Model (Params / Context)</td><td>Architecture</td><td>Dataset Type</td><td>Training Strategy</td><td>FT</td><td>IT</td></tr><tr><td>GPT-4o (undisclosed / 128K)</td><td>Dense</td><td>Multi-modal (web text, code, images, audio transcripts)</td><td>Pre-training + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>GPT-4o-mini (undisclosed / 128K)</td><td>Dense</td><td>Multi-modal (web text, code, images)</td><td>SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>GPT-4-turbo (undisclosed / 128K)</td><td>Dense</td><td>Multi-modal (web text, code, images)</td><td>Pre-training + SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>LLaMA-3.1-8B-Instant (8B / 128K)</td><td>Dense (GQA)</td><td>15T tokens - Public (web, code, multilingual)</td><td>Pre-training + SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>LLaMA-3-8B-Instruct (8B / 8K)</td><td>Dense (GQA)</td><td>15T tokens - Public (web, code, multilingual)</td><td>Pre-training + SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>LLaMA-3-70B-Instruct (70B / 8K)</td><td>Dense (GQA)</td><td>15T tokens - Public (web, code, multilingual)</td><td>Pre-training + SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>LLaMA-4-Scout-17B (17B / 10M)</td><td>MoE (16 experts)</td><td>40T tokens - Mixed (text + vision, multilingual)</td><td>MoE Pre-training + SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>Gemma2-9B-it (9B / 8K)</td><td>Dense (GQA, interleaved local-global)</td><td>8T tokens - Public (web, academic)</td><td>Distillation + SFT + RLHF</td><td>✓</td><td>✓</td></tr><tr><td>Qwen-qwq-32B (32B / 131K)</td><td>Dense (GQA)</td><td>18T tokens - Public (multilingual; web text, code, scientific lit.)</td><td>Pre-training + SFT</td><td>✓</td><td>✓</td></tr></table>
|
| 93 |
+
|
| 94 |
+
Open-Weight Alternatives Gemma2-9b-it (Team et al., 2024) leverages knowledge-distillation pretraining on 8T tokens—with interleaved local-global and group-query attention—followed by instruction fine-tuning (SFT + Direct Preference Optimization (DPO) (Rafailov et al., 2023)) within 8K-token context. By contrast, Qwen-qwq-32b (Hui et al., 2024; Yang et al., 2024) is a 32B-parameter dense model (64 layers, Rotary Positional Embedding (RoPE) (Su et al., 2024), SwiGLU (Dauphin et al., 2017)) pre-trained on 18T multilingual tokens without RLHF.
|
| 95 |
+
|
| 96 |
+
The GPT-4 family and LLaMA-3 series are dense models trained on massive multimodal or text-only corpora, each undergoing both SFT and RLHF before instruct-tuning. In contrast, LLaMA-4-Scout employs a 16-expert MoE design over mixed vision-text data, while Gemma2-9b-it and Qwen-qwq-32b explore distillation-based and pure SFT regimes, respectively. Despite these varied strategies, all nine models receive dedicated fine-tuning and instruction-tuning to optimize performance and calibration in downstream QA and conversational tasks.
|
| 97 |
+
|
| 98 |
+
# 3.4 Evaluation Criteria
|
| 99 |
+
|
| 100 |
+
Following prior work, we use GPT-4o-mini as an LLM-based judge to classify responses as CORRECT, INCORRECT, or NOT_ATTEMPTED (Packer et al., 2023; Wei et al., 2024; Chhikara et al., 2025). A response is CORRECT if it fully captures the gold target's key information without contradiction, allowing minor variations in wording, order, or hedging. It is INCORRECT if it contains factual errors, contradictions, or misleading speculation, even if hedged. NOT_ATTEMPTED applies when a response lacks essential information without introducing errors, including vague or evasive answers.
|
| 101 |
+
|
| 102 |
+
Table 2: Performance metrics of LLMs in the Normal $(\mathcal{N})$ and Distractor $(\mathcal{D})$ settings on the SimpleQA, FaVIQ, and TriviaQA datasets, including accuracy (correct), NOT_ATTEMPTED (na), ECE, and the number of helped $(\mathcal{D}_{helped})$ and harmed $(\mathcal{D}_{harmed})$ instances with their percentages.
|
| 103 |
+
|
| 104 |
+
<table><tr><td>Dataset / LLMs</td><td>Ncorrect</td><td>Na</td><td>NECE</td><td>Dcorrect</td><td>Dna</td><td>DECE</td><td>Dhelped</td><td>Dharmed</td></tr><tr><td colspan="9">SimpleQA</td></tr><tr><td>GPT-4o-mini</td><td>8.46%</td><td>6.80%</td><td>0.750</td><td>47.43%</td><td>0.02%</td><td>0.320</td><td>1644 (93.78%)</td><td>109 (6.22%)</td></tr><tr><td>GPT-4-turbo</td><td>20.37%</td><td>6.17%</td><td>0.612</td><td>65.40%</td><td>0.00%</td><td>0.165</td><td>1877 (95.86%)</td><td>81 (4.14%)</td></tr><tr><td>GPT-4o</td><td>35.14%</td><td>7.88%</td><td>0.450</td><td>73.42%</td><td>0.02%</td><td>0.037</td><td>1569 (91.97%)</td><td>137 (8.03%)</td></tr><tr><td>LLaMA-3.1-8b-instant∞</td><td>5.58%</td><td>18.78%</td><td>0.799</td><td>44.64%</td><td>0.12%</td><td>0.367</td><td>1355 (95.29%)</td><td>67 (4.71%)</td></tr><tr><td>LLaMA-3-8B-8192∞</td><td>4.79%</td><td>21.20%</td><td>0.810</td><td>44.01%</td><td>0.55%</td><td>0.361</td><td>1382 (95.57%)</td><td>62 (4.43%)</td></tr><tr><td>LLaMA-3-70b-8192∞</td><td>12.73%</td><td>16.46%</td><td>0.760</td><td>55.81%</td><td>0.25%</td><td>0.239</td><td>1587 (95.09%)</td><td>82 (4.91%)</td></tr><tr><td>LLaMA-4-scout-17b∞</td><td>6.70%</td><td>8.76%</td><td>0.631</td><td>50.30%</td><td>0.02%</td><td>0.285</td><td>1763 (95.40%)</td><td>85 (4.60%)</td></tr><tr><td>Gemma2-9B-itG</td><td>5.48%</td><td>33.29%</td><td>0.799</td><td>45.58%</td><td>1.78%</td><td>0.367</td><td>1143 (94.38%)</td><td>68 (5.62%)</td></tr><tr><td>Qwen-qwq-32b</td><td>7.59%</td><td>3.99%</td><td>0.680</td><td>51.68%</td><td>0.00%</td><td>0.253</td><td>1784 (96.48%)</td><td>65 (3.52%)</td></tr><tr><td colspan="9">FaVIQ</td></tr><tr><td>GPT-4o-mini</td><td>47.19%</td><td>4.08%</td><td>0.426</td><td>69.73%</td><td>0.24%</td><td>0.161</td><td>722 (85.85%)</td><td>119 (14.15%)</td></tr><tr><td>GPT-4-turbo</td><td>54.76%</td><td>5.79%</td><td>0.357</td><td>80.07%</td><td>0.31%</td><td>0.062</td><td>682 (93.81%)</td><td>45 (6.19%)</td></tr><tr><td>GPT-4o</td><td>56.20%</td><td>4.83%</td><td>0.315</td><td>81.37%</td><td>0.21%</td><td>0.036</td><td>688 (93.73%)</td><td>46 (6.27%)</td></tr><tr><td>LLaMA-3.1-8b-instant∞</td><td>36.27%</td><td>6.16%</td><td>0.532</td><td>60.14%</td><td>0.31%</td><td>0.267</td><td>776 (80.83%)</td><td>184 (19.17%)</td></tr><tr><td>LLaMA-3-8b-8192∞</td><td>30.85%</td><td>4.81%</td><td>0.587</td><td>58.53%</td><td>0.68%</td><td>0.282</td><td>892 (86.02%)</td><td>145 (13.98%)</td></tr><tr><td>LLaMA-3-70b-8192∞</td><td>44.95%</td><td>4.79%</td><td>0.463</td><td>72.85%</td><td>0.76%</td><td>0.139</td><td>715 (91.32%)</td><td>68 (8.68%)</td></tr><tr><td>LLaMA-4-scout-17b∞</td><td>39.05%</td><td>3.74%</td><td>0.499</td><td>67.85%</td><td>0.28%</td><td>0.218</td><td>861 (89.41%)</td><td>102 (10.59%)</td></tr><tr><td>Gemma2-9b-itG</td><td>35.80%</td><td>10.42%</td><td>0.581</td><td>58.95%</td><td>1.28%</td><td>0.300</td><td>607 (82.25%)</td><td>131 (17.75%)</td></tr><tr><td>Qwen-qwq-32b</td><td>38.34%</td><td>3.62%</td><td>0.530</td><td>68.54%</td><td>0.25%</td><td>0.200</td><td>850 (92.69%)</td><td>67 (7.31%)</td></tr><tr><td colspan="9">TriviaQA</td></tr><tr><td>GPT-4o-mini</td><td>81.13%</td><td>0.12%</td><td>0.104</td><td>87.81%</td><td>0.00%</td><td>0.065</td><td>75 (77.32%)</td><td>22 (22.68%)</td></tr><tr><td>GPT-4-turbo</td><td>90.38%</td><td>0.36%</td><td>0.025</td><td>95.43%</td><td>0.00%</td><td>0.048</td><td>43 (84.31%)</td><td>8 (15.69%)</td></tr><tr><td>GPT-4o</td><td>89.66%</td><td>0.24%</td><td>0.071</td><td>95.55%</td><td>0.00%</td><td>0.083</td><td>48 
(85.71%)</td><td>8 (14.29%)</td></tr><tr><td>LLaMA-3.1-8b-instant∞</td><td>75.46%</td><td>0.31%</td><td>0.153</td><td>81.35%</td><td>0.00%</td><td>0.101</td><td>119 (67.61%)</td><td>57 (32.29%)</td></tr><tr><td>LLaMA-3-8b-8192∞</td><td>67.41%</td><td>0.51%</td><td>0.221</td><td>78.93%</td><td>0.00%</td><td>0.113</td><td>166 (74.11%)</td><td>58 (25.89%)</td></tr><tr><td>LLaMA-3-70b-8192∞</td><td>82.86%</td><td>0.30%</td><td>0.070</td><td>91.43%</td><td>0.00%</td><td>0.026</td><td>103 (83.74%)</td><td>20 (16.26%)</td></tr><tr><td>LLaMA-4-scout-17b∞</td><td>77.92%</td><td>0.40%</td><td>0.141</td><td>86.87%</td><td>0.00%</td><td>0.079</td><td>127 (74.71%)</td><td>43 (25.29%)</td></tr><tr><td>Gemma2-9b-itG</td><td>70.10%</td><td>1.41%</td><td>0.230</td><td>81.55%</td><td>0.00%</td><td>0.107</td><td>150 (75.76%)</td><td>48 (24.24%)</td></tr><tr><td>Qwen-qwq-32b</td><td>75.03%</td><td>0.62%</td><td>0.158</td><td>88.08%</td><td>0.00%</td><td>0.042</td><td>133 (86.93%)</td><td>20 (13.07%)</td></tr></table>
|
| 105 |
+
|
| 106 |
+
We experiment with using the same LLM for both prediction and judgment, finding that smaller LLM judges often misclassify responses or hesitate to assign NOT_ATTEMPTED when no valid answer is generated. Manual inspection confirms these issues, and further details are provided in Appendix B.
|
| 107 |
+
|
| 108 |
+
# 3.5 Evaluation Metrics:
|
| 109 |
+
|
| 110 |
+
To evaluate performance, we measure correctly answered questions for both variations $(\mathcal{N}$ and $\mathcal{D})$ . For calibration assessment, we use ECE to quantify the misalignment between a model's predicted confidence and actual accuracy. A well-calibrated model produces confidence estimates that closely match its true correctness, with an ECE of zero indicating perfect calibration. Following (Naeini et al., 2015), we compute ECE using empirical binning (bin size 0.1) to ensure a robust measurement of miscalibration. Additionally, we define two complementary metrics: $\mathcal{D}_{helped}$ , denoting instances where the model failed under the $\mathcal{N}$ setting but succeeded when distractors were added ( $\mathcal{N}_i = 0$ and $\mathcal{D}_i = 1$ ); and $\mathcal{D}_{harmed}$ , capturing the reverse—cases where the model initially answered correctly under $\mathcal{N}$ but erred when distractors were introduced ( $\mathcal{N}_i = 1$ and $\mathcal{D}_i = 0$ ).
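As a reference for how these quantities are computed, here is a small sketch under our reading of the setup; the function names and bin handling are our own, not taken from the paper's released code.

```python
import numpy as np

def expected_calibration_error(conf: np.ndarray, correct: np.ndarray, n_bins: int = 10) -> float:
    """ECE with equal-width confidence bins (bin size 0.1); conf in [0, 1], correct in {0, 1}."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf >= lo) & (conf <= hi) if hi == 1.0 else (conf >= lo) & (conf < hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

def helped_harmed(n_correct: np.ndarray, d_correct: np.ndarray) -> tuple[int, int]:
    """D_helped: wrong under N but right under D; D_harmed: right under N but wrong under D."""
    helped = int(((n_correct == 0) & (d_correct == 1)).sum())
    harmed = int(((n_correct == 1) & (d_correct == 0)).sum())
    return helped, harmed
```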
|
| 111 |
+
|
| 112 |
+

|
| 113 |
+
|
| 114 |
+

|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
|
| 118 |
+

|
| 119 |
+
|
| 120 |
+

|
| 121 |
+
|
| 122 |
+

|
| 123 |
+
|
| 124 |
+

|
| 125 |
+
Figure 2: Reliability diagrams (RDs) showing calibration performance in $\mathcal{N}(\bullet)$ and $\mathcal{D}(\bullet)$ settings on the SimpleQA dataset. (y-axis: actual accuracy, x-axis: predicted confidence)
|
| 126 |
+
|
| 127 |
+

|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
|
| 131 |
+
# 4 Experimental Results and Analysis
|
| 132 |
+
|
| 133 |
+
# 4.1 Quantifying Baseline Calibration of LLMs
|
| 134 |
+
|
| 135 |
+
We first quantify each model's out-of-the-box accuracy $(\mathcal{N}_{\text{correct}})$, NOT_ATTEMPTED $(\mathcal{N}_{\text{na}})$, and $\mathcal{N}_{\text{ECE}}$ on three benchmarks: SimpleQA (a deliberately hard, concise factoid task), FaVIQ (moderate difficulty), and TriviaQA (an easier QA dataset). Table 2 reports the full metrics.
|
| 136 |
+
|
| 137 |
+
On the hardest SimpleQA benchmark—where direct “off-the-shelf” generation is inherently complex—even GPT-4o attains only $35\%$ accuracy (ECE 0.45, NOT_ATTEMPTED ≈ $8\%$ ). Larger GPT-4 variants have been pretrained on vastly more tokens (and may even have encountered similar questions), yet their calibration loss remains on the same order as smaller models. This parity—despite scale and pretraining volume—indicates persistent overconfidence across sizes on challenging questions.
|
| 138 |
+
|
| 139 |
+
By contrast, the easy TriviaQA setting reveals the limits of overconfidence: GPT-4o's accuracy rises to $90\%$ with ECE $\approx 0.07$ and near-zero NOT_ATTEMPTED, while smaller or open-source models tighten their reliability curves more markedly. In other words, on simpler, context-rich queries, larger models not only answer correctly but also exhibit proportionally less overconfidence compared to hard benchmarks. FaVIQ again falls between these extremes, with both accuracy and ECE interpolating smoothly.
|
| 140 |
+
|
| 141 |
+
Comparative analysis across families shows that GPT-4 variants consistently occupy the "high accuracy, low ECE, low NOT_ATTEMPTED" regime on all datasets. Small LLaMA-3 models, in contrast, post single-digit
|
| 142 |
+
|
| 143 |
+
accuracy on SimpleQA, ECEs approaching 0.8, and frequent deferrals; scaling them to 70B or adopting other open-source alternatives (Gemma2-9b-it, Qwen-qwq-32b) yields only incremental gains.
|
| 144 |
+
|
| 145 |
+
# 4.2 Effects of Structured Distractors on Accuracy and Confidence
|
| 146 |
+
|
| 147 |
+
To quantify the effect of adding the correct answer alongside three incorrect options, we measure the relative accuracy gain $\Delta \mathrm{Acc} = (\mathcal{D}_{\mathrm{correct}} - \mathcal{N}_{\mathrm{correct}}) / \mathcal{N}_{\mathrm{correct}}$ and the ECE compression $\Delta \mathrm{ECE} = \mathcal{N}_{\mathrm{ECE}} - \mathcal{D}_{\mathrm{ECE}}$. Table 2 reports the counts and percentages of $\mathcal{D}_{\mathrm{helped}}$ vs. $\mathcal{D}_{\mathrm{harmed}}$ instances and the NOT_ATTEMPTED rate $(\mathcal{D}_{\mathrm{na}})$. Figure 2 overlays reliability diagrams (RDs) for all nine models on the SimpleQA dataset; RDs for FaVIQ and TriviaQA are in Appendix C.
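As a concrete check of these definitions, the GPT-4o-mini SimpleQA row of Table 2 reproduces the roughly $+460\%$ relative gain quoted in the abstract; the snippet below is just that arithmetic.

```python
# Worked example using the GPT-4o-mini SimpleQA row of Table 2.
n_acc, d_acc = 0.0846, 0.4743   # N_correct, D_correct
n_ece, d_ece = 0.750, 0.320     # N_ECE, D_ECE

delta_acc = (d_acc - n_acc) / n_acc   # ~4.61, i.e. about a +461% relative accuracy gain
delta_ece = n_ece - d_ece             # 0.43 absolute ECE compression
print(f"relative accuracy gain: {delta_acc:.0%}, ECE compression: {delta_ece:.2f}")
```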
|
| 148 |
+
|
| 149 |
+
Across all models, structured distractors boost accuracy and (for most cases) reduce ECE. On the challenging SimpleQA benchmark, distractors often halve ECE (up to $\Delta \mathrm{ECE} \approx 0.4$ ) and more than double accuracy for smaller variants, confirming that explicit options help recalibrate confidence when generation alone is unreliable. However, on TriviaQA—the easiest dataset—GPT-4o and GPT-4-turbo (the largest models) exhibit a slight increase in ECE under $\mathcal{D}$ , despite relative accuracy gains of $3 - 5\%$ . We observe that $\mathrm{ECE}_{\mathcal{N} \to \mathcal{D}}(\mathrm{GPT}-4o)$ rises from 0.071 to 0.083, and $\mathrm{ECE}_{\mathcal{N} \to \mathcal{D}}(\mathrm{GPT}-4$ -turbo) rises from 0.025 to 0.048. This counterintuitive effect likely stems from confidence inflation: on already-easy examples, the multiple-choice context encourages the model to assign excessive probability mass to the correct answer, amplifying residual misalignment between predicted confidence and true correctness.
|
| 150 |
+
|
| 151 |
+
Smaller models $(< 10\mathrm{B})$ show the largest $\Delta$ Acc on FaVIQ and SimpleQA but also the highest $\mathcal{D}_{\text{harmed}}$ rates on FaVIQ and TriviaQA, suggesting that limited pretraining makes them more susceptible to distractor-induced errors. Notably, $\mathcal{D}_{\text{na}}$ drops for all models (to zero on TriviaQA), underlining that no LLM abstains once explicit options are provided for easier questions.
|
| 152 |
+
|
| 153 |
+
# 4.3 Influence of Fine-Tuning Regime and Model Architecture
|
| 154 |
+
|
| 155 |
+
Our analysis systematically investigates how different fine-tuning strategies and model architectures impact accuracy and calibration performance, specifically in the context of structured distractors.
|
| 156 |
+
|
| 157 |
+
First, we observe that effectiveness of RLHF on calibration performance varies significantly across different model implementations and sizes. While RLHF models such as GPT-4o-mini (assumed 8B parameters) exhibit superior calibration performance (ECE 0.750 reduced to 0.320 on SimpleQA), smaller-scale RLHF models like LLaMA-3-8b-8192 and LLaMA-3.1-8b-Instant underperform relative to Qwen-qwq-32B, an SFT-only model. Specifically, in $\mathcal{D}$ setting, Qwen-qwq-32B demonstrates better accuracy (51.68% vs. 44.64% for LLaMA-3.1 and 44.01% for LLaMA-3-8b on SimpleQA) and calibration (ECE 0.253 vs. 0.367 for LLaMA-3.1 and 0.361 for LLaMA3-8b), highlighting that RLHF alone does not guarantee superior calibration. This indicates that other factors, such as the volume and diversity of training data, the quality of fine-tuning data, and overall training strategies, play crucial roles.
|
| 158 |
+
|
| 159 |
+
Examining within the LLaMA family, LLaMA 3.1-8b-Instruct notably outperforms the earlier LLaMA3-8B variant across all benchmarks despite identical parameter counts. This improved performance, especially evident in all three open-domain QA tasks in $\mathcal{N}$ setting (e.g. accuracy $75.46\%$ vs. $67.41\%$ in TriviaQA dataset), stems from enhancements in parametric knowledge, refined instruction tuning, and superior calibration. LLaMA-3.1-8b benefited from training on an extended and more recent dataset (up to December 2023), enabling better retention of long-tail factual knowledge and improved instruction-following capabilities, thereby enhancing robustness and reliability.
|
| 160 |
+
|
| 161 |
+
Distilled models, notably Gemma2-9b-it, exhibit higher "NOT_ATTEMPTED" rates on all the three datasets (e.g. $33.29\%$ on SimpleQA) compared to similarly sized models, indicating challenges in effectively utilizing their compressed knowledge base without external support. Qwen-qwq-32b, despite lacking RLHF finetuning, consistently produces answers with lower "NOT_ATTEMPTED" rates and demonstrates robustness against distractor-induced errors, as indicated by its lower percentage of harmed instances.
|
| 162 |
+
|
| 163 |
+
Furthermore, although significantly larger, the MoE-based LLaMA-4-Scout-17b does not outperform GPT-4o-mini in accuracy or calibration, underscoring that training volume and quality significantly impact performance more than sheer parameter count.
|
| 164 |
+
|
| 165 |
+

|
| 166 |
+
Figure 3: Accuracy and calibration shifts with distractors. We show relative accuracy gains (bars) and ECE changes (points) when distractor options are added. While all models improve in accuracy, calibration effects vary—large models benefit most, while smaller or models often remain miscalibrated.
|
| 167 |
+
|
| 168 |
+

|
| 169 |
+
|
| 170 |
+
However, at a substantially larger scale, LLaMA-3-70b successfully surpasses GPT-4o-mini, highlighting the interplay of extensive parameterization, robust training, and fine-tuning strategies.
|
| 171 |
+
|
| 172 |
+
In conclusion, our comprehensive analysis reveals that while RLHF can substantially improve model calibration, it is not universally effective without careful consideration of other critical training factors. Models leveraging large-scale training datasets, updated fine-tuning approaches, and comprehensive instruction tuning achieve optimal accuracy and calibration. Effective deployment in reliability-sensitive contexts thus demands a strategic blend of parameterization, extensive pre-training, robust fine-tuning methodologies, and carefully designed calibration interventions.
|
| 173 |
+
|
| 174 |
+
# 4.4 Effect of Model Size within LLM Families
|
| 175 |
+
|
| 176 |
+
We disentangle parameter count from other factors by comparing the smallest, mid-sized, and largest checkpoints released by each provider. In the normal setting $(\mathcal{N})$, accuracy increases monotonically with scale for both families; however, calibration improves much faster for the OpenAI series. GPT-4o already attains an ECE of 0.450 on SimpleQA. This suggests that RLHF alignment, used uniformly across GPT-4 models, amplifies the natural size-driven gains in self-assessment that emerge from scaling alone.
|
| 177 |
+
|
| 178 |
+
Introducing distractor options $(\mathcal{D})$ radically alters the picture. The largest relative accuracy jumps occur in the smallest models as shown in Figure 3: GPT-4o-mini rockets from $8.5\%$ to $47.4\%$ accuracy on SimpleQA $(+461\%)$ , and LLaMA-3-8b leaps $+819\%$ over the same split. In contrast, their flagship counterparts—GPT-4o and LLaMA-70b—gain a more modest $+109\%$ and $+338\%$ , respectively. Yet these headline boosts do not translate into equally dramatic calibration improvements. After distractors, GPT-4o compresses its ECE by $92\%$ (to 0.037), whereas GPT-4o-mini still lingers above 0.32. A mirror pattern holds for LLaMA: the 70B model cuts ECE by $69\%$ , finishing at 0.24, while the 8B base remains mis-calibrated (ECE 0.36) despite its vast accuracy lift. When we extend this analysis to FaVIQ and TriviaQA, the same ranking by model size holds but with attenuated returns. On FaVIQ, distractors yield solid—but more moderate—accuracy boosts across all scales. On TriviaQA, where the baseline performance is already high, relative gains shrink into the low-teens, indicating only marginal benefit from explicit distractors.
|
| 179 |
+
|
| 180 |
+
Two key takeaways emerge from our analysis. First, small models acquire factual knowledge more quickly than they learn to assess their own confidence: adding explicit answer choices boosts accuracy but yields poorly calibrated probability estimates. Second, increasing model scale primarily improves a model's ability to quantify its certainty rather than uncover new knowledge. Consequently, when downstream tasks rely directly on confidence scores—such as risk-aware planning or answer adjudication—larger models offer more trustworthy probabilities. In contrast, in settings where latency or compute cost is the main constraint
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
|
| 184 |
+

|
| 185 |
+
|
| 186 |
+

|
| 187 |
+
(a) Date
|
| 188 |
+
(c) Person
|
| 189 |
+
Figure 4: Performance (correct) of LLMs across different question types in both $\mathcal{N}(\bullet)$ and $\mathcal{D}(\bullet)$ settings.
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
(b) Number
|
| 193 |
+
(d) Place
|
| 194 |
+
|
| 195 |
+
and some post-hoc temperature scaling is acceptable, smaller models can still deliver adequate confidence estimates so long as they're provided with distractors or retrieval-augmented context.
|
| 196 |
+
|
| 197 |
+
# 4.5 Performance Across Question Types
|
| 198 |
+
|
| 199 |
+
The SimpleQA dataset already categorizes 4326 questions into four non-overlapping types—Date (1418), Number (663), Person (1041), and Place (427)—and we evaluate each model's accuracy and ECE in both the free-generation $(\mathcal{N})$ and distractor-augmented $(\mathcal{D})$ settings. To better understand calibration weaknesses, we analyze model performance and confidence alignment across these question types (Figure 4). Person-based queries are most challenging, likely due to name ambiguities and inherent variability in names, overlapping roles, and contextual dependencies that require deeper reasoning beyond surface-level pattern matching. LLMs frequently confuse historical figures with similar names, but providing structured answer
|
| 200 |
+
|
| 201 |
+
Table 3: ECE comparison across question types.
|
| 202 |
+
|
| 203 |
+
<table><tr><td></td><td colspan="2">Date</td><td colspan="2">Number</td><td colspan="2">Person</td><td colspan="2">Place</td></tr><tr><td></td><td>N</td><td>D</td><td>N</td><td>D</td><td>N</td><td>D</td><td>N</td><td>D</td></tr><tr><td>GPT-4o-mini</td><td>0.77</td><td>0.32</td><td>0.73</td><td>0.35</td><td>0.76</td><td>0.25</td><td>0.73</td><td>0.42</td></tr><tr><td>GPT-4-turbo</td><td>0.62</td><td>0.23</td><td>0.65</td><td>0.26</td><td>0.62</td><td>0.03</td><td>0.52</td><td>0.19</td></tr><tr><td>GPT-4o</td><td>0.41</td><td>0.05</td><td>0.53</td><td>0.13</td><td>0.51</td><td>0.07</td><td>0.35</td><td>0.04</td></tr><tr><td>LLaMA-3.1-8b</td><td>0.84</td><td>0.37</td><td>0.82</td><td>0.38</td><td>0.80</td><td>0.31</td><td>0.73</td><td>0.40</td></tr><tr><td>LLaMA-3-8b</td><td>0.85</td><td>0.35</td><td>0.82</td><td>0.36</td><td>0.81</td><td>0.33</td><td>0.73</td><td>0.45</td></tr><tr><td>LLaMA-3-70b</td><td>0.80</td><td>0.22</td><td>0.80</td><td>0.28</td><td>0.76</td><td>0.19</td><td>0.65</td><td>0.33</td></tr><tr><td>LLaMA-4-17b</td><td>0.68</td><td>0.30</td><td>0.65</td><td>0.29</td><td>0.60</td><td>0.24</td><td>0.53</td><td>0.35</td></tr><tr><td>Gemma2-9b-it</td><td>0.80</td><td>0.33</td><td>0.78</td><td>0.42</td><td>0.84</td><td>0.33</td><td>0.76</td><td>0.41</td></tr><tr><td>Qwen-qwq-32b</td><td>0.68</td><td>0.21</td><td>0.66</td><td>0.25</td><td>0.68</td><td>0.26</td><td>0.66</td><td>0.31</td></tr><tr><td>mean</td><td>0.72</td><td>0.26</td><td>0.72</td><td>0.30</td><td>0.71</td><td>0.22</td><td>0.63</td><td>0.32</td></tr></table>
|
| 204 |
+
|
| 205 |
+
choices significantly improves accuracy, suggesting that explicit disambiguation helps mitigate uncertainty in this category. In contrast, place-based queries exhibit relatively strong performance across both settings, indicating that geographic knowledge is well-represented in pretraining. However, calibration improvements vary (Table 3): the person category sees highest relative ECE drop $(69\%)$ , while place category shows the lowest $(49\%)$ . This suggests that structured choices help correct overconfidence in ambiguous queries but offer limited calibration gains when models already retrieve knowledge with high confidence. These results demonstrate that miscalibration depends on both task framing and knowledge representation, not just model scale or architecture. While structured reasoning improves confidence alignment in person-based queries, factual retrieval tasks like place-based questions may require alternative calibration strategies to prevent persistent overconfidence.
|
| 206 |
+
|
| 207 |
+
# 5 Conclusion
|
| 208 |
+
|
| 209 |
+
Our investigation provides a rigorous empirical foundation for understanding and addressing calibration issues in LLMs. We reveal widespread overconfidence across model families and sizes, and show that it is substantially reduced by structured distractors, which are particularly effective for smaller models. However, our findings also highlight counterintuitive outcomes, including degraded calibration in large models on simpler queries. Moreover, systematic miscalibration in specific query categories underscores that the calibration challenge goes beyond simple accuracy. Consequently, achieving trustworthy AI requires a multifaceted calibration strategy that integrates robust RLHF, optimized prompt design, and post-hoc calibration adjustments. The evaluation framework and guidelines proposed herein serve as tools for future research, driving the development of LLMs that are not only accurate but reliably calibrated for safe real-world application.
|
| 210 |
+
|
| 211 |
+
# 6 Limitations
|
| 212 |
+
|
| 213 |
+
Generator/Judge Dependence. Our distractor-augmented setting fixes a single generator and a single judge (GPT-4o-mini) for consistency across models. While this reduces rubric drift and style confounds, it also risks bias from an "AI-generates/AI-judges" loop. We mitigate subjectivity via type- and format-matched distractor prompts, overlap and plausibility checks, and human spot-checks by three reviewers with disagreement resolution. Nevertheless, results should be interpreted as conditional on this (generator, judge) pair. We release prompts to support replication with alternative generators and judges. Exploring cross-model distractor sources and independent judges (including human-only adjudication) is important future work; our present study prioritizes comparability under a fixed setup and does not claim generator/judge invariance.
|
| 214 |
+
|
| 215 |
+
# References
|
| 216 |
+
|
| 217 |
+
Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. In The 2023 Conference on Empirical Methods in Natural Language Processing.
|
| 218 |
+
Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, and Heng Ji. A close look into the calibration of pretrained language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1343-1367, 2023.
|
| 219 |
+
Prateek Chhikara, Dhiraj Chaurasia, Yifan Jiang, Omkar Masur, and Filip Ilievski. Fire: Food image to recipe generation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 8184-8194, 2024.
|
| 220 |
+
Prateek Chhikara, Dev Khant, Saket Aryan, Taranjeet Singh, and Deshraj Yadav. Mem0: Building production-ready ai agents with scalable long-term memory. arXiv preprint arXiv:2504.19413, 2025.
|
| 221 |
+
Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International conference on machine learning, pp. 933-941. PMLR, 2017.
|
| 222 |
+
Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, and Mrinmaya Sachan. A diachronic perspective on user trust in ai under uncertainty. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 5567-5580, 2023.
|
| 223 |
+
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
|
| 224 |
+
Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych. A survey of confidence estimation and calibration in large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pp. 6577-6595, 2024.
|
| 225 |
+
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pp. 1321-1330. PMLR, 2017.
|
| 226 |
+
Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2.5-coder technical report. arXiv preprint arXiv:2409.12186, 2024.
|
| 227 |
+
Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977, 2021.
|
| 228 |
+
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, 2017.
|
| 229 |
+
Pranjal Kumar. Large language models (llms): survey, technical frameworks, and future challenges. Artificial Intelligence Review, 57(10):260, 2024.
|
| 230 |
+
Jixuan Leng, Chengsong Huang, Banghua Zhu, and Jiaxin Huang. Taming overconfidence in llms: Reward calibration in rlhf. arXiv preprint arXiv:2410.09724, 2024.
|
| 231 |
+
Chengzu Li, Han Zhou, Goran Glavaš, Anna Korhonen, and Ivan Vulić. Can large language models achieve calibration with in-context learning? In ICLR 2024 Workshop on Reliable and Responsible Foundation Models, 2024.
|
| 232 |
+
Stephanie Lin, Jacob Hilton, and Owain Evans. Teaching models to express their uncertainty in words. Transactions on Machine Learning Research.
|
| 233 |
+
|
| 234 |
+
Hongfu Liu, Hengguan Huang, Hao Wang, Xiangming Gu, and Ye Wang. On calibration of llm-based guard models for reliable content moderation. In The Thirteenth International Conference on Learning Representations.
|
| 235 |
+
Charles G Lord, Mark R Lepper, and Elizabeth Preston. Considering the opposite: a corrective strategy for social judgment. Journal of personality and social psychology, 47(6):1231, 1984.
|
| 236 |
+
Sabrina J Mielke, Arthur Szlam, Emily Dinan, and Y-Lan Boureau. Reducing conversational agents' overconfidence through linguistic calibration. Transactions of the Association for Computational Linguistics, 10:857-872, 2022.
|
| 237 |
+
Thomas Mussweiler, Fritz Strack, and Tim Pfeiffer. Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26(9):1142-1150, 2000.
|
| 238 |
+
Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI conference on artificial intelligence, volume 29, 2015.
|
| 239 |
+
Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G Patil, Ion Stoica, and Joseph E Gonzalez. Memgpt: Towards llms as operating systems. arXiv preprint arXiv:2310.08560, 2023.
|
| 240 |
+
Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, and Hannaneh Hajishirzi. Faviq: Fact verification from information-seeking questions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics (ACL), 2022.
|
| 241 |
+
Seo Yeon Park and Cornelia Caragea. On the calibration of pre-trained language models using mixup guided by area under the margin and saliency. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5364-5374, 2022.
|
| 242 |
+
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728-53741, 2023.
|
| 243 |
+
Marita Skjuve, Petter Bae Brandtzaeg, and Asbjørn Følstad. Why do people use chatgpt? exploring user motivations for generative conversational ai. First Monday, 2024.
|
| 244 |
+
Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.
|
| 245 |
+
Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.
|
| 246 |
+
Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 5433-5442, 2023.
|
| 247 |
+
Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. Measuring short-form factuality in large language models. arXiv preprint arXiv:2411.04368, 2024.
|
| 248 |
+
Miao Xiong, Zhiyuan Hu, Xinyang Lu, YIFEI LI, Jie Fu, Junxian He, and Bryan Hooi. Can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms. In The Twelfth International Conference on Learning Representations.
|
| 249 |
+
An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
|
| 250 |
+
|
| 251 |
+
Jiarui Zhang. Guided profile generation improves personalization with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pp. 4005-4016, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.231. URL https://aclanthology.org/2024.findings-emnlp.231/.
|
| 252 |
+
Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. Mllms know where to look: Training-free perception of small visual details with multimodal llms. In The Thirteenth International Conference on Learning Representations.
|
| 253 |
+
Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. Visual cropping improves zero-shot question answering of multimodal large language models. In R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Large Foundation Models, 2023.
|
| 254 |
+
Kaitlyn Zhou, Jena Hwang, Xiang Ren, and Maarten Sap. Relying on the unreliable: The impact of language models' reluctance to express uncertainty. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3623-3643, 2024a.
|
| 255 |
+
Lexin Zhou, Wout Schellaert, Fernando Martínez-Plumed, Yael Moros-Daval, César Ferri, and José Hernández-Orallo. Larger and more instructable language models become less reliable. Nature, 634(8032): 61-68, 2024b.
|
| 256 |
+
Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, and Zhendong Mao. On the calibration of large language models and alignment. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 9778-9795, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.654. URL https://aclanthology.org/2023.findings-emnlp.654/.
|
| 257 |
+
|
| 258 |
+
# A Custom Prompts
|
| 259 |
+
|
| 260 |
+
We generate the answers for $\mathcal{N}$ setting using the following prompt. The prompt outputs the answer and confidence for the answer in a json format.
|
| 261 |
+
|
| 262 |
+
```txt
|
| 263 |
+
1 LLM_RESPONSE_prompt = ""
|
| 264 |
+
2 You are an intelligent assistant who is given a question. Your role is to provide accurate, helpful, and well-reasoned responses based on your knowledge and capabilities.
|
| 265 |
+
3 Along with the question, you need to provide a confidence score for your answer. The confidence score should be a number between 0 and 100, where:
|
| 266 |
+
4 - 0-25 indicates low confidence
|
| 267 |
+
5 - 26-75 indicates moderate confidence
|
| 268 |
+
6 - 76-100 indicates high confidence
|
| 269 |
+
7 Guidelines for providing answers:
|
| 270 |
+
8 1. Be direct and concise in your answer while ensuring completeness. Avoid unnecessary words or tangents.
|
| 271 |
+
9 2. If you are uncertain, provide a lower confidence score.
|
| 272 |
+
10 3. Base your confidence score on:
|
| 273 |
+
11 - The reliability and recency of available information
|
| 274 |
+
12 - Your knowledge of the specific domain
|
| 275 |
+
13 Here are some examples:
|
| 276 |
+
14 Example 1:
|
| 277 |
+
15 Question: What is the capital of France?
|
| 278 |
+
16 Answer: Paris
|
| 279 |
+
17 Confidence score: 91
|
| 280 |
+
18 (High confidence as this is a well-established fact)
|
| 281 |
+
19 Example 2:
|
| 282 |
+
20 Question: Which country has the best healthcare system?
|
| 283 |
+
21 Answer: It depends on the criteria used. Some rankings favor Switzerland, while others favor Sweden or Singapore.
|
| 284 |
+
22 Confidence score: 25
|
| 285 |
+
23 (There is no definitive answer, and the confidence is low due to the lack of a clear consensus.)
|
| 286 |
+
24 Example 3:
|
| 287 |
+
25 Question: Which state is between Washington and California?
|
| 288 |
+
26 Answer: Oregon
|
| 289 |
+
27 Confidence score: 87
|
| 290 |
+
28 (Maximum confidence as this is a clear geographic fact)
|
| 291 |
+
29 Example 4:
|
| 292 |
+
30 Question: What was Albert Einstein's favorite food?
|
| 293 |
+
31 Answer: There is no definitive record of his favorite food, but he reportedly liked pasta.
|
| 294 |
+
32 Confidence score: 25
|
| 295 |
+
33 (There are anecdotal mentions, but no verified records.)
|
| 296 |
+
34 Example 5:
|
| 297 |
+
35 Question: Is Irvine a city in California?
|
| 298 |
+
36 Answer: Yes
|
| 299 |
+
37 Confidence score: 81
|
| 300 |
+
38 (High confidence as this is a verifiable fact)
|
| 301 |
+
39 Example 6:
|
| 302 |
+
40 Question: What is the most popular programming language for AI development?
|
| 303 |
+
41 Answer: Python
|
| 304 |
+
42 Confidence score: 66
|
| 305 |
+
43 (Moderate-high confidence based on current trends, but this can change over time)
|
| 306 |
+
44 Here is a new example. Simply reply with your answer and confidence score.
|
| 307 |
+
45 Question: {{question}}
|
| 308 |
+
46 Provide your response in the following JSON format:
|
| 309 |
+
```
|
| 310 |
+
|
| 311 |
+
```txt
|
| 312 |
+
59 {
60 'answer': 'Your answer here',
61 'confidence_score': number between 0-100
|
| 313 |
+
62 }
|
| 314 |
+
63 "
|
| 315 |
+
```
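
For completeness, the following is a small sketch of how this prompt might be filled and its JSON reply parsed; the `chat` callable is a placeholder for whichever API client is actually used and is not part of the released prompts.

```python
import json
import re

def ask_with_confidence(chat, question, prompt_template):
    """Fill the {{question}} slot, call the model, and parse its JSON reply.

    `chat` is a placeholder for any callable mapping a prompt string to a
    response string (e.g. a thin wrapper around an API client).
    """
    prompt = prompt_template.replace("{{question}}", question)
    raw = chat(prompt)
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # tolerate text around the JSON
    text = match.group(0) if match else "{}"
    try:
        payload = json.loads(text)
    except json.JSONDecodeError:
        payload = json.loads(text.replace("'", '"'))  # prompt examples use single quotes
    answer = str(payload.get("answer", ""))
    confidence = float(payload.get("confidence_score", 0)) / 100.0  # map to [0, 1]
    return answer, confidence
```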
|
| 316 |
+
|
| 317 |
+
We generate the answers for $\mathcal{D}$ setting using the following prompt. The prompt outputs the answer and confidence for the answer in a json format.
|
| 318 |
+
|
| 319 |
+
```txt
|
| 320 |
+
1 LLM_RESPONSE_prompt_DISTRACTORS = ""
|
| 321 |
+
2 You are an intelligent assistant who is given a question and a list of options. Your role is to provide an accurate, helpful, and well-reasoned answer based on your knowledge and capabilities and the options provided.
|
| 322 |
+
3 Along with the answer, you need to provide a confidence score for your answer. The confidence score should be a number between 0 and 100, where:
|
| 323 |
+
5 - 0-25 indicates low confidence
|
| 324 |
+
6 - 26-75 indicates moderate confidence
|
| 325 |
+
7 - 76-100 indicates high confidence
|
| 326 |
+
8
|
| 327 |
+
9 Guidelines for providing answers:
|
| 328 |
+
10 1. Return the answer from the list of options provided only. It is guaranteed that the answer will be one of the options provided.
|
| 329 |
+
11 2. If you are uncertain, provide a lower confidence score.
|
| 330 |
+
12 3. Base your confidence score on: - The reliability and recency of available information - Your knowledge of the specific domain
|
| 331 |
+
13
|
| 332 |
+
14
|
| 333 |
+
15
|
| 334 |
+
16 Here are some examples:
|
| 335 |
+
17
|
| 336 |
+
18 Example 1:
|
| 337 |
+
19 Question: What is the capital of France?
|
| 338 |
+
20 Options:
|
| 339 |
+
21 - Paris
|
| 340 |
+
22 - London
|
| 341 |
+
23 - Rome
|
| 342 |
+
24 - Madrid
|
| 343 |
+
25 Answer: Paris
|
| 344 |
+
26 Confidence score: 91
|
| 345 |
+
27 (High confidence as this is a well-established fact)
|
| 346 |
+
28
|
| 347 |
+
29 Example 2:
|
| 348 |
+
30 Question: Which country has the best healthcare system?
|
| 349 |
+
31 Options:
|
| 350 |
+
32 - Switzerland
|
| 351 |
+
33 - Sweden
|
| 352 |
+
34 - Singapore
|
| 353 |
+
35 - United States
|
| 354 |
+
36 Answer: It depends on the criteria used. Some rankings favor Switzerland, while others favor Sweden or Singapore.
|
| 355 |
+
37 Confidence score: 25
|
| 356 |
+
38 (There is no definitive answer, and the confidence is low due to the lack of a clear consensus.)
|
| 357 |
+
39
|
| 358 |
+
40 Example 3:
|
| 359 |
+
41 Question: Which state is between Washington and California?
|
| 360 |
+
42 Options:
|
| 361 |
+
43 - Oregon
|
| 362 |
+
44 - Washington
|
| 363 |
+
45 - California
|
| 364 |
+
46 - Idaho
|
| 365 |
+
47 Answer: Oregon
|
| 366 |
+
48 Confidence score: 87
|
| 367 |
+
49 (Maximum confidence as this is a clear geographic fact)
|
| 368 |
+
50
|
| 369 |
+
51 Example 4:
|
| 370 |
+
52 Question: What was Albert Einstein's favorite food?
|
| 371 |
+
53 Options:
|
| 372 |
+
```
|
| 373 |
+
|
| 374 |
+
```handlebars
|
| 375 |
+
- Pizza
|
| 376 |
+
- Pasta
|
| 377 |
+
- Sushi
|
| 378 |
+
- Tacos
|
| 379 |
+
Answer: There is no definitive record of his favorite food, but he reportedly liked pasta.
Confidence score: 25
|
| 380 |
+
(There are anecdotal mentions, but no verified records.)
|
| 381 |
+
Example 5:
|
| 382 |
+
Question: Is Irvine a city in California?
|
| 383 |
+
Options:
|
| 384 |
+
- Yes
|
| 385 |
+
- No
|
| 386 |
+
Answer: Yes
|
| 387 |
+
Confidence score: 81
|
| 388 |
+
(High confidence as this is a verifiable fact)
|
| 389 |
+
Example 6:
|
| 390 |
+
Question: What is the most popular programming language for AI development?
|
| 391 |
+
Options:
|
| 392 |
+
- Python
|
| 393 |
+
- Java
|
| 394 |
+
- C++
|
| 395 |
+
- JavaScript
|
| 396 |
+
Answer: Python
|
| 397 |
+
Confidence score: 66
|
| 398 |
+
(Moderate-high confidence based on current trends, but this can change over time)
|
| 399 |
+
Here is a new example. Simply reply with your answer and confidence score.
|
| 400 |
+
Question: {{question}}
|
| 401 |
+
Options: {{{options}}}
|
| 402 |
+
```
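
A brief sketch of how the option list for the $\mathcal{D}$ setting can be assembled from the gold answer and the three generated distractors before filling the template above; the shuffling and seeding details are illustrative assumptions.

```python
import random

def build_distractor_prompt(template, question, answer, distractors, seed=0):
    """Render the D-setting prompt: mix the gold answer with the three
    generated distractors and format them as the bulleted Options list."""
    options = [answer] + list(distractors)
    random.Random(seed).shuffle(options)  # avoid a fixed answer position
    options_block = "\n".join(f"- {opt}" for opt in options)
    return (template
            .replace("{{question}}", question)
            .replace("{{{options}}}", options_block))
```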
|
| 403 |
+
|
| 404 |
+
To generate the three distractors for each question-answer pair, we use the following prompt with few-shot examples.
|
| 405 |
+
|
| 406 |
+
```txt
|
| 407 |
+
1 DISTRACTORS_GENRATION_prompt = ""You are an expert synthetic data generator. Your task is to generate three plausible but incorrect answers to a given question.
|
| 408 |
+
2 Guidelines for generating wrong answers:
|
| 409 |
+
4 1. Each answer should be factually incorrect but plausible within the context
|
| 410 |
+
5 2. Match the answer type (e.g. if asking for a date, provide wrong dates)
|
| 411 |
+
6 3. The wrong answers should be clearly distinct from the correct answer and from each other
|
| 412 |
+
7 4. Maintain a similar level of specificity as the original answer
|
| 413 |
+
8 5. The answers should be realistic and not obviously wrong
|
| 414 |
+
9 Example 1:
|
| 415 |
+
11 Question: What is the capital of France?
|
| 416 |
+
12 Answer: Paris
|
| 417 |
+
13 Wrong Answers:
|
| 418 |
+
14 - Lyon
|
| 419 |
+
15 - Marseille
|
| 420 |
+
16 - Bordeaux
|
| 421 |
+
17 Reason: All are major French cities, but incorrect as capital
|
| 422 |
+
18 Example 2:
|
| 423 |
+
20 Question: Who was the first president of the United States?
|
| 424 |
+
21 Answer: George Washington
|
| 425 |
+
22 Wrong Answers:
|
| 426 |
+
23 - John Adams
|
| 427 |
+
24 - Thomas Jefferson
|
| 428 |
+
```
|
| 429 |
+
|
| 430 |
+
```txt
|
| 431 |
+
- Benjamin Franklin
|
| 432 |
+
Reason: All are founding fathers but not the first president
|
| 433 |
+
Example 3:
|
| 434 |
+
Question: In what year did World War II end?
|
| 435 |
+
Answer: 1945
|
| 436 |
+
Wrong Answers:
|
| 437 |
+
- 1943
|
| 438 |
+
- 1944
|
| 439 |
+
- 1946
|
| 440 |
+
Reason: All are plausible years during or near WWII but not when it ended
|
| 441 |
+
Example 4:
|
| 442 |
+
Question: Who wrote Romeo and Juliet?
|
| 443 |
+
Answer: William Shakespeare
|
| 444 |
+
Wrong Answers:
|
| 445 |
+
- Christopher Marlowe
|
| 446 |
+
- Ben Jonson
|
| 447 |
+
- John Webster
|
| 448 |
+
Reason: All are prominent Elizabethan playwrights
|
| 449 |
+
Example 5:
|
| 450 |
+
Question: What is the largest planet in our solar system?
|
| 451 |
+
Answer: Jupiter
|
| 452 |
+
Wrong Answers:
|
| 453 |
+
- Saturn
|
| 454 |
+
- Neptune
|
| 455 |
+
- Uranus
|
| 456 |
+
Reason: All are gas giant planets, but smaller than Jupiter
|
| 457 |
+
Please generate three wrong answers that follow these guidelines for the given question.
|
| 458 |
+
The answers should be:
|
| 459 |
+
- Factually incorrect but plausible
|
| 460 |
+
- Match the same answer type (e.g. date, person, number)
|
| 461 |
+
- Clearly distinct from the correct answer and each other
|
| 462 |
+
- Similar in specificity/detail level
|
| 463 |
+
- Realistic and not obviously wrong
|
| 464 |
+
Return only three wrong answers as a list in JSON format with the following requirements:
|
| 465 |
+
- Each wrong answer should be a string
|
| 466 |
+
- The output should be a single JSON object with key 'wrong_answers'
|
| 467 |
+
- The value should be an array of exactly 3 wrong answers
|
| 468 |
+
- No explanations or additional text should be included
|
| 469 |
+
- The answers should maintain consistent formatting with the correct answer
|
| 470 |
+
Example format:
|
| 471 |
+
{ 'wrong_answers': ['opt1', 'opt2', 'opt3'] }
|
| 472 |
+
Question: {question}
|
| 473 |
+
Correct Answer: {answer}
|
| 474 |
+
Generate three wrong answers:
|
| 475 |
+
```
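
A minimal sketch of parsing the generator's reply and applying a simple overlap check, in the spirit of the checks described in the Limitations section; the exact validation logic shown here is an assumption for illustration.

```python
import json

def parse_distractors(raw, correct_answer):
    """Parse the generator's JSON reply and drop any option that duplicates
    the gold answer (a simple overlap check; the paper's exact filtering
    criteria are not reproduced here)."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        payload = json.loads(raw.replace("'", '"'))
    candidates = [str(c).strip() for c in payload.get("wrong_answers", [])]
    cleaned = [c for c in candidates
               if c.lower() != correct_answer.strip().lower()]
    return cleaned if len(cleaned) == 3 else None  # caller may re-prompt
```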
|
| 476 |
+
|
| 477 |
+
# B Same LLM judge as Prediction LLM
|
| 478 |
+
|
| 479 |
+
We employ the same LLM as both the judge and the base model responsible for predicting answers to the questions. The performance metrics are presented in Table 4. Upon manually inspecting instances from smaller LLMs, we observe that the LLM judge occasionally misclassifies responses and fails to assign NOT_ATTEMPTED to certain data points. This can be seen by comparing $\mathcal{N}_{none}$ in Tables 2 and 4 for the SimpleQA dataset. To address this issue and ensure consistency, we use GPT-4o-mini as the LLM judge across all models. We also create reliability diagrams for this setting, in which the judge model matches the prediction model; the resulting graphs are shown in Figure 5.
|
| 480 |
+
|
| 481 |
+
Table 4: Performance metrics of LLMs on the SimpleQA dataset in the Normal $(\mathcal{N})$ and Distractor $(\mathcal{D})$ settings, including accuracy (correct), non-attempt (na), ECE, and the number of helped $(\mathcal{D}_{helped})$ and harmed $(\mathcal{D}_{harmed})$ instances. Here the LLM judge model is the same as the prediction model.
|
| 482 |
+
|
| 483 |
+
<table><tr><td>LLMs</td><td>Ncorrect</td><td>Nnone</td><td>NECE</td><td>Dcorrect</td><td>Dnone</td><td>DECE</td><td>Dhelped</td><td>Dharmed</td></tr><tr><td>GPT-4o-mini</td><td>8.46%</td><td>6.80%</td><td>0.750</td><td>47.43%</td><td>0.02%</td><td>0.320</td><td>1644 (93.78%)</td><td>109</td></tr><tr><td>GPT-4-turbo</td><td>20.99%</td><td>7.14%</td><td>0.616</td><td>65.33%</td><td>0.02%</td><td>0.165</td><td>1821 (95.44%)</td><td>87</td></tr><tr><td>GPT-4o</td><td>36.75%</td><td>8.16%</td><td>0.437</td><td>73.48%</td><td>0%</td><td>0.037</td><td>1507 (91.22%)</td><td>145</td></tr><tr><td>LLaMA-3.1-8b-instant</td><td>8.24%</td><td>19.58%</td><td>0.780</td><td>44.94%</td><td>0.21%</td><td>0.367</td><td>1294 (91.45%)</td><td>121</td></tr><tr><td>LLaMA-3-8B-8192</td><td>9.27%</td><td>24.99%</td><td>0.790</td><td>45.56%</td><td>2.45%</td><td>0.355</td><td>1251 (90.46%)</td><td>132</td></tr><tr><td>Gemma2-9B-it</td><td>9.52%</td><td>34.49%</td><td>0.771</td><td>46.58%</td><td>1.87%</td><td>0.359</td><td>1060 (88.70%)</td><td>135</td></tr></table>
|
| 484 |
+
|
| 485 |
+

|
| 486 |
+
|
| 487 |
+

|
| 488 |
+
|
| 489 |
+

|
| 490 |
+
|
| 491 |
+

|
| 492 |
+
Figure 5: Reliability diagrams (RDs) on the SimpleQA dataset showing calibration performance in $\mathcal{N}(\bullet)$ and $\mathcal{D}(\bullet)$ settings. The numbers on top of the bars represent the number of correctly predicted instances (y-axis: actual accuracy, x-axis: predicted confidence). Here the LLM judge model is the same as the prediction model.
|
| 493 |
+
|
| 494 |
+

|
| 495 |
+
|
| 496 |
+

|
| 497 |
+
|
| 498 |
+
To validate the reliability of GPT-4o-mini as an LLM judge, we conducted a small-scale human evaluation. We sampled 100 responses and had three human annotators independently classify them as CORRECT, INCORRECT, or NOT_ATTEMPTED. The inter-annotator agreement, measured using Cohen's kappa, was 0.82 for GPT-4o-mini, indicating substantial agreement. This comparison allowed us to measure the extent of biases introduced by automated evaluation and confirm that LLM judges generally aligned with human judgments, though minor inconsistencies were observed in ambiguous cases.
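
A small sketch of how such an agreement score can be computed; using the annotator majority vote as the human reference and sklearn's `cohen_kappa_score` are assumptions made here for illustration.

```python
from collections import Counter
from sklearn.metrics import cohen_kappa_score

def judge_vs_human_kappa(judge_labels, annotator_labels):
    """Cohen's kappa between LLM-judge labels and the human majority vote.

    `annotator_labels` is a list of per-annotator label lists over the same
    sampled responses (values like CORRECT / INCORRECT / NOT_ATTEMPTED).
    """
    majority = [Counter(votes).most_common(1)[0][0]
                for votes in zip(*annotator_labels)]
    return cohen_kappa_score(judge_labels, majority)
```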
|
| 499 |
+
|
| 500 |
+
# C Reliability Diagrams of Selected Models on FaVIQ and TriviaQA dataset
|
| 501 |
+
|
| 502 |
+

|
| 503 |
+
|
| 504 |
+

|
| 505 |
+
|
| 506 |
+

|
| 507 |
+
|
| 508 |
+

|
| 509 |
+
|
| 510 |
+

|
| 511 |
+
|
| 512 |
+

|
| 513 |
+
|
| 514 |
+

|
| 515 |
+
Figure 6: Reliability diagrams (RDs) showing calibration performance in $\mathcal{N}(\bullet)$ and $\mathcal{D}(\bullet)$ settings on the TriviaQA dataset. (y-axis: actual accuracy, x-axis: predicted confidence).
|
| 516 |
+
|
| 517 |
+

|
| 518 |
+
|
| 519 |
+

|
| 520 |
+
|
| 521 |
+

|
| 522 |
+
|
| 523 |
+

|
| 524 |
+
|
| 525 |
+

|
| 526 |
+
|
| 527 |
+

|
| 528 |
+
|
| 529 |
+

|
| 530 |
+
|
| 531 |
+

|
| 532 |
+
|
| 533 |
+

|
| 534 |
+
Figure 7: Reliability diagrams (RDs) showing calibration performance in $\mathcal{N}(\bullet)$ and $\mathcal{D}(\bullet)$ settings on the FaVIQ dataset. (y-axis: actual accuracy, x-axis: predicted confidence)
|
| 535 |
+
|
| 536 |
+

|
| 537 |
+
|
| 538 |
+

|
2502.11xxx/2502.11028/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:55ec4a4f538ce07411034d38bf92032b7b63f1c39d26a6dec453af5bc3253791
|
| 3 |
+
size 1685367
|
2502.11xxx/2502.11028/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_content_list.json
ADDED
|
@@ -0,0 +1,1884 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "GoalFlow: Goal-Driven Flow Matching for Multimodal Trajectories Generation in End-to-End Autonomous Driving",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
96,
|
| 8 |
+
128,
|
| 9 |
+
898,
|
| 10 |
+
175
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zebin Xing $^{1,2*}$ , Xingyu Zhang $^{2*}$ , Yang Hu $^{2}$ , Bo Jiang $^{4,2}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
282,
|
| 19 |
+
202,
|
| 20 |
+
715,
|
| 21 |
+
222
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Tong He $^{5}$ , Qian Zhang $^{2}$ , Xiaoxiao Long $^{3}$ , Wei Yin $^{2\\dagger}$",
|
| 28 |
+
"bbox": [
|
| 29 |
+
297,
|
| 30 |
+
220,
|
| 31 |
+
709,
|
| 32 |
+
238
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "$^{1}$ School of Artificial Intelligence, University of Chinese Academy of Sciences $^{2}$ Horizon Robotics",
|
| 39 |
+
"bbox": [
|
| 40 |
+
104,
|
| 41 |
+
238,
|
| 42 |
+
893,
|
| 43 |
+
255
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "$^{3}$ Nanjing University $^{4}$ Huazhong University of Science & Technology $^{5}$ Shanghai AI Laboratory",
|
| 50 |
+
"bbox": [
|
| 51 |
+
102,
|
| 52 |
+
255,
|
| 53 |
+
893,
|
| 54 |
+
273
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
248,
|
| 64 |
+
309,
|
| 65 |
+
325,
|
| 66 |
+
323
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "We propose GoalFlow, an end-to-end autonomous driving method for generating high-quality multimodal trajectories. In autonomous driving scenarios, there is rarely a single suitable trajectory. Recent methods have increasingly focused on modeling multimodal trajectory distributions. However, they suffer from trajectory selection complexity and reduced trajectory quality due to high trajectory divergence and inconsistencies between guidance and scene information. To address these issues, we introduce GoalFlow, a novel method that effectively constrains the generative process to produce high-quality, multimodal trajectories. To resolve the trajectory divergence problem inherent in diffusion-based methods, GoalFlow constrains the generated trajectories by introducing a goal point. GoalFlow establishes a novel scoring mechanism that selects the most appropriate goal point from the candidate points based on scene information. Furthermore, GoalFlow employs an efficient generative method, Flow Matching, to generate multimodal trajectories, and incorporates a refined scoring mechanism to select the optimal trajectory from the candidates. Our experimental results, validated on the Navsim[7], demonstrate that GoalFlow achieves state-of-the-art performance, delivering robust multimodal trajectories for autonomous driving. GoalFlow achieved PDMS of 90.3, significantly surpassing other methods. Compared with other diffusion-policy-based methods, our approach requires only a single denoising step to obtain excellent performance. The code is available at https://github.com/YvanYin/GoalFlow.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
88,
|
| 75 |
+
340,
|
| 76 |
+
483,
|
| 77 |
+
779
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "1. Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
91,
|
| 87 |
+
809,
|
| 88 |
+
220,
|
| 89 |
+
823
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Since UniAD[15], autonomous driving has increasingly favored end-to-end systems, where tasks like mapping and",
|
| 96 |
+
"bbox": [
|
| 97 |
+
89,
|
| 98 |
+
834,
|
| 99 |
+
482,
|
| 100 |
+
864
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "image",
|
| 106 |
+
"img_path": "images/f136da89aba11ee6bc5aa447283a028b7b1e1c6181f8edc0804c3b51bc819f80.jpg",
|
| 107 |
+
"image_caption": [],
|
| 108 |
+
"image_footnote": [],
|
| 109 |
+
"bbox": [
|
| 110 |
+
522,
|
| 111 |
+
308,
|
| 112 |
+
893,
|
| 113 |
+
412
|
| 114 |
+
],
|
| 115 |
+
"page_idx": 0
|
| 116 |
+
},
|
| 117 |
+
{
|
| 118 |
+
"type": "image",
|
| 119 |
+
"img_path": "images/754ac1cff7441087cc3385438561f68b8ef9a397394eefc657c48a47446c9ef6.jpg",
|
| 120 |
+
"image_caption": [
|
| 121 |
+
"Goal-Driven Generation Model",
|
| 122 |
+
"Figure 1. The comparison of different multimodal trajectory generation paradigms recently. A standalone generative model often produces highly diverse trajectories with no clear boundaries between different modalities. In contrast, the Goal-Driven Generation Model leverages the strong guidance of goal points, effectively distinguishing multiple modalities by utilizing different goal points."
|
| 123 |
+
],
|
| 124 |
+
"image_footnote": [],
|
| 125 |
+
"bbox": [
|
| 126 |
+
524,
|
| 127 |
+
430,
|
| 128 |
+
893,
|
| 129 |
+
537
|
| 130 |
+
],
|
| 131 |
+
"page_idx": 0
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"type": "text",
|
| 135 |
+
"text": "detection ultimately serve the planning task. To enhance system reliability, some end-to-end algorithms[16, 17, 27] have begun exploring ways to generate multimodal trajectories as trajectory candidates for the algorithms. In autonomous driving, command typically includes indicators for left, right, and straight actions. VAD[17] uses this command information to generate multimodal trajectories. Goal points, which provide the vehicle's location information for the next few seconds, are commonly used as guiding information in other approaches, such as SparseDrive[27]. These methods pre-define a set of goal points to generate different trajectory modes. Both approaches have succeeded in autonomous driving, offering candidate trajectories that significantly reduce collision rates. However, since these methods' guiding information does not pursue accuracy but instead provides a set of candidate values for the trajec",
|
| 136 |
+
"bbox": [
|
| 137 |
+
511,
|
| 138 |
+
657,
|
| 139 |
+
906,
|
| 140 |
+
902
|
| 141 |
+
],
|
| 142 |
+
"page_idx": 0
|
| 143 |
+
},
|
| 144 |
+
{
|
| 145 |
+
"type": "aside_text",
|
| 146 |
+
"text": "arXiv:2503.05689v6 [cs.CV] 1 Oct 2025",
|
| 147 |
+
"bbox": [
|
| 148 |
+
22,
|
| 149 |
+
282,
|
| 150 |
+
57,
|
| 151 |
+
710
|
| 152 |
+
],
|
| 153 |
+
"page_idx": 0
|
| 154 |
+
},
|
| 155 |
+
{
|
| 156 |
+
"type": "page_footnote",
|
| 157 |
+
"text": "*Equal contribution.",
|
| 158 |
+
"bbox": [
|
| 159 |
+
109,
|
| 160 |
+
875,
|
| 161 |
+
218,
|
| 162 |
+
887
|
| 163 |
+
],
|
| 164 |
+
"page_idx": 0
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"type": "page_footnote",
|
| 168 |
+
"text": "† Corresponding author, project leader. Email: yvanwy@outlook.com",
|
| 169 |
+
"bbox": [
|
| 170 |
+
109,
|
| 171 |
+
887,
|
| 172 |
+
473,
|
| 173 |
+
900
|
| 174 |
+
],
|
| 175 |
+
"page_idx": 0
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"type": "text",
|
| 179 |
+
"text": "tory, when the gap between the guiding information and the ground truth is large, it is prone to generating low-quality trajectories.",
|
| 180 |
+
"bbox": [
|
| 181 |
+
89,
|
| 182 |
+
90,
|
| 183 |
+
480,
|
| 184 |
+
136
|
| 185 |
+
],
|
| 186 |
+
"page_idx": 1
|
| 187 |
+
},
|
| 188 |
+
{
|
| 189 |
+
"type": "text",
|
| 190 |
+
"text": "In recent trajectory prediction works, some methods[18, 28, 32] aim to generate multimodal trajectories through diffusion, using scene or motion information as a condition to produce multimodal trajectories. Other methods [12] utilize diffusion to construct a world model. Without constraints, approaches like Diffusion-ES[32] tend to generate divergent trajectories, which is depicted in the second row of Fig.1, requiring a scoring mechanism based on HD maps to align with the real-world road network, which is difficult to obtain in end-to-end environments. MotionDiffuser[18] addresses trajectory divergence by using the ground truth endpoint as a constraint, which introduces overly strong prior information. GoalGAN[8] first predicts the goal point and then uses it to guide the GAN network to generate trajectories. However, GoalGAN employs grid-cell to sample goal points, which does not consider the distribution of the goal points.",
|
| 191 |
+
"bbox": [
|
| 192 |
+
89,
|
| 193 |
+
138,
|
| 194 |
+
482,
|
| 195 |
+
393
|
| 196 |
+
],
|
| 197 |
+
"page_idx": 1
|
| 198 |
+
},
|
| 199 |
+
{
|
| 200 |
+
"type": "text",
|
| 201 |
+
"text": "Reviewing previous work, we identified some overlooked issues:(1) Existing end-to-end autonomous driving systems tend to focus heavily on collision and L2 metrics, often adding specific losses or applying post-processing to reduce collision, while overlooking whether the vehicle remains within the drivable area. (2) Most end-to-end methods are based on regression models and aim to achieve multimodality by using different guiding information. However, when the guiding information deviates significantly from the ground truth, it can lead to the generation of low-quality trajectories.",
|
| 202 |
+
"bbox": [
|
| 203 |
+
89,
|
| 204 |
+
397,
|
| 205 |
+
482,
|
| 206 |
+
563
|
| 207 |
+
],
|
| 208 |
+
"page_idx": 1
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"type": "text",
|
| 212 |
+
"text": "GoalFlow can be divided into three parts: Perception Module, Goal Point Construction Module, and Trajectory Planning Module. In the first module, following transfuser[3], images and LiDAR are fed into two separate backbones and fused into BEV feature finally. In the second module, GoalFlow establishes a dense vocabulary of goal points, and a novel scoring mechanism is used to select the optimal goal point that is closest to the ground truth goal point and within a drivable area. In the third module, GoalFlow uses flow matching to model multimodal trajectories efficiently. It conditions scene information and incorporates stronger guidance from the selected goal point. Finally, GoalFlow employs a scoring mechanism to select the optimal trajectory. Compared to directly generating trajectories with diffusion, as in the first row of Fig. 1, our approach provides strong constraints on the trajectory, leading to more reliable results.",
|
| 213 |
+
"bbox": [
|
| 214 |
+
89,
|
| 215 |
+
566,
|
| 216 |
+
482,
|
| 217 |
+
821
|
| 218 |
+
],
|
| 219 |
+
"page_idx": 1
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"type": "text",
|
| 223 |
+
"text": "We conducted experimental validation in Navsim and found that our method outperformed other approaches in overall scoring. Notably, due to our goal point selection mechanism, we achieved a significant improvement in DAC scores. Additionally, we observed that this flow-matching-",
|
| 224 |
+
"bbox": [
|
| 225 |
+
89,
|
| 226 |
+
824,
|
| 227 |
+
482,
|
| 228 |
+
900
|
| 229 |
+
],
|
| 230 |
+
"page_idx": 1
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"type": "text",
|
| 234 |
+
"text": "based approach is robust to the number of denoising steps during inference. Even with only a single denoising step, the score dropped by only $1.6\\%$ compared to the optimal case, enhancing the potential for real-world deployment of generative models in autonomous driving.",
|
| 235 |
+
"bbox": [
|
| 236 |
+
511,
|
| 237 |
+
90,
|
| 238 |
+
903,
|
| 239 |
+
166
|
| 240 |
+
],
|
| 241 |
+
"page_idx": 1
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"type": "text",
|
| 245 |
+
"text": "Our contributions can be summarized as follows:",
|
| 246 |
+
"bbox": [
|
| 247 |
+
532,
|
| 248 |
+
167,
|
| 249 |
+
856,
|
| 250 |
+
180
|
| 251 |
+
],
|
| 252 |
+
"page_idx": 1
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"type": "list",
|
| 256 |
+
"sub_type": "text",
|
| 257 |
+
"list_items": [
|
| 258 |
+
"- We designed a novel approach to establishing goal points, demonstrating its effectiveness in guiding generative models for trajectory generation.",
|
| 259 |
+
"- We introduced flow matching to end-to-end autonomous driving and seamlessly integrated it with goal point guidance.",
|
| 260 |
+
"- We developed an innovative trajectory selection mechanism, using shadow trajectories to further address potential goal point errors.",
|
| 261 |
+
"- Our method achieved state-of-the-art results in Navsim."
|
| 262 |
+
],
|
| 263 |
+
"bbox": [
|
| 264 |
+
511,
|
| 265 |
+
183,
|
| 266 |
+
903,
|
| 267 |
+
333
|
| 268 |
+
],
|
| 269 |
+
"page_idx": 1
|
| 270 |
+
},
|
| 271 |
+
{
|
| 272 |
+
"type": "text",
|
| 273 |
+
"text": "2. Related Work",
|
| 274 |
+
"text_level": 1,
|
| 275 |
+
"bbox": [
|
| 276 |
+
513,
|
| 277 |
+
349,
|
| 278 |
+
653,
|
| 279 |
+
364
|
| 280 |
+
],
|
| 281 |
+
"page_idx": 1
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"type": "text",
|
| 285 |
+
"text": "2.1. End-to-End Autonomous Driving",
|
| 286 |
+
"text_level": 1,
|
| 287 |
+
"bbox": [
|
| 288 |
+
511,
|
| 289 |
+
375,
|
| 290 |
+
805,
|
| 291 |
+
391
|
| 292 |
+
],
|
| 293 |
+
"page_idx": 1
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"type": "text",
|
| 297 |
+
"text": "Earlier end-to-end autonomous driving approaches[5][4] used imitation learning methods, directly extracting features from input images to generate trajectories. Later, Transfuser[3] advanced by fusing lidar and image information during perception, using auxiliary tasks such as mapping and detection to provide supervision for the perception. FusionAD[33] took Transfuser a step further by propagating fused perception features directly to the prediction and planning modules. Other methods [19, 20] align the traffic scene with natural language. UniAD[15] introduced a unified query design that made the framework ultimately planning-oriented. Similarly, VAD[17] focused on a planning-oriented approach by simplifying perception tasks and transforming scene representation into a vectorized format, significantly enhancing both planning capability and efficiency. Building on this, some methods[1, 22] discretized the trajectory space and constructed a trajectory vocabulary, transforming the regression task into a classification task. PARA-Drive[30] performs mapping, planning, motion prediction, and occupancy prediction tasks in parallel. GenAD[35] employed VAE and GRU for temporal trajectory reconstruction, while SparseDrive[27] progressed further in the vectorized scene representation, omitting denser BEV representations. Compared to previous methods that focus on better fitting ground truth trajectories using a regression model, we concentrate on generating high-quality multimodal trajectories in an end-to-end setting.",
|
| 298 |
+
"bbox": [
|
| 299 |
+
511,
|
| 300 |
+
398,
|
| 301 |
+
906,
|
| 302 |
+
821
|
| 303 |
+
],
|
| 304 |
+
"page_idx": 1
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"type": "text",
|
| 308 |
+
"text": "2.2. Diffusion Model and Flow Matching",
|
| 309 |
+
"text_level": 1,
|
| 310 |
+
"bbox": [
|
| 311 |
+
511,
|
| 312 |
+
832,
|
| 313 |
+
828,
|
| 314 |
+
848
|
| 315 |
+
],
|
| 316 |
+
"page_idx": 1
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"type": "text",
|
| 320 |
+
"text": "Early generative models always used VAE[21] and GAN[10] in image generation. Recently, diffusion models that generate images by iteratively adding and remov",
|
| 321 |
+
"bbox": [
|
| 322 |
+
511,
|
| 323 |
+
854,
|
| 324 |
+
903,
|
| 325 |
+
900
|
| 326 |
+
],
|
| 327 |
+
"page_idx": 1
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"type": "image",
|
| 331 |
+
"img_path": "images/4702ea98a40fd53483e624dde46530f2a199dbd4f594c3507f39d17f9ba886f0.jpg",
|
| 332 |
+
"image_caption": [
|
| 333 |
+
"Figure 2. Overview of the GoalFlow architecture. GoalFlow consists of three modules. The Perception Module is responsible for integrating scene information into a BEV feature $F_{bev}$ , the Goal Point Construction Module selects the optimal goal point from Goal Point Vocabulary V as guidance information, and the Trajectory Planning Module generates the trajectories by denoising from the Gaussian distribution to the target distribution. Finally, the Trajectory Scorer selects the optimal trajectory from the candidates."
|
| 334 |
+
],
|
| 335 |
+
"image_footnote": [],
|
| 336 |
+
"bbox": [
|
| 337 |
+
91,
|
| 338 |
+
88,
|
| 339 |
+
898,
|
| 340 |
+
324
|
| 341 |
+
],
|
| 342 |
+
"page_idx": 2
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"type": "text",
|
| 346 |
+
"text": "ing noise have become mainstream. DDPM[14] applies noise to images during training, converting states over time steps, and subsequently denoises them during testing to reconstruct the image. More recent methods[26] have further optimized sampling efficiency. Additionally, CFG[13] has enhanced the robustness of generated outputs. Flow Matching[23] establishes a vector field for transitioning from one distribution to another. Rectified flow[24], a specific form of flow matching, enables a direct, linear transition path between distributions. Compared to diffusion models, rectified flow often requires only a single inference step to achieve good results.",
|
| 347 |
+
"bbox": [
|
| 348 |
+
88,
|
| 349 |
+
417,
|
| 350 |
+
485,
|
| 351 |
+
602
|
| 352 |
+
],
|
| 353 |
+
"page_idx": 2
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"type": "text",
|
| 357 |
+
"text": "2.3. MultiModal Trajectories Generation",
|
| 358 |
+
"text_level": 1,
|
| 359 |
+
"bbox": [
|
| 360 |
+
89,
|
| 361 |
+
618,
|
| 362 |
+
408,
|
| 363 |
+
635
|
| 364 |
+
],
|
| 365 |
+
"page_idx": 2
|
| 366 |
+
},
|
| 367 |
+
{
|
| 368 |
+
"type": "text",
|
| 369 |
+
"text": "In planning tasks, such as manipulation and autonomous driving, a given scenario often offers multiple action options, requiring effective multimodal modeling. Recent works[2, 31] in manipulation have explored this by applying diffusion models with notable success. Autonomous driving has adopted two main multimodal strategies: the first uses discrete commands to guide trajectory generation, such as in VAD[17], which produces three distinct trajectory modes, and SparseDrive[27] and [16], which cluster fixed navigation points from datasets for trajectory guidance. The second approach introduces diffusion models directly to generate multimodal trajectories[18, 29, 32], achieving success in trajectory prediction but facing challenges in end-to-end applications. Building on diffusion models, we address limitations in accuracy and efficiency by incorporating flow matching, using goal points to guide trajectories with precision rather than focusing solely on multimodal diversity.",
|
| 370 |
+
"bbox": [
|
| 371 |
+
89,
|
| 372 |
+
643,
|
| 373 |
+
483,
|
| 374 |
+
902
|
| 375 |
+
],
|
| 376 |
+
"page_idx": 2
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"type": "text",
|
| 380 |
+
"text": "3. Method",
|
| 381 |
+
"text_level": 1,
|
| 382 |
+
"bbox": [
|
| 383 |
+
513,
|
| 384 |
+
416,
|
| 385 |
+
604,
|
| 386 |
+
431
|
| 387 |
+
],
|
| 388 |
+
"page_idx": 2
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"type": "text",
|
| 392 |
+
"text": "3.1. Preliminary",
|
| 393 |
+
"text_level": 1,
|
| 394 |
+
"bbox": [
|
| 395 |
+
511,
|
| 396 |
+
441,
|
| 397 |
+
643,
|
| 398 |
+
458
|
| 399 |
+
],
|
| 400 |
+
"page_idx": 2
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"type": "text",
|
| 404 |
+
"text": "Compared to diffusion, which focuses on learning to reverse the gradual addition of noise over time to recover data, flow matching[23] focuses on learning invertible transformations that map between data distributions. Let $\\pi_0$ denote a simple distribution, typically the standard normal distribution $p(x) = \\mathcal{N}(x|0,I)$ , and let $\\pi_1$ denote the target distribution. Under this framework, rectified flow[24] uses a simple and effective method to construct the path through optimal transport[25] displacement, which we choose as our Flow Matching method.",
|
| 405 |
+
"bbox": [
|
| 406 |
+
511,
|
| 407 |
+
463,
|
| 408 |
+
906,
|
| 409 |
+
613
|
| 410 |
+
],
|
| 411 |
+
"page_idx": 2
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"type": "text",
|
| 415 |
+
"text": "Given $x_0$ sampled from $\\pi_0$ , $x_1$ sampled from $\\pi_1$ , and $t \\in [0, 1]$ , the path from $x_0$ to $x_1$ is defined as a straight line, meaning the intermediate status $x_t$ is given by $(1 - t)x_0 + tx_1$ , with the direction of intermediate status consistently following $x_1 - x_0$ . By constructing a neural network $v_\\theta$ to predict the direction $x_1 - x_0$ based on the current state $x_t$ and time step $t$ , we can obtain a path from the initial distribution $\\pi_0$ to target distribution $\\pi_1$ by optimizing the loss between $v_\\theta(x_t, t)$ and $x_1 - x_0$ . This can be formalized as:",
|
| 416 |
+
"bbox": [
|
| 417 |
+
511,
|
| 418 |
+
614,
|
| 419 |
+
908,
|
| 420 |
+
763
|
| 421 |
+
],
|
| 422 |
+
"page_idx": 2
|
| 423 |
+
},
|
| 424 |
+
{
|
| 425 |
+
"type": "equation",
|
| 426 |
+
"text": "\n$$\nv _ {\\theta} \\left(x _ {t}, t\\right) \\approx \\mathbf {E} _ {x _ {0} \\sim \\pi_ {0}, x _ {1} \\sim \\pi_ {1}} \\left[ v _ {t} \\mid x _ {t} \\right] \\tag {1}\n$$\n",
|
| 427 |
+
"text_format": "latex",
|
| 428 |
+
"bbox": [
|
| 429 |
+
599,
|
| 430 |
+
766,
|
| 431 |
+
903,
|
| 432 |
+
782
|
| 433 |
+
],
|
| 434 |
+
"page_idx": 2
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "equation",
|
| 438 |
+
"text": "\n$$\n\\mathcal {L} (\\theta) = \\mathbf {E} _ {x _ {0} \\sim \\pi_ {0}, x _ {1} \\sim \\pi_ {1}} [ \\| v _ {\\theta} (x _ {t}, t) - (x _ {1} - x _ {0}) \\| _ {2} ] \\tag {2}\n$$\n",
|
| 439 |
+
"text_format": "latex",
|
| 440 |
+
"bbox": [
|
| 441 |
+
537,
|
| 442 |
+
787,
|
| 443 |
+
903,
|
| 444 |
+
806
|
| 445 |
+
],
|
| 446 |
+
"page_idx": 2
|
| 447 |
+
},
|
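
To make the rectified flow objective above concrete, the following is a minimal training-step sketch rather than the authors' released code; the callable name `v_theta` and the noise scale `sigma` are assumptions (the paper later uses a source distribution $\mathcal{N}(0, \sigma^2 I)$):

```python
import torch

def rectified_flow_step(v_theta, x1, sigma=0.1):
    """One rectified-flow training step on a batch x1 ~ pi_1 (cf. Eqs. 1, 2, 5).

    v_theta : callable(x_t, t) -> predicted velocity, same shape as x1
    sigma   : assumed standard deviation of the source distribution pi_0
    """
    x0 = sigma * torch.randn_like(x1)                       # x0 ~ N(0, sigma^2 I)
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    x_t = (1 - t) * x0 + t * x1                              # straight-line interpolation
    v_target = x1 - x0                                       # constant direction along the path
    return torch.nn.functional.l1_loss(v_theta(x_t, t), v_target)  # L1 form used later in Eq. (17)
```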
| 448 |
+
{
|
| 449 |
+
"type": "text",
|
| 450 |
+
"text": "3.2. GoalFlow",
|
| 451 |
+
"text_level": 1,
|
| 452 |
+
"bbox": [
|
| 453 |
+
511,
|
| 454 |
+
814,
|
| 455 |
+
625,
|
| 456 |
+
829
|
| 457 |
+
],
|
| 458 |
+
"page_idx": 2
|
| 459 |
+
},
|
| 460 |
+
{
|
| 461 |
+
"type": "text",
|
| 462 |
+
"text": "3.2.1. Overview",
|
| 463 |
+
"text_level": 1,
|
| 464 |
+
"bbox": [
|
| 465 |
+
511,
|
| 466 |
+
835,
|
| 467 |
+
627,
|
| 468 |
+
849
|
| 469 |
+
],
|
| 470 |
+
"page_idx": 2
|
| 471 |
+
},
|
| 472 |
+
{
|
| 473 |
+
"type": "text",
|
| 474 |
+
"text": "GoalFlow is a goal-driven end-to-end autonomous driving method that can generate high-quality multimodal trajectories. The overall architecture of GoalFlow is illustrated in",
|
| 475 |
+
"bbox": [
|
| 476 |
+
511,
|
| 477 |
+
854,
|
| 478 |
+
906,
|
| 479 |
+
900
|
| 480 |
+
],
|
| 481 |
+
"page_idx": 2
|
| 482 |
+
},
|
| 483 |
+
{
|
| 484 |
+
"type": "text",
|
| 485 |
+
"text": "Figure 2. It comprises three main components. In the Perception Module, we obtain a BEV feature $F_{\\mathrm{bev}}$ that encapsulates environmental information by fusing camera images $I$ , and LiDAR data $L$ . The Goal Point Construction Module focuses on generating precise guidance information for trajectory generation. It accomplishes this by constructing a goal point vocabulary $\\mathbb{V} = \\{g_i\\}^N$ , and employing a scoring mechanism to select the most appropriate goal point $g$ . In the Trajectory Planning Module, we produce a set of multimodal trajectories, $\\mathbb{T} = \\{\\hat{\\tau}_i\\}^M$ , and then identify the optimal trajectory $\\tau$ , through a trajectory scoring mechanism.",
|
| 486 |
+
"bbox": [
|
| 487 |
+
89,
|
| 488 |
+
90,
|
| 489 |
+
480,
|
| 490 |
+
257
|
| 491 |
+
],
|
| 492 |
+
"page_idx": 3
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"type": "text",
|
| 496 |
+
"text": "3.2.2. Perception Module",
|
| 497 |
+
"text_level": 1,
|
| 498 |
+
"bbox": [
|
| 499 |
+
89,
|
| 500 |
+
265,
|
| 501 |
+
269,
|
| 502 |
+
280
|
| 503 |
+
],
|
| 504 |
+
"page_idx": 3
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"type": "text",
|
| 508 |
+
"text": "In the first step, we fuse image and LiDAR data to create a BEV feature, $F_{\\mathrm{bev}}$ , that captures rich road condition information. A single modality often lacks crucial details; for example, LiDAR does not capture traffic light information, while images cannot precisely locate objects. By fusing different sensor modalities, we can achieve a more complete and accurate representation of the road conditions.",
|
| 509 |
+
"bbox": [
|
| 510 |
+
89,
|
| 511 |
+
284,
|
| 512 |
+
482,
|
| 513 |
+
388
|
| 514 |
+
],
|
| 515 |
+
"page_idx": 3
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"type": "text",
|
| 519 |
+
"text": "We adopt the Transfuser architecture [3] for modality fusion. The forward, left, and right camera views are concatenated into a single image $I \\in \\mathbb{R}^{3 \\times H_1 \\times W_1}$ , while Li-DAR data is formed as a tensor $L \\in \\mathbb{R}^{K \\times 3}$ . These inputs are passed through separate backbones, and their features are fused at different layers using multiple transformer blocks. The result is a BEV feature, $F_{\\mathrm{bev}}$ , which comprehensively represents the scene. To ensure effective interaction between the ego vehicle and surrounding objects, as well as map information, we apply auxiliary supervision to the BEV feature through losses derived from HD maps and bounding boxes.",
|
| 520 |
+
"bbox": [
|
| 521 |
+
89,
|
| 522 |
+
388,
|
| 523 |
+
482,
|
| 524 |
+
571
|
| 525 |
+
],
|
| 526 |
+
"page_idx": 3
|
| 527 |
+
},
|
| 528 |
+
{
|
| 529 |
+
"type": "text",
|
| 530 |
+
"text": "3.2.3. Goal Point Construction Module.",
|
| 531 |
+
"text_level": 1,
|
| 532 |
+
"bbox": [
|
| 533 |
+
89,
|
| 534 |
+
579,
|
| 535 |
+
367,
|
| 536 |
+
593
|
| 537 |
+
],
|
| 538 |
+
"page_idx": 3
|
| 539 |
+
},
|
| 540 |
+
{
|
| 541 |
+
"type": "text",
|
| 542 |
+
"text": "In this module, we construct a precise goal point to guide the trajectory generation process. Diffusion-based approach[18, 32] without constraints often leads to excessive trajectory divergence, which complicates trajectory selection. Our key observation is that a goal point contains a precise description of the short-term future position, which imposes a strong constraint on the generation model. As a result, we divide the traditional Planning Module into two steps: first, constructing a precise goal point, and second, generating the trajectory through planning.",
|
| 543 |
+
"bbox": [
|
| 544 |
+
89,
|
| 545 |
+
598,
|
| 546 |
+
482,
|
| 547 |
+
750
|
| 548 |
+
],
|
| 549 |
+
"page_idx": 3
|
| 550 |
+
},
|
| 551 |
+
{
|
| 552 |
+
"type": "text",
|
| 553 |
+
"text": "Goal Point Vocabulary. We aim to construct a goal point set that provides candidates for the optimal goal point. Traditional goal-based methods[11, 34], rely on lane-level information from HD map to generate goal point sets for trajectory prediction. However, HD maps are expensive, making lane information often unavailable in end-to-end driving. Inspired by VADv2[1], we discretize the endpoint space of trajectories to generate candidate goal points, enabling a solution without relying on HD maps. We clustered trajectory endpoints $\\mathbf{p}_i = (x_i, y_i, \\theta_i)$ in the training data",
|
| 554 |
+
"bbox": [
|
| 555 |
+
89,
|
| 556 |
+
750,
|
| 557 |
+
482,
|
| 558 |
+
901
|
| 559 |
+
],
|
| 560 |
+
"page_idx": 3
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"type": "text",
|
| 564 |
+
"text": "to create $N$ cluster centers, which form our goal point vocabulary $\\mathbb{V}$ . Each endpoint $p_i$ represents a position $(x_i, y_i)$ and heading $\\theta_i$ . To ensure that the vocabulary represents finer-grained locations, we typically set $N$ to a large value, generally 4096 or 8192.",
|
| 565 |
+
"bbox": [
|
| 566 |
+
511,
|
| 567 |
+
90,
|
| 568 |
+
903,
|
| 569 |
+
165
|
| 570 |
+
],
|
| 571 |
+
"page_idx": 3
|
| 572 |
+
},
|
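
As a rough illustration of how such a vocabulary could be built, off-the-shelf k-means over the collected endpoints suffices; the function name and the use of scikit-learn are assumptions, not the paper's released pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_goal_vocabulary(endpoints: np.ndarray, n_clusters: int = 4096) -> np.ndarray:
    """Cluster training-set trajectory endpoints (x, y, theta) into a goal point vocabulary V.

    endpoints : (num_trajectories, 3) array of endpoint position and heading
    returns   : (n_clusters, 3) cluster centers used as candidate goal points
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(endpoints)
    return km.cluster_centers_
```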
| 573 |
+
{
|
| 574 |
+
"type": "text",
|
| 575 |
+
"text": "Goal Point Scorer. High-quality trajectories typically exhibit the following characteristics: A small distance to the ground truth and within the drivable area. To achieve this, we evaluate each goal point $g_{i}$ in the vocabulary $\\mathbb{V}$ using two distinct scores: the Distance Score $\\hat{\\delta}^{\\mathrm{dis}}$ and the Drivable Area Compliance Score $\\hat{\\delta}^{\\mathrm{dac}}$ . The Distance Score measures the proximity between the goal point $g_{i}$ and the endpoint of ground truth trajectory $g^{\\mathrm{gt}}$ , with a continuous value in the range $\\hat{\\delta}^{\\mathrm{dis}} \\in [0,1]$ , where a higher value indicates a closer match to $g^{\\mathrm{gt}}$ . The Drivable Area Compliance Score ensures that the goal point lies within the drivable area, using a binary value $\\hat{\\delta}^{\\mathrm{dac}} \\in \\{0,1\\}$ , where 1 indicates that the goal point is valid within the drivable area, and 0 indicates it is not.",
|
| 576 |
+
"bbox": [
|
| 577 |
+
511,
|
| 578 |
+
166,
|
| 579 |
+
906,
|
| 580 |
+
376
|
| 581 |
+
],
|
| 582 |
+
"page_idx": 3
|
| 583 |
+
},
|
| 584 |
+
{
|
| 585 |
+
"type": "text",
|
| 586 |
+
"text": "To construct the target distance score $\\delta_i^{\\mathrm{dis}}$ , we utilize the softmax function to map the Euclidean distance between the goal point $g_{i}$ and the ground truth goal point $g^{\\mathrm{gt}}$ to the interval [0, 1]. This is defined as:",
|
| 587 |
+
"bbox": [
|
| 588 |
+
511,
|
| 589 |
+
377,
|
| 590 |
+
905,
|
| 591 |
+
439
|
| 592 |
+
],
|
| 593 |
+
"page_idx": 3
|
| 594 |
+
},
|
| 595 |
+
{
|
| 596 |
+
"type": "equation",
|
| 597 |
+
"text": "\n$$\n\\delta_ {i} ^ {\\mathrm {d i s}} = \\frac {\\exp \\left(- \\| g _ {i} - g ^ {\\mathrm {g t}} \\| _ {2}\\right)}{\\sum_ {j} \\exp \\left(- \\| g _ {j} - g ^ {\\mathrm {g t}} \\| _ {2}\\right)} \\tag {3}\n$$\n",
|
| 598 |
+
"text_format": "latex",
|
| 599 |
+
"bbox": [
|
| 600 |
+
604,
|
| 601 |
+
449,
|
| 602 |
+
903,
|
| 603 |
+
486
|
| 604 |
+
],
|
| 605 |
+
"page_idx": 3
|
| 606 |
+
},
|
| 607 |
+
{
|
| 608 |
+
"type": "text",
|
| 609 |
+
"text": "For the target drivable area compliance score $\\delta_i^{\\mathrm{dac}}$ , we introduce a shadow vehicle, whose bounding box is determined based on the position and heading $(x_i, y_i, \\theta_i)$ in $g_i$ and the shape of the ego vehicle. Let $\\{p^j\\}^4$ represent the set of four corner positions of the shadow vehicle, and let $\\mathbb{D}$ denote the polygon representing the drivable area. The drivable area compliance score $\\delta_i^{\\mathrm{dac}}$ is defined as:",
|
| 610 |
+
"bbox": [
|
| 611 |
+
511,
|
| 612 |
+
492,
|
| 613 |
+
905,
|
| 614 |
+
599
|
| 615 |
+
],
|
| 616 |
+
"page_idx": 3
|
| 617 |
+
},
|
| 618 |
+
{
|
| 619 |
+
"type": "equation",
|
| 620 |
+
"text": "\n$$\n\\delta_ {i} ^ {\\mathrm {d a c}} = \\left\\{ \\begin{array}{l l} 1, & \\text {i f \\forall j , p ^ {j} \\in \\mathbb {D} ^ {\\circ}} \\\\ 0, & \\text {o t h e r w i s e} \\end{array} \\right.\n$$\n",
|
| 621 |
+
"text_format": "latex",
|
| 622 |
+
"bbox": [
|
| 623 |
+
612,
|
| 624 |
+
609,
|
| 625 |
+
802,
|
| 626 |
+
651
|
| 627 |
+
],
|
| 628 |
+
"page_idx": 3
|
| 629 |
+
},
|
| 630 |
+
{
|
| 631 |
+
"type": "text",
|
| 632 |
+
"text": "We compute the final score $\\hat{\\delta}_i^{\\mathrm{final}}$ by aggregating $\\hat{\\delta}_i^{\\mathrm{dis}}$ and $\\hat{\\delta}_i^{\\mathrm{dac}}$ . The goal point with the highest final score is selected for trajectory generation.",
|
| 633 |
+
"bbox": [
|
| 634 |
+
511,
|
| 635 |
+
662,
|
| 636 |
+
905,
|
| 637 |
+
710
|
| 638 |
+
],
|
| 639 |
+
"page_idx": 3
|
| 640 |
+
},
|
| 641 |
+
{
|
| 642 |
+
"type": "equation",
|
| 643 |
+
"text": "\n$$\n\\hat {\\delta} _ {i} ^ {\\mathrm {f i n a l}} = w _ {1} \\log \\hat {\\delta} _ {i} ^ {\\mathrm {d i s}} + w _ {2} \\log \\hat {\\delta} _ {i} ^ {\\mathrm {d a c}}\n$$\n",
|
| 644 |
+
"text_format": "latex",
|
| 645 |
+
"bbox": [
|
| 646 |
+
601,
|
| 647 |
+
722,
|
| 648 |
+
815,
|
| 649 |
+
743
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 3
|
| 652 |
+
},
|
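
Putting Eq. (3), the binary compliance score, and the weighted log aggregation together, a small numpy sketch of the target scores could look as follows; the weights `w1`, `w2` and the epsilon guard are assumed values:

```python
import numpy as np

def goal_point_scores(vocab_xy, gt_xy, in_drivable, w1=1.0, w2=1.0, eps=1e-6):
    """Target scores for every goal point in the vocabulary.

    vocab_xy    : (N, 2) goal-point positions
    gt_xy       : (2,) endpoint of the ground-truth trajectory
    in_drivable : (N,) boolean, True if the shadow vehicle at g_i stays in the drivable area
    """
    d = np.linalg.norm(vocab_xy - gt_xy, axis=1)
    delta_dis = np.exp(-d) / np.exp(-d).sum()          # Eq. (3): softmax over negative distance
    delta_dac = in_drivable.astype(float)               # binary drivable-area compliance
    # weighted log combination; eps avoids log(0) for points outside the drivable area
    delta_final = w1 * np.log(delta_dis + eps) + w2 * np.log(delta_dac + eps)
    return delta_dis, delta_dac, delta_final
```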
| 653 |
+
{
|
| 654 |
+
"type": "text",
|
| 655 |
+
"text": "As shown in Fig.3(a), the Transformer-based Scorer Decoder uses the result of adding $F_{v}$ and $F_{\\mathrm{ego}}$ as the query, with $F_{\\mathrm{bev}}$ as the key and value. The output is passed through two separate MLPs to produce the scores $\\hat{\\delta}^{dis}$ and $\\hat{\\delta}^{dac}$ for each point in the V. Fig.3(b) shows the distribution of these two scores. With the points in warmer colors representing higher scores, we observe that score $\\hat{\\delta}^{dis}$ effectively indicates the desired future position, while $\\hat{\\delta}^{dac}$ identifies if the goal point is within the drivable area.",
|
| 656 |
+
"bbox": [
|
| 657 |
+
511,
|
| 658 |
+
763,
|
| 659 |
+
906,
|
| 660 |
+
901
|
| 661 |
+
],
|
| 662 |
+
"page_idx": 3
|
| 663 |
+
},
|
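
A minimal PyTorch sketch of such a scorer decoder is given below; the layer sizes, single attention layer, and head structure are placeholders rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GoalPointScorer(nn.Module):
    """Cross-attention scorer: query = F_v + F_ego, key/value = F_bev, two score heads."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.dis_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))
        self.dac_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, f_v, f_ego, f_bev):
        # f_v: (B, N, D) vocabulary embeddings, f_ego: (B, 1, D), f_bev: (B, HW, D) flattened BEV tokens
        query = f_v + f_ego                              # broadcast the ego feature onto every goal token
        out, _ = self.attn(query, f_bev, f_bev)          # cross-attention over the BEV feature
        dis_logits = self.dis_head(out).squeeze(-1)      # distance score head (softmax over N in the loss)
        dac_logits = self.dac_head(out).squeeze(-1)      # drivable-area compliance head (sigmoid per point)
        return dis_logits, dac_logits
```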
| 664 |
+
{
|
| 665 |
+
"type": "image",
|
| 666 |
+
"img_path": "images/93cb2c5ca6f5fc4750e946ccb42a40431e84224092685e5159af196166703f4d.jpg",
|
| 667 |
+
"image_caption": [
|
| 668 |
+
"(a)"
|
| 669 |
+
],
|
| 670 |
+
"image_footnote": [],
|
| 671 |
+
"bbox": [
|
| 672 |
+
93,
|
| 673 |
+
85,
|
| 674 |
+
547,
|
| 675 |
+
306
|
| 676 |
+
],
|
| 677 |
+
"page_idx": 4
|
| 678 |
+
},
|
| 679 |
+
{
|
| 680 |
+
"type": "image",
|
| 681 |
+
"img_path": "images/cc506e08880ffdea26b407a2acd06669354965431a0fdfb2533e9910e6c34b97.jpg",
|
| 682 |
+
"image_caption": [
|
| 683 |
+
"(b)"
|
| 684 |
+
],
|
| 685 |
+
"image_footnote": [],
|
| 686 |
+
"bbox": [
|
| 687 |
+
550,
|
| 688 |
+
87,
|
| 689 |
+
885,
|
| 690 |
+
306
|
| 691 |
+
],
|
| 692 |
+
"page_idx": 4
|
| 693 |
+
},
|
| 694 |
+
{
|
| 695 |
+
"type": "image",
|
| 696 |
+
"img_path": "images/1678738b8dee2c78bf97a35a85142ca65b71b93c2664685e35cde1e8510eb0bd.jpg",
|
| 697 |
+
"image_caption": [
|
| 698 |
+
"Figure 3. Goal Point Scorer. (a) shows the detailed structure of the Goal Point Construction Module, and (b) presents the score distributions of $\\{\\hat{\\delta}_i^{dis}\\}^N$ , $\\{\\hat{\\delta}_i^{dac}\\}^N$ , and $\\{\\hat{\\delta}_i^{final}\\}^N$ , where points with higher scores are highlighted with warmer color.",
|
| 699 |
+
"Figure 4. The network architecture used in Rectified Flow."
|
| 700 |
+
],
|
| 701 |
+
"image_footnote": [],
|
| 702 |
+
"bbox": [
|
| 703 |
+
94,
|
| 704 |
+
396,
|
| 705 |
+
498,
|
| 706 |
+
570
|
| 707 |
+
],
|
| 708 |
+
"page_idx": 4
|
| 709 |
+
},
|
| 710 |
+
{
|
| 711 |
+
"type": "text",
|
| 712 |
+
"text": "3.2.4. Trajectory Planning Module",
|
| 713 |
+
"text_level": 1,
|
| 714 |
+
"bbox": [
|
| 715 |
+
89,
|
| 716 |
+
622,
|
| 717 |
+
334,
|
| 718 |
+
638
|
| 719 |
+
],
|
| 720 |
+
"page_idx": 4
|
| 721 |
+
},
|
| 722 |
+
{
|
| 723 |
+
"type": "text",
|
| 724 |
+
"text": "In this module, we generate constrained, high-quality trajectory candidates using a generative model and then select the optimal trajectory through a scoring mechanism. Generative models based on diffusion methods like DDPM[14] and DDIM[26] typically require complex denoising paths, leading to significant time overhead during inference, which makes them unsuitable for real-time systems like autonomous driving. In contrast, Rectified Flow[24], which is based on the optimal transport path in flow matching, requires much fewer inference steps to achieve good results. We adopt Rectified Flow as the generative model, using the BEV feature and goal point as conditions to generate multimodal trajectories.",
|
| 725 |
+
"bbox": [
|
| 726 |
+
88,
|
| 727 |
+
642,
|
| 728 |
+
482,
|
| 729 |
+
838
|
| 730 |
+
],
|
| 731 |
+
"page_idx": 4
|
| 732 |
+
},
|
| 733 |
+
{
|
| 734 |
+
"type": "text",
|
| 735 |
+
"text": "Multimodal Trajectories Generating. We generate multimodal trajectories by modeling the shift from the noise distribution to the target trajectory distribution. During this distribution transfer process, given the current state $x_{t}$ and",
|
| 736 |
+
"bbox": [
|
| 737 |
+
89,
|
| 738 |
+
839,
|
| 739 |
+
483,
|
| 740 |
+
901
|
| 741 |
+
],
|
| 742 |
+
"page_idx": 4
|
| 743 |
+
},
|
| 744 |
+
{
|
| 745 |
+
"type": "text",
|
| 746 |
+
"text": "time step $t$ , we predict the shift $\\mathbf{v_t}$ .",
|
| 747 |
+
"bbox": [
|
| 748 |
+
511,
|
| 749 |
+
398,
|
| 750 |
+
746,
|
| 751 |
+
412
|
| 752 |
+
],
|
| 753 |
+
"page_idx": 4
|
| 754 |
+
},
|
| 755 |
+
{
|
| 756 |
+
"type": "equation",
|
| 757 |
+
"text": "\n$$\n\\mathbf {v} _ {\\mathbf {t}} = \\tau^ {\\text {n o r m}} - x _ {0} \\tag {4}\n$$\n",
|
| 758 |
+
"text_format": "latex",
|
| 759 |
+
"bbox": [
|
| 760 |
+
647,
|
| 761 |
+
428,
|
| 762 |
+
903,
|
| 763 |
+
443
|
| 764 |
+
],
|
| 765 |
+
"page_idx": 4
|
| 766 |
+
},
|
| 767 |
+
{
|
| 768 |
+
"type": "equation",
|
| 769 |
+
"text": "\n$$\nx _ {t} = (1 - t) x _ {0} + t \\tau^ {\\text {n o r m}} \\tag {5}\n$$\n",
|
| 770 |
+
"text_format": "latex",
|
| 771 |
+
"bbox": [
|
| 772 |
+
622,
|
| 773 |
+
450,
|
| 774 |
+
903,
|
| 775 |
+
465
|
| 776 |
+
],
|
| 777 |
+
"page_idx": 4
|
| 778 |
+
},
|
| 779 |
+
{
|
| 780 |
+
"type": "equation",
|
| 781 |
+
"text": "\n$$\n\\tau^ {\\text {n o r m}} = \\mathcal {H} \\left(\\tau^ {g t}\\right) \\tag {6}\n$$\n",
|
| 782 |
+
"text_format": "latex",
|
| 783 |
+
"bbox": [
|
| 784 |
+
651,
|
| 785 |
+
472,
|
| 786 |
+
903,
|
| 787 |
+
489
|
| 788 |
+
],
|
| 789 |
+
"page_idx": 4
|
| 790 |
+
},
|
| 791 |
+
{
|
| 792 |
+
"type": "text",
|
| 793 |
+
"text": "Where $\\tau^{gt}$ is the ground truth trajectory and $\\tau^{norm}$ is its normalized form. We define $\\mathcal{H}(\\cdot)$ as the normalization operation applied to the trajectory. The variable $x_0$ represents the noise distribution, which follows $x_0\\sim \\mathcal{N}(0,\\sigma^2 I)$ . The variable $x_{t}$ is obtained by linearly interpolating between $x_0$ and $\\tau^{norm}$ .",
|
| 794 |
+
"bbox": [
|
| 795 |
+
511,
|
| 796 |
+
496,
|
| 797 |
+
905,
|
| 798 |
+
585
|
| 799 |
+
],
|
| 800 |
+
"page_idx": 4
|
| 801 |
+
},
|
| 802 |
+
{
|
| 803 |
+
"type": "text",
|
| 804 |
+
"text": "As illustrated in Fig.4, we extract different features through a series of encoders. Specifically, we encode $x_{t}$ using a linear layer, while $t$ and the goal point are transformed into feature vectors via sinusoidal encoding. The feature $F_{\\mathrm{env}}$ is obtained by passing the information from $F_{\\mathrm{bev}}$ and $F_{\\mathrm{ego}}$ through the environment encoder.",
|
| 805 |
+
"bbox": [
|
| 806 |
+
511,
|
| 807 |
+
587,
|
| 808 |
+
906,
|
| 809 |
+
679
|
| 810 |
+
],
|
| 811 |
+
"page_idx": 4
|
| 812 |
+
},
|
| 813 |
+
{
|
| 814 |
+
"type": "equation",
|
| 815 |
+
"text": "\n$$\nF _ {\\text {e n v}} = E _ {\\text {e n v}} \\left(Q, \\left(F _ {\\text {B E V}} + F _ {\\text {e g o}}\\right), \\left(F _ {\\text {B E V}} + F _ {\\text {e g o}}\\right)\\right) \\tag {7}\n$$\n",
|
| 816 |
+
"text_format": "latex",
|
| 817 |
+
"bbox": [
|
| 818 |
+
553,
|
| 819 |
+
705,
|
| 820 |
+
903,
|
| 821 |
+
722
|
| 822 |
+
],
|
| 823 |
+
"page_idx": 4
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "text",
|
| 827 |
+
"text": "Here, $E_{\\mathrm{env}}$ refers to a Transformer-based encoder, $Q$ denotes a learnable embedding, and $F_{\\mathrm{ego}}$ represents the ego status feature, which encodes the kinematic information of the ego vehicle.",
|
| 828 |
+
"bbox": [
|
| 829 |
+
511,
|
| 830 |
+
733,
|
| 831 |
+
906,
|
| 832 |
+
794
|
| 833 |
+
],
|
| 834 |
+
"page_idx": 4
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "text",
|
| 838 |
+
"text": "We concatenate the features $F_{\\mathrm{env}}$ , $F_{\\mathrm{goal}}$ , $F_{\\mathrm{trail}}$ , and $F_{\\mathrm{t}}$ to form the overall feature $F_{\\mathrm{all}}$ , which encapsulates the current state, time step, and scene information. This combined feature is then passed through several attention layers to predict the distribution shift $\\mathbf{v}_{\\mathrm{t}}$ .",
|
| 839 |
+
"bbox": [
|
| 840 |
+
511,
|
| 841 |
+
794,
|
| 842 |
+
905,
|
| 843 |
+
869
|
| 844 |
+
],
|
| 845 |
+
"page_idx": 4
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "equation",
|
| 849 |
+
"text": "\n$$\n\\hat {\\mathbf {v}} _ {\\mathbf {t}} = \\mathcal {G} \\left(F _ {\\text {a l l}}, F _ {\\text {a l l}}, F _ {\\text {a l l}}\\right) \\tag {8}\n$$\n",
|
| 850 |
+
"text_format": "latex",
|
| 851 |
+
"bbox": [
|
| 852 |
+
633,
|
| 853 |
+
885,
|
| 854 |
+
903,
|
| 855 |
+
901
|
| 856 |
+
],
|
| 857 |
+
"page_idx": 4
|
| 858 |
+
},
|
| 859 |
+
{
|
| 860 |
+
"type": "equation",
|
| 861 |
+
"text": "\n$$\nF _ {\\text {a l l}} = \\operatorname {C o n c a t} \\left(F _ {\\text {e n v}}, F _ {\\text {g o a l}}, F _ {\\text {t r a j}}, F _ {t}\\right) \\tag {9}\n$$\n",
|
| 862 |
+
"text_format": "latex",
|
| 863 |
+
"bbox": [
|
| 864 |
+
171,
|
| 865 |
+
90,
|
| 866 |
+
483,
|
| 867 |
+
108
|
| 868 |
+
],
|
| 869 |
+
"page_idx": 5
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "text",
|
| 873 |
+
"text": "Where $\\mathcal{G}$ is the network that consists of N attention layers.",
|
| 874 |
+
"bbox": [
|
| 875 |
+
89,
|
| 876 |
+
116,
|
| 877 |
+
475,
|
| 878 |
+
131
|
| 879 |
+
],
|
| 880 |
+
"page_idx": 5
|
| 881 |
+
},
|
| 882 |
+
{
|
| 883 |
+
"type": "text",
|
| 884 |
+
"text": "We reconstruct the trajectory distribution using $x_0$ and $\\hat{\\mathbf{v}}_{\\mathbf{t}}$ . Typically, we achieve this by performing multiple inference steps through the Rectified Flow, gradually transforming the noise distribution $x_0$ to the target distribution $\\tau^{\\mathrm{norm}}$ . Finally, we apply denormalization to $\\tau^{\\mathrm{norm}}$ to obtain the final trajectory $\\hat{\\tau}$ .",
|
| 885 |
+
"bbox": [
|
| 886 |
+
89,
|
| 887 |
+
132,
|
| 888 |
+
483,
|
| 889 |
+
223
|
| 890 |
+
],
|
| 891 |
+
"page_idx": 5
|
| 892 |
+
},
|
| 893 |
+
{
|
| 894 |
+
"type": "equation",
|
| 895 |
+
"text": "\n$$\n\\hat {\\tau} = \\mathcal {H} ^ {- 1} \\left(\\hat {\\tau} ^ {n o r m}\\right) \\tag {10}\n$$\n",
|
| 896 |
+
"text_format": "latex",
|
| 897 |
+
"bbox": [
|
| 898 |
+
225,
|
| 899 |
+
234,
|
| 900 |
+
483,
|
| 901 |
+
252
|
| 902 |
+
],
|
| 903 |
+
"page_idx": 5
|
| 904 |
+
},
|
| 905 |
+
{
|
| 906 |
+
"type": "equation",
|
| 907 |
+
"text": "\n$$\n\\hat {\\tau} ^ {n o r m} = x _ {0} + \\frac {1}{n} \\sum_ {i} ^ {n} \\hat {v} _ {t _ {i}} \\tag {11}\n$$\n",
|
| 908 |
+
"text_format": "latex",
|
| 909 |
+
"bbox": [
|
| 910 |
+
202,
|
| 911 |
+
266,
|
| 912 |
+
483,
|
| 913 |
+
305
|
| 914 |
+
],
|
| 915 |
+
"page_idx": 5
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "text",
|
| 919 |
+
"text": "Where $n$ is the total inference steps, and $t_i$ is the time step sampled in the $i$ -th step, which satisfies $t_i \\in [0,1]$ . $\\mathcal{H}^{-1}(\\cdot)$ is the denormalization operation.",
|
| 920 |
+
"bbox": [
|
| 921 |
+
89,
|
| 922 |
+
314,
|
| 923 |
+
483,
|
| 924 |
+
359
|
| 925 |
+
],
|
| 926 |
+
"page_idx": 5
|
| 927 |
+
},
|
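
Read literally, Eq. (11) is an Euler-style integration of the learned velocity field. A compact sketch with uniform time steps for simplicity (conditioning on the BEV feature and goal point is assumed to be bound inside `v_theta`):

```python
import torch

@torch.no_grad()
def generate_trajectory(v_theta, x0, n_steps: int = 5, denorm=lambda x: x):
    """Integrate the learned velocity field from noise x0 toward a trajectory (cf. Eqs. 10-11).

    v_theta : network predicting v_t from (x_t, t)
    x0      : initial noise sample, e.g. shape (B, horizon, 2)
    denorm  : inverse of the trajectory normalization H(.)
    """
    x = x0
    for i in range(n_steps):
        t = torch.full((x.shape[0], 1, 1), i / n_steps, device=x.device)
        x = x + v_theta(x, t) / n_steps      # one Euler step along the predicted direction
    return denorm(x)                          # Eq. (10): map back to metric coordinates
```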
| 928 |
+
{
|
| 929 |
+
"type": "text",
|
| 930 |
+
"text": "Trajectory Selecting In trajectory selection, methods like SparseDrive[27] and Diffusion-ES[32] rely on kinematic simulation of the generated trajectories to predict potential collisions with surrounding agents, thus selecting the optimal trajectory. This process significantly increases the inference time. We simplify this procedure by using the goal point as a reference for selecting the trajectory. Specifically, we trade off the trajectory distance to the goal point and ego progress, selecting the optimal trajectory through a trajectory scorer.",
|
| 931 |
+
"bbox": [
|
| 932 |
+
89,
|
| 933 |
+
359,
|
| 934 |
+
483,
|
| 935 |
+
511
|
| 936 |
+
],
|
| 937 |
+
"page_idx": 5
|
| 938 |
+
},
|
| 939 |
+
{
|
| 940 |
+
"type": "equation",
|
| 941 |
+
"text": "\n$$\nf \\left(\\hat {\\tau} _ {i}\\right) = - \\lambda_ {1} \\Phi \\left(f _ {d i s} \\left(\\hat {\\tau} _ {i}\\right)\\right) + \\lambda_ {2} \\Phi \\left(f _ {p g} \\left(\\hat {\\tau} _ {i}\\right)\\right) \\tag {12}\n$$\n",
|
| 942 |
+
"text_format": "latex",
|
| 943 |
+
"bbox": [
|
| 944 |
+
148,
|
| 945 |
+
525,
|
| 946 |
+
482,
|
| 947 |
+
542
|
| 948 |
+
],
|
| 949 |
+
"page_idx": 5
|
| 950 |
+
},
|
| 951 |
+
{
|
| 952 |
+
"type": "text",
|
| 953 |
+
"text": "where $\\Phi (\\cdot)$ is the minimax operation. $f_{dis}(\\hat{\\tau}_i)$ presents the $\\mathcal{L}_2$ distance of $\\hat{\\tau}_i$ and $g$ , and $f_{pg}(\\hat{\\tau}_i)$ presents the $\\mathcal{L}_2$ distance of progress of $\\hat{\\tau}_i$ make.",
|
| 954 |
+
"bbox": [
|
| 955 |
+
89,
|
| 956 |
+
554,
|
| 957 |
+
482,
|
| 958 |
+
599
|
| 959 |
+
],
|
| 960 |
+
"page_idx": 5
|
| 961 |
+
},
|
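
A compact sketch of this selection rule (Eq. 12), interpreting $\Phi(\cdot)$ as min-max normalization over the candidate set and treating the weights as assumed defaults:

```python
import numpy as np

def select_trajectory(trajs, goal, lam1=1.0, lam2=1.0):
    """Pick the best trajectory from (M, horizon, 2) candidates given a goal point (2,)."""
    def minmax(v):                                   # Phi(.): normalize scores to [0, 1]
        return (v - v.min()) / (v.max() - v.min() + 1e-6)

    dist_to_goal = np.linalg.norm(trajs[:, -1, :] - goal, axis=1)        # f_dis: endpoint-to-goal distance
    progress = np.linalg.norm(trajs[:, -1, :] - trajs[:, 0, :], axis=1)  # f_pg: progress the trajectory makes
    score = -lam1 * minmax(dist_to_goal) + lam2 * minmax(progress)       # Eq. (12)
    return trajs[np.argmax(score)]
```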
| 962 |
+
{
|
| 963 |
+
"type": "text",
|
| 964 |
+
"text": "Furthermore, predicted goal point may contain an error that can misguide the trajectory. To mitigate this, we mask the goal point during generation to create a shadow trajectory. If the shadow trajectory deviates significantly from the main trajectory, we treat the goal point as unreliable and use the shadow as the output.",
|
| 965 |
+
"bbox": [
|
| 966 |
+
89,
|
| 967 |
+
601,
|
| 968 |
+
483,
|
| 969 |
+
691
|
| 970 |
+
],
|
| 971 |
+
"page_idx": 5
|
| 972 |
+
},
|
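
A small sketch of this fallback check; the deviation measure and the threshold value are assumptions:

```python
import numpy as np

def goal_reliability_fallback(traj_with_goal, traj_shadow, threshold: float = 2.0):
    """Fall back to the goal-masked 'shadow' trajectory when the goal point looks unreliable.

    traj_with_goal, traj_shadow : (horizon, 2) trajectories generated with and without the goal
    threshold                   : assumed maximum tolerated deviation in meters
    """
    deviation = np.linalg.norm(traj_with_goal - traj_shadow, axis=1).max()
    return traj_shadow if deviation > threshold else traj_with_goal
```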
| 973 |
+
{
|
| 974 |
+
"type": "text",
|
| 975 |
+
"text": "3.2.5. Training Losses",
|
| 976 |
+
"text_level": 1,
|
| 977 |
+
"bbox": [
|
| 978 |
+
89,
|
| 979 |
+
700,
|
| 980 |
+
246,
|
| 981 |
+
715
|
| 982 |
+
],
|
| 983 |
+
"page_idx": 5
|
| 984 |
+
},
|
| 985 |
+
{
|
| 986 |
+
"type": "text",
|
| 987 |
+
"text": "Firstly, we optimize the perception extractor exclusively, and enforce multiple perception losses for supervision, including the cross-entropy loss for HD map $(L_{HD})$ and 3D bounding box classification $(L_{bbox})$ and $L_{1}$ loss for 3D bounding box locations $(L_{loc})$ . This stage aims to enrich the BEV feature with information on various perceptions. Losses are as follows.",
|
| 988 |
+
"bbox": [
|
| 989 |
+
89,
|
| 990 |
+
720,
|
| 991 |
+
483,
|
| 992 |
+
825
|
| 993 |
+
],
|
| 994 |
+
"page_idx": 5
|
| 995 |
+
},
|
| 996 |
+
{
|
| 997 |
+
"type": "equation",
|
| 998 |
+
"text": "\n$$\nL _ {\\text {p e r c e p t i o n}} = w _ {1} * L _ {H D} + w _ {2} * L _ {b b o x} + w _ {3} * L _ {l o c} \\tag {13}\n$$\n",
|
| 999 |
+
"text_format": "latex",
|
| 1000 |
+
"bbox": [
|
| 1001 |
+
101,
|
| 1002 |
+
840,
|
| 1003 |
+
482,
|
| 1004 |
+
858
|
| 1005 |
+
],
|
| 1006 |
+
"page_idx": 5
|
| 1007 |
+
},
|
| 1008 |
+
{
|
| 1009 |
+
"type": "text",
|
| 1010 |
+
"text": "where $w_{1}, w_{2}, w_{3}$ are set to 10.0, 1.0, 10.0 in training. For the goal constructor, we employ the cross entropy loss for",
|
| 1011 |
+
"bbox": [
|
| 1012 |
+
89,
|
| 1013 |
+
869,
|
| 1014 |
+
483,
|
| 1015 |
+
901
|
| 1016 |
+
],
|
| 1017 |
+
"page_idx": 5
|
| 1018 |
+
},
|
| 1019 |
+
{
|
| 1020 |
+
"type": "text",
|
| 1021 |
+
"text": "distance score $(L_{dis})$ and DAC score $(L_{dac})$ . $w_4, w_5$ are set to 1.0 and 0.005.",
|
| 1022 |
+
"bbox": [
|
| 1023 |
+
511,
|
| 1024 |
+
90,
|
| 1025 |
+
906,
|
| 1026 |
+
119
|
| 1027 |
+
],
|
| 1028 |
+
"page_idx": 5
|
| 1029 |
+
},
|
| 1030 |
+
{
|
| 1031 |
+
"type": "equation",
|
| 1032 |
+
"text": "\n$$\nL _ {\\text {g o a l}} = w _ {4} * L _ {\\text {d i s}} + w _ {5} * L _ {\\text {d a c}} \\tag {14}\n$$\n",
|
| 1033 |
+
"text_format": "latex",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
602,
|
| 1036 |
+
133,
|
| 1037 |
+
906,
|
| 1038 |
+
150
|
| 1039 |
+
],
|
| 1040 |
+
"page_idx": 5
|
| 1041 |
+
},
|
| 1042 |
+
{
|
| 1043 |
+
"type": "equation",
|
| 1044 |
+
"text": "\n$$\nL _ {d i s} = - \\sum_ {i = 1} ^ {N} \\delta_ {i} ^ {d i s} \\log \\left(\\hat {\\delta_ {i}} ^ {d i s}\\right) \\tag {15}\n$$\n",
|
| 1045 |
+
"text_format": "latex",
|
| 1046 |
+
"bbox": [
|
| 1047 |
+
614,
|
| 1048 |
+
162,
|
| 1049 |
+
905,
|
| 1050 |
+
203
|
| 1051 |
+
],
|
| 1052 |
+
"page_idx": 5
|
| 1053 |
+
},
|
| 1054 |
+
{
|
| 1055 |
+
"type": "equation",
|
| 1056 |
+
"text": "\n$$\nL _ {d a c} = - \\delta^ {d a c} \\log \\hat {\\delta} ^ {d a c} - (1 - \\delta^ {d a c}) \\log \\left(1 - \\hat {\\delta} ^ {d a c}\\right) \\tag {16}\n$$\n",
|
| 1057 |
+
"text_format": "latex",
|
| 1058 |
+
"bbox": [
|
| 1059 |
+
531,
|
| 1060 |
+
210,
|
| 1061 |
+
905,
|
| 1062 |
+
229
|
| 1063 |
+
],
|
| 1064 |
+
"page_idx": 5
|
| 1065 |
+
},
|
| 1066 |
+
{
|
| 1067 |
+
"type": "text",
|
| 1068 |
+
"text": "$L_{1}$ loss is utilized for multimodal planner.",
|
| 1069 |
+
"bbox": [
|
| 1070 |
+
513,
|
| 1071 |
+
236,
|
| 1072 |
+
792,
|
| 1073 |
+
251
|
| 1074 |
+
],
|
| 1075 |
+
"page_idx": 5
|
| 1076 |
+
},
|
| 1077 |
+
{
|
| 1078 |
+
"type": "equation",
|
| 1079 |
+
"text": "\n$$\nL _ {\\text {p l a n n e r}} = \\left| \\mathbf {v} _ {\\mathbf {t}} - \\hat {\\mathbf {v}} _ {\\mathbf {t}} \\right| \\tag {17}\n$$\n",
|
| 1080 |
+
"text_format": "latex",
|
| 1081 |
+
"bbox": [
|
| 1082 |
+
635,
|
| 1083 |
+
263,
|
| 1084 |
+
905,
|
| 1085 |
+
280
|
| 1086 |
+
],
|
| 1087 |
+
"page_idx": 5
|
| 1088 |
+
},
|
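
As a sketch of how the goal-constructor losses (Eqs. 14-16) and the planner loss (Eq. 17) fit together in PyTorch, with the stated weights $w_4 = 1.0$ and $w_5 = 0.005$ and otherwise assumed tensor shapes:

```python
import torch
import torch.nn.functional as F

def goal_and_planner_losses(dis_logits, delta_dis, dac_logits, delta_dac, v_pred, v_target,
                            w4=1.0, w5=0.005):
    """Combine the goal-point losses (Eqs. 14-16) with the planner L1 loss (Eq. 17).

    dis_logits : (B, N) raw scores over the goal vocabulary
    delta_dis  : (B, N) soft target distribution from Eq. (3)
    dac_logits : (B, N) raw drivable-area scores; delta_dac is the binary (float) target
    """
    # Eq. (15): cross-entropy between the soft distance target and the predicted distribution
    l_dis = -(delta_dis * F.log_softmax(dis_logits, dim=-1)).sum(-1).mean()
    # Eq. (16): binary cross-entropy for drivable-area compliance
    l_dac = F.binary_cross_entropy_with_logits(dac_logits, delta_dac)
    l_goal = w4 * l_dis + w5 * l_dac                  # Eq. (14)
    l_planner = F.l1_loss(v_pred, v_target)           # Eq. (17)
    return l_goal + l_planner
```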
| 1089 |
+
{
|
| 1090 |
+
"type": "text",
|
| 1091 |
+
"text": "4. Experiments",
|
| 1092 |
+
"text_level": 1,
|
| 1093 |
+
"bbox": [
|
| 1094 |
+
513,
|
| 1095 |
+
292,
|
| 1096 |
+
645,
|
| 1097 |
+
310
|
| 1098 |
+
],
|
| 1099 |
+
"page_idx": 5
|
| 1100 |
+
},
|
| 1101 |
+
{
|
| 1102 |
+
"type": "text",
|
| 1103 |
+
"text": "4.1. Dataset",
|
| 1104 |
+
"text_level": 1,
|
| 1105 |
+
"bbox": [
|
| 1106 |
+
513,
|
| 1107 |
+
319,
|
| 1108 |
+
609,
|
| 1109 |
+
333
|
| 1110 |
+
],
|
| 1111 |
+
"page_idx": 5
|
| 1112 |
+
},
|
| 1113 |
+
{
|
| 1114 |
+
"type": "text",
|
| 1115 |
+
"text": "Our experiment is validated on the Openscene[6] dataset. Openscene includes 120 hours of autonomous driving data. Its end-to-end environment Navsim[7] uses 1192 and 136 scenarios for trainval and testing, a total of over 10w samples at $2\\mathrm{Hz}$ . Each sample contains camera images from 8 perspectives, fused Lidar data from 5 sensors, ego status, and annotations for the map and objects.",
|
| 1116 |
+
"bbox": [
|
| 1117 |
+
511,
|
| 1118 |
+
340,
|
| 1119 |
+
906,
|
| 1120 |
+
446
|
| 1121 |
+
],
|
| 1122 |
+
"page_idx": 5
|
| 1123 |
+
},
|
| 1124 |
+
{
|
| 1125 |
+
"type": "text",
|
| 1126 |
+
"text": "4.2. Metrics",
|
| 1127 |
+
"text_level": 1,
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
513,
|
| 1130 |
+
455,
|
| 1131 |
+
609,
|
| 1132 |
+
470
|
| 1133 |
+
],
|
| 1134 |
+
"page_idx": 5
|
| 1135 |
+
},
|
| 1136 |
+
{
|
| 1137 |
+
"type": "text",
|
| 1138 |
+
"text": "In the Navsim environment, the generated $2\\mathrm{Hz}$ , 4-second trajectories are interpolated via an LQR controller to yield $10\\mathrm{Hz}$ , 4-second trajectories. These trajectories are scored using closed-loop metrics, including No at-fault Collisions $S_{NC}$ , Drivable Area Compliance $S_{DAC}$ , Time to Collision $S_{TTC}$ with bounds, Ego Progress $S_{EP}$ , Comfort $S_{CF}$ , and Driving Direction Compliance $S_{DDC}$ . The final score is derived by aggregating these metrics. Due to practical constraints, $S_{DDC}$ is omitted from the calculation<sup>1</sup>.",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
511,
|
| 1141 |
+
478,
|
| 1142 |
+
906,
|
| 1143 |
+
614
|
| 1144 |
+
],
|
| 1145 |
+
"page_idx": 5
|
| 1146 |
+
},
|
| 1147 |
+
{
|
| 1148 |
+
"type": "equation",
|
| 1149 |
+
"text": "\n$$\n\\begin{array}{l} S _ {P D M} = S _ {N C} \\times S _ {D A C} \\times s _ {T T C} \\times \\\\ \\left(\\frac {5 \\times S _ {E P} + 5 \\times S _ {C F} + 2 \\times S _ {D D C}}{1 2}\\right) \\tag {18} \\\\ \\end{array}\n$$\n",
|
| 1150 |
+
"text_format": "latex",
|
| 1151 |
+
"bbox": [
|
| 1152 |
+
529,
|
| 1153 |
+
625,
|
| 1154 |
+
905,
|
| 1155 |
+
679
|
| 1156 |
+
],
|
| 1157 |
+
"page_idx": 5
|
| 1158 |
+
},
|
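
For reference, Eq. (18) aggregates the sub-scores of one sample as in the following sketch; as noted above, the $S_{DDC}$ term is omitted in the paper's own evaluation:

```python
def pdm_score(s_nc, s_dac, s_ttc, s_ep, s_cf, s_ddc):
    """Aggregate the closed-loop sub-scores into the PDM score of Eq. (18).

    All inputs are scalars in [0, 1]; s_ddc is dropped in the paper's experiments.
    """
    weighted = (5 * s_ep + 5 * s_cf + 2 * s_ddc) / 12
    return s_nc * s_dac * s_ttc * weighted
```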
| 1159 |
+
{
|
| 1160 |
+
"type": "text",
|
| 1161 |
+
"text": "4.3. Baselines",
|
| 1162 |
+
"text_level": 1,
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
513,
|
| 1165 |
+
688,
|
| 1166 |
+
620,
|
| 1167 |
+
703
|
| 1168 |
+
],
|
| 1169 |
+
"page_idx": 5
|
| 1170 |
+
},
|
| 1171 |
+
{
|
| 1172 |
+
"type": "text",
|
| 1173 |
+
"text": "In Navsim, we compare against the following baselines: Constant Velocity Assumes constant speed from the current timestamp for forward movement. Ego Status MLP Takes only the current state as input and uses an MLP to generate the trajectory. PDM-Closed Using ground-truth perception as input, several trajectories are generated through a rule-based IDM method. The PDM scorer then selects the optimal trajectory from these as the output. Transfuser Uses both image and LiDAR inputs, fusing them via a transformer into a BEV feature, which is then used for trajectory generation. LTF A streamlined version",
|
| 1174 |
+
"bbox": [
|
| 1175 |
+
511,
|
| 1176 |
+
710,
|
| 1177 |
+
906,
|
| 1178 |
+
877
|
| 1179 |
+
],
|
| 1180 |
+
"page_idx": 5
|
| 1181 |
+
},
|
| 1182 |
+
{
|
| 1183 |
+
"type": "page_footnote",
|
| 1184 |
+
"text": "1 https://github.com/autonomousvision/navsim/issues/14",
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
529,
|
| 1187 |
+
886,
|
| 1188 |
+
830,
|
| 1189 |
+
900
|
| 1190 |
+
],
|
| 1191 |
+
"page_idx": 5
|
| 1192 |
+
},
|
| 1193 |
+
{
|
| 1194 |
+
"type": "table",
|
| 1195 |
+
"img_path": "images/39f9f9959fc7303c010fe2333222797c80ac462543e256b62541c156a3b5e0d8.jpg",
|
| 1196 |
+
"table_caption": [],
|
| 1197 |
+
"table_footnote": [],
|
| 1198 |
+
"table_body": "<table><tr><td>Method</td><td>Ego Stat.</td><td>Image</td><td>LiDAR</td><td>Video</td><td>\\(S_{NC} \\uparrow\\)</td><td>\\(S_{DAC} \\uparrow\\)</td><td>\\(S_{TTC} \\uparrow\\)</td><td>\\(S_{CF} \\uparrow\\)</td><td>\\(S_{EP} \\uparrow\\)</td><td>\\(S_{PDM} \\uparrow\\)</td></tr><tr><td>Constant Velocity</td><td>✓</td><td></td><td></td><td></td><td>68.0</td><td>57.8</td><td>50.0</td><td>100</td><td>19.4</td><td>20.6</td></tr><tr><td>Ego Status MLP</td><td>✓</td><td></td><td></td><td></td><td>93.0</td><td>77.3</td><td>83.6</td><td>100</td><td>62.8</td><td>65.6</td></tr><tr><td>LTF [3]</td><td>✓</td><td>✓</td><td></td><td></td><td>97.4</td><td>92.8</td><td>92.4</td><td>100</td><td>79.0</td><td>83.8</td></tr><tr><td>TransFuser [3]</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>97.7</td><td>92.8</td><td>92.8</td><td>100</td><td>79.2</td><td>84.0</td></tr><tr><td>UniAD [15]</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>97.8</td><td>91.9</td><td>92.9</td><td>100</td><td>78.8</td><td>83.4</td></tr><tr><td>PARA-Drive [30]</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>97.9</td><td>92.4</td><td>93.0</td><td>99.8</td><td>79.3</td><td>84.0</td></tr><tr><td>GoalFlow (Ours)</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>85.0</td><td>90.3</td></tr><tr><td>GoalFlow†</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>99.8</td><td>97.9</td><td>98.6</td><td>100</td><td>85.4</td><td>92.1</td></tr><tr><td>Human‡</td><td></td><td></td><td></td><td></td><td>100</td><td>100</td><td>100</td><td>99.9</td><td>87.5</td><td>94.8</td></tr></table>",
|
| 1199 |
+
"bbox": [
|
| 1200 |
+
94,
|
| 1201 |
+
88,
|
| 1202 |
+
903,
|
| 1203 |
+
265
|
| 1204 |
+
],
|
| 1205 |
+
"page_idx": 6
|
| 1206 |
+
},
|
| 1207 |
+
{
|
| 1208 |
+
"type": "table",
|
| 1209 |
+
"img_path": "images/2c0a5d8980a063fbd7b3f0dc6910fe4bd04881b5ca539e36fdde11275993b14d.jpg",
|
| 1210 |
+
"table_caption": [
|
| 1211 |
+
"Table 1. Comparisons with SOTA methods in PDM score metrics on Navsim [7] Test. Our method outperforms other approaches across all evaluation metrics. † uses the endpoint of the ground-truth trajectory as the goal point. ‡ uses the ground-truth trajectories to evaluate."
|
| 1212 |
+
],
|
| 1213 |
+
"table_footnote": [],
|
| 1214 |
+
"table_body": "<table><tr><td>Model</td><td>Description</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SCF</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>-</td><td>Transfuser[3]</td><td>97.7</td><td>92.8</td><td>92.8</td><td>100</td><td>79.0</td><td>84.0</td></tr><tr><td>M0</td><td>Base Model</td><td>97.9</td><td>94.2</td><td>94.2</td><td>100</td><td>79.9</td><td>85.6</td></tr><tr><td>M1</td><td>M0 + Distance Score Map</td><td>98.5</td><td>96.4</td><td>94.9</td><td>100</td><td>83.0</td><td>88.5</td></tr><tr><td>M2</td><td>M1 + DAC Score Map</td><td>98.6</td><td>97.5</td><td>94.7</td><td>100</td><td>83.8</td><td>89.4</td></tr><tr><td>M3</td><td>M2 + Trajectory Scorer</td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>85.0</td><td>90.3</td></tr></table>",
|
| 1215 |
+
"bbox": [
|
| 1216 |
+
163,
|
| 1217 |
+
329,
|
| 1218 |
+
834,
|
| 1219 |
+
445
|
| 1220 |
+
],
|
| 1221 |
+
"page_idx": 6
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "table",
|
| 1225 |
+
"img_path": "images/8247b4ef403682c208d4c4a9d3f7c244b842fb1e9bcee4a939189211718bcd54.jpg",
|
| 1226 |
+
"table_caption": [
|
| 1227 |
+
"Table 2. Ablation study on the influence of each component. $\\mathcal{M}_0$ is the base model, which uses rectified flow without goal point guidance and averages all generated trajectories to produce the final output. $\\mathcal{M}_1$ and $\\mathcal{M}_2$ introduce the distance score map and DAC score map, respectively, to guide the rectified flow. $\\mathcal{M}_3$ builds upon $\\mathcal{M}_1$ by incorporating trajectory scorer."
|
| 1228 |
+
],
|
| 1229 |
+
"table_footnote": [],
|
| 1230 |
+
"table_body": "<table><tr><td>T</td><td>Inf.Time</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SCF↑</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>20</td><td>177.8ms</td><td>98.3</td><td>98.1</td><td>94.3</td><td>100</td><td>84.7</td><td>89.9</td></tr><tr><td>10</td><td>92.4ms</td><td>98.3</td><td>98.2</td><td>94.4</td><td>100</td><td>84.9</td><td>90.1</td></tr><tr><td>5</td><td>49.0ms</td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>84.4</td><td>90.3</td></tr><tr><td>1</td><td>10.4ms</td><td>98.4</td><td>97.8</td><td>94.1</td><td>100</td><td>84.5</td><td>88.9</td></tr></table>",
|
| 1231 |
+
"bbox": [
|
| 1232 |
+
94,
|
| 1233 |
+
520,
|
| 1234 |
+
480,
|
| 1235 |
+
585
|
| 1236 |
+
],
|
| 1237 |
+
"page_idx": 6
|
| 1238 |
+
},
|
| 1239 |
+
{
|
| 1240 |
+
"type": "table",
|
| 1241 |
+
"img_path": "images/b93628b46bcf1ffa1711be983e681aea6ae837576f77e2aaec9f359a17e42804.jpg",
|
| 1242 |
+
"table_caption": [
|
| 1243 |
+
"Table 3. Impact of different timesteps in inference. $T$ denotes the number of denoising steps during inference. The results indicate that the model's performance is robust to variations of denoising steps."
|
| 1244 |
+
],
|
| 1245 |
+
"table_footnote": [],
|
| 1246 |
+
"table_body": "<table><tr><td>σ</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SCF↑</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>0.05</td><td>98.3</td><td>98.2</td><td>94.4</td><td>100</td><td>85.0</td><td>90.1</td></tr><tr><td>0.1</td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>85.0</td><td>90.3</td></tr><tr><td>0.2</td><td>87.4</td><td>76.0</td><td>69.4</td><td>32.0</td><td>56.2</td><td>49.0</td></tr><tr><td>0.3</td><td>68.3</td><td>48.1</td><td>44.8</td><td>2.23</td><td>23.6</td><td>18.8</td></tr></table>",
|
| 1247 |
+
"bbox": [
|
| 1248 |
+
94,
|
| 1249 |
+
667,
|
| 1250 |
+
480,
|
| 1251 |
+
744
|
| 1252 |
+
],
|
| 1253 |
+
"page_idx": 6
|
| 1254 |
+
},
|
| 1255 |
+
{
|
| 1256 |
+
"type": "text",
|
| 1257 |
+
"text": "Table 4. Impact of different values of $\\sigma$ on the initial noise distribution. $\\sigma$ is the standard deviation of $x_0$ . The results show that performance drops significantly when $\\sigma$ exceeds 0.1, but remains stable for values below 0.1.",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
89,
|
| 1260 |
+
753,
|
| 1261 |
+
482,
|
| 1262 |
+
811
|
| 1263 |
+
],
|
| 1264 |
+
"page_idx": 6
|
| 1265 |
+
},
|
| 1266 |
+
{
|
| 1267 |
+
"type": "text",
|
| 1268 |
+
"text": "of Transfuser, where the LiDAR backbone is replaced with a learnable embedding. It achieves results in NavSim similar to Transfuser. UniAD Employs multiple transformer architectures to process information differently, using queries",
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
89,
|
| 1271 |
+
839,
|
| 1272 |
+
483,
|
| 1273 |
+
902
|
| 1274 |
+
],
|
| 1275 |
+
"page_idx": 6
|
| 1276 |
+
},
|
| 1277 |
+
{
|
| 1278 |
+
"type": "text",
|
| 1279 |
+
"text": "to transfer information specifically for planning. PARA-Drive Diffs from UniAD by performing mapping, planning, motion prediction, and occupancy prediction tasks in parallel based on the BEV feature.",
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
511,
|
| 1282 |
+
523,
|
| 1283 |
+
906,
|
| 1284 |
+
585
|
| 1285 |
+
],
|
| 1286 |
+
"page_idx": 6
|
| 1287 |
+
},
|
| 1288 |
+
{
|
| 1289 |
+
"type": "text",
|
| 1290 |
+
"text": "4.4. Model Setups and Parameters",
|
| 1291 |
+
"text_level": 1,
|
| 1292 |
+
"bbox": [
|
| 1293 |
+
511,
|
| 1294 |
+
590,
|
| 1295 |
+
782,
|
| 1296 |
+
607
|
| 1297 |
+
],
|
| 1298 |
+
"page_idx": 6
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "text",
|
| 1302 |
+
"text": "The training of rectified flow[24] follows classifier-free guidance[13], where features within the conditioning set are randomly masked to bolster model robustness. The last point of the ground-truth trajectory is used to guide flow matching in trajectory generation during training. In testing, the goal point for trajectory generation is set by selecting the highest-scoring point from the goal point vocabulary. The sampling process employs a smoothing method in [9] that re-scales the timesteps nonlinearly, instead of using uniform intervals. We generate 128/256 trajectories, from which the trajectory scorer identifies the optimal one. All training was conducted on 4 nodes, each equipped with 8 RTX 4090 or RTX 3090 GPUs.",
|
| 1303 |
+
"bbox": [
|
| 1304 |
+
511,
|
| 1305 |
+
613,
|
| 1306 |
+
906,
|
| 1307 |
+
809
|
| 1308 |
+
],
|
| 1309 |
+
"page_idx": 6
|
| 1310 |
+
},
|
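
The classifier-free-guidance-style masking mentioned here can be sketched as randomly zeroing the conditioning features of some samples during training; the drop probability and the zeroing scheme are assumptions:

```python
import torch

def drop_conditions(cond: torch.Tensor, p_drop: float = 0.1) -> torch.Tensor:
    """Randomly mask conditioning features (e.g. goal or BEV tokens) for a fraction of the batch,
    in the spirit of classifier-free guidance [13]."""
    keep = (torch.rand(cond.shape[0], device=cond.device) > p_drop).float()
    return cond * keep.view(-1, *([1] * (cond.dim() - 1)))   # zero out dropped samples' condition
```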
| 1311 |
+
{
|
| 1312 |
+
"type": "text",
|
| 1313 |
+
"text": "4.5. Results and Analysis",
|
| 1314 |
+
"text_level": 1,
|
| 1315 |
+
"bbox": [
|
| 1316 |
+
511,
|
| 1317 |
+
816,
|
| 1318 |
+
709,
|
| 1319 |
+
834
|
| 1320 |
+
],
|
| 1321 |
+
"page_idx": 6
|
| 1322 |
+
},
|
| 1323 |
+
{
|
| 1324 |
+
"type": "text",
|
| 1325 |
+
"text": "Comparison with SOTA Methods. In Table 1, we compared our method with several state-of-the-art algorithms in end-to-end autonomous driving, highlighting the highest scores in bold. Testing in the Navsim environment revealed",
|
| 1326 |
+
"bbox": [
|
| 1327 |
+
511,
|
| 1328 |
+
839,
|
| 1329 |
+
906,
|
| 1330 |
+
902
|
| 1331 |
+
],
|
| 1332 |
+
"page_idx": 6
|
| 1333 |
+
},
|
| 1334 |
+
{
|
| 1335 |
+
"type": "text",
|
| 1336 |
+
"text": "that GoalFlow consistently outperformed other methods in overall scores. Notably, our method surpasses the second-best approach by 5.5 points in the DAC score and by 5.7 points in the EP score, indicating that GoalFlow provides stronger constraints on keeping the vehicle within drivable areas, thus enhancing the safety of autonomous driving systems. Additionally, GoalFlow enables faster driving speeds while ensuring safety. Further experiments, where we replaced the predicted goal point with the endpoint of the ground truth trajectory, resulted in a score of 92.1, which is very close to the human trajectory score of 94.8. This demonstrates the strong guiding capability of the goal point in autonomous driving.",
|
| 1337 |
+
"bbox": [
|
| 1338 |
+
89,
|
| 1339 |
+
90,
|
| 1340 |
+
480,
|
| 1341 |
+
287
|
| 1342 |
+
],
|
| 1343 |
+
"page_idx": 7
|
| 1344 |
+
},
|
| 1345 |
+
{
|
| 1346 |
+
"type": "text",
|
| 1347 |
+
"text": "Ablation Study on The Influence of Each Component. We conduct an ablation study of the influence of each component in Table 2. The $\\mathcal{M}_0$ represents a model that generates trajectories using only the rectified flow. In our experiment results, the base $\\mathcal{M}_0$ consistently outperforms baseline methods on Navsim, particularly excelling in DAC and TTC. This indicates that the base model, which is based on flow matching, has effectively learned interactions with map information and surrounding agents, demonstrating that the flow model alone possesses strong modeling capabilities.",
|
| 1348 |
+
"bbox": [
|
| 1349 |
+
89,
|
| 1350 |
+
287,
|
| 1351 |
+
480,
|
| 1352 |
+
455
|
| 1353 |
+
],
|
| 1354 |
+
"page_idx": 7
|
| 1355 |
+
},
|
| 1356 |
+
{
|
| 1357 |
+
"type": "text",
|
| 1358 |
+
"text": "The $\\mathcal{M}_1$ model builds on $\\mathcal{M}_0$ by modeling the distance score distribution and selecting the point with the highest score to guide the rectified flow. We found that this results in the most significant improvement, demonstrating the effectiveness of decomposing the trajectory planning task. Specifically, we decompose the complex task into two simpler sub-tasks: goal point prediction and trajectory generation guided by the goal point.",
|
| 1359 |
+
"bbox": [
|
| 1360 |
+
89,
|
| 1361 |
+
455,
|
| 1362 |
+
480,
|
| 1363 |
+
578
|
| 1364 |
+
],
|
| 1365 |
+
"page_idx": 7
|
| 1366 |
+
},
|
| 1367 |
+
{
|
| 1368 |
+
"type": "text",
|
| 1369 |
+
"text": "The $\\mathcal{M}_2$ model builds upon $\\mathcal{M}_1$ by incorporating the prediction of DAC score distribution. The main improvement is seen in the DAC score. By introducing multiple evaluators from different perspectives, the model benefits from a more robust assessment, resulting in improved performance.",
|
| 1370 |
+
"bbox": [
|
| 1371 |
+
89,
|
| 1372 |
+
579,
|
| 1373 |
+
480,
|
| 1374 |
+
670
|
| 1375 |
+
],
|
| 1376 |
+
"page_idx": 7
|
| 1377 |
+
},
|
| 1378 |
+
{
|
| 1379 |
+
"type": "text",
|
| 1380 |
+
"text": "By incorporating trajectory scorer, which includes a trajectory selection and goal point checking mechanism, $\\mathcal{M}_3$ further enhances the reliability of GoalFlow.",
|
| 1381 |
+
"bbox": [
|
| 1382 |
+
89,
|
| 1383 |
+
671,
|
| 1384 |
+
480,
|
| 1385 |
+
717
|
| 1386 |
+
],
|
| 1387 |
+
"page_idx": 7
|
| 1388 |
+
},
|
| 1389 |
+
{
|
| 1390 |
+
"type": "text",
|
| 1391 |
+
"text": "Impact of Different Steps in Inference. We conducted experiments with different denoising steps during the inference process, as shown in Table 3. In these experiments, We found as the number of inference steps decreases from 20 to 1, the scores remained stable. Specifically, even with just a single inference step, excellent performance was achieved. This highlights the advantage of flow matching over diffusion-based frameworks: flow matching takes a direct, straight path, requiring fewer steps to transfer from noisy distribution to target distribution during inference. Additionally, as the inference steps are reduced from 20 to 1, the denoising time in inference of one sample decreases",
|
| 1392 |
+
"bbox": [
|
| 1393 |
+
89,
|
| 1394 |
+
719,
|
| 1395 |
+
480,
|
| 1396 |
+
900
|
| 1397 |
+
],
|
| 1398 |
+
"page_idx": 7
|
| 1399 |
+
},
|
| 1400 |
+
{
|
| 1401 |
+
"type": "text",
|
| 1402 |
+
"text": "to $6\\%$ of the original. This efficient inference process is especially critical for autonomous driving systems, where real-time performance is essential.",
|
| 1403 |
+
"bbox": [
|
| 1404 |
+
511,
|
| 1405 |
+
90,
|
| 1406 |
+
903,
|
| 1407 |
+
136
|
| 1408 |
+
],
|
| 1409 |
+
"page_idx": 7
|
| 1410 |
+
},
|
| 1411 |
+
{
|
| 1412 |
+
"type": "text",
|
| 1413 |
+
"text": "Impact of Different Initial Noise in Training. In the experiments, the initial noise follows a Gaussian distribution $\\mathcal{N}(0,\\sigma^2 I)$ . We explored the impact of the noise variance on the generated trajectories in Table 4. The results reveal that noise settings have a significant impact on the scores. When the noise is set too high, the generated trajectories become excessively erratic; notably, with a $\\sigma$ of 0.3, the Comfort score drops to only 2.23, indicating that the trajectory lacks coherent shape. Conversely, when the noise variance is too low, flow matching tends to degenerate into a regression model, reducing the trajectory diversity available for scoring. This lack of variety lowers overall scores.",
|
| 1414 |
+
"bbox": [
|
| 1415 |
+
511,
|
| 1416 |
+
137,
|
| 1417 |
+
903,
|
| 1418 |
+
319
|
| 1419 |
+
],
|
| 1420 |
+
"page_idx": 7
|
| 1421 |
+
},
|
| 1422 |
+
{
|
| 1423 |
+
"type": "table",
|
| 1424 |
+
"img_path": "images/baaca64b45458e9df0efe1376a3c1be6bb889a73b76af784d2ddbfdc1c1455de.jpg",
|
| 1425 |
+
"table_caption": [],
|
| 1426 |
+
"table_footnote": [],
|
| 1427 |
+
"table_body": "<table><tr><td>Dim</td><td>Backbone</td><td>\\(S_{NC} \\uparrow\\)</td><td>\\(S_{DAC} \\uparrow\\)</td><td>\\(S_{TTC} \\uparrow\\)</td><td>\\(S_{EP} \\uparrow\\)</td><td>\\(S_{PDM} \\uparrow\\)</td></tr><tr><td>256/256</td><td>V2-99/V2-99</td><td>97.1</td><td>96.2</td><td>91.8</td><td>81.8</td><td>86.5</td></tr><tr><td>512/512</td><td>V2-99/V2-99</td><td>97.3</td><td>97.6</td><td>92.5</td><td>83.0</td><td>88.1</td></tr><tr><td>1024/1024</td><td>V2-99/V2-99</td><td>98.6</td><td>97.5</td><td>94.7</td><td>85.0</td><td>89.4</td></tr><tr><td>256/256</td><td>resnet34/resnet34</td><td>98.3</td><td>93.8</td><td>94.3</td><td>79.8</td><td>85.7</td></tr><tr><td>1024/256</td><td>V2-99/resnet34</td><td>98.2</td><td>96.4</td><td>93.8</td><td>82.6</td><td>87.9</td></tr></table>",
|
| 1428 |
+
"bbox": [
|
| 1429 |
+
517,
|
| 1430 |
+
334,
|
| 1431 |
+
901,
|
| 1432 |
+
411
|
| 1433 |
+
],
|
| 1434 |
+
"page_idx": 7
|
| 1435 |
+
},
|
| 1436 |
+
{
|
| 1437 |
+
"type": "text",
|
| 1438 |
+
"text": "Table 5. Impact of Scaling Model. We examine the impact of scaling the Transformer's hidden dimension and changing the image backbone within the Goal Point Construction Module (left) and Trajectory Planning Module (right). Increasing the hidden dimension and using a stronger image backbone both lead to improved end-to-end performance. For fair comparison, we align post-processing with baseline $\\mathcal{M}_2$ .",
|
| 1439 |
+
"bbox": [
|
| 1440 |
+
511,
|
| 1441 |
+
421,
|
| 1442 |
+
903,
|
| 1443 |
+
518
|
| 1444 |
+
],
|
| 1445 |
+
"page_idx": 7
|
| 1446 |
+
},
|
| 1447 |
+
{
|
| 1448 |
+
"type": "text",
|
| 1449 |
+
"text": "Impact of Scaling Model. Inspired by [36], we present experiments on scaling the model based on the $\\mathcal{M}_2$ in Table 5. Under the same V2-99 backbone, increasing the hidden dimension consistently improves performance, with the best results observed at a dimension of 1024. Additionally, we conducted experiments to compare different configurations of the Goal Point Construction Module. We found that scaling this module significantly improves overall performance, highlighting the critical role of goal point guidance in trajectory planning.",
|
| 1450 |
+
"bbox": [
|
| 1451 |
+
511,
|
| 1452 |
+
537,
|
| 1453 |
+
903,
|
| 1454 |
+
690
|
| 1455 |
+
],
|
| 1456 |
+
"page_idx": 7
|
| 1457 |
+
},
|
| 1458 |
+
{
|
| 1459 |
+
"type": "text",
|
| 1460 |
+
"text": "5. Conclusion",
|
| 1461 |
+
"text_level": 1,
|
| 1462 |
+
"bbox": [
|
| 1463 |
+
511,
|
| 1464 |
+
707,
|
| 1465 |
+
633,
|
| 1466 |
+
723
|
| 1467 |
+
],
|
| 1468 |
+
"page_idx": 7
|
| 1469 |
+
},
|
| 1470 |
+
{
|
| 1471 |
+
"type": "text",
|
| 1472 |
+
"text": "In this paper, we focus on generating accurate and efficient multimodal trajectories. We reviewed recent works on multimodal trajectory generation in autonomous driving and proposed a framework that generates precise goal points and effectively constrains the generative model with them, ultimately producing high-quality multimodal trajectories. We conducted experiments on the Navsim environment, demonstrating that GoalFlow achieves state-of-the-art performance. In the future, we aim to further investigate the impact of different guidance information on multimodal trajectory generation.",
|
| 1473 |
+
"bbox": [
|
| 1474 |
+
511,
|
| 1475 |
+
733,
|
| 1476 |
+
903,
|
| 1477 |
+
898
|
| 1478 |
+
],
|
| 1479 |
+
"page_idx": 7
|
| 1480 |
+
},
|
| 1481 |
+
{
|
| 1482 |
+
"type": "text",
|
| 1483 |
+
"text": "References",
|
| 1484 |
+
"text_level": 1,
|
| 1485 |
+
"bbox": [
|
| 1486 |
+
91,
|
| 1487 |
+
89,
|
| 1488 |
+
187,
|
| 1489 |
+
104
|
| 1490 |
+
],
|
| 1491 |
+
"page_idx": 8
|
| 1492 |
+
},
|
| 1493 |
+
{
|
| 1494 |
+
"type": "list",
|
| 1495 |
+
"sub_type": "ref_text",
|
| 1496 |
+
"list_items": [
|
| 1497 |
+
"[1] Shaoyu Chen, Bo Jiang, Hao Gao, Benchcheng Liao, Qing Xu, Qian Zhang, Chang Huang, Wenyu Liu, and Xinggang Wang. Vadv2: End-to-end vectorized autonomous driving via probabilistic planning. arXiv preprint arXiv:2402.13243, 2024. 2, 4",
|
| 1498 |
+
"[2] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS), 2023. 3",
|
| 1499 |
+
"[3] Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, and Andreas Geiger. Transfuser: Imitation with transformer-based sensor fusion for autonomous driving. Pattern Analysis and Machine Intelligence (PAMI), 2023. 2, 4, 7",
|
| 1500 |
+
"[4] Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), page 1-9. IEEE Press, 2018. 2",
|
| 1501 |
+
"[5] Felipe Codevilla, Eder Santana, Antonio Lopez, and Adrien Gaidon. Exploring the limitations of behavior cloning for autonomous driving. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9328-9337, 2019. 2",
|
| 1502 |
+
"[6] OpenScene Contributors. Openscene: The largest up-to-date 3d occupancy prediction benchmark in autonomous driving. https://github.com/OpenDriveLab/OpenScene, 2023. 6",
|
| 1503 |
+
"[7] Daniel Dauner, Marcel Hallgarten, Tianyu Li, Xinshuo Weng, Zhiyu Huang, Zetong Yang, Hongyang Li, Igor Gilitschenski, Boris Ivanovic, Marco Pavone, Andreas Geiger, and Kashyap Chitta. Navsim: Data-driven non-reactive autonomous vehicle simulation and benchmarking. arXiv, 2406.15349, 2024. 1, 6, 7",
|
| 1504 |
+
"[8] Patrick Dendorfer, Aljosa Osep, and Laura Leal-Taixe. Goalgan: Multimodal trajectory prediction based on goal position estimation. In Asian Conference on Computer Vision, 2020. 2",
|
| 1505 |
+
"[9] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis. In Proceedings of the 41st International Conference on Machine Learning, pages 12606-12633. PMLR, 2024. 7",
|
| 1506 |
+
"[10] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, page 2672-2680, Cambridge, MA, USA, 2014. MIT Press. 2",
|
| 1507 |
+
"[11] Junru Gu, Chen Sun, and Hang Zhao. Densetnt: End-to-end trajectory prediction from dense goal sets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15303-15312, 2021. 4"
|
| 1508 |
+
],
|
| 1509 |
+
"bbox": [
|
| 1510 |
+
93,
|
| 1511 |
+
114,
|
| 1512 |
+
483,
|
| 1513 |
+
900
|
| 1514 |
+
],
|
| 1515 |
+
"page_idx": 8
|
| 1516 |
+
},
|
| 1517 |
+
{
|
| 1518 |
+
"type": "list",
|
| 1519 |
+
"sub_type": "ref_text",
|
| 1520 |
+
"list_items": [
|
| 1521 |
+
"[12] Songen Gu, Wei Yin, Bu Jin, Xiaoyang Guo, Junming Wang, Haodong Li, Qian Zhang, and Xiaoxiao Long. Dome: Taming diffusion model into high-fidelity controllable occupancy world model. arXiv preprint arXiv:2410.10429, 2024. 2",
|
| 1522 |
+
"[13] Jonathan Ho. Classifier-free diffusion guidance. ArXiv, abs/2207.12598, 2022. 3, 7",
|
| 1523 |
+
"[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, pages 6840-6851. Curran Associates, Inc., 2020. 3, 5",
|
| 1524 |
+
"[15] Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, and Hongyang Li. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2, 7",
|
| 1525 |
+
"[16] Zhiyu Huang, Xinshuo Weng, Maximilian Igl, Yuxiao Chen, Yulong Cao, Boris Ivanovic, Marco Pavone, and Chen Lv. Gen-drive: Enhancing diffusion generative driving policies with reward modeling and reinforcement learning finetuning. arXiv preprint arXiv:2410.05582, 2024. 1, 3",
|
| 1526 |
+
"[17] Bo Jiang, Shaoyu Chen, Qing Xu, Bencheng Liao, Jiajie Chen, Helong Zhou, Qian Zhang, Wenyu Liu, Chang Huang, and Xinggang Wang. Vad: Vectorized scene representation for efficient autonomous driving. ICCV, 2023. 1, 2, 3",
|
| 1527 |
+
"[18] Chiyu \"Max\" Jiang, Andre Cornman, Cheolho Park, Benjamin Sapp, Yin Zhou, and Dragomir Anguelov. Motion-diffuser: Controllable multi-agent motion prediction using diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9644-9653, 2023. 2, 3, 4",
|
| 1528 |
+
"[19] Bu Jin, Xinyu Liu, Yupeng Zheng, Pengfei Li, Hao Zhao, Tong Zhang, Yuhang Zheng, Guyue Zhou, and Jingjing Liu. Adapt: Action-aware driving caption transformer. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 7554-7561, 2023. 2",
|
| 1529 |
+
"[20] Bu Jin, Yupeng Zheng, Pengfei Li, Weize Li, Yuhang Zheng, Sujie Hu, Xinyu Liu, Jinwei Zhu, Zhijie Yan, Haiyang Sun, Kun Zhan, Peng Jia, Xiaoxiao Long, Yilun Chen, and Hao Zhao. Tod3cap: Towards 3d dense captioning in outdoor scenes. In Computer Vision - ECCV 2024: 18th European Conference, Milan, Italy, September 29 - October 4, 2024, Proceedings, Part XVIII, page 367-384, Berlin, Heidelberg, 2024. Springer-Verlag. 2",
|
| 1530 |
+
"[21] Diederik P Kingma. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 2",
|
| 1531 |
+
"[22] Zhenxin Li, Kailin Li, Shihao Wang, Shiyi Lan, Zhiding Yu, Yishen Ji, Zhiqi Li, Ziye Zhu, Jan Kautz, Zuxuan Wu, et al. Hydra-mdp: End-to-end multimodal planning with multitarget hydra-distillation. arXiv preprint arXiv:2406.06978, 2024. 2",
|
| 1532 |
+
"[23] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. 3",
|
| 1533 |
+
"[24] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with"
|
| 1534 |
+
],
|
| 1535 |
+
"bbox": [
|
| 1536 |
+
516,
|
| 1537 |
+
92,
|
| 1538 |
+
903,
|
| 1539 |
+
900
|
| 1540 |
+
],
|
| 1541 |
+
"page_idx": 8
|
| 1542 |
+
},
|
| 1543 |
+
{
|
| 1544 |
+
"type": "list",
|
| 1545 |
+
"sub_type": "ref_text",
|
| 1546 |
+
"list_items": [
|
| 1547 |
+
"rectified flow. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. 3, 5, 7",
|
| 1548 |
+
"[25] Robert J. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153-179, 1997. 3",
|
| 1549 |
+
"[26] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv:2010.02502, 2020. 3, 5",
|
| 1550 |
+
"[27] Wenchao Sun, Xuewu Lin, Yining Shi, Chuang Zhang, Haoran Wu, and Sifa Zheng. Sparsedrive: End-to-end autonomous driving via sparse scene representation. arXiv preprint arXiv:2405.19620, 2024. 1, 2, 3, 6",
|
| 1551 |
+
"[28] Junming Wang, Xingyu Zhang, Zebin Xing, Songen Gu, Xiaoyang Guo, Yang Hu, Ziying Song, Qian Zhang, Xiaoxiao Long, and Wei Yin. He-drive: Human-like end-to-end driving with vision language models. arXiv preprint arXiv:2410.05051, 2024. 2",
|
| 1552 |
+
"[29] Junming Wang, Xingyu Zhang, Zebin Xing, Songen Gu, Xiaoyang Guo, Yang Hu, Ziying Song, Qian Zhang, Xiaoxiao Long, and Wei Yin. He-drive: Human-like end-to-end driving with vision language models. arXiv preprint arXiv:2410.05051, 2024.3",
|
| 1553 |
+
"[30] Xinshuo Weng, Boris Ivanovic, Yan Wang, Yue Wang, and Marco Pavone. Para-drive: Parallelized architecture for real-time autonomous driving. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15449-15458, 2024. 2, 7",
|
| 1554 |
+
"[31] Zhou Xian, Nikolaos Gkanatsios, Theophile Gervet, Tsung-Wei Ke, and Katerina Fragkiadaki. Chaineddiffuser: Unifying trajectory diffusion and keypose prediction for robotic manipulation. In Proceedings of The 7th Conference on Robot Learning, pages 2323–2339. PMLR, 2023. 3",
|
| 1555 |
+
"[32] Brian Yang, Huangyuan Su, Nikolaos Gkanatsios, Tsung-Wei Ke, Ayush Jain, Jeff Schneider, and Katerina Fragkiadaki. Diffusion-es: Gradient-free planning with diffusion for autonomous driving and zero-shot instruction following. arXiv preprint arXiv:2402.06559, 2024. 2, 3, 4, 6",
|
| 1556 |
+
"[33] Tengju* Ye, Wei* Jing, Chunyong Hu, Shikun Huang, Lingping Gao, Fangzhen Li, Jingke Wang, Ke Guo, Wencong Xiao, Weibo Mao, Hang Zheng, Kun Li, Junbo Chen, and Kaicheng Yu. Fusionad: Multi-modality fusion for prediction and planning tasks of autonomous driving. 2023. *Equal Contribution.* 2",
|
| 1557 |
+
"[34] Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Ben Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, Congcong Li, and Dragomir Anguelov. Tnt: Target-driven trajectory prediction. In Proceedings of the 2020 Conference on Robot Learning, pages 895-904. PMLR, 2021. 4",
|
| 1558 |
+
"[35] Wenzhao Zheng, Ruiqi Song, Xianda Guo, Chenming Zhang, and Long Chen. Genad: Generative end-to-end autonomous driving. arXiv preprint arXiv: 2402.11502, 2024. 2",
|
| 1559 |
+
"[36] Yupeng Zheng, Zhongpu Xia, Qichao Zhang, Teng Zhang, Ben Lu, Xiaochuang Huo, Chao Han, Yixian Li, Mengjie Yu, Bu Jin, Pengxuan Yang, Yuhang Zheng, Haifeng Yuan, Ke Jiang, Peng Jia, Xianpeng Lang, and Dongbin Zhao. Pre-"
|
| 1560 |
+
],
|
| 1561 |
+
"bbox": [
|
| 1562 |
+
91,
|
| 1563 |
+
90,
|
| 1564 |
+
483,
|
| 1565 |
+
901
|
| 1566 |
+
],
|
| 1567 |
+
"page_idx": 9
|
| 1568 |
+
},
|
| 1569 |
+
{
|
| 1570 |
+
"type": "text",
|
| 1571 |
+
"text": "liminary investigation into data scaling laws for imitation learning-based end-to-end autonomous driving, 2024. 8",
|
| 1572 |
+
"bbox": [
|
| 1573 |
+
545,
|
| 1574 |
+
90,
|
| 1575 |
+
906,
|
| 1576 |
+
119
|
| 1577 |
+
],
|
| 1578 |
+
"page_idx": 9
|
| 1579 |
+
},
|
| 1580 |
+
{
|
| 1581 |
+
"type": "image",
|
| 1582 |
+
"img_path": "images/9a0105c3504d0573ebd7629b140b36bb707ea3d6f670dc672115b17b46d72b0d.jpg",
|
| 1583 |
+
"image_caption": [],
|
| 1584 |
+
"image_footnote": [],
|
| 1585 |
+
"bbox": [
|
| 1586 |
+
125,
|
| 1587 |
+
92,
|
| 1588 |
+
872,
|
| 1589 |
+
489
|
| 1590 |
+
],
|
| 1591 |
+
"page_idx": 10
|
| 1592 |
+
},
|
| 1593 |
+
{
|
| 1594 |
+
"type": "image",
|
| 1595 |
+
"img_path": "images/baffa80301ffaeffabb5ff6efdb8c387d401e2ed8db051bbbfb1f5973f0db854.jpg",
|
| 1596 |
+
"image_caption": [
|
| 1597 |
+
"Figure 5. Visualization of Trajectories. $\\times$ indicates that the trajectory results in a collision or goes beyond the drivable area, while $\\checkmark$ represents a safe trajectory. The orange points are generated by the Goal Constructor, while the blue and yellow points correspond to samples from the vocabulary. The results highlight that GoalFlow generates higher-quality trajectories compared to the other two methods.",
|
| 1598 |
+
"Figure 6. Visualization of the goal point distribution. The $\\hat{\\delta}_i^{dac}$ score indicates whether a point is within the drivable area, while the $\\hat{\\delta}_i^{dis}$ score reflects the distance relationship between the point and the goal. The final score $\\hat{\\delta}_i^{final}$ is a fusion of the $\\hat{\\delta}_i^{dac}$ and $\\hat{\\delta}_i^{dis}$ scores, where points with higher brightness represent higher scores."
|
| 1599 |
+
],
|
| 1600 |
+
"image_footnote": [],
|
| 1601 |
+
"bbox": [
|
| 1602 |
+
125,
|
| 1603 |
+
545,
|
| 1604 |
+
872,
|
| 1605 |
+
917
|
| 1606 |
+
],
|
| 1607 |
+
"page_idx": 10
|
| 1608 |
+
},
|
| 1609 |
+
{
|
| 1610 |
+
"type": "image",
|
| 1611 |
+
"img_path": "images/da957bd98f585d93211130f91364b6e18c95bca0022daea9f23886653f996b03.jpg",
|
| 1612 |
+
"image_caption": [],
|
| 1613 |
+
"image_footnote": [],
|
| 1614 |
+
"bbox": [
|
| 1615 |
+
106,
|
| 1616 |
+
90,
|
| 1617 |
+
302,
|
| 1618 |
+
242
|
| 1619 |
+
],
|
| 1620 |
+
"page_idx": 11
|
| 1621 |
+
},
|
| 1622 |
+
{
|
| 1623 |
+
"type": "image",
|
| 1624 |
+
"img_path": "images/aae4d598b0421d7b34f3ead6d7a38a02a6b70857c8bf72a1ed2ccc144f6f5fe2.jpg",
|
| 1625 |
+
"image_caption": [],
|
| 1626 |
+
"image_footnote": [],
|
| 1627 |
+
"bbox": [
|
| 1628 |
+
303,
|
| 1629 |
+
92,
|
| 1630 |
+
500,
|
| 1631 |
+
242
|
| 1632 |
+
],
|
| 1633 |
+
"page_idx": 11
|
| 1634 |
+
},
|
| 1635 |
+
{
|
| 1636 |
+
"type": "image",
|
| 1637 |
+
"img_path": "images/f0ce49cb6445862223921ff9a29130d1af6f01227fd6d610ce6b6213f573f0e3.jpg",
|
| 1638 |
+
"image_caption": [],
|
| 1639 |
+
"image_footnote": [],
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
501,
|
| 1642 |
+
90,
|
| 1643 |
+
697,
|
| 1644 |
+
242
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 11
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "image",
|
| 1650 |
+
"img_path": "images/2c7f17951b2836480e5a8ee334b20bbbdbab2a50e39a20ec43a26a57e3b72ed4.jpg",
|
| 1651 |
+
"image_caption": [],
|
| 1652 |
+
"image_footnote": [],
|
| 1653 |
+
"bbox": [
|
| 1654 |
+
699,
|
| 1655 |
+
92,
|
| 1656 |
+
895,
|
| 1657 |
+
242
|
| 1658 |
+
],
|
| 1659 |
+
"page_idx": 11
|
| 1660 |
+
},
|
| 1661 |
+
{
|
| 1662 |
+
"type": "image",
|
| 1663 |
+
"img_path": "images/b6196d22f5b6acc14a894c046b53bebf9cea5214671c6fca67f0a4557f10d78f.jpg",
|
| 1664 |
+
"image_caption": [],
|
| 1665 |
+
"image_footnote": [],
|
| 1666 |
+
"bbox": [
|
| 1667 |
+
106,
|
| 1668 |
+
243,
|
| 1669 |
+
302,
|
| 1670 |
+
393
|
| 1671 |
+
],
|
| 1672 |
+
"page_idx": 11
|
| 1673 |
+
},
|
| 1674 |
+
{
|
| 1675 |
+
"type": "image",
|
| 1676 |
+
"img_path": "images/1252cd67b0da75b107f79818a42e7024d22ffce6e968c47208029af704e2e45f.jpg",
|
| 1677 |
+
"image_caption": [],
|
| 1678 |
+
"image_footnote": [],
|
| 1679 |
+
"bbox": [
|
| 1680 |
+
303,
|
| 1681 |
+
243,
|
| 1682 |
+
500,
|
| 1683 |
+
393
|
| 1684 |
+
],
|
| 1685 |
+
"page_idx": 11
|
| 1686 |
+
},
|
| 1687 |
+
{
|
| 1688 |
+
"type": "image",
|
| 1689 |
+
"img_path": "images/e8d003f57bc83cfc7c2b79bfa34665723d32016a61472fbf509ac47cec7feeee.jpg",
|
| 1690 |
+
"image_caption": [],
|
| 1691 |
+
"image_footnote": [],
|
| 1692 |
+
"bbox": [
|
| 1693 |
+
501,
|
| 1694 |
+
243,
|
| 1695 |
+
697,
|
| 1696 |
+
393
|
| 1697 |
+
],
|
| 1698 |
+
"page_idx": 11
|
| 1699 |
+
},
|
| 1700 |
+
{
|
| 1701 |
+
"type": "image",
|
| 1702 |
+
"img_path": "images/0d7e3c8fb1ee0ae1832aefe7547b886e5eed202b5ffaf16883615da754914d82.jpg",
|
| 1703 |
+
"image_caption": [],
|
| 1704 |
+
"image_footnote": [],
|
| 1705 |
+
"bbox": [
|
| 1706 |
+
699,
|
| 1707 |
+
243,
|
| 1708 |
+
895,
|
| 1709 |
+
393
|
| 1710 |
+
],
|
| 1711 |
+
"page_idx": 11
|
| 1712 |
+
},
|
| 1713 |
+
{
|
| 1714 |
+
"type": "image",
|
| 1715 |
+
"img_path": "images/fd1bad27aba26a02975fc413d086f2ddf27faafea42708aaeaae5a4c8df01e03.jpg",
|
| 1716 |
+
"image_caption": [],
|
| 1717 |
+
"image_footnote": [],
|
| 1718 |
+
"bbox": [
|
| 1719 |
+
106,
|
| 1720 |
+
393,
|
| 1721 |
+
302,
|
| 1722 |
+
544
|
| 1723 |
+
],
|
| 1724 |
+
"page_idx": 11
|
| 1725 |
+
},
|
| 1726 |
+
{
|
| 1727 |
+
"type": "image",
|
| 1728 |
+
"img_path": "images/6290014512e2d457b035997536c33ae0b267a80143be52fc7ed16f78c717b770.jpg",
|
| 1729 |
+
"image_caption": [],
|
| 1730 |
+
"image_footnote": [],
|
| 1731 |
+
"bbox": [
|
| 1732 |
+
303,
|
| 1733 |
+
393,
|
| 1734 |
+
500,
|
| 1735 |
+
545
|
| 1736 |
+
],
|
| 1737 |
+
"page_idx": 11
|
| 1738 |
+
},
|
| 1739 |
+
{
|
| 1740 |
+
"type": "image",
|
| 1741 |
+
"img_path": "images/cde4206b19031534e18587d90a41c9e4c9817778fe1c814976bae4db6ee17215.jpg",
|
| 1742 |
+
"image_caption": [],
|
| 1743 |
+
"image_footnote": [],
|
| 1744 |
+
"bbox": [
|
| 1745 |
+
501,
|
| 1746 |
+
393,
|
| 1747 |
+
697,
|
| 1748 |
+
545
|
| 1749 |
+
],
|
| 1750 |
+
"page_idx": 11
|
| 1751 |
+
},
|
| 1752 |
+
{
|
| 1753 |
+
"type": "image",
|
| 1754 |
+
"img_path": "images/5152683d8841b96d3b4616c6e5c0e1617102e2d5235944301a8d3713d44e99c6.jpg",
|
| 1755 |
+
"image_caption": [],
|
| 1756 |
+
"image_footnote": [],
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
699,
|
| 1759 |
+
393,
|
| 1760 |
+
895,
|
| 1761 |
+
545
|
| 1762 |
+
],
|
| 1763 |
+
"page_idx": 11
|
| 1764 |
+
},
|
| 1765 |
+
{
|
| 1766 |
+
"type": "image",
|
| 1767 |
+
"img_path": "images/2aff53864f6a88ba674fc4dc63ab43b86f14ceeffda5e84dcdd3301382e0e8f4.jpg",
|
| 1768 |
+
"image_caption": [],
|
| 1769 |
+
"image_footnote": [],
|
| 1770 |
+
"bbox": [
|
| 1771 |
+
106,
|
| 1772 |
+
546,
|
| 1773 |
+
302,
|
| 1774 |
+
698
|
| 1775 |
+
],
|
| 1776 |
+
"page_idx": 11
|
| 1777 |
+
},
|
| 1778 |
+
{
|
| 1779 |
+
"type": "image",
|
| 1780 |
+
"img_path": "images/8bb4f0a5590bc29e5b32f8192b0557f1340b1e0d6c24178e8d59a47198cdc711.jpg",
|
| 1781 |
+
"image_caption": [],
|
| 1782 |
+
"image_footnote": [],
|
| 1783 |
+
"bbox": [
|
| 1784 |
+
303,
|
| 1785 |
+
546,
|
| 1786 |
+
500,
|
| 1787 |
+
698
|
| 1788 |
+
],
|
| 1789 |
+
"page_idx": 11
|
| 1790 |
+
},
|
| 1791 |
+
{
|
| 1792 |
+
"type": "image",
|
| 1793 |
+
"img_path": "images/59c0f59c5ee4bde8251e1848b774d1c95277e02314a096dfcda87ab72805e325.jpg",
|
| 1794 |
+
"image_caption": [],
|
| 1795 |
+
"image_footnote": [],
|
| 1796 |
+
"bbox": [
|
| 1797 |
+
501,
|
| 1798 |
+
546,
|
| 1799 |
+
697,
|
| 1800 |
+
698
|
| 1801 |
+
],
|
| 1802 |
+
"page_idx": 11
|
| 1803 |
+
},
|
| 1804 |
+
{
|
| 1805 |
+
"type": "image",
|
| 1806 |
+
"img_path": "images/b07e0c19a12ceca0c93889a6fc13820371a5bf81a50e2b854bb3304da3bf7ef5.jpg",
|
| 1807 |
+
"image_caption": [],
|
| 1808 |
+
"image_footnote": [],
|
| 1809 |
+
"bbox": [
|
| 1810 |
+
699,
|
| 1811 |
+
546,
|
| 1812 |
+
895,
|
| 1813 |
+
698
|
| 1814 |
+
],
|
| 1815 |
+
"page_idx": 11
|
| 1816 |
+
},
|
| 1817 |
+
{
|
| 1818 |
+
"type": "image",
|
| 1819 |
+
"img_path": "images/6eab241306133d7881f8c776b04b78026628c36a687ae91887f223b9c2b85bdf.jpg",
|
| 1820 |
+
"image_caption": [],
|
| 1821 |
+
"image_footnote": [],
|
| 1822 |
+
"bbox": [
|
| 1823 |
+
106,
|
| 1824 |
+
699,
|
| 1825 |
+
302,
|
| 1826 |
+
849
|
| 1827 |
+
],
|
| 1828 |
+
"page_idx": 11
|
| 1829 |
+
},
|
| 1830 |
+
{
|
| 1831 |
+
"type": "image",
|
| 1832 |
+
"img_path": "images/3a5fa0ab63b402bcce776313aece7c8cb9b33a388398e00f1923d90c2d158a12.jpg",
|
| 1833 |
+
"image_caption": [],
|
| 1834 |
+
"image_footnote": [],
|
| 1835 |
+
"bbox": [
|
| 1836 |
+
303,
|
| 1837 |
+
699,
|
| 1838 |
+
500,
|
| 1839 |
+
849
|
| 1840 |
+
],
|
| 1841 |
+
"page_idx": 11
|
| 1842 |
+
},
|
| 1843 |
+
{
|
| 1844 |
+
"type": "image",
|
| 1845 |
+
"img_path": "images/5e75d98d906bf04c5241489cb9660a307657c73ab4246fecf2e46352ad728729.jpg",
|
| 1846 |
+
"image_caption": [],
|
| 1847 |
+
"image_footnote": [],
|
| 1848 |
+
"bbox": [
|
| 1849 |
+
501,
|
| 1850 |
+
699,
|
| 1851 |
+
697,
|
| 1852 |
+
849
|
| 1853 |
+
],
|
| 1854 |
+
"page_idx": 11
|
| 1855 |
+
},
|
| 1856 |
+
{
|
| 1857 |
+
"type": "image",
|
| 1858 |
+
"img_path": "images/78b645ea3d4ef212ba70731138c85db90e2eb19363444f3ebca80e5d945e981a.jpg",
|
| 1859 |
+
"image_caption": [],
|
| 1860 |
+
"image_footnote": [],
|
| 1861 |
+
"bbox": [
|
| 1862 |
+
699,
|
| 1863 |
+
699,
|
| 1864 |
+
895,
|
| 1865 |
+
849
|
| 1866 |
+
],
|
| 1867 |
+
"page_idx": 11
|
| 1868 |
+
},
|
| 1869 |
+
{
|
| 1870 |
+
"type": "image",
|
| 1871 |
+
"img_path": "images/6e374e920a3193863d847cfb25f07b95eeb8979e8f559db5791feebdb6d48a95.jpg",
|
| 1872 |
+
"image_caption": [
|
| 1873 |
+
"Figure 7. Visualization of trajectories. We visualize four scenarios: going straight, turning left, turning right, and yielding. For each scenario, 128 trajectories were generated using GoalFlow."
|
| 1874 |
+
],
|
| 1875 |
+
"image_footnote": [],
|
| 1876 |
+
"bbox": [
|
| 1877 |
+
106,
|
| 1878 |
+
851,
|
| 1879 |
+
893,
|
| 1880 |
+
895
|
| 1881 |
+
],
|
| 1882 |
+
"page_idx": 11
|
| 1883 |
+
}
|
| 1884 |
+
]
|
2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2503.05xxx/2503.05689/3339af19-8242-4e67-a7b7-e16079121bb2_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:db67b8192b052e66aa8c046732340cd99b74f86acc2b0020bed658360a9c76d4
|
| 3 |
+
size 5720835
|
2503.05xxx/2503.05689/full.md
ADDED
|
@@ -0,0 +1,392 @@
|
|
| 1 |
+
# GoalFlow: Goal-Driven Flow Matching for Multimodal Trajectories Generation in End-to-End Autonomous Driving
|
| 2 |
+
|
| 3 |
+
Zebin Xing $^{1,2*}$ , Xingyu Zhang $^{2*}$ , Yang Hu $^{2}$ , Bo Jiang $^{4,2}$
|
| 4 |
+
|
| 5 |
+
Tong He $^{5}$ , Qian Zhang $^{2}$ , Xiaoxiao Long $^{3}$ , Wei Yin $^{2\dagger}$
|
| 6 |
+
|
| 7 |
+
$^{1}$ School of Artificial Intelligence, University of Chinese Academy of Sciences $^{2}$ Horizon Robotics
|
| 8 |
+
|
| 9 |
+
$^{3}$ Nanjing University $^{4}$ Huazhong University of Science & Technology $^{5}$ Shanghai AI Laboratory
|
| 10 |
+
|
| 11 |
+
# Abstract
|
| 12 |
+
|
| 13 |
+
We propose GoalFlow, an end-to-end autonomous driving method for generating high-quality multimodal trajectories. In autonomous driving scenarios, there is rarely a single suitable trajectory. Recent methods have increasingly focused on modeling multimodal trajectory distributions. However, they suffer from trajectory selection complexity and reduced trajectory quality due to high trajectory divergence and inconsistencies between guidance and scene information. To address these issues, we introduce GoalFlow, a novel method that effectively constrains the generative process to produce high-quality, multimodal trajectories. To resolve the trajectory divergence problem inherent in diffusion-based methods, GoalFlow constrains the generated trajectories by introducing a goal point. GoalFlow establishes a novel scoring mechanism that selects the most appropriate goal point from the candidate points based on scene information. Furthermore, GoalFlow employs an efficient generative method, Flow Matching, to generate multimodal trajectories, and incorporates a refined scoring mechanism to select the optimal trajectory from the candidates. Our experimental results, validated on Navsim[7], demonstrate that GoalFlow achieves state-of-the-art performance, delivering robust multimodal trajectories for autonomous driving. GoalFlow achieved a PDMS of 90.3, significantly surpassing other methods. Compared with other diffusion-policy-based methods, our approach requires only a single denoising step to obtain excellent performance. The code is available at https://github.com/YvanYin/GoalFlow.
|
| 14 |
+
|
| 15 |
+
# 1. Introduction
|
| 16 |
+
|
| 17 |
+
Since UniAD[15], autonomous driving has increasingly favored end-to-end systems, where tasks like mapping and
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Goal-Driven Generation Model
|
| 23 |
+
Figure 1. Comparison of recent multimodal trajectory generation paradigms. A standalone generative model often produces highly diverse trajectories with no clear boundaries between modalities. In contrast, the goal-driven generation model leverages the strong guidance of goal points, effectively distinguishing multiple modalities by using different goal points.
|
| 24 |
+
|
| 25 |
+
detection ultimately serve the planning task. To enhance system reliability, some end-to-end algorithms[16, 17, 27] have begun exploring ways to generate multimodal trajectories as trajectory candidates for the algorithms. In autonomous driving, the command typically includes indicators for left, right, and straight actions. VAD[17] uses this command information to generate multimodal trajectories. Goal points, which provide the vehicle's location information for the next few seconds, are commonly used as guiding information in other approaches, such as SparseDrive[27]. These methods pre-define a set of goal points to generate different trajectory modes. Both approaches have succeeded in autonomous driving, offering candidate trajectories that significantly reduce collision rates. However, these methods' guiding information does not pursue accuracy but instead provides a set of candidate values for the trajectory.
|
| 26 |
+
|
| 27 |
+
When the gap between this guiding information and the ground truth is large, they are prone to generating low-quality trajectories.
|
| 28 |
+
|
| 29 |
+
In recent trajectory prediction works, some methods[18, 28, 32] aim to generate multimodal trajectories through diffusion, using scene or motion information as a condition to produce multimodal trajectories. Other methods [12] utilize diffusion to construct a world model. Without constraints, approaches like Diffusion-ES[32] tend to generate divergent trajectories, as depicted in the second row of Fig. 1, and therefore require a scoring mechanism based on HD maps to align with the real-world road network, which are difficult to obtain in end-to-end environments. MotionDiffuser[18] addresses trajectory divergence by using the ground truth endpoint as a constraint, which introduces overly strong prior information. GoalGAN[8] first predicts the goal point and then uses it to guide the GAN network to generate trajectories. However, GoalGAN samples goal points from grid cells, which does not consider the distribution of the goal points.
|
| 30 |
+
|
| 31 |
+
Reviewing previous work, we identified some overlooked issues: (1) Existing end-to-end autonomous driving systems tend to focus heavily on collision and L2 metrics, often adding specific losses or applying post-processing to reduce collisions, while overlooking whether the vehicle remains within the drivable area. (2) Most end-to-end methods are based on regression models and aim to achieve multimodality by using different guiding information. However, when the guiding information deviates significantly from the ground truth, it can lead to the generation of low-quality trajectories.
|
| 32 |
+
|
| 33 |
+
GoalFlow can be divided into three parts: Perception Module, Goal Point Construction Module, and Trajectory Planning Module. In the first module, following Transfuser[3], images and LiDAR are fed into two separate backbones and finally fused into a BEV feature. In the second module, GoalFlow establishes a dense vocabulary of goal points, and a novel scoring mechanism is used to select the optimal goal point that is closest to the ground truth goal point and within a drivable area. In the third module, GoalFlow uses flow matching to model multimodal trajectories efficiently. It conditions on scene information and incorporates stronger guidance from the selected goal point. Finally, GoalFlow employs a scoring mechanism to select the optimal trajectory. Compared to directly generating trajectories with diffusion, as in the first row of Fig. 1, our approach provides strong constraints on the trajectory, leading to more reliable results.
|
| 34 |
+
|
| 35 |
+
We conducted experimental validation in Navsim and found that our method outperformed other approaches in overall scoring. Notably, due to our goal point selection mechanism, we achieved a significant improvement in DAC scores. Additionally, we observed that this flow-matching-based approach is robust to the number of denoising steps during inference.
|
| 36 |
+
|
| 37 |
+
Even with only a single denoising step, the score dropped by only $1.6\%$ compared to the optimal case, enhancing the potential for real-world deployment of generative models in autonomous driving.
|
| 38 |
+
|
| 39 |
+
Our contributions can be summarized as follows:
|
| 40 |
+
|
| 41 |
+
- We designed a novel approach to establishing goal points, demonstrating its effectiveness in guiding generative models for trajectory generation.
|
| 42 |
+
- We introduced flow matching to end-to-end autonomous driving and seamlessly integrated it with goal point guidance.
|
| 43 |
+
- We developed an innovative trajectory selection mechanism, using shadow trajectories to further address potential goal point errors.
|
| 44 |
+
- Our method achieved state-of-the-art results in Navsim.
|
| 45 |
+
|
| 46 |
+
# 2. Related Work
|
| 47 |
+
|
| 48 |
+
# 2.1. End-to-End Autonomous Driving
|
| 49 |
+
|
| 50 |
+
Earlier end-to-end autonomous driving approaches[5][4] used imitation learning methods, directly extracting features from input images to generate trajectories. Later, Transfuser[3] advanced by fusing lidar and image information during perception, using auxiliary tasks such as mapping and detection to provide supervision for the perception. FusionAD[33] took Transfuser a step further by propagating fused perception features directly to the prediction and planning modules. Other methods [19, 20] align the traffic scene with natural language. UniAD[15] introduced a unified query design that made the framework ultimately planning-oriented. Similarly, VAD[17] focused on a planning-oriented approach by simplifying perception tasks and transforming scene representation into a vectorized format, significantly enhancing both planning capability and efficiency. Building on this, some methods[1, 22] discretized the trajectory space and constructed a trajectory vocabulary, transforming the regression task into a classification task. PARA-Drive[30] performs mapping, planning, motion prediction, and occupancy prediction tasks in parallel. GenAD[35] employed VAE and GRU for temporal trajectory reconstruction, while SparseDrive[27] progressed further in the vectorized scene representation, omitting denser BEV representations. Compared to previous methods that focus on better fitting ground truth trajectories using a regression model, we concentrate on generating high-quality multimodal trajectories in an end-to-end setting.
|
| 51 |
+
|
| 52 |
+
# 2.2. Diffusion Model and Flow Matching
|
| 53 |
+
|
| 54 |
+
Early generative models for image generation mainly used VAEs[21] and GANs[10]. Recently, diffusion models, which generate images by iteratively adding and removing noise, have become mainstream.
|
| 55 |
+
|
| 56 |
+

|
| 57 |
+
Figure 2. Overview of the GoalFlow architecture. GoalFlow consists of three modules. The Perception Module is responsible for integrating scene information into a BEV feature $F_{bev}$ , the Goal Point Construction Module selects the optimal goal point from Goal Point Vocabulary V as guidance information, and the Trajectory Planning Module generates the trajectories by denoising from the Gaussian distribution to the target distribution. Finally, the Trajectory Scorer selects the optimal trajectory from the candidates.
|
| 58 |
+
|
| 59 |
+
DDPM[14] applies noise to images during training, converting states over time steps, and subsequently denoises them during testing to reconstruct the image. More recent methods[26] have further optimized sampling efficiency. Additionally, CFG[13] has enhanced the robustness of generated outputs. Flow Matching[23] establishes a vector field for transitioning from one distribution to another. Rectified flow[24], a specific form of flow matching, enables a direct, linear transition path between distributions. Compared to diffusion models, rectified flow often requires only a single inference step to achieve good results.
|
| 60 |
+
|
| 61 |
+
# 2.3. MultiModal Trajectories Generation
|
| 62 |
+
|
| 63 |
+
In planning tasks, such as manipulation and autonomous driving, a given scenario often offers multiple action options, requiring effective multimodal modeling. Recent works[2, 31] in manipulation have explored this by applying diffusion models with notable success. Autonomous driving has adopted two main multimodal strategies: the first uses discrete commands to guide trajectory generation, such as in VAD[17], which produces three distinct trajectory modes, and SparseDrive[27] and [16], which cluster fixed navigation points from datasets for trajectory guidance. The second approach introduces diffusion models directly to generate multimodal trajectories[18, 29, 32], achieving success in trajectory prediction but facing challenges in end-to-end applications. Building on diffusion models, we address limitations in accuracy and efficiency by incorporating flow matching, using goal points to guide trajectories with precision rather than focusing solely on multimodal diversity.
|
| 64 |
+
|
| 65 |
+
# 3. Method
|
| 66 |
+
|
| 67 |
+
# 3.1. Preliminary
|
| 68 |
+
|
| 69 |
+
Compared to diffusion, which focuses on learning to reverse the gradual addition of noise over time to recover data, flow matching[23] focuses on learning invertible transformations that map between data distributions. Let $\pi_0$ denote a simple distribution, typically the standard normal distribution $p(x) = \mathcal{N}(x|0,I)$ , and let $\pi_1$ denote the target distribution. Under this framework, rectified flow[24] uses a simple and effective method to construct the path through optimal transport[25] displacement, which we choose as our Flow Matching method.
|
| 70 |
+
|
| 71 |
+
Given $x_0$ sampled from $\pi_0$ , $x_1$ sampled from $\pi_1$ , and $t \in [0, 1]$ , the path from $x_0$ to $x_1$ is defined as a straight line, meaning the intermediate status $x_t$ is given by $(1 - t)x_0 + tx_1$ , with the direction of intermediate status consistently following $x_1 - x_0$ . By constructing a neural network $v_\theta$ to predict the direction $x_1 - x_0$ based on the current state $x_t$ and time step $t$ , we can obtain a path from the initial distribution $\pi_0$ to target distribution $\pi_1$ by optimizing the loss between $v_\theta(x_t, t)$ and $x_1 - x_0$ . This can be formalized as:
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
v_{\theta}(x_{t}, t) \approx \mathbf{E}_{x_{0}\sim\pi_{0},\, x_{1}\sim\pi_{1}}\left[v_{t} \mid x_{t}\right] \tag{1}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\mathcal{L}(\theta) = \mathbf{E}_{x_{0}\sim\pi_{0},\, x_{1}\sim\pi_{1}}\left[\left\| v_{\theta}(x_{t}, t) - (x_{1} - x_{0}) \right\|_{2}\right] \tag{2}
|
| 79 |
+
$$
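To make the preliminary concrete, the following is a minimal sketch of the rectified-flow objective in Eqs. (1)-(2), assuming a toy velocity network; the network and the conditioning used by GoalFlow are described later and are not reproduced here.

```python
import torch
import torch.nn as nn

class VelocityMLP(nn.Module):
    """Toy stand-in for the velocity network v_theta(x_t, t) (illustrative only)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def rectified_flow_loss(v_theta: nn.Module, x1: torch.Tensor) -> torch.Tensor:
    """Eq. (2): regress the constant path direction x1 - x0 at a random time t."""
    x0 = torch.randn_like(x1)            # x0 ~ pi_0 = N(0, I)
    t = torch.rand(x1.size(0), 1)        # t ~ U[0, 1]
    x_t = (1 - t) * x0 + t * x1          # straight-line interpolation between x0 and x1
    v_target = x1 - x0                   # path direction is constant along the line
    return (v_theta(x_t, t) - v_target).norm(dim=-1).mean()

# usage: loss = rectified_flow_loss(VelocityMLP(dim=16), torch.randn(8, 16))
```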
|
| 80 |
+
|
| 81 |
+
# 3.2. GoalFlow
|
| 82 |
+
|
| 83 |
+
# 3.2.1. Overview
|
| 84 |
+
|
| 85 |
+
GoalFlow is a goal-driven end-to-end autonomous driving method that can generate high-quality multimodal trajectories. The overall architecture of GoalFlow is illustrated in
|
| 86 |
+
|
| 87 |
+
Figure 2. It comprises three main components. In the Perception Module, we obtain a BEV feature $F_{\mathrm{bev}}$ that encapsulates environmental information by fusing camera images $I$ , and LiDAR data $L$ . The Goal Point Construction Module focuses on generating precise guidance information for trajectory generation. It accomplishes this by constructing a goal point vocabulary $\mathbb{V} = \{g_i\}^N$ , and employing a scoring mechanism to select the most appropriate goal point $g$ . In the Trajectory Planning Module, we produce a set of multimodal trajectories, $\mathbb{T} = \{\hat{\tau}_i\}^M$ , and then identify the optimal trajectory $\tau$ , through a trajectory scoring mechanism.
|
| 88 |
+
|
| 89 |
+
# 3.2.2. Perception Module
|
| 90 |
+
|
| 91 |
+
In the first step, we fuse image and LiDAR data to create a BEV feature, $F_{\mathrm{bev}}$ , that captures rich road condition information. A single modality often lacks crucial details; for example, LiDAR does not capture traffic light information, while images cannot precisely locate objects. By fusing different sensor modalities, we can achieve a more complete and accurate representation of the road conditions.
|
| 92 |
+
|
| 93 |
+
We adopt the Transfuser architecture [3] for modality fusion. The forward, left, and right camera views are concatenated into a single image $I \in \mathbb{R}^{3 \times H_1 \times W_1}$, while LiDAR data is formed as a tensor $L \in \mathbb{R}^{K \times 3}$. These inputs are passed through separate backbones, and their features are fused at different layers using multiple transformer blocks. The result is a BEV feature, $F_{\mathrm{bev}}$, which comprehensively represents the scene. To ensure effective interaction between the ego vehicle and surrounding objects, as well as map information, we apply auxiliary supervision to the BEV feature through losses derived from HD maps and bounding boxes.
|
| 94 |
+
|
| 95 |
+
# 3.2.3. Goal Point Construction Module.
|
| 96 |
+
|
| 97 |
+
In this module, we construct a precise goal point to guide the trajectory generation process. Diffusion-based approaches[18, 32] without constraints often lead to excessive trajectory divergence, which complicates trajectory selection. Our key observation is that a goal point contains a precise description of the short-term future position, which imposes a strong constraint on the generation model. As a result, we divide the traditional Planning Module into two steps: first, constructing a precise goal point, and second, generating the trajectory through planning.
|
| 98 |
+
|
| 99 |
+
Goal Point Vocabulary. We aim to construct a goal point set that provides candidates for the optimal goal point. Traditional goal-based methods[11, 34] rely on lane-level information from HD maps to generate goal point sets for trajectory prediction. However, HD maps are expensive, making lane information often unavailable in end-to-end driving. Inspired by VADv2[1], we discretize the endpoint space of trajectories to generate candidate goal points, enabling a solution without relying on HD maps. We clustered trajectory endpoints $\mathbf{p}_i = (x_i, y_i, \theta_i)$ in the training data
|
| 100 |
+
|
| 101 |
+
to create $N$ cluster centers, which form our goal point vocabulary $\mathbb{V}$ . Each endpoint $p_i$ represents a position $(x_i, y_i)$ and heading $\theta_i$ . To ensure that the vocabulary represents finer-grained locations, we typically set $N$ to a large value, generally 4096 or 8192.
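The paper clusters trajectory endpoints into $N$ centers but does not name the clustering algorithm; the sketch below assumes a plain k-means over the endpoints, and the function and parameter names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_goal_vocabulary(endpoints: np.ndarray, n_points: int = 4096) -> np.ndarray:
    """Cluster trajectory endpoints (x, y, theta) into a goal point vocabulary V = {g_i}.

    endpoints: array of shape (num_trajectories, 3) collected from the training data.
    Returns the cluster centers, shape (n_points, 3).
    """
    kmeans = KMeans(n_clusters=n_points, n_init=10, random_state=0).fit(endpoints)
    return kmeans.cluster_centers_
```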
|
| 102 |
+
|
| 103 |
+
Goal Point Scorer. High-quality trajectories typically exhibit the following characteristics: a small distance to the ground truth and staying within the drivable area. To achieve this, we evaluate each goal point $g_{i}$ in the vocabulary $\mathbb{V}$ using two distinct scores: the Distance Score $\hat{\delta}^{\mathrm{dis}}$ and the Drivable Area Compliance Score $\hat{\delta}^{\mathrm{dac}}$. The Distance Score measures the proximity between the goal point $g_{i}$ and the endpoint of the ground truth trajectory $g^{\mathrm{gt}}$, with a continuous value in the range $\hat{\delta}^{\mathrm{dis}} \in [0,1]$, where a higher value indicates a closer match to $g^{\mathrm{gt}}$. The Drivable Area Compliance Score ensures that the goal point lies within the drivable area, using a binary value $\hat{\delta}^{\mathrm{dac}} \in \{0,1\}$, where 1 indicates that the goal point is valid within the drivable area, and 0 indicates it is not.
|
| 104 |
+
|
| 105 |
+
To construct the target distance score $\delta_i^{\mathrm{dis}}$ , we utilize the softmax function to map the Euclidean distance between the goal point $g_{i}$ and the ground truth goal point $g^{\mathrm{gt}}$ to the interval [0, 1]. This is defined as:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\delta_{i}^{\mathrm{dis}} = \frac{\exp\left(-\left\| g_{i} - g^{\mathrm{gt}} \right\|_{2}\right)}{\sum_{j}\exp\left(-\left\| g_{j} - g^{\mathrm{gt}} \right\|_{2}\right)} \tag{3}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
For the target drivable area compliance score $\delta_i^{\mathrm{dac}}$ , we introduce a shadow vehicle, whose bounding box is determined based on the position and heading $(x_i, y_i, \theta_i)$ in $g_i$ and the shape of the ego vehicle. Let $\{p^j\}^4$ represent the set of four corner positions of the shadow vehicle, and let $\mathbb{D}$ denote the polygon representing the drivable area. The drivable area compliance score $\delta_i^{\mathrm{dac}}$ is defined as:
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
\delta_{i}^{\mathrm{dac}} = \begin{cases} 1, & \text{if } \forall j,\ p^{j} \in \mathbb{D}^{\circ} \\ 0, & \text{otherwise} \end{cases}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
We compute the final score $\hat{\delta}_i^{\mathrm{final}}$ by aggregating $\hat{\delta}_i^{\mathrm{dis}}$ and $\hat{\delta}_i^{\mathrm{dac}}$ . The goal point with the highest final score is selected for trajectory generation.
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\hat{\delta}_{i}^{\mathrm{final}} = w_{1} \log \hat{\delta}_{i}^{\mathrm{dis}} + w_{2} \log \hat{\delta}_{i}^{\mathrm{dac}}
|
| 121 |
+
$$
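A minimal sketch of how these target scores could be computed offline follows. The use of shapely for the point-in-polygon test, the vehicle dimensions, and the epsilon that keeps $\log\hat{\delta}^{dac}$ finite for out-of-area points are assumptions for illustration, not details given in the paper.

```python
import numpy as np
from shapely.geometry import Point, Polygon

def distance_scores(vocab_xy: np.ndarray, goal_gt: np.ndarray) -> np.ndarray:
    """Eq. (3): softmax over negative L2 distances to the ground-truth goal point."""
    d = -np.linalg.norm(vocab_xy - goal_gt, axis=1)
    e = np.exp(d - d.max())                       # numerically stable softmax
    return e / e.sum()

def dac_score(goal: np.ndarray, drivable: Polygon, length: float = 4.6, width: float = 1.85) -> int:
    """1 if all four shadow-vehicle corners lie inside the drivable area, else 0."""
    x, y, theta = goal
    c, s = np.cos(theta), np.sin(theta)
    half = np.array([[ length / 2,  width / 2], [ length / 2, -width / 2],
                     [-length / 2, -width / 2], [-length / 2,  width / 2]])
    rot = np.array([[c, s], [-s, c]])             # rotate body-frame offsets into the world frame
    corners = half @ rot + np.array([x, y])
    return int(all(drivable.contains(Point(p)) for p in corners))

def final_scores(dis: np.ndarray, dac: np.ndarray, w1: float = 1.0, w2: float = 1.0, eps: float = 1e-12) -> np.ndarray:
    """Fused score w1*log(dis) + w2*log(dac); eps avoids log(0) and is an assumption."""
    return w1 * np.log(dis + eps) + w2 * np.log(dac + eps)
```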
|
| 122 |
+
|
| 123 |
+
As shown in Fig. 3(a), the Transformer-based Scorer Decoder uses the sum of $F_{v}$ and $F_{\mathrm{ego}}$ as the query, with $F_{\mathrm{bev}}$ as the key and value. The output is passed through two separate MLPs to produce the scores $\hat{\delta}^{dis}$ and $\hat{\delta}^{dac}$ for each point in the vocabulary $\mathbb{V}$. Fig. 3(b) shows the distribution of these two scores. With points in warmer colors representing higher scores, we observe that the score $\hat{\delta}^{dis}$ effectively indicates the desired future position, while $\hat{\delta}^{dac}$ identifies whether the goal point is within the drivable area.
|
| 124 |
+
|
| 125 |
+

|
| 126 |
+
(a)
|
| 127 |
+
|
| 128 |
+

|
| 129 |
+
(b)
|
| 130 |
+
|
| 131 |
+

|
| 132 |
+
Figure 3. Goal Point Scorer. (a) shows the detailed structure of the Goal Point Construction Module, and (b) presents the score distributions of $\{\hat{\delta}_i^{dis}\}^N$ , $\{\hat{\delta}_i^{dac}\}^N$ , and $\{\hat{\delta}_i^{final}\}^N$ , where points with higher scores are highlighted with warmer color.
|
| 133 |
+
Figure 4. The network architecture used in Rectified Flow.
|
| 134 |
+
|
| 135 |
+
# 3.2.4. Trajectory Planning Module
|
| 136 |
+
|
| 137 |
+
In this module, we generate constrained, high-quality trajectory candidates using a generative model and then select the optimal trajectory through a scoring mechanism. Generative models based on diffusion methods like DDPM[14] and DDIM[26] typically require complex denoising paths, leading to significant time overhead during inference, which makes them unsuitable for real-time systems like autonomous driving. In contrast, Rectified Flow[24], which is based on the optimal transport path in flow matching, requires much fewer inference steps to achieve good results. We adopt Rectified Flow as the generative model, using the BEV feature and goal point as conditions to generate multimodal trajectories.
|
| 138 |
+
|
| 139 |
+
Multimodal Trajectories Generating. We generate multimodal trajectories by modeling the shift from the noise distribution to the target trajectory distribution. During this distribution transfer process, given the current state $x_{t}$ and
|
| 140 |
+
|
| 141 |
+
time step $t$ , we predict the shift $\mathbf{v_t}$ .
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
\mathbf{v}_{\mathbf{t}} = \tau^{\mathrm{norm}} - x_{0} \tag{4}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
x_{t} = (1 - t)\, x_{0} + t\, \tau^{\mathrm{norm}} \tag{5}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+
$$
|
| 152 |
+
\tau^{\mathrm{norm}} = \mathcal{H}\left(\tau^{\mathrm{gt}}\right) \tag{6}
|
| 153 |
+
$$
|
| 154 |
+
|
| 155 |
+
Where $\tau^{gt}$ is the ground truth trajectory and $\tau^{norm}$ is its normalized form. We define $\mathcal{H}(\cdot)$ as the normalization operation applied to the trajectory. The variable $x_0$ represents the noise distribution, which follows $x_0\sim \mathcal{N}(0,\sigma^2 I)$ . The variable $x_{t}$ is obtained by linearly interpolating between $x_0$ and $\tau^{norm}$ .
|
| 156 |
+
|
| 157 |
+
As illustrated in Fig.4, we extract different features through a series of encoders. Specifically, we encode $x_{t}$ using a linear layer, while $t$ and the goal point are transformed into feature vectors via sinusoidal encoding. The feature $F_{\mathrm{env}}$ is obtained by passing the information from $F_{\mathrm{bev}}$ and $F_{\mathrm{ego}}$ through the environment encoder.
|
| 158 |
+
|
| 159 |
+
$$
|
| 160 |
+
F_{\mathrm{env}} = E_{\mathrm{env}}\left(Q,\; (F_{\mathrm{BEV}} + F_{\mathrm{ego}}),\; (F_{\mathrm{BEV}} + F_{\mathrm{ego}})\right) \tag{7}
|
| 161 |
+
$$
|
| 162 |
+
|
| 163 |
+
Here, $E_{\mathrm{env}}$ refers to a Transformer-based encoder, $Q$ denotes a learnable embedding, and $F_{\mathrm{ego}}$ represents the ego status feature, which encodes the kinematic information of the ego vehicle.
|
| 164 |
+
|
| 165 |
+
We concatenate the features $F_{\mathrm{env}}$, $F_{\mathrm{goal}}$, $F_{\mathrm{traj}}$, and $F_{t}$ to form the overall feature $F_{\mathrm{all}}$, which encapsulates the current state, time step, and scene information. This combined feature is then passed through several attention layers to predict the distribution shift $\mathbf{v}_{\mathrm{t}}$.
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
\hat{\mathbf{v}}_{\mathbf{t}} = \mathcal{G}\left(F_{\mathrm{all}}, F_{\mathrm{all}}, F_{\mathrm{all}}\right) \tag{8}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
F_{\mathrm{all}} = \operatorname{Concat}\left(F_{\mathrm{env}}, F_{\mathrm{goal}}, F_{\mathrm{traj}}, F_{t}\right) \tag{9}
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
Where $\mathcal{G}$ is the network that consists of N attention layers.
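As an illustration of Eqs. (8)-(9), the sketch below concatenates the condition tokens and applies a few self-attention layers; the token layout, the mean-pooled readout, and the layer sizes are assumptions for illustration rather than the exact GoalFlow architecture.

```python
import torch
import torch.nn as nn

class FlowDecoderSketch(nn.Module):
    """Concatenate F_env, F_goal, F_traj, F_t along the token axis and self-attend (Eqs. 8-9)."""
    def __init__(self, d_model: int = 256, n_layers: int = 3, traj_dim: int = 16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, traj_dim)

    def forward(self, f_env, f_goal, f_traj, f_t):
        # each feature: (B, n_tokens_i, d_model)
        f_all = torch.cat([f_env, f_goal, f_traj, f_t], dim=1)   # F_all = Concat(...)
        out = self.attn(f_all)                                   # v_hat_t = G(F_all, F_all, F_all)
        return self.head(out.mean(dim=1))                        # mean-pool readout (assumption)
```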
|
| 176 |
+
|
| 177 |
+
We reconstruct the trajectory distribution using $x_0$ and $\hat{\mathbf{v}}_{\mathbf{t}}$ . Typically, we achieve this by performing multiple inference steps through the Rectified Flow, gradually transforming the noise distribution $x_0$ to the target distribution $\tau^{\mathrm{norm}}$ . Finally, we apply denormalization to $\tau^{\mathrm{norm}}$ to obtain the final trajectory $\hat{\tau}$ .
|
| 178 |
+
|
| 179 |
+
$$
|
| 180 |
+
\hat{\tau} = \mathcal{H}^{-1}\left(\hat{\tau}^{\mathrm{norm}}\right) \tag{10}
|
| 181 |
+
$$
|
| 182 |
+
|
| 183 |
+
$$
|
| 184 |
+
\hat{\tau}^{\mathrm{norm}} = x_{0} + \frac{1}{n} \sum_{i}^{n} \hat{v}_{t_{i}} \tag{11}
|
| 185 |
+
$$
|
| 186 |
+
|
| 187 |
+
Where $n$ is the total number of inference steps, and $t_i$ is the time step sampled in the $i$-th step, which satisfies $t_i \in [0,1]$. $\mathcal{H}^{-1}(\cdot)$ is the denormalization operation.
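The inference in Eq. (11) amounts to Euler integration of the learned velocity field. Below is a minimal sketch assuming a velocity network with signature `v_theta(x_t, t)` and uniform timesteps (the paper re-scales timesteps nonlinearly following [9]).

```python
import torch

@torch.no_grad()
def sample_trajectories(v_theta, num_samples: int, dim: int, steps: int = 5, sigma: float = 0.1) -> torch.Tensor:
    """Euler integration of Eq. (11); Table 3 shows even a single step works well."""
    x = sigma * torch.randn(num_samples, dim)        # x0 ~ N(0, sigma^2 I)
    for i in range(steps):
        t = torch.full((num_samples, 1), i / steps)  # current time step t_i in [0, 1)
        x = x + (1.0 / steps) * v_theta(x, t)        # x <- x + (1/n) * v_theta(x_t, t_i)
    return x                                         # normalized trajectories; apply H^{-1} afterwards
```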
|
| 188 |
+
|
| 189 |
+
Trajectory Selecting. In trajectory selection, methods like SparseDrive[27] and Diffusion-ES[32] rely on kinematic simulation of the generated trajectories to predict potential collisions with surrounding agents, thus selecting the optimal trajectory. This process significantly increases the inference time. We simplify this procedure by using the goal point as a reference for selecting the trajectory. Specifically, we trade off the trajectory's distance to the goal point against ego progress, selecting the optimal trajectory through a trajectory scorer.
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
f\left(\hat{\tau}_{i}\right) = -\lambda_{1}\, \Phi\left(f_{\mathrm{dis}}\left(\hat{\tau}_{i}\right)\right) + \lambda_{2}\, \Phi\left(f_{\mathrm{pg}}\left(\hat{\tau}_{i}\right)\right) \tag{12}
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
where $\Phi(\cdot)$ is the min-max normalization operation, $f_{dis}(\hat{\tau}_i)$ denotes the $\mathcal{L}_2$ distance between $\hat{\tau}_i$ and $g$, and $f_{pg}(\hat{\tau}_i)$ denotes the progress made by $\hat{\tau}_i$.
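A small numpy sketch of the trajectory scorer in Eq. (12) follows; reading $f_{pg}$ as the arc length the trajectory covers is an interpretation, and the weights are placeholders.

```python
import numpy as np

def select_trajectory(trajs: np.ndarray, goal_xy: np.ndarray, lam1: float = 1.0, lam2: float = 1.0) -> int:
    """trajs: (M, T, 2) candidate trajectories; returns the index of the highest-scoring one."""
    def minmax(v: np.ndarray) -> np.ndarray:
        return (v - v.min()) / (v.max() - v.min() + 1e-9)                    # Phi: min-max normalization

    dist_to_goal = np.linalg.norm(trajs[:, -1, :] - goal_xy, axis=1)         # f_dis: endpoint vs. goal point
    progress = np.linalg.norm(np.diff(trajs, axis=1), axis=2).sum(axis=1)    # f_pg: distance travelled
    scores = -lam1 * minmax(dist_to_goal) + lam2 * minmax(progress)          # Eq. (12)
    return int(scores.argmax())
```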
|
| 196 |
+
|
| 197 |
+
Furthermore, the predicted goal point may contain errors that misguide the trajectory. To mitigate this, we mask the goal point during generation to create a shadow trajectory. If the shadow trajectory deviates significantly from the main trajectory, we treat the goal point as unreliable and use the shadow trajectory as the output.
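The shadow-trajectory check can be sketched as a simple fallback rule; the deviation measure and threshold below are illustrative assumptions, since the paper does not specify them numerically.

```python
import numpy as np

def resolve_with_shadow(traj_goal: np.ndarray, traj_shadow: np.ndarray, threshold: float = 2.0) -> np.ndarray:
    """Fall back to the goal-free (shadow) trajectory when the goal-guided one deviates too much."""
    deviation = np.linalg.norm(traj_goal - traj_shadow, axis=-1).mean()   # mean point-wise gap (illustrative)
    return traj_shadow if deviation > threshold else traj_goal
```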
|
| 198 |
+
|
| 199 |
+
# 3.2.5. Training Losses
|
| 200 |
+
|
| 201 |
+
First, we optimize the perception extractor exclusively and enforce multiple perception losses for supervision, including cross-entropy losses for the HD map $(L_{HD})$ and 3D bounding box classification $(L_{bbox})$, and an $L_{1}$ loss for 3D bounding box locations $(L_{loc})$. This stage aims to enrich the BEV feature with information from the various perception tasks. The losses are as follows.
|
| 202 |
+
|
| 203 |
+
$$
|
| 204 |
+
L_{\mathrm{perception}} = w_{1} L_{\mathrm{HD}} + w_{2} L_{\mathrm{bbox}} + w_{3} L_{\mathrm{loc}} \tag{13}
|
| 205 |
+
$$
|
| 206 |
+
|
| 207 |
+
where $w_{1}, w_{2}, w_{3}$ are set to 10.0, 1.0, 10.0 in training. For the goal constructor, we employ the cross entropy loss for
|
| 208 |
+
|
| 209 |
+
distance score $(L_{dis})$ and DAC score $(L_{dac})$ . $w_4, w_5$ are set to 1.0 and 0.005.
|
| 210 |
+
|
| 211 |
+
$$
|
| 212 |
+
L_{\mathrm{goal}} = w_{4} L_{\mathrm{dis}} + w_{5} L_{\mathrm{dac}} \tag{14}
|
| 213 |
+
$$
|
| 214 |
+
|
| 215 |
+
$$
|
| 216 |
+
L_{\mathrm{dis}} = -\sum_{i=1}^{N} \delta_{i}^{\mathrm{dis}} \log\left(\hat{\delta}_{i}^{\mathrm{dis}}\right) \tag{15}
|
| 217 |
+
$$
|
| 218 |
+
|
| 219 |
+
$$
|
| 220 |
+
L_{\mathrm{dac}} = -\delta^{\mathrm{dac}} \log \hat{\delta}^{\mathrm{dac}} - \left(1 - \delta^{\mathrm{dac}}\right) \log\left(1 - \hat{\delta}^{\mathrm{dac}}\right) \tag{16}
|
| 221 |
+
$$
|
| 222 |
+
|
| 223 |
+
An $L_{1}$ loss is utilized for the multimodal planner.
|
| 224 |
+
|
| 225 |
+
$$
|
| 226 |
+
L_{\mathrm{planner}} = \left| \mathbf{v}_{\mathbf{t}} - \hat{\mathbf{v}}_{\mathbf{t}} \right| \tag{17}
|
| 227 |
+
$$
|
| 228 |
+
|
| 229 |
+
# 4. Experiments
|
| 230 |
+
|
| 231 |
+
# 4.1. Dataset
|
| 232 |
+
|
| 233 |
+
Our experiments are conducted on the Openscene[6] dataset. Openscene includes 120 hours of autonomous driving data. Its end-to-end environment Navsim[7] uses 1192 and 136 scenarios for trainval and testing, respectively, totaling over 100k samples at $2\mathrm{Hz}$. Each sample contains camera images from 8 perspectives, fused LiDAR data from 5 sensors, ego status, and annotations for the map and objects.
|
| 234 |
+
|
| 235 |
+
# 4.2. Metrics
|
| 236 |
+
|
| 237 |
+
In the Navsim environment, the generated $2\mathrm{Hz}$ , 4-second trajectories are interpolated via an LQR controller to yield $10\mathrm{Hz}$ , 4-second trajectories. These trajectories are scored using closed-loop metrics, including No at-fault Collisions $S_{NC}$ , Drivable Area Compliance $S_{DAC}$ , Time to Collision $S_{TTC}$ with bounds, Ego Progress $S_{EP}$ , Comfort $S_{CF}$ , and Driving Direction Compliance $S_{DDC}$ . The final score is derived by aggregating these metrics. Due to practical constraints, $S_{DDC}$ is omitted from the calculation<sup>1</sup>.
|
| 238 |
+
|
| 239 |
+
$$
|
| 240 |
+
S_{PDM} = S_{NC} \times S_{DAC} \times S_{TTC} \times \left(\frac{5 \times S_{EP} + 5 \times S_{CF} + 2 \times S_{DDC}}{12}\right) \tag{18}
|
| 241 |
+
$$
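For reference, the aggregation in Eq. (18) is straightforward to compute from the sub-scores; a one-line helper (values in [0, 1] are illustrative):

```python
def pdm_score(s_nc: float, s_dac: float, s_ttc: float, s_ep: float, s_cf: float, s_ddc: float) -> float:
    """Eq. (18): multiplicative safety terms times a weighted average of progress, comfort, and direction."""
    return s_nc * s_dac * s_ttc * ((5 * s_ep + 5 * s_cf + 2 * s_ddc) / 12)

# e.g. pdm_score(1.0, 1.0, 1.0, 0.8, 1.0, 1.0) is roughly 0.917
```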
|
| 242 |
+
|
| 243 |
+
# 4.3. Baselines
|
| 244 |
+
|
| 245 |
+
In Navsim, we compare against the following baselines: Constant Velocity Assumes constant speed from the current timestamp for forward movement. Ego Status MLP Takes only the current state as input and uses an MLP to generate the trajectory. PDM-Closed Using ground-truth perception as input, several trajectories are generated through a rule-based IDM method. The PDM scorer then selects the optimal trajectory from these as the output. Transfuser Uses both image and LiDAR inputs, fusing them via a transformer into a BEV feature, which is then used for trajectory generation. LTF A streamlined version
|
| 246 |
+
|
| 247 |
+
<table><tr><td>Method</td><td>Ego Stat.</td><td>Image</td><td>LiDAR</td><td>Video</td><td>\(S_{NC} \uparrow\)</td><td>\(S_{DAC} \uparrow\)</td><td>\(S_{TTC} \uparrow\)</td><td>\(S_{CF} \uparrow\)</td><td>\(S_{EP} \uparrow\)</td><td>\(S_{PDM} \uparrow\)</td></tr><tr><td>Constant Velocity</td><td>✓</td><td></td><td></td><td></td><td>68.0</td><td>57.8</td><td>50.0</td><td>100</td><td>19.4</td><td>20.6</td></tr><tr><td>Ego Status MLP</td><td>✓</td><td></td><td></td><td></td><td>93.0</td><td>77.3</td><td>83.6</td><td>100</td><td>62.8</td><td>65.6</td></tr><tr><td>LTF [3]</td><td>✓</td><td>✓</td><td></td><td></td><td>97.4</td><td>92.8</td><td>92.4</td><td>100</td><td>79.0</td><td>83.8</td></tr><tr><td>TransFuser [3]</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>97.7</td><td>92.8</td><td>92.8</td><td>100</td><td>79.2</td><td>84.0</td></tr><tr><td>UniAD [15]</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>97.8</td><td>91.9</td><td>92.9</td><td>100</td><td>78.8</td><td>83.4</td></tr><tr><td>PARA-Drive [30]</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>97.9</td><td>92.4</td><td>93.0</td><td>99.8</td><td>79.3</td><td>84.0</td></tr><tr><td>GoalFlow (Ours)</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>85.0</td><td>90.3</td></tr><tr><td>GoalFlow†</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>99.8</td><td>97.9</td><td>98.6</td><td>100</td><td>85.4</td><td>92.1</td></tr><tr><td>Human‡</td><td></td><td></td><td></td><td></td><td>100</td><td>100</td><td>100</td><td>99.9</td><td>87.5</td><td>94.8</td></tr></table>
|
| 248 |
+
|
| 249 |
+
Table 1. Comparisons with SOTA methods in PDM score metrics on Navsim [7] Test. Our method outperforms other approaches across all evaluation metrics. † uses the endpoint of the ground-truth trajectory as the goal point. ‡ uses the ground-truth trajectories to evaluate.
|
| 250 |
+
|
| 251 |
+
<table><tr><td>Model</td><td>Description</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SCF</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>-</td><td>Transfuser[3]</td><td>97.7</td><td>92.8</td><td>92.8</td><td>100</td><td>79.0</td><td>84.0</td></tr><tr><td>M0</td><td>Base Model</td><td>97.9</td><td>94.2</td><td>94.2</td><td>100</td><td>79.9</td><td>85.6</td></tr><tr><td>M1</td><td>M0 + Distance Score Map</td><td>98.5</td><td>96.4</td><td>94.9</td><td>100</td><td>83.0</td><td>88.5</td></tr><tr><td>M2</td><td>M1 + DAC Score Map</td><td>98.6</td><td>97.5</td><td>94.7</td><td>100</td><td>83.8</td><td>89.4</td></tr><tr><td>M3</td><td>M2 + Trajectory Scorer</td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>85.0</td><td>90.3</td></tr></table>
|
| 252 |
+
|
| 253 |
+
Table 2. Ablation study on the influence of each component. $\mathcal{M}_0$ is the base model, which uses rectified flow without goal point guidance and averages all generated trajectories to produce the final output. $\mathcal{M}_1$ and $\mathcal{M}_2$ introduce the distance score map and DAC score map, respectively, to guide the rectified flow. $\mathcal{M}_3$ builds upon $\mathcal{M}_1$ by incorporating trajectory scorer.
|
| 254 |
+
|
| 255 |
+
<table><tr><td>T</td><td>Inf.Time</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SCF↑</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>20</td><td>177.8ms</td><td>98.3</td><td>98.1</td><td>94.3</td><td>100</td><td>84.7</td><td>89.9</td></tr><tr><td>10</td><td>92.4ms</td><td>98.3</td><td>98.2</td><td>94.4</td><td>100</td><td>84.9</td><td>90.1</td></tr><tr><td>5</td><td>49.0ms</td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>84.4</td><td>90.3</td></tr><tr><td>1</td><td>10.4ms</td><td>98.4</td><td>97.8</td><td>94.1</td><td>100</td><td>84.5</td><td>88.9</td></tr></table>
|
| 256 |
+
|
| 257 |
+
Table 3. Impact of different timesteps in inference. $T$ denotes the number of denoising steps during inference. The results indicate that the model's performance is robust to variations of denoising steps.
|
| 258 |
+
|
| 259 |
+
<table><tr><td>σ</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SCF↑</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>0.05</td><td>98.3</td><td>98.2</td><td>94.4</td><td>100</td><td>85.0</td><td>90.1</td></tr><tr><td>0.1</td><td>98.4</td><td>98.3</td><td>94.6</td><td>100</td><td>85.0</td><td>90.3</td></tr><tr><td>0.2</td><td>87.4</td><td>76.0</td><td>69.4</td><td>32.0</td><td>56.2</td><td>49.0</td></tr><tr><td>0.3</td><td>68.3</td><td>48.1</td><td>44.8</td><td>2.23</td><td>23.6</td><td>18.8</td></tr></table>
|
| 260 |
+
|
| 261 |
+
Table 4. Impact of different values of $\sigma$ on the initial noise distribution. $\sigma$ is the standard deviation of $x_0$ . The results show that performance drops significantly when $\sigma$ exceeds 0.1, but remains stable for values below 0.1.
|
| 262 |
+
|
| 263 |
+
of Transfuser, where the LiDAR backbone is replaced with a learnable embedding. It achieves results in NavSim similar to Transfuser. UniAD Employs multiple transformer architectures to process information differently, using queries
|
| 264 |
+
|
| 265 |
+
to transfer information specifically for planning. PARA-Drive Differs from UniAD by performing mapping, planning, motion prediction, and occupancy prediction tasks in parallel based on the BEV feature.
|
| 266 |
+
|
| 267 |
+
# 4.4. Model Setups and Parameters
|
| 268 |
+
|
| 269 |
+
The training of rectified flow[24] follows classifier-free guidance[13], where features within the conditioning set are randomly masked to bolster model robustness. The last point of the ground-truth trajectory is used to guide flow matching in trajectory generation during training. In testing, the goal point for trajectory generation is set by selecting the highest-scoring point from the goal point vocabulary. The sampling process employs a smoothing method in [9] that re-scales the timesteps nonlinearly, instead of using uniform intervals. We generate 128/256 trajectories, from which the trajectory scorer identifies the optimal one. All training was conducted on 4 nodes, each equipped with 8 RTX 4090 or RTX 3090 GPUs.
|
| 270 |
+
|
| 271 |
+
# 4.5. Results and Analysis
|
| 272 |
+
|
| 273 |
+
Comparison with SOTA Methods. In Table 1, we compared our method with several state-of-the-art algorithms in end-to-end autonomous driving, highlighting the highest scores in bold. Testing in the Navsim environment revealed
|
| 274 |
+
|
| 275 |
+
that GoalFlow consistently outperformed other methods in overall scores. Notably, our method surpasses the second-best approach by 5.5 points in the DAC score and by 5.7 points in the EP score, indicating that GoalFlow provides stronger constraints on keeping the vehicle within drivable areas, thus enhancing the safety of autonomous driving systems. Additionally, GoalFlow enables faster driving speeds while ensuring safety. Further experiments, where we replaced the predicted goal point with the endpoint of the ground truth trajectory, resulted in a score of 92.1, which is very close to the human trajectory score of 94.8. This demonstrates the strong guiding capability of the goal point in autonomous driving.
|
| 276 |
+
|
| 277 |
+
Ablation Study on the Influence of Each Component. We conduct an ablation study on the influence of each component in Table 2. $\mathcal{M}_0$ represents a model that generates trajectories using only the rectified flow. In our experimental results, the base $\mathcal{M}_0$ consistently outperforms baseline methods on Navsim, particularly excelling in DAC and TTC. This indicates that the base model, which is based on flow matching, has effectively learned interactions with map information and surrounding agents, demonstrating that the flow model alone possesses strong modeling capabilities.
|
| 278 |
+
|
| 279 |
+
The $\mathcal{M}_1$ model builds on $\mathcal{M}_0$ by modeling the distance score distribution and selecting the point with the highest score to guide the rectified flow. We found that this results in the most significant improvement, demonstrating the effectiveness of decomposing the trajectory planning task. Specifically, we decompose the complex task into two simpler sub-tasks: goal point prediction and trajectory generation guided by the goal point.
|
| 280 |
+
|
| 281 |
+
The $\mathcal{M}_2$ model builds upon $\mathcal{M}_1$ by incorporating the prediction of DAC score distribution. The main improvement is seen in the DAC score. By introducing multiple evaluators from different perspectives, the model benefits from a more robust assessment, resulting in improved performance.
|
| 282 |
+
|
| 283 |
+
By incorporating the trajectory scorer, which includes a trajectory selection and goal point checking mechanism, $\mathcal{M}_3$ further enhances the reliability of GoalFlow.
|
| 284 |
+
|
| 285 |
+
Impact of Different Steps in Inference. We conducted experiments with different numbers of denoising steps during the inference process, as shown in Table 3. In these experiments, we found that as the number of inference steps decreases from 20 to 1, the scores remain stable. Specifically, even with just a single inference step, excellent performance is achieved. This highlights the advantage of flow matching over diffusion-based frameworks: flow matching takes a direct, straight path, requiring fewer steps to transfer from the noise distribution to the target distribution during inference. Additionally, as the inference steps are reduced from 20 to 1, the denoising time per sample during inference decreases
|
| 286 |
+
|
| 287 |
+
to $6\%$ of the original. This efficient inference process is especially critical for autonomous driving systems, where real-time performance is essential.
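As a rough illustration of why so few steps suffice, the sketch below integrates a learned velocity field with plain Euler steps; the model signature, trajectory shape, and noise scale are assumptions for illustration, not the actual GoalFlow interface.

```python
import torch

def sample_trajectory(velocity_model, cond, goal_point, num_steps=1,
                      sigma=0.1, horizon=40, dim=3):
    """Few-step Euler integration of a rectified flow, starting from
    x_0 ~ N(0, sigma^2 I); a minimal sketch under assumed shapes."""
    x = sigma * torch.randn(1, horizon, dim)        # initial noise
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = torch.full((1,), k * dt)
        v = velocity_model(x, t, cond, goal_point)  # predicted velocity at time t
        x = x + dt * v                              # straight-line Euler update
    return x
```

Because the learned transport path is nearly straight, even `num_steps=1` lands close to the target distribution, which is consistent with the timing drop reported in Table 3.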
|
| 288 |
+
|
| 289 |
+
Impact of Different Initial Noise in Training. In the experiments, the initial noise follows a Gaussian distribution $\mathcal{N}(0,\sigma^2 I)$. We explored the impact of the noise scale $\sigma$ on the generated trajectories in Table 4. The results reveal that the noise setting has a significant impact on the scores. When the noise is set too high, the generated trajectories become excessively erratic; notably, with a $\sigma$ of 0.3, the Comfort score drops to only 2.23, indicating that the trajectory lacks a coherent shape. Conversely, when $\sigma$ is too small, flow matching tends to degenerate into a regression model, reducing the trajectory diversity available for scoring. This lack of variety lowers the overall scores.
|
| 290 |
+
|
| 291 |
+
<table><tr><td>Dim</td><td>Backbone</td><td>SNC↑</td><td>SDAC↑</td><td>STTC↑</td><td>SEP↑</td><td>SPDM↑</td></tr><tr><td>256/256</td><td>V2-99/V2-99</td><td>97.1</td><td>96.2</td><td>91.8</td><td>81.8</td><td>86.5</td></tr><tr><td>512/512</td><td>V2-99/V2-99</td><td>97.3</td><td>97.6</td><td>92.5</td><td>83.0</td><td>88.1</td></tr><tr><td>1024/1024</td><td>V2-99/V2-99</td><td>98.6</td><td>97.5</td><td>94.7</td><td>85.0</td><td>89.4</td></tr><tr><td>256/256</td><td>resnet34/resnet34</td><td>98.3</td><td>93.8</td><td>94.3</td><td>79.8</td><td>85.7</td></tr><tr><td>1024/256</td><td>V2-99/resnet34</td><td>98.2</td><td>96.4</td><td>93.8</td><td>82.6</td><td>87.9</td></tr></table>
|
| 292 |
+
|
| 293 |
+
Table 5. Impact of Scaling Model. We examine the impact of scaling the Transformer's hidden dimension and changing the image backbone within the Goal Point Construction Module (left) and Trajectory Planning Module (right). Increasing the hidden dimension and using a stronger image backbone both lead to improved end-to-end performance. For fair comparison, we align post-processing with baseline $\mathcal{M}_2$ .
|
| 294 |
+
|
| 295 |
+
Impact of Scaling Model. Inspired by [36], we present experiments on scaling the model based on the $\mathcal{M}_2$ in Table 5. Under the same V2-99 backbone, increasing the hidden dimension consistently improves performance, with the best results observed at a dimension of 1024. Additionally, we conducted experiments to compare different configurations of the Goal Point Construction Module. We found that scaling this module significantly improves overall performance, highlighting the critical role of goal point guidance in trajectory planning.
|
| 296 |
+
|
| 297 |
+
# 5. Conclusion
|
| 298 |
+
|
| 299 |
+
In this paper, we focus on generating accurate and efficient multimodal trajectories. We reviewed recent works on multimodal trajectory generation in autonomous driving and proposed a framework that generates precise goal points and effectively constrains the generative model with them, ultimately producing high-quality multimodal trajectories. We conducted experiments in the Navsim environment, demonstrating that GoalFlow achieves state-of-the-art performance. In the future, we aim to further investigate the impact of different guidance information on multimodal trajectory generation.
|
| 300 |
+
|
| 301 |
+
# References
|
| 302 |
+
|
| 303 |
+
[1] Shaoyu Chen, Bo Jiang, Hao Gao, Bencheng Liao, Qing Xu, Qian Zhang, Chang Huang, Wenyu Liu, and Xinggang Wang. Vadv2: End-to-end vectorized autonomous driving via probabilistic planning. arXiv preprint arXiv:2402.13243, 2024. 2, 4
|
| 304 |
+
[2] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS), 2023. 3
|
| 305 |
+
[3] Kashyap Chitta, Aditya Prakash, Bernhard Jaeger, Zehao Yu, Katrin Renz, and Andreas Geiger. Transfuser: Imitation with transformer-based sensor fusion for autonomous driving. Pattern Analysis and Machine Intelligence (PAMI), 2023. 2, 4, 7
|
| 306 |
+
[4] Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), page 1-9. IEEE Press, 2018. 2
|
| 307 |
+
[5] Felipe Codevilla, Eder Santana, Antonio Lopez, and Adrien Gaidon. Exploring the limitations of behavior cloning for autonomous driving. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9328-9337, 2019. 2
|
| 308 |
+
[6] OpenScene Contributors. Openscene: The largest up-to-date 3d occupancy prediction benchmark in autonomous driving. https://github.com/OpenDriveLab/OpenScene, 2023. 6
|
| 309 |
+
[7] Daniel Dauner, Marcel Hallgarten, Tianyu Li, Xinshuo Weng, Zhiyu Huang, Zetong Yang, Hongyang Li, Igor Gilitschenski, Boris Ivanovic, Marco Pavone, Andreas Geiger, and Kashyap Chitta. Navsim: Data-driven non-reactive autonomous vehicle simulation and benchmarking. arXiv, 2406.15349, 2024. 1, 6, 7
|
| 310 |
+
[8] Patrick Dendorfer, Aljosa Osep, and Laura Leal-Taixe. Goalgan: Multimodal trajectory prediction based on goal position estimation. In Asian Conference on Computer Vision, 2020. 2
|
| 311 |
+
[9] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis. In Proceedings of the 41st International Conference on Machine Learning, pages 12606-12633. PMLR, 2024. 7
|
| 312 |
+
[10] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, page 2672-2680, Cambridge, MA, USA, 2014. MIT Press. 2
|
| 313 |
+
[11] Junru Gu, Chen Sun, and Hang Zhao. Densetnt: End-to-end trajectory prediction from dense goal sets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15303-15312, 2021. 4
|
| 314 |
+
|
| 315 |
+
[12] Songen Gu, Wei Yin, Bu Jin, Xiaoyang Guo, Junming Wang, Haodong Li, Qian Zhang, and Xiaoxiao Long. Dome: Taming diffusion model into high-fidelity controllable occupancy world model. arXiv preprint arXiv:2410.10429, 2024. 2
|
| 316 |
+
[13] Jonathan Ho. Classifier-free diffusion guidance. ArXiv, abs/2207.12598, 2022. 3, 7
|
| 317 |
+
[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, pages 6840-6851. Curran Associates, Inc., 2020. 3, 5
|
| 318 |
+
[15] Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, Lewei Lu, Xiaosong Jia, Qiang Liu, Jifeng Dai, Yu Qiao, and Hongyang Li. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2, 7
|
| 319 |
+
[16] Zhiyu Huang, Xinshuo Weng, Maximilian Igl, Yuxiao Chen, Yulong Cao, Boris Ivanovic, Marco Pavone, and Chen Lv. Gen-drive: Enhancing diffusion generative driving policies with reward modeling and reinforcement learning finetuning. arXiv preprint arXiv:2410.05582, 2024. 1, 3
|
| 320 |
+
[17] Bo Jiang, Shaoyu Chen, Qing Xu, Bencheng Liao, Jiajie Chen, Helong Zhou, Qian Zhang, Wenyu Liu, Chang Huang, and Xinggang Wang. Vad: Vectorized scene representation for efficient autonomous driving. ICCV, 2023. 1, 2, 3
|
| 321 |
+
[18] Chiyu "Max" Jiang, Andre Cornman, Cheolho Park, Benjamin Sapp, Yin Zhou, and Dragomir Anguelov. Motion-diffuser: Controllable multi-agent motion prediction using diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9644-9653, 2023. 2, 3, 4
|
| 322 |
+
[19] Bu Jin, Xinyu Liu, Yupeng Zheng, Pengfei Li, Hao Zhao, Tong Zhang, Yuhang Zheng, Guyue Zhou, and Jingjing Liu. Adapt: Action-aware driving caption transformer. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 7554-7561, 2023. 2
|
| 323 |
+
[20] Bu Jin, Yupeng Zheng, Pengfei Li, Weize Li, Yuhang Zheng, Sujie Hu, Xinyu Liu, Jinwei Zhu, Zhijie Yan, Haiyang Sun, Kun Zhan, Peng Jia, Xiaoxiao Long, Yilun Chen, and Hao Zhao. Tod3cap: Towards 3d dense captioning in outdoor scenes. In Computer Vision - ECCV 2024: 18th European Conference, Milan, Italy, September 29 - October 4, 2024, Proceedings, Part XVIII, page 367-384, Berlin, Heidelberg, 2024. Springer-Verlag. 2
|
| 324 |
+
[21] Diederik P Kingma. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 2
|
| 325 |
+
[22] Zhenxin Li, Kailin Li, Shihao Wang, Shiyi Lan, Zhiding Yu, Yishen Ji, Zhiqi Li, Ziye Zhu, Jan Kautz, Zuxuan Wu, et al. Hydra-mdp: End-to-end multimodal planning with multitarget hydra-distillation. arXiv preprint arXiv:2406.06978, 2024. 2
|
| 326 |
+
[23] Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. 3
|
| 327 |
+
[24] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with
|
| 328 |
+
|
| 329 |
+
rectified flow. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. 3, 5, 7
|
| 330 |
+
[25] Robert J. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153-179, 1997. 3
|
| 331 |
+
[26] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv:2010.02502, 2020. 3, 5
|
| 332 |
+
[27] Wenchao Sun, Xuewu Lin, Yining Shi, Chuang Zhang, Haoran Wu, and Sifa Zheng. Sparsedrive: End-to-end autonomous driving via sparse scene representation. arXiv preprint arXiv:2405.19620, 2024. 1, 2, 3, 6
|
| 333 |
+
[28] Junming Wang, Xingyu Zhang, Zebin Xing, Songen Gu, Xiaoyang Guo, Yang Hu, Ziying Song, Qian Zhang, Xiaoxiao Long, and Wei Yin. He-drive: Human-like end-to-end driving with vision language models. arXiv preprint arXiv:2410.05051, 2024. 2
|
| 334 |
+
[29] Junming Wang, Xingyu Zhang, Zebin Xing, Songen Gu, Xiaoyang Guo, Yang Hu, Ziying Song, Qian Zhang, Xiaoxiao Long, and Wei Yin. He-drive: Human-like end-to-end driving with vision language models. arXiv preprint arXiv:2410.05051, 2024. 3
|
| 335 |
+
[30] Xinshuo Weng, Boris Ivanovic, Yan Wang, Yue Wang, and Marco Pavone. Para-drive: Parallelized architecture for real-time autonomous driving. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15449-15458, 2024. 2, 7
|
| 336 |
+
[31] Zhou Xian, Nikolaos Gkanatsios, Theophile Gervet, Tsung-Wei Ke, and Katerina Fragkiadaki. Chaineddiffuser: Unifying trajectory diffusion and keypose prediction for robotic manipulation. In Proceedings of The 7th Conference on Robot Learning, pages 2323–2339. PMLR, 2023. 3
|
| 337 |
+
[32] Brian Yang, Huangyuan Su, Nikolaos Gkanatsios, Tsung-Wei Ke, Ayush Jain, Jeff Schneider, and Katerina Fragkiadaki. Diffusion-es: Gradient-free planning with diffusion for autonomous driving and zero-shot instruction following. arXiv preprint arXiv:2402.06559, 2024. 2, 3, 4, 6
|
| 338 |
+
[33] Tengju* Ye, Wei* Jing, Chunyong Hu, Shikun Huang, Lingping Gao, Fangzhen Li, Jingke Wang, Ke Guo, Wencong Xiao, Weibo Mao, Hang Zheng, Kun Li, Junbo Chen, and Kaicheng Yu. Fusionad: Multi-modality fusion for prediction and planning tasks of autonomous driving. 2023. *Equal Contribution.* 2
|
| 339 |
+
[34] Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Ben Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, Congcong Li, and Dragomir Anguelov. Tnt: Target-driven trajectory prediction. In Proceedings of the 2020 Conference on Robot Learning, pages 895-904. PMLR, 2021. 4
|
| 340 |
+
[35] Wenzhao Zheng, Ruiqi Song, Xianda Guo, Chenming Zhang, and Long Chen. Genad: Generative end-to-end autonomous driving. arXiv preprint arXiv: 2402.11502, 2024. 2
|
| 341 |
+
[36] Yupeng Zheng, Zhongpu Xia, Qichao Zhang, Teng Zhang, Ben Lu, Xiaochuang Huo, Chao Han, Yixian Li, Mengjie Yu, Bu Jin, Pengxuan Yang, Yuhang Zheng, Haifeng Yuan, Ke Jiang, Peng Jia, Xianpeng Lang, and Dongbin Zhao. Pre-
|
| 342 |
+
|
| 343 |
+
liminary investigation into data scaling laws for imitation learning-based end-to-end autonomous driving, 2024. 8
|
| 344 |
+
|
| 345 |
+

|
| 346 |
+
|
| 347 |
+

|
| 348 |
+
Figure 5. Visualization of Trajectories. $\times$ indicates that the trajectory results in a collision or goes beyond the drivable area, while $\checkmark$ represents a safe trajectory. The orange points are generated by the Goal Constructor, while the blue and yellow points correspond to samples from the vocabulary. The results highlight that GoalFlow generates higher-quality trajectories compared to the other two methods.
|
| 349 |
+
Figure 6. Visualization of the goal point distribution. The $\hat{\delta}_i^{dac}$ score indicates whether a point is within the drivable area, while the $\hat{\delta}_i^{dis}$ score reflects the distance relationship between the point and the goal. The final score $\hat{\delta}_i^{final}$ is a fusion of the $\hat{\delta}_i^{dac}$ and $\hat{\delta}_i^{dis}$ scores, where points with higher brightness represent higher scores.
|
| 350 |
+
|
| 351 |
+

|
| 352 |
+
|
| 353 |
+

|
| 354 |
+
|
| 355 |
+

|
| 356 |
+
|
| 357 |
+

|
| 358 |
+
|
| 359 |
+

|
| 360 |
+
|
| 361 |
+

|
| 362 |
+
|
| 363 |
+

|
| 364 |
+
|
| 365 |
+

|
| 366 |
+
|
| 367 |
+

|
| 368 |
+
|
| 369 |
+

|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
|
| 373 |
+

|
| 374 |
+
|
| 375 |
+

|
| 376 |
+
|
| 377 |
+

|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
|
| 381 |
+

|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
|
| 385 |
+

|
| 386 |
+
|
| 387 |
+

|
| 388 |
+
|
| 389 |
+

|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
Figure 7. Visualization of trajectories. We visualize four scenarios: going straight, turning left, turning right, and yielding. For each scenario, 128 trajectories were generated using GoalFlow.
|
2503.05xxx/2503.05689/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:30c9db6441511f1b347a764e601023bf3de023d1e8d11a8799de53b7b77c5821
|
| 3 |
+
size 1139742
|
2503.05xxx/2503.05689/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2506.07xxx/2506.07927/1cd48b0e-cb03-447a-8c00-32dbcc095fbe_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:91898f00aa804daee78f9aac32eeb7581e3efe145c7332bee201820b696137fb
|
| 3 |
+
size 2564782
|
2506.07xxx/2506.07927/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2506.07xxx/2506.07927/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6b65f5d83a2b4c27f6b598ba014e71878055137298fb16b64e0198f19d87198d
|
| 3 |
+
size 2080080
|
2506.07xxx/2506.07927/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.02xxx/2512.02792/6a4b067c-0d55-4fdc-babe-b0b797c0a31d_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:dc95768ea4bb2bd5a1e8a2152da95ef87ac9cc171814683ebe09f28a9714d9a7
|
| 3 |
+
size 14705073
|
2512.02xxx/2512.02792/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.02xxx/2512.02792/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c0f4a779d58fe206f5230bf4d6ed60dbf9c577cbad81324b9de2444e11a1e972
|
| 3 |
+
size 1328364
|
2512.02xxx/2512.02792/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2512.06xxx/2512.06357/540823da-3706-4f34-ab46-894f42e1d382_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c531b178ec4059d8b7fd927f42e33982f60c1e67bd8b1d57611d91ffc48066f3
|
| 3 |
+
size 5800601
|
2512.06xxx/2512.06357/full.md
ADDED
|
@@ -0,0 +1,450 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Proportional integral derivative booster for neural networks-based time-series prediction: Case of water demand prediction
|
| 2 |
+
|
| 3 |
+
Tony Salloom $^{a,b}$ , Okyay Kaynak $^{a,b,c}$ , Xinbo Yu $^{b}$ , Wei He $^{a,b,*}$
|
| 4 |
+
|
| 5 |
+
$^{a}$ School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
|
| 6 |
+
$^{b}$ Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing 100083, China
|
| 7 |
+
$^{c}$ Bogazici University, Istanbul Turkey
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
Multi-step time-series prediction is an essential supportive step for decision-makers in several industrial areas. Artificial intelligence techniques, which use a neural network component in various forms, have recently frequently been used to accomplish this step. However, the complexity of the neural network structure still stands as a critical obstacle to prediction accuracy. In this paper, a method inspired by the proportional-integral-derivative (PID) control approach is investigated to enhance the performance of neural network models used for multi-step ahead prediction of periodic time-series information while maintaining a negligible impact on the complexity of the system. The PID-based method is applied to the predicted value at each time step to bring that value closer to the real value. The water demand forecasting problem is considered as a case study, where two deep neural network models from the literature are used to prove the effectiveness of the proposed boosting method. Furthermore, to prove the applicability of this PID-based booster to other types of periodic time-series prediction problems, it is applied to enhance the accuracy of a neural network model used for multi-step forecasting of hourly energy consumption. The comparison between the results of the original prediction models and the results after using the proposed technique demonstrates the superiority of the proposed method in terms of prediction accuracy and system complexity.
|
| 12 |
+
|
| 13 |
+
Keywords: PID control, neural networks, time-series forecasting, water demand prediction
|
| 14 |
+
|
| 15 |
+
# 1. Introduction
|
| 16 |
+
|
| 17 |
+
In recent years, the use of artificial intelligence techniques in the form of machine learning (ML) and deep learning (DL) has swept through the vast majority of research fields that may come to mind. Their superiority over traditional approaches has been confirmed in several comparative studies in the literature, as in [1, 2, 3]. The basic component in such approaches is a neural network (NN). In numerous studies, the abilities of NNs are exploited in a number of ways. To mention a few, in the field of robotics, researchers use the approximation ability of NNs to compensate for uncertain information in the dynamics of the robot, as in [4, 5, 6]. In computer vision, NNs are used for object detection, such as drug bills [7], fabric defects [8], etc. The feature-extraction ability is used for classification and prediction, where Barchi et al. [9] prove the efficiency of deep convolutional NNs in source code classification, Song et al. [10] use the extreme learning machine for fault detection and classification, and Capizzi et al. [11] build a spiking neural network for long-term prediction of biogas production. A deep belief network is used for PM2.5 concentration prediction in Beijing [12].
|
| 18 |
+
|
| 19 |
+
Time series prediction is one of the most illustrious applications of neural networks (NNs), especially after the rise of the memory power provided by long short-term memory (LSTM) and gated recurrent unit (GRU) cells. This research is concerned with NN models that are meant to forecast periodic time-series information. Periodicity is a natural phenomenon that characterizes many real-life events. Hourly water and power consumption, daily streamflow, hourly average temperature, hourly pollution rate in a city, and rain rate are but a few examples of periodic time-series information. Prediction of time-series information enables planning in several fields of economic and industrial activity [13]. In particular, multi-step prediction is commonly met in real-life scenarios, where planners need to predict future observations based on a given sequence of historical information [14]. Several papers that propose different NN-based methods for forecasting periodic time-series data, considering both single-step and multi-step scenarios, can be found in the literature. For example, Liu et al. [15] propose a novel NN model for time-series forecasting based on a dual-stage two-phase model and a temporal-attention recurrent NN, and prove its applicability in the fields of energy, finance, environment, and medicine. Wang et al. [16] and Safari et al. [17] predict one-step short-term wind power intervals based on GRU and decomposition approaches. In [18], Khodayar et al. provide a wind speed prediction method based on a rough deep NN. In [19], GRU and $K$ -means classification methods are employed for fore-
|
| 20 |
+
|
| 21 |
+
casting quarterly water demand. On the other hand, the outputs of two LSTM networks are integrated to forecast the daily water demand in [20].
|
| 22 |
+
|
| 23 |
+
Two strategies are used for multi-step time-series prediction [21, 22]: (i) The multi-output strategy, also called the direct strategy in [23] and [24], where a multi-input-multi-output model is built to predict several future time-steps at one go based on a sequence of historical data as input. This strategy is used in [25], where a novel approach is introduced to predict stocks based on fuzzy aggregation models with modular NNs. It is also considered in [26] for multi-step stock prediction, where impressive research is conducted to predict Dow Jones stock prices based on ensembles of adaptive neuro-fuzzy inference system models. (ii) The iterative strategy, where the model is designed for one time-step and is then iterated to predict the next time-step. In this case, the prior time-step is included in the input of the prediction of the next time-step. A problem arises when the iterative technique is used for multi-step ahead prediction, where the values of $n$ consecutive periods need to be predicted while the dataset is yet to be updated. The reliance on forecasted values to predict the next values poses the problem of error accumulation [27]; the error increases as more predicted values are used. Some papers in the literature attempt to solve this problem, where some authors suggest using different models for different time-steps to reduce the dependency on forecasted values [28]. Salloom et al. [19] apply the $K$ -means method to find a relationship between the data at the same time step over previous days and reduce the effect of the predicted value in the input of the next step. Ye et al. [29] suggest a multi-task-learning algorithm that handles forecasting different horizons as different tasks. Although this kind of solution provides a high level of accuracy, the approach requires extremely high computational time to configure the systems and a large memory to save the system parameters, which hinders system maintenance and update processes. On the other hand, some solutions for particular problems are proposed: Sardinha-Lourenco et al. [30] propose a parallel adaptive weighting strategy to enhance the performance of time-series water consumption prediction, while other works propose bio-inspired algorithms that use a fuzzy integrator to predict time-series information. Other works [31, 32] suggest the use of a support vector machine to enhance the performance of the LSTM-based NN model. In [33], the authors propose a hybrid system consisting of wavelet packet decomposition and Elman neural networks for multi-step ahead wind speed forecasting. Although the authors of these works stated that their strategies have a reasonable computational load, they did not consider the complexity of the system. Thus, the tradeoff between the complexity of
|
| 24 |
+
|
| 25 |
+
the NN models and the accuracy is the most critical issue that confronts the NN designer. The higher the complexity, the larger the storage required to save the model and the heavier the computational load. In fact, high complexity is a natural result of the predominant ideology of building NN models from sub-modules [34]. The majority of researchers resort to using a number of simple NN modules to perform several operations on the available data, aiming to extract useful features and highlight the underlying relationships within the data, as done in [35, 36, 37, 38]. On the other hand, simple NN models cannot handle the multiple relationships that may be present within the data. In view of the characteristics of each problem, there is no general solution that fits all kinds of NN models. The majority of the solutions proposed in the literature are designed for a specific class of NN models or a particular problem. This research is concerned with multi-step ahead forecasting of periodic time-series information based on NNs.
|
| 26 |
+
|
| 27 |
+
It stands to reason that improving the accuracy without modifying the structure of the NN gives an additional advantage to the model by reducing the complexity as well as the computational load. That is the exact goal of this paper. PID control is a well-known approach in the industrial arena. It has been applied to controlling the output of a system over time, which can be interpreted as the control of time-series events. A PID controller is characterized by a small number of tunable parameters compared with the enormous number of parameters in machine learning methods. This feature can be exploited to improve the performance of learning systems with a near-zero impact on the complexity of the system and only a small extra computational load.
|
| 28 |
+
|
| 29 |
+
In this paper, we propose a novel technique inspired by the well-known PID control concept to control the output of NN-based systems, which are designed for multi-step ahead forecasting of periodic time-series information, particularly systems that depend on the iterative strategy to achieve multi-step ahead prediction. This method could be used as a supplementary method to support the NN model. The advantage of this method over the other methods is the negligible number of tunable parameters, where it adds only three tunable variables to the system.
|
| 30 |
+
|
| 31 |
+
This method is applied to two deep learning models proposed in [2] and [19]. These models are used for multi-step prediction of water demand. The results are compared with the results of the same models reinforced with supplementary processes to handle the accumulative error. Moreover, the PID-based method is applied to an LSTM-based model that is used for power consumption prediction. The performance of the system including
|
| 32 |
+
|
| 33 |
+
the PID-based method is compared with that of the same system without including the PID-based method. The contribution of this paper can be summarized as follows:
|
| 34 |
+
|
| 35 |
+
- A novel method inspired by the PID control approach is designed that is able to improve the accuracy of multi-step ahead time-series forecasting systems with an insignificant impact on system complexity and computational load.
- The designed method is applied to two water demand forecasting systems and another system for hourly forecasting of energy consumption. Its efficiency is compared with other methods used to enhance the accuracy of the same systems.
| 36 |
+
- The designed method is applied to two water demand forecasting systems and another system for hourly forecasting of energy consumption. The efficiency is compared with another methods used to enhance the accuracy of the same systems.
|
| 37 |
+
|
| 38 |
+
The rest of this paper is organized as follows. Section 2 includes the methodology of designing and integrating the proposed method into the prediction system, while Section 3 includes the details of the case study of water demand prediction. The applicability of the proposed method to other time-series forecasting models is emphasized in Section 4. Comparison results and discussion are given in Section 5. The conclusion of this paper is stated in Section 6.
|
| 39 |
+
|
| 40 |
+
# 2. Methodology
|
| 41 |
+
|
| 42 |
+
This section includes the methodology of designing the PID-based method and the proper way to apply it to the system. Firstly, the design starts with the general formula of the PID control law used in the industrial field; each term of this formula is adapted individually according to the nature of the NN-based prediction system. The obtained PID-based method is able to enhance any NN-based prediction system meant for multi-step ahead prediction of periodic data. Secondly, the proper way to integrate the proposed method into the prediction system is discussed in detail.
|
| 43 |
+
|
| 44 |
+
# 2.1. The proposed method
|
| 45 |
+
|
| 46 |
+
The PID approach is used in industrial control systems to correct the system's output so that it follows a desired target or trajectory [39]. Assuming that the system is an NN model used to forecast $n$ time-steps ahead of periodic data with a period of $T$, the prediction results need to be controlled to follow the trajectory formed by the actual values of the data over the $n$ steps. Let $PV(t)$ and $RV(t)$ be the predicted value and the actual water demand value for time-step $t$, respectively. The controller aims to make $PV(t)$ follow the trajectory $RV(t)$
|
| 47 |
+
|
| 48 |
+
where $t = 1,2,\dots,n$ . The NN-based prediction system is a discrete system. The following equation represents the general equation of a PID controller for such a system [40, 41]:
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
u(t) = -K_{p}\, e(t) - K_{i} \sum_{x=0}^{t} e(x) - K_{d}\, \Delta e(t) \tag{1}
|
| 52 |
+
$$
|
| 53 |
+
|
| 54 |
+
where $e(t)$ represents the system error, $K_{p}$ is a positive number representing the proportional gain, $K_{i}$ is a positive number representing the integral gain, and $K_{d}$ is a positive number representing the derivative gain. The final output $P(t)$ of the system becomes:
|
| 55 |
+
|
| 56 |
+
$$
|
| 57 |
+
P(t) = PV(t) + u(t) \tag{2}
|
| 58 |
+
$$
|
| 59 |
+
|
| 60 |
+
The error of the system is as follows:
|
| 61 |
+
|
| 62 |
+
$$
|
| 63 |
+
e(t) = P(t) - RV(t) \tag{3}
|
| 64 |
+
$$
|
| 65 |
+
|
| 66 |
+
Since the feedback is not available until the end of the prediction process of $n$ steps, the error of period $t - T$ is used. The integral part changes to the summation of the prediction errors of the time-steps before $t - T$. In order to keep the summation part bounded, it is reset after the end of each prediction round; thus the errors are summed starting from the first step of the previous round, $(i - 1)T$, where $i$ is the number of the current prediction round. The derivative part at time-step $t$ becomes the difference between the error at step $t - T$ and the error at step $t - T - 1$. Equation (1) becomes:
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
u(t) = -K_{p}\, e(t - T) - K_{i} \sum_{x=(i-1)T}^{t-T} e(x) - K_{d} \left(e(t-T) - e(t-T-1)\right) \tag{4}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
# 2.2. The integration of the PID-based method into the prediction system
|
| 73 |
+
|
| 74 |
+
In this work, the prediction system is an NN model that is used for multi-step prediction. The NN model performs $n$ time-steps of prediction using the iterative strategy: it uses the predicted value at each time-step to predict the value of the next time-step. The PID-based method is integrated into the system in such a way as to enhance the accuracy of the final predicted value. The system is configured and used as follows:
|
| 75 |
+
|
| 76 |
+
Firstly, the NN model is trained on the available data for one-step prediction until it gives a reasonable accuracy. The PID-based method is not used during training.
|
| 77 |
+
|
| 78 |
+
Secondly, during prediction, the NN model achieves the prediction step by step; the
|
| 79 |
+
|
| 80 |
+
output of the NN at every time-step is $PV(t)$. The PID-based method designed in equations (2), (3), and (4) is applied to correct $PV(t)$. The resulting value is the final forecasted value $P(t)$ for time-step $t$. $P(t)$ is added to the database to be involved in the prediction of the next time-step $t + 1$ if necessary. When the real values become available, the current round's prediction errors are calculated to be used in the next round. Fig. 1 illustrates the system flow graph during prediction.
|
| 81 |
+
|
| 82 |
+
The error used in the first round of prediction should be initialized. There are three possible strategies to achieve that.
|
| 83 |
+
|
| 84 |
+
i. Set the initial error to zero for all $n$ steps. In this case, $u = 0$ and the controller becomes useless when applying it to the first $n$ steps.
|
| 85 |
+
ii. Set the initial error to the MAE obtained during training. In this case, the integral part increases rapidly due to the absence of negative errors, and this growth may lead to instability.
|
| 86 |
+
iii. Use the available data to initialize the error: the NN model can be run to predict one round of $n$ steps on the available data. In this round, the initial error for the first prediction step is set to zero, and the PID-based method is run over the resulting values. When the system is run for the second round of prediction, the errors from the first round can be used. Moreover, during the initialization process, the error of the previous prediction step $t - 1$ in the first round can be used instead of the error at $t - T$ to guarantee faster convergence. Thus, equation (4) takes the following form during the initialization process:
|
| 87 |
+
|
| 88 |
+
$$
|
| 89 |
+
u(t) = -K_{p}\, e(t-1) - K_{i} \sum_{x=0}^{t-1} e(x) - K_{d} \left(e(t-1) - e(t-2)\right) \tag{5}
|
| 90 |
+
$$
|
| 91 |
+
|
| 92 |
+
The third strategy is the most appropriate one and is used in this work.
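The sketch below shows how equations (2)-(4) wrap an iterative NN predictor during one prediction round; `nn_model.predict_one` is a placeholder for the one-step forecaster, and the array layout of the stored errors is an assumption made for illustration.

```python
import numpy as np

def predict_round(nn_model, history, prev_errors, Kp, Ki, Kd, n=96):
    """One round of n-step prediction with PID-based correction (eqs. (2)-(4)).

    prev_errors[t] holds e(t - T), the error made at the same step of the
    previous round; the derivative term is set to zero at the first step."""
    preds, err_sum = [], 0.0
    for t in range(n):
        pv = nn_model.predict_one(history)                 # raw NN output PV(t)
        e_T = prev_errors[t]                               # e(t - T)
        e_T1 = prev_errors[t - 1] if t > 0 else e_T        # e(t - T - 1)
        err_sum += e_T                                     # sum since start of round
        u = -Kp * e_T - Ki * err_sum - Kd * (e_T - e_T1)   # eq. (4)
        p = pv + u                                         # eq. (2): corrected value
        preds.append(p)
        history = np.append(history, p)                    # feeds the next step
    return np.array(preds)

def round_errors(preds, real_values):
    """e(t) = P(t) - RV(t) (eq. (3)), computed once the real data arrive."""
    return np.asarray(preds) - np.asarray(real_values)
```

For the first round, `prev_errors` can be initialized by running the same loop once over already available data, using the error of the previous step as in equation (5).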
|
| 93 |
+
|
| 94 |
+

|
| 95 |
+
Figure 1: Prediction process.
|
| 96 |
+
|
| 97 |
+
# 3. Case study: water demand prediction
|
| 98 |
+
|
| 99 |
+
The limits of water resources on our planet become more and more apparent as the world's population grows and per-capita daily use increases. It is, therefore, essential that water distribution be managed very efficiently. This can be achieved effectively when a reasonable estimation of future water demand is available. Water demand forecasting methods can be classified based on the forecasting horizon [42]. The short-term horizon implies forecasting the water demand over short periods that range from 15 minutes to a few days in length [43], while the medium-term horizon means forecasting the water demand for periods with a span of a month to one year. The long-term horizon implies forecasting water demand over a period longer than five years. Different factors influence the prediction for each horizon [44, 45]. Factors such as pricing policy, the economic situation of the area, water conservation policies, population size, weather conditions, and the history of water demand influence the forecasting results of all horizons [46]. In contrast, the history of water demand, the weather conditions, and the specific importance of the day have a considerable impact on the prediction results of the short-term horizon [47].
|
| 100 |
+
|
| 101 |
+
The case addressed in this work is short-term prediction, specifically, daily prediction with a prediction period of 15 minutes. Recently, the applicability of artificial NNs in this field has been investigated widely. The most appropriate structure of an NN model used for water demand forecasting relies on deep learning methods. In this research, the GRUN and DCGRU methods proposed in [2] and [19], respectively, are used to prove the efficiency of the PID-based method. Both works depend on historical data for prediction and design their NN models based on the GRU, but the structure of each model is different. Guo et al. [2] propose a supplementary NN (SPNN) model to overcome the problem of error accumulation, while the DCGRU model includes a pre-processing step of data classification to mitigate this problem. To prove the effectiveness of the proposed method, it is applied to the GRUN model and the overall performance is compared with the performance obtained after applying the SPNN presented in the original work. Moreover, the PID-based method is applied to the DCGRU model and the results are compared with the results obtained by applying the classification step.
|
| 102 |
+
|
| 103 |
+
Hereunder is a description of the prediction machinery including the data, the structure of both NNs, the SPNN model structure, and the application of the PID-based method to the NN prediction model.
|
| 104 |
+
|
| 105 |
+
# 3.1. Water demand data
|
| 106 |
+
|
| 107 |
+
Table 1: Statistics of water demand data.
|
| 108 |
+
|
| 109 |
+
<table><tr><td>Statistics</td><td>DMA1</td><td>DMA2</td></tr><tr><td>Average demand (m3/15 min)</td><td>81.4</td><td>87.0</td></tr><tr><td>Maximum demand (m3/15 min)</td><td>157.0</td><td>149.0</td></tr><tr><td>Minimum demand (m3/15 min)</td><td>30.0</td><td>40.0</td></tr><tr><td>Standard deviation</td><td>24.4</td><td>21.1</td></tr><tr><td>Type of DMA</td><td>Residential</td><td>Industrial</td></tr></table>
|
| 110 |
+
|
| 111 |
+
The water demand data are real data collected from two distinct measurement areas (DMAs) in Changzhou, China. The first area has a population of around 13,000 and a small number of commercial facilities. The second area, DMA2, has a population of 8,500 in addition to 300 factories.
|
| 112 |
+
|
| 113 |
+

|
| 114 |
+
(a)
|
| 115 |
+
Figure 2: Seven days of observations of water demand: (a) for DMA1; (b) for DMA2.
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
(b)
|
| 119 |
+
|
| 120 |
+
Fig. 2 illustrates one-week samples of the two data sets: Fig. 2(a) shows samples from DMA1 and Fig. 2(b) shows samples from DMA2. Table 1 summarizes the statistics of the data. The data were collected over one year in 2016. Officials use an adaptive water management plan that requires information about the water demand every 15 minutes. Thus, the water demand is measured every 15 minutes, and the prediction period is set to the same length. The measured data are transferred to the control room at the end of the day. This raises the need for multi-step ahead prediction to forecast the water demand for 96 successive periods of 15 minutes each, covering a full day.
|
| 121 |
+
|
| 122 |
+
The dataset is divided into three sets, the training set, the validation set, and the testing set. Both the validation set and the testing set contain $10\%$ of the total data each, while the training set contains $80\%$ of the total data.
|
| 123 |
+
|
| 124 |
+
# 3.2. The structure of the GRUN prediction model
|
| 125 |
+
|
| 126 |
+
Table 2: Selected water demand values for each feature.
|
| 127 |
+
|
| 128 |
+
<table><tr><td>Feature</td><td>Selected demand values</td></tr><tr><td>Input sequence 1</td><td>Vt-1, Vt-2, Vt-3, Vt-4, and Vt-5</td></tr><tr><td>Input sequence 2</td><td>Vt-94, Vt-95, Vt-96, Vt-97, and Vt-98</td></tr><tr><td>Input sequence 3</td><td>Vt-190, Vt-191, Vt-192, Vt-193, and Vt-194</td></tr></table>
|
| 129 |
+
|
| 130 |
+
Guo et al. [2] are of the opinion that historical data are sufficient for extracting the pattern of water demand. Disregarding the veracity of this claim, this work is based on the same GRU NN structure, as it clearly shows the problem of error accumulation. The effectiveness of the PID-based method proposed in this work is shown in comparison with the supplementary model used by Guo et al. to enhance the prediction accuracy.
|
| 131 |
+
|
| 132 |
+
The prediction model is built based on the GRU cell. Guo et al. assume that the prediction of the water demand at a period $t$ depends on both the close and the far history of water demand. They choose three data sequences as input for their model, with five elements each. The first sequence represents the water demand in the most recent periods, i.e., the last five periods. The second one represents the water demand in earlier periods; they choose five periods from the previous day. The third sequence represents the distant history; its elements are selected from two days before the day of the period to be predicted. Table 2 shows the elements of each sequence. The structure of the prediction model is shown in Fig. 3. It is composed of three GRU layers, where each layer extracts the sequential relationship within its input. The outputs of the three GRU layers are merged and sent to a block of six dense layers, which work upon the relationship between the three sequences. The output of the last dense layer is one value that represents the desired water demand during the period $t$.
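A functional-API sketch of this topology is given below; the numbers of GRU units and dense-layer widths are assumptions, since the text only fixes the layer types and counts.

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import GRU, Dense, Concatenate

def build_grun(gru_units=32, dense_units=64):
    """Three GRU branches over the three 5-value input sequences, merged and
    passed through a block of six dense layers ending in one output value
    (a sketch, not the exact configuration of [2])."""
    seqs = [Input(shape=(5, 1)) for _ in range(3)]  # three input sequences of Table 2
    feats = [GRU(gru_units)(s) for s in seqs]       # one GRU layer per sequence
    x = Concatenate()(feats)                        # merge the three branches
    for _ in range(5):                              # first five dense layers
        x = Dense(dense_units, activation="relu")(x)
    out = Dense(1)(x)                               # sixth dense layer: predicted demand
    return Model(inputs=seqs, outputs=out)
```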
|
| 133 |
+
|
| 134 |
+
# 3.3. The supplementary model (SPNN)
|
| 135 |
+
|
| 136 |
+
This model is proposed by Guo et al. [2]. The goal is to bring the predicted water demand values closer to the real values. The SPNN model is a shallow NN that consists
|
| 137 |
+
|
| 138 |
+

|
| 139 |
+
Figure 3: The structure of the GRUN model.
|
| 140 |
+
|
| 141 |
+
of an input layer and an output layer with one hidden layer. The input of this model is 96 predicted water demand values, while the output is also 96 water demand values as the correction of the predicted values. The structure of the model is demonstrated in Fig. 4.
|
| 142 |
+
|
| 143 |
+
Regarding the application strategy of this model, firstly, the water demand for 96 periods should be forecasted using the prediction model, then the 96 values are sent as input to the SPNN model. The output values of the SPNN model are the final predicted values that are used in water plant management.
|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
Figure 4: The structure of the SPNN model.
|
| 147 |
+
|
| 148 |
+
# 3.4. The structure of the DCGRU model
|
| 149 |
+
|
| 150 |
+
This model also depends on historical data to predict the demand for the next 15 minutes. The structure of the model includes a layer of 32 GRU units and an output layer with one GRU unit. All data are classified into four classes; then 96 vectors, each containing a water demand value together with its degree of membership in each class, are sent to three fully connected layers to come up with 96 values, which are the input to the GRU layer.
|
| 151 |
+
|
| 152 |
+
The full structure of the model is shown in Fig. 5. The main purpose of the classification step is to reduce the influence of the predicted value by creating a relationship between each value and other previous values. For more details about the DCGRU model, readers could refer to the original work in [19].
|
| 153 |
+
|
| 154 |
+
To evaluate our PID-based method, we apply it to the DCGRU model instead of the classification step. Demand values from $t - 1$ to $t - 96$ are sent to the GRU layer directly, and the PID-based method is then applied to predict 96 steps. The final result is compared with the prediction result of DCGRU including the classification step.
|
| 155 |
+
|
| 156 |
+
# 3.5. The application of the PID-based method for water demand prediction
|
| 157 |
+
|
| 158 |
+
For both the GRUN and DCGRU models, the proposed PID-based method designed in (4) is applied after forecasting each value in the series, and the resulting value is taken as the final prediction result. The value is then added to the data to be used when forecasting the water demand for the next period. The error $e(t)$, the integral of the error, and the derivative of the error are calculated based on that final prediction value.
|
| 159 |
+
|
| 160 |
+

|
| 161 |
+
Figure 5: The structure of the DCGRU model.
|
| 162 |
+
|
| 163 |
+
The critical problem in this method is to find the optimal values of the parameters of the controller, i.e., $K_{p}$, $K_{i}$, and $K_{d}$. In fact, there are several methods to tune such a controller, such as deterministic methods as in [48, 49], evolutionary methods as in [50], or hybrid ones as in [51]. In this work, the experimental approach gives satisfactory results: $K_{p}$ is tuned first, then $K_{i}$ and $K_{d}$. Initially, $K_{p}$ is set to a small positive number, 0.1, and is then increased gradually with an incremental step of 0.1 until reaching the optimal value at which the smallest output error is obtained. Then $K_{i}$ and $K_{d}$ are set to small positive numbers starting with 0.0001 and increased with an incremental step of 0.0001 as well. It has been found that the best control performance is obtained when $K_{p}, K_{i}, K_{d} \in [0,1]$ and $K_{i}, K_{d} < K_{p}$. The obtained values of $K_{p}$, $K_{i}$, and $K_{d}$ used with the GRUN and DCGRU models are listed in Table 4. Although the experimental method of tuning the PID control law does not produce an optimal controller, it nevertheless results in an effective one.
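The incremental search described above can be written as a coordinate-wise grid search; `evaluate_mae(kp, ki, kd)` is a placeholder that runs one round of PID-corrected prediction on validation data and returns the MAE, and the grid upper bounds are assumptions consistent with $K_{p}, K_{i}, K_{d} \in [0,1]$.

```python
import numpy as np

def tune_pid(evaluate_mae):
    """Coordinate-wise experimental tuning of Kp, then Ki, then Kd,
    using the incremental steps described in the text (a sketch)."""
    kp_grid = np.arange(0.1, 1.0 + 1e-9, 0.1)          # step 0.1
    ki_grid = np.arange(0.0001, 0.01 + 1e-9, 0.0001)   # step 0.0001 (assumed upper bound)
    kd_grid = np.arange(0.0001, 0.01 + 1e-9, 0.0001)
    kp = min(kp_grid, key=lambda v: evaluate_mae(v, 0.0, 0.0))
    ki = min(ki_grid, key=lambda v: evaluate_mae(kp, v, 0.0))
    kd = min(kd_grid, key=lambda v: evaluate_mae(kp, ki, v))
    return kp, ki, kd
```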
|
| 164 |
+
|
| 165 |
+
# 4. The applicability of the PID-based method to other time-series forecasting methods
|
| 166 |
+
|
| 167 |
+
As mentioned above, the applicability of the proposed method is guaranteed on time-series data and prediction systems that satisfy the following: (i) the series should be periodic with a period of $T$,
|
| 168 |
+
|
| 169 |
+
to guarantee that the value at time point $t$ is close enough to that at time point $t - iT$, where $i = 1,2,3$; (ii) the original prediction model is an NN model; and (iii) the multi-step prediction strategy used is the iterative strategy. The applicability of the proposed method to other types of prediction models or to different multi-step prediction strategies will be studied in future work.
|
| 170 |
+
|
| 171 |
+
In a distributed environment, where the data are spread across many storage locations, the federated learning approach can be used: several copies of the NN model are trained on several data sets, any merging method can then be used to merge the trained models, and finally the PID-based method is applied to the final model, where any of the training data sets can be used to tune the parameters of the PID law. Further investigation will be conducted in future work.
|
| 172 |
+
|
| 173 |
+
In order to clarify the applicability of the proposed method, we apply it to enhance the prediction results of the CNN-LSTM model in [52], which is used for multi-step prediction of hourly electricity consumption.
|
| 174 |
+
|
| 175 |
+

|
| 176 |
+
# 4.1. Electricity consumption data
|
| 177 |
+
(a)
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
(b)
|
| 181 |
+
Figure 6: Seven days of observations of electricity consumption in megawatt $(\mathrm{mw / h})$ : (a) for DAYTON data set; (b) for FirstEnergy data set.
|
| 182 |
+
|
| 183 |
+
The hourly consumption data sets are taken from PJM Interconnection LLC, a regional transmission organization in the United States that provides data from thirteen companies serving different states in the USA. In this paper, only two data sets are chosen to test our method. The first is the DAYTON data set, which contains the energy demand
|
| 184 |
+
|
| 185 |
+
Table 3: Statistics of energy consumption data.
|
| 186 |
+
|
| 187 |
+
<table><tr><td>Statistics</td><td>DAYTON</td><td>FirstEnergy</td></tr><tr><td>Number of customers</td><td>500,000</td><td>6,000,000</td></tr><tr><td>Number of readings</td><td>121250</td><td>62850</td></tr><tr><td>Average demand (mw/h)</td><td>2037.85</td><td>11701.68</td></tr><tr><td>Maximum demand (mw/h)</td><td>3764</td><td>23631</td></tr><tr><td>Minimum demand (mw/h)</td><td>982</td><td>7003</td></tr><tr><td>Standard deviation</td><td>393.40</td><td>2371.48</td></tr></table>
|
| 188 |
+
|
| 189 |
+
mw = megawatt
|
| 190 |
+
|
| 191 |
+
supplied by the Dayton Power and Light company, which serves a customer base of 500 thousand people in the area of West Central Ohio, including the area around Dayton, Ohio. The second is the FirstEnergy data set, which contains the power demand supplied by the FirstEnergy company; this company serves about 6 million clients in the areas of Ohio, Pennsylvania, West Virginia, Virginia, Maryland, New Jersey, and New York. The DAYTON data set contains observations from 1-10-2004 to 1-2-2018, while the FirstEnergy data set contains hourly observations from 1-6-2011 to 1-8-2018. Fig. 6 shows one-week samples of each data set: Fig. 6(a) shows the DAYTON data set, while Fig. 6(b) shows the FirstEnergy data set. Table 3 lists some statistical information about each data set. These data sets are available for public use at https://www.kaggle.com/robikscube/hourly-energy-consumption.
|
| 192 |
+
|
| 193 |
+
Both data sets are divided into three sets: $70\%$ is used for training and $10\%$ for validation, while the rest is used for testing.
|
| 194 |
+
|
| 195 |
+
# 4.2. The structure of the CNN-LSTM model
|
| 196 |
+
|
| 197 |
+
This model is proposed in [52] for multi-step prediction of hourly power consumption. The input of the model is a series of 24 consecutive observation values. These values are passed to two convolution layers that extract further features from the data; the resulting values form a sequence that is handled by an LSTM layer. The output layer of the model is a fully connected layer with a linear activation function.
|
| 198 |
+
|
| 199 |
+
Fig. 7 illustrates the structure of the CNN-LSTM model. The activation function used in the convolutional layers is a ReLU, while for the LSTM layer, the authors use the default functions.
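A minimal Keras sketch of this architecture is shown below; the filter counts, kernel sizes, and LSTM width are assumptions, as the text only fixes the layer types and activations.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, LSTM, Dense

def build_cnn_lstm(seq_len=24, filters=64, lstm_units=64):
    """Two ReLU convolution layers over a 24-value input series, an LSTM layer,
    and a linear fully connected output (a sketch, not the exact setup of [52])."""
    model = Sequential([
        Conv1D(filters, kernel_size=3, padding="same", activation="relu",
               input_shape=(seq_len, 1)),           # first convolution layer
        Conv1D(filters, kernel_size=3, padding="same", activation="relu"),
        LSTM(lstm_units),                           # sequence handled by the LSTM
        Dense(1, activation="linear"),              # fully connected output layer
    ])
    model.compile(optimizer="adam", loss="mae")
    return model
```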
|
| 200 |
+
|
| 201 |
+

|
| 202 |
+
Figure 7: The structure of the CNN-LSTM model.
|
| 203 |
+
|
| 204 |
+
# 4.3. The application of the PID-based method to power prediction
|
| 205 |
+
|
| 206 |
+
The CNN-LSTM model is trained on the training sets of the DAYTON and FirstEnergy data sets separately until reaching a prediction accuracy similar to or higher than that obtained in the original work [52]. As for water demand prediction, the PID booster is applied in the prediction phase. Firstly, the power consumption is predicted by the CNN-LSTM model; then the controller in equation (4) is applied, and its output is taken as the final output of the system. This final value is used in the input of the next prediction step. This process is repeated iteratively until 24 steps have been predicted. When real data become available, the prediction errors are calculated to be used in the next prediction round, which also contains 24 steps. The parameters of the controller are tuned using grid search. The values of $K_{p}$, $K_{i}$, and $K_{d}$ are listed in Table 4.
|
| 207 |
+
|
| 208 |
+
# 5. Results and discussion
|
| 209 |
+
|
| 210 |
+
# 5.1. Evaluation criteria
|
| 211 |
+
|
| 212 |
+
The effectiveness of the proposed PID-based method is investigated based on the prediction accuracy and the complexity of the forecasting system. Regarding water demand forecasting, two cases are compared. The first one is when the proposed PID-based method
|
| 213 |
+
|
| 214 |
+
Table 4: The optimal values of the controller parameters used with each model.
|
| 215 |
+
|
| 216 |
+
<table><tr><td>Data set</td><td>NN Model</td><td>Kp</td><td>Ki</td><td>Kd</td></tr><tr><td>DMA 1</td><td>GRUN</td><td>0.4</td><td>0.01</td><td>0.001</td></tr><tr><td>DMA 2</td><td>GRUN</td><td>0.5</td><td>0.001</td><td>0.001</td></tr><tr><td>DMA 1</td><td>DCGRU</td><td>0.2</td><td>0.01</td><td>0.001</td></tr><tr><td>DMA 2</td><td>DCGRU</td><td>0.22</td><td>0.001</td><td>0.001</td></tr><tr><td>DAYTON</td><td>CNN-LSTM</td><td>0.1</td><td>0.0001</td><td>0.001</td></tr><tr><td>FirstEnergy</td><td>CNN-LSTM</td><td>0.13</td><td>0.001</td><td>0.001</td></tr></table>
|
| 217 |
+
|
| 218 |
+
is applied to the GRUN model instead of the SPNN, and to the DCGRU model instead of the classification method. The second case is when the SPNN and the classification step are applied to the GRUN model and the DCGRU model, respectively.
|
| 219 |
+
|
| 220 |
+
The prediction accuracy is measured by the mean absolute error (MAE) and the standard deviation (Std) of the error of the final result, in addition to the mean absolute percentage error (MAPE); together, these allow the distribution of the error to be estimated.
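A direct sketch of these three metrics, assuming NumPy arrays of real and predicted values:

```python
import numpy as np

def accuracy_metrics(y_true, y_pred):
    """MAE, standard deviation of the error, and MAPE (in percent)."""
    y_true = np.asarray(y_true, dtype=float)
    err = np.asarray(y_pred, dtype=float) - y_true
    mae = np.mean(np.abs(err))
    std = np.std(err)
    mape = 100.0 * np.mean(np.abs(err) / np.abs(y_true))
    return mae, std, mape
```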
|
| 221 |
+
|
| 222 |
+
System complexity is measured based on (i) the computational load and (ii) the Akaike Information Criterion (AIC).
|
| 223 |
+
|
| 224 |
+
(i) The computational load is evaluated based on the execution time (ExT), including the prediction time and the data reshaping time. The training time is not measured since the method proposed in this work does not need training. The ExT is measured for the process of predicting 96 steps; the measurement is repeated 1000 times and the average is recorded. All processes are executed on an ASUS laptop with a Core i7 processor, 16 GB of RAM, a 1 TB hard disk, and a 256 GB SSD. The models are built with the Keras library and the TensorFlow backend using the Python 3.6 programming language.
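A hedged sketch of such a timing measurement; `predict_96_steps` is a placeholder for one complete prediction round, including the data reshaping.

```python
import time

def average_execution_time_ms(predict_96_steps, repeats=1000):
    """Average wall-clock time of one 96-step prediction round, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        predict_96_steps()
    return 1000.0 * (time.perf_counter() - start) / repeats
```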
|
| 225 |
+
(ii) The AIC is a reliable tool for selecting the best among several competing statistical models; it depends on the number of variables and on the output error of the models when they are applied to the same dataset [53, 54]. The smaller the AIC, the better. The AIC of an NN model can be calculated with the following equation [55]:
|
| 226 |
+
|
| 227 |
+
$$
|
| 228 |
+
AIC = n \ln \left(\frac{RSS}{n}\right) + 2w + \frac{2w(w + 1)}{n - w - 1} \tag{6}
|
| 229 |
+
$$
|
| 230 |
+
|
| 231 |
+
where $w$ is the number of variables of the NN model, $n$ is the size of the dataset, and $RSS$ is the sum of the squared forecasting errors.
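Equation (6) translates directly into code; the numbers in the example call are hypothetical.

```python
import math

def corrected_aic(rss, n, w):
    """Corrected AIC of Equation (6): n*ln(RSS/n) + 2w + 2w(w+1)/(n-w-1)."""
    return n * math.log(rss / n) + 2 * w + (2 * w * (w + 1)) / (n - w - 1)

# Example with hypothetical values: corrected_aic(rss=2.5e5, n=8500, w=350)
```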
|
| 232 |
+
|
| 233 |
+
Table 5: Evaluation results of water demand prediction.
|
| 234 |
+
|
| 235 |
+
<table><tr><td>Feature</td><td>DMA</td><td>GRUN</td><td>GRUN + SPNN</td><td>GRUN + PID</td><td>DCGRU</td><td>DCGRU + classification</td><td>DCGRU + PID</td></tr><tr><td rowspan="2">ExT (ms)</td><td>1</td><td>78.516</td><td>78.964</td><td>79.154</td><td>735</td><td>757</td><td>773</td></tr><tr><td>2</td><td>78.516</td><td>78.964</td><td>79.154</td><td>735</td><td>757</td><td>773</td></tr><tr><td rowspan="2">AIC</td><td>1</td><td>264606</td><td>538116</td><td>240945</td><td>51915</td><td>24325</td><td>25342</td></tr><tr><td>2</td><td>263628</td><td>527649</td><td>242912</td><td>74956</td><td>30454</td><td>28344</td></tr><tr><td rowspan="2">MAE (m³/15 min)</td><td>1</td><td>3.82</td><td>3.1</td><td>2.76</td><td>3.92</td><td>1.23</td><td>1.92</td></tr><tr><td>2</td><td>4.16</td><td>2.52</td><td>2.8</td><td>5.9</td><td>1.18</td><td>2.19</td></tr><tr><td rowspan="2">Std (m³/15 min)</td><td>1</td><td>3.32</td><td>2.78</td><td>2.74</td><td>2.72</td><td>1.92</td><td>1.8</td></tr><tr><td>2</td><td>3.56</td><td>2.34</td><td>2.87</td><td>2.9</td><td>1.9</td><td>1.91</td></tr><tr><td rowspan="2">MAPE (%)</td><td>1</td><td>5.14</td><td>3.79</td><td>3.64</td><td>5.32</td><td>1.52</td><td>2.02</td></tr><tr><td>2</td><td>4.98</td><td>2.83</td><td>3.12</td><td>6.96</td><td>1.43</td><td>2.41</td></tr></table>
|
| 236 |
+
|
| 237 |
+

|
| 238 |
+
Figure 8: The output and the error of the prediction system when using different methods to predict water demand in DMA1.
|
| 239 |
+
|
| 240 |
+
# 5.2. Evaluation results of water demand prediction
|
| 241 |
+
|
| 242 |
+
The numerical evaluation results of both GRUN and DCGRU are stated in Table 5.
|
| 243 |
+
|
| 244 |
+

|
| 245 |
+
Figure 9: The output and the error of the prediction system when using different methods to predict water demand in DMA2.
|
| 246 |
+
|
| 247 |
+
# 5.2.1. GRUN prediction system
|
| 248 |
+
|
| 249 |
+
The AIC of the original prediction NN model is 264606 and 263628 for DMA1 and DMA2, respectively. Applying the SPNN model roughly doubles the AIC of the system, to 538116 and 527649 for DMA1 and DMA2, respectively. In contrast, applying the proposed method reduces the AIC of the system to 240945 and 242912 for DMA1 and DMA2, respectively.
|
| 250 |
+
|
| 251 |
+
Fig. 8 and Fig. 9 illustrate the prediction result and error for two days from the testing dataset for DMA1 and DMA2, respectively. The MAE of the original prediction model is $3.82\mathrm{m}^3 /15\mathrm{min}$ for DMA1 and $4.16\mathrm{m}^3 /15\mathrm{min}$ for DMA2. Both the SPNN and the PID-based methods reduce the prediction error effectively. For DMA1, the PID-based method reduces the error to $2.76\mathrm{m}^3 /15\mathrm{min}$, which is slightly less than the $3.1\mathrm{m}^3 /15\mathrm{min}$ achieved by the SPNN model. For DMA2, in contrast, the PID-based method reduces the MAE to $2.8\mathrm{m}^3 /15\mathrm{min}$, which is higher than the $2.52\mathrm{m}^3 /15\mathrm{min}$ achieved by the SPNN model.
|
| 252 |
+
|
| 253 |
+
The histograms of the absolute error when using the SPNN model and the PID-based method on the testing data are shown in Fig. 10 for DMA1 and Fig. 11 for DMA2; they give a clearer view of the error distribution.
|
| 254 |
+
|
| 255 |
+

|
| 256 |
+
Figure 10: The histogram of the absolute error of the prediction system when using different methods to predict water demand in DMA1.
|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
Figure 11: The histogram of the absolute error of the prediction system when using different methods to predict water demand in DMA2.
|
| 260 |
+
|
| 261 |
+
# 5.2.2. DCGRU prediction system
|
| 262 |
+
|
| 263 |
+
The DCGRU model without the classification step or the PID represents the best model in terms of the number of tunable parameters; however, it achieves the highest MAE, which reaches $3.92\mathrm{m}^3 /15$ min and $5.9\mathrm{m}^3 /15$ min for DMA1 and DMA2, respectively. Thus, this model attains the
|
| 264 |
+
|
| 265 |
+
highest AIC, 51915 for DMA1 and 74956 for DMA2. On the other hand, the classification step achieves the best accuracy, with an MAE of $1.18\mathrm{m}^3 /15\mathrm{min}$ and $1.38\mathrm{m}^3 /15\mathrm{min}$ for DMA1 and DMA2, respectively. Nevertheless, applying the PID-based method to the DCGRU model also reduces the MAE of the prediction system effectively and reaches an AIC very close to that obtained by the system with the classification step, as shown in Table 5. Fig. 12 shows the prediction results for two days of the water demand data sets when applying the PID-based method and the classification step separately.
|
| 266 |
+
|
| 267 |
+

|
| 268 |
+
|
| 269 |
+

|
| 270 |
+
|
| 271 |
+

|
| 272 |
+
(a)
|
| 273 |
+
|
| 274 |
+

|
| 275 |
+
(b)
|
| 276 |
+
Figure 12: The output and the error of the DCGRU prediction system: (a) for DMA1 data set; (b) for DMA2 data set.
|
| 277 |
+
|
| 278 |
+
# 5.3. Evaluation results of electricity consumption prediction
|
| 279 |
+
|
| 280 |
+
Table 6 summarizes the numerical results of the power prediction system. Fig. 13 shows the predicted values and the error for two days of data. The AIC of the original system is 298070 and 154392 for the DAYTON and FirstEnergy data sets, respectively; after the application of the PID-based method, these values decline to 188848 and 143488. The proposed method also reduces the MAE and MAPE of the prediction: without the PID-based method, the MAE values are $109.27\mathrm{MW}$ and $198.5\mathrm{MW}$ for the DAYTON and FirstEnergy data sets, respectively, while with it they drop to $71.07\mathrm{MW}$ and $132.4\mathrm{MW}$.
|
| 281 |
+
|
| 282 |
+
As with water demand prediction, the ExT for power consumption prediction does not vary with the data set. Logically, involving the PID-based method in the prediction process increases
|
| 283 |
+
|
| 284 |
+
the execution time: the ExT obtained before involving the PID is 170.21 ms, while it rises to 315.83 ms when the PID-based method is used.
|
| 285 |
+
|
| 286 |
+
Table 6: Evaluation results of electricity consumption prediction.
|
| 287 |
+
|
| 288 |
+
<table><tr><td>Feature</td><td>Data set</td><td>CNN-LSTM</td><td>CNN-LSTM with PID</td></tr><tr><td rowspan="2">ExT (ms)</td><td>DAYTON</td><td>170.21</td><td>315.83</td></tr><tr><td>FirstEnergy</td><td>170.21</td><td>315.83</td></tr><tr><td rowspan="2">AIC</td><td>DAYTON</td><td>298070</td><td>188848</td></tr><tr><td>FirstEnergy</td><td>154392</td><td>143488</td></tr><tr><td rowspan="2">MAE (MW)</td><td>DAYTON</td><td>109.27</td><td>71.07</td></tr><tr><td>FirstEnergy</td><td>198.5</td><td>132.4</td></tr><tr><td rowspan="2">Std (MW)</td><td>DAYTON</td><td>143.15</td><td>94.63</td></tr><tr><td>FirstEnergy</td><td>319</td><td>231.03</td></tr><tr><td rowspan="2">MAPE (%)</td><td>DAYTON</td><td>5.6</td><td>3.47</td></tr><tr><td>FirstEnergy</td><td>1.74</td><td>1.16</td></tr></table>
|
| 289 |
+
|
| 290 |
+

|
| 291 |
+
Figure 13: The output and the error of the CNN-LSTM prediction system: (a) for DAYTON data set; (b) for FirstEnergy data set.
|
| 292 |
+
|
| 293 |
+

|
| 294 |
+
|
| 295 |
+
# 5.4. Discussion
|
| 296 |
+
|
| 297 |
+
This work targets periodic time series data such as hourly or monthly water demand, power consumption, or daily average temperature. Such data are characterized by their seasonality, which means they show a clear pattern that repeats itself with a period $T$. Thus, the behaviour of the NN prediction model at time step $t$ carries some information about its behaviour at time step $t - T$. The basic approach of PID
|
| 298 |
+
|
| 299 |
+
control implies driving the output of the system to a known target; in prediction problems, however, the target is unknown, so the real error at time step $t - T$ is used as a target instead of the unknown one at time $t$. For a better understanding, substituting Equation (4) and Equation (2) into Equation (3) results in the following equation:
|
| 300 |
+
|
| 301 |
+
$$
|
| 302 |
+
e(t) = PV(t) - RV(t) - K_{p} e(t - T) - K_{i} \sum_{x = (i - 1)T}^{t - T} e(x) - K_{d} \bigl(e(t - T) - e(t - T - 1)\bigr) \tag{7}
|
| 303 |
+
$$
|
| 304 |
+
|
| 305 |
+
Since $PV(t) - RV(t) = e_{NN}(t)$ is the prediction error of the NN model, we have:
|
| 306 |
+
|
| 307 |
+
$$
|
| 308 |
+
e(t) = e_{NN}(t) - K_{p} e(t - T) - K_{i} \sum_{x = (i - 1)T}^{t - T} e(x) - K_{d} \left(e(t - T) - e(t - T - 1)\right) \tag{8}
|
| 309 |
+
$$
|
| 310 |
+
|
| 311 |
+
The trained NN model guarantees a bounded prediction error such that $\| e_{NN}(t) \| \leq M$, where $M \in \Re$ is a positive number. The PID approach consists of three parts. The proportional part, $K_{p} e(t - T)$, pulls the output of the system slightly towards the real value at $t - T$. That is because, with periodic data, the neural network tends to behave similarly at steps $t - iT$, where $i = 1, 2, 3, \ldots$; namely, the errors $e(t - iT)$ are most likely to have the same sign. However, it is not guaranteed that $RV(t) = RV(t - T)$ holds precisely; therefore, it is not wise to choose $K_{p} = 1$.
|
| 312 |
+
|
| 313 |
+
In the industrial field, the PID controller is applied recursively until the output reaches the desired target; in the prediction problem, however, the PID-based method is applied only once, due to the uncertainty of the target, so it is preferable to choose $K_{p} < 1$. A small numerical illustration is given below.
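As a purely hypothetical numerical illustration (the values are not taken from the experiments), keep only the proportional term of Equation (8) and consider two situations:

$$
e_{NN}(t) = 10, \; e(t - T) = 9 \;\Rightarrow\; e(t) \approx 10 - 9K_{p}, \qquad
e_{NN}(t) = 2, \; e(t - T) = 9 \;\Rightarrow\; e(t) \approx 2 - 9K_{p}.
$$

With $K_{p} = 1$ the second case overshoots to $-7$, whereas $K_{p} = 0.4$ limits it to $-1.6$, at the price of a smaller correction in the first case ($6.4$ instead of $1$).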
|
| 314 |
+
|
| 315 |
+
The summation part provides some information about the performance of the NN model: a positive sum implies that the NN prediction error has been positive for several consecutive steps ($PV > RV$), and vice versa. When the errors of several consecutive prediction steps have the same sign, the accumulated error increases. The sum part aims to give the output of the NN model a push large enough to bring the output of the system to the other side of the real value $RV(t)$, so that the input of the next prediction steps is distributed around the real values and the accumulated error is mitigated. The prediction error approximately follows a normal distribution, which means that the error takes both negative and positive values with similar probability; this, in turn, makes the sum of the errors oscillate around one point, as shown in Fig. 14.
|
| 316 |
+
|
| 317 |
+
The derivative part works as a brake that slows down the sum part when consecutive error values still have the same sign but their amplitude is getting smaller.
|
| 318 |
+
|
| 319 |
+

|
| 320 |
+
(a)
|
| 321 |
+
|
| 322 |
+

|
| 323 |
+
(b)
|
| 324 |
+
|
| 325 |
+

|
| 326 |
+
(c)
|
| 327 |
+
|
| 328 |
+

|
| 329 |
+
(d)
|
| 330 |
+
|
| 331 |
+

|
| 332 |
+
(e)
|
| 333 |
+
|
| 334 |
+

|
| 335 |
+
(f)
|
| 336 |
+
Figure 14: The change of the summation term in Equation (4) when applying the PID-based method to several models with different data sets: (a) GRUN model with DMA1; (b) GRUN model with DMA2; (c) DCGRU model with DMA1; (d) DCGRU model with DMA2; (e) CNN-LSTM model with DAYTON; and (f) CNN-LSTM model with FirstEnergy.
|
| 337 |
+
|
| 338 |
+
It is evident from Table 5 that the use of the PID-based method brings improvements comparable to the other methods used in the literature to improve the accuracy of NN-based prediction models, while resulting in the least complexity. The designed PID-based method adds only three variables to the prediction system: $K_{p}$, $K_{i}$, and $K_{d}$. At the same time, it reduces the error enough to make the AIC smaller than that of the other methods. Although the SPNN model reduces the error sufficiently, it increases the number of variables by 9312, which in turn leads to a large AIC. The classification step used with the DCGRU model results in an AIC similar to that obtained when applying the PID-based method to the same system. When there is some tolerance in the prediction accuracy, the PID-based method is preferred, while it is better to apply the classification step in other cases, especially when the prediction time matters, since the PID-based method has a longer prediction time than the other methods.
|
| 339 |
+
|
| 340 |
+
The histograms of the absolute error in Fig. 10 and Fig. 11 show that most of the error values fall in the intervals around or below $5\mathrm{m}^3 /15\mathrm{min}$ when using the SPNN and the PID-based methods, which confirms the usefulness of both methods. Moreover, the PID-based method reduces the number of error values that fall in the intervals above $5\mathrm{m}^3 /15\mathrm{min}$ more than the SPNN model does.
|
| 341 |
+
|
| 342 |
+
The ExT of the system with the PID-based method is slightly higher than that of the system with the SPNN model or with the classification method. This is due to the extra time required to calculate the integral of the errors in the PID-based method. The ExT is identical for all data sets because the number of calculations needed to complete the process does not depend on the size of the data.
|
| 343 |
+
|
| 344 |
+
# 6. Conclusion
|
| 345 |
+
|
| 346 |
+
This work has proposed an effective method to boost the efficiency of NN models designed for multi-step forecasting of time-series events based on the iterative prediction strategy. The proposed method depends on the concept of PID control, which is widely used for controlling time-series events in industrial fields. Two NN models for water demand prediction and one for energy consumption prediction have been used to demonstrate the efficiency of the PID-based method. Real water demand data from two areas in China and two energy consumption data sets from the USA have been used. The prediction system using the proposed PID-based method has been compared with the same
|
| 347 |
+
|
| 348 |
+
system when other methods are used to reduce the prediction error. The comparison results can be summarized as follows:
|
| 349 |
+
|
| 350 |
+
- The PID-based method reduces the prediction error to the same level as that obtained by other methods considered in this study.
|
| 351 |
+
- The PID-based method increases the execution time slightly compared to the other methods.
|
| 352 |
+
- In terms of system complexity, the PID-based method demonstrates superior performance since it reduces the error effectively with a negligible effect on the number of variables in the system. In contrast, the SPNN and classification methods substantially increase the number of variables in the system.
|
| 353 |
+
|
| 354 |
+
In the future, the applicability of the proposed method will be investigated for non-periodic data, as well as for the multi-input multi-output strategy for multi-step prediction. Moreover, the optimization of the PID parameters will be studied in future work.
|
| 355 |
+
|
| 356 |
+
# References
|
| 357 |
+
|
| 358 |
+
[1] G. Papacharalampous, H. Tyralis, D. Koutsoyiannis, Comparison of stochastic and machine learning methods for multi-step ahead forecasting of hydrological processes, Stochastic Environmental Research and Risk Assessment 33 (2) (2019) 481-514. doi: 10.1007/s00477-018-1638-6.
|
| 359 |
+
[2] G. Guo, S. Liu, Y. Wu, J. Li, R. Junyu, X. Zhu, Short-term water demand forecast based on deep learning method, Journal of Water Resources Planning and Management 144 (12) (2018) 4018076. doi:10.1061/(ASCE)WR.1943-5452.0000992.
|
| 360 |
+
[3] A. Antunes, A. Andrade-Campos, A. Sardinha-Lourenço, M. S. Oliveira, Short-term water demand forecasting using machine learning techniques, Journal of Hydroinformatics 20 (6) (2018) 1343-1366. doi:10.2166/hydro.2018.163.
|
| 361 |
+
[4] W. He, X. Mu, L. Zhang, Y. Zou, Modeling and trajectory tracking control for flapping-wing micro aerial vehicles, IEEE/CAA Journal of Automatica Sinica 8 (1) (2021) 148-156. doi:10.1109/JAS.2020.1003417.
|
| 362 |
+
|
| 363 |
+
[5] Z. Liu, Z. Han, Z. Zhao, W. He, Modeling and adaptive control for a spatial flexible spacecraft with unknown actuator failures, SCIENCE CHINA Information Sciences (2020), to be published. doi:10.1007/s11432-020-3109-x.
|
| 364 |
+
[6] X. Yu, W. He, Y. Li, C. Xue, Y. Sun, Y. Wang, Adaptive NN impedance control for an SEA-driven robot, SCIENCE CHINA Information Sciences 63 (5) (2020) 159207. doi:10.1007/s11432-018-9631-7.
|
| 365 |
+
[7] Y. Y. Ou, A. C. Tsai, X. P. Zhou, J. F. Wang, Automatic drug pills detection based on enhanced feature pyramid network and convolution neural networks, IET Computer Vision 14 (1) (2020) 9-17. doi:10.1049/iet-cvi.2019.0171.
|
| 366 |
+
[8] W. Ouyang, B. Xu, J. Hou, X. Yuan, Fabric defect detection using activation layer embedded convolutional neural network, IEEE Access 7 (2019) 70130-70140. doi: 10.1109/ACCESS.2019.2913620.
|
| 367 |
+
[9] F. Barchi, E. Parisi, G. Urgese, E. Ficarra, A. Acquaviva, Exploration of Convolutional Neural Network models for source code classification, Engineering Applications of Artificial Intelligence 97 (2021) 104075. doi:10.1016/j.engappai.2020.104075.
|
| 368 |
+
[10] J. Song, J. Zhao, F. Dong, J. Zhao, L. Xu, Z. Yao, A new demagnetization fault recognition and classification method for DPMSLM, IEEE Transactions on Industrial Informatics 16 (3) (2020) 1559-1570. doi:10.1109/TII.2019.2928008.
|
| 369 |
+
[11] G. Capizzi, G. L. Sciuto, C. Napoli, G. Susi, M. Wozniak, A spiking neural network-based long-term prediction system for biogas production, Neural Networks 129 (2020) 271-279. doi:10.1016/j.neunet.2020.06.001.
|
| 370 |
+
[12] H. Xing, G. Wang, C. Liu, M. Suo, PM2.5 concentration modeling and prediction by using temperature-based deep belief network, Neural Networks 133 (2020) 157–165. doi:10.1016/j.neunet.2020.10.013.
|
| 371 |
+
[13] A. Rosenfeld, M. Cohen, S. Kraus, J. Keshet, Online prediction of time series with assumed behavior, Engineering Applications of Artificial Intelligence 88 (2020) 103358. doi:10.1016/j.engappai.2019.103358.
|
| 372 |
+
[14] P. Du, J. Wang, W. Yang, T. Niu, Multi-step ahead forecasting in electrical power system using a hybrid forecasting system, Renewable Energy 122 (2018) 533-550. doi: 10.1016/j.renene.2018.01.113.
|
| 373 |
+
|
| 374 |
+
[15] Y. Liu, C. Gong, L. Yang, Y. Chen, DSTP-RNN: A dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction, Expert Systems with Applications 143 (2020) 113082. arXiv:1904.07464, doi:10.1016/j.eswa.2019.113082.
|
| 375 |
+
[16] R. Wang, C. Li, W. Fu, G. Tang, Deep Learning Method Based on Gated Recurrent Unit and Variational Mode Decomposition for Short-Term Wind Power Interval Prediction, IEEE Transactions on Neural Networks and Learning Systems 31 (10) (2020) 3814-3827. doi:10.1109/TNNLS.2019.2946414.
|
| 376 |
+
[17] N. Safari, C. Y. Chung, G. C. D. Price, Novel multi-step short-term wind power prediction framework Based on chaotic time series analysis and singular spectrum analysis, IEEE Transactions on Power Systems 33 (1) (2017) 590-601. doi:10.1109/tpwrs.2017.2694705.
|
| 377 |
+
[18] M. Khodayar, O. Kaynak, M. E. Khodayar, Rough deep neural architecture for short-term wind speed forecasting, IEEE Transactions on Industrial Informatics 13 (6) (2017) 2770-2779. doi:10.1109/TII.2017.2730846.
|
| 378 |
+
[19] T. Salloom, O. Kaynak, W. He, A novel deep neural network architecture for real-time water demand forecasting, Journal of Hydrology 599 (2021) 126353. doi:https://doi.org/10.1016/j.jhydrol.2021.126353.
|
| 379 |
+
[20] B. Du, Q. Zhou, J. Guo, S. Guo, L. Wang, Deep learning with long short-term memory neural networks combining wavelet transform and principal component analysis for daily urban water demand forecasting, Expert Systems with Applications 171 (2021) 114571. doi:10.1016/j.eswa.2021.114571.
|
| 380 |
+
[21] S. Ben Taieb, G. Bontempi, A. F. Atiya, A. Sorjamaa, A review and comparison of strategies for multi-step ahead time series forecasting based on the NN5 forecasting competition, Expert Systems with Applications 39 (8) (2012) 7067-7083. arXiv:1108.3259, doi:10.1016/j.eswa.2012.01.039.
|
| 381 |
+
[22] Y. Bao, T. Xiong, Z. Hu, Multi-step-ahead time series prediction using multiple-output support vector regression, Neurocomputing 129 (2014) 482-493. doi:10.1016/j.neucom.2013.09.010.
|
| 382 |
+
|
| 383 |
+
[23] P. Cortez, P. J. Pereira, R. Mendes, Multi-step time series prediction intervals using neuroevolution, Neural Computing and Applications (2019) 1-15. doi:10.1007/s00521-019-04387-3.
|
| 384 |
+
[24] J. Soto, P. Melin, O. Castillo, A new approach for time series prediction using ensembles of IT2FNN models with optimization of fuzzy integrators, International Journal of Fuzzy Systems 20 (3) (2018) 701-728. doi:10.1007/s40815-017-0443-6.
|
| 385 |
+
[25] J. Soto, O. Castillo, P. Melin, W. Pedrycz, A new approach to multiple time series prediction using MIMO fuzzy aggregation models with modular neural networks, International Journal of Fuzzy Systems 21 (5) (2019) 1629-1648. doi:10.1007/s40815-019-00642-w.
|
| 386 |
+
[26] J. Soto, P. Melin, O. Castillo, Time series prediction using ensembles of ANFIS models with genetic optimization of interval type-2 and type-1 fuzzy integrators, International Journal of Hybrid Intelligent Systems 11 (3) (2014) 211–226. doi: 10.3233/his-140196.
|
| 387 |
+
[27] J. Wang, Y. Song, F. Liu, R. Hou, Analysis and application of forecasting models in wind power integration: A review of multi-step-ahead wind speed forecasting models, Renewable and Sustainable Energy Reviews 60 (2016) 960-981. doi:10.1016/j.rser.2016.01.114.
|
| 388 |
+
[28] J. K. Ambrosio, B. M. Brentan, M. Herrera, E. Luvizotto, L. Ribeiro, J. Izquierdo, Committee machines for hourly water demand forecasting in water supply systems, Mathematical Problems in Engineering 2019 (2019) 1-11. doi:10.1155/2019/9765468.
|
| 389 |
+
[29] R. Ye, Q. Dai, MultiTL-KELM: A multi-task learning algorithm for multi-step-ahead time series prediction, Applied Soft Computing Journal 79 (2019) 227-253. doi:10.1016/j.asoc.2019.03.039.
|
| 390 |
+
[30] A. Sardinha-Lourenço, A. Andrade-Campos, A. Antunes, M. S. Oliveira, Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy, Journal of Hydrology 558 (2018) 392-404. doi:10.1016/j.jhydrol.2018.01.047.
|
| 391 |
+
[31] S. Li, H. Fang, B. Shi, Multi-step-ahead prediction with long short term memory networks and support vector regression, in: Chinese Control Conference, CCC, Vol.
|
| 392 |
+
|
| 393 |
+
2018-July, IEEE Computer Society, 2018, pp. 8104-8109. doi:10.23919/ChiCC.2018.8484066.
|
| 394 |
+
[32] J. Chen, G. Q. Zeng, W. Zhou, W. Du, K. D. Lu, Wind speed forecasting using nonlinear-learning ensemble of deep learning time series prediction and extremal optimization, Energy Conversion and Management 165 (2018) 681-695. doi:10.1016/j.enconman.2018.03.098.
|
| 395 |
+
[33] Y. Li, H. Shi, F. Han, Z. Duan, H. Liu, Smart wind speed forecasting approach using various boosting algorithms, big multi-step forecasting strategy, Renewable Energy 135 (2019) 540-553. doi:10.1016/j.renene.2018.12.035.
|
| 396 |
+
[34] P. S. de Mattos Neto, T. A. Ferreira, A. R. Lima, G. C. Vasconcelos, G. D. Cavalcanti, A perturbative approach for enhancing the performance of time series forecasting, Neural Networks 88 (2017) 114-124. doi:10.1016/j.neunet.2017.02.004.
|
| 397 |
+
[35] Y. Wang, K. Ma, L. Garcia-Hernandez, J. Chen, Z. Hou, K. Ji, Z. Chen, A. Abraham, A CLSTM-TMN for marketing intention detection, Engineering Applications of Artificial Intelligence 91 (2020) 103595. doi:10.1016/j.engappai.2020.103595.
|
| 398 |
+
[36] B. Alhnaity, M. Abbod, A new hybrid financial time series prediction model, Engineering Applications of Artificial Intelligence 95 (2020) 103873. doi:10.1016/j.engappai.2020.103873.
|
| 399 |
+
[37] Y. Xu, J. Zhang, Z. Long, H. Tang, X. Zhang, Hourly urban water demand forecasting using the continuous deep belief echo state network, Water 11 (2) (2019) 351. doi: 10.3390/w11020351.
|
| 400 |
+
[38] H. Shi, Y. Zhang, Z. Zhang, N. Ma, X. Zhao, Y. Gao, J. Sun, Hypergraph-induced convolutional networks for visual classification, IEEE Transactions on Neural Networks and Learning Systems 30 (10) (2019) 2963-2972. doi:10.1109/TNNLS.2018.2869747.
|
| 401 |
+
[39] M. Rabah, A. Rohan, S. A. Mohamed, S. H. Kim, Autonomous moving target-tracking for a UAV quadcopter based on fuzzy-PI, IEEE Access 7 (2019) 38407–38419. doi: 10.1109/ACCESS.2019.2906345.
|
| 402 |
+
[40] D. Zhao, Z. Wang, D. W. C. Ho, G. Wei, Observer-Based PID Security Control for Discrete Time-Delay Systems Under Cyber-Attacks, IEEE Transactions on Systems, Man, and Cybernetics: Systems (2019) 1-13. doi:10.1109/tsmc.2019.2952539.
|
| 403 |
+
|
| 404 |
+
[41] Y. Wang, L. Zou, Z. Zhao, X. Bai, $H_{\infty}$ fuzzy PID control for discrete time-delayed T-S fuzzy systems, Neurocomputing 332 (2019) 91-99. doi:10.1016/j.neucom.2018.12.002.
|
| 405 |
+
[42] E. A. Donkor, T. A. Mazzuchi, R. Soyer, J. Alan Roberson, Urban water demand forecasting: review of methods and models, Journal of Water Resources Planning and Management 140 (2) (2012) 146-159. doi:10.1061/(asce)wr.1943-5452.0000314.
|
| 406 |
+
[43] H. Mala-Jetmarova, N. Sultanova, D. Savic, Lost in optimisation of water distribution systems? A literature review of system operation, Environmental Modelling and Software 93 (2017) 209–254. doi:10.1016/j.envsoft.2017.02.009.
|
| 407 |
+
[44] G. Balacco, A. Carbonara, A. Gioia, V. Iacobellis, A. F. Piccinni, Evaluation of peak water demand factors in puglia (Southern Italy), Water (Switzerland) 9 (2) (2017) 96. doi:10.3390/w9020096.
|
| 408 |
+
[45] S. Lu, X. Gao, W. Li, S. Jiang, L. Huang, A study on the spatial and temporal variability of the urban residential water consumption and its influencing factors in the major cities of China, Habitat International 78 (2018) 29-40. doi: 10.1016/j.habitatint.2018.05.002.
|
| 409 |
+
[46] G. Romano, N. Salvati, A. Guerrini, An empirical analysis of the determinants of water demand in Italy, Journal of Cleaner Production 130 (2016) 74-81. doi:10.1016/j.jclepro.2015.09.141.
|
| 410 |
+
[47] M. Romano, Z. Kapelan, Adaptive water demand forecasting for near real-time management of smart water distribution systems, Environmental Modelling and Software 60 (2014) 265-276. doi:10.1016/j.envsoft.2014.06.016.
|
| 411 |
+
[48] H. Najafizadegan, F. Merrikh-Bayat, A. Jalilvand, IMC-PID controller design based on loop shaping via LMI approach, Chemical Engineering Research and Design 124 (2017) 170-180. doi:10.1016/j.cherd.2017.06.007.
|
| 412 |
+
[49] Y. Wang, Q. Jin, R. Zhang, Improved fuzzy PID controller design using predictive functional control structure, ISA Transactions 71 (2017) 354-363. doi:10.1016/j.isatra.2017.09.005.
|
| 413 |
+
[50] T. Salloom, X. Yu, W. He, O. Kaynak, Adaptive neural network control of underwater robotic manipulators tuned by a genetic algorithm, Journal of Intelligent
|
| 414 |
+
|
| 415 |
+
and Robotic Systems: Theory and Applications 97 (3-4) (2020) 657-672. doi: 10.1007/s10846-019-01008-y.
|
| 416 |
+
[51] H. Feng, C. B. Yin, W. wen Weng, W. Ma, J. jing Zhou, W. hua Jia, Z. li Zhang, Robotic excavator trajectory control using an improved GA based PID controller, Mechanical Systems and Signal Processing 105 (2018) 153-168. doi:10.1016/j.ymssp.2017.12.014.
|
| 417 |
+
[52] K. Yan, X. Wang, Y. Du, N. Jin, H. Huang, H. Zhou, Multi-step short-term power consumption forecasting with a hybrid deep learning strategy, Energies 11 (11) (2018) 3089. doi:10.3390/en11113089.
|
| 418 |
+
[53] G. Paneiro, F. O. Durão, M. Costa e Silva, P. Falcao Neves, Artificial neural network model for ground vibration amplitudes prediction due to light railway traffic in urban areas, Neural Computing and Applications 29 (11) (2018) 1045-1057. doi:10.1007/s00521-016-2625-9.
|
| 419 |
+
[54] A. K. Seghouane, New AIC corrected variants for multivariate linear regression model selection, IEEE Transactions on Aerospace and Electronic Systems 47 (2) (2011) 1154-1165. doi:10.1109/TAES.2011.5751249.
|
| 420 |
+
[55] G. Panchal, A. Ganatra, Y. Kosta, D. Panchal, Searching most efficient neural network architecture using akaike's information criterion (AIC), International Journal of Computer Applications 1 (5) (2010) 54-57. doi:10.5120/126-242.
|
| 421 |
+
|
| 422 |
+

|
| 423 |
+
|
| 424 |
+
Tony Salloom received the B.Sc. in electronics and computer engineering from Aleppo University, Aleppo, Syria, in 2008, and the M.Eng. in information and telecommunication engineering from the University of Science and Technology Beijing, Beijing, China, in 2016.
|
| 425 |
+
|
| 426 |
+
From 2009 to 2013, he served as a Database Engineer at the Syrian Telecommunication Company, Aleppo, Syria. He is currently pursuing
|
| 427 |
+
|
| 428 |
+
the Ph.D. degree with the School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China. His current research interests include Deep learning, time-series forecasting, intelligent control systems, and robotics.
|
| 429 |
+
|
| 430 |
+

|
| 431 |
+
|
| 432 |
+
Okyay Kaynak received the B.Sc. degree (with first class honors) and the Ph.D. degree in electronic and electrical engineering from the University of Birmingham, UK, in 1969 and 1972, respectively. From 1972 to 1979, he held various positions within industry. In 1979, he joined the Department of Electrical and Electronics Engineering, Bogazici University, Istanbul, Turkey, where he is currently a Professor Emeritus, holding the
|
| 433 |
+
|
| 434 |
+
UNESCO Chair on Mechatronics. He is also a 1000 People Plan Professor at the University of Science & Technology Beijing, China. He has held long-term (nearly or more than a year) Visiting Professor/Scholar positions at various institutions in Japan, Germany, the U.S., Singapore, and China. His current research interests are in the fields of intelligent control and industrial AI applications. He has authored four books, edited five, and authored or co-authored more than 400 papers that have appeared in various journals and conference proceedings. Dr. Kaynak has served as the Editor-in-Chief of IEEE Trans. on Industrial Informatics and IEEE/ASME Trans. on Mechatronics, as well as Co-Editor-in-Chief of IEEE Trans. on Industrial Electronics. Additionally, he is on the Editorial or Advisory Boards of several scholarly journals. In 2016, he received the Chinese Government's Friendship Award and the Humboldt Research Prize. Most recently, in 2020, he received the International Research Prize of the Turkish Academy of Sciences.
|
| 435 |
+
|
| 436 |
+
Dr. Kaynak is active in international organizations, has served on many committees of IEEE and was the president of IEEE Industrial Electronics Society during 2002-2003. He was elevated to IEEE fellow status in 2003.
|
| 437 |
+
|
| 438 |
+

|
| 439 |
+
|
| 440 |
+
Xinbo Yu (S'16-M'20) received the B.E. degree in control technology and instrumentation from the School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China, in 2013, and the Ph.D. degree in control science and engineering from the same school in 2020. He is currently working as an associate professor at the Institute of Artificial Intelligence, University of
|
| 441 |
+
|
| 442 |
+
Science and Technology Beijing, Beijing, China. His current research interests include adaptive neural networks control, robotics and human-robot interaction.
|
| 443 |
+
|
| 444 |
+

|
| 445 |
+
|
| 446 |
+
Wei He (S'09-M'12-SM'16) received his B.Eng. degree in automation and his M.Eng. degree in control science and engineering from the College of Automation Science and Engineering, South China University of Technology (SCUT), China, in 2006 and 2008, respectively, and his Ph.D. degree in control science and engineering from the Department of Electrical & Computer Engineering, National University of Singapore (NUS), Singapore, in 2011.
|
| 447 |
+
|
| 448 |
+
|
| 449 |
+
|
| 450 |
+
He is currently working as a full professor in the School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China. He has co-authored 2 books published by Springer and published over 100 international journal and conference papers. He was awarded a Newton Advanced Fellowship from the Royal Society, UK, in 2017. He was a recipient of the IEEE SMC Society Andrew P. Sage Best Transactions Paper Award in 2017. He is serving as the Chair of the IEEE SMC Society Beijing Capital Region Chapter. He is serving as an Associate Editor of IEEE Transactions on Robotics, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Control Systems Technology, IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE/CAA Journal of Automatica Sinica, and Neurocomputing, and as an Editor of the Journal of Intelligent & Robotic Systems. His current research interests include robotics, distributed parameter systems, and intelligent control systems.
|
2512.06xxx/2512.06357/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9057c9d06c17742374fdf213cef9d65740fcecd9f3eb8a29cd4b7ca89a71340c
|
| 3 |
+
size 1173878
|
2512.06xxx/2512.06357/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|