Title: VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation

URL Source: https://arxiv.org/html/2403.17001

Published Time: Fri, 03 May 2024 00:14:56 GMT

Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, and Tao Mei

HiDream.ai Inc.

Fudan University

{c1enyang, pandy, tiyao, tmei}@hidream.ai, yanghaibo.fdu@gmail.com

###### Abstract

Recent innovations in text-to-3D generation have featured Score Distillation Sampling (SDS), which enables zero-shot learning of implicit 3D models (NeRF) by directly distilling prior knowledge from 2D diffusion models. However, current SDS-based models still struggle with intricate text prompts and commonly produce distorted 3D models with unrealistic textures or cross-view inconsistencies. In this work, we introduce a novel Visual Prompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the visual appearance knowledge in a 2D visual prompt to boost text-to-3D generation. Instead of supervising SDS with the text prompt alone, VP3D first capitalizes on a 2D diffusion model to generate a high-quality image from the input text, which subsequently acts as a visual prompt to strengthen SDS optimization with explicit visual appearance. Meanwhile, we couple the SDS optimization with an additional differentiable reward function that encourages the rendered images of 3D models to visually align with the 2D visual prompt and semantically match the text prompt. Through extensive experiments, we show that the 2D visual prompt in our VP3D significantly eases the learning of visual appearance of 3D models and thus leads to higher visual fidelity with more detailed textures. Appealingly, when the self-generated visual prompt is replaced with a given reference image, VP3D triggers a new task of stylized text-to-3D generation. Our project page is available at [https://vp3d-cvpr24.github.io](https://vp3d-cvpr24.github.io/).
1 Introduction
--------------

![Image 1: Refer to caption](https://arxiv.org/html/2403.17001v1/)

Figure 1: Existing text-to-3D generation techniques (e.g., Magic3D [[17](https://arxiv.org/html/2403.17001v1#bib.bib17)] and ProlificDreamer [[39](https://arxiv.org/html/2403.17001v1#bib.bib39)]) often suffer from degenerated results (e.g., over-saturated appearances or inaccurate geometries). Our VP3D novelly integrates a visual prompt to strengthen score distillation sampling, leading to better 3D results.

Generative Artificial Intelligence (especially for visual content generation) has attracted great attention in the computer vision field [[6](https://arxiv.org/html/2403.17001v1#bib.bib6), [5](https://arxiv.org/html/2403.17001v1#bib.bib5), [26](https://arxiv.org/html/2403.17001v1#bib.bib26), [20](https://arxiv.org/html/2403.17001v1#bib.bib20)], leading to impressive advancements in text-to-image [[30](https://arxiv.org/html/2403.17001v1#bib.bib30), [32](https://arxiv.org/html/2403.17001v1#bib.bib32), [31](https://arxiv.org/html/2403.17001v1#bib.bib31)] and text-to-video generation [[34](https://arxiv.org/html/2403.17001v1#bib.bib34), [14](https://arxiv.org/html/2403.17001v1#bib.bib14), [10](https://arxiv.org/html/2403.17001v1#bib.bib10)]. These accomplishments can be attributed to the availability of large-scale image-text and video-text pair data [[1](https://arxiv.org/html/2403.17001v1#bib.bib1), [33](https://arxiv.org/html/2403.17001v1#bib.bib33)] and the emergence of robust diffusion-based generative models [[35](https://arxiv.org/html/2403.17001v1#bib.bib35), [13](https://arxiv.org/html/2403.17001v1#bib.bib13), [25](https://arxiv.org/html/2403.17001v1#bib.bib25), [12](https://arxiv.org/html/2403.17001v1#bib.bib12)]. Recently, researchers have gone beyond text-driven image/video generation and begun exploring diffusion models for text-driven creation of 3D assets (i.e., text-to-3D generation). This direction paves a new way for practical 3D content creation and has great potential impact on numerous applications such as virtual reality, gaming, and the Metaverse. Compared to image generation, however, text-to-3D generation is more challenging due to the complexities of intricate 3D geometry and appearance (i.e., object shapes and textures). Moreover, the collection and annotation of 3D data are expensive and thus cannot easily be scaled up to the billion level of image data.

To tackle this issue, a pioneering text-to-3D work (DreamFusion [[27](https://arxiv.org/html/2403.17001v1#bib.bib27)]) presents the first attempt to exploit an off-the-shelf text-to-image diffusion model to generate promising 3D assets in a zero-shot fashion. The key design behind this success is Score Distillation Sampling (SDS), which directly optimizes an implicit 3D model, a Neural Radiance Field (NeRF), with prior knowledge distilled from a 2D diffusion model. Nevertheless, such distilled prior knowledge is driven merely by the input text prompt, and it is non-trivial to learn a high-quality NeRF under distilled SDS supervision. Although several subsequent works [[38](https://arxiv.org/html/2403.17001v1#bib.bib38), [22](https://arxiv.org/html/2403.17001v1#bib.bib22), [17](https://arxiv.org/html/2403.17001v1#bib.bib17), [4](https://arxiv.org/html/2403.17001v1#bib.bib4), [39](https://arxiv.org/html/2403.17001v1#bib.bib39)] further upgrade SDS, such SDS-based solutions still result in degenerated 3D models with unrealistic/less-detailed textures, especially when fed intricate text prompts (as seen in Figure [1](https://arxiv.org/html/2403.17001v1#S1.F1 "Figure 1 β€£ 1 Introduction β€£ VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation") (a-b)).

In this work, we propose to mitigate this limitation through a unique design of visual prompt-guided text-to-3D diffusion model, namely VP3D. Intuitively, "a picture is worth a thousand words." That is, a single image can convey human intentions of visual content creation (e.g., the visual appearance or semantic structure) more effectively than textual sentences. This motivates us to introduce additional guidance from a visual prompt, and thus decompose the typical single-shot text-to-3D process into two cascaded processes: first text-to-image generation, and then (text plus image)-to-3D generation. In particular, VP3D first leverages off-the-shelf text-to-image diffusion models to produce a high-fidelity image that reflects an extremely realistic appearance with rich details. In the latter process, this synthetic image further acts as a 2D visual prompt to supervise the SDS optimization of NeRF, coupled with the input text prompt. At the same time, a differentiable reward function is additionally utilized to encourage the rendered images of NeRF to better align with the 2D visual prompt (visual appearance consistency) and the text prompt (semantic consistency). As illustrated in Figure [1](https://arxiv.org/html/2403.17001v1#S1.F1 "Figure 1 β€£ 1 Introduction β€£ VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation") (c), we show that the novel visual prompt-guided diffusion process in VP3D significantly enhances the visual fidelity of 3D assets with realistic and rich texture details. Meanwhile, since the visual prompt guidance eases the learning of visual appearance of 3D assets, the optimization of NeRF can focus more on modeling the geometry, leading to better 3D shapes with cross-view consistency. We believe that the ability to unleash high-quality visual knowledge in a 2D visual prompt is potentially a new paradigm of text-to-3D generation.

As a by-product, we also demonstrate that our VP3D can be readily adapted to a new task of stylized text-to-3D generation. Intriguingly, we simply replace the self-generated image in VP3D with a user-given reference image and treat it as a new visual prompt to trigger (text plus image)-to-3D generation. In this way, our VP3D produces a stylized 3D asset that not only semantically aligns with the text prompt but also shares a similar geometric and visual style with the reference image.

2 Related Work
--------------

Text-to-3D generation. Significant advancements have been witnessed in text-to-image generation with 2D diffusion models in recent years [[35](https://arxiv.org/html/2403.17001v1#bib.bib35), [13](https://arxiv.org/html/2403.17001v1#bib.bib13), [25](https://arxiv.org/html/2403.17001v1#bib.bib25), [12](https://arxiv.org/html/2403.17001v1#bib.bib12), [30](https://arxiv.org/html/2403.17001v1#bib.bib30), [32](https://arxiv.org/html/2403.17001v1#bib.bib32), [31](https://arxiv.org/html/2403.17001v1#bib.bib31), [3](https://arxiv.org/html/2403.17001v1#bib.bib3)]. However, extending these capabilities to 3D content generation poses a substantial challenge, primarily due to the absence of large-scale paired text-3D datasets. To mitigate the reliance on extensive training data, recent works try to accomplish zero-shot text-to-3D generation [[27](https://arxiv.org/html/2403.17001v1#bib.bib27), [38](https://arxiv.org/html/2403.17001v1#bib.bib38), [22](https://arxiv.org/html/2403.17001v1#bib.bib22), [17](https://arxiv.org/html/2403.17001v1#bib.bib17), [8](https://arxiv.org/html/2403.17001v1#bib.bib8), [7](https://arxiv.org/html/2403.17001v1#bib.bib7), [42](https://arxiv.org/html/2403.17001v1#bib.bib42), [4](https://arxiv.org/html/2403.17001v1#bib.bib4), [39](https://arxiv.org/html/2403.17001v1#bib.bib39)]. Specifically, the pioneering work DreamFusion [[27](https://arxiv.org/html/2403.17001v1#bib.bib27)] showcased remarkable achievements in text-to-3D generation through pre-trained text-to-image diffusion models. SJC [[38](https://arxiv.org/html/2403.17001v1#bib.bib38)] concurrently addressed the out-of-distribution problem in lifting 2D diffusion models to perform text-to-3D generation. Following these, several subsequent works have strived to further enhance text-to-3D generation. For instance, Latent-NeRF [[22](https://arxiv.org/html/2403.17001v1#bib.bib22)] proposed to incorporate a sketch shape to guide 3D generation directly in the latent space of a latent diffusion model. Magic3D [[17](https://arxiv.org/html/2403.17001v1#bib.bib17)] presented a coarse-to-fine strategy that leverages both low- and high-resolution diffusion priors to learn the underlying 3D representation. Control3D [[8](https://arxiv.org/html/2403.17001v1#bib.bib8)] proposed to enhance user controllability in text-to-3D generation by incorporating additional hand-drawn sketch conditions. ProlificDreamer [[39](https://arxiv.org/html/2403.17001v1#bib.bib39)] presented a principled particle-based variational framework to improve generation quality.

Unlike previous works, we formulate the text-to-3D generation process from a new perspective. We first leverage off-the-shelf text-to-image diffusion models to generate a high-quality image that faithfully matches the input text prompt. This synthetic reference image then serves as a complementary input alongside the text, synergistically guiding the 3D learning process. Moreover, we showcase the versatility of this novel architecture by effortlessly extending its capabilities to the realm of stylized text-to-3D generation. The resulting 3D asset not only exhibits semantic alignment with the provided text prompt but also captures the visual style of the reference image. This capability marks another pivotal distinction between our VP3D and previous text-to-3D approaches.

Image-to-3D generation. Recently, prior works RealFusion [[21](https://arxiv.org/html/2403.17001v1#bib.bib21)], NeuralLift-360 [[40](https://arxiv.org/html/2403.17001v1#bib.bib40)] and NeRDi [[9](https://arxiv.org/html/2403.17001v1#bib.bib9)] leverage 2D diffusion models to achieve image-to-3D generation. The follow-up work Make-It-3D [[37](https://arxiv.org/html/2403.17001v1#bib.bib37)] proposed a two-stage optimization framework to further improve generation quality. Zero-1-to-3 [[19](https://arxiv.org/html/2403.17001v1#bib.bib19)] fine-tuned the Stable Diffusion model to generate novel views of an input image; it can then be used as a 3D prior model to achieve high-quality image-to-3D generation. Inspired by this, Magic123 [[28](https://arxiv.org/html/2403.17001v1#bib.bib28)] proposed to use 2D and 3D priors simultaneously to generate faithful 3D content from a given image. One-2-3-45 [[18](https://arxiv.org/html/2403.17001v1#bib.bib18)] integrated Zero-1-to-3 with a multi-view reconstruction model to accelerate the 3D generation process. It is worth noting that our work does not target image-to-3D generation: we utilize a reference image to guide the text-to-3D learning process instead of directly turning the reference image into 3D content.

3 VP3D
------

In this section, we elaborate on the architecture of our VP3D, which introduces a novel visual prompt-guided text-to-3D diffusion model. An overview of the VP3D architecture is depicted in Figure [2](https://arxiv.org/html/2403.17001v1#S3.F2 "Figure 2 β€£ 3.1 Background β€£ 3 VP3D β€£ VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation").

### 3.1 Background

Text-to-Image Diffusion Models. Diffusion models are a family of generative models trained to gradually transform Gaussian noise into samples from a target distribution [[13](https://arxiv.org/html/2403.17001v1#bib.bib13)]. Given a target data distribution $q(\mathbf{x})$, a _forward diffusion process_ is defined to progressively add a small amount of Gaussian noise to the data $\mathbf{x}_0$ sampled from $q(\mathbf{x})$. This process follows a Markov chain $q(\mathbf{x}_{1:T})=\prod_{t=1}^{T}q(\mathbf{x}_t|\mathbf{x}_{t-1})$ and produces a sequence of latent variables $\mathbf{x}_1,\dots,\mathbf{x}_T$ after $T$ time steps. The marginal distribution of the latent variable at time step $t$ is $q(\mathbf{x}_t|\mathbf{x})=\mathcal{N}(\mathbf{x}_t;\alpha_t\mathbf{x},\sigma_t^2\mathbf{I})$. Thus the noisy sample $\mathbf{x}_t$ can be generated directly as $\mathbf{x}_t=\alpha_t\mathbf{x}+\sigma_t\epsilon$, where $\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})$, and $\alpha_t$ and $\sigma_t$ are chosen such that $\alpha_t^2+\sigma_t^2=1$. After $T$ noise-adding steps, $\mathbf{x}_T$ is approximately an isotropic Gaussian. A _reverse generative process_ is then defined to gradually "denoise" $\mathbf{x}_T$ and reconstruct the original sample. This can be described by a Markov process $p_\phi(\mathbf{x}_{0:T})=p(\mathbf{x}_T)\prod_{t=1}^{T}p_\phi(\mathbf{x}_{t-1}|\mathbf{x}_t)$ with conditional probability $p_\phi(\mathbf{x}_{t-1}|\mathbf{x}_t)=\mathcal{N}(\mathbf{x}_{t-1};\boldsymbol{\mu}_\phi(\mathbf{x}_t,t),\boldsymbol{\Sigma}_\phi(\mathbf{x}_t,t))$. Commonly, a U-Net $\boldsymbol{\epsilon}_\phi(\mathbf{x}_t;t)$ with parameters $\phi$ is used to predict the noise that was added to produce $\mathbf{x}_t$ at time step $t$. Text-to-image diffusion models build upon this formulation to condition the diffusion process on a given text prompt $y$ using classifier-free guidance (CFG) [[12](https://arxiv.org/html/2403.17001v1#bib.bib12)]. The noise predictor is remodeled as:

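As a concrete sketch of the forward process above, the direct sampling $\mathbf{x}_t=\alpha_t\mathbf{x}+\sigma_t\epsilon$ can be written in a few lines of NumPy. The cosine schedule here is only an illustrative choice satisfying $\alpha_t^2+\sigma_t^2=1$, not necessarily the schedule used by any particular diffusion model:

```python
import numpy as np

def alpha_sigma(t):
    """An illustrative variance-preserving schedule with
    alpha_t^2 + sigma_t^2 = 1 for continuous t in [0, 1]."""
    return np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(x_t; alpha_t * x0, sigma_t^2 * I)
    directly via x_t = alpha_t * x0 + sigma_t * eps."""
    alpha_t, sigma_t = alpha_sigma(t)
    eps = rng.standard_normal(x0.shape)
    return alpha_t * x0 + sigma_t * eps, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))
x_half, _ = forward_diffuse(x0, 0.5, rng)  # partially noised sample
x_end, _ = forward_diffuse(x0, 1.0, rng)   # at t = 1, alpha_t = 0: pure noise
```

At $t=1$ the data coefficient vanishes, so the sample is exactly the drawn Gaussian noise, matching the statement that $\mathbf{x}_T$ approaches an isotropic Gaussian.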
$$\hat{\boldsymbol{\epsilon}}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y})=\boldsymbol{\epsilon}_{\phi}(\mathbf{x}_{t};t,\emptyset)+s*(\boldsymbol{\epsilon}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y})-\boldsymbol{\epsilon}_{\phi}(\mathbf{x}_{t};t,\emptyset)), \tag{1}$$

where $s$ is a scale that denotes the classifier-free guidance weight, $\mathbf{z}_y$ is the text embedding of the text prompt $y$, and $\emptyset$ indicates the noise prediction without conditioning. The diffusion model $\boldsymbol{\epsilon}_\phi$ is typically optimized with a simplified variant of the variational lower bound on the log data likelihood, a mean-squared-error criterion:

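The CFG combination in Eq. (1) is a simple linear extrapolation from the unconditional prediction toward the conditional one; a minimal sketch, with stand-in arrays in place of a real U-Net's outputs:

```python
import numpy as np

def cfg_noise_prediction(eps_uncond, eps_text, s):
    """Classifier-free guidance (Eq. 1): extrapolate from the unconditional
    prediction toward the text-conditioned one with guidance weight s."""
    return eps_uncond + s * (eps_text - eps_uncond)

# Stand-in predictions for illustration (a real model maps (x_t, t, z) -> eps):
eps_uncond = np.zeros((2, 2))
eps_text = np.ones((2, 2))
guided = cfg_noise_prediction(eps_uncond, eps_text, s=7.5)
```

Note that $s=1$ recovers the plain conditional prediction and $s=0$ the unconditional one; SDS-based methods push $s$ far higher (e.g., 100 in DreamFusion), which is exactly the over-saturation trade-off discussed in Section 3.2.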
$$\mathcal{L}_{\mathrm{diff}}(\phi)=\mathbb{E}_{\mathbf{x},t,\epsilon}\Bigl[w(t)\,\|\hat{\boldsymbol{\epsilon}}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y})-\epsilon\|^{2}_{2}\Bigr], \tag{2}$$

where $w(t)$ is a weighting function that depends on the timestep $t\sim\mathcal{U}(0,1)$ and $\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})$.

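The training objective in Eq. (2) is just a timestep-weighted mean-squared error between the predicted and true noise; a minimal sketch over a batch (the weighting is a scalar here for simplicity):

```python
import numpy as np

def diffusion_loss(eps_pred, eps, w_t):
    """Weighted MSE of Eq. 2: w(t) * ||eps_hat - eps||_2^2,
    summed per sample and averaged over the batch for illustration."""
    sq = np.sum((eps_pred - eps) ** 2, axis=tuple(range(1, eps.ndim)))
    return np.mean(w_t * sq)

rng = np.random.default_rng(0)
eps = rng.standard_normal((3, 4, 4))      # true noise for a batch of 3
perfect = diffusion_loss(eps, eps, w_t=1.0)  # zero when prediction is exact
```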
Score Distillation Sampling. A recent pioneering work, DreamFusion [[27](https://arxiv.org/html/2403.17001v1#bib.bib27)], introduces Score Distillation Sampling (SDS), which leverages the priors of pre-trained text-to-image diffusion models to facilitate text-to-3D generation. Specifically, let $\theta$ be the learnable parameters of a 3D model (e.g., NeRF) and $g$ be a differentiable rendering function that renders an image $\mathbf{x}=g(\theta;\mathbf{c})$ from the 3D model $\theta$ at a camera viewpoint $\mathbf{c}$. SDS introduces a loss $\mathcal{L}_{SDS}$ to optimize the parameters $\theta$, whose gradient is defined as follows:

$$\nabla_{\theta}\mathcal{L}_{SDS}=\mathbb{E}_{t,\epsilon}\Bigl[w(t)\,(\hat{\boldsymbol{\epsilon}}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y})-\epsilon)\,\frac{\partial\mathbf{x}}{\partial\theta}\Bigr], \tag{3}$$

where $\mathbf{x}_t$ is obtained by perturbing the rendered image $\mathbf{x}$ with Gaussian noise $\epsilon$ corresponding to the $t$-th timestep of the _forward diffusion process_, and $\mathbf{z}_y$ is the text embedding of the given text prompt $y$. Intuitively, the SDS loss estimates an update direction in which the noised version of the rendered image $\mathbf{x}$ should be moved towards a denser region of the distribution of real images (aligned with the conditional text prompt $y$). By randomly sampling views and backpropagating the gradient in Eq. [3](https://arxiv.org/html/2403.17001v1#S3.E3 "Equation 3 β€£ 3.1 Background β€£ 3 VP3D β€£ VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation") to the parameters $\theta$ through the differentiable rendering function $g$, this approach eventually yields a 3D model that resembles the text.

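One SDS update can be sketched as below. The renderer and noise predictor are deliberately trivial stubs (assumptions for illustration only); in practice the bracketed term of Eq. (3) is backpropagated through the renderer Jacobian $\partial\mathbf{x}/\partial\theta$ by autodiff:

```python
import numpy as np

def sds_pixel_grad(eps_hat, eps, w_t):
    """The bracketed term of Eq. 3: w(t) * (eps_hat - eps).
    A real pipeline backpropagates this through the renderer to theta."""
    return w_t * (eps_hat - eps)

# --- stubs for illustration (not the paper's actual models) ---
def render(theta):
    return theta               # trivially "render" parameters as the image

def eps_predictor(x_t, t):
    return x_t                 # stand-in for a text-conditioned U-Net

rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8))
alpha_t, sigma_t, w_t = 0.8, 0.6, 1.0   # alpha_t^2 + sigma_t^2 = 1
x = render(theta)                        # x = g(theta; c)
eps = rng.standard_normal(x.shape)
x_t = alpha_t * x + sigma_t * eps        # forward-diffused render
grad = sds_pixel_grad(eps_predictor(x_t, 0.5), eps, w_t)
theta = theta - 0.01 * grad              # gradient step (dx/dtheta = I here)
```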
![Image 2: Refer to caption](https://arxiv.org/html/2403.17001v1/)

Figure 2: An overview of the proposed VP3D framework for visual-prompted text-to-3D generation.

### 3.2 Visual-prompted Score Distillation Sampling

Visual Prompt Generation. As aforementioned, score distillation sampling plays a key role in text-to-3D generation. Nevertheless, empirical observations [[27](https://arxiv.org/html/2403.17001v1#bib.bib27), [39](https://arxiv.org/html/2403.17001v1#bib.bib39), [11](https://arxiv.org/html/2403.17001v1#bib.bib11)] reveal that SDS still results in degenerated 3D models, especially when fed intricate text prompts. First, SDS-generated results often suffer from over-saturation. These issues are, in part, attributed to the necessity of employing a large CFG value (i.e., 100) within the SDS framework [[27](https://arxiv.org/html/2403.17001v1#bib.bib27), [39](https://arxiv.org/html/2403.17001v1#bib.bib39)]. A large CFG value narrows the score distillation space down to more text-relevant areas. This can mitigate the divergence of diffusion priors in the optimization process, thereby fostering enhanced stability in 3D representation learning. However, this comes at the cost of less realistic and less diverse generation results, as large CFG values are known to yield over-saturated outputs [[39](https://arxiv.org/html/2403.17001v1#bib.bib39)]. Second, results generated by SDS still face the risk of text-3D misalignment, such as missing key elements in the scene, especially when text prompts contain multiple objects with specific attributes.

A fundamental reason behind the aforementioned issues may lie in the substantial distribution gap between text and 3D modalities. Thus it is non-trivial to directly learn a meaningful 3D scene solely from a single text prompt. This insight motivates us to introduce an additional visual prompt as a bridge to explicitly establish a connection between the text input and the desired 3D output. Particularly, we leverage off-the-shelf text-to-image diffusion models (e.g., Stable Diffusion) to produce a high-fidelity image that faithfully matches the input text prompt and has an extremely realistic appearance. This image is then used as a visual prompt in conjunction with the input text prompt to jointly supervise the 3D generation process.

Score Distillation Sampling with Visual Prompt. We now present visual-prompted score distillation sampling, which distills knowledge from a pre-trained diffusion model to optimize a 3D model by considering inputs not only from a text prompt $y$ but also from a visual prompt $v$. To be clear, we restructure the standard SDS-based text-to-3D pipeline by utilizing an image-conditioned diffusion model [[43](https://arxiv.org/html/2403.17001v1#bib.bib43)] to trigger visual prompt-guided text-to-3D generation. Technically, the visual prompt is first converted to a global image embedding $\mathbf{z}_v$ by the CLIP image encoder [[29](https://arxiv.org/html/2403.17001v1#bib.bib29)] and a following projection network. This image embedding represents the rich content and style of the visual prompt and has the same dimension as the text embedding $\mathbf{z}_y$ used in the pre-trained text-to-image diffusion model (Stable Diffusion). Following SDS, we first add noise $\epsilon$ to the rendered image of the underlying 3D model according to a randomly sampled time step $t$ to get a noised image $\mathbf{x}_t$. Then $\mathbf{x}_t$ is input to the diffusion model along with the conditional visual prompt embedding $\mathbf{z}_v$ and text prompt embedding $\mathbf{z}_y$ to estimate the added noise as follows:

$$\begin{aligned}\tilde{\boldsymbol{\epsilon}}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y},\mathbf{z}_{v})&=\boldsymbol{\epsilon}_{\phi}(\mathbf{x}_{t};t,\emptyset,\emptyset)\\&\quad+s*(\boldsymbol{\epsilon}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y},\lambda*\mathbf{z}_{v})-\boldsymbol{\epsilon}_{\phi}(\mathbf{x}_{t};t,\emptyset,\emptyset)),\end{aligned} \tag{4}$$

where $s$ is the classifier-free guidance weight, $\lambda\in[0,1]$ is the visual prompt condition weight, $\phi$ denotes the parameters of the pre-trained noise predictor $\boldsymbol{\epsilon}_\phi$, and $\boldsymbol{\epsilon}_\phi(\mathbf{x}_t;t,\emptyset,\emptyset)$ denotes the noise prediction without conditioning. In this way, our proposed method explicitly incorporates the visual prompt and text prompt in a unified fashion for text-to-3D generation. Consequently, the gradient of our visual-prompted score distillation sampling (VP-SDS) loss with respect to the 3D model parameters $\theta$ is expressed as:

84
+ $$\nabla_{\theta}\mathcal{L}_{VP\text{-}SDS}=\mathbb{E}_{t,\boldsymbol{\epsilon}}\left[w(t)\left(\tilde{\boldsymbol{\epsilon}}_{\phi}(\mathbf{x}_{t};t,\mathbf{z}_{y},\mathbf{z}_{v})-\boldsymbol{\epsilon}\right)\frac{\partial\mathbf{x}}{\partial\theta}\right],\tag{5}$$
+
+ where $w(t)$ is a scheduling coefficient.
+
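To make Eqs. (4)–(5) concrete, here is a minimal NumPy sketch of one stochastic VP-SDS gradient estimate. Everything here is a toy stand-in, not the paper's implementation: `pred` substitutes for the diffusion U-Net, the one-step noising and the rendering Jacobian `dx_dtheta` are illustrative assumptions, and the default guidance values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_noise(pred, x_t, t, z_y, z_v, s=7.5, lam=0.5):
    """Eq. (4): unconditional prediction plus s times the guidance direction
    conditioned on the text embedding z_y and the lambda-scaled visual
    prompt embedding z_v. `pred` stands in for the diffusion U-Net."""
    eps_uncond = pred(x_t, t, None, None)
    eps_cond = pred(x_t, t, z_y, lam * z_v)
    return eps_uncond + s * (eps_cond - eps_uncond)

def vp_sds_grad(pred, x, dx_dtheta, t, z_y, z_v, w=1.0):
    """Eq. (5): one Monte-Carlo estimate of the VP-SDS gradient, with
    dx_dtheta standing in for the rendering Jacobian dx/dtheta."""
    eps = rng.standard_normal(x.shape)        # injected Gaussian noise
    x_t = x + eps                             # toy forward-diffusion step
    eps_tilde = guided_noise(pred, x_t, t, z_y, z_v)
    return w * (eps_tilde - eps) @ dx_dtheta  # chain rule through the render

# Toy predictor that ignores its conditioning and predicts zero noise.
toy_pred = lambda x_t, t, z_y, z_v: np.zeros_like(x_t)
g = vp_sds_grad(toy_pred, np.zeros(3), np.eye(3), t=10,
                z_y=np.ones(3), z_v=np.ones(3))
```

Note that, as in SDS, no gradient is propagated through the noise predictor itself; the guided residual is simply pushed through the rendering Jacobian.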
+ Comparison with SDS. Comparing the update gradients of SDS (Eq. [3](https://arxiv.org/html/2403.17001v1#S3.E3)) and VP-SDS (Eq. [5](https://arxiv.org/html/2403.17001v1#S3.E5)), SDS is a special case of our VP-SDS obtained by setting $\lambda=0$, i.e., by neglecting the visual prompt condition. In accordance with the theoretical analysis presented in [[27](https://arxiv.org/html/2403.17001v1#bib.bib27), [39](https://arxiv.org/html/2403.17001v1#bib.bib39)], the mode-seeking nature of SDS necessitates a large CFG weight to ensure that the pre-trained diffusion model $\boldsymbol{\epsilon}_{\phi}$ delivers a "sharp" updating direction for the underlying 3D model. Nevertheless, a large CFG weight, in turn, yields poor-quality samples and thus a "degraded" update direction. In contrast, VP-SDS leverages the additional visual prompt to narrow the distillation space of $\boldsymbol{\epsilon}_{\phi}$ down to a more compact region that aligns tightly with the visual prompt. Meanwhile, the distillation space is also refined by the visual prompt, as it reflects realistic appearances with rich details. Therefore, the updating direction derived from our VP-SDS is not only "sharp" but also "fine", which can obtain much better 3D generation results than SDS.
+
+ Notably, a recent work ProlificDreamer [[39](https://arxiv.org/html/2403.17001v1#bib.bib39)] presents variational score distillation (VSD) to address the aforementioned issues in SDS. However, VSD needs to train an additional diffusion model using LoRA [[15](https://arxiv.org/html/2403.17001v1#bib.bib15)] during the optimization process, which incurs a considerable computational overhead compared to SDS. Instead, the additional computational cost of our VP-SDS is nearly negligible, making it computationally more efficient than VSD.
+
+ View-dependent Visual Prompting. Apart from the over-saturation problem discussed above, existing text-to-3D methods are known to also suffer from the multi-view inconsistency problem (e.g., the multi-face Janus problem). This arises from the fact that the underlying prior diffusion model is exclusively trained on individual 2D images and therefore lacks 3D awareness. To alleviate this issue, existing text-to-3D methods [[27](https://arxiv.org/html/2403.17001v1#bib.bib27), [38](https://arxiv.org/html/2403.17001v1#bib.bib38), [17](https://arxiv.org/html/2403.17001v1#bib.bib17), [39](https://arxiv.org/html/2403.17001v1#bib.bib39)] commonly employ a diffusion loss with view-dependent text conditioning, i.e., appending "front view", "side view", or "back view" to the input text based on the location of the randomly sampled camera. Inspired by this, we devise a view-dependent visual prompting strategy to further mitigate the view inconsistency problem in collaboration with our introduced VP-SDS. Technically, given the input visual prompt (assuming it is shot from the front view), we use a view-conditioned 2D diffusion model, Zero-1-to-3 [[19](https://arxiv.org/html/2403.17001v1#bib.bib19)], to transform it into left-side, right-side and backward views. Then we feed different visual prompts into VP-SDS (Eq. [5](https://arxiv.org/html/2403.17001v1#S3.E5)) depending on the corresponding sampled camera viewpoint. For instance, when the azimuth angle $\gamma_{cam}\in[0^{\circ},360^{\circ}]$ of the camera position falls in the range near $180^{\circ}$ ($0^{\circ}$ denotes the front view), we feed the generated back-view counterpart of the input visual prompt into Eq. [5](https://arxiv.org/html/2403.17001v1#S3.E5). In this way, the inherent 3D geometry information contained in the multi-view visual prompts is encoded into the 3D representation learning through view-dependent VP-SDS, leading to better view consistency in the 3D generation.
+
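As a rough illustration of this routing, the sketch below picks which pre-generated view of the visual prompt to use from the sampled azimuth. The 90° bucket boundaries and the left/right orientation convention are assumptions for illustration, not values stated in the paper:

```python
def select_view_prompt(azimuth_deg, prompts):
    """Return the visual prompt whose canonical view is nearest to the
    sampled camera azimuth (0 degrees = front view).
    `prompts` maps the four view names to prompt images/latents."""
    a = azimuth_deg % 360.0
    if a < 45.0 or a >= 315.0:
        return prompts["front"]
    if a < 135.0:
        return prompts["left"]   # assumed: azimuth increases toward the left side
    if a < 225.0:
        return prompts["back"]
    return prompts["right"]

# Toy stand-ins for the four Zero-1-to-3 generated views.
views = {"front": "F", "left": "L", "back": "B", "right": "R"}
```

The selected view then plays the role of $\mathbf{z}_v$ in the VP-SDS update for that camera sample.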
+ ### 3.3 Learning with Reward Feedback
+
+ To further encourage the rendered images of the underlying 3D model to be of high fidelity and well aligned with both the input visual prompt and text prompt, we devise two types of differentiable reward functions that complement the aforementioned VP-SDS objective.
+
+ Human Feedback Reward. Recent practice has shown the capability of improving text-to-image models with human feedback [[41](https://arxiv.org/html/2403.17001v1#bib.bib41)]. Particularly, it first trains a _reward model_ on a large dataset comprised of human assessments of text-image pairs. Such a reward model thus has the ability to measure the quality of the generated samples in terms of both image fidelity and image-text alignment. Consequently, it can then be used to fine-tune diffusion models to maximize the predicted scores of the reward model through differentiable reward functions, leading to better generation results. Motivated by this, we go one step further to utilize the open-sourced reward model $\mathbf{r}$ in ImageReward [[41](https://arxiv.org/html/2403.17001v1#bib.bib41)] for text-to-3D generation. Specifically, we introduce a human feedback reward loss as follows:
+
+ $$\mathcal{L}_{hf\text{-}reward}=\mathbb{E}_{\mathbf{c}}\left[\psi(\mathbf{r}(\mathbf{x},y))\right],\tag{6}$$
+
+ where $\mathbf{x}=g(\theta;\mathbf{c})$ is an image rendered by the underlying 3D model $\theta$ from an arbitrary viewpoint $\mathbf{c}$, $y$ is the conditional text prompt, and $\psi$ is a differentiable reward-to-loss map function as in [[41](https://arxiv.org/html/2403.17001v1#bib.bib41)]. Intuitively, minimizing the loss in Eq. [6](https://arxiv.org/html/2403.17001v1#S3.E6) encourages the rendered image $\mathbf{x}$ to obtain a higher score from the reward model $\mathbf{r}$, which means the underlying 3D model is updated toward a refined direction where the renderings have high appearance fidelity and faithfully match the input text prompt.
+
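As a toy illustration of Eq. (6), the sketch below turns a scalar reward into a loss to minimize. Both the stand-in `reward_model` and the choice of simple negation for $\psi$ are assumptions for illustration; ImageReward's actual reward model and reward-to-loss map differ.

```python
def hf_reward_loss(render, text, reward_model, psi=lambda r: -r):
    """Eq. (6) sketch: lower loss corresponds to a higher predicted reward
    for the rendered image / text prompt pair."""
    return psi(reward_model(render, text))

# Toy stand-in reward model over strings: rewards renders mentioning the text.
toy_reward = lambda render, text: 1.0 if text in render else 0.0
```

In practice this loss is averaged over sampled viewpoints $\mathbf{c}$ and backpropagated through the differentiable renderer into $\theta$.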
+ Visual Consistency Reward. Given that the above human feedback reward only takes into account the input text prompt, we further devise a visual consistency reward to fully leverage the visual prompt as well, since text prompts cannot capture all appearance details. Technically, we adopt a pre-trained self-supervised vision transformer, DINO-ViT [[2](https://arxiv.org/html/2403.17001v1#bib.bib2)], to extract the visual features $F_{dino}(v)$ and $F_{dino}(\mathbf{x})$ of the input visual prompt $v$ and the rendered image $\mathbf{x}$, respectively. Then we penalize the feature-wise difference between them at the visual prompt viewpoint:
+
+ $$\mathcal{L}_{vc\text{-}reward}=\left\|F_{dino}(\mathbf{x})-F_{dino}(v)\right\|^{2}.\tag{7}$$
+
+ By imposing such a visual consistency loss, we encourage the underlying 3D model to adhere to the plausible shape and appearance properties conveyed by the visual prompt.
+
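A minimal sketch of the penalty in Eq. (7), with the DINO-ViT feature extractor replaced by plain feature arrays (an assumption for illustration):

```python
import numpy as np

def vc_reward_loss(feat_render, feat_prompt):
    """Eq. (7): squared L2 distance between the (stand-in) DINO features of
    the rendering and of the visual prompt at the prompt's viewpoint."""
    d = np.asarray(feat_render, dtype=float) - np.asarray(feat_prompt, dtype=float)
    return float(np.sum(d * d))
```

Unlike the human feedback reward, this term is only applied at the visual prompt's own viewpoint, where a pixel-aligned feature comparison is meaningful.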
+ ### 3.4 3D Representation and Training
+
+ Inspired by [[17](https://arxiv.org/html/2403.17001v1#bib.bib17)], we adopt a two-stage coarse-to-fine framework for text-to-3D generation with two different 3D scene representations. At the coarse stage, we leverage Instant-NGP [[24](https://arxiv.org/html/2403.17001v1#bib.bib24)] as 3D representation, which is much faster to optimize compared to the vanilla NeRF [[23](https://arxiv.org/html/2403.17001v1#bib.bib23)] and can recover complex geometry. In the fine stage, we leverage DMTet as the 3D representation to further optimize a high-fidelity mesh and texture. Specifically, the 3D shape and texture represented in DMTet are first initialized from the density field and color field of the coarse stage, respectively [[17](https://arxiv.org/html/2403.17001v1#bib.bib17)].
+
+ During the optimization process in each stage, we first render images from the underlying 3D model through differentiable rasterizers at arbitrary camera poses and optimize the 3D model with a combination of losses:
+
+ $$\mathcal{L}_{fine}=\mathcal{L}_{VP\text{-}SDS}+\lambda_{1}\mathcal{L}_{vc\text{-}reward}+\lambda_{2}\mathcal{L}_{hf\text{-}reward},\tag{8}$$
+
+ where $\lambda_{1},\lambda_{2}$ are the trade-off parameters.
+
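The combined objective of Eq. (8), with the linear $\lambda_2$ schedule reported later in Sec. 4.1, can be sketched as follows; the coarse-stage value of $\lambda_1$ is hard-coded here as an assumption (the fine stage uses 0.01 per Sec. 4.1):

```python
def total_loss(l_vp_sds, l_vc, l_hf, step, n_steps):
    """Eq. (8) with the schedules from Sec. 4.1: lambda_1 fixed (coarse-stage
    value 0.1 used here) and lambda_2 increased linearly from 0.001 to 0.01
    over the course of optimization."""
    lam1 = 0.1
    lam2 = 0.001 + (0.01 - 0.001) * step / max(n_steps - 1, 1)
    return l_vp_sds + lam1 * l_vc + lam2 * l_hf
```

The small, growing weight on the human feedback term lets VP-SDS dominate early geometry formation before reward shaping refines appearance.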
+ Table 1: The quantitative results of our method and baselines on T$^3$Bench [[11](https://arxiv.org/html/2403.17001v1#bib.bib11)].
+
+ 4 Experiments
+ -------------
+
+ In this section, we evaluate the effectiveness of our VP3D for text-to-3D generation via extensive empirical evaluations. We first show both quantitative and qualitative results of VP3D in comparison to existing techniques on the newly released text-to-3D benchmark (T$^3$Bench [[11](https://arxiv.org/html/2403.17001v1#bib.bib11)]). Next, we conduct ablation studies to validate each design in VP3D. Finally, we demonstrate the extended capability of VP3D for stylized text-to-3D generation.
+
+ ### 4.1 Experimental Settings
+
+ Implementation Details. In both the coarse and fine stages, the underlying 3D models are optimized for 5000 iterations using the Adam optimizer with a 0.001 learning rate. The rendering resolutions are set to $128\times 128$ and $512\times 512$ for the coarse and fine stage, respectively. We implement the underlying Instant-NGP and DMTet 3D representations mainly based on the Stable-DreamFusion codebase [[36](https://arxiv.org/html/2403.17001v1#bib.bib36)]. $\lambda_{1}$ is set to 0.1 in the coarse stage and 0.01 in the fine stage. $\lambda_{2}$ is linearly increased from 0.001 to 0.01 during the optimization process. The visual prompt condition weight $\lambda$ is set to 0.5 in all experiments.
+
+ Evaluation Protocol. Existing text-to-3D generation works commonly evaluate their methods with the CLIP R-Precision score [[16](https://arxiv.org/html/2403.17001v1#bib.bib16)], an automated metric for the consistency of rendered images with respect to the input text. However, this text-image alignment-based metric cannot faithfully represent the overall 3D quality. For example, CLIP-based text-to-3D methods can achieve high CLIP R-Precision scores even if the resulting 3D scenes are unrealistic and severely distorted [[27](https://arxiv.org/html/2403.17001v1#bib.bib27)]. Taking this into account, we instead conduct experiments on a newly open-sourced benchmark, T$^3$Bench [[11](https://arxiv.org/html/2403.17001v1#bib.bib11)], which is the first comprehensive text-to-3D benchmark containing 300 diverse text prompts of three categories (single object, single object with surroundings, and multiple objects).
+
+ T$^3$Bench provides two automatic metrics (quality and alignment) based on the rendered multi-view images to assess the subjective quality and text alignment. The quality metric utilizes a combination of multi-view text-image scores and regional convolution to effectively identify quality and view inconsistency. The alignment metric employs a 3D captioning model and a large language model (i.e., GPT-4) to assess text-3D consistency. Following this protocol, we also leverage the quality and alignment metrics to quantitatively compare our VP3D against baseline methods.
+
+
+ Figure 3: Comparisons on qualitative results of our VP3D with other text-to-3D techniques on T$^3$Bench [[11](https://arxiv.org/html/2403.17001v1#bib.bib11)]. The prompts are (a) "A fuzzy pink flamingo lawn ornament", (b) "A blooming potted orchid with purple flowers", (c) "A blue butterfly on a pink flower", (d) "A lighthouse on a rocky shore", (e) "Hot popcorn jump out from the red striped popcorn maker", (f) "A chef is making pizza dough in the kitchen". (a-b), (c-d), and (e-f) belong to the _Single Object_, _Single Object with Surr._ and _Multi Objects_ categories in T$^3$Bench, respectively.
+
+ Baselines. To evaluate our method, we compare our VP3D with six state-of-the-art text-to-3D generation methods: DreamFusion [[27](https://arxiv.org/html/2403.17001v1#bib.bib27)], SJC [[38](https://arxiv.org/html/2403.17001v1#bib.bib38)], Latent-NeRF [[22](https://arxiv.org/html/2403.17001v1#bib.bib22)], Fantasia3D [[4](https://arxiv.org/html/2403.17001v1#bib.bib4)], Magic3D [[17](https://arxiv.org/html/2403.17001v1#bib.bib17)] and ProlificDreamer [[39](https://arxiv.org/html/2403.17001v1#bib.bib39)]. Specifically, DreamFusion [[27](https://arxiv.org/html/2403.17001v1#bib.bib27)] first introduces score distillation sampling (SDS), which enables leveraging a 2D diffusion model (Imagen [[14](https://arxiv.org/html/2403.17001v1#bib.bib14)]) to optimize a NeRF [[23](https://arxiv.org/html/2403.17001v1#bib.bib23)]. SJC [[38](https://arxiv.org/html/2403.17001v1#bib.bib38)] concurrently addresses the out-of-distribution problem in SDS and utilizes an open-sourced diffusion model (Stable Diffusion) to optimize a voxel NeRF. Latent-NeRF [[22](https://arxiv.org/html/2403.17001v1#bib.bib22)] first brings NeRF to the latent space to harmonize with latent diffusion models, then refines it in pixel space. Magic3D [[17](https://arxiv.org/html/2403.17001v1#bib.bib17)] extends DreamFusion with a coarse-to-fine framework that first optimizes a low-resolution NeRF model and then a high-resolution DMTet model via SDS. Fantasia3D [[4](https://arxiv.org/html/2403.17001v1#bib.bib4)] disentangles SDS-based 3D learning into geometry and appearance learning. ProlificDreamer [[39](https://arxiv.org/html/2403.17001v1#bib.bib39)] upgrades DreamFusion with a variational score distillation (VSD) loss that treats the underlying 3D scene as a random variable instead of a single point as in SDS.
+
+ ### 4.2 Quantitative Results
+
+ The quantitative performance comparisons of different methods for text-to-3D generation are summarized in Table [1](https://arxiv.org/html/2403.17001v1#S3.T1). Overall, our VP3D consistently achieves better performance than existing techniques across all evaluation metrics and prompt categories. Remarkably, VP3D achieves absolute quality-alignment average score improvements of 4.1%, 3.3%, and 4.5% over the best competitor ProlificDreamer across the three text prompt categories, respectively, which validates the effectiveness of our overall proposal. More importantly, while VP3D employs the same NeRF & DMTet 3D representation and coarse-to-fine training scheme as the baseline method Magic3D, it significantly outperforms Magic3D, achieving average scores of 53.5%, 48.1%, and 40.3% versus Magic3D's 37.0%, 35.4%, and 25.7%. The results generally highlight the key advantage of introducing visual prompts in lifting 2D diffusion models to perform text-to-3D generation.
+
+
+ Figure 4: Stylized text-to-3D generation results of our VP3D.
+
+ Specifically, DreamFusion and SJC enable the zero-shot learning of implicit 3D models by distilling prior knowledge from 2D diffusion models. However, the generated 3D scenes have relatively low quality and alignment scores, especially in complex scenarios where the text prompt contains multiple objects or surroundings. Latent-NeRF employs score distillation in the latent space and then back to pixel space to further refine the 3D model, leading to better results. The aforementioned three methods only utilize implicit 3D representations (NeRFs). In contrast, Magic3D adopts textured mesh DMTet as 3D representation for enabling high-resolution optimization and exhibits better performances across all three prompt categories. Fantasia3D also capitalizes on DMTet for geometry learning and then leverages BRDF for appearance learning in a disentangled manner. While Fantasia3D achieves better average scores than DreamFusion and SJC, it fails to create high-fidelity results in complex scenes (e.g., β€œmultiple objects”). ProlificDreamer further boosts the performance by training an additional diffusion model during the optimization process to realize a principled particle-based variational score distillation loss. However, our VP3D still outperforms ProlificDreamer across all evaluation metrics and prompt sets, which confirms the effectiveness of our VP3D.
+
+ ### 4.3 Qualitative Results
+
+ The qualitative comparisons for text-to-3D generation are presented in Figure [3](https://arxiv.org/html/2403.17001v1#S4.F3). As can be seen, our VP3D generally produces superior 3D scenes with plausible geometry and realistic textures when compared with the baseline methods. Specifically, DreamFusion suffers from severe over-saturation and has difficulty generating complex geometry. Magic3D and Latent-NeRF slightly alleviate these issues through higher-resolution DMTet optimization and pixel-space refinement, respectively. While Fantasia3D and SJC can generate richer textures than DreamFusion, the geometric quality of their generated 3D scenes falls short of expectations. Notably, ProlificDreamer trains an additional diffusion model during the optimization process to perform variational score distillation (VSD) instead of SDS, achieving satisfactory results on single objects. However, the use of VSD at times introduces excessive irrelevant information or geometry noise in more complex scenarios. In contrast, we can clearly observe that the 3D scenes generated by VP3D faithfully match the input text prompt with plausible geometry and realistic appearance, which demonstrates the superiority of VP3D over state-of-the-art methods and its ability to generate high-quality 3D content.
+
+ ### 4.4 Ablation Study
+
+ Here we investigate how each design in our VP3D influences the overall generation performance. We depict the qualitative results of each ablated run in Figure [5](https://arxiv.org/html/2403.17001v1#S4.F5). $\mathcal{L}_{SDS}$ is our baseline model that employs the vanilla score distillation sampling loss. As can be seen, the generated 3D scene is over-saturated and its geometry is implausible. Instead, when $\mathcal{L}_{VP\text{-}SDS}$ is employed, the generation quality is clearly enhanced in terms of both geometry and appearance, which highlights the critical effectiveness of our proposed visual-prompted score distillation sampling. Nevertheless, the resulting 3D scenes of $\mathcal{L}_{VP\text{-}SDS}$ alone are still not fully satisfactory. By utilizing the additional visual consistency and human feedback reward functions $\mathcal{L}_{vc\text{-}reward}$ (Eq. [7](https://arxiv.org/html/2403.17001v1#S3.E7)) and $\mathcal{L}_{hf\text{-}reward}$ (Eq. [6](https://arxiv.org/html/2403.17001v1#S3.E6)), the generation quality is gradually improved. These results validate the effectiveness of the two complementary reward functions.
+
+ ### 4.5 Extension to Stylized Text-to-3D Generation
+
+ In this section, we demonstrate another advantage of our VP3D: its remarkable versatility in 3D generation, as it can be readily adapted to the new task of stylized text-to-3D generation. The main difference is that the visual prompt is no longer generated from the text prompt but taken from a user-specified reference image. We also empirically discard the loss in Eq. [6](https://arxiv.org/html/2403.17001v1#S3.E6) to eliminate the strict text-image alignment constraint. In this way, our VP3D can integrate the visual cues contained in the reference image into text-to-3D generation and produce a stylized 3D asset. This asset not only semantically aligns with the text prompt but also reflects the visual and geometric properties of the reference image.
+
+
+ Figure 5: Comparisons on qualitative results by using different ablated runs of our VP3D. The text prompts are (a) β€œA broken tablespoon lies next to an empty sugar bowl” and (b) β€œA chameleon perched on a tree branch”.
+
+ Figure [4](https://arxiv.org/html/2403.17001v1#S4.F4) shows our stylized text-to-3D generation results. Our VP3D can generate diverse and stylized 3D assets by pairing different visual prompts with the same text prompt. As shown in Figure [4](https://arxiv.org/html/2403.17001v1#S4.F4) (a-b), the generated result is semantically a rabbit that adheres to the text prompt while inheriting visual cues from the visual prompt. To be clear, the generated 3D rabbits have a largely consistent geometric pose and appearance texture with the object in the visual prompt. For example, in Figure [4](https://arxiv.org/html/2403.17001v1#S4.F4) (b), the generated rabbit mirrors the "hugging pose" of the reference image and also has the same style of "crescent-shaped eyebrows" and "yellow plaid jacket" as in the reference image. In Figure [4](https://arxiv.org/html/2403.17001v1#S4.F4) (c-d), we showcase the versatility of our VP3D by seamlessly blending styles from different visual prompts. Taking Figure [4](https://arxiv.org/html/2403.17001v1#S4.F4) (d) as an instance, we use the leopard image as the visual prompt in the coarse stage and then replace it with an oil painting image in the fine stage. Our VP3D finally produces a 3D rabbit that not only has a pose consistent with the leopard but also a colorful oil-painting-style texture. This stylized 3D generation ability distinguishes our VP3D from previous text-to-3D approaches and can lead to more creative and diverse 3D content creation.
+
+ 5 Conclusion
+ ------------
+
+ In this work, we propose VP3D, a new paradigm for text-to-3D generation that leverages 2D visual prompts. We first capitalize on 2D diffusion models to generate a high-quality image from the input text. This image then acts as a visual prompt to strengthen 3D model learning with our devised visual-prompted score distillation sampling. Meanwhile, we introduce additional human feedback and visual consistency reward functions to encourage semantic and appearance consistency between the 3D model and the input visual and text prompts. Both qualitative and quantitative comparisons on the T$^3$Bench benchmark demonstrate the superiority of our VP3D over existing SOTA techniques.
+
+ References
+ ----------
+
+ * Bain et al. [2021] Max Bain, Arsha Nagrani, GΓΌl Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In _ICCV_, 2021.
178
+ * Caron et al. [2021] Mathilde Caron, Hugo Touvron, Ishan Misra, HervΓ© JΓ©gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In _ICCV_, 2021.
179
+ * Chen et al. [2023a] Jingwen Chen, Yingwei Pan, Ting Yao, and Tao Mei. Controlstyle: Text-driven stylized image generation using diffusion priors. In _ACM Multimedia_, 2023a.
180
+ * Chen et al. [2023b] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation. In _ICCV_, 2023b.
181
+ * Chen et al. [2019a] Yang Chen, Yingwei Pan, Ting Yao, Xinmei Tian, and Tao Mei. Animating your life: Real-time video-to-animation translation. In _ACM MM_, 2019a.
182
+ * Chen et al. [2019b] Yang Chen, Yingwei Pan, Ting Yao, Xinmei Tian, and Tao Mei. Mocycle-gan: Unpaired video-to-video translation. In _ACM MM_, 2019b.
183
+ * Chen et al. [2023c] Yang Chen, Jingwen Chen, Yingwei Pan, Xinmei Tian, and Tao Mei. 3d creation at your fingertips: From text or image to 3d assets. In _ACM MM_, 2023c.
184
+ * Chen et al. [2023d] Yang Chen, Yingwei Pan, Yehao Li, Ting Yao, and Tao Mei. Control3d: Towards controllable text-to-3d generation. In _ACM MM_, 2023d.
185
+ * Deng et al. [2023] Congyue Deng, Chiyu Jiang, Charles R Qi, Xinchen Yan, Yin Zhou, Leonidas Guibas, Dragomir Anguelov, et al. Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors. In _CVPR_, 2023.
186
+ * He et al. [2022] Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. Latent video diffusion models for high-fidelity video generation with arbitrary lengths. _arXiv preprint arXiv:2211.13221_, 2022.
187
+ * He et al. [2023] Yuze He, Yushi Bai, Matthieu Lin, Wang Zhao, Yubin Hu, Jenny Sheng, Ran Yi, Juanzi Li, and Yong-Jin Liu. T 3 bench: Benchmarking current progress in text-to-3d generation. _arXiv preprint arXiv:2310.02977_, 2023.
188
+ * Ho and Salimans [2022] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In _NeurIPS Workshop_, 2022.
189
+ * Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _NeurIPS_, 2020.
190
+ * Ho et al. [2022] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. _arXiv preprint arXiv:2210.02303_, 2022.
191
+ * [15] EJ Hu, Y Shen, P Wallis, Z Allen-Zhu, Y Li, S Wang, L Wang, and W Chen. Low-rank adaptation of large language models, arxiv, 2021. _arXiv preprint arXiv:2106.09685_, 10.
192
* Jain et al. [2022] Ajay Jain, Ben Mildenhall, Jonathan T Barron, Pieter Abbeel, and Ben Poole. Zero-shot text-guided object generation with dream fields. In _CVPR_, 2022.
* Lin et al. [2023] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In _CVPR_, 2023.
* Liu et al. [2024] Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Zexiang Xu, Hao Su, et al. One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization. In _NeurIPS_, 2024.
* Liu et al. [2023] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In _ICCV_, 2023.
* Luo et al. [2023] Jianjie Luo, Yehao Li, Yingwei Pan, Ting Yao, Jianlin Feng, Hongyang Chao, and Tao Mei. Semantic-conditional diffusion networks for image captioning. In _CVPR_, 2023.
* Melas-Kyriazi et al. [2023] Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, and Andrea Vedaldi. Realfusion: 360° reconstruction of any object from a single image. In _CVPR_, 2023.
* Metzer et al. [2023] Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, and Daniel Cohen-Or. Latent-nerf for shape-guided generation of 3d shapes and textures. In _CVPR_, 2023.
* Mildenhall et al. [2020] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In _ECCV_, 2020.
* MΓΌller et al. [2022] Thomas MΓΌller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. _ACM Transactions on Graphics (ToG)_, 2022.
* Nichol and Dhariwal [2021] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In _ICML_, 2021.
* Pan et al. [2017] Yingwei Pan, Zhaofan Qiu, Ting Yao, Houqiang Li, and Tao Mei. To create what you tell: Generating videos from captions. In _ACM MM_, 2017.
* Poole et al. [2023] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In _ICLR_, 2023.
* Qian et al. [2024] Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al. Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors. In _ICLR_, 2024.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _ICML_, 2021.
* Ramesh et al. [2022] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022.
* Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and BjΓΆrn Ommer. High-resolution image synthesis with latent diffusion models. In _CVPR_, 2022.
* Saharia et al. [2022] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. In _NeurIPS_, 2022.
* Schuhmann et al. [2022] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. In _NeurIPS_, 2022.
* Singer et al. [2023] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. In _ICLR_, 2023.
* Sohl-Dickstein et al. [2015] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _ICML_, 2015.
* Tang [2022] Jiaxiang Tang. Stable-dreamfusion: Text-to-3d with stable-diffusion. https://github.com/ashawkey/stable-dreamfusion, 2022.
* Tang et al. [2023] Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, and Dong Chen. Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior. In _ICCV_, 2023.
* Wang et al. [2023a] Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. In _CVPR_, 2023a.
* Wang et al. [2023b] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. In _NeurIPS_, 2023b.
* Xu et al. [2023a] Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Yi Wang, and Zhangyang Wang. Neurallift-360: Lifting an in-the-wild 2d photo to a 3d object with 360° views. In _CVPR_, 2023a.
* Xu et al. [2023b] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. _arXiv preprint arXiv:2304.05977_, 2023b.
* Yang et al. [2023] Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, and Tao Mei. 3dstyle-diffusion: Pursuing fine-grained text-driven 3d stylization with 2d diffusion models. In _ACM MM_, 2023.
* Ye et al. [2023] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. _arXiv preprint arXiv:2308.06721_, 2023.