Title: Fashion-VDM: Video Diffusion Model for Virtual Try-On
URL Source: https://arxiv.org/html/2411.00225
Published Time: Tue, 05 Nov 2024 02:50:55 GMT
Abstract.
We present Fashion-VDM, a video diffusion model (VDM) for generating virtual try-on videos. Given an input garment image and person video, our method aims to generate a high-quality try-on video of the person wearing the given garment, while preserving the person’s identity and motion. Image-based virtual try-on has shown impressive results; however, existing video virtual try-on (VVT) methods are still lacking garment details and temporal consistency. To address these issues, we propose a diffusion-based architecture for video virtual try-on, split classifier-free guidance for increased control over the conditioning inputs, and a progressive temporal training strategy for single-pass 64-frame, 512px video generation. We also demonstrate the effectiveness of joint image-video training for video try-on, especially when video data is limited. Our qualitative and quantitative experiments show that our approach sets the new state-of-the-art for video virtual try-on. For additional results, visit our project page: https://johannakarras.github.io/Fashion-VDM.
Virtual Try-On, Video Synthesis, Diffusion Models
Submission ID: 561 · Journal: TOG · Journal year: 2024 · Copyright: rights retained · Conference: SIGGRAPH Asia 2024 Conference Papers (SA Conference Papers ’24), December 3–6, 2024, Tokyo, Japan · DOI: 10.1145/3680528.3687623 · ISBN: 979-8-4007-1131-2/24/12 · CCS: Computing methodologies → Computer graphics; Computing methodologies → Computer vision
Figure 1. Fashion-VDM. Given an input garment image and a person video, Fashion-VDM generates a video of the person virtually trying on the given garment, while preserving their original identity and motion.
1. Introduction
With the popularity of online clothing shopping and social media marketing, there is a strong demand for virtual try-on methods. Given a garment image and a person image, virtual try-on aims to show how the person would look wearing the given garment. In this paper, we explore video virtual try-on, where the input is a garment image and person video. The benefit of a video virtual try-on (VVT) experience is that it would depict how a garment looks at different angles and how it drapes and flows in motion.
VVT is a challenging task, as it requires synthesizing realistic try-on frames from different viewpoints, while generating realistic fabric dynamics (e.g. folds and wrinkles) and maintaining temporal consistency between frames. Additional difficulty arises when the person and garment poses vary significantly, as this creates occluded garment and person regions that must be hallucinated. Another challenge is the scarcity of try-on video data. Perfect ground-truth data (i.e. two videos of different people wearing the same garment and moving in exactly the same way) is difficult and expensive to acquire. In general, available human video data, such as UBC Fashion (Zablotskaia et al., 2019), is much scarcer and less diverse than image data, such as LAION-5B (Schuhmann et al., 2022).
Past approaches to virtual try-on typically leverage dense flow fields to explicitly warp the source garment pixels onto the target person frames (Wen-Jiin Tsai, 2023; Zhong et al., 2021; Jiang et al., 2022; Dong et al., 2022; Haoye Dong and Yin, 2019). However, these flow-based approaches can introduce artifacts due to occlusions in the source frame, large pose deformations, and inaccurate flow estimates. Moreover, these methods are incapable of producing realistic and fine-grained fabric dynamics, such as wrinkling, folding, and flowing, as these details are not captured by appearance flows. A recent breakthrough in image-based virtual try-on uses a diffusion model(Zhu et al., 2023), which implicitly warps the input garment under large pose gaps and heavy occlusion using spatial cross-attention. However, directly applying (Zhu et al., 2023) or other image-based try-on methods for VVT in a frame-by-frame manner creates severe flickering artifacts and temporal inconsistencies.
Diffusion models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Dhariwal and Nichol, 2021; Ho et al., 2020; Song et al., 2020) have shown promising results on various video synthesis tasks, such as text-to-video generation (Ho et al., 2022b) and image-to-video generation (Karras et al., 2023; Hu et al., 2023; Guo et al., 2023). However, a key challenge is generating longer videos while maintaining temporal consistency and adhering to computational and memory constraints. Previous works use cascaded approaches (Ho et al., 2022a), sliding-window inference (Ho et al., 2022b; Xu et al., 2023), past-frame conditioning (Harvey et al., 2022; Lee et al., 2023; Mei and Patel, 2023), and transitions or interpolation (Chen et al., 2023a; Wang et al., 2023b). Yet, even with such schemes, longer videos are temporally inconsistent, contain artifacts, and lack realistic textures and details. We argue that, similar to context modeling for LLMs (Chen et al., 2023b), short-video generation models can be naturally extended to long-video generation by a temporally progressive finetuning scheme, without introducing additional inference passes or multiple networks.
A potential option for diffusion-based VVT is to apply an animation model to a single try-on image generated by an image try-on model. However, as this is not an end-to-end trained system, any image try-on errors will accumulate throughout the video. We argue that a single VVT model would overcome this issue by 1) injecting explicit person and garment conditioning information into the model and 2) having an end-to-end training objective.
We present Fashion-VDM, the first VVT method to synthesize temporally consistent, high-quality try-on videos, even on diverse poses and difficult garments. Fashion-VDM is a single-network, diffusion-based approach. To maintain temporal smoothness, we inflate the M&M VTO(Zhu et al., 2024) architecture with 3D-convolution and temporal attention blocks. We maintain temporal consistency in videos up to 64-frames long with a single network by training in a temporally progressive manner. To address input person and garment fidelity, we introduce split classifier-free guidance (split-CFG) that enables increased control over each input signal. In our experiments, we also show that split-CFG increases realism, temporal consistency, and garment fidelity, compared to ordinary or dual CFG. Additionally, we increase garment fidelity and realism by training jointly with image and video data. Our results show that Fashion-VDM surpasses benchmark methods by a large margin and synthesizes state-of-the-art try-on videos.
Figure 2. Fashion-VDM Architecture. Given a noisy video $z_t$ at diffusion timestep $t$, a forward pass of Fashion-VDM computes a single denoising step to get the denoised video $z'_{t-1}$. The input person video is preprocessed into person poses $J_p$ and clothing-agnostic frames $I_a$, while the garment image $I_g$ is preprocessed into the garment segmentation $S_g$ and garment poses $J_g$ (Section 3.3). The architecture follows (Zhu et al., 2024), except the main UNet contains 3D-Conv and temporal attention blocks to maintain temporal consistency. Additionally, we inject temporal down/upsampling blocks during 64-frame temporal training. The noisy video $z_t$ is encoded by the main UNet, and the conditioning signals $S_g$ and $I_a$ are encoded by separate UNet encoders. In the 8 DiT blocks at the lowest resolution of the UNet, the garment conditioning features are cross-attended with the noisy video features, while the spatially-aligned clothing-agnostic features $z_a$ are directly concatenated with the noisy video features.
$J_g$ and $J_p$ are encoded by single linear layers, then concatenated to the noisy features in all 2D spatial layers of the UNet.
2. Related Works
2.1. Video Diffusion Models
Many early video diffusion models (VDMs) (Ho et al., 2022b) adapt text-to-image diffusion models to generate batches of consecutive video frames, often employing temporal blocks within the denoising UNet architecture to learn temporal consistency (Ho et al., 2022b, a). Latent VDMs (Mei and Patel, 2023; Blattmann et al., 2023; Gu et al., 2023; Guo et al., 2023; Karras et al., 2023; He et al., 2022b; Andreas Blattmann, 2023; Wang et al., 2023a) reduce the computational complexity of standard VDMs by performing diffusion in the latent space.
To achieve longer videos and increased spatial resolution, (Ho et al., 2022a) proposes a cascade of temporal and spatial upsampling UNets. Other methods employ similar cascaded schemes for long video generation (Wang et al., 2023a). However, cascaded strategies require multiple networks and inference runs. Another strategy is to synthesize sparse keyframes, then use frame interpolation (Mei and Patel, 2023), past-frame conditioning (He et al., 2022b), temporally overlapping frames (Xu et al., 2023), or predicted transitions between frames (Chen et al., 2023a; Wang et al., 2023b) to achieve longer, temporally consistent videos. Unlike past long-video VDMs, Fashion-VDM is a unified (non-cascaded) diffusion model that generates a video up to 64 frames long in a single inference run, thereby reducing memory requirements and inference time.
2.2. Image and Pose Guidance
Many VDMs are text-conditioned (Mei and Patel, 2023; Ho et al., 2022a; Blattmann et al., 2023; Andreas Blattmann, 2023), and there is increasing interest in image-conditioned VDMs (Karras et al., 2023; Hu et al., 2023; Guo et al., 2023). To maintain the exact details of input images, some methods require inference-time finetuning (Karras et al., 2023; Andreas Blattmann, 2023; Guo et al., 2023). In contrast, Fashion-VDM requires no additional finetuning at test time to maintain high-quality details of the input person and garment.
Some recent diffusion-based animation methods are both image- and pose-conditioned(Karras et al., 2023; Girdhar et al., 2023; Hu et al., 2023; Guo et al., 2023; Xu et al., 2023). DreamPose uses a pre-trained (latent) Stable Diffusion model without temporal layers to generate videos in a frame-by-frame manner(Karras et al., 2023). More recently, Animate Anyone(Hu et al., 2023) encodes the image using ReferenceNet and their diffusion model incorporates spatial, cross, and temporal attention layers to maintain consistency and preserve details, while MagicAnimate(Xu et al., 2023) introduces an appearance encoder to maintain the fidelity across the frames and generates a long video using temporally overlapping segments. In contrast, Fashion-VDM is a non-latent, temporally-aware video diffusion model, capable of synthesizing up to 64 consecutive frames in a single inference pass.
2.3. Virtual Try-On
Traditional image virtual try-on approaches first warp the target garment onto the input person, then refine the resulting image (Han et al., 2018; Choi et al., 2021; Lee et al., 2022; Bai et al., 2022; He et al., 2022a; Men et al., 2020; Ren et al., 2022; Yang et al., 2020; Yu et al., 2019; Zhang et al., 2021; Cui et al., 2023). Similarly, for video virtual try-on (VVT), past methods often rely on multiple networks to predict intermediate values, such as optical flow, background masks, and occlusion masks, to warp the target garment to the person in each frame of the video(Wen-Jiin Tsai, 2023; Jiang et al., 2022; Dong et al., 2022; Haoye Dong and Yin, 2019; Zhong et al., 2021). However, inaccuracies in these intermediate values lead to artifacts and misalignment. Some image try-on approaches incorporate optical flow estimation to alleviate this misalignment(Bai et al., 2022; Lee et al., 2022; Lewis et al., 2021; Xintong Han and Scott, 2020). For VVT, MV-TON(Zhong et al., 2021) proposes a memory refinement module to correct inaccurate details in the generated frames by encoding past frames into latent space, then using this as external memory to generate new frames. ClothFormer(Jiang et al., 2022) estimates an occlusion mask to correct for flow inaccuracies. Current state-of-the-art VVT methods achieve improved results by utilizing attention modules in the warping and fusing phases(Jiang et al., 2022; Wen-Jiin Tsai, 2023).
In contrast to earlier flow-based methods, TryOnDiffusion (Zhu et al., 2023) leverages a diffusion-based method conditioned on pose and garment for image virtual try-on. WarpDiffusion (Zhang et al., 2023) reduces the computational cost and data requirements by bridging warping-based and diffusion-based virtual try-on methods. StableVITON (Kim et al., 2023) avoids warping by finetuning pre-trained latent diffusion (Rombach et al., 2022) encoders for input person and garment conditioning via cross-attention blocks. Mix-and-Match (M&M) VTO (Zhu et al., 2024) extends the single-garment try-on task to a mix-and-match try-on application with a novel person-embedding finetuning strategy.
2.4. Image and Video Training
Video datasets are often smaller and less diverse than image datasets, as images are more abundant online. To alleviate this problem, (Ho et al., 2022b, a; Xu et al., 2023) propose jointly leveraging image and video data for training. VDM (Ho et al., 2022b) and Imagen Video (Ho et al., 2022a) implement joint training by applying a temporal mask to image batches. MagicAnimate (Xu et al., 2023) applies joint training during the pretraining stage of its appearance encoder and pose ControlNet. We improve upon existing joint training schemes (see Section 3.7), ultimately demonstrating the benefit of joint image and video training for video try-on.
Figure 3. Split-CFG Ablation. We compare different split-CFG weights, where $(w_\emptyset, w_p, w_g, w_{\text{full}})$ correspond to the unconditional guidance, person-only guidance, person-and-garment guidance, and full guidance terms, respectively.
3. Method
We propose Fashion-VDM, a unified video diffusion model for synthesizing state-of-the-art virtual try-on (VTO) videos up to 64 frames long at 512px resolution. Our method introduces an end-to-end diffusion-based VVT architecture based on (Zhu et al., 2024) (Section 3.4), split classifier-free guidance (split-CFG) for increased garment fidelity (Section 3.5), progressive temporal training for long-video generation (Section 3.6), and joint image-video training for improved garment fidelity (Section 3.7).
3.1. Problem Formulation
In video virtual try-on, the input is a video $\{I_p^0, I_p^1, \ldots, I_p^{N-1}\}$ of a person $p$ consisting of $N$ frames and a single garment image $I_g$ of another person wearing garment $g$. The goal is to synthesize a video $\{I_{tr}^0, I_{tr}^1, \ldots, I_{tr}^{N-1}\}$, where $I_{tr}^i$ denotes the $i$-th try-on video frame, which preserves the identity and motion of person $p$ wearing garment $g$.
3.2. Preliminary: M&M VTO
Our VTO-UDiT network architecture is inspired by (Zhu et al., 2024), a state-of-the-art multi-garment image try-on diffusion model that also enables text-based control of garment layout. VTO-UDiT is represented by

(1) $\hat{x}_0 = x_\theta(z_t, t, c_{tr})$

where $\hat{x}_0$ is the try-on image predicted by the network $x_\theta$, parameterized by $\theta$, at diffusion timestep $t$; $z_t$ is the noisy image, and $c_{tr}$ are the conditioning inputs. VTO-UDiT is parameterized in v-space, following (Salimans and Ho, 2022). Each conditioning input is encoded separately by fully convolutional encoders and processed at the lowest resolution of the main UNet via DiT blocks (Peebles and Xie, 2022), where conditioning features are processed with self-attention or cross-attention modules. However, while it shows impressive results for image try-on, VTO-UDiT cannot reason about temporal consistency when applied to video inputs.
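Since VTO-UDiT is v-parameterized, the predicted clean sample can be recovered from the network's v-space output via the standard relations of Salimans and Ho (2022). A minimal sketch (the function name and NumPy usage are ours, not the paper's):

```python
import numpy as np

def x0_from_v(z_t, v, alpha_bar_t):
    """Recover the predicted clean sample x0 from a v-space prediction.

    Standard v-parameterization (Salimans and Ho, 2022):
        z_t = sqrt(a)*x0 + sqrt(1-a)*eps
        v   = sqrt(a)*eps - sqrt(1-a)*x0
    which gives x0 = sqrt(a)*z_t - sqrt(1-a)*v, with a = alpha_bar_t.
    """
    a = np.sqrt(alpha_bar_t)
    s = np.sqrt(1.0 - alpha_bar_t)
    return a * z_t - s * v
```

Substituting the definitions of $z_t$ and $v$ confirms that the expression reduces to $x_0$ exactly, for any noise level.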
3.3. Input Preprocessing
From the input video frames, we compute the clothing-agnostic frames $I_a = \{I_a^0, I_a^1, \ldots, I_a^{N-1}\}$, person poses $J_p = \{J_p^0, J_p^1, \ldots, J_p^{N-1}\}$, and person masks $\{M_p^0, M_p^1, \ldots, M_p^{N-1}\}$. The clothing-agnostic frames mask out the entire bounding-box area of the person in the frame, except for the visible body regions (head, hands, legs, and shoes), following TryOnDiffusion (Zhu et al., 2023). Optionally, the clothing-agnostic frames can keep the original bottoms, if doing top try-on only.

From the input garment image $I_g$, we extract the garment segmentation image $S_g$, garment pose $J_g$, and garment mask $M_g$. The garment pose refers to the pose keypoints of the person wearing the garment before segmentation. We channel-wise concatenate $M_p^i$ to $I_a^i$ and $M_g$ to $I_g$. Poses, masks, and segmentations are computed using an in-house equivalent of Graphonomy (Gong et al., 2019). Both person and garment pose keypoints are preprocessed to be spatially aligned with the person frames and garment image, respectively.
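The channel-wise concatenation of masks onto their RGB counterparts can be sketched as follows (the NHWC layout and shapes here are assumptions for illustration, not details stated in the paper):

```python
import numpy as np

def concat_mask(frames, masks):
    """Channel-wise concatenate a single-channel mask onto RGB inputs.

    frames: (N, H, W, 3) clothing-agnostic frames (or a garment image with
    a leading axis of length 1); masks: (N, H, W, 1) binary masks.
    Returns (N, H, W, 4) conditioning tensors.
    """
    assert frames.shape[:3] == masks.shape[:3], "spatial dims must match"
    return np.concatenate([frames, masks], axis=-1)
```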
3.4. Architecture
Our overall architecture is depicted in Figure 2. We adapt the VTO-UDiT architecture (Zhu et al., 2024) by inflating the two lowest-resolution downsampling and upsampling blocks with temporal attention and 3D-Conv blocks. Specifically, after the 2D-Conv layers, we add a 3D-Conv block, a temporal attention block, and a temporal mixing block that linearly combines spatial and temporal features, as proposed in (Blattmann et al., 2023). In the temporal mixing blocks, the features $z_s$ produced by the spatial attention layer are linearly combined with the features $z_t$ produced by the temporal attention layer via a learned weighting parameter $\alpha$:

(2) $z'_t = \alpha \cdot z_s + (1 - \alpha) \cdot z_t$
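Equation 2 amounts to a single learned blend of the two feature streams. A minimal sketch (here $\alpha$ is a plain float for illustration, whereas in the model it is a learned parameter):

```python
import numpy as np

def temporal_mix(z_s, z_t, alpha):
    """Blend spatial- and temporal-attention features (Eq. 2).

    z_s: features after the spatial attention layer.
    z_t: features after the temporal attention layer (same shape).
    alpha: learned mixing weight in [0, 1]; alpha=1 ignores the temporal
    path entirely, which lets the block start out behaving like the
    pretrained spatial model.
    """
    return alpha * z_s + (1.0 - alpha) * z_t
```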
During 64-frame training (see Section 3.6), we further inflate the model with temporal downsampling and upsampling blocks with factor 2, to reduce the memory footprint of the model. These blocks are added before and after the lowest-resolution spatial blocks, respectively.
The person and garment poses are encoded and used to condition all 2D spatial layers in the UNet. The 8 Diffusion Transformer (DiT) blocks(Peebles and Xie, 2022) between the UNet encoder and decoder condition our model on the segmented garment and clothing-agnostic image features, as proposed by (Zhu et al., 2024). In each block, the garment images are cross-attended with the noisy target features, while the agnostic input images are concatenated to the noisy target features.
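The two conditioning paths inside the DiT blocks can be contrasted in a small sketch (illustrative only: the real blocks use learned projections, multi-head attention, and normalization, all omitted here):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dit_condition(z, z_garment, z_agnostic):
    """Sketch of the two conditioning paths in a DiT block.

    z: (L, D) noisy-video tokens.
    z_garment: (Lg, D) garment tokens, injected via cross-attention
    (garment pixels need not be spatially aligned with the person).
    z_agnostic: (L, D) clothing-agnostic tokens, which ARE spatially
    aligned with z, so they can simply be concatenated.
    """
    d = z.shape[-1]
    attn = softmax(z @ z_garment.T / np.sqrt(d))      # (L, Lg) weights
    z = z + attn @ z_garment                          # cross-attention path
    return np.concatenate([z, z_agnostic], axis=-1)   # concatenation path
```

The asymmetry mirrors the paper's design: cross-attention handles implicit warping of the garment, while concatenation suffices for the already-aligned agnostic frames.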
3.5. Split Classifier-Free Guidance
Standard classifier-free guidance (CFG)(Ho and Salimans, 2022) is a sampling technique that pushes the distribution of inference results towards the input conditioning signal(s); however, it does not allow for disentangled guidance towards separate conditioning signals. Instruct-Pix2Pix(Brooks et al., 2023) introduces dual-CFG, which separates the CFG weights for text and image conditioning signals, drawing inspiration from Composable Diffusion(Liu et al., 2022).
We introduce split-CFG, a generalization of dual-CFG that allows independent control over multiple conditioning signals (see Algorithm 1). The inputs to split-CFG are the trained denoising UNet $\epsilon_\theta$, the ordered list of conditioning-signal groups $C$, and the respective guidance weights $W$. For each group $c_i \in C$, containing one or more conditioning inputs, the algorithm computes the conditional result $\hat{\epsilon}_i$ given all groups added so far. The weighted difference between the conditional result $\hat{\epsilon}_i$ and the previous conditional result $\hat{\epsilon}_{i-1}$ is then added to the prediction. In this way, the prediction is pushed in the direction of $c_i$.
Algorithm 1: Split Classifier-Free Guidance

```
Input: denoising UNet ε_θ, noisy input z_t,
       ordered conditioning groups C = (c_1, …, c_k),
       weights W = (w_0, w_1, …, w_k)

c ← ∅                                        ▷ current conditioning signals
ε̂_θ(z_t, C) ← w_0 · ε_θ(z_t, ∅)              ▷ initialize prediction
ε̂_0 ← ε̂_θ(z_t, C)                            ▷ store past prediction
for c_i in C do
    c ← c ∪ {c_i}                            ▷ update c
    ε̂_i ← ε_θ(z_t, c)                        ▷ store new prediction
    ε̂_θ(z_t, C) ← ε̂_θ(z_t, C) + w_i · (ε̂_i − ε̂_{i−1})
    ε̂_{i−1} ← ε̂_i                            ▷ update past prediction
end for
return ε̂_θ(z_t, C)
```
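A literal NumPy sketch of Algorithm 1 (the callable signature and names are ours; `eps_theta` stands in for the trained denoising UNet, called with the tuple of currently active conditioning groups):

```python
import numpy as np

def split_cfg(eps_theta, z_t, cond_groups, weights):
    """Split classifier-free guidance, following Algorithm 1.

    eps_theta(z_t, conds): denoiser taking a tuple of active conditioning
    groups (empty tuple = unconditional inference).
    cond_groups: ordered list of signal groups, e.g.
                 [agnostic_frames, garment_inputs, person_poses].
    weights: [w_0, w_1, ..., w_k]; w_0 weights the unconditional term,
             w_i the i-th group's guidance direction.
    """
    active = ()
    pred = weights[0] * eps_theta(z_t, active)  # w_0 * eps(z_t, null)
    prev = pred                                 # store past prediction
    for w_i, c_i in zip(weights[1:], cond_groups):
        active = active + (c_i,)                # add group c_i
        cur = eps_theta(z_t, active)            # new conditional prediction
        pred = pred + w_i * (cur - prev)        # push toward c_i
        prev = cur                              # update past prediction
    return pred
```

With all weights set to 1, the telescoping sum collapses to the fully conditional prediction, so split-CFG strictly generalizes ordinary sampling; varying the per-group weights reweights each conditioning direction independently.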
Split-CFG is naturally dependent on the order of the conditioning signals. Intuitively, the first conditional output has the largest distance from the null output, thus most affecting the final result. In our implementation, the conditioning groups $C$ consist of (1) the empty set (unconditional inference), (2) the clothing-agnostic images $\{I_a^0, \ldots, I_a^{N-1}\}$, (3) all clothing-related inputs $(S_g, J_g, M_g)$, and (4) lastly, all remaining conditioning inputs $\{J_p^0, \ldots, J_p^{N-1}\}$. We denote the respective weights of each term as $(w_\emptyset, w_p, w_g, w_{\text{full}})$. Empirically, we find this ordering yields the best results.
Overall, we find that controlling sampling via split-CFG not only enhances frame-wise garment fidelity, but also improves photorealism (FID) and inter-frame video consistency (FVD), compared to ordinary CFG.
3.6. Progressive Temporal Training
Our novel progressive temporal training enables up to 64-frame video generation in a single inference run. We first train a base image model from scratch on image data at 512px resolution with batches of shape $B \times T \times H \times W \times C$, with batch size $B = 8$ and length $T = 1$, for 1M iterations. Then, we inflate the base architecture with temporal blocks and continue training the same spatial layers and the new temporal layers on image and video batches with batch size $B = 1$ and length $T = 8$. Video batches are consecutive frames of length $T$ from the same video. After convergence, we double the video length $T$ to 16. This process is repeated until we reach the target length of 64 frames. Each temporal phase is trained for 150K iterations. The benefits of this progressive process are faster training and better multi-frame consistency. Additional details are provided in the Supplementary.
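The training schedule above can be sketched as a simple generator (phase names are ours; the iteration counts and frame lengths are those reported in this section):

```python
def progressive_schedule(base_iters=1_000_000, temporal_iters=150_000,
                         start_frames=8, target_frames=64):
    """Yield (phase_name, num_frames, iterations) for progressive training.

    One image pretraining phase (T=1), then temporal phases that double
    the clip length T until target_frames is reached.
    """
    yield ("image_pretrain", 1, base_iters)
    t = start_frames
    while t <= target_frames:
        yield (f"temporal_T{t}", t, temporal_iters)
        t *= 2
```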
3.7. Joint Image and Video Training
Training the temporal phases solely with video data, which is much more limited in scale than image data, would disregard the image dataset entirely after the pretraining phase. We observe that video-only training in the temporal phases sacrifices image quality and fidelity for temporal smoothness. To combat this issue, we train the temporal phases jointly with 50% image batches and 50% video batches. We implement joint training via conditional network branching (Huang et al., 2016), i.e., for image batches, we skip updating the temporal blocks in the network. Unlike temporal masking strategies (Ho et al., 2022b, a), conditional network branching allows us to include other temporal blocks (Conv-3D, temporal mixing) in addition to temporal attention. Critically, we also train with either image-only or video-only batches, rather than batches of video with appended images (Ho et al., 2022b, a). This improves data diversity and training stability, since the set of possible batches is not constrained by the number of available video batches. We observe improved garment fidelity and multi-view realism with joint image-video training compared to video-only training, especially for synthesized details in occluded garment regions (see Figure 4).
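Conditional network branching can be sketched as a forward pass that runs temporal blocks only on video batches, so their parameters receive no gradient from image data while the shared spatial blocks learn from both modalities. The block and flag names below are ours, for illustration only:

```python
def forward(x, spatial_blocks, temporal_blocks, is_video):
    """Interleave spatial and temporal blocks, skipping the temporal
    blocks entirely for image batches (conditional network branching).

    Blocks are modeled as plain callables here; in a real network they
    would be 2D layers and Conv-3D / temporal-attention modules.
    (Illustrative sketch, not the actual architecture.)
    """
    for spatial, temporal in zip(spatial_blocks, temporal_blocks):
        x = spatial(x)
        if is_video:  # image batches bypass Conv-3D / temporal attention
            x = temporal(x)
    return x
```

Because the skipped blocks never appear in the image-batch computation graph, no masking trick is needed, which is what lets this strategy cover temporal block types beyond attention.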
Figure 4. Joint Training Ablation. Joint image and video training improves the realism of occluded views.
- Experiments
In this section, we describe our datasets (Section 4.1), evaluation metrics (Section 4.2), and results (Section 4.3). We provide training and inference details in the Supplementary.
4.1. Datasets
Our image dataset is a collection of publicly-crawled online fashion images, containing 17M paired images of people wearing the same garment in different poses. We also collect a video dataset of over 52K publicly-available fashion videos totalling 3.9M frames, which we use for the temporal training phases. During training, the garment image and person frames are randomly sampled from the same video. For evaluation, we collect a separate dataset of 5K videos, containing person videos paired with garment images from a different video. Our custom image and video datasets contain a diverse range of skin tones, body shapes, garments, genders, and motions. We also evaluate on the UBC test dataset (Zablotskaia et al., 2019) of 100 videos. For both test datasets, we randomly pair a garment frame from each video clip with three distinct other video clips to get swapped try-on datasets.
4.1.1. Reproducibility
To promote future work in this area and allow fair comparisons with our method, we plan to release a benchmark dataset, including sample paired person videos, garment images, and corresponding preprocessed inputs. We also analyze a version of our model trained and tested exclusively on publicly-available UBC video data (Zablotskaia et al., 2019) in Section 4.3.1.
4.2. Metrics
We evaluate our method using FID (Heusel et al., 2017), FVD (Unterthiner et al., 2018), and CLIP (Radford et al., 2021) scores in Tables 1 and 2. FID measures the similarity between the distributions of predicted and ground-truth frames, giving a measure of the realism of the generated video frames. FVD measures the temporal consistency of video frames. We compute the CLIP image similarity between the segmented garments of the input garment image and the predicted frames; in this way, the CLIP score measures try-on garment fidelity.
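For illustration, the CLIP-based garment fidelity score can be viewed as a mean cosine similarity between embeddings. The sketch below assumes precomputed CLIP image-encoder embeddings of the segmented garment regions; all names are ours, not the paper's evaluation code:

```python
import math

def clip_garment_score(emb_garment, emb_frames):
    """Mean cosine similarity between the input-garment embedding and
    the embedding of the segmented garment in each predicted frame.

    emb_garment: embedding vector (list of floats), assumed to come
    from a CLIP image encoder applied to the segmented garment.
    emb_frames: one such embedding per predicted frame.
    (Illustrative sketch; names and setup are assumptions.)
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    return sum(cos(emb_garment, f) for f in emb_frames) / len(emb_frames)
```

Segmenting the garment before embedding keeps the score focused on garment fidelity rather than the person or background.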
4.3. Results
We showcase qualitative results of our full method in Figure 7 and provide more qualitative results in the Supplementary. Fashion-VDM is capable of synthesizing smooth, photorealistic try-on videos on a variety of input garment types, patterns, skin tones, genders, and motions.
4.3.1. UBC-Only Model
In order to provide a fair comparison to other methods (Guo et al., 2023; Karras et al., 2023; Andreas Blattmann, 2023), we train a version of Fashion-VDM using video data only from the UBC dataset. Similar to other methods, we leverage a pretrained image try-on diffusion model and further train on the publicly-available UBC dataset for the video stages. We show the quantitative results in Table 2 and provide further details, discussion, and qualitative examples in the Supplementary.
Table 1. Quantitative Ablation Studies. For each ablated version of our model, we compute FID, FVD, and CLIP scores using both UBC and our test videos with randomly paired garments. Bolded values indicate the best score in each column.
Figure 5. Garment Fidelity Ablations. We compare our full model with ablated versions without split-CFG and without joint image-video training in terms of garment fidelity. Both split-CFG and joint image-video training improve fine-grained garment details (top row) and novel view generation (bottom row).
- Ablation Studies
We ablate each of our design choices with respect to garment fidelity, temporal smoothness, and photorealism. We report quantitative results for each ablated version in Table 1. All components are essential to improving realism (FID), temporal consistency (FVD), and garment fidelity (CLIP). Qualitatively, we find that split-CFG and joint training have the largest effect on person/garment fidelity and overall quality (Figure 5), while progressive training and temporal blocks affect the temporal smoothness (Figure 6). We discuss these effects in detail in the remainder of this section.
5.1. Split Classifier-Free Guidance
Split-CFG improves per-frame person and garment fidelity, thereby improving overall inter-frame temporal consistency and photorealism. In Figure 3, we compare results generated with different split-CFG weights at inference time. Increasing the person guidance weight w_p from 0 to 1 improves the realism and identity of the input person. Increasing the full-conditional weight w_full improves the garment fidelity, but not as much as increasing the garment weight w_g alone, as in the last column. We provide quantitative split-CFG ablation results in the Supplementary, where we also demonstrate that increasing w_g recovers fine-grained garment details when using a version of our model trained on limited video data. This suggests that split-CFG does not require extensive training to be useful and can be impactful in low-resource settings.
5.2. Joint Image-Video Training
We find that training with video data only in the temporal phases sacrifices garment fidelity compared to the base image model. Training jointly with images and videos increases the fidelity to garment details, even compared to the image baseline, as shown by the improved FID and CLIP scores in Table 1. The increased access to diverse data with joint image-video training also enables the model to synthesize more plausible occluded regions. For example, as shown in Figure 4, the jointly trained model is able to generate a hood with more realism than the video-only model.
5.3. Temporal Blocks
As seen in prior works (Ho et al., 2022b, a), interleaving 3D-convolution and temporal attention blocks into the 2D UNet greatly improves temporal consistency. Removing temporal blocks entirely causes large temporal inconsistencies. For instance, in the top row of Figure 6, the ablated model without temporal blocks swaps the pants and body shape in each frame.
5.4. Progressive Temporal Training
To ablate our progressive training scheme, we train our image base model directly with 16-frame video batches for the same total number of iterations, but skipping the 8-frame training phase entirely. Progressive training enables more temporally smooth results with the same number of training iterations. This is supported by our quantitative findings in Table 1, which indicate worse FVD without progressive training. Qualitatively, in Figure 6, the non-progressively trained model in the middle row exhibits temporal artifacts in the pants region and intermittently merges the pant legs into a skirt. We hypothesize that, given limited training iterations, it is easier to learn temporal consistency well across a small number of frames; transferring that knowledge to larger temporal windows then requires only minimal additional training.
Figure 6. Temporal Smoothness Ablations. We compare video frames generated by our ablated model without temporal blocks (top row) and without progressive training (middle row) to our full model (bottom row). Both ablated versions exhibit large frame-to-frame inconsistencies and artifacts.
- Comparisons to State-of-the-Art
We qualitatively and quantitatively compare our method to the state-of-the-art in diffusion-based try-on and animation, as no previous diffusion-based video try-on baselines with publicly-available code currently exist: (1) TryOnDiffusion (Zhu et al., 2023), (2) MagicAnimate (Xu et al., 2023), and (3) Animate Anyone (Hu et al., 2023). For (1), we generate try-on results in a frame-by-frame manner for each input frame to produce a video. For (2) and (3), we first generate a single try-on image from the first input frame and garment image using TryOnDiffusion, then use the extracted poses from the input frames to animate the result. In addition, we provide user survey results in the Supplementary.
6.1. Qualitative Results:
We qualitatively compare Fashion-VDM to the baseline methods in Figure 8. In the top and bottom rows, we show how other methods exhibit large artifacts under large pose changes. In these examples, the baseline methods struggle to preserve garment details and to hallucinate plausible occluded views. Moreover, both MagicAnimate and Animate Anyone produce an overall cartoon-like appearance.
In our supplementary video results, we observe that frame-by-frame TryOnDiffusion results exhibit considerable flickering and garment inconsistencies. MagicAnimate fails to preserve the correct background and does not maintain a consistent garment appearance throughout the video. Animate Anyone also exhibits temporal garment inconsistency, especially under large viewpoint changes, and the human motion has an unrealistic warping effect. Overall, Fashion-VDM synthesizes more natural-looking garment motion, such as folding, wrinkling, and flow, and better preserves garment appearance.
6.2. Quantitative Results:
We compute FID scores on 300 16-frame videos from the UBC dataset and 300 16-frame videos from our custom video test dataset. For both datasets, we compute FVD and CLIP scores on 100 distinct 16-frame videos. The results are displayed in Table 2. In our experiments, Fashion-VDM surpasses all baselines in image quality (FID), video quality (FVD), and garment fidelity (CLIP). Although the UBC-only model excels on all UBC metrics, we qualitatively observe over-smoothing and worse garment detail preservation compared to the full version trained on our larger, more diverse video dataset.
Table 2. Quantitative Comparisons. We compare Fashion-VDM to the baseline methods using the UBC test dataset (Zablotskaia et al., 2019) and our test dataset of internet videos. Fashion-VDM quantitatively outperforms other methods on all metrics.
- Limitations and Future Work
The main limitations of Fashion-VDM include inaccurate body shape, artifacts, and incorrect details in occluded garment regions; see examples and further discussion in the Supplementary. Implausible details may be hallucinated in unseen garment regions, because the input image shows only one view of the garment. Future work might consider multi-view conditioning and individual person customization for improved garment and person fidelity. Other errors include minor aliasing of fine-grained patterns. Finally, our method does not simulate exact physical cloth dynamics, but rather produces a realistic video try-on visualization; incorporating physically accurate cloth behavior could be a promising next step.
- Discussion
We present Fashion-VDM, a diffusion-based video try-on model. Given an input garment image and person video, Fashion-VDM synthesizes a try-on video with the input garment fitted to the person in motion, maintaining realistic details and fabric dynamics. We show qualitatively and quantitatively that our method significantly surpasses existing state-of-the-art diffusion-based image try-on and animation methods.
- Ethics Statement
While we believe our research makes a positive contribution to the research community by advancing the state-of-the-art in generative video diffusion, we condemn its potential for misuse, including spreading misinformation or manipulating human content for malicious purposes. Although our method is trained on public data containing identifiable humans, we will not release any images or videos containing personally identifiable features, such as faces, tattoos, or logos, to protect the privacy of these individuals.
References
- Andreas Blattmann (2023) Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, and Robin Rombach. 2023. Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets.
- Bai et al. (2022) Shuai Bai, Huiling Zhou, Zhikang Li, Chang Zhou, and Hongxia Yang. 2022. Single Stage Virtual Try-On Via Deformable Attention Flows. In Computer Vision – ECCV 2022, Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner (Eds.). Springer Nature Switzerland, Cham, 409–425.
- Blattmann et al. (2023) Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. 2023. Align Your Latents: High-Resolution Video Synthesis With Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 22563–22575.
- Brooks et al. (2023) Tim Brooks, Aleksander Holynski, and Alexei A. Efros. 2023. InstructPix2Pix: Learning To Follow Image Editing Instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 18392–18402.
- Chen et al. (2023b) Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending Context Window of Large Language Models via Positional Interpolation. arXiv:2306.15595
- Chen et al. (2023a) Xinyuan Chen, Yaohui Wang, Lingjun Zhang, Shaobin Zhuang, Xin Ma, Jiashuo Yu, Yali Wang, Dahua Lin, Yu Qiao, and Ziwei Liu. 2023a. SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction. arXiv:2310.20700
- Choi et al. (2021) Seunghwan Choi, Sunghyun Park, Minsoo Lee, and Jaegul Choo. 2021. VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 14131–14140.
- Cui et al. (2023) Aiyu Cui, Jay Mahajan, Viraj Shah, Preeti Gomathinayagam, and Svetlana Lazebnik. 2023. Street TryOn: Learning In-the-Wild Virtual Try-On from Unpaired Person Images. arXiv:2311.16094
- Dhariwal and Nichol (2021) Prafulla Dhariwal and Alex Nichol. 2021. Diffusion Models Beat GANs on Image Synthesis. arXiv:2105.05233
- Dong et al. (2022) Xin Dong, Fuwei Zhao, Zhenyu Xie, Xijin Zhang, Daniel K. Du, Min Zheng, Xiang Long, Xiaodan Liang, and Jianchao Yang. 2022. Dressing in the Wild by Watching Dance Videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 3480–3489.
- Girdhar et al. (2023) Rohit Girdhar, Mannat Singh, Andrew Brown, Quentin Duval, Samaneh Azadi, Sai Saketh Rambhatla, Akbar Shah, Xi Yin, Devi Parikh, and Ishan Misra. 2023. Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning. arXiv:2311.10709
- Gong et al. (2019) Ke Gong, Yiming Gao, Xiaodan Liang, Xiaohui Shen, Meng Wang, and Liang Lin. 2019. Graphonomy: Universal Human Parsing via Graph Transfer Learning. arXiv:1904.04536
- Gu et al. (2023) Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu, Songcen Xu, Wei Zhang, Yu-Gang Jiang, and Hang Xu. 2023. Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation. arXiv:2309.03549
- Guo et al. (2023) Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai. 2023. AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning. arXiv:2307.04725
- Han et al. (2018) Xintong Han, Zuxuan Wu, Zhe Wu, Ruichi Yu, and Larry S. Davis. 2018. VITON: An Image-Based Virtual Try-On Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Haoye Dong and Yin (2019) Haoye Dong, Xiaodan Liang, Xiaohui Shen, B. Wu, Bing-cheng Chen, and J. Yin. 2019. FW-GAN: Flow-Navigated Warping GAN for Video Virtual Try-On. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 1161–1170.
- Harvey et al. (2022) William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, and Frank Wood. 2022. Flexible Diffusion Modeling of Long Videos. arXiv:2205.11495
- He et al. (2022a) Sen He, Yi-Zhe Song, and Tao Xiang. 2022a. Style-Based Global Appearance Flow for Virtual Try-On. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 3470–3479.
- He et al. (2022b) Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen. 2022b. Latent Video Diffusion Models for High-Fidelity Long Video Generation. arXiv:2211.13221
- Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems, I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2017/file/8a1d694707eb0fefe65871369074926d-Paper.pdf
- Ho et al. (2022a) Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. 2022a. Imagen Video: High Definition Video Generation with Diffusion Models. arXiv:2210.02303
- Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising Diffusion Probabilistic Models. arXiv:2006.11239
- Ho and Salimans (2022) Jonathan Ho and Tim Salimans. 2022. Classifier-Free Diffusion Guidance. arXiv:2207.12598
- Ho et al. (2022b) Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. 2022b. Video Diffusion Models. arXiv:2204.03458
- Hu et al. (2023) Li Hu, Xin Gao, Peng Zhang, Ke Sun, Bang Zhang, and Liefeng Bo. 2023. Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation. arXiv:2311.17117
- Huang et al. (2016) Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. 2016. Deep Networks with Stochastic Depth. arXiv:1603.09382
- Jiang et al. (2022) Jianbin Jiang, Tan Wang, He Yan, and Junhui Liu. 2022. ClothFormer: Taming Video Virtual Try-On in All Module. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10799–10808.
- Karras et al. (2023) Johanna Karras, Aleksander Holynski, Ting-Chun Wang, and Ira Kemelmacher-Shlizerman. 2023. DreamPose: Fashion Video Synthesis with Stable Diffusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 22680–22690.
- Kim et al. (2023) Jeongho Kim, Gyojung Gu, Minho Park, Sunghyun Park, and Jaegul Choo. 2023. StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On. arXiv:2312.01725
- Lee et al. (2022) Sangyun Lee, Gyojung Gu, Sunghyun Park, Seunghwan Choi, and Jaegul Choo. 2022. High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions. In Computer Vision – ECCV 2022, Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner (Eds.). Springer Nature Switzerland, Cham, 204–219.
- Lee et al. (2023) Seung Hyun Lee, Sieun Kim, Innfarn Yoo, Feng Yang, Donghyeon Cho, Youngseo Kim, Huiwen Chang, Jinkyu Kim, and Sangpil Kim. 2023. Soundini: Sound-Guided Diffusion for Natural Video Editing. arXiv:2304.06818
- Lewis et al. (2021) Kathleen M Lewis, Srivatsan Varadharajan, and Ira Kemelmacher-Shlizerman. 2021. TryOnGAN: body-aware try-on via layered interpolation. ACM Trans. Graph. 40, 4, Article 115 (jul 2021), 10 pages. https://doi.org/10.1145/3450626.3459884
- Liu et al. (2022) Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. 2022. Compositional visual generation with composable diffusion models. In European Conference on Computer Vision. Springer, 423–439.
- Mei and Patel (2023) Kangfu Mei and Vishal Patel. 2023. VIDM: Video Implicit Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence 37, 8 (Jun. 2023), 9117–9125.
- Men et al. (2020) Yifang Men, Yiming Mao, Yuning Jiang, Wei-Ying Ma, and Zhouhui Lian. 2020. Controllable Person Image Synthesis With Attribute-Decomposed GAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Peebles and Xie (2022) William Peebles and Saining Xie. 2022. Scalable Diffusion Models with Transformers. arXiv:2212.09748
- Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. arXiv:2103.00020
- Ren et al. (2022) Yurui Ren, Xiaoqing Fan, Ge Li, Shan Liu, and Thomas H. Li. 2022. Neural Texture Extraction and Distribution for Controllable Person Image Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 13535–13544.
- Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis With Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 10684–10695.
- Salimans and Ho (2022) Tim Salimans and Jonathan Ho. 2022. Progressive Distillation for Fast Sampling of Diffusion Models. arXiv:2202.00512
- Schuhmann et al. (2022) Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. arXiv:2210.08402
- Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. arXiv:1503.03585
- Song et al. (2020) Jiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising Diffusion Implicit Models. arXiv:2010.02502
- Song and Ermon (2019) Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. arXiv:1907.05600
- Unterthiner et al. (2018) Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. 2018. Towards Accurate Generative Models of Video: A New Metric & Challenges. arXiv:1812.01717
- Wang et al. (2023b) Fu-Yun Wang, Wenshuo Chen, Guanglu Song, Han-Jia Ye, Yu Liu, and Hongsheng Li. 2023b. Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising. arXiv:2305.18264
- Wang et al. (2023a) Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, Yuwei Guo, Tianxing Wu, Chenyang Si, Yuming Jiang, Cunjian Chen, Chen Change Loy, Bo Dai, Dahua Lin, Yu Qiao, and Ziwei Liu. 2023a. LAVIE: High-Quality Video Generation with Cascaded Latent Diffusion Models. arXiv:2309.15103
- Wen-Jiin Tsai (2023) Wen-Jiin Tsai and Yi-Cheng Tien. 2023. Attention-based Video Virtual Try-On. In Proceedings of the 2023 ACM International Conference on Multimedia Retrieval. ACM, 209–216.
- Xintong Han and Scott (2020) Xintong Han, Xiaojun Hu, Weilin Huang, and Matthew R. Scott. 2020. ClothFlow: A Flow-Based Model for Clothed Person Generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 139–144.
- Xu et al. (2023) Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Hanshu Yan, Jia-Wei Liu, Chenxu Zhang, Jiashi Feng, and Mike Zheng Shou. 2023. MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model. arXiv:2311.16498
- Yang et al. (2020) Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, and Ping Luo. 2020. Towards Photo-Realistic Virtual Try-On by Adaptively Generating-Preserving Image Content. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Yu et al. (2019) Ruiyun Yu, Xiaoqi Wang, and Xiaohui Xie. 2019. VTNFP: An Image-Based Virtual Try-On Network With Body and Clothing Feature Preservation. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 10511–10520.
- Zablotskaia et al. (2019) Polina Zablotskaia, Aliaksandr Siarohin, Bo Zhao, and Leonid Sigal. 2019. DwNet: Dense warp-based network for pose-guided human video generation. arXiv:1910.09139
- Zhang et al. (2021) Jinsong Zhang, Kun Li, Yu-Kun Lai, and Jingyu Yang. 2021. PISE: Person Image Synthesis and Editing With Decoupled GAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 7982–7990.
- Zhang et al. (2023) Xujie Zhang, Xiu Li, Michael Kampffmeyer, Xin Dong, Zhenyu Xie, Feida Zhu, Haoye Dong, and Xiaodan Liang. 2023. WarpDiffusion: Efficient Diffusion Model for High-Fidelity Virtual Try-on. arXiv:2312.03667
- Zhong et al. (2021) Xiaojing Zhong, Zhonghua Wu, Taizhe Tan, Guosheng Lin, and Qingyao Wu. 2021. MV-TON: Memory-based Video Virtual Try-on Network. arXiv:2108.07502
- Zhu et al. (2024) Luyang Zhu, Yingwei Li, Nan Liu, Hao Peng, Dawei Yang, and Ira Kemelmacher-Shlizerman. 2024. M&M VTO: Multi-Garment Virtual Try-On and Editing.
- Zhu et al. (2023) Luyang Zhu, Dawei Yang, Tyler Zhu, Fitsum Reda, William Chan, Chitwan Saharia, Mohammad Norouzi, and Ira Kemelmacher-Shlizerman. 2023. TryOnDiffusion: A Tale of Two UNets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 4606–4615.
Figure 7. Qualitative Results. We showcase video try-on results generated by Fashion-VDM using randomly paired person-garment test videos from the UBC dataset (Zablotskaia et al., 2019) and our own collected test dataset. Note that the input garment image and input person frames come from different videos.
Figure 8. Qualitative Comparisons. Fashion-VDM outperforms past methods in garment fidelity and realism. Especially in cases of large disocclusion, our method synthesizes more realistic novel views.
Supplementary Material
Appendix A Progressive Training Details
Figure 9. Progressive Training Strategy. Fashion-VDM is trained in multiple phases of increasing frame length. We first pretrain an image model, by training only the spatial layers on our image dataset. In subsequent phases, we train temporal and spatial layers on increasingly long batches of consecutive frames from our video dataset.
The overall progressive temporal training strategy is depicted in Figure 9. We first train a base image model from scratch on image data at 512px resolution with batch size 8 for 1M iterations. Then, we inflate the base architecture with temporal blocks and continue training the model using our joint image-video training strategy. In these temporal training phases, half of the batches are drawn from the image dataset and the other half are batches of consecutive video frames from the video dataset. When training with an image batch, we skip the temporal blocks entirely in the forward and backward passes. At each successive phase of temporal training, we initialize the model from the previous phase's checkpoint and double the training video length: 8 → 16 → 32 → 64. We train each temporal phase for 150K iterations. Once the video length becomes prohibitively large in memory at 64 frames, we introduce temporal downsampling and upsampling layers to the model. At test time, our model generates 512×384px videos of up to 64 frames in one inference pass with a single network.
A.1. Training and Inference Details
We train our model on 16 TPU-v4s for approximately 2 weeks, including all training phases. Our baseline image model is trained for 1M iterations with a batch size of 8 at 512×384px resolution, using the Adam optimizer with a learning rate that linearly decays from 1e-4 to 1e-5 over 1M steps, with 10K warm-up steps. Each phase of progressive temporal training is initialized from the previous checkpoint and trained for 150K iterations, following the order of phases described in Section A. For all phases, we apply dropout to each conditional input independently 10% of the time. We train with an L2 loss on the noise ε.
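The learning-rate schedule above (linear warm-up over 10K steps, then linear decay from 1e-4 to 1e-5 over the remaining 1M steps) can be sketched as follows; the exact warm-up shape is our assumption, not stated in the paper:

```python
def learning_rate(step, warmup=10_000, total=1_000_000,
                  lr_max=1e-4, lr_min=1e-5):
    """Linear warm-up to lr_max, then linear decay to lr_min.

    (Illustrative sketch of the schedule described in the text;
    the warm-up is assumed to be linear from zero.)
    """
    if step < warmup:
        return lr_max * step / warmup
    frac = min(1.0, (step - warmup) / (total - warmup))
    return lr_max + frac * (lr_min - lr_max)
```

The rate peaks at 1e-4 when warm-up ends and reaches 1e-5 at the final step.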
During inference, we use the DDPM sampler (Ho et al., 2020) with 1000 refinement steps. Each video takes approximately 8 minutes to synthesize with split-CFG and 5 minutes without split-CFG.
Figure 10. Failure Cases. Errors in the person segmentation may lead to artifacts (top row). Fashion-VDM may incorrectly represent body shape (bottom row).
Appendix B Examples of Failure Cases
We show two examples of failure cases of our method in Figure 10. In row 1, artifacts appear at the body/garment boundary due to an imperfect person segmentation in the clothing-agnostic image. Imperfect segmentation is a common cause of such artifacts and may also leak regions of the original garment into the result. In our human evaluation (Section C.1), 10 of the 17 failure videos had errors in the clothing-agnostic inputs. In general, although our preprocessing methods are state-of-the-art, preprocessing errors limit the quality of Fashion-VDM; in total, 70% of the videos not chosen by human raters had errors in one or more inputs. As shown in row 2, body shape misrepresentation (e.g., slimming) occurs because the clothing-agnostic images remove all body parts besides the hands, feet, and head, and thus do not include detailed information about body size.
Appendix C Split-CFG Weights Ablations
We quantitatively evaluate our choice of split-CFG weights for both datasets on a held-out validation set. The results are shown in Table 3. Calibrating these weights correctly is beneficial not only for preserving garment fidelity, as shown by the FID score, but also for increasing temporal consistency, as shown by the FVD score. Intuitively, increasing the similarity of the output garment to the input garment leaves less allowed variability in the appearance of each frame, and thus increases temporal smoothness. Based on these results, we employ weights (1, 1, 3, 1) for UBC and weights (1, 1, 1, 1) for our test dataset.
Table 3. Quantitative Ablation of Split-CFG Weights. We compute FID, FVD, and CLIP scores of our full model using different split-CFG weights.
C.1. User Study
Table 4. User Study. Our study indicates that users overwhelmingly prefer Fashion-VDM to other baselines in terms of video smoothness, person fidelity, and garment fidelity on both test datasets.
In addition to qualitative and quantitative evaluations, we perform user studies for our state-of-the-art comparisons. The results are shown in Table 4. Our user studies are conducted by 5 human raters who are unfamiliar with the method. For each sample, the raters were asked to select which video performs best in each category: temporal smoothness, garment fidelity to the input garment image, and person fidelity to the input person video. The reported scores on the UBC test dataset and our test dataset are each method's fraction of the total votes cast. Fashion-VDM outperforms the other methods on all three user preference categories for both datasets.
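The preference score described above can be sketched as follows. The function name and the illustrative vote counts are hypothetical; only the scoring rule (votes received divided by total votes cast, with 5 raters per video) comes from the study description.

```python
def preference_score(votes_for_method, num_videos, raters_per_video=5):
    """Fraction of all cast votes that favored a given method.

    votes_for_method: total votes the method received for one category.
    num_videos: number of videos rated in the study.
    raters_per_video: each video is judged by this many raters (5 in our study).
    """
    total_votes = num_videos * raters_per_video
    return votes_for_method / total_votes

# Hypothetical example: 85 of 100 votes (20 videos x 5 raters) favor the method.
score = preference_score(85, 20)  # -> 0.85
```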
C.2. UBC-Only Model
Figure 11. Split-CFG Ablation with UBC-Only Model. When Fashion-VDM is trained on the limited UBC dataset only, we observe overfitting to the largely plain garments in the UBC train dataset. However, we find that increasing the garment image guidance weight (w_g) in split-CFG significantly increases garment details.
Figure 12. Qualitative Results for UBC-Only Model. Our model trained only on UBC data generates temporally consistent, smooth try-on videos for plain and simple patterned garments, but struggles to preserve intricate patterns and complex garment shapes.
We initialize this model from our pretrained image model, which is comparable to open-source image diffusion models such as Stable Diffusion (Rombach et al., 2022), which are trained on even larger image datasets, including LAION-5B (Schuhmann et al., 2022). We then train progressively on both image data and UBC video data, following the same progressive training scheme as the full model.
The UBC-only model exceeds all baselines on the UBC test dataset quantitatively, but is qualitatively worse at preserving intricate garment details and patterns. This is expected, given the limited size and lack of diversity of the UBC training dataset. However, we discovered that increasing the split-CFG garment weight significantly recovers lost garment details, even more so than with the full model; we show this qualitatively in Figure 11. This implies that when training with limited data, split-CFG becomes even more crucial to preserving the conditioning image details.
We provide qualitative examples generated by our model trained only on the UBC dataset (Zablotskaia et al., 2019) in Figure 12. While the results are smooth and temporally consistent, the model struggles to maintain complex patterns and garment shape details. This is likely due to overfitting to the limited size and scope of the UBC training dataset, which consists of only 500 videos of women in dresses.
Figure 13. Additional Qualitative Results. We showcase video try-on results generated by Fashion-VDM using swapped test videos from the UBC dataset (Zablotskaia et al., 2019) and our own collected test dataset. Note that the input garment image and input person frames come from different videos.